Despite the recent interest in "organic spintronics", the dominant spin
relaxation mechanism of electrons or holes in an organic compound semiconductor
has not been conclusively identified. There have been sporadic suggestions that
it might be hyperfine interaction caused by background nuclear spins, but no
confirmatory evidence to support this has ever been presented. Here, we report
the electric-field dependence of the spin diffusion length in an organic
spin-valve structure consisting of an Alq3 spacer layer, and argue that these
data, together with available data on the temperature dependence of this length,
contradict the notion that hyperfine interactions relax spin. Instead, they
suggest that the Elliott-Yafet mechanism, arising from spin-orbit interaction,
is more likely the dominant spin-relaxation mechanism.
|
Let ${\bf x}=(x_n)_n$ be a sequence in a Banach space. A set $A\subseteq
\mathbb{N}$ is perfectly bounded if there is $M$ such that $\|\sum_{n\in
F}x_n\|\leq M$ for every finite $F\subseteq A$. The collection $B({\bf x})$ of
all perfectly bounded sets is an ideal of subsets of $\mathbb{N}$. We show that
an ideal $\mathcal{I}$ is of the form $B({\bf x})$ iff there is a
non-pathological lower semicontinuous submeasure $\varphi$ on $\mathbb{N}$ such
that $\mathcal{I} =FIN(\varphi)=\{A\subseteq \mathbb{N}:
\;\varphi(A)<\infty\}$. We address the questions of when $FIN(\varphi)$ is a
tall ideal and has a Borel selector. We show that in $c_0$ the ideal $B({\bf
x})$ is tall iff $(x_n)_n$ is weakly null, in which case it also has a Borel
selector.
|
Following the development of digitization, a growing number of large Original
Equipment Manufacturers (OEMs) are adopting computer vision or natural language
processing in a wide range of applications, such as anomaly detection and
quality inspection in plants. The deployment of such systems is therefore
becoming an extremely important topic. Our work starts with the least-automated
deployment technologies for machine learning systems, proceeds through several
iterations of updates, and ends with a comparison of automated deployment
techniques. The objective is, on the one hand, to compare the advantages and
disadvantages of the various technologies in theory and practice, so that later
adopters can avoid common mistakes when implementing actual use cases and
thereby choose a better strategy for their own enterprises. On the other hand,
we aim to raise awareness of evaluation frameworks for the deployment of
machine learning systems, encouraging more comprehensive and useful evaluation
metrics (e.g. table 2) rather than a focus on a single factor (e.g. company
cost). This is especially important for decision-makers in the industry.
|
Deep learning models are susceptible to adversarial samples in both white-box
and black-box environments. Although previous studies have shown high attack
success rates, coupling DNN models with interpretation models could offer a
sense of security when a human expert is involved, who can identify whether a
given sample is benign or malicious. However, in white-box environments,
interpretable deep learning systems (IDLSes) have been shown to be vulnerable
to malicious manipulations. In black-box settings, as access to the components
of IDLSes is limited, it becomes more challenging for the adversary to fool the
system. In this work, we propose a Query-efficient Score-based black-box attack
against IDLSes, QuScore, which requires no knowledge of the target model and
its coupled interpretation model. QuScore combines transfer-based and
score-based methods, employing an effective microbial genetic algorithm. Our
method is designed to reduce the number of queries necessary to carry out
successful attacks, resulting in a more efficient process. By continuously
refining the adversarial samples created based on feedback scores from the
IDLS, our approach effectively navigates the search space to identify
perturbations that can fool the system. We evaluate the attack's effectiveness
on four CNN models (Inception, ResNet, VGG, DenseNet) and two interpretation
models (CAM, Grad), using both ImageNet and CIFAR datasets. Our results show
that the proposed approach is query-efficient with a high attack success rate
that can reach between 95% and 100%, and transferability with an average success
rate of 69% on the ImageNet and CIFAR datasets. Our attack method generates
adversarial examples with attribution maps that resemble benign samples. We
have also demonstrated that our attack is resilient against various
preprocessing defense techniques and can easily be transferred to different DNN
models.
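As an illustrative sketch only (not the authors' QuScore implementation), the core of a microbial genetic algorithm is a steady-state tournament: two individuals compete, and the loser is overwritten with a mutated copy of some of the winner's genes. The population size, mutation rate, and the toy black-box score below are all illustrative assumptions standing in for the IDLS feedback score.

```python
import numpy as np

rng = np.random.default_rng(2)

def microbial_ga(score, dim, pop_size=20, steps=3000, p_cross=0.5, p_mut=0.05):
    """Minimal microbial GA: in each tournament, the loser is partly
    overwritten with the winner's genes, then mutated."""
    pop = rng.random((pop_size, dim))
    for _ in range(steps):
        i, j = rng.choice(pop_size, size=2, replace=False)
        win, lose = (i, j) if score(pop[i]) >= score(pop[j]) else (j, i)
        cross = rng.random(dim) < p_cross          # genes copied from winner
        pop[lose, cross] = pop[win, cross]
        mut = rng.random(dim) < p_mut              # small random perturbations
        pop[lose, mut] += 0.1 * rng.standard_normal(int(mut.sum()))
        pop[lose] = np.clip(pop[lose], 0.0, 1.0)
    return max(pop, key=score)

# Hypothetical black-box score standing in for the IDLS feedback score.
target = np.full(8, 0.7)
score = lambda x: -float(np.sum((x - target) ** 2))
best = microbial_ga(score, dim=8)
```

Because only score values are needed, such a loop fits a score-based black-box setting; query efficiency then comes from how few tournaments are required.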
|
We calculate several neutron star properties, for static and/or rotating
stars, using equations of state based on different microscopic models. These
include our Dirac-Brueckner-Hartree-Fock model and others derived from the
non-relativistic Brueckner-Hartree-Fock approach implemented with microscopic
three-body forces. The model dependence is discussed.
|
In a secure message transmission (SMT) scenario a sender wants to send a
message in a private and reliable way to a receiver. Sender and receiver are
connected by $n$ vertex disjoint paths, referred to as wires, $t$ of which can
be controlled by an adaptive adversary with unlimited computational resources.
In Eurocrypt 2008, Garay and Ostrovsky considered an SMT scenario where sender
and receiver have access to a public discussion channel and showed that secure
and reliable communication is possible when $n \geq t+1$. In this paper we will
show that a secure protocol requires at least 3 rounds of communication and 2
invocations of the public channel, and hence give a complete answer to the
open question raised by Garay and Ostrovsky. We also describe a round optimal
protocol that has \emph{constant} transmission rate over the public channel.
|
According to Onsager's semiclassical quantization rule, the Landau levels
of a band are bounded by its upper and lower band edges at zero magnetic field.
However, there are two notable systems where the Landau level spectra violate
this expectation: topological bands and flat bands with singular band
crossings, whose wave functions possess singularities. Here, we introduce
a distinct class of flat band systems where anomalous Landau level spreading
(LLS) appears outside the zero-field energy bounds, although the relevant wave
function is nonsingular. The anomalous LLS of isolated flat bands is governed
by the cross-gap Berry connection that measures the wave-function geometry of
multiple bands. We also find that symmetry puts strong constraints on the LLS of
flat bands. Our work demonstrates that an isolated flat band is an ideal system
for studying the fundamental role of wave-function geometry in describing
magnetic responses of solids.
|
Many physical questions in fluid dynamics can be recast as norm-constrained
optimisation problems, which in turn can be further recast as unconstrained
problems on spherical manifolds. Due to the nonlinearities of the governing
PDEs, and the computational cost of performing optimal control on such systems,
improving the numerical convergence of the optimisation procedure is crucial.
Borrowing tools from the optimisation-on-manifolds community, we outline a
numerically consistent, discrete formulation of the direct-adjoint looping
method, accompanied by gradient descent and line-search algorithms with global
convergence guarantees. We numerically demonstrate the robustness of this
formulation on three example problems of relevance in fluid dynamics and
provide an accompanying library, SphereManOpt.
|
Within an effective field theory framework, we obtain an expression for the
next-to-leading term in the $1/m$ expansion of the singlet $Q{\bar Q}$ QCD
potential in terms of Wilson loops, which holds beyond perturbation theory. The
ambiguities in the definition of the QCD potential beyond leading order in
$1/m$ are discussed and a specific expression for the $1/m$ potential is given.
We explicitly evaluate this expression at one loop and compare the outcome with
the existing perturbative results. On general grounds we show that for quenched
QED and fully Abelian-like models this expression exactly vanishes.
|
We consider an inhomogeneous anisotropic gap superconductor in the vicinity
of the quantum critical point, where the transition temperature is suppressed
to zero by disorder. Starting with the BCS Hamiltonian, we derive the
Ginzburg-Landau action for the superconducting order parameter. It is shown
that the critical theory corresponds to the marginal case in two dimensions and
is formally equivalent to the theory of an antiferromagnetic quantum critical
point, which is a quantum critical theory with the dynamic critical exponent,
z=2. This allows us to use a parquet method to calculate the non-perturbative
effect of quantum superconducting fluctuations on thermodynamic properties. We
derive a general expression for the fluctuation magnetic susceptibility, which
exhibits a crossover from the logarithmic dependence, $\chi \sim \ln(dn)$, valid
beyond the Ginzburg region, to $\chi \sim \ln^{1/5}(dn)$, valid in the immediate
vicinity of the transition (where $dn$ is the deviation from the critical
disorder concentration). We suggest that the obtained non-perturbative results
describe the low-temperature critical behavior of a variety of diverse
superconducting systems, which include overdoped high-temperature cuprates,
disordered p-wave superconductors, and conventional superconducting films with
magnetic impurities.
|
Network Traffic Monitoring and Analysis (NTMA) represents a key component for
network management, especially to guarantee the correct operation of
large-scale networks such as the Internet. As the complexity of Internet
services and the volume of traffic continue to increase, it becomes difficult
to design scalable NTMA applications. Applications such as traffic
classification and policing require real-time and scalable approaches. Anomaly
detection and security mechanisms must quickly identify and react to
unpredictable events while processing millions of heterogeneous events.
Finally, the system has to collect, store, and process massive sets of historical
data for post-mortem analysis. Those are precisely the challenges faced by
general big data approaches: Volume, Velocity, Variety, and Veracity. This
survey brings together NTMA and big data. We catalog previous work on NTMA that
adopts big data approaches to understand to what extent the potential of big
data is being explored in NTMA. This survey mainly focuses on approaches and
technologies to manage the big NTMA data, additionally briefly discussing big
data analytics (e.g., machine learning) for the sake of NTMA. Finally, we
provide guidelines for future work, discussing lessons learned, and research
directions.
|
The Belle II experiment recently observed the decay $B^+ \to K^+ \nu \bar{\nu}$ for
the first time, with a measured value for the branching ratio of $ (2.3 \pm
0.7) \times 10^{-5}$. This result exhibits a $\sim 3\sigma$ deviation from the
Standard Model (SM) prediction. The observed enhancement with respect to the
Standard Model could indicate the presence of invisible light new physics. In
this paper, we investigate whether this result can be accommodated in a minimal
Higgs portal model, where the SM is extended by a singlet Higgs scalar that
decays invisibly to dark sector states. We find that current and future bounds
on invisible decays of the 125 GeV Higgs boson completely exclude a new scalar
with a mass $\gtrsim 10$ GeV. On the other hand, the Belle II results can be
successfully accommodated if the new scalar is lighter than $B$ mesons but
heavier than kaons. We also investigate the cosmological implications of the
new states and explore the possibility that they are part of an abelian Higgs
extension of the SM. Future Higgs factories are expected to place stringent
bounds on the invisible branching ratio of the 125 GeV Higgs boson, and will be
able to definitively test the region of parameter space favored by the Belle II
results.
|
In this paper, we establish the weak consistency and asymptotic normality of an
M-estimator of the regression function for the left-truncated and right-censored
(LTRC) model, where it is assumed that the observations form a stationary
alpha-mixing sequence. The results hold for unbounded objective functions and
are applied to derive the weak consistency and asymptotic normality of a kernel
classical regression curve estimate. We also obtain a uniform weak convergence
rate for the product-limit estimator of the lifetime and censoring distributions
under dependence, which is a useful result for our study and for other LTRC
strong-mixing frameworks. Some simulations are carried out to illustrate the
results for finite samples.
|
In order to faithfully detect the state of an individual two-state quantum
system (qubit) realized using, for example, a trapped ion or atom, state
selective scattering of resonance fluorescence is well established. The
simplest way to read out this measurement and assign a state is the threshold
method. The detection error can be decreased by using more advanced detection
methods like the time-resolved method or the $\pi$-pulse detection method.
These methods were introduced for qubits with a single possible state change
during the measurement process. However, there exist many qubits, like the
hyperfine qubit of $^{171}Yb^+$, where several state changes are possible. To
decrease the detection error for such qubits, we develop generalizations of
the time-resolved method and the $\pi$-pulse detection method.
We show the advantages of these generalized detection methods in numerical
simulations and experiments using the hyperfine qubit of $^{171}Yb^+$. The
generalized detection methods developed here can be implemented in an efficient
way such that experimental real time state discrimination with improved
fidelity is possible.
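The baseline threshold method mentioned above can be sketched in a few lines: photon counts from state-selective fluorescence are compared against a fixed count threshold. The Poisson rates (0.2 dark, 10 bright) below are illustrative assumptions, not the experiment's measured values.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated fluorescence photon counts: the dark qubit state scatters almost
# no photons, the bright state scatters many (rates are illustrative choices).
n = 20000
dark = rng.poisson(0.2, n)     # counts when the qubit is in the dark state
bright = rng.poisson(10.0, n)  # counts when the qubit is in the bright state

# Threshold method: assign "bright" if the count exceeds a threshold; scan
# thresholds and keep the one minimizing the mean misassignment error.
def error(t):
    return 0.5 * np.mean(dark > t) + 0.5 * np.mean(bright <= t)

best_t = min(range(15), key=error)
best_err = error(best_t)
```

The residual error of this estimator is exactly what time-resolved and $\pi$-pulse methods improve on, by using the arrival times of the photons rather than only their total count.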
|
The stability of the low-frequency peaks (< 1 Hz) obtained in the passive
seismic survey of Campo de Dal\'ias basin (CDB) by applying the
horizontal-to-vertical spectral ratio (HVSR) method was investigated. Three
temporary seismic stations were installed in remote sites that enabled studying
the stationarity of their characteristic microtremor HVSR (MHVSR) shapes. All
stations began to operate in mid-2016 and recorded at least one year of
continuous ambient seismic noise data, with up to two years at some sites. Each
seismic station had a monitored borehole in its vicinity, registering the
groundwater level every 30 minutes. The MHVSR curves were
calculated for time windows of 150 s and averaged hourly. Four parameters have
been defined to characterize the shape of the MHVSR around the main peak and to
compare them with several environmental variables. Correlations between MHVSR
characteristics and the groundwater level proved to be the most persistent. The
robustness of the MHVSR method for applications to seismic engineering was not
found to be compromised since the observed variations were within the margins
of acceptable deviations. Our results widen the possibilities of the MHVSR
method from being a reliable predictor for seismic resonance to also being an
autonomous monitoring tool, especially sensitive to S-wave modifications.
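The HVSR computation itself can be sketched as follows: window each component, average the amplitude spectra, and take the quadratic-mean horizontal over the vertical. The synthetic 0.5 Hz resonance, sampling rate, and noise model below are illustrative assumptions, not the Campo de Dalías data.

```python
import numpy as np

def mhvsr(ns, ew, v, fs, nwin):
    """Average windowed amplitude spectra, then return the quadratic-mean
    horizontal-to-vertical spectral ratio."""
    def mean_spec(x):
        wins = x[: len(x) // nwin * nwin].reshape(-1, nwin)
        return np.mean(np.abs(np.fft.rfft(wins * np.hanning(nwin), axis=1)), axis=0)
    H = np.sqrt(0.5 * (mean_spec(ns) ** 2 + mean_spec(ew) ** 2))
    V = mean_spec(v)
    return np.fft.rfftfreq(nwin, 1.0 / fs), H / V

# Synthetic test: horizontals resonate at 0.5 Hz over a common noise floor.
rng = np.random.default_rng(0)
fs, nwin = 10.0, 1500                        # 10 Hz sampling, 150 s windows
t = np.arange(int(3600 * fs)) / fs           # one hour of data
noise = lambda: rng.standard_normal(t.size)
ns = 5 * np.sin(2 * np.pi * 0.5 * t) + noise()
ew = 5 * np.sin(2 * np.pi * 0.5 * t) + noise()
v = noise()
freqs, ratio = mhvsr(ns, ew, v, fs, nwin)
peak_freq = freqs[1:][np.argmax(ratio[1:])]  # skip the DC bin
```

With 150 s windows the frequency resolution is about 0.007 Hz, sufficient to track sub-hertz peaks like those studied in the survey.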
|
We analyze initial-boundary value problems for an integrable generalization
of the nonlinear Schr\"odinger equation formulated on the half-line. In
particular, we investigate the so-called linearizable boundary conditions,
which in this case are of Robin type. Furthermore, we use a particular solution
to verify explicitly all the steps needed for the solution of a well-posed
problem.
|
We report the discovery of a high redshift, narrow emission-line galaxy
identified in the optical follow-up of deep ROSAT fields. The object has a
redshift of z=2.35 and its narrow emission lines together with its high optical
and X-ray luminosity imply that this is a rare example of a type 2 QSO. The
intrinsic X-ray absorption is either very low or we are observing scattered
flux which does not come directly from the nucleus. The X-ray spectrum of this
object is harder than that of normal QSOs, and it is possible that a hitherto
unidentified population of similar objects at fainter X-ray fluxes could
account for the missing hard component of the X-ray background.
|
This article presents methods to efficiently compute the Coriolis matrix and
underlying Christoffel symbols (of the first kind) for tree-structure
rigid-body systems. The algorithms can be executed purely numerically, without
requiring partial derivatives as in unscalable symbolic techniques. The
computations share a recursive structure in common with classical methods such
as the Composite-Rigid-Body Algorithm and are of the lowest possible order:
$O(Nd)$ for the Coriolis matrix and $O(Nd^2)$ for the Christoffel symbols,
where $N$ is the number of bodies and $d$ is the depth of the kinematic tree.
Implementation in C/C++ shows computation times on the order of 10-20 $\mu$s
for the Coriolis matrix and 40-120 $\mu$s for the Christoffel symbols on
systems with 20 degrees of freedom. The results demonstrate feasibility for the
adoption of these algorithms within high-rate ($>$1kHz) loops for model-based
control applications.
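For orientation, here is a minimal definition-based sketch of the quantities the article's recursive $O(Nd)$/$O(Nd^2)$ algorithms compute efficiently: Christoffel symbols of the first kind obtained from partial derivatives of the mass matrix, here via finite differences on a toy 2-DoF planar-arm mass matrix (an assumption, not the paper's method, and far slower than the recursive algorithms).

```python
import numpy as np

def mass_matrix(q):
    # Closed-form mass matrix of a toy 2-DoF planar arm (unit parameters);
    # a stand-in for any tree-structure rigid-body system.
    c2 = np.cos(q[1])
    return np.array([[3.0 + 2.0 * c2, 1.0 + c2],
                     [1.0 + c2, 1.0]])

def christoffel_first_kind(q, h=1e-6):
    # Definition via central differences of M(q):
    # Gamma_ijk = 0.5 (dM_ij/dq_k + dM_ik/dq_j - dM_jk/dq_i).
    n = len(q)
    dM = np.zeros((n, n, n))           # dM[i, j, k] = dM_ij / dq_k
    for k in range(n):
        e = np.zeros(n); e[k] = h
        dM[:, :, k] = (mass_matrix(q + e) - mass_matrix(q - e)) / (2 * h)
    G = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                G[i, j, k] = 0.5 * (dM[i, j, k] + dM[i, k, j] - dM[j, k, i])
    return G

def coriolis_matrix(q, qd):
    # C_ij = sum_k Gamma_ijk * qd_k, giving the classical property
    # that dM/dt - 2C is skew-symmetric.
    return np.einsum('ijk,k->ij', christoffel_first_kind(q), qd)
```

This construction satisfies $\dot M = C + C^\top$ along any trajectory, a standard sanity check for Coriolis-matrix implementations.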
|
We present a new three-dimensional Monte-Carlo code MUSIC (MUon SImulation
Code) for muon propagation through rock. All processes of muon interaction
with matter with high energy loss (including the knock-on electron production)
are treated as stochastic processes. The angular deviation and lateral
displacement of muons due to multiple scattering, as well as bremsstrahlung,
pair production and inelastic scattering are taken into account. The code has
been applied to obtain the energy distribution and angular and lateral
deviations of single muons at different depths underground. The muon
multiplicity distributions obtained with MUSIC and CORSIKA (Extensive Air
Shower simulation code) are also presented. We discuss the systematic
uncertainties of the results due to different muon bremsstrahlung
cross-sections.
|
We give a new proof of a weak version of the R. Holzman and D. J. Kleitman
bound on the number of $n$-dimensional cube vertices strictly separated by a
hyperplane tangent to the inscribed sphere.
|
In this paper, we attempt to find out the `spectro-temporal' characteristics
during the jet ejection, of several outbursting Galactic black hole sources
based on RXTE-PCA/HEXTE data in the energy band of 2 - 100 keV. We present
results of detailed analysis of these sources during the rising phase of their
outburst, whenever simultaneous or near-simultaneous X-ray and Radio
observations are `available'. We find that before the peak radio flare
(transient jet) a few of the sources (in addition to those reported earlier)
exhibit `local' softening within the soft intermediate state itself. Except for
the duration, all the properties of the `local' softening (QPO not observed,
reduction in total rms, soft spectra) are observed to be similar to the
canonical soft state. We also find similar `local' softening for the recent
outburst of V404 Cyg, based on SWIFT observations. Fast changes in the
`spectro-temporal' properties during the `local' softening imply that it may
not be occurring due to a change in the Keplerian accretion rate. We discuss these
results in the framework of the magnetized two component advective flow model.
|
A method is developed for reducing the expenditure of computing resources and
time in pattern recognition; a way of constructing minimal test sets, or
individual minimal tests, on Boolean matrices is suggested.
|
We study stochastic convex optimization under infinite noise variance.
Specifically, when the stochastic gradient is unbiased and has uniformly
bounded $(1+\kappa)$-th moment, for some $\kappa \in (0,1]$, we quantify the
convergence rate of the Stochastic Mirror Descent algorithm with a particular
class of uniformly convex mirror maps, in terms of the number of iterations,
dimensionality and related geometric parameters of the optimization problem.
Interestingly, this algorithm does not require any explicit gradient clipping or
normalization, which have been extensively used in several recent empirical and
theoretical works. We complement our convergence results with
information-theoretic lower bounds showing that no other algorithm using only
stochastic first-order oracles can achieve improved rates. Our results have
several interesting consequences for devising online/streaming stochastic
approximation algorithms for problems arising in robust statistics and machine
learning.
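A minimal sketch of Stochastic Mirror Descent with a uniformly convex mirror map follows; the map $\psi(x) = \tfrac{1}{p}\|x\|_p^p$, the quadratic objective, and the Student-$t$ noise (finite mean, infinite variance) are illustrative assumptions, not the paper's exact setting.

```python
import numpy as np

rng = np.random.default_rng(0)

def smd(x0, stoch_grad, steps, eta, p=1.5):
    """Stochastic Mirror Descent with the uniformly convex mirror map
    psi(x) = (1/p)||x||_p^p, p in (1, 2]; no clipping or normalization."""
    to_dual = lambda x: np.sign(x) * np.abs(x) ** (p - 1.0)
    to_primal = lambda y: np.sign(y) * np.abs(y) ** (1.0 / (p - 1.0))
    x, avg = x0.copy(), np.zeros_like(x0)
    for _ in range(steps):
        x = to_primal(to_dual(x) - eta * stoch_grad(x))  # step in dual space
        avg += x
    return avg / steps  # averaged iterate

# f(x) = 0.5||x - b||^2 with heavy-tailed gradient noise: Student-t with
# 1.5 degrees of freedom has a finite mean but infinite variance.
b = np.array([1.0, -2.0, 0.5])
g = lambda x: (x - b) + 0.1 * rng.standard_t(1.5, size=x.shape)
x_hat = smd(np.zeros(3), g, steps=20000, eta=0.01)
```

Occasional very large noise draws perturb the iterate, but the mirror map's geometry plus iterate averaging keeps the estimate stable without any explicit clipping, which is the phenomenon the abstract highlights.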
|
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) plays an
important role in diagnosis and grading of brain tumor. Although manual DCE
biomarker extraction algorithms boost the diagnostic yield of DCE-MRI by
providing quantitative information on tumor prognosis and prediction, they are
time-consuming and prone to human error. In this paper, we propose a
fully-automated, end-to-end system for DCE-MRI analysis of brain tumors. Our
deep learning-powered technique does not require any user interaction, it
yields reproducible results, and it is rigorously validated against benchmark
(BraTS'17 for tumor segmentation, and a test dataset released by the
Quantitative Imaging Biomarkers Alliance for the contrast-concentration
fitting) and clinical (44 low-grade glioma patients) data. Also, we introduce a
cubic model of the vascular input function used for pharmacokinetic modeling
which significantly decreases the fitting error when compared with the state of
the art, alongside a real-time algorithm for determination of the vascular
input region. An extensive experimental study, backed up with statistical
tests, showed that our system delivers state-of-the-art results (in terms of
segmentation accuracy and contrast-concentration fitting) while requiring less
than 3 minutes to process an entire input DCE-MRI study using a single GPU.
|
Risk Assessment is a well known and powerful method for discovering and
mitigating risks, and hence improving safety. Ethical Risk Assessment uses the
same approach but extends the envelope of risk to cover ethical risks in
addition to safety risks. In this paper we outline Ethical Risk Assessment
(ERA) and set ERA within the broader framework of Responsible Robotics. We then
illustrate ERA with a case study of a hypothetical smart robot toy teddy bear:
RoboTed. The case study shows the value of ERA and how consideration of ethical
risks can prompt design changes, resulting in a more ethical and sustainable
robot.
|
In the paper, we study the properties of the $Z$-boson hadronic decay width
by using the $\mathcal{O}(\alpha_s^4)$-order quantum chromodynamics (QCD)
corrections with the help of the principle of maximum conformality (PMC). By
using the PMC single-scale approach, we obtain an accurate renormalization
scale-and-scheme independent perturbative QCD (pQCD) correction for the
$Z$-boson hadronic decay width, independent of any choice of the
renormalization scale. After applying the PMC, a more convergent pQCD series
has been obtained; and the contributions from the unknown
$\mathcal{O}(\alpha_s^5)$-order terms are highly suppressed, e.g.
conservatively, we have $\Delta \Gamma_{\rm Z}^{\rm had}|^{{\cal
O}(\alpha_s^5)}_{\rm PMC}\simeq \pm 0.004$ MeV. In combination with the known
electro-weak (EW) corrections, QED corrections, EW-QCD mixed corrections, and
QED-QCD mixed corrections, our final prediction of the hadronic $Z$ decay width
is $\Gamma_{\rm Z}^{\rm had}=1744.439^{+1.390}_{-1.433}$ MeV, which agrees with
the PDG global fit of experimental measurements, $1744.4\pm 2.0$ MeV.
|
Object detection is increasingly used onboard Unmanned Aerial Vehicles (UAV)
for various applications; however, the machine learning (ML) models for
UAV-based detection are often validated using data curated for tasks unrelated
to the UAV application. This is a concern because training neural networks on
large-scale benchmarks has shown excellent capability in generic object
detection tasks, yet conventional training approaches can lead to large
inference errors for UAV-based images. Such errors arise due to differences in
imaging conditions between images from UAVs and images in training. To overcome
this problem, we characterize boundary conditions of ML models, beyond which
the models exhibit rapid degradation in detection accuracy. Our work is focused
on understanding the impact of different UAV-based imaging conditions on
detection performance by using synthetic data generated using a game engine.
Properties of the game engine are exploited to populate the synthetic datasets
with realistic and annotated images. Specifically, it enables the fine control
of various parameters, such as camera position, view angle, illumination
conditions, and object pose. Using the synthetic datasets, we analyze detection
accuracy in different imaging conditions as a function of the above parameters.
We use three well-known neural network models with different model complexity
in our work. In our experiment, we observe and quantify the following: 1) how
detection accuracy drops as the camera moves toward the nadir-view region; 2)
how detection accuracy varies depending on different object poses; and 3) the
degree to which the robustness of the models changes as illumination conditions
vary.
|
Like many other advanced imaging methods, x-ray phase contrast imaging and
tomography require mathematical inversion of the observed data to obtain
real-space information. While an accurate forward model describing the
generally nonlinear image formation from a given object to the observations is
often available, explicit inversion formulas are typically not known. Moreover,
the measured data might be insufficient for stable image reconstruction, in
which case it has to be complemented by suitable a priori information. In this
work, regularized Newton methods are presented as a general framework for the
solution of such ill-posed nonlinear imaging problems. For a proof of
principle, the approach is applied to x-ray phase contrast imaging in the
near-field propagation regime. Simultaneous recovery of phase and
amplitude from a single near-field diffraction pattern without homogeneity
constraints is demonstrated for the first time. The presented methods further
permit all-at-once phase contrast tomography, i.e. simultaneous phase retrieval
and tomographic inversion. We demonstrate the potential of this approach by
three-dimensional imaging of a colloidal crystal at 95 nm isotropic resolution.
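The regularized Newton idea can be sketched generically: linearize the forward model at the current iterate and solve a Tikhonov-regularized least-squares problem for the update. The toy two-dimensional forward model below is an assumption standing in for the (much larger) phase-contrast operator.

```python
import numpy as np

def regularized_newton(F, J, y, x0, alpha=1.0, iters=20):
    """At each step solve the linearized, Tikhonov-regularized problem
       min_h ||F(x) + J(x) h - y||^2 + alpha ||h||^2."""
    x = x0.copy()
    for _ in range(iters):
        Jx = J(x)
        r = y - F(x)
        h = np.linalg.solve(Jx.T @ Jx + alpha * np.eye(len(x)), Jx.T @ r)
        x = x + h
        alpha *= 0.7  # relax the regularization as the iteration stabilizes
    return x

# Toy nonlinear forward model (a stand-in for the imaging operator).
F = lambda x: np.array([x[0] ** 2 + x[1], np.sin(x[0]) + x[1] ** 3])
J = lambda x: np.array([[2 * x[0], 1.0], [np.cos(x[0]), 3 * x[1] ** 2]])

x_true = np.array([0.8, 0.5])
x_rec = regularized_newton(F, J, F(x_true), np.array([0.2, 0.2]))
```

The regularization term both stabilizes the ill-posed linearized subproblems and is where a priori information enters; in the imaging application the identity penalty would be replaced by a problem-adapted one.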
|
The sixth-generation (6G) network is expected to achieve global coverage
based on the space-air-ground integrated network, and the latest satellite
network will play an important role in it. The introduction of inter-satellite
links (ISLs) can significantly improve the throughput of the satellite network,
and has recently attracted much attention from both academia and industry. In this
paper, we illustrate the advantages of using the laser for ISLs due to its
longer communication distance, higher data speed, and stronger security.
Specifically, space-borne laser terminals with the acquisition, pointing and
tracking mechanism which realize long-distance communication are illustrated,
advanced modulation and multiplexing modes that make high communication rates
possible are introduced, and the security of ISLs ensured by the
characteristics of both laser and the optical channel is also analyzed.
Moreover, some open issues such as advanced optical beam steering, routing and
scheduling algorithm, and integrated sensing and communication are discussed to
direct future research.
|
Type III multi-step rationally-extended harmonic oscillator and radial
harmonic oscillator potentials, characterized by a set of $k$ integers $m_1$,
$m_2$, \ldots, $m_k$, such that $m_1 < m_2 < \cdots < m_k$ with $m_i$ even
(resp.\ odd) for $i$ odd (resp.\ even), are considered. The state-adding and
state-deleting approaches to these potentials in a supersymmetric quantum
mechanical framework are combined to construct new ladder operators. The
eigenstates of the Hamiltonians are shown to separate into $m_k+1$
infinite-dimensional unitary irreducible representations of the corresponding
polynomial Heisenberg algebras. These ladder operators are then used to build a
higher-order integral of motion for seven new infinite families of
superintegrable two-dimensional systems separable in Cartesian coordinates. The
finite-dimensional unitary irreducible representations of the polynomial
algebras of such systems are directly determined from the ladder operator
action on the constituent one-dimensional Hamiltonian eigenstates and provide
an algebraic derivation of the whole spectrum of the superintegrable systems,
including the total level degeneracies.
|
In previous articles [J. Chem. Phys. 121 4501 (2004), J. Chem. Phys. 124
034115 (2006), J. Chem. Phys. 124 034116 (2006)] a bipolar counter-propagating
wave decomposition, $\Psi = \Psi_+ + \Psi_-$, was presented for stationary states $\Psi$
of the one-dimensional Schr\"odinger equation, such that the components $\Psi_\pm$
approach their semiclassical WKB analogs in the large-action limit. The
corresponding bipolar quantum trajectories are classical-like and well-behaved,
even when Psi has many nodes, or is wildly oscillatory. In this paper, the
method is generalized for multisurface scattering applications, and applied to
several benchmark problems. A natural connection is established between
intersurface transitions and $(\pm)$ transitions.
|
Here we show that the Pfaffian state proposed for the $\frac52$ fractional
quantum Hall states in conventional two-dimensional electron systems can be
readily realized in bilayer graphene at one of the Landau levels. The
properties and stability of the Pfaffian state at this special Landau level
strongly depend on the magnetic field strength. The graphene system shows a
transition from the incompressible to a compressible state with increasing
magnetic field. At a finite magnetic field of ~10 Tesla, the Pfaffian state in
bilayer graphene becomes more stable than its counterpart in conventional
electron systems.
|
Data pruning aims to obtain lossless performances with less overall cost. A
common approach is to filter out samples that make less contribution to the
training. This could lead to gradient expectation bias compared to the original
data. To solve this problem, we propose \textbf{InfoBatch}, a novel framework
aiming to achieve lossless training acceleration by unbiased dynamic data
pruning. Specifically, InfoBatch randomly prunes a portion of less informative
samples based on the loss distribution and rescales the gradients of the
remaining samples to approximate the original gradient. As a plug-and-play and
architecture-agnostic framework, InfoBatch consistently obtains lossless
training results on classification, semantic segmentation, vision pretraining,
and instruction fine-tuning tasks. On CIFAR10/100, ImageNet-1K, and ADE20K,
InfoBatch losslessly saves 40\% overall cost. For pretraining MAE and diffusion
model, InfoBatch can respectively save 24.8\% and 27\% cost. For LLaMA
instruction fine-tuning, InfoBatch is also able to save 20\% cost and is
compatible with coreset selection methods. The code is publicly available at
\href{https://github.com/henryqin1997/InfoBatch}{github.com/NUS-HPC-AI-Lab/InfoBatch}.
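The unbiased prune-and-rescale idea can be sketched in a few lines of NumPy: randomly drop a fraction of low-loss samples and up-weight the surviving ones so the expected gradient contribution is unchanged. The mean-loss threshold and pruning ratio of 0.5 are illustrative assumptions, not InfoBatch's exact schedule.

```python
import numpy as np

def prune_and_rescale(losses, threshold, r, rng):
    """Randomly prune a fraction r of low-loss ("less informative") samples
    and up-weight the survivors so that the expected weighted contribution
    of the batch is unchanged."""
    candidate = losses < threshold                    # well-learned samples
    drop = candidate & (rng.random(losses.shape) < r)
    keep = ~drop
    weights = np.ones_like(losses)
    weights[candidate & keep] = 1.0 / (1.0 - r)       # unbiasedness rescale
    return keep, weights

# Check unbiasedness: the weighted kept-loss sum matches the full sum in
# expectation (losses here are synthetic stand-ins for per-sample losses).
rng = np.random.default_rng(3)
losses = rng.exponential(1.0, size=512)
threshold = np.mean(losses)
totals = [np.sum((w * losses)[k])
          for k, w in (prune_and_rescale(losses, threshold, 0.5, rng)
                       for _ in range(4000))]
```

Because the rescaling cancels the pruning probability in expectation, the gradient estimate stays unbiased while roughly half of the low-loss samples never enter the backward pass.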
|
We present a tool to generate mock quasar microlensing light curves and
sample them according to any observing strategy. An updated treatment of the
fixed and random velocity components of observer, lens, and source is used,
together with a proper alignment with the external shear defining the
magnification map caustic orientation. Our tool produces quantitative results
on high magnification events and caustic crossings, which we use to study three
lensed quasars known to display microlensing, viz. RX J1131-1231, HE 0230-2130,
and Q 2237+0305, as they would be monitored by The Rubin Observatory Legacy
Survey of Space and Time (LSST). We conclude that depending on the location on
the sky, the lens and source redshift, and the caustic network density, the
microlensing variability may deviate significantly from the expected
$\sim$20-year average time scale (Mosquera & Kochanek 2011, arXiv:1104.2356).
We estimate that $\sim300$ high magnification events with $\Delta$mag$>1$ mag
could potentially be observed by LSST each year. The duration of the majority
of high magnification events is between 10 and 100 days, requiring a very high
cadence to capture and resolve them. Uniform LSST observing strategies perform
the best in recovering microlensing high magnification events. Our web tool can
be extended to any instrument and observing strategy, and is freely available
as a service at http://gerlumph.swin.edu.au/tools/lsst_generator/, along with
all the related code.
|
Recently, BERT achieved significant progress in sentence matching via
word-level cross-sentence attention. However, performance drops significantly
when siamese BERT-networks are used to derive two sentence embeddings, which
fall short of capturing the global semantics, since the word-level attention
between two sentences is absent. In this paper, we propose a Dual-view
distilled BERT~(DvBERT) for sentence matching with sentence embeddings. Our
method deals with a sentence pair from two distinct views, i.e., Siamese View
and Interaction View. Siamese View is the backbone where we generate sentence
embeddings. Interaction View integrates the cross sentence interaction as
multiple teachers to boost the representation ability of sentence embeddings.
Experiments on six STS tasks show that our method outperforms the
state-of-the-art sentence embedding methods significantly.
|
We generalize the Gassert-Shor formula for numerical semigroups.
|
Let $d\in\mathbb{N}$ and let $\varphi\colon(0,1)\to[0,d]$. We prove that
there exists a set $F\subset\mathbb{R}^d$ such that
$\operatorname{dim}_A^\theta F=\varphi(\theta)$ for all $\theta\in(0,1)$ if and
only if for every $0<\lambda<\theta<1$, \[0\leq
(1-\lambda)\varphi(\lambda)-(1-\theta)\varphi(\theta)\leq
(\theta-\lambda)\varphi\Bigl(\frac{\lambda}{\theta}\Bigr).\] In particular, the
following behaviours which have not previously been witnessed in any examples
are possible: the Assouad spectrum can be non-monotonic on every open set, and
can fail to be H\"older in a neighbourhood of 1.
|
This paper extends the theory of subset team games, a generalization of
cooperative game theory requiring a payoff function that is defined for all
subsets of players. This subset utility is used to define both altruistic and
selfish contributions of a player to the team. We investigate properties of
these games, and analyze the implications of altruism and selfishness for
general situations, for prisoner's dilemma, and for a specific game with a
Cobb-Douglas utility.
|
We define a notion of cofibration among n-categories and show that the
cofibrant objects are exactly the free ones, that is those generated by
polygraphs.
|
In information theory, the link between continuous information and discrete
information is established through well-known sampling theorems. Sampling
theory explains, for example, how frequency-filtered music signals are
reconstructible perfectly from discrete samples. In this Letter, sampling
theory is generalized to pseudo-Riemannian manifolds. This provides a new set
of mathematical tools for the study of space-time at the Planck scale: theories
formulated on a differentiable space-time manifold can be completely equivalent
to lattice theories. There is a close connection to generalized uncertainty
relations which have appeared in string theory and other studies of quantum
gravity.
|
In this paper, we find the necessary and sufficient conditions, inclusion
relations for Poisson distribution series $\mathcal{K}(m,z)=z+\sum_{n=2}^\infty
\frac{m^{n-1}}{(n-1)!}e^{-m}z^{n}$ belonging to the subclasses
$\mathcal{S}(k,\lambda )$ and $\mathcal{C}(k,\lambda )$ of analytic functions
with negative coefficients. Further, we consider the integral operator
$\mathcal{G}(m,z) = \int_0^z \frac{\mathcal{F}(m,\zeta)}{\zeta} d\zeta$
belonging to the above classes.
|
The optical conductivity [$\sigma(\omega)$] spectra of alkaline-earth-filled
skutterudites with the chemical formula $A^{2+}M_{4}$Sb$_{12}$ ($A$ = Sr, Ba,
$M$ = Fe, Ru, Os) and a reference material La$^{3+}$Fe$_{4}$Sb$_{12}$ were
obtained and compared with the corresponding band structure calculations and
with calculated $\sigma(\omega)$ spectra to investigate their electronic
structures. At high temperatures, the energy of the plasma edge decreased with
the increasing valence of the guest atoms $A$ in the Fe$_{4}$Sb$_{12}$ cage
indicating hole-type conduction. A narrow peak with a pseudogap of 25 meV was
observed in SrFe$_{4}$Sb$_{12}$, while the corresponding peaks were located at
200 and 100 meV in the Ru- and Os-counterparts, respectively. The order of the
peak energy in these compounds is consistent with the thermodynamical
properties in which the Os-compound is located between the Fe- and
Ru-compounds. This indicates that the electronic structure observed in the
infrared $\sigma(\omega)$ spectra directly affects the thermodynamical
properties. The band structure calculations imply that the different electronic
structures among these compounds originate from the different $d$ states of the
$M$ ions.
|
We consider dimensional reduction of the Bagger-Lambert-Gustavsson theory to
a zero-dimensional 3-Lie algebra model and construct various stable solutions
corresponding to quantized Nambu-Poisson manifolds. A recently proposed Higgs
mechanism reduces this model to the IKKT matrix model. We find that in the
strong coupling limit, our solutions correspond to ordinary noncommutative
spaces arising as stable solutions in the IKKT model with D-brane backgrounds.
In particular, this happens for S^3, R^3 and five-dimensional Neveu-Schwarz
Hpp-waves. We expand our model around these backgrounds and find effective
noncommutative field theories with complicated interactions involving
higher-derivative terms. We also describe the relation of our reduced model to
a cubic supermatrix model based on an osp(1|32) supersymmetry algebra.
|
The one body density matrix, momentum distribution, natural orbits and quasi
hole states of 16O and 40Ca are analyzed in the framework of the correlated
basis function theory using state dependent correlations with central and
tensor components. Fermi hypernetted chain integral equations and single
operator chain approximation are employed to sum cluster diagrams at all
orders. The optimal trial wave function is determined by means of the
variational principle and the realistic Argonne v8' two-nucleon and Urbana IX
three-nucleon interactions. The correlated momentum distributions are in good
agreement with the available variational Monte Carlo results and show the well
known enhancement at large momentum values with respect to the independent
particle model. Diagonalization of the density matrix provides the natural
orbits and their occupation numbers. Correlations deplete the occupation number
of the first natural orbitals by more than 10%. The following ones are instead
occupied at the level of a few percent. Jastrow correlations lower the spectroscopic
factors of the valence states by a few percent (~1-3%) and an additional ~8-12%
depletion is provided by tensor correlations. It is confirmed that short range
correlations do not explain the spectroscopic factors extracted from (e,e'p)
experiments. 2h-1p perturbative corrections in the correlated basis are
expected to provide most of the remaining strength, as in nuclear matter.
|
The paper provides an introduction into p-mechanics, which is a consistent
physical theory suitable for a simultaneous description of classical and
quantum mechanics. p-Mechanics naturally provides a common ground for several
different approaches to quantisation (geometric, Weyl, coherent states,
Berezin, deformation, Moyal, etc.) and has a potential for expansions into
field and string theories. The backbone of p-mechanics is solely the
representation theory of the Heisenberg group. Keywords: Classical mechanics,
quantum mechanics, Moyal brackets, Poisson brackets, commutator, Heisenberg
group, orbit method, deformation quantisation, symplectic group, representation
theory, metaplectic representation, Berezin quantisation, Weyl quantisation,
Segal--Bargmann--Fock space, coherent states, wavelet transform, contextual
interpretation, string theory, field theory.
|
Bilinear R-parity violation is a simple extension of the MSSM allowing for
Majorana neutrino masses. One of the three neutrinos picks up mass by mixing
with the neutralinos of the MSSM, while the other two neutrinos gain mass from
1-loop corrections. Once 1-loop corrections are carefully taken into account
the model is able to explain solar and atmospheric neutrino data for specific
though simple choices of the R-parity violating parameters.
|
When considering flows in biological membranes, they are usually treated as
flat, though more often than not, they are curved surfaces, even extremely
curved, as in the case of the endoplasmic reticulum. Here, we study the
topological effects of curvature on flows in membranes. Focusing on a system of
many point vortical defects, we are able to cast the viscous dynamics of the
defects in terms of a geometric Hamiltonian. In contrast to the planar
situation, the flows generate additional defects of positive index. For the
simpler situation of two vortices, we analytically predict the location of
these stagnation points. At the low curvature limit, the dynamics resemble that
of vortices in an ideal fluid, but considerable deviations occur at high
curvatures. The geometric formulation allows us to construct the
spatio-temporal evolution of streamline topology of the flows resulting from
hydrodynamic interactions between the vortices. The streamlines reveal novel
dynamical bifurcations leading to spontaneous defect-pair creation and fusion.
Further, we find that membrane curvature mediates defect binding and imparts a
global rotation to the many-vortex system, with the individual vortices still
interacting locally.
|
Rising maintenance costs of ageing infrastructure necessitate innovative
monitoring techniques. This paper presents a new approach for detecting axles,
enabling real-time application of Bridge Weigh-In-Motion (BWIM) systems without
dedicated axle detectors. The proposed Virtual Axle Detector with Enhanced
Receptive Field (VADER) is independent of bridge type and sensor placement
while only using raw acceleration data as input. By using raw data instead of
spectrograms as input, the receptive field can be enhanced without increasing
the number of parameters. We also introduce a novel receptive field (RF) rule
for an object-size driven design of Convolutional Neural Network (CNN)
architectures. We were able to show, that the RF rule has the potential to
bridge the gap between physical boundary conditions and deep learning model
development. Based on the RF rule, our results suggest that models using raw
data could achieve better performance than those using spectrograms, offering a
compelling reason to consider raw data as input. The proposed VADER detects
99.9 % of axles with a spatial error of 4.13 cm using only acceleration
measurements, while cutting computational and memory costs by 99 % compared to
the state-of-the-art approach based on spectrograms.
|
In this paper, we will analyze the effect of thermal fluctuations on the
thermodynamics of a charged dilatonic black Saturn. These thermal fluctuations
will correct the thermodynamics of the charged dilatonic black Saturn. We will
analyze the corrections to the thermodynamics of this system by first relating
the fluctuations in the entropy to the fluctuations in the energy. Then, we
will use the relation between entropy and a conformal field theory to analyze
the fluctuations in the entropy. We will demonstrate that similar physical
results are obtained from both these approaches. We will also study the effect
of thermal fluctuations on the phase transition in this charged dilatonic black
Saturn.
|
Robotic Process Automation (RPA) is a fast-emerging automation technology
that sits between the fields of Business Process Management (BPM) and
Artificial Intelligence (AI), and allows organizations to automate high volume
routines. RPA tools are able to capture the execution of such routines
previously performed by human users on the interface of a computer system,
and then emulate their enactment in place of the user by means of a software
robot. Nowadays, in the BPM domain, only simple, predictable business processes
involving routine work can be automated by RPA tools in situations where there
is no room for interpretation, while more sophisticated work is still left to
human experts. In this paper, starting from an in-depth experimentation of the
RPA tools available on the market, we provide a classification framework to
categorize them on the basis of some key dimensions. Then, based on this
analysis, we derive four research challenges and discuss prospective approaches
necessary to inject intelligence into current RPA technology, in order to
achieve more widespread adoption of RPA in the BPM domain.
|
Transport properties of the classical antiferromagnetic XXZ model on the
square lattice have been theoretically investigated, putting emphasis on how
the occurrence of a phase transition is reflected in spin and thermal
transports. As is well known, the anisotropy of the exchange interaction
$\Delta\equiv J_z/J_x$ plays a role to control the universality class of the
transition of the model, i.e., either a second-order transition at $T_N$ into a
magnetically ordered state or the Kosterlitz-Thouless (KT) transition at
$T_{KT}$, which respectively occur for the Ising-type ($\Delta >1$) and
$XY$-type ($\Delta <1$) anisotropies, while for the isotropic Heisenberg case
of $\Delta=1$, a phase transition does not occur at any finite temperature. It
is found by means of the hybrid Monte-Carlo and spin-dynamics simulations that
the spin current probes the difference in the ordering properties, while the
thermal current does not. For the $XY$-type anisotropy, the longitudinal
spin-current conductivity $\sigma^s_{xx}$ ($=\sigma^s_{yy}$) exhibits a
divergence at $T_{KT}$ of the exponential form, $\sigma^s_{xx} \propto
\exp\big[ B/\sqrt{T/T_{KT}-1 }\, \big]$ with $B={\cal O}(1)$, while for the
Ising-type anisotropy, the temperature dependence of $\sigma^s_{xx}$ is almost
monotonic without showing a clear anomaly at $T_{N}$ and such a monotonic
behavior is also the case in the Heisenberg-type spin system. The significant
enhancement of $\sigma^s_{xx}$ at $T_{KT}$ is found to be due to the
exponential rapid growth of the spin-current-relaxation time toward $T_{KT}$,
which can be understood as a manifestation of the topological nature of a
vortex whose lifetime is expected to get longer toward $T_{KT}$. Possible
experimental platforms for the spin-transport phenomena associated with the KT
topological transition are discussed.
|
We consider an asynchronous system with transitions corresponding to the
instructions of a computer system. For each instruction, a runtime is given. We
propose a mathematical model, allowing us to construct an algorithm for finding
the minimum time of the parallel process with a given trace. We consider a
problem of constructing a parallel process which transforms the initial state
into a given one and has the minimum execution time. We show that it reduces to the
problem of finding the shortest path in a directed graph with edge lengths
equal to 1.
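A shortest-path problem with all edge lengths equal to 1 is solved by plain breadth-first search. A minimal sketch on a toy digraph (the graph and node names are invented for illustration, not taken from the paper's model):

```python
from collections import deque

def shortest_path_length(adj, source, target):
    """Breadth-first search: shortest path length in a directed graph whose
    edges all have length 1. Returns None if the target is unreachable."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        if u == target:
            return dist[u]
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1   # every edge contributes length 1
                queue.append(v)
    return None

# Toy digraph: states as nodes, unit-time transitions as edges.
adj = {"s": ["a", "b"], "a": ["t"], "b": ["a"], "t": []}
```

Here `shortest_path_length(adj, "s", "t")` returns 2, via the path s → a → t.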
|
A novel, generic scheme for off-line handwritten English alphabets character
images is proposed. The advantage of the technique is that it can be applied in
a generic manner to different applications and is expected to perform better in
uncertain and noisy environments. The recognition scheme uses a multilayer
perceptron (MLP) neural network. The system was trained and tested on a
database of 300 samples of handwritten characters. For improved generalization
and to avoid overtraining, the whole available dataset has been divided into
two subsets: training set and test set. We achieved 99.10% and 94.15% correct
recognition rates on the training and test sets, respectively. The proposed scheme
is robust with respect to various writing styles and size as well as presence
of considerable noise.
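The pipeline described above (hold-out split, MLP training, accuracy measured on both sets) can be sketched on synthetic data; the two-blob dataset and all hyperparameters below are invented stand-ins for the 300-sample character database:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the character database: two Gaussian blobs, 300 samples.
X = np.vstack([rng.normal(-1, 0.7, (150, 2)), rng.normal(1, 0.7, (150, 2))])
y = np.repeat([0, 1], 150)
idx = rng.permutation(300)
train, test = idx[:200], idx[200:]           # hold-out split, as in the abstract

# One-hidden-layer MLP trained by full-batch gradient descent.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.5
for _ in range(500):
    h = np.tanh(X[train] @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    g = (p - y[train, None]) / len(train)    # d(cross-entropy)/d(logit)
    gh = (g @ W2.T) * (1 - h ** 2)           # back-propagate through tanh
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X[train].T @ gh); b1 -= lr * gh.sum(0)

def accuracy(split):
    h = np.tanh(X[split] @ W1 + b1)
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))
    return float(((p[:, 0] > 0.5) == y[split]).mean())
```

Comparing `accuracy(train)` against `accuracy(test)` on the held-out split is exactly the overtraining check the abstract describes.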
|
We explore multiple-instance verification, a problem setting where a query
instance is verified against a bag of target instances with heterogeneous,
unknown relevancy. We show that naive adaptations of attention-based multiple
instance learning (MIL) methods and standard verification methods like Siamese
neural networks are unsuitable for this setting: directly combining
state-of-the-art (SOTA) MIL methods and Siamese networks is shown to be no
better, and sometimes significantly worse, than a simple baseline model.
Postulating that this may be caused by the failure of the representation of the
target bag to incorporate the query instance, we introduce a new pooling
approach named ``cross-attention pooling'' (CAP). Under the CAP framework, we
propose two novel attention functions to address the challenge of
distinguishing between highly similar instances in a target bag. Through
empirical studies on three different verification tasks, we demonstrate that
CAP outperforms adaptations of SOTA MIL methods and the baseline by substantial
margins, in terms of both classification accuracy and quality of the
explanations provided for the classifications. Ablation studies confirm the
superior ability of the new attention functions to identify key instances.
|
We investigate the finite-time stabilization of a tree-shaped network of
strings. Transparent boundary conditions are applied at all the external nodes.
At any internal node, in addition to the usual continuity conditions, a
modified Kirchhoff law incorporating a damping term $\alpha u_t$ with a
coefficient $\alpha$ that may depend on the node is considered. We show that
for a convenient choice of the sequence of coefficients $\alpha$, any solution
of the wave equation on the network becomes constant after a finite time. The
condition on the coefficients proves to be sharp at least for a star-shaped
tree. Similar results are derived when we replace the transparent boundary
condition by the Dirichlet (resp. Neumann) boundary condition at one external
node.
|
We study the time-dependent Ginzburg--Landau equations in a three-dimensional
curved polyhedron (possibly nonconvex). Compared with the previous works, we
prove existence and uniqueness of a global weak solution based on weaker
regularity of the solution in the presence of edges or corners, where the
magnetic potential may not be in $L^2(0,T;H^1(\Omega)^3)$.
|
We study a two-dimensional cylindrically-symmetric electron droplet separated
from a surrounding electron ring by a tunable barrier using the exact
diagonalization method. The magnetic field is assumed strong so that the
electrons become spin-polarized and reside on the lowest Fock-Darwin band. We
calculate the ground state phase diagram for 6 electrons. At weak coupling, the
phase diagram exhibits a clear diamond structure due to the blockade caused by
the angular momentum difference between the two systems. We find separate
excitations of the droplet and the ring as well as the transfer of charge
between the two parts of the system. At strong coupling, interactions destroy
the coherent structure of the phase diagram, while individual phases are still
heavily affected by the potential barrier.
|
Recently, metasurfaces have experienced revolutionary growth in the sensing
and superresolution imaging field, due to their enabling of subwavelength
manipulation of electromagnetic waves. However, the addition of metasurfaces
multiplies the complexity of retrieving target information from the detected
fields. Besides, although the deep learning method affords a compelling
platform for a series of electromagnetic problems, many studies mainly
concentrate on resolving one single function and limit the research's
versatility. In this study, a multifunctional deep neural network is
demonstrated to reconstruct target information in a metasurface-target
interactive system. Firstly, the interactive scenario is confirmed to tolerate
system noise in a preliminary verification experiment. Then, fed with the
electric field distributions, the multitask deep neural network can not only
sense the quantity and permittivity of targets but also generate
superresolution images with high precision. The deep learning method provides
another way to recover targets' diverse information in metasurface-based target
detection, accelerating the progression of target reconstruction areas. This
methodology may also hold promise for inverse reconstruction or forward
prediction problems in other electromagnetic scenarios.
|
A campaign is described, open to participation by interested AAVSO members,
of follow-up observations for newly-discovered Cepheid variables in
undersampled and obscured regions of the Galaxy. A primary objective being to
use these supergiants to clarify the Galaxy's spiral nature. Preliminary
multiband photometric observations are presented for three Cepheids discovered
beyond the obscuring dust between the Cygnus & Aquila Rifts (40 \le l \le 50
degrees), a region reputedly tied to a segment of the Sagittarius-Carina arm
which appears to cease unexpectedly. The data confirm the existence of
exceptional extinction along the line of sight at upwards of A_V~6 magnitudes
(d~2 kpc, l~47 degrees), however, the noted paucity of optical spiral tracers
in the region does not arise solely from incompleteness owing to extinction. A
hybrid spiral map of the Galaxy comprised of classical Cepheids, young open
clusters & H II regions, and molecular clouds, presents a consistent picture of
the Milky Way and confirms that the three Cepheids do not populate the main
portion of the Sagittarius-Carina arm, which does not emanate locally from this
region. The Sagittarius-Carina arm, along with other distinct spiral features,
are found to deviate from the canonical logarithmic spiral pattern. Revised
parameters are also issued for the Cepheid BY Cas, and it is identified on the
spiral map as lying mainly in the foreground to young associations in
Cassiopeia. A Fourier analysis of BY Cas' light-curve implies overtone
pulsation, and the Cepheid is probably unassociated with the open cluster NGC
663 since the distances, ages, and radial velocities do not match.
|
Let $\Gamma$ denote an undirected, connected, regular graph with vertex set
$X$, adjacency matrix $A$, and ${d+1}$ distinct eigenvalues. Let ${\mathcal
A}={\mathcal A}(\Gamma)$ denote the subalgebra of Mat$_X({\mathbb C})$
generated by $A$. We refer to ${\mathcal A}$ as the {\it adjacency algebra} of
$\Gamma$. In this paper we investigate algebraic and combinatorial structure of
$\Gamma$ for which the adjacency algebra ${\mathcal A}$ is closed under
Hadamard multiplication. In particular, under this simple assumption, we show
the following: (i) ${\mathcal A}$ has a standard basis $\{I,F_1,\ldots,F_d\}$;
(ii) for every vertex there exists identical distance-faithful intersection
diagram of $\Gamma$ with $d+1$ cells; (iii) the graph $\Gamma$ is
quotient-polynomial; and (iv) if we pick $F\in \{I,F_1,\ldots,F_d\}$ then $F$
has $d+1$ distinct eigenvalues if and only if
span$\{I,F_1,\ldots,F_d\}=$span$\{I,F,\ldots,F^d\}$. We describe the
combinatorial structure of quotient-polynomial graphs with diameter $2$ and $4$
distinct eigenvalues. As a consequence of the technique from the paper we give
an algorithm which computes the number of distinct eigenvalues of any Hermitian
matrix using only elementary operations. When such a matrix is the adjacency
matrix of a graph $\Gamma$, a simple variation of the algorithm allows us to
decide whether $\Gamma$ is distance-regular or not. In this context, we also
propose an algorithm to find which distance-$i$ matrices are polynomial in $A$,
and to give these polynomials explicitly.
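The algebraic fact underlying such an eigenvalue-counting algorithm can be illustrated numerically: for a Hermitian (hence diagonalizable) matrix, the number of distinct eigenvalues equals the degree of the minimal polynomial, i.e. dim span$\{I, A, A^2, \ldots\}$. The following is a sketch of that idea via a rank computation on vectorized powers, not the paper's elementary-operations algorithm:

```python
import numpy as np

def num_distinct_eigenvalues(A, tol=1e-8):
    """Number of distinct eigenvalues of a Hermitian matrix, computed as
    dim span{I, A, A^2, ...} = degree of the minimal polynomial, via the
    rank of the stacked vectorized powers."""
    n = A.shape[0]
    P, powers = np.eye(n, dtype=A.dtype), []
    for _ in range(n + 1):
        powers.append(P.ravel())
        P = P @ A
    return int(np.linalg.matrix_rank(np.array(powers), tol=tol))

# Adjacency matrix of the 4-cycle: eigenvalues 2, 0, 0, -2, so 3 distinct.
C4 = np.array([[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]], float)
```

For `C4` the minimal polynomial is $\lambda(\lambda^2-4)$, of degree 3, which the rank computation recovers without diagonalizing.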
|
We evaluate the ability of temporal difference learning to track the reward
function of a policy as it changes over time. Our results apply a new adiabatic
theorem that bounds the mixing time of time-inhomogeneous Markov chains. We
derive finite-time bounds for tabular temporal difference learning and
$Q$-learning when the policy used for training changes in time. To achieve
this, we develop bounds for stochastic approximation under asynchronous
adiabatic updates.
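The tabular TD(0) update the abstract analyzes can be sketched on a toy chain; the 5-state reflecting random walk below is invented for illustration, and the time-varying policy/reward of the paper is kept fixed here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tabular TD(0) on a 5-state reflecting random walk, with a unit reward for
# entering the right-most state.
n, alpha, gamma = 5, 0.02, 0.9
V = np.zeros(n)
s = 2
for step in rng.choice([-1, 1], 500_000):
    s2 = min(max(s + step, 0), n - 1)
    r = 1.0 if s2 == n - 1 else 0.0
    V[s] += alpha * (r + gamma * V[s2] - V[s])   # TD(0) update
    s = s2

# Closed-form check: V* = (I - gamma P)^{-1} E[r | s].
P = np.zeros((n, n))
for u in range(n):
    for d in (-1, 1):
        P[u, min(max(u + d, 0), n - 1)] += 0.5
V_star = np.linalg.solve(np.eye(n) - gamma * P, P[:, n - 1])
```

With a small constant step size the estimate `V` hovers near the fixed point `V_star`; the paper's finite-time bounds quantify how closely such an iterate can track the fixed point when the underlying chain itself drifts.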
|
A variant of the Anderson model, that describes hybridization between
localized state (c-state) of a quantum dot and a Fermi sea conduction band, is
investigated. We demonstrate that, as a function of the hybridization parameter
v, the system undergoes a crossover from the state where the conduction band
and the c-level are fully coupled to a state where these are decoupled. The
c-electron spectrum, however, has a gap together with the presence of the Kondo
peak in the former state. For the latter, we have a Mott-like localization
where the c-electron spectrum again has a gap without the Kondo peak. Within
this gap the conduction electrons fully recover the free band density of states
and the effective hybridization is practically zero. Our main aim, however, is
to study the emission and absorption in a quantum dot with strongly correlated
Kondo ground state. We use the Green function equation of motion method for
this purpose. We calculate the absorption/emission (A/E) spectrum in the Kondo
regime through a symmetrized quantum autocorrelation function obtainable
directly within perturbation theory using the Fermi golden rule approximation.
The spectrum reveals a sharp, tall peak close to Kondo-Abrikosov-Suhl peak and
a few smaller, distinguishable ones on either side. The former clearly
indicates that the Kondo phenomenon has its impact on A/E (non-Kondo
processes), which are driven by the coupling involving the dipole moment of
quantum dot transitions reflecting the physical structure of the dot including
the confinement potential, in the Kondo regime.
|
In several question answering benchmarks, pretrained models have reached
human parity through fine-tuning on an order of 100,000 annotated questions and
answers. We explore the more realistic few-shot setting, where only a few
hundred training examples are available, and observe that standard models
perform poorly, highlighting the discrepancy between current pretraining
objectives and question answering. We propose a new pretraining scheme tailored
for question answering: recurring span selection. Given a passage with multiple
sets of recurring spans, we mask in each set all recurring spans but one, and
ask the model to select the correct span in the passage for each masked span.
Masked spans are replaced with a special token, viewed as a question
representation, that is later used during fine-tuning to select the answer
span. The resulting model obtains surprisingly good results on multiple
benchmarks (e.g., 72.7 F1 on SQuAD with only 128 training examples), while
maintaining competitive performance in the high-resource setting.
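The masking step of recurring span selection can be sketched as follows; the function and mask-token names are invented, and keeping the first occurrence (rather than a random one) is a simplification of this example, not a claim about the paper:

```python
from collections import Counter

def mask_recurring_spans(tokens, span_len=2, mask="[QUESTION]"):
    """Toy sketch: spans of length `span_len` occurring more than once are
    masked everywhere except one occurrence, which serves as the answer span
    the model must select for each mask token."""
    counts = Counter(tuple(tokens[i:i + span_len])
                     for i in range(len(tokens) - span_len + 1))
    recurring = {s for s, c in counts.items() if c > 1}
    out, answers, i = [], {}, 0
    while i < len(tokens):
        span = tuple(tokens[i:i + span_len])
        if span in recurring:
            if span in answers:          # later occurrences get masked
                out.append(mask)
            else:                        # first occurrence kept as the answer
                answers[span] = i
                out.extend(span)
            i += span_len
        else:
            out.append(tokens[i])
            i += 1
    return out, answers

tokens = "the moon orbits earth and the moon is bright".split()
masked, answers = mask_recurring_spans(tokens)
```

Here the recurring bigram "the moon" is kept once and replaced by the mask token at its second occurrence, mirroring how each mask acts as a question representation pointing back at the retained span.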
|
The influence of an electron-vibrational coupling on the laser control of
electron transport through a molecular wire that is attached to several
electronic leads is investigated. These molecular vibrational modes induce an
effective electron-electron interaction. In the regime where the wire electrons
couple weakly to both the external leads and the vibrational modes, we derive
within a Hartree-Fock approximation a nonlinear set of quantum kinetic
equations. The quantum kinetic theory is then used to evaluate the laser
driven, time-averaged electron current through the wire-leads contacts. This
novel formalism is applied to two archetypical situations in the presence of
electron-vibrational effects, namely, (i) the generation of a ratchet or pump
current in a symmetrical molecule by a harmonic mixing field and (ii) the laser
switching of the current through the molecule.
|
We report on the R-band eclipse mapping analysis of high-speed photometry of
the dwarf nova EX Dra on the rise to the maximum of the November 1995 outburst.
The eclipse map shows a one-armed spiral structure of ~180 degrees in azimuth,
extending in radius from R ~0.2 to 0.43 R_{L1} (where R_{L1} is the distance
from the disk center to the inner Lagrangian point), that contributes about 22
per cent of the total flux of the eclipse map. The spiral structure is
stationary in a reference frame co-rotating with the binary and is stable for a
timescale of at least 5 binary orbits. The comparison of the eclipse maps on
the rise and in quiescence suggests that the outbursts of EX Dra may be driven
by episodes of enhanced mass-transfer from the secondary star. Possible
explanations for the nature of the spiral structure are discussed.
|
Here we propose an implementation of all possible Positive Operator Value
Measures (POVMs) of two-photon polarization states. POVMs are the most general
class of quantum measurements. Our setup requires linear optics, Bell State
measurements and an entangled three-photon ancilla state, which can be prepared
separately and in advance (or 'off-line'). As an example we give the detailed
settings for a simultaneous measurement of all four Bell States for an
arbitrary two-photon polarization state, which is impossible with linear optics
alone.
|
Signal detection in large multiple-input multiple-output (large-MIMO) systems
presents greater challenges compared to conventional massive-MIMO for two
primary reasons. First, large-MIMO systems lack favorable propagation
conditions as they do not require a substantially greater number of service
antennas relative to user antennas. Second, the wireless channel may exhibit
spatial non-stationarity when an extremely large aperture array (ELAA) is
deployed in a large-MIMO system. In this paper, we propose a scalable iterative
large-MIMO detector named ANPID, which simultaneously delivers 1) close to
maximum-likelihood detection performance, 2) low computational-complexity
(i.e., square-order of transmit antennas), 3) fast convergence, and 4)
robustness to the spatial non-stationarity in ELAA channels. ANPID incorporates
a damping demodulation step into stationary iterative (SI) methods and
alternates between two distinct demodulated SI methods. Simulation results
demonstrate that ANPID fulfills all the four features concurrently and
outperforms existing low-complexity MIMO detectors, especially in highly-loaded
large MIMO systems.
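A generic damped stationary-iterative step of the kind ANPID builds on can be sketched as damped Jacobi applied to the MMSE normal equations; this is a stand-in illustration only (ANPID's demodulation and alternation steps are not reproduced, and all sizes and parameters below are invented):

```python
import numpy as np

rng = np.random.default_rng(3)

def damped_jacobi_mmse(H, y, sigma2, iters=100, omega=0.7):
    """Damped Jacobi iteration for the MMSE normal equations
    A x = b, with A = H^H H + sigma^2 I and b = H^H y."""
    A = H.conj().T @ H + sigma2 * np.eye(H.shape[1])
    b = H.conj().T @ y
    d = np.diag(A).real
    x = b / d                              # matched-filter-like start
    for _ in range(iters):
        x = x + omega * (b - A @ x) / d    # damped Jacobi update
    return x

m, n = 128, 32                             # tall system: well-conditioned A
H = (rng.normal(size=(m, n)) + 1j * rng.normal(size=(m, n))) / np.sqrt(2 * m)
x_true = rng.choice([-1, 1], n) + 1j * rng.choice([-1, 1], n)
y = H @ x_true + 0.01 * (rng.normal(size=m) + 1j * rng.normal(size=m))
x_hat = damped_jacobi_mmse(H, y, sigma2=1e-4)
```

Each iteration costs one matrix-vector product, i.e. square order in the number of transmit antennas, which is the complexity class the abstract targets; at high SNR the iterate recovers the transmitted QPSK-like symbols.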
|
This is a report from the Libraries and Tools Working Group of the High
Energy Physics Forum for Computational Excellence. It presents the vision of
the working group for how the HEP software community may organize and be
supported in order to more efficiently share and develop common software
libraries and tools across the world's diverse set of HEP experiments. It gives
prioritized recommendations for achieving this goal and provides a survey of a
select number of areas in the current HEP software library and tools landscape.
The survey identifies aspects which support this goal and areas with
opportunities for improvements. The survey covers event processing software
frameworks, software development, data management, workflow and workload
management, geometry information management and conditions databases.
|
We analyze a new random algorithm for numerical integration of $d$-variate
functions over $[0,1]^d$ from a weighted Sobolev space with dominating mixed
smoothness $\alpha\ge 0$ and product weights
$1\ge\gamma_1\ge\gamma_2\ge\cdots>0$, where the functions are continuous and
periodic when $\alpha>1/2$. The algorithm is based on rank-$1$ lattice rules
with a random number of points~$n$. For the case $\alpha>1/2$, we prove that
the algorithm achieves almost the optimal order of convergence of
$\mathcal{O}(n^{-\alpha-1/2})$, where the implied constant is independent of
the dimension~$d$ if the weights satisfy $\sum_{j=1}^\infty
\gamma_j^{1/\alpha}<\infty$. The same rate of convergence holds for the more
general case $\alpha>0$ by adding a random shift to the lattice rule with
random $n$. This shows, in particular, that the exponent of strong tractability
in the randomized setting equals $1/(\alpha+1/2)$, if the weights decay fast
enough. We obtain a lower bound to indicate that our results are essentially
optimal. This paper is a significant advancement over previous related works
with respect to the potential for implementation and the independence of error
bounds on the problem dimension. Other known algorithms which achieve the
optimal error bounds, such as those based on Frolov's method, are very
difficult to implement especially in high dimensions. Here we adapt a
lesser-known randomization technique introduced by Bakhvalov in 1961. This
algorithm is based on rank-$1$ lattice rules which are very easy to implement
given the integer generating vectors. A simple probabilistic approach can be
used to obtain suitable generating vectors.
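A minimal Python sketch of the algorithm's two ingredients — a rank-1 lattice rule and Bakhvalov-style random choice of the number of points n — is given below. The generating vector z = (1, 55) and the prime range are purely illustrative; in practice suitable vectors come from the simple probabilistic approach mentioned above.

```python
import math
import random

def lattice_rule(f, n, z):
    # Rank-1 lattice rule: average f over the node set {frac(k*z/n)}, k = 0..n-1.
    d = len(z)
    return sum(f([(k * z[j] % n) / n for j in range(d)]) for k in range(n)) / n

def randomized_lattice_rule(f, z, n_max, rng=random):
    # Bakhvalov's randomization: draw the number of points n uniformly from
    # the primes in (n_max/2, n_max], then apply the plain lattice rule.
    primes = [n for n in range(n_max // 2 + 1, n_max + 1)
              if all(n % p for p in range(2, math.isqrt(n) + 1))]
    return lattice_rule(f, rng.choice(primes), z)

# Smooth 1-periodic test integrand on [0,1]^2 with exact integral 1.
f = lambda x: (1 + math.sin(2 * math.pi * x[0])) * (1 + math.sin(2 * math.pi * x[1]))
q = randomized_lattice_rule(f, z=[1, 55], n_max=401, rng=random.Random(0))
```

For this trigonometric-polynomial integrand the rule is exact (no frequency of f aliases to zero modulo the chosen n), so the computed value agrees with the true integral up to floating-point roundoff.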
|
We derive covariant wave functions for hadrons composed of two constituents
for arbitrary Lorentz boosts. Focussing explicitly on baryons as quark-diquark
systems, we reduce their manifestly covariant Bethe-Salpeter equation to
covariant 3-dimensional forms by projecting on the relative quark-diquark
energy. Guided by a phenomenological multi gluon exchange representation of
covariant confining kernels, we derive explicit solutions for harmonic
confinement and for the MIT Bag Model. We briefly sketch implications of
breaking the spherical symmetry of the ground state and the transition from the
instant form to the light cone via the infinite momentum frame.
|
We analyze four-dimensional quantum field theories with continuous 2-group
global symmetries. At the level of their charges such symmetries are identical
to a product of continuous flavor or spacetime symmetries with a 1-form global
symmetry $U(1)^{(1)}_B$, which arises from a conserved 2-form current
$J_B^{(2)}$. Rather, 2-group symmetries are characterized by deformed current
algebras, with quantized structure constants, which allow two flavor currents
or stress tensors to fuse into $J_B^{(2)}$. This leads to unconventional Ward
identities, which constrain the allowed patterns of spontaneous 2-group
symmetry breaking and other aspects of the renormalization group flow. If
$J_B^{(2)}$ is coupled to a 2-form background gauge field $B^{(2)}$, the
2-group current algebra modifies the behavior of $B^{(2)}$ under background
gauge transformations. Its transformation rule takes the same form as in the
Green-Schwarz mechanism, but only involves the background gauge or gravity
fields that couple to the other 2-group currents. This makes it possible to
partially cancel reducible 't Hooft anomalies using Green-Schwarz counterterms
for the 2-group background gauge fields. The parts that cannot be cancelled are
reinterpreted as mixed, global anomalies involving $U(1)_B^{(1)}$ and receive
contributions from topological, as well as massless, degrees of freedom.
Theories with 2-group symmetry are constructed by gauging an abelian flavor
symmetry with suitable mixed 't Hooft anomalies, which leads to many simple and
explicit examples. Some of them have dynamical string excitations that carry
$U(1)_B^{(1)}$ charge, and 2-group symmetry determines certain 't Hooft
anomalies on the world sheets of these strings. Finally, we point out that
holographic theories with 2-group global symmetries have a bulk description in
terms of dynamical gauge fields that participate in a conventional
Green-Schwarz mechanism.
|
We obtain estimates of the multiplicative constants appearing in local
convergence results of the Riemannian Gauss-Newton method for least squares
problems on manifolds and relate them to the geometric condition number of [P.
B\"urgisser and F. Cucker, Condition: The Geometry of Numerical Algorithms,
2013].
|
Text classification is a fundamental language task in Natural Language
Processing. A variety of sequential models is capable of making good
predictions, yet there is a lack of connection between language semantics and
prediction results. This paper proposes a novel influence score (I-score), a greedy search
algorithm called Backward Dropping Algorithm (BDA), and a novel feature
engineering technique called the "dagger technique". First, the paper proposes
a novel influence score (I-score) to detect and search for the important
language semantics in text document that are useful for making good prediction
in text classification tasks. Next, a greedy search algorithm called the
Backward Dropping Algorithm is proposed to handle long-term dependencies in the
dataset. Moreover, the paper proposes a novel feature engineering technique
called the "dagger technique" that fully preserves the relationship between the
explanatory variable and the response variable. The proposed techniques can be
generalized to any feed-forward Artificial Neural Network (ANN) or
Convolutional Neural Network (CNN). In a real-world application on the Internet
Movie Database (IMDB), the proposed methods improve prediction performance,
achieving an 81% error reduction compared with popular peers that do not
implement the I-score and "dagger technique".
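To make the two ingredients concrete, here is a small Python sketch of one common form of the I-score for discrete features together with the greedy backward dropping loop; the exact normalization of the score and the handling of continuous features vary across presentations, so treat this as an illustrative reconstruction rather than the paper's implementation.

```python
import numpy as np

def i_score(X, y, subset):
    # Influence score of a feature subset: partition the samples by the joint
    # values of the selected discrete features and accumulate
    # n_j^2 * (ybar_j - ybar)^2 over partition cells (dividing by n^2 is one
    # convention; constant factors do not affect greedy comparisons).
    n = len(y)
    ybar = y.mean()
    cells = {}
    for row, yi in zip(X[:, list(subset)], y):
        cells.setdefault(tuple(row), []).append(yi)
    return sum(len(v) ** 2 * (np.mean(v) - ybar) ** 2 for v in cells.values()) / n**2

def backward_dropping(X, y):
    # Greedy Backward Dropping Algorithm: starting from all features, drop the
    # feature whose removal maximizes the I-score of the remaining subset, and
    # return the best-scoring subset encountered along the way.
    current = list(range(X.shape[1]))
    best_subset, best_score = current[:], i_score(X, y, current)
    while len(current) > 1:
        score, drop = max((i_score(X, y, [f for f in current if f != g]), g)
                          for g in current)
        current = [f for f in current if f != drop]
        if score > best_score:
            best_subset, best_score = current[:], score
    return best_subset

# Toy data: y is the XOR of the first two binary features; the rest are noise.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(200, 4))
y = (X[:, 0] ^ X[:, 1]).astype(float)
selected = backward_dropping(X, y)
```

On this toy example the XOR pair has zero marginal correlation with y, yet the joint-partition score exposes it, which is the kind of interaction the I-score is designed to detect.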
|
Short Term Load Forecast (STLF) is necessary for effective scheduling,
operation, optimization, trading, and decision-making for electricity
consumers. Modern and efficient machine learning methods are needed nowadays to manage
complicated structural big datasets, which are characterized by having a
nonlinear temporal dependence structure. We propose different statistical
nonlinear models to manage these challenges of hard type datasets and forecast
15-min frequency electricity load up to 2-days ahead. We show that the
Long-short Term Memory (LSTM) and the Gated Recurrent Unit (GRU) models applied
to the production line of a chemical production facility outperform several
other predictive models in terms of out-of-sample forecasting accuracy, as
assessed by the Diebold-Mariano (DM) test with several loss metrics. The predictive information is
fundamental for the risk and production management of electricity consumers.
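For reference, the Diebold-Mariano statistic used for such comparisons can be computed as below; this is the simplest one-step-ahead version without the autocorrelation (long-run variance) correction that the full test would include.

```python
import numpy as np

def diebold_mariano(e1, e2, loss=lambda e: e**2):
    # Diebold-Mariano statistic for equal predictive accuracy: the mean loss
    # differential divided by its standard error. Under the null it is
    # asymptotically standard normal; positive values favor forecast 2.
    d = loss(np.asarray(e1, float)) - loss(np.asarray(e2, float))
    return d.mean() / np.sqrt(d.var(ddof=1) / d.size)

# Synthetic check: forecast 2 has genuinely smaller errors, so the statistic
# should come out large and positive.
rng = np.random.default_rng(1)
e1 = rng.normal(0.0, 1.5, 500)   # errors of the weaker forecast
e2 = rng.normal(0.0, 1.0, 500)   # errors of the stronger forecast
dm = diebold_mariano(e1, e2)
```

Swapping the two error series flips the sign of the statistic, which is a quick sanity check on any implementation.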
|
We describe a timing technique that makes it possible to obtain precise
orbital parameters of an accreting millisecond pulsar in those cases in which
intrinsic variations of the phase delays (caused e.g. by proper variation of
the spin frequency) with characteristic timescale longer than the orbital
period do not allow fitting the orbital parameters over a long observation
(tens of days). We
show under which conditions this method can be applied and show the results
obtained applying this method to the 2003 outburst observed by RXTE of the
accreting millisecond pulsar XTE J1807-294 which shows in its phase delays a
non-negligible erratic behavior. We refined the orbital parameters of XTE
J1807-294 using all 90 days in which the pulsation is strongly detected and the
method is applicable. In this way we obtain the orbital parameters of the
source with a precision more than one order of magnitude better than the
previously available orbital solution, a precision attained to date, for
accreting millisecond pulsars, only by analyzing several outbursts spanning
seven years and with much better statistics.
|
Here we introduce the interstellar dust modelling framework THEMIS (The
Heterogeneous dust Evolution Model for Interstellar Solids), which takes a
global view of dust and its evolution in response to the local conditions in
interstellar media. This approach is built upon a core model that was developed
to explain the dust extinction and emission in the diffuse interstellar medium.
The model was then further developed to self-consistently include the effects
of dust evolution in the transition to denser regions. The THEMIS approach is
under continuous development and currently we are extending the framework to
explore the implications of dust evolution in HII regions and the
photon-dominated regions associated with star formation. We provide links to
the THEMIS, DustEM and DustPedia websites where more information about the
model, its input data and applications can be found.
|
In this paper, we propose a 2D-based partition method for solving the problem
of Ranking under Team Context (RTC) on datasets without a priori knowledge. We
first map the data into 2D space using its minimum and maximum values among all
dimensions. Then we construct window queries with consideration of the current
team context. Besides, during the query mapping procedure, we can pre-prune
some tuples which are not top-ranked ones. This pre-classification step defers
processing of those tuples and saves cost while providing solutions for the
problem. Experiments show that our algorithm is correct and performs well,
especially on large datasets.
|
We study the limiting distribution of particles at the frontier of a
branching random walk. The positions of these particles can be viewed as the
lowest energies of a directed polymer in a random medium in the mean-field
case. We show that the average distances between these leading particles can be
computed as the delay of a traveling wave evolving according to the Fisher-KPP
front equation. These average distances exhibit universal behaviors, different
from those of the probability cascades studied recently in the context of mean
field spin-glasses.
|
Online platforms collect rich information about participants and then share
some of this information back with them to improve market outcomes. In this
paper we study the following information disclosure problem in two-sided
markets: If a platform wants to maximize revenue, which sellers should the
platform allow to participate, and how much of its available information about
participating sellers' quality should the platform share with buyers? We study
this information disclosure problem in the context of two distinct two-sided
market models: one in which the platform chooses prices and the sellers choose
quantities (similar to ride-sharing), and one in which the sellers choose
prices (similar to e-commerce). Our main results provide conditions under which
simple information structures commonly observed in practice, such as banning
certain sellers from the platform while not distinguishing between
participating sellers, maximize the platform's revenue. The platform's
information disclosure problem naturally transforms into a constrained price
discrimination problem where the constraints are determined by the equilibrium
outcomes of the specific two-sided market model being studied. We analyze this
constrained price discrimination problem to obtain our structural results.
|
Machine learning models for radiology benefit from large-scale data sets with
high quality labels for abnormalities. We curated and analyzed a chest computed
tomography (CT) data set of 36,316 volumes from 19,993 unique patients. This is
the largest multiply-annotated volumetric medical imaging data set reported. To
annotate this data set, we developed a rule-based method for automatically
extracting abnormality labels from free-text radiology reports with an average
F-score of 0.976 (min 0.941, max 1.0). We also developed a model for
multi-organ, multi-disease classification of chest CT volumes that uses a deep
convolutional neural network (CNN). This model reached a classification
performance of AUROC greater than 0.90 for 18 abnormalities, with an average
AUROC of 0.773 for all 83 abnormalities, demonstrating the feasibility of
learning from unfiltered whole volume CT data. We show that training on more
labels improves performance significantly: for a subset of 9 labels - nodule,
opacity, atelectasis, pleural effusion, consolidation, mass, pericardial
effusion, cardiomegaly, and pneumothorax - the model's average AUROC increased
by 10% when the number of training labels was increased from 9 to all 83. All
code for volume preprocessing, automated label extraction, and the volume
abnormality prediction model will be made publicly available. The 36,316 CT
volumes and labels will also be made publicly available pending institutional
approval.
|
We demonstrate the application of the circular cumulant approach for
thermodynamically large populations of phase elements, where the Ott-Antonsen
properties are violated by a multiplicative intrinsic noise. The infinite
cumulant equation chain is derived for the case of a sinusoidal sensitivity of
the phase to noise. For inhomogeneous populations, a Lorentzian distribution of
natural frequencies is adopted. Two-cumulant model reductions, which serve as a
generalization of the Ott-Antonsen ansatz, are reported. The accuracy of these
model reductions and the macroscopic collective dynamics of the system are
explored for the case of a Kuramoto-type global coupling. The Ott-Antonsen
ansatz and the Gaussian approximation are found to be not uniformly accurate
for non-high frequencies.
|
For every compact K\"ahler manifold $X$ of algebraic dimension $a(X) = \dim X
- 1$, we prove that $X$ has arbitrarily small deformations to some projective
manifolds.
|
We show that for complex nonlinear systems, model reduction and compressive
sensing strategies can be combined to great advantage for classifying,
projecting, and reconstructing the relevant low-dimensional dynamics.
$\ell_2$-based dimensionality reduction methods such as the proper orthogonal
decomposition are used to construct separate modal libraries and Galerkin
models based on data from a number of bifurcation regimes. These libraries are
then concatenated into an over-complete library, and $\ell_1$ sparse
representation in this library from a few noisy measurements results in correct
identification of the bifurcation regime. This technique provides an objective
and general framework for classifying the bifurcation parameters, and
therefore, the underlying dynamics and stability. After classifying the
bifurcation regime, it is possible to employ a low-dimensional Galerkin model,
only on modes relevant to that bifurcation value. These methods are
demonstrated on the complex Ginzburg-Landau equation using sparse, noisy
measurements. In particular, three noisy measurements are used to accurately
classify and reconstruct the dynamics associated with six distinct bifurcation
regimes; in contrast, classification based on least-squares fitting ($\ell_2$)
fails consistently.
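The classification step can be illustrated with a toy example: two modal libraries (here simple Fourier modes standing in for POD modes of two regimes) are fitted to a handful of noisy point measurements, and the regime is identified by the smaller residual. For self-containment this sketch replaces the $\ell_1$ block-sparse solve with a per-library least-squares fit; frequencies, noise level, and the number of measurements are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 200, endpoint=False)

# One modal library per regime: low- vs high-frequency Fourier modes play the
# role of POD modes computed from data in the two bifurcation regimes.
lib_A = np.stack([np.cos(2 * np.pi * k * x) for k in (1, 2, 3)], axis=1)
lib_B = np.stack([np.cos(2 * np.pi * k * x) for k in (7, 8, 9)], axis=1)

def classify(signal, measure_idx, libraries):
    # Fit each library to the sparse noisy measurements by least squares and
    # return the index of the library with the smallest residual (a
    # simplification of l1 sparse representation in the concatenated library).
    y = signal[measure_idx]
    residuals = [np.linalg.norm(y - L[measure_idx] @
                                np.linalg.lstsq(L[measure_idx], y, rcond=None)[0])
                 for L in libraries]
    return int(np.argmin(residuals))

# A noisy regime-B signal observed at 20 random spatial locations.
signal = 1.5 * np.cos(2 * np.pi * 8 * x) - 0.5 * np.cos(2 * np.pi * 7 * x)
signal = signal + 0.05 * rng.standard_normal(x.size)
idx = rng.choice(x.size, size=20, replace=False)
label = classify(signal, idx, [lib_A, lib_B])
```

Because the two libraries span very different subspaces, only the correct block can explain the measurements with a small residual, which is the mechanism the over-complete library exploits.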
|
Based on the Watson expansion of the multiple scattering series, we employ a
nonlocal translationally invariant nuclear density derived within the
symmetry-adapted no-core shell model (SA-NCSM) framework from a chiral
next-to-next-to-leading order (NNLO) nucleon-nucleon interaction and the very
same interaction for a consistent full-folding calculation of the effective
(optical) potential for nucleon-nucleus scattering for medium-heavy nuclei. The
leading order effective (optical) folding potential is computed by integrating
over a translationally invariant SA-NCSM one-body scalar density,
spin-projected momentum distribution, and the Wolfenstein amplitudes $A$, $C$,
and $M$. The resulting nonlocal potentials serve as input for a momentum space
Lippmann-Schwinger equation, whose solutions are summed up to obtain
nucleon-nucleus scattering observables. In the SA-NCSM, the model space is
systematically up-selected using $\SpR{3}$ symmetry considerations. For the
light nucleus of $^6$He, we establish a systematic selection scheme in the
SA-NCSM for scattering observables. Then, we apply this scheme to calculations
of scattering observables, such as differential cross sections, analyzing
powers, and spin rotation functions for elastic proton scattering from
$^{20}$Ne and $^{40}$Ca in the energy regime between 65 and 200 MeV, and
compare to available data. Our calculations show that the leading order
effective nucleon-nucleus potential in the Watson expansion of multiple
scattering theory obtained from an up-selected SA-NCSM model space describes
$^{40}$Ca elastic scattering observables reasonably well to about 60 degrees in
the center-of-mass frame, which coincides roughly with the validity of the NNLO
chiral interaction used to calculate both the nucleon-nucleon amplitudes and
the one-body scalar and spin nuclear densities.
|
We classify all complex quadratic number fields with 2-class group of type
(2,2^m) whose Hilbert 2-class fields have class groups of 2-rank equal to 2.
These fields all have a 2-class field tower of length 2. We still do not know
examples of fields with 2-class field tower of length 3, but the smallest
candidate is the field with discriminant -1015.
|
This paper presents 452 new 21-cm neutral hydrogen line measurements carried
out with the FORT receiver of the meridian transit Nançay radiotelescope
(NRT) in the period April 2003 -- March 2005. This observational programme is
part of a larger project aiming at collecting an exhaustive and
magnitude-complete HI extragalactic catalogue for Tully-Fisher applications
(the so-called KLUN project, for Kinematics of the Local Universe studies, to
end in 2008). The whole on-line HI archive of the NRT today contains reduced
HI-profiles for ~4500 spiral galaxies of declination delta > -40°
(http://klun.obs-nancay.fr). As an example of application, we use the direct
Tully-Fisher relation in three (JHK) bands in deriving distances to a large
catalog of 3126 spiral galaxies distributed through the whole sky and sampling
well the radial velocity range between 0 and 8000 km/s. Thanks to an iterative
method accounting for selection bias and smoothing effects, we show as a
preliminary output a detailed and original map of the velocity field in the
Local Universe.
|
We compute the homology of the spaces in the Omega spectrum for $BoP$. There
is no torsion in $H_*(\underline{BoP}_{\; i})$ for $i \ge 2$, and things are
only slightly more complicated for $i < 2$. We find the complete homotopy type
of $\underline{BoP}_{\; i}$ for $i \le 6$ and conjecture the homotopy type for
$i > 6$. This completes the computation of all $H_*(\underline{MSU}_{\;*})$.
|
We point out that a light scalar field fluctuating around a symmetry-enhanced
point can generate large non-Gaussianity in density fluctuations. We name such
a particle an "ungaussiton": a scalar field dominantly produced by quantum
fluctuations, generating sizable non-Gaussianity in the density
fluctuations. We derive a consistency relation between the bispectrum and the
trispectrum, tau_NL = 10^3 f_NL^(4/3), which can be extended to arbitrary high
order correlation functions. If such a relation is confirmed by future
observations, it will strongly support this mechanism.
|
We investigate differences between upper and lower porosity. In finite
dimensional Banach spaces every upper porous set is directionally upper porous.
We show the situation is very different for lower porous sets; there exists a
lower porous set in the plane which is not even a countable union of
directionally lower porous sets.
|
We present three measurements of the top quark mass in the lepton plus jets
channel with 1.9 fb-1 of data using quantities with minimal dependence on the
jet energy scale in the lepton plus jets channel at CDF. One measurement uses
the mean transverse decay length of b-tagged jets (L2d) to determine the top
mass, another uses the transverse momentum of the lepton (LepPt) to determine
the top mass, and a third measurement uses both variables simultaneously.
Using the L2d variable we measure a top mass of 176.7 (+10.0) (-8.9) (stat)
+/- 3.4 (syst) GeV/c^2, using the LepPt variable we measure a top mass of 173.5
(+8.9) (-9.1) (stat) +/- 4.2 (syst) GeV/c^2, and doing the combined measurement
using both variables we arrive at a top mass result of 175.3 +/- 6.2 (stat) +/-
3.0 (syst) GeV/c^2. Since some of the systematic uncertainties are
statistically limited, these results are expected to improve significantly if
more data is added at the Tevatron in the future, or if the measurement is done
at the LHC.
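As a back-of-the-envelope check, an inverse-variance weighted average of the two individual statistical results (with symmetrized errors) lands close to the quoted combination. The actual combined measurement comes from a simultaneous fit of both variables, so this naive combination, which ignores correlations and asymmetric errors, is only illustrative.

```python
import math

def weighted_average(values, sigmas):
    # Inverse-variance weighted combination of independent measurements:
    # mean = sum(v_i/s_i^2)/sum(1/s_i^2), uncertainty = 1/sqrt(sum(1/s_i^2)).
    weights = [1.0 / s**2 for s in sigmas]
    mean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    return mean, 1.0 / math.sqrt(sum(weights))

# Symmetrized statistical errors: (10.0 + 8.9)/2 = 9.45 and (8.9 + 9.1)/2 = 9.0.
m, s = weighted_average([176.7, 173.5], [9.45, 9.0])
```

This simple average gives roughly 175.0 +/- 6.5 GeV/c^2 (statistical only), in the same ballpark as the quoted simultaneous-fit result of 175.3 +/- 6.2.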
|
The harmonic structure of speech is resistant to noise, but the harmonics may
still be partially masked by noise. Therefore, we previously proposed a
harmonic gated compensation network (HGCN) to predict the full harmonic
locations based on the unmasked harmonics and process the result of a coarse
enhancement module to recover the masked harmonics. In addition, the auditory
loudness loss function is used to train the network. For the DNS Challenge, we
update HGCN with the following aspects, resulting in HGCN+. First, a high-band
module is employed to help the model handle full-band signals. Second, cosine
is used to model the harmonic structure more accurately. Then, the dual-path
encoder and dual-path RNN (DPRNN) are introduced to take full advantage of the
features. Finally, a gated residual linear structure replaces the gated
convolution in the compensation module to increase the receptive field of
frequency. The experimental results show that each updated module brings
performance improvement to the model. HGCN+ also outperforms the referenced
models on both wide-band and full-band test sets.
|
Aims. We seek to identify old and massive galaxies at 0.5<z<2.1 on the
basis of the magnesium index MgUV and then study their physical properties. We
computed the MgUV index based on the best spectral fitting template of
$\sim$3700 galaxies using data from the VLT VIMOS Deep Survey (VVDS) and VIMOS
Ultra Deep Survey (VUDS) galaxy redshift surveys. Based on galaxies with the
largest signal to noise and the best fit spectra we selected 103 objects with
the highest spectral MgUV signature. We performed an independent fit of the
photometric data of these galaxies and computed their stellar masses, star
formation rates, extinction by dust and age, and we related these quantities to
the MgUV index. We find that the MgUV index is a suitable tracer of early-type
galaxies at an advanced stage of evolution. Selecting galaxies with the highest
MgUV index allows us to choose the most massive, passive, and oldest galaxies
at any epoch. The formation epoch t_f computed from the fitted age as a
function of the total mass in stars supports the downsizing formation paradigm
in which galaxies with the highest mass formed most of their stars at an
earlier epoch.
|
Compared to RGB semantic segmentation, RGBD semantic segmentation can achieve
better performance by taking depth information into consideration. However, it
is still problematic for contemporary segmenters to effectively exploit RGBD
information since the feature distributions of RGB and depth (D) images vary
significantly in different scenes. In this paper, we propose an Attention
Complementary Network (ACNet) that selectively gathers features from RGB and
depth branches. The main contributions lie in the Attention Complementary
Module (ACM) and the architecture with three parallel branches. More precisely,
ACM is a channel attention-based module that extracts weighted features from
RGB and depth branches. The architecture preserves the inference of the
original RGB and depth branches, and enables the fusion branch at the same
time. Based on the above structures, ACNet is capable of exploiting more
high-quality features from different channels. We evaluate our model on
SUN-RGBD and NYUDv2 datasets, and prove that our model outperforms
state-of-the-art methods. In particular, a mIoU score of 48.3\% on NYUDv2 test
set is achieved with ResNet50. We will release our source code based on PyTorch
and the trained segmentation model at https://github.com/anheidelonghu/ACNet.
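The channel-attention idea behind ACM can be sketched in a few lines of NumPy: a channel descriptor from global average pooling gates each channel before the two branches are fused. The learnable 1x1 convolution of the real module is omitted, and the channel-first (C, H, W) tensor layout is an assumption of this sketch.

```python
import numpy as np

def channel_attention(feat):
    # Gate each channel of a (C, H, W) feature map by a sigmoid of its global
    # average pooled descriptor, so informative channels are emphasized.
    pooled = feat.mean(axis=(1, 2))          # (C,) channel descriptor
    gate = 1.0 / (1.0 + np.exp(-pooled))     # sigmoid weights in (0, 1)
    return feat * gate[:, None, None]

def fuse(rgb_feat, depth_feat):
    # Complementary fusion: each branch contributes features reweighted by its
    # own attention gate before summation into the fusion branch.
    return channel_attention(rgb_feat) + channel_attention(depth_feat)

# A strongly positive channel keeps its activations; a strongly negative
# channel is suppressed toward zero.
feat = np.zeros((2, 4, 4))
feat[0], feat[1] = 10.0, -10.0
gated = channel_attention(feat)
```

In the full network the gate is produced by trained weights rather than the raw pooled activations, but the reweight-then-fuse mechanism is the same.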
|
A mean-field model to describe electron transfer processes in ion-molecule
collisions at the $\hbar =0$ level is presented and applied to collisions
involving water and ammonia molecules. Multicenter model potentials account for
the molecular structure and geometry. They include charge screening parameters
which in the most advanced version of the model depend on the instantaneous
degree of ionization so that dynamical screening effects are taken into
account. The work is implemented using the classical-trajectory Monte Carlo
method, i.e., Hamilton's equations are solved for classical statistical
ensembles that represent the initially populated orbitals. The time-evolved
trajectories are sorted into ionizing and electron capture events, and a
multinomial analysis of the ensuing single-particle probabilities is employed
to calculate differential and total cross sections for processes that involve
single- and multiple-electron transitions. Comparison is made with experimental
data and some previously reported calculations to shed light on the
capabilities and limitations of the approach.
|
Most currently available methods for modeling multiphysics, including
thermoelasticity, using machine learning approaches, are focused on solving
complete multiphysics problems using data-driven or physics-informed
multi-layer perceptron (MLP) networks. Such models rely on incremental
step-wise training of the MLPs, and lead to elevated computational expense;
they also lack the rigor of existing numerical methods like the finite element
method. We propose an integrated finite element neural network (I-FENN)
framework to expedite the solution of coupled transient thermoelasticity. A
novel physics-informed temporal convolutional network (PI-TCN) is developed and
embedded within the finite element framework to leverage the fast inference of
neural networks (NNs). The PI-TCN model captures some of the fields in the
multiphysics problem; then, the network output is used to compute the other
fields of interest using the finite element method. We establish a framework
that computationally decouples the energy equation from the linear momentum
equation. We first develop a PI-TCN model to predict the spatiotemporal
evolution of the temperature field across the simulation time based on the
energy equation and strain data. The PI-TCN model is integrated into the finite
element framework, where the PI-TCN output (temperature) is used to introduce
the temperature effect to the linear momentum equation. The finite element
problem is solved using the implicit Euler time discretization scheme,
resulting in a computational cost comparable to that of a weakly-coupled
thermoelasticity problem but with the ability to solve fully-coupled problems.
Finally, we demonstrate I-FENN's computational efficiency and generalization
capability in thermoelasticity through several numerical examples.
|
Neutral atoms may be trapped via the interaction of their magnetic dipole
moment with magnetic field gradients. One of the possible schemes is the
cloverleaf trap. It is often desirable to have at hand a fast and precise
technique for measuring the magnetic field distribution. We introduce a novel
diagnostic tool for instantaneously imaging the equipotential lines of a magnetic
field within a region of space (the vacuum recipient) that is not accessible to
massive probes. Our technique is based on spatially resolved observation of the
fluorescence emitted by a hot beam of sodium atoms crossing a thin slice of
resonant laser light within the magnetic field region to be investigated. The
inhomogeneous magnetic field spatially modulates the resonance condition
between the Zeeman-shifted hyperfine sublevels and the laser light and
therefore the amount of scattered photons. We demonstrate this technique by
mapping the field of our cloverleaf trap in three dimensions under various
conditions.
|
To deal with permanent deformations and residual stresses, we consider a
morphoelastic model for the scar formation as the result of wound healing after
a skin trauma. Next to the mechanical components such as strain and
displacements, the model accounts for biological constituents such as the
concentration of signaling molecules, the cellular densities of fibroblasts and
myofibroblasts, and the density of collagen. Here we present stability
constraints for the one-dimensional counterpart of this morphoelastic model,
for both the continuous and (semi-) discrete problem. We show that the
truncation error between these eigenvalues associated with the continuous and
semi-discrete problem is of order $\mathcal{O}(h^2)$. Next, we perform
numerical validation to these constraints and provide a biological
interpretation of the (in)stability. For the mechanical part of the model, the
results show that the components reach equilibria in a (non-)monotonic way,
depending on the value of the viscosity. The results show that the parameters
of the chemical part of the model need to meet the stability constraint,
depending on the decay rate of the signaling molecules, to avoid unrealistic
results.
|
Green functions play a major role in the calculation of the local density
of states of carbon nanostructures. We investigate their nature for variously
oriented and disclinated graphene-like surfaces. Next, we investigate
the case of a small perturbation generated by two heptagonal defects and from
the character of the local density of states in the border sites of these
defects we derive their minimal and maximal distance on the perturbed
cylindrical surface. For this purpose, we transform the given surface into a
chain using the Haydock recursion method. We suppose only nearest-neighbor
interactions between the atomic orbitals; in other words, the calculations
assume a short-range potential.
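The chain transformation can be made concrete on the simplest system the nearest-neighbor assumption describes, a tight-binding chain: the Haydock (Lanczos) recurrence produces the chain coefficients, and a truncated continued fraction yields the local density of states. The perturbed cylindrical surfaces studied above require the same machinery on a different Hamiltonian; sizes, recursion depth, and broadening below are illustrative.

```python
import numpy as np

def haydock_coefficients(H, site, depth):
    # Haydock recursion: a Lanczos three-term recurrence started from the
    # orbital localized on `site` transforms H into a semi-infinite chain
    # with on-site energies a_n and hoppings b_n.
    n = H.shape[0]
    v_prev = np.zeros(n)
    v = np.zeros(n)
    v[site] = 1.0
    a, b, beta = [], [], 0.0
    for _ in range(depth):
        w = H @ v - beta * v_prev
        alpha = v @ w
        w -= alpha * v
        beta = np.linalg.norm(w)
        a.append(alpha)
        b.append(beta)
        if beta < 1e-12:
            break
        v_prev, v = v, w / beta
    return a, b

def ldos(a, b, E, eta=0.1):
    # Local density of states from the truncated continued fraction
    # G(E) = 1/(E + i*eta - a_0 - b_1^2/(E + i*eta - a_1 - ...)).
    z = E + 1j * eta
    g = 0.0
    for alpha, beta in zip(reversed(a), reversed(b)):
        g = 1.0 / (z - alpha - beta**2 * g)
    return -g.imag / np.pi

# Nearest-neighbor chain (hopping t = 1), probed far from the ends.
N = 1001
H = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
a, b = haydock_coefficients(H, site=N // 2, depth=400)
rho0 = ldos(a, b, E=0.0)
```

For this chain the coefficients are known in closed form (all a_n = 0, b_1 = sqrt(2), b_n = 1 afterwards), and the mid-band LDOS approaches the exact value 1/(2*pi*t) of the infinite chain, which makes it a convenient correctness check.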
|
This paper presents a general framework for how controlled natural languages can
be evaluated and compared on the basis of user experiments. The subjects are
asked to classify given statements (in the language to be tested) as either
true or false with respect to a certain situation that is shown in a graphical
notation called "ontographs". A first experiment has been conducted that
applies this framework to the language Attempto Controlled English (ACE).
|
Open quantum many-body systems with controllable dissipation can exhibit
novel features in their dynamics and steady states. A paradigmatic example is
the dissipative transverse field Ising model. It has been shown recently that
the steady state of this model with all-to-all interactions is genuinely
non-equilibrium near criticality, exhibiting a modified time-reversal symmetry
and violating the fluctuation-dissipation theorem. Experimental study of such
non-equilibrium steady-state phase transitions is however lacking. Here we
propose realistic experimental setups and measurement schemes for current
trapped-ion quantum simulators to demonstrate this phase transition, where
controllable dissipation is engineered via a continuous weak optical pumping
laser. With extensive numerical calculations, we show that strong signatures of
this dissipative phase transition and its non-equilibrium properties can be
observed with a small system size across a wide range of system parameters. In
addition, we show that the same signatures can also be seen if the dissipation
is instead achieved via Floquet dynamics with periodic and probabilistic
resetting of the spins. Dissipation engineered in this way may allow the
simulation of more general types of driven-dissipative systems or facilitate
the dissipative preparation of useful many-body entangled states.
|