query_id (stringlengths 1–6) | query (stringlengths 2–185) | positive_passages (listlengths 1–121) | negative_passages (listlengths 15–100) |
---|---|---|---|
1840513 | Patient outcome prediction via convolutional neural networks based on multi-granularity medical concept embedding | [
{
"docid": "pos:1840513_0",
"text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.",
"title": ""
},
{
"docid": "pos:1840513_1",
"text": "We apply deep learning to the problem of discovery and detection of characteristic patterns of physiology in clinical time series data. We propose two novel modifications to standard neural net training that address challenges and exploit properties that are peculiar, if not exclusive, to medical data. First, we examine a general framework for using prior knowledge to regularize parameters in the topmost layers. This framework can leverage priors of any form, ranging from formal ontologies (e.g., ICD9 codes) to data-derived similarity. Second, we describe a scalable procedure for training a collection of neural networks of different sizes but with partially shared architectures. Both of these innovations are well-suited to medical applications, where available data are not yet Internet scale and have many sparse outputs (e.g., rare diagnoses) but which have exploitable structure (e.g., temporal order and relationships between labels). However, both techniques are sufficiently general to be applied to other problems and domains. We demonstrate the empirical efficacy of both techniques on two real-world hospital data sets and show that the resulting neural nets learn interpretable and clinically relevant features.",
"title": ""
},
{
"docid": "pos:1840513_2",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] | [
{
"docid": "neg:1840513_0",
"text": "INTRODUCTION\nIn this study we report a large series of patients with unilateral winged scapula (WS), with special attention to long thoracic nerve (LTN) palsy.\n\n\nMETHODS\nClinical and electrodiagnostic data were collected from 128 patients over a 25-year period.\n\n\nRESULTS\nCauses of unilateral WS were LTN palsy (n = 70), spinal accessory nerve (SAN) palsy (n = 39), both LTN and SAN palsy (n = 5), facioscapulohumeral dystrophy (FSH) (n = 5), orthopedic causes (n = 11), voluntary WS (n = 6), and no definite cause (n = 2). LTN palsy was related to neuralgic amyotrophy (NA) in 61 patients and involved the right side in 62 patients.\n\n\nDISCUSSION\nClinical data allow for identifying 2 main clinical patterns for LTN and SAN palsy. Electrodiagnostic examination should consider bilateral nerve conduction studies of the LTN and SAN, and needle electromyography of their target muscles. LTN palsy is the most frequent cause of unilateral WS and is usually related to NA. Voluntary WS and FSH must be considered in young patients. Muscle Nerve 57: 913-920, 2018.",
"title": ""
},
{
"docid": "neg:1840513_1",
"text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.",
"title": ""
},
{
"docid": "neg:1840513_2",
"text": "Alarm correlation plays an important role in improving the service and reliability in modern telecommunication networks. Most previous research of alarm correlation didnt consider the effects of noise data in the database. This paper focuses on the method of discovering alarm correlation rules from the database containing noise data. We firstly define two parameters Win_freq and Win_add as the measures of noise data and then present the Robust_search algorithm to solve the problem. At different size of Win_freq and Win_add, the experiments on alarm database containing noise data show that the Robust_search Algorithm can discover more rules with the bigger size of Win_add. We also compare two different interestingness measures of confidence and correlation by experiments.",
"title": ""
},
{
"docid": "neg:1840513_3",
"text": "In this meta-analysis, we synthesized data from published journal articles that investigated viewers’ enjoyment of fright and violence. Given the limited research on this topic, this analysis was primarily a way of summarizing the current state of knowledge and developing directions for future research. The studies selected (a) examined frightening or violent media content; (b) used self-report measures of enjoyment or preference for such content (the dependent variable); and (c) included independent variables that were given theoretical consideration in the literature. The independent variables examined were negative affect and arousal during viewing, empathy, sensation seeking, aggressiveness, and the respondents’ gender and age. The analysis confirmed that male viewers, individuals lower in empathy, and those higher in sensation seeking and aggressiveness reported more enjoyment of fright and violence. Some support emerged for Zillmann’s (1980, 1996) model of suspense enjoyment. Overall, the results demonstrate the importance of considering how viewers interpret or appraise their reactions to fright and violence. However, the studies were so diverse in design and measurement methods that it was difficult to identify the underlying processes. Suggestions are proposed for future research that will move toward the integration of separate lines of inquiry in a unified approach to understanding entertainment. MEDIA PSYCHOLOGY, 7, 207–237 Copyright © 2005, Lawrence Erlbaum Associates, Inc.",
"title": ""
},
{
"docid": "neg:1840513_4",
"text": "The fifth generation (5G) wireless network technology is to be standardized by 2020, where main goals are to improve capacity, reliability, and energy efficiency, while reducing latency and massively increasing connection density. An integral part of 5G is the capability to transmit touch perception type real-time communication empowered by applicable robotics and haptics equipment at the network edge. In this regard, we need drastic changes in network architecture including core and radio access network (RAN) for achieving end-to-end latency on the order of 1 ms. In this paper, we present a detailed survey on the emerging technologies to achieve low latency communications considering three different solution domains: 1) RAN; 2) core network; and 3) caching. We also present a general overview of major 5G cellular network elements such as software defined network, network function virtualization, caching, and mobile edge computing capable of meeting latency and other 5G requirements.",
"title": ""
},
{
"docid": "neg:1840513_5",
"text": "The first edition of Artificial Intelligence: A Modern Approach has become a classic in the AI literature. It has been adopted by over 600 universities in 60 countries, and has been praised as the definitive synthesis of the field. In the second edition, every chapter has been extensively rewritten. Significant new material has been introduced to cover areas such as constraint satisfaction, fast propositional inference, planning graphs, internet agents, exact probabilistic inference, Markov Chain Monte Carlo techniques, Kalman filters, ensemble learning methods, statistical learning, probabilistic natural language models, probabilistic robotics, and ethical aspects of AI. The book is supported by a suite of online resources including source code, figures, lecture slides, a directory of over 800 links to \"AI on the Web,\" and an online discussion group. All of this is available at: aima.cs.berkeley.edu.",
"title": ""
},
{
"docid": "neg:1840513_6",
"text": "We present a superpixel method for full spatial phase and amplitude control of a light beam using a digital micromirror device (DMD) combined with a spatial filter. We combine square regions of nearby micromirrors into superpixels by low pass filtering in a Fourier plane of the DMD. At each superpixel we are able to independently modulate the phase and the amplitude of light, while retaining a high resolution and the very high speed of a DMD. The method achieves a measured fidelity F = 0.98 for a target field with fully independent phase and amplitude at a resolution of 8 × 8 pixels per diffraction limited spot. For the LG10 orbital angular momentum mode the calculated fidelity is F = 0.99993, using 768 × 768 DMD pixels. The superpixel method reduces the errors when compared to the state of the art Lee holography method for these test fields by 50% and 18%, with a comparable light efficiency of around 5%. Our control software is publicly available.",
"title": ""
},
{
"docid": "neg:1840513_7",
"text": "Humans demonstrate remarkable abilities to predict physical events in complex scenes. Two classes of models for physical scene understanding have recently been proposed: “Intuitive Physics Engines”, or IPEs, which posit that people make predictions by running approximate probabilistic simulations in causal mental models similar in nature to video-game physics engines, and memory-based models, which make judgments based on analogies to stored experiences of previously encountered scenes and physical outcomes. Versions of the latter have recently been instantiated in convolutional neural network (CNN) architectures. Here we report four experiments that, to our knowledge, are the first rigorous comparisons of simulation-based and CNN-based models, where both approaches are concretely instantiated in algorithms that can run on raw image inputs and produce as outputs physical judgments such as whether a stack of blocks will fall. Both approaches can achieve super-human accuracy levels and can quantitatively predict human judgments to a similar degree, but only the simulation-based models generalize to novel situations in ways that people do, and are qualitatively consistent with systematic perceptual illusions and judgment asymmetries that people show.",
"title": ""
},
{
"docid": "neg:1840513_8",
"text": "This article analyzes late-life depression, looking carefully at what defines a person as elderly, the incidence of late-life depression, complications and differences in symptoms between young and old patients with depression, subsyndromal depression, bipolar depression in the elderly, the relationship between grief and depression, along with sleep disturbances and suicidal ideation.",
"title": ""
},
{
"docid": "neg:1840513_9",
"text": "the od. cted ly genof 997 Abstract. Algorithms of filtering, edge detection, and extraction of details and their implementation using cellular neural networks (CNN) are developed in this paper. The theory of CNN based on universal binary neurons (UBN) is also developed. A new learning algorithm for this type of neurons is carried out. Implementation of low-pass filtering algorithms using CNN is considered. Separate processing of the binary planes of gray-scale images is proposed. Algorithms of edge detection and impulsive noise filtering based on this approach and their implementation using CNN-UBN are presented. Algorithms of frequency correction reduced to filtering in the spatial domain are considered. These algorithms make it possible to extract details of given sizes. Implementation of such algorithms using CNN is presented. Finally, a general strategy of gray-scale image processing using CNN is considered. © 1997 SPIE and IS&T. [S1017-9909(97)00703-4]",
"title": ""
},
{
"docid": "neg:1840513_10",
"text": "■ Abstract Theory and research on small group performance and decision making is reviewed. Recent trends in group performance research have found that process gains as well as losses are possible, and both are frequently explained by situational and procedural contexts that differentially affect motivation and resource coordination. Research has continued on classic topics (e.g., brainstorming, group goal setting, stress, and group performance) and relatively new areas (e.g., collective induction). Group decision making research has focused on preference combination for continuous response distributions and group information processing. New approaches (e.g., group-level signal detection) and traditional topics (e.g., groupthink) are discussed. New directions, such as nonlinear dynamic systems, evolutionary adaptation, and technological advances, should keep small group research vigorous well into the future.",
"title": ""
},
{
"docid": "neg:1840513_11",
"text": "Traditional approaches to the task of ACE event detection primarily regard multiple events in one sentence as independent ones and recognize them separately by using sentence-level information. However, events in one sentence are usually interdependent and sentence-level information is often insufficient to resolve ambiguities for some types of events. This paper proposes a novel framework dubbed as Hierarchical and Bias Tagging Networks with Gated Multi-level Attention Mechanisms (HBTNGMA) to solve the two problems simultaneously. Firstly, we propose a hierarchical and bias tagging networks to detect multiple events in one sentence collectively. Then, we devise a gated multi-level attention to automatically extract and dynamically fuse the sentence-level and document-level information. The experimental results on the widely used ACE 2005 dataset show that our approach significantly outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840513_12",
"text": "Magnetometers and accelerometers are sensors that are now integrated in objects of everyday life like automotive applications, mobile phones and so on. Some applications need information of acceleration and attitude with a high accuracy. For example, MEMS magnetometers and accelerometers can be integrated in embedded like mobile phones and GPS receivers. The parameters of such sensors must be precisely estimated to avoid drift and biased values. Thus, calibration is an important step to correctly use these sensors and get the expected measurements. This paper presents the theoretical and experimental steps of a method to compute gains, bias and non orthogonality factors of magnetometer and accelerometer sensors. This method of calibration can be used for automatic calibration in embedded systems. The calibration procedure involves arbitrary rotations of the sensors platform and a visual 2D projection of measurements.",
"title": ""
},
{
"docid": "neg:1840513_13",
"text": "Over the last decade, the “digitization” of the electron enterprise has grown at exponential rates. Utility, industrial, commercial, and even residential consumers are transforming all aspects of their lives into the digital domain. Moving forward, it is expected that every piece of equipment, every receptacle, every switch, and even every light bulb will possess some type of setting, monitoring and/or control. In order to be able to manage the large number of devices and to enable the various devices to communicate with one another, a new communication model was needed. That model has been developed and standardized as IEC61850 – Communication Networks and Systems in Substations. This paper looks at the needs of next generation communication systems and provides an overview of the IEC61850 protocol and how it meets these needs. I. Communication System Needs Communication has always played a critical role in the real-time operation of the power system. In the beginning, the telephone was used to communicate line loadings back to the control center as well as to dispatch operators to perform switching operations at substations. Telephoneswitching based remote control units were available as early as the 1930’s and were able to provide status and control for a few points. As digital communications became a viable option in the 1960’s, data acquisition systems (DAS) were installed to automatically collect measurement data from the substations. Since bandwidth was limited, DAS communication protocols were optimized to operate over low-bandwidth communication channels. The “cost” of this optimization was the time it took to configure, map, and document the location of the various data bits received by the protocol. As we move into the digital age, literally thousands of analog and digital data points are available in a single Intelligent Electronic Device (IED) and communication bandwidth is no longer a limiting factor. Substation to master communication data paths operating at 64,000 bits per second are becoming commonplace with an obvious migration path to much high rates. With this migration in technology, the “cost” component of a data acquisition system has now become the configuration and documentation component. Consequently, a key component of a communication system is the ability to describe themselves from both a data and services (communication functions that an IED performs) perspective. Other “key” requirements include: • High-speed IED to IED communication",
"title": ""
},
{
"docid": "neg:1840513_14",
"text": "Passive geolocaton of communication emitters provides great benefits to military and civilian surveillance and security operations. Time Difference of Arrival (TDOA) and Frequency Difference of Arrival (FDOA) measurement combination for stationary emitters may be obtained by sensors mounted on mobile platforms, for example on a pair of UAVs. Complex Ambiguity Function (CAF) of received complex signals can be efficiently calculated to provide required TDOA / FDOA measurement combination. TDOA and FDOA measurements are nonlinear in the sense that the emitter uncertainty given measurements in the Cartesian domain is non-Gaussian. Multiple non-linear measurements of emitter location need to be fused to provide the geolocation estimates. Gaussian Mixture Measurement (GMM) filter fuses nonlinear measurements as long as the uncertainty of each measurement in the surveillance (Cartesian) space is modeled by a Gaussian Mixture. Simulation results confirm this approach and compare it with geolocation using Bearings Only (BO) measurements.",
"title": ""
},
{
"docid": "neg:1840513_15",
"text": "This paper pulls together existing theory and evidence to assess whether international financial liberalization, by improving the functioning of domestic financial markets and banks, accelerates economic growth. The analysis suggests that the answer is yes. First, liberalizing restrictions on international portfolio flows tends to enhance stock market liquidity. In turn, enhanced stock market liquidity accelerates economic growth primarily by boosting productivity growth. Second, allowing greater foreign bank presence tends to enhance the efficiency of the domestic banking system. In turn, better-developed banks spur economic growth primarily by accelerating productivity growth. Thus, international financial integration can promote economic development by encouraging improvements in the domestic financial system. *Levine: Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: [email protected]. I thank, without implicating, Maria Carkovic and two anonymous referees for very helpful comments. JEL Classification Numbers: F3, G2, O4 Abbreviations: GDP, TFP Number of Figures: 0 Number of Tables: 2 Date: September 5, 2000 Address of Contact Author: Ross Levine, Finance Department, Carlson School of Management, University of Minnesota, 321 19 Avenue South, Minneapolis, MN 55455. Tel: 612-624-9551, Fax: 612-626-1335, E-mail: [email protected].",
"title": ""
},
{
"docid": "neg:1840513_16",
"text": "Neural Networks are prevalent in todays NLP research. Despite their success for different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up training Neural Networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.",
"title": ""
},
{
"docid": "neg:1840513_17",
"text": "Recently, there has been substantial research on augmenting aggregate forecasts with individual consumer data from internet platforms, such as search traffic or social network shares. Although the majority of studies report increased accuracy, many exhibit design weaknesses including lack of adequate benchmarks or rigorous evaluation. Furthermore, their usefulness over the product life-cycle has not been investigated, which may change, as initially, consumers may search for pre-purchase information, but later for after-sales support. In this study, we first review the relevant literature and then attempt to support the key findings using two forecasting case studies. Our findings are in stark contrast to the literature, and we find that established univariate forecasting benchmarks, such as exponential smoothing, consistently perform better than when online information is included. Our research underlines the need for thorough forecast evaluation and argues that online platform data may be of limited use for supporting operational decisions.",
"title": ""
},
{
"docid": "neg:1840513_18",
"text": "This paper describes quorum leases, a new technique that allows Paxos-based systems to perform reads with high throughput and low latency. Quorum leases do not sacrifice consistency and have only a small impact on system availability and write latency. Quorum leases allow a majority of replicas to perform strongly consistent local reads, which substantially reduces read latency at those replicas (e.g., by two orders of magnitude in wide-area scenarios). Previous techniques for performing local reads in Paxos systems either (a) sacrifice consistency; (b) allow only one replica to read locally; or (c) decrease the availability of the system and increase the latency of all updates by requiring all replicas to be notified synchronously. We describe the design of quorum leases and evaluate their benefits compared to previous approaches through an implementation running in five geo-distributed Amazon EC2 datacenters.",
"title": ""
},
{
"docid": "neg:1840513_19",
"text": "Abstract Phase segregation, the process by which the components of a binary mixture spontaneously separate, is a key process in the evolution and design of many chemical, mechanical, and biological systems. In this work, we present a data-driven approach for the learning, modeling, and prediction of phase segregation. A direct mapping between an initially dispersed, immiscible binary fluid and the equilibrium concentration field is learned by conditional generative convolutional neural networks. Concentration field predictions by the deep learning model conserve phase fraction, correctly predict phase transition, and reproduce area, perimeter, and total free energy distributions up to 98% accuracy.",
"title": ""
}
] |
1840514 | Performance analysis of data security algorithms used in the railway traffic control systems | [
{
"docid": "pos:1840514_0",
"text": "The principal goal guiding the design of any encryption algorithm must be security against unauthorized attacks. However, for all practical applications, performance and the cost of implementation are also important concerns. A data encryption algorithm would not be of much use if it is secure enough but slow in performance because it is a common practice to embed encryption algorithms in other applications such as e-commerce, banking, and online transaction processing applications. Embedding of encryption algorithms in other applications also precludes a hardware implementation, and is thus a major cause of degraded overall performance of the system. In this paper, the four of the popular secret key encryption algorithms, i.e., DES, 3DES, AES (Rijndael), and the Blowfish have been implemented, and their performance is compared by encrypting input files of varying contents and sizes, on different Hardware platforms. The algorithms have been implemented in a uniform language, using their standard specifications, to allow a fair comparison of execution speeds. The performance results have been summarized and a conclusion has been presented. Based on the experiments, it has been concluded that the Blowfish is the best performing algorithm among the algorithms chosen for implementation.",
"title": ""
}
] | [
{
"docid": "neg:1840514_0",
"text": "It is known that in the Tower of Ha noi graphs there are at most two different shortest paths between any fixed pair of vertices. A formula is given that counts, for a given vertex v, thenumber of verticesu such that there are two shortest u, v-paths. The formul a is expressed in terms of Stern’s diatomic sequenceb(n) (n ≥ 0) and implies that only for vertices of degree two this number is zero. Plane embeddings of the Tower of Hanoi graphs are also presented that provide an explicit description ofb(n) as the number of elements of the sets of vertices of the Tower of Hanoi graphs intersected by certain lines in the plane. © 2004 Elsevier Ltd. All rights reserved. MSC (2000):05A15; 05C12; 11B83; 51M15",
"title": ""
},
{
"docid": "neg:1840514_1",
"text": "Content addressable memories (CAMs) are very attractive for high-speed table lookups in modern network systems. This paper presents a low-power dual match line (ML) ternary CAM (TCAM) to address the power consumption issue of CAMs. The highly capacitive ML is divided into two segments to reduce the active capacitance and hence the power. We analyze possible cases of mismatches and demonstrate a significant reduction in power (up to 43%) for a small penalty in search speed (4%).",
"title": ""
},
{
"docid": "neg:1840514_2",
"text": "Existing social-aware routing protocols for packet switched networks make use of the information about the social structure of the network deduced by state information of nodes (e.g., history of past encounters) to optimize routing. Although these approaches are shown to have superior performance to social-oblivious, stateless routing protocols (BinarySW, Epidemic), the improvement comes at the cost of considerable storage overhead required on the nodes. In this paper we present SANE, the first routing mechanism that combines the advantages of both social-aware and stateless approaches. SANE is based on the observation - that we validate on a real-world trace - that individuals with similar interests tend to meet more often. In SANE, individuals (network members) are characterized by their interest profile, a compact representation of their interests. By implementing a simple routing rule based on interest profile similarity, SANE is free of network state information, thus overcoming the storage capacity problem with existing social-aware approaches. Through thorough experiments, we show the superiority of SANE over existing approaches, both stateful, social-aware and stateless, social-oblivious. We discuss the statelessness of our approach in the supplementary file, which can be found on the Computer Society Digital Library at http://doi.ieeecomputersociety.org/10.1109/TPDS.2014.2307857, of this manuscript. Our interest-based approach easily enables innovative networking services, such as interest-casting. An interest-casting protocol is also introduced in this paper, and evaluated through experiments based on both real-world and synthetic mobility traces.",
"title": ""
},
{
"docid": "neg:1840514_3",
"text": "In 1989 the IT function of the exploration and production division of British Petroleum Company set out to transform itself in response to a severe economic environment and poor internal perceptions of IT performance. This case study traces and analyzes the changes made over six years. The authors derive a model of the transformed IT organization comprising seven components which they suggest can guide IT departments in general as they seek to reform themselves in the late 1990's. This model is seen to fit well with recent thinking on general management in that the seven components of change can be reclassified into the Bartlett and Ghoshal (1994) framework of Purpose, Process and People. Some suggestions are made on how to apply the model in other organizations.",
"title": ""
},
{
"docid": "neg:1840514_4",
"text": "Croston’s method is a widely used to predict inventory demand when it is inter mittent. However, it is an ad hoc method with no properly formulated underlying stochastic model. In this paper, we explore possible models underlying Croston’s method and three related methods, and we show that any underlying model will be inconsistent with the prop erties of intermittent demand data. However, we find that the point forecasts and prediction intervals based on such underlying models may still be useful. [JEL: C53, C22, C51]",
"title": ""
},
{
"docid": "neg:1840514_5",
"text": "Modern processors use high-performance cache replacement policies that outperform traditional alternatives like least-recently used (LRU). Unfortunately, current cache models do not capture these high-performance policies as most use stack distances, which are inherently tied to LRU or its variants. Accurate predictions of cache performance enable many optimizations in multicore systems. For example, cache partitioning uses these predictions to divide capacity among applications in order to maximize performance, guarantee quality of service, or achieve other system objectives. Without an accurate model for high-performance replacement policies, these optimizations are unavailable to modern processors. We present a new probabilistic cache model designed for high-performance replacement policies. It uses absolute reuse distances instead of stack distances, and models replacement policies as abstract ranking functions. These innovations let us model arbitrary age-based replacement policies. Our model achieves median error of less than 1% across several high-performance policies on both synthetic and SPEC CPU2006 benchmarks. Finally, we present a case study showing how to use the model to improve shared cache performance.",
"title": ""
},
{
"docid": "neg:1840514_6",
"text": "We propose a data-driven method for automatic deception detection in real-life trial data using visual and verbal cues. Using OpenFace with facial action unit recognition, we analyze the movement of facial features of the witness when posed with questions and the acoustic patterns using OpenSmile. We then perform a lexical analysis on the spoken words, emphasizing the use of pauses and utterance breaks, feeding that to a Support Vector Machine to test deceit or truth prediction. We then try out a method to incorporate utterance-based fusion of visual and lexical analysis, using string based matching.",
"title": ""
},
{
"docid": "neg:1840514_7",
"text": "Autonomous driving has attracted tremendous attention especially in the past few years. The key techniques for a self-driving car include solving tasks like 3D map construction, self-localization, parsing the driving road and understanding objects, which enable vehicles to reason and act. However, large scale data set for training and system evaluation is still a bottleneck for developing robust perception models. In this paper, we present the ApolloScape dataset [1] and its applications for autonomous driving. Compared with existing public datasets from real scenes, e.g. KITTI [2] or Cityscapes [3], ApolloScape contains much large and richer labelling including holistic semantic dense point cloud for each site, stereo, per-pixel semantic labelling, lanemark labelling, instance segmentation, 3D car instance, high accurate location for every frame in various driving videos from multiple sites, cities and daytimes. For each task, it contains at lease 15x larger amount of images than SOTA datasets. To label such a complete dataset, we develop various tools and algorithms specified for each task to accelerate the labelling process, such as 3D-2D segment labeling tools, active labelling in videos etc. Depend on ApolloScape, we are able to develop algorithms jointly consider the learning and inference of multiple tasks. In this paper, we provide a sensor fusion scheme integrating camera videos, consumer-grade motion sensors (GPS/IMU), and a 3D semantic map in order to achieve robust self-localization and semantic segmentation for autonomous driving. We show that practically, sensor fusion and joint learning of multiple tasks are beneficial to achieve a more robust and accurate system. We expect our dataset and proposed relevant algorithms can support and motivate researchers for further development of multi-sensor fusion and multi-task learning in the field of computer vision.",
"title": ""
},
{
"docid": "neg:1840514_8",
"text": "Today when there are more than 1 billion Android users all over the world, it shows that its popularity has no equal. These days mobile phones have become so intrusive in our daily lives that when they needed can give huge amount of information to forensic examiners. Till the date of writing this paper there are many papers citing the need of mobile device forensic and ways of getting the vital artifacts through mobile devices for different purposes. With vast options of popular and less popular forensic tools and techniques available today, this papers aims to bring them together under a comparative study so that this paper could serve as a starting point for several android users, future forensic examiners and investigators. During our survey we found scarcity for papers on tools for android forensic. In this paper we have analyzed different tools and techniques used in android forensic and at the end tabulated the results and findings.",
"title": ""
},
{
"docid": "neg:1840514_9",
"text": "Book recommendation systems can benefit commercial websites, social media sites, and digital libraries, to name a few, by alleviating the knowledge acquisition process of users who look for books that are appealing to them. Even though existing book recommenders, which are based on either collaborative filtering, text content, or the hybrid approach, aid users in locating books (among the millions available), their recommendations are not personalized enough to meet users’ expectations due to their collective assumption on group preference and/or exact content matching, which is a failure. To address this problem, we have developed PBRecS, a book recommendation system that is based on social interactions and personal interests to suggest books appealing to users. PBRecS relies on the friendships established on a social networking site, such as LibraryThing, to generate more personalized suggestions by including in the recommendations solely books that belong to a user’s friends who share common interests with the user, in addition to applying word-correlation factors for partially matching book tags to disclose books similar in contents. The conducted empirical study on data extracted from LibraryThing has verified (i) the effectiveness of PBRecS using social-media data to improve the quality of book recommendations and (ii) that PBRecS outperforms the recommenders employed by Amazon and LibraryThing.",
"title": ""
},
{
"docid": "neg:1840514_10",
"text": "Recent developments in information technology have enabled collection and processing of vast amounts of personal data, such as criminal records, shopping habits, credit and medical history, and driving records. This information is undoubtedly very useful in many areas, including medical research, law enforcement and national security. However, there is an increasing public concern about the individuals' privacy. Privacy is commonly seen as the right of individuals to control information about themselves. The appearance of technology for Knowledge Discovery and Data Mining (KDDM) has revitalized concern about the following general privacy issues: • secondary use of the personal information, • handling misinformation, and • granulated access to personal information. They demonstrate that existing privacy laws and policies are well behind the developments in technology, and no longer offer adequate protection. We also discuss new privacy threats posed KDDM, which includes massive data collection, data warehouses, statistical analysis and deductive learning techniques. KDDM uses vast amounts of data to generate hypotheses and discover general patterns. KDDM poses the following new challenges to privacy.",
"title": ""
},
{
"docid": "neg:1840514_11",
"text": "The work here presented contributes to the development of ground target tracking control systems for fixed wing unmanned aerial vehicles (UAVs). The control laws are derived at the kinematic level, relying on a commercial inner loop controller onboard that accepts commands in indicated air speed and bank, and appropriately sets the control surface deflections and thrust in order to follow those references in the presence of unknown wind. Position and velocity of the target on the ground is assumed to be known. The algorithm proposed derives from a path following control law that enables the UAV to converge to a circumference centered at the target and moving with it, thus keeping the UAV in the vicinity of the target even if the target moves at a velocity lower than the UAV stall speed. If the target speed is close to the UAV speed, the control law behaves similar to a controller that tracks a particular T. Oliveira Science Laboratory, Portuguese Air Force Academy, Sintra, 2715-021, Portugal e-mail: [email protected] P. Encarnação (B) Faculty of Engineering, Catholic University of Portugal, Rio de Mouro, 2635-631, Portugal e-mail: [email protected] point on the circumference centered at the target position. Real flight tests results show the good performance of the control scheme presented.",
"title": ""
},
{
"docid": "neg:1840514_12",
"text": "environment, are becoming increasingly prevalent. However, if agents are to behave intelligently in complex, dynamic, and noisy environments, we believe that they must be able to learn and adapt. The reinforcement learning (RL) paradigm is a popular way for such agents to learn from experience with minimal feedback. One of the central questions in RL is how best to generalize knowledge to successfully learn and adapt. In reinforcement learning problems, agents sequentially observe their state and execute actions. The goal is to maximize a real-valued reward signal, which may be time delayed. For example, an agent could learn to play a game by being told what the state of the board is, what the legal actions are, and then whether it wins or loses at the end of the game. However, unlike in supervised learning scenarios, the agent is never provided the “correct” action. Instead, the agent can only gather data by interacting with an environment, receiving information about the results, its actions, and the reward signal. RL is often used because of the framework’s flexibility and due to the development of increasingly data-efficient algorithms. RL agents learn by interacting with the environment, gathering data. If the agent is virtual and acts in a simulated environment, training data can be collected at the expense of computer time. However, if the agent is physical, or the agent must act on a “real-world” problem where the online reward is critical, such data can be expensive. For instance, a physical robot will degrade over time and must be replaced, and an agent learning to automate a company’s operations may lose money while training. When RL agents begin learning tabula rasa, mastering difficult tasks may be infeasible, as they require significant amounts of data even when using state-of-the-art RL approaches. There are many contemporary approaches to speed up “vanilla” RL methods. Transfer learning (TL) is one such technique. Transfer learning is an umbrella term used when knowledge is Articles",
"title": ""
},
{
"docid": "neg:1840514_13",
"text": "In many scientific and engineering applications, we are tasked with the optimisation of an expensive to evaluate black box function f . Traditional methods for this problem assume just the availability of this single function. However, in many cases, cheap approximations to f may be obtainable. For example, the expensive real world behaviour of a robot can be approximated by a cheap computer simulation. We can use these approximations to eliminate low function value regions cheaply and use the expensive evaluations of f in a small but promising region and speedily identify the optimum. We formalise this task as a multi-fidelity bandit problem where the target function and its approximations are sampled from a Gaussian process. We develop MF-GP-UCB, a novel method based on upper confidence bound techniques. In our theoretical analysis we demonstrate that it exhibits precisely the above behaviour, and achieves better regret than strategies which ignore multi-fidelity information. MF-GP-UCB outperforms such naive strategies and other multi-fidelity methods on several synthetic and real experiments.",
"title": ""
},
{
"docid": "neg:1840514_14",
"text": "The degree of heavy metal (Hg, Cr, Cd, and Pb) pollution in honeybees (Apis mellifera) was investigated in several sampling sites around central Italy including both polluted and wildlife areas. The honeybee readily inhabits all environmental compartments, such as soil, vegetation, air, and water, and actively forages the area around the hive. Therefore, if it functions in a polluted environment, plant products used by bees may also be contaminated, and as a result, also a part of these pollutants will accumulate in the organism. The bees, foragers in particular, are good biological indicators that quickly detect the chemical impairment of the environment by the high mortality and the presence of pollutants in their body or in beehive products. The experiment was carried out using 24 colonies of honeybees bred in hives dislocated whether within urban areas or in wide countryside areas. Metals were analyzed on the foragers during all spring and summer seasons, when the bees were active. Results showed no presence of mercury in all samples analyzed, but honeybees accumulated several amounts of lead, chromium, and cadmium. Pb reported a statistically significant difference among the stations located in urban areas and those in the natural reserves, showing the highest values in honeybees collected from hives located in Ciampino area (Rome), next to the airport. The mean value for this sampling station was 0.52 mg kg−1, and July and September were characterized by the highest concentrations of Pb. Cd also showed statistically significant differences among areas, while for Cr no statistically significant differences were found.",
"title": ""
},
{
"docid": "neg:1840514_15",
"text": "The Reactor design pattern handles service requests that are delivered concurrently to an application by one or more clients. Each service in an application may consist of serveral methods and is represented by a separate event handler that is responsible for dispatching service-specific requests. Dispatching of event handlers is performed by an initiation dispatcher, which manages the registered event handlers. Demultiplexing of service requests is performed by a synchronous event demultiplexer.",
"title": ""
},
{
"docid": "neg:1840514_16",
"text": "We have proposed and verified an efficient architecture for a high-speed I/O transceiver design that implements far-end crosstalk (FEXT) cancellation. In this design, TX pre-emphasis, used traditionally to reduce ISI, is combined with FEXT cancellation at the transmitter to remove crosstalk-induced jitter and interference. The architecture has been verified via simulation models based on channel measurement. A prototype implementation of a 12.8Gbps source-synchronous serial link transmitter has been developed in TSMC's 0.18mum CMOS technology. The proposed design consists of three 12.8Gbps data lines that uses a half-rate PLL clock of 6.4GHz. The chip includes a PRBS generator to simplify multi-lane testing. Simulation results show that, even with a 2times reduction in line separation, FEXT cancellation can successfully reduce jitter by 51.2 %UI and widen the eye by 14.5%. The 2.5 times 1.5 mm2 core consumes 630mW per lane at 12.8Gbps with a 1.8V supply",
"title": ""
},
{
"docid": "neg:1840514_17",
"text": "N Engl J Med 2005;353:1387-94. Copyright © 2005 Massachusetts Medical Society. A 56-year-old man was referred to the transplantation infectious-disease clinic because of a low-grade fever and left axillary lymphadenopathy. The patient had received a cadaveric kidney transplant five years earlier for polycystic kidney disease. He had been in his usual state of health until three weeks before the referral to the infectious-disease clinic, when he discovered palpable, tender lymph nodes in the left epitrochlear region and axilla. Ten days later a low-grade fever, dry cough, nasal congestion, and night sweats developed, for which trimethoprim–sulfamethoxazole was prescribed, without benefit. He was referred to a specialist in infectious diseases. The patient did not have headache, sore throat, chest or abdominal pain, dyspnea, diarrhea, or dysuria. He had hypertension, gout, nephrolithiasis, gastroesophageal reflux disease, and prostate cancer, which had been treated with radiation therapy two years earlier. He was a policeman who worked in an office. He had not traveled outside of the United States recently. He had acquired a kitten several months earlier and recalled receiving multiple scratches on his hands when he played with it. His medications were cyclosporine (325 mg daily), mycophenolate mofetil (2 g daily), amlodipine, furosemide, colchicine, doxazosin, and pravastatin. Prednisone had been discontinued one year previously. He reported no allergies to medications. The temperature was 36.0°C and the blood pressure 105/75 mm Hg. On physical examination, the patient appeared well. The head, neck, lungs, heart, and abdomen were unremarkable. On the dorsum of the left hand was a single, violaceous nodule with a flat, necrotic eschar on top (Fig. 1); there was no erythema, fluctuance, pus, or other drainage, and there was no sinus tract. The patient said that this lesion had nearly healed, but that he had been scratching it and thought that this irritation prevented it from healing. There was a tender left epitrochlear lymph node, 2 cm by 2 cm, and a mass of matted, tender lymph nodes, 5 cm in diameter, in the left axilla. There was no lymphangitic streaking or cellulitis. The results of a complete blood count revealed no abnormalities (Table 1). Additional laboratory studies were obtained, and clarithromycin (500 mg, twice a day) was prescribed. Within a day of starting treatment, the patient’s temperature rose to 39.4°C, and the fever was accompanied by shaking chills. He was admitted to the hospital. The temperature was 38.6°C, the pulse was 78 beats per minute, and the blood pressure was 100/60 mm Hg. The results of a physical examination were unchanged presentation of case",
"title": ""
},
{
"docid": "neg:1840514_18",
"text": "The ability to write diverse poems in different styles under the same poetic imagery is an important characteristic of human poetry writing. Most previous works on automatic Chinese poetry generation focused on improving the coherency among lines. Some work explored style transfer but suffered from expensive expert labeling of poem styles. In this paper, we target on stylistic poetry generation in a fully unsupervised manner for the first time. We propose a novel model which requires no supervised style labeling by incorporating mutual information, a concept in information theory, into modeling. Experimental results show that our model is able to generate stylistic poems without losing fluency and coherency.",
"title": ""
},
{
"docid": "neg:1840514_19",
"text": "Hyperbolic embeddings offer excellent quality with few dimensions when embedding hierarchical data structures. We give a combinatorial construction that embeds trees into hyperbolic space with arbitrarily low distortion without optimization. On WordNet, this algorithm obtains a meanaverage-precision of 0.989 with only two dimensions, outperforming existing work by 0.11 points. We provide bounds characterizing the precisiondimensionality tradeoff inherent in any hyperbolic embedding. To embed general metric spaces, we propose a hyperbolic generalization of multidimensional scaling (h-MDS). We show how to perform exact recovery of hyperbolic points from distances, provide a perturbation analysis, and give a recovery result that enables us to reduce dimensionality. Finally, we extract lessons from the algorithms and theory above to design a scalable PyTorch-based implementation that can handle incomplete information.",
"title": ""
}
] |
1840515 | Aerodynamic Loads on Tall Buildings: Interactive Database | [
{
"docid": "pos:1840515_0",
"text": "An evaluation and comparison of seven of the world’s major building codes and standards is conducted in this study, with specific discussion of their estimations of the alongwind, acrosswind, and torsional response, where applicable, for a given building. The codes and standards highlighted by this study are those of the United States, Japan, Australia, the United Kingdom, Canada, China, and Europe. In addition, the response predicted by using the measured power spectra of the alongwind, acrosswind, and torsional responses for several building shapes tested in a wind tunnel are presented, and a comparison between the response predicted by wind tunnel data and that estimated by some of the standards is conducted. This study serves not only as a comparison of the response estimates by international codes and standards, but also introduces a new set of wind tunnel data for validation of wind tunnel-based empirical expressions. 1.0 Introduction Under the influence of dynamic wind loads, typical high-rise buildings oscillate in the alongwind, acrosswind, and torsional directions. The alongwind motion primarily results from pressure fluctuations on the windward and leeward faces, which generally follows the fluctuations in the approach flow, at least in the low frequency range. Therefore, alongwind aerodynamic loads may be quantified analytically utilizing quasi-steady and strip theories, with dynamic effects customarily represented by a random-vibrationbased “Gust Factor Approach” (Davenport 1967, Vellozzi & Cohen 1968, Vickery 1970, Simiu 1976, Solari 1982, ESDU 1989, Gurley & Kareem 1993). However, the acrosswind motion is introduced by pressure fluctuations on the side faces which are influenced by fluctuations in the separated shear layers and wake dynamics (Kareem 1982). This renders the applicability of strip and quasi-steady theories rather doubtful. Similarly, the wind-induced torsional effects result from an imbalance in the instantaneous pressure distribution on the building surface. These load effects are further amplified in asymmetric buildings as a result of inertial coupling (Kareem 1985). Due to the complexity of the acrosswind and torsional responses, physical modeling of fluid-structure interactions remains the only viable means of obtaining information on wind loads, though recently, research in the area of computational fluid dynam1. Graduate Student & Corresponding Author, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556. e-mail: [email protected] 2. Professor, NatHaz Modeling Laboratory, Department of Civil Engineering and Geological Sciences, University of Notre Dame, Notre Dame, IN, 46556",
"title": ""
},
{
"docid": "pos:1840515_1",
"text": "Wind loads on structures under the buffeting action of wind gusts have traditionally been treated by the ‘‘gust loading factor’’ (GLF) method in most major codes and standards around the world. In this scheme, the equivalent-static wind loading used for design is equal to the mean wind force multiplied by the GLF. Although the traditional GLF method ensures an accurate estimation of the displacement response, it may fall short in providing a reliable estimate of other response components. To overcome this shortcoming, a more consistent procedure for determining design loads on tall structures is proposed. This paper highlights an alternative model, in which the GLF is based on the base bending moment rather than the displacement. The expected extreme base moment is computed by multiplying the mean base moment by the proposed GLF. The base moment is then distributed to each floor in terms of the floor load in a format that is very similar to the one used to distribute the base shear in earthquake engineering practice. In addition, a simple relationship between the proposed base moment GLF and the traditional GLF is derived, which makes it convenient to employ the proposed approach while utilizing the existing background information. Numerical examples are presented to demonstrate the efficacy of the proposed procedure in light of the traditional approach. This paper also extends the new framework for the formulation of wind load effects in the acrosswind and torsional directions along the ‘‘GLF’’ format that has generally been used for the alongwind response. A 3D GLF concept is advanced, which draws upon a database of aerodynamic wind loads on typical tall buildings, a mode shape correction procedure and a more realistic formulation of the equivalent-static wind loads and their effects. A numerical example is presented to demonstrate the efficacy of the proposed procedure in light of the traditional approach. It is envisaged that the proposed formulation will be most appropriate for inclusion in codes and standards. r 2003 Elsevier Ltd. All rights reserved.",
"title": ""
}
] | [
{
"docid": "neg:1840515_0",
"text": "Generative adversarial nets (GANs) have been successfully applied to the artificial generation of image data. In terms of text data, much has been done on the artificial generation of natural language from a single corpus. We consider multiple text corpora as the input data, for which there can be two applications of GANs: (1) the creation of consistent cross-corpus word embeddings given different word embeddings per corpus; (2) the generation of robust bag-of-words document embeddings for each corpora. We demonstrate our GAN models on real-world text data sets from different corpora, and show that embeddings from both models lead to improvements in supervised learning problems.",
"title": ""
},
{
"docid": "neg:1840515_1",
"text": "Describing the contents of images is a challenging task for machines to achieve. It requires not only accurate recognition of objects and humans, but also their attributes and relationships as well as scene information. It would be even more challenging to extend this process to identify falls and hazardous objects to aid elderly or users in need of care. This research makes initial attempts to deal with the above challenges to produce multi-sentence natural language description of image contents. It employs a local region based approach to extract regional image details and combines multiple techniques including deep learning and attribute learning through the use of machine learned features to create high level labels that can generate detailed description of real-world images. The system contains the core functions of scene classification, object detection and classification, attribute learning, relationship detection and sentence generation. We have also further extended this process to deal with open-ended fall detection and hazard identification. In comparison to state-of-the-art related research, our system shows superior robustness and flexibility in dealing with test images from new, unrelated domains, which poses great challenges to many existing methods. Our system is evaluated on a subset from Flickr8k and Pascal VOC 2012 and achieves an impressive average BLEU score of 46 and outperforms related research by a significant margin of 10 BLEU score when evaluated with a small dataset of images containing falls and hazardous objects. It also shows impressive performance when evaluated using a subset of IAPR TC-12 dataset.",
"title": ""
},
{
"docid": "neg:1840515_2",
"text": "Compared to previous head-mounted displays, the compact and low-cost Oculus Rift has claimed to offer improved virtual reality experiences. However, how and what kinds of user experiences are encountered by people when using the Rift in actual gameplay has not been examined. We present an exploration of 10 participants' experiences of playing a first-person shooter game using the Rift. Despite cybersickness and a lack of control, participants experienced heightened experiences, a richer engagement with passive game elements, a higher degree of flow and a deeper immersion on the Rift than on a desktop setup. Overly demanding movements, such as the large range of head motion required to navigate the game environment were found to adversely affect gaming experiences. Based on these and other findings, we also present some insights for designing games for the Rift.",
"title": ""
},
{
"docid": "neg:1840515_3",
"text": "In next generation cellular networks, cloud computing will have profound impacts on mobile wireless communications. On the one hand, the integration of cloud computing into the mobile environment enables MCC systems. On the other hand, the powerful computing platforms in the cloud for radio access networks lead to a novel concept of C-RAN. In this article we study the topology configuration and rate allocation problem in C-RAN with the objective of optimizing the end-to-end performance of MCC users in next generation cellular networks. We use a decision theoretical approach to tackle the delayed channel state information problem in C-RAN. Simulation results show that the design and operation of future mobile wireless networks can be significantly affected by cloud computing, and the proposed scheme is capable of achieving substantial performance gains over existing schemes.",
"title": ""
},
{
"docid": "neg:1840515_4",
"text": "The mechanisms of anterior cruciate ligament (ACL) injuries are still inconclusive from an epidemiological standpoint. An epidemiological approach in a large sample group over an appropriate period of years will be necessary to enhance the current knowledge of the ACL injury mechanism. The objective of the study was to investigate the ACL injury occurrence in a large sample over twenty years and demonstrate the relationships between the ACL injury occurrence and the dynamic knee alignment at the time of the injury. We investigated the activity, the injury mechanism, and the dynamic knee alignment at the time of the injury in 1,718 patients diagnosed as having the ACL injuries. Regarding the activity at the time of the injury, \"competition \"was the most common, accounting for about half of all the injuries. The current result also showed that the noncontact injury was the most common, which was observed especially in many female athletes. Finally, the dynamic alignment of \"Knee-in & Toe- out \"(i.e. dynamic knee valgus) was the most common, accounting for about half. These results enhance our understanding of the ACL injury mechanism and may be used to guide future injury prevention strategies. Key pointsWe investigated the situation of ACL injury occurrence, especially dynamic alignments at the time of injury, in 1,718 patients who had visited our institution for surgery and physical therapy for twenty years.Our epidemiological study of the large patient group revealed that \"knee-in & toe-out \"alignment was the most frequently seen at the time of the ACL injury.From an epidemiological standpoint, we need to pay much attention to avoiding \"Knee-in & Toe-out \"alignment during sports activities.",
"title": ""
},
{
"docid": "neg:1840515_5",
"text": "Within NASA, there is an increasing awareness that software is of growing importance to the success of missions. Much data has been collected, and many theories have been advanced on how to reduce or eliminate errors in code. However, learning requires experience. This article documents a new NASA initiative to build a centralized repository of software defect data; in particular, it documents one specific case study on software metrics. Software metrics are used as a basis for prediction of errors in code modules, but there are many different metrics available. McCabe is one of the more popular tools used to produce metrics, but, as will be shown in this paper, other metrics can be more significant.",
"title": ""
},
{
"docid": "neg:1840515_6",
"text": "The fabrication of digital Integrated Circuits (ICs) is increasingly outsourced. Given this trend, security is recognized as an important issue. The threat agent is an attacker at the IC foundry that has information about the circuit and inserts covert, malicious circuitry. The use of 3D IC technology has been suggested as a possible technique to counter this threat. However, to our knowledge, there is no prior work on how such technology can be used effectively. We propose a way to use 3D IC technology for security in this context. Specifically, we obfuscate the circuit by lifting wires to a trusted tier, which is fabricated separately. This is referred to as split manufacturing. For this setting, we provide a precise notion of security, that we call k-security, and a characterization of the underlying computational problems and their complexity. We further propose a concrete approach for identifying sets of wires to be lifted, and the corresponding security they provide. We conclude with a comprehensive empirical assessment with benchmark circuits that highlights the security versus cost trade-offs introduced by 3D IC based circuit obfuscation.",
"title": ""
},
{
"docid": "neg:1840515_7",
"text": "Indian population is growing very fast and is responsible for posing various environmental risks like traffic noise which is the primitive contributor to the overall noise pollution in urban environment. So, an attempt has been made to develop a web enabled application for spatio-temporal semantic analysis of traffic noise of one of the urban road segments in India. Initially, a traffic noise model was proposed for the study area based on the Calixto model. Later, a City Geographic Markup Language (CityGML) model, which is an OGC encoding standard for 3D data representation, was developed and stored into PostGIS. A web GIS framework was implemented for simulation of traffic noise level mapped on building walls using the data from PostGIS. Finally, spatio-temporal semantic analysis to quantify the effects in terms of threshold noise level, number of walls and roofs affected from start to the end of the day, was performed.",
"title": ""
},
{
"docid": "neg:1840515_8",
"text": "Links between issue reports and their corresponding commits in version control systems are often missing. However, these links are important for measuring the quality of a software system, predicting defects, and many other tasks. Several approaches have been designed to solve this problem by automatically linking bug reports to source code commits via comparison of textual information in commit messages and bug reports. Yet, the effectiveness of these techniques is oftentimes suboptimal when commit messages are empty or contain minimum information; this particular problem makes the process of recovering traceability links between commits and bug reports particularly challenging. In this work, we aim at improving the effectiveness of existing bug linking techniques by utilizing rich contextual information. We rely on a recently proposed approach, namely ChangeScribe, which generates commit messages containing rich contextual information by using code summarization techniques. Our approach then extracts features from these automatically generated commit messages and bug reports, and inputs them into a classification technique that creates a discriminative model used to predict if a link exists between a commit message and a bug report. We compared our approach, coined as RCLinker (Rich Context Linker), to MLink, which is an existing state-of-the-art bug linking approach. Our experiment results on bug reports from six software projects show that RCLinker outperforms MLink in terms of F-measure by 138.66%.",
"title": ""
},
{
"docid": "neg:1840515_9",
"text": "Initial work on automatic emotion recognition concentrates mainly on audio-based emotion classification. Speech is the most important channel for the communication between humans and it may expected that emotional states are trans-fered though content, prosody or paralinguistic cues. Besides the audio modality, with the rapidly developing computer hardware and video-processing devices researches start exploring the video modality. Visual-based emotion recognition works focus mainly on the extraction and recognition of emotional information from the facial expressions. There are also attempts to classify emotional states from body or head gestures and to combine different visual modalities, for instance facial expressions and body gesture captured by two separate cameras [3]. Emotion recognition from psycho-physiological measurements, such as skin conductance, respiration, electro-cardiogram (ECG), electromyography (EMG), electroencephalography (EEG) is another attempt. In contrast to speech, gestures or facial expressions these biopotentials are the result of the autonomic nervous system and cannot be imitated [4]. Research activities in facial expression and speech based emotion recognition [6] are usually performed independently from each other. But in almost all practical applications people speak and exhibit facial expressions at the same time, and consequently both modalities should be used in order to perform robust affect recognition. Therefore, multimodal, and in particularly audiovisual emotion recognition has been emerging in recent times [11], for example multiple classifier systems have been widely investigated for the classification of human emotions [1, 9, 12, 14]. Combining classifiers is a promising approach to improve the overall classifier performance [13, 8]. In multiple classifier systems (MCS) it is assumed that the raw data X originates from an underlying source, but each classifier receives different subsets of (X) of the same raw input data X. Feature vector F j (X) are used as the input to the j−th classifier computing an estimate y j of the class membership of F j (X). This output y j might be a crisp class label or a vector of class memberships, e.g. estimates of posteriori probabilities. Based on the multiple classifier outputs y 1 ,. .. , y N the combiner produces the final decision y. Combiners used in this study are fixed transformations of the multiple classifier outputs y 1 ,. .. , y N. Examples of such combining rules are Voting, (weighted) Averaging, and Multiplying, just to mention the most popular types. 2 Friedhelm Schwenker In addition to a priori fixed combination rules the combiner can be a …",
"title": ""
},
{
"docid": "neg:1840515_10",
"text": "This paper describes a novel driver-support system that helps to maintain the correct speed and headway (distance) with respect to lane curvature and other vehicles ahead. The system has been developed as part of the Integrating Project PReVENT under the European Framework Programme 6, which is named SAfe SPEed and safe distaNCE (SASPENCE). The application uses a detailed description of the situation ahead of the vehicle. Many sensors [radar, video camera, Global Positioning System (GPS) and accelerometers, digital maps, and vehicle-to-vehicle wireless local area network (WLAN) connections] are used, and state-of-the-art data fusion provides a model of the environment. The system then computes a feasible maneuver and compares it with the driver's behavior to detect possible mistakes. The warning strategies are based on this comparison. The system “talks” to the driver mainly via a haptic pedal or seat belt and “listens” to the driver mainly via the vehicle acceleration. This kind of operation, i.e., the comparison between what the system thinks is possible and what the driver appears to be doing, and the consequent dialog can be regarded as simple implementations of the rider-horse metaphor (H-metaphor). The system has been tested in several situations (driving simulator, hardware in the loop, and real road tests). Objective and subjective data have been collected, revealing good acceptance and effectiveness, particularly in awakening distracted drivers. The system intervenes only when a problem is actually detected in the headway and/or speed (approaching curves or objects) and has been shown to cause prompt reactions and significant speed correction before getting into really dangerous situations.",
"title": ""
},
{
"docid": "neg:1840515_11",
"text": "This paper presents the conceptual design, detailed development and flight testing of AtlantikSolar, a 5.6m-wingspan solar-powered Low-Altitude Long-Endurance (LALE) Unmanned Aerial Vehicle (UAV) designed and built at ETH Zurich. The UAV is required to provide perpetual endurance at a geographic latitude of 45°N in a 4-month window centered around June 21st. An improved conceptual design method is presented and applied to maximize the perpetual flight robustness with respect to local meteorological disturbances such as clouds or winds. Airframe, avionics hardware, state estimation and control method development for autonomous flight operations are described. Flight test results include a 12-hour flight relying solely on batteries to replicate night-flight conditions. In addition, we present flight results from Search-And-Rescue field trials where a camera and processing pod were mounted on the aircraft to create high-fidelity 3D-maps of a simulated disaster area.",
"title": ""
},
{
"docid": "neg:1840515_12",
"text": "In order to solve the problem that the long cycle and the repetitive work in the process of designing the industrial robot, a modular manipulator system developed for general industrial applications is introduced in this paper. When the application scene is changed, the corresponding robotic modules can be selected to assemble a new robot configuration that meets the requirements. The modules can be divided into two categories: joint modules and link modules. Joint modules consist of three types of revolute joint modules with different torque, and link modules mainly contain T link module and L link module. By connection of different types of modules, various of configurations can be achieved. Considering the traditional 6-DoF manipulators are difficult to meet the needs of the unstructured industrial applications, a 7-DoF redundant manipulator prototype is designed on the basis of the robotic modules.",
"title": ""
},
{
"docid": "neg:1840515_13",
"text": "In this paper, we propose a low-power level shifter (LS) capable of converting extremely low-input voltage into high-output voltage. The proposed LS consists of a pre-amplifier with a logic error correction circuit and an output latch stage. The pre-amplifier generates complementary amplified signals, and the latch stage converts them into full-swing output signals. Simulated results demonstrated that the proposed LS in a 0.18-μm CMOS process can convert a 0.19-V input into 1.8-V output correctly. The energy and the delay time of the proposed LS were 0.24 pJ and 21.4 ns when the low supply voltage, high supply voltage, and the input pulse frequency, were 0.4, 1.8 V, and 100 kHz, respectively.",
"title": ""
},
{
"docid": "neg:1840515_14",
"text": "Nowadays, the usage of mobile device among the community worldwide has been tremendously increased. With this proliferation of mobile devices, more users are able to access the internet for variety of online application and services. As the use of mobile devices and applications grows, the rate of vulnerabilities exploitation and sophistication of attack towards the mobile user are increasing as well. To date, Google's Android Operating System (OS) are among the widely used OS for the mobile devices, the openness design and ease of use have made them popular among developer and user. Despite the advantages the android-based mobile devices have, it also invited the malware author to exploit the mobile application on the market. Prior to this matter, this research focused on investigating the behaviour of mobile malware through hybrid approach. The hybrid approach correlates and reconstructs the result from the static and dynamic malware analysis in producing a trace of malicious event. Based on the finding, this research proposed a general mobile malware behaviour model that can contribute in identifying the key features in detecting mobile malware on an Android Platform device.",
"title": ""
},
{
"docid": "neg:1840515_15",
"text": "Many of the world's most popular websites catalyze their growth through invitations from existing members. New members can then in turn issue invitations, and so on, creating cascades of member signups that can spread on a global scale. Although these diffusive invitation processes are critical to the popularity and growth of many websites, they have rarely been studied, and their properties remain elusive. For instance, it is not known how viral these cascades structures are, how cascades grow over time, or how diffusive growth affects the resulting distribution of member characteristics present on the site. In this paper, we study the diffusion of LinkedIn, an online professional network comprising over 332 million members, a large fraction of whom joined the site as part of a signup cascade. First we analyze the structural patterns of these signup cascades, and find them to be qualitatively different from previously studied information diffusion cascades. We also examine how signup cascades grow over time, and observe that diffusion via invitations on LinkedIn occurs over much longer timescales than are typically associated with other types of online diffusion. Finally, we connect the cascade structures with rich individual-level attribute data to investigate the interplay between the two. Using novel techniques to study the role of homophily in diffusion, we find striking differences between the local, edge-wise homophily and the global, cascade-level homophily we observe in our data, suggesting that signup cascades form surprisingly coherent groups of members.",
"title": ""
},
{
"docid": "neg:1840515_16",
"text": "Video streaming over HTTP is becoming the de facto dominating paradigm for today's video applications. HTTP as an over-the-top (OTT) protocol has been leveraged for quality video traversal over the Internet. High user-received quality-of-experience (QoE) is driven not only by the new technology, but also by a wide range of user demands. Given the limitation of a traditional TCP/IP network for supporting video transmission, the typical on-off transfer pattern is inevitable. Dynamic adaptive streaming over HTTP (DASH) establishes a simple architecture and enables new video applications to fully utilize the exiting physical network infrastructure. By deploying robust adaptive algorithms at the client side, DASH can provide a smooth streaming experience. We propose a dynamic adaptive algorithm in order to keep a high QoE for the average user's experience. We formulated our QoE optimization in a set of key factors. The results obtained by our empirical network traces show that our approach not only achieves a high average QoE but it also works stably under different network conditions.",
"title": ""
},
{
"docid": "neg:1840515_17",
"text": "The continued amalgamation of cloud technologies into all aspects of our daily lives and the technologies we use (i.e. cloud-of-things) creates business opportunities, security and privacy risks, and investigative challenges (in the event of a cybersecurity incident). This study examines the extent to which data acquisition fromWindows phone, a common cloud-of-thing device, is supported by three popular mobile forensics tools. The effect of device settings modification (i.e. enabling screen lock and device reset operations) and alternative acquisition processes (i.e. individual and combined acquisition) on the extraction results are also examined. Our results show that current mobile forensic tool support for Windows Phone 8 remains limited. The results also showed that logical acquisition support was more complete in comparison to physical acquisition support. In one example, the tool was able to complete a physical acquisition of a Nokia Lumia 625, but its deleted contacts and SMSs could not be recovered/extracted. In addition we found that separate acquisition is needed for device removable media to maximize acquisition results, particularly when trying to recover deleted data. Furthermore, enabling flight-mode and disabling location services are highly recommended to eliminate the potential for data alteration during the acquisition process. These results should provide practitioners with an overview of the current capability of mobile forensic tools and the challenges in successfully extracting evidence from the Windows phone platform. Copyright © 2016 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "neg:1840515_18",
"text": "This paper presents a comparative study of different neural network models for forecasting the weather of Vancouver, British Columbia, Canada. For developing the models, we used one year’s data comprising of daily maximum and minimum temperature, and wind-speed. We used Multi-Layered Perceptron (MLP) and an Elman Recurrent Neural Network (ERNN), which were trained using the one-step-secant and LevenbergMarquardt algorithms. To ensure the effectiveness of neurocomputing techniques, we also tested the different connectionist models using a different training and test data set. Our goal is to develop an accurate and reliable predictive model for weather analysis. Radial Basis Function Network (RBFN) exhibits a good universal approximation capability and high learning convergence rate of weights in the hidden and output layers. Experimental results obtained have shown RBFN produced the most accurate forecast model as compared to ERNN and MLP networks.",
"title": ""
},
{
"docid": "neg:1840515_19",
"text": "Clients with generalized anxiety disorder (GAD) received either (a) applied relaxation and self-control desensitization, (b) cognitive therapy, or (c) a combination of these methods. Treatment resulted in significant improvement in anxiety and depression that was maintained for 2 years. The large majority no longer met diagnostic criteria; a minority sought further treatment during follow-up. No differences in outcome were found between conditions; review of the GAD therapy literature suggested that this may have been due to strong effects generated by each component condition. Finally, interpersonal difficulties remaining at posttherapy, measured by the Inventory of Interpersonal Problems Circumplex Scales (L. E. Alden, J. S. Wiggins, & A. L. Pincus, 1990) in a subset of clients, were negatively associated with posttherapy and follow-up improvement, suggesting the possible utility of adding interpersonal treatment to cognitive-behavioral therapy to increase therapeutic effectiveness.",
"title": ""
}
] |
1840516 | An In-depth Comparison of Subgraph Isomorphism Algorithms in Graph Databases | [
{
"docid": "pos:1840516_0",
"text": "We invesrigare new appmaches for frequent graph-based patrem mining in graph darasers andpmpose a novel ofgorirhm called gSpan (graph-based,Tubsmrure parrern mining), which discovers frequenr subsrrucrures z h o u r candidate generorion. &an builds a new lexicographic or. der among graphs, and maps each graph to a unique minimum DFS code as irs canonical label. Based on rhis lexicographic orde,: &an adopts rhe deprh-jrsr search srraregy ro mine frequenr cannecred subgraphs eflciently. Our performance study shows rhar gSpan subsianriolly outperforms previous algorithm, somerimes by an order of magnirude.",
"title": ""
}
] | [
{
"docid": "neg:1840516_0",
"text": "The review paper describes the application of various image processing techniques for automatic detection of glaucoma. Glaucoma is a neurodegenerative disorder of the optic nerve, which causes partial loss of vision. Large number of people suffers from eye diseases in rural and semi urban areas all over the world. Current diagnosis of retinal disease relies upon examining retinal fundus image using image processing. The key image processing techniques to detect eye diseases include image registration, image fusion, image segmentation, feature extraction, image enhancement, morphology, pattern matching, image classification, analysis and statistical measurements. KeywordsImage Registration; Fusion; Segmentation; Statistical measures; Morphological operation; Classification Full Text: http://www.ijcsmc.com/docs/papers/November2013/V2I11201336.pdf",
"title": ""
},
{
"docid": "neg:1840516_1",
"text": "Gill morphometric and gill plasticity of the air-breathing striped catfish (Pangasianodon hypophthalmus) exposed to different temperatures (present day 27°C and future 33°C) and different air saturation levels (92% and 35%) during 6weeks were investigated using vertical sections to estimate the respiratory lamellae surface areas, harmonic mean barrier thicknesses, and gill component volumes. Gill respiratory surface area (SA) and harmonic mean water - blood barrier thicknesses (HM) of the fish were strongly affected by both environmental temperature and oxygen level. Thus initial values for 27°C normoxic fish (12.4±0.8g) were 211.8±21.6mm2g-1 and 1.67±0.12μm for SA and HM respectively. After 5weeks in same conditions or in the combinations of 33°C and/or PO2 of 55mmHg, this initial surface area scaled allometrically with size for the 33°C hypoxic group, whereas branchial SA was almost eliminated in the 27°C normoxic group, with other groups intermediate. In addition, elevated temperature had an astounding effect on growth with the 33°C group growing nearly 8-fold faster than the 27°C fish.",
"title": ""
},
{
"docid": "neg:1840516_2",
"text": "Modern machine learning algorithms are increasingly computationally demanding, requiring specialized hardware and distributed computation to achieve high performance in a reasonable time frame. Many hyperparameter search algorithms have been proposed for improving the efficiency of model selection, however their adaptation to the distributed compute environment is often ad-hoc. We propose Tune, a unified framework for model selection and training that provides a narrow-waist interface between training scripts and search algorithms. We show that this interface meets the requirements for a broad range of hyperparameter search algorithms, allows straightforward scaling of search to large clusters, and simplifies algorithm implementation. We demonstrate the implementation of several state-of-the-art hyperparameter search algorithms in Tune. Tune is available at http://ray.readthedocs.io/en/latest/tune.html.",
"title": ""
},
{
"docid": "neg:1840516_3",
"text": "Maritime transportation is accountable for 2.7% of the worlds CO emissions and the liner shipping industry is committed to a slow steaming policy to provide low cost and environmentally conscious global transport of goods without compromising the level of service. The potential for making cost effective and energy efficient liner shipping networks using operations research is huge and neglected. The implementation of logistic planning tools based upon operations research has enhanced performance of both airlines, railways and general transportation companies, but within the field of liner shipping very little operations research has been done. We believe that access to domain knowledge and data is an entry barrier for researchers to approach the important liner shipping network design problem. This paper presents a thorough description of the liner shipping domain applied to network design along with a rich integer programming model based on the services, that constitute the fixed schedule of a liner shipping company. The model may be relaxed as well as decomposed. The design of a benchmark suite of data instances to reflect the business structure of a global liner shipping network is discussed. The paper is motivated by providing easy access to the domain and the data sources of liner shipping for operations researchers in general. A set of data instances with offset in real world data is presented and made available upon request. Future work is to provide computational results for the instances.",
"title": ""
},
{
"docid": "neg:1840516_4",
"text": "Recently, style transfer has received a lot of attention. While much of this research has aimed at speeding up processing, the approaches are still lacking from a principled, art historical standpoint: a style is more than just a single image or an artist, but previous work is limited to only a single instance of a style or shows no benefit from more images. Moreover, previous work has relied on a direct comparison of art in the domain of RGB images or on CNNs pre-trained on ImageNet, which requires millions of labeled object bounding boxes and can introduce an extra bias, since it has been assembled without artistic consideration. To circumvent these issues, we propose a style-aware content loss, which is trained jointly with a deep encoder-decoder network for real-time, high-resolution stylization of images and videos. We propose a quantitative measure for evaluating the quality of a stylized image and also have art historians rank patches from our approach against those from previous work. These and our qualitative results ranging from small image patches to megapixel stylistic images and videos show that our approach better captures the subtle nature in which a style affects content.",
"title": ""
},
{
"docid": "neg:1840516_5",
"text": "Bringing Blockchain technology and business process management together, we follow the Design Science Research approach and design, implement, and evaluate a Blockchain prototype for crossorganizational workflow management together with a German bank. For the use case of a documentary letter of credit we describe the status quo of the process, identify areas of improvement, implement a Blockchain solution, and compare both workflows. The prototype illustrates that the process, as of today paper-based and with high manual effort, can be significantly improved. Our research reveals that a tamper-proof process history for improved auditability, automation of manual process steps and the decentralized nature of the system can be major advantages of a Blockchain solution for crossorganizational workflow management. Further, our research provides insights how Blockchain technology can be used for business process management in general.",
"title": ""
},
{
"docid": "neg:1840516_6",
"text": "For many networking applications, recent data is more significant than older data, motivating the need for sliding window solutions. Various capabilities, such as DDoS detection and load balancing, require insights about multiple metrics including Bloom filters, per-flow counting, count distinct and entropy estimation. In this work, we present a unified construction that solves all the above problems in the sliding window model. Our single solution offers a better space to accuracy tradeoff than the state-of-the-art for each of these individual problems! We show this both analytically and by running multiple real Internet backbone and datacenter packet traces.",
"title": ""
},
{
"docid": "neg:1840516_7",
"text": "Complex numbers have long been favoured for digital signal processing, yet complex representations rarely appear in deep learning architectures. RNNs, widely used to process time series and sequence information, could greatly benefit from complex representations. We present a novel complex gate recurrent cell. When used together with norm-preserving state transition matrices, our complex gated RNN exhibits excellent stability and convergence properties. We demonstrate competitive performance of our complex gated RNN on the synthetic memory and adding task, as well as on the real-world task of human motion prediction.",
"title": ""
},
{
"docid": "neg:1840516_8",
"text": "To maximize network lifetime in Wireless Sensor Networks (WSNs) the paths for data transfer are selected in such a way that the total energy consumed along the path is minimized. To support high scalability and better data aggregation, sensor nodes are often grouped into disjoint, non overlapping subsets called clusters. Clusters create hierarchical WSNs which incorporate efficient utilization of limited resources of sensor nodes and thus extends network lifetime. The objective of this paper is to present a state of the art survey on clustering algorithms reported in the literature of WSNs. Our paper presents a taxonomy of energy efficient clustering algorithms in WSNs. And also present timeline and description of LEACH and Its descendant in WSNs.",
"title": ""
},
{
"docid": "neg:1840516_9",
"text": "Railway infrastructure monitoring is a vital task to ensure rail transportation safety. A rail failure could result in not only a considerable impact on train delays and maintenance costs, but also on safety of passengers. In this article, the aim is to assess the risk of a rail failure by analyzing a type of rail surface defect called squats that are detected automatically among the huge number of records from video cameras. We propose an image processing approach for automatic detection of squats, especially severe types that are prone to rail breaks. We measure the visual length of the squats and use them to model the failure risk. For the assessment of the rail failure risk, we estimate the probability of rail failure based on the growth of squats. Moreover, we perform severity and crack growth analyses to consider the impact of rail traffic loads on defects in three different growth scenarios. The failure risk estimations are provided for several samples of squats with different crack growth lengths on a busy rail track of the Dutch railway network. The results illustrate the practicality and efficiency of the proposed approach.",
"title": ""
},
{
"docid": "neg:1840516_10",
"text": "Android rooting enables device owners to freely customize their own devices and run useful apps that require root privileges. While useful, rooting weakens the security of Android devices and opens the door for malware to obtain privileged access easily. Thus, several rooting prevention mechanisms have been introduced by vendors, and sensitive or high-value mobile apps perform rooting detection to mitigate potential security exposures on rooted devices. However, there is a lack of understanding whether existing rooting prevention and detection methods are effective. To fill this knowledge gap, we studied existing Android rooting methods and performed manual and dynamic analysis on 182 selected apps, in order to identify current rooting detection methods and evaluate their effectiveness. Our results suggest that these methods are ineffective. We conclude that reliable methods for detecting rooting must come from integrity-protected kernels or trusted execution environments, which are difficult to bypass.",
"title": ""
},
{
"docid": "neg:1840516_11",
"text": "Today, and possibly for a long time to come, the full driving task is too complex an activity to be fully formalized as a sensing-acting robotics system that can be explicitly solved through model-based and learning-based approaches in order to achieve full unconstrained vehicle autonomy. Localization, mapping, scene perception, vehicle control, trajectory optimization, and higher-level planning decisions associated with autonomous vehicle development remain full of open challenges. This is especially true for unconstrained, real-world operation where the margin of allowable error is extremely small and the number of edge-cases is extremely large. Until these problems are solved, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. The governing objectives of the MIT Autonomous Vehicle Technology (MIT-AVT) study are to (1) undertake large-scale real-world driving data collection that includes high-definition video to fuel the development of deep learning based internal and external perception systems, (2) gain a holistic understanding of how human beings interact with vehicle automation technology by integrating video data with vehicle state data, driver characteristics, mental models, and self-reported experiences with technology, and (3) identify how technology and other factors related to automation adoption and use can be improved in ways that save lives. In pursuing these objectives, we have instrumented 21 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analysis of the massive-scale dataset collected from the instrumented vehicle fleet. The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is on-going and growing. To date, we have 99 participants, 11,846 days of participation, 405,807 miles, and 5.5 billion video frames. This paper presents the design of the study, the data collection hardware, the processing of the data, and the computer vision algorithms currently being used to extract actionable knowledge from the data. MIT Autonomous Vehicle",
"title": ""
},
{
"docid": "neg:1840516_12",
"text": "A compact dual band-notched ultra-wideband (UWB) multiple-input multiple-output (MIMO) antenna with high isolation is designed on a FR4 substrate (27 × 30 × 0.8 mm3). To improve the input impedance matching and increase the isolation for the frequencies ≥ 4.0 GHz, the two antenna elements with compact size of 5.5 × 11 mm2 are connected to the two protruded ground parts, respectively. A 1/3 λ rectangular metal strip producing a 1.0 λ loop path with the corresponding antenna element is used to obtain the notched frequency from 5.15 to 5.85 GHz. For the rejected band of 3.30-3.70 GHz, a 1/4 λ open slot is etched into the radiator. Moreover, the two protruded ground parts are connected by a compact metal strip to reduce the mutual coupling for the band of 3.0-4.0 GHz. The simulated and measured results show a bandwidth with |S11| ≤ -10 dB, |S21| ≤ -20 dB and frequency ranged from 3.0 to 11.0 GHz excluding the two rejected bands, is achieved, and all the measured and calculated results show the proposed UWB MIMO antenna is a good candidate for UWB MIMO systems.",
"title": ""
},
{
"docid": "neg:1840516_13",
"text": "The potential of BIM is generally recognized in the construction industry, but the practical application of BIM for management purposes is, however, still limited among contractors. The objective of this study is to review the current scheduling process of construction in light of BIM-based scheduling, and to identify how it should be incorporated into current practice. The analysis of the current scheduling processes identifies significant discrepancies between the overall and the detailed levels of scheduling. The overall scheduling process is described as an individual endeavor with limited and unsystematic sharing of knowledge within and between projects. Thus, the reuse of scheduling data and experiences are inadequate, preventing continuous improvements of the overall schedules. Besides, the overall scheduling process suffers from lack of information, caused by uncoordinated and unsynchronized overlap of the design and construction processes. Consequently, the overall scheduling is primarily based on intuition and personal experiences, rather than well founded figures of the specific project. Finally, the overall schedule is comprehensive and complex, and consequently, difficult to overview and communicate. Scheduling on the detailed level, on the other hand, follows a stipulated approach to scheduling, i.e. the Last Planner System (LPS), which is characterized by involvement of all actors in the construction phase. Thus, the major challenge when implementing BIM-based scheduling is to improve overall scheduling, which in turn, can secure a better starting point of the LPS. The study points to the necessity of involving subcontractors and manufactures in the earliest phases of the project in order to create project specific information for the overall schedule. In addition, the design process should be prioritized and coordinated with each craft, a process library should be introduced to promote transfer of knowledge and continuous improvements, and information flow between design and scheduling processes must change from push to pull.",
"title": ""
},
{
"docid": "neg:1840516_14",
"text": "The introduction of semantics on the web will lead to a new generation of services based on content rather than on syntax. Search engines will provide topic-based searches, retrieving resources conceptually related to the user informational need. Queries will be expressed in several ways, and will be mapped on the semantic level defining topics that must be retrieved from the web. Moving towards this new Web era, effective semantic search engines will provide means for successful searches avoiding the heavy burden experimented by users in a classical query-string based search task. In this paper we propose a search engine based on web resource semantics. Resources to be retrieved are semantically annotated using an existing open semantic elaboration platform and an ontology is used to describe the knowledge domain into which perform queries. Ontology navigation provides semantic level reasoning in order to retrieve meaningful resources with respect to a given information request.",
"title": ""
},
{
"docid": "neg:1840516_15",
"text": "The advancement of visual sensing has introduced better capturing of the discrete information from a complex, crowded scene for assisting in the analysis. However, after reviewing existing system, we find that majority of the work carried out till date is associated with significant problems in modeling event detection as well as reviewing abnormality of the given scene. Therefore, the proposed system introduces a model that is capable of identifying the degree of abnormality for an event captured on the crowded scene using unsupervised training methodology. The proposed system contributes to developing a novel region-wise repository to extract the contextual information about the discrete-event for a given scene. The study outcome shows highly improved the balance between the computational time and overall accuracy as compared to the majority of the standard research work emphasizing on event",
"title": ""
},
{
"docid": "neg:1840516_16",
"text": "Accelerated graphics cards, or Graphics Processing Units (GPUs), have become ubiquitous in recent years. On the right kinds of problems, GPUs greatly surpass CPUs in terms of raw performance. However, because they are difficult to program, GPUs are used only for a narrow class of special-purpose applications; the raw processing power made available by GPUs is unused most of the time.\n This paper presents an extension to a Java JIT compiler that executes suitable code on the GPU instead of the CPU. Both static and dynamic features are used to decide whether it is feasible and beneficial to off-load a piece of code on the GPU. The paper presents a cost model that balances the speedup available from the GPU against the cost of transferring input and output data between main memory and GPU memory. The cost model is parameterized so that it can be applied to different hardware combinations. The paper also presents ways to overcome several obstacles to parallelization inherent in the design of the Java bytecode language: unstructured control flow, the lack of multi-dimensional arrays, the precise exception semantics, and the proliferation of indirect references.",
"title": ""
},
{
"docid": "neg:1840516_17",
"text": "Nanoscale windows in graphene (nanowindows) have the ability to switch between open and closed states, allowing them to become selective, fast, and energy-efficient membranes for molecular separations. These special pores, or nanowindows, are not electrically neutral due to passivation of the carbon edges under ambient conditions, becoming flexible atomic frameworks with functional groups along their rims. Through computer simulations of oxygen, nitrogen, and argon permeation, here we reveal the remarkable nanowindow behavior at the atomic scale: flexible nanowindows have a thousand times higher permeability than conventional membranes and at least twice their selectivity for oxygen/nitrogen separation. Also, weakly interacting functional groups open or close the nanowindow with their thermal vibrations to selectively control permeation. This selective fast permeation of oxygen, nitrogen, and argon in very restricted nanowindows suggests alternatives for future air separation membranes. Graphene with nanowindows can have 1000 times higher permeability and four times the selectivity for air separation than conventional membranes, Vallejos-Burgos et al. reveal by molecular simulation, due to flexibility at the nanoscale and thermal vibrations of the nanowindows' functional groups.",
"title": ""
},
{
"docid": "neg:1840516_18",
"text": "In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.",
"title": ""
},
{
"docid": "neg:1840516_19",
"text": "Abstract: Rapid increase in internet users along with growing power of online review sites and social media has given birth to Sentiment analysis or Opinion mining, which aims at determining what other people think and comment. Sentiments or Opinions contain public generated content about products, services, policies and politics. People are usually interested to seek positive and negative opinions containing likes and dislikes, shared by users for features of particular product or service. Therefore product features or aspects have got significant role in sentiment analysis. In addition to sufficient work being performed in text analytics, feature extraction in sentiment analysis is now becoming an active area of research. This review paper discusses existing techniques and approaches for feature extraction in sentiment analysis and opinion mining. In this review we have adopted a systematic literature review process to identify areas well focused by researchers, least addressed areas are also highlighted giving an opportunity to researchers for further work. We have also tried to identify most and least commonly used feature selection techniques to find research gaps for future work. Rapid increase in internet users along with growing power of online review sites and social media has given birth to Sentiment analysis or Opinion mining, which aims at determining what other people think and comment. Sentiments or Opinions contain public generated content about products, services, policies and politics. People are usually interested to seek positive and negative opinions containing likes and dislikes, shared by users for features of particular product or service. Therefore product features or aspects have got significant role in sentiment analysis. In addition to sufficient work being performed in text analytics, feature extraction in sentiment analysis is now becoming an active area of research. This review paper discusses existing techniques and approaches for feature extraction in sentiment analysis and opinion mining. In this review we have adopted a systematic literature review process to identify areas well focused by researchers, least addressed areas are also highlighted giving an opportunity to researchers for further work. We have also tried to identify most and least commonly used feature selection techniques to find research gaps for future work.",
"title": ""
}
] |
1840517 | Evaluating the robustness of repeated measures analyses: the case of small sample sizes and nonnormal data. | [
{
"docid": "pos:1840517_0",
"text": "It has been suggested that when the variance assumptions of a repeated measures ANOVA are not met, the df of the mean square ratio should be adjusted by the sample estimate of the Box correction factor, e. This procedure works well when e is low, but the estimate is seriously biased when this is not the case. An alternate estimate is proposed which is shown by Monte Carlo methods to be less biased for moderately large e.",
"title": ""
}
] | [
{
"docid": "neg:1840517_0",
"text": "Flight delays have a significant impact on the nationpsilas economy. Taxi-out delays in particular constitute a significant portion of the block time of a flight. In the future, it can be expected that accurate predictions of dasiawheels-offpsila time may be used in determining whether an aircraft can meet its allocated slot time, thereby fitting into an en-route traffic flow. Without an accurate taxi-out time prediction for departures, there is no way to effectively manage fuel consumption, emissions, or cost. Dynamically changing operations at the airport makes it difficult to accurately predict taxi-out time. This paper describes a method for estimating average taxi-out times at the airport in 15 minute intervals of the day and at least 15 minutes in advance of aircraft scheduled gate push-back time. A probabilistic framework of stochastic dynamic programming with a learning-based solution strategy called Reinforcement Learning (RL) has been applied. Historic data from the Federal Aviation Administrationpsilas (FAA) Aviation System Performance Metrics (ASPM) database were used to train and test the algorithm. The algorithm was tested on John F. Kennedy International airport (JFK), one of the busiest, challenging, and difficult to predict airports in the United States that significantly influences operations across the entire National Airspace System (NAS). Due to the nature of departure operations at JFK the prediction accuracy of the algorithm for a given day was analyzed in two separate time periods (1) before 4:00 P.M and (2) after 4:00 P.M. On an average across 15 days, the predicted average taxi-out times matched the actual average taxi-out times within plusmn5 minutes for about 65 % of the time (for the period before 4:00 P.M) and 53 % of the time (for the period after 4:00 P.M). The prediction accuracy over the entire day within plusmn5 minutes range of accuracy was about 60 %. Further, application of the RL algorithm to estimate taxi-out times at airports with multi-dependent static surface surveillance data will likely improve the accuracy of prediction. The implications of these results for airline operations and network flow planning are discussed.",
"title": ""
},
{
"docid": "neg:1840517_1",
"text": "This paper explores the relationship between domain scheduling in avirtual machine monitor (VMM) and I/O performance. Traditionally, VMM schedulers have focused on fairly sharing the processor resources among domains while leaving the scheduling of I/O resources as asecondary concern. However, this can resultin poor and/or unpredictable application performance, making virtualization less desirable for applications that require efficient and consistent I/O behavior.\n This paper is the first to study the impact of the VMM scheduler on performance using multiple guest domains concurrently running different types of applications. In particular, different combinations of processor-intensive, bandwidth-intensive, andlatency-sensitive applications are run concurrently to quantify the impacts of different scheduler configurations on processor and I/O performance. These applications are evaluated on 11 different scheduler configurations within the Xen VMM. These configurations include a variety of scheduler extensions aimed at improving I/O performance. This cross product of scheduler configurations and application types offers insight into the key problems in VMM scheduling for I/O and motivates future innovation in this area.",
"title": ""
},
{
"docid": "neg:1840517_2",
"text": "The aim of this paper is to research the effectiveness of SMS verification by understanding the correlation between notification and verification of flood early warning messages. This study contributes to the design of the dissemination techniques for SMS as an early warning messages. The metrics used in this study are using user perceptions of tasks, which include the ease of use (EOU) perception for using SMS and confidence with SMS skills perception, as well as, the users' positive perceptions, which include users' perception of usefulness and satisfaction perception towards using SMS as an early warning messages for floods. Experiments and surveys were conducted in flood-prone areas in Semarang, Indonesia. The results showed that the correlation is in users' perceptions of tasks for the confidence with skill.",
"title": ""
},
{
"docid": "neg:1840517_3",
"text": "Data mining is the extraction of knowledge from large databases. One of the popular data mining techniques is Classification in which different objects are classified into different classes depending on the common properties among them. Decision Trees are widely used in Classification. This paper proposes a tool which applies an enhanced Decision Tree Algorithm to detect the suspicious e-mails about the criminal activities. An improved ID3 Algorithm with enhanced feature selection method and attribute- importance factor is applied to generate a better and faster Decision Tree. The objective is to detect the suspicious criminal activities and minimize them. That's why the tool is named as “Z-Crime” depicting the “Zero Crime” in the society. This paper aims at highlighting the importance of data mining technology to design proactive application to detect the suspicious criminal activities.",
"title": ""
},
{
"docid": "neg:1840517_4",
"text": "Mitochondria are important cellular organelles in most metabolic processes and have a highly dynamic nature, undergoing frequent fission and fusion. The dynamic balance between fission and fusion plays critical roles in mitochondrial functions. In recent studies, several large GTPases have been identified as key molecular factors in mitochondrial fission and fusion. Moreover, the posttranslational modifications of these large GTPases, including phosphorylation, ubiquitination and SUMOylation, have been shown to be involved in the regulation of mitochondrial dynamics. Neurons are particularly sensitive and vulnerable to any abnormalities in mitochondrial dynamics, due to their large energy demand and long extended processes. Emerging evidences have thus indicated a strong linkage between mitochondria and neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease and Huntington's disease. In this review, we will describe the regulation of mitochondrial dynamics and its role in neurodegenerative diseases.",
"title": ""
},
{
"docid": "neg:1840517_5",
"text": "Immunologic checkpoint blockade with antibodies that target cytotoxic T lymphocyte-associated antigen 4 (CTLA-4) and the programmed cell death protein 1 pathway (PD-1/PD-L1) have demonstrated promise in a variety of malignancies. Ipilimumab (CTLA-4) and pembrolizumab (PD-1) are approved by the US Food and Drug Administration for the treatment of advanced melanoma, and additional regulatory approvals are expected across the oncologic spectrum for a variety of other agents that target these pathways. Treatment with both CTLA-4 and PD-1/PD-L1 blockade is associated with a unique pattern of adverse events called immune-related adverse events, and occasionally, unusual kinetics of tumor response are seen. Combination approaches involving CTLA-4 and PD-1/PD-L1 blockade are being investigated to determine whether they enhance the efficacy of either approach alone. Principles learned during the development of CTLA-4 and PD-1/PD-L1 approaches will likely be used as new immunologic checkpoint blocking antibodies begin clinical investigation.",
"title": ""
},
{
"docid": "neg:1840517_6",
"text": "Predictive microbiology is the area of food microbiology that attempts to forecast the quantitative evolution of microbial populations over time. This is achieved to a great extent through models that include the mechanisms governing population dynamics. Traditionally, the models used in predictive microbiology are whole-system continuous models that describe population dynamics by means of equations applied to extensive or averaged variables of the whole system. Many existing models can be classified by specific criteria. We can distinguish between survival and growth models by seeing whether they tackle mortality or cell duplication. We can distinguish between empirical (phenomenological) models, which mathematically describe specific behaviour, and theoretical (mechanistic) models with a biological basis, which search for the underlying mechanisms driving already observed phenomena. We can also distinguish between primary, secondary and tertiary models, by examining their treatment of the effects of external factors and constraints on the microbial community. Recently, the use of spatially explicit Individual-based Models (IbMs) has spread through predictive microbiology, due to the current technological capacity of performing measurements on single individual cells and thanks to the consolidation of computational modelling. Spatially explicit IbMs are bottom-up approaches to microbial communities that build bridges between the description of micro-organisms at the cell level and macroscopic observations at the population level. They provide greater insight into the mesoscale phenomena that link unicellular and population levels. Every model is built in response to a particular question and with different aims. Even so, in this research we conducted a SWOT (Strength, Weaknesses, Opportunities and Threats) analysis of the different approaches (population continuous modelling and Individual-based Modelling), which we hope will be helpful for current and future researchers.",
"title": ""
},
{
"docid": "neg:1840517_7",
"text": "The aims of the study were to evaluate the per- and post-operative complications and outcomes after cystocele repair with transobturator mesh. A retrospective continuous series study was conducted over a period of 3 years. Clinical evaluation was up to 1 year with additional telephonic interview performed after 34 months on average. When stress urinary incontinence (SUI) was associated with the cystocele, it was treated with the same mesh. One hundred twenty-three patients were treated for cystocele. Per-operative complications occurred in six patients. After 1 year, erosion rate was 6.5%, and only three cystoceles recurred. After treatment of SUI with the same mesh, 87.7% restored continence. Overall patient’s satisfaction rate was 93.5%. Treatment of cystocele using transobturator four arms mesh appears to reduce the risk of recurrence at 1 year, along with high rate of patient’s satisfaction. The transobturator path of the prosthesis arms seems devoid of serious per- and post-operative risks and allows restoring continence when SUI is present.",
"title": ""
},
{
"docid": "neg:1840517_8",
"text": "The aim of Chapter 2 is to give an overview of the GPR basic principles and technology. A lot of definitions and often-used terms that will be used throughout the whole work will be explained here. Readers who are familiar with GPR and the demining application can skip parts of this chapter. Section 2.2.4 however can be interesting since a description of the hardware and the design parameters of a time domain GPR are given there. The description is far from complete, but it gives a good overview of the technological difficulties encountered in GPR systems.",
"title": ""
},
{
"docid": "neg:1840517_9",
"text": "This review provides a comprehensive examination of the literature surrounding the current state of K–12 distance education. The growth in K–12 distance education follows in the footsteps of expanded learning opportunities at all levels of public education and training in corporate environments. Implementation has been accomplished with a limited research base, often drawing from studies in adult distance education and policies adapted from traditional learning environments. This review of literature provides an overview of the field of distance education with a focus on the research conducted in K–12 distance education environments. (",
"title": ""
},
{
"docid": "neg:1840517_10",
"text": "The rapidly growing world energy use has already raised concerns over supply difficulties, exhaustion of energy resources and heavy environmental impacts (ozone layer depletion, global warming, climate change, etc.). The global contribution from buildings towards energy consumption, both residential and commercial, has steadily increased reaching figures between 20% and 40% in developed countries, and has exceeded the other major sectors: industrial and transportation. Growth in population, increasing demand for building services and comfort levels, together with the rise in time spent inside buildings, assure the upward trend in energy demand will continue in the future. For this reason, energy efficiency in buildings is today a prime objective for energy policy at regional, national and international levels. Among building services, the growth in HVAC systems energy use is particularly significant (50% of building consumption and 20% of total consumption in the USA). This paper analyses available information concerning energy consumption in buildings, and particularly related to HVAC systems. Many questions arise: Is the necessary information available? Which are the main building types? What end uses should be considered in the breakdown? Comparisons between different countries are presented specially for commercial buildings. The case of offices is analysed in deeper detail. # 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840517_11",
"text": "In this paper, a triple active bridge converter is proposed. The topology is capable of achieving ZVS across the full load range with wide input voltage while minimizing heavy load conduction losses to increase overall efficiency. This topology comprises three full bridges coupled by a three-winding transformer. At light load, by adjusting the phase shift between two input bridges, all switching devices can maintain ZVS due to a controlled circulating current. At heavy load, the two input bridges work in parallel to reduce conduction loss. The operation principles of this topology are introduced and the ZVS boundaries are derived. Based on analytical models of power loss, a 200W laboratory prototype has been built to verify theoretical considerations.",
"title": ""
},
{
"docid": "neg:1840517_12",
"text": "BACKGROUND\nThe aging of society is a global trend, and care of older adults with dementia is an urgent challenge. As dementia progresses, patients exhibit negative emotions, memory disorders, sleep disorders, and agitated behavior. Agitated behavior is one of the most difficult problems for family caregivers and healthcare providers to handle when caring for older adults with dementia.\n\n\nPURPOSE\nThe aim of this study was to investigate the effectiveness of white noise in improving agitated behavior, mental status, and activities of daily living in older adults with dementia.\n\n\nMETHODS\nAn experimental research design was used to study elderly participants two times (pretest and posttest). Six dementia care centers in central and southern Taiwan were targeted to recruit participants. There were 63 participants: 28 were in the experimental group, and 35 were in the comparison group. Experimental group participants received 20 minutes of white noise consisting of ocean, rain, wind, and running water sounds between 4 and 5 P.M. daily over a period of 4 weeks. The comparison group received routine care. Questionnaires were completed, and observations of agitated behaviors were collected before and after the intervention.\n\n\nRESULTS\nAgitated behavior in the experimental group improved significantly between pretest and posttest. Furthermore, posttest scores on the Mini-Mental Status Examination and Barthel Index were slightly better for this group than at pretest. However, the experimental group registered no significant difference in mental status or activities of daily living at posttest. For the comparison group, agitated behavior was unchanged between pretest and posttest.\n\n\nCONCLUSIONS\nThe results of this study support white noise as a simple, convenient, and noninvasive intervention that improves agitated behavior in older adults with dementia. These results may provide a reference for related healthcare providers, educators, and administrators who care for older adults with dementia.",
"title": ""
},
{
"docid": "neg:1840517_13",
"text": "Automating the development of construction schedules has been an interesting topic for researchers around the world for almost three decades. Researchers have approached solving scheduling problems with different tools and techniques. Whenever a new artificial intelligence or optimization tool has been introduced, researchers in the construction field have tried to use it to find the answer to one of their key problems—the “better” construction schedule. Each researcher defines this “better” slightly different. This article reviews the research on automation in construction scheduling from 1985 to 2014. It also covers the topic using different approaches, including case-based reasoning, knowledge-based approaches, model-based approaches, genetic algorithms, expert systems, neural networks, and other methods. The synthesis of the results highlights the share of the aforementioned methods in tackling the scheduling challenge, with genetic algorithms shown to be the most dominant approach. Although the synthesis reveals the high applicability of genetic algorithms to the different aspects of managing a project, including schedule, cost, and quality, it exposed a more limited project management application for the other methods.",
"title": ""
},
{
"docid": "neg:1840517_14",
"text": "In this paper we describe a privacy-preserving method for commissioning an IoT device into a cloud ecosystem. The commissioning consists of the device proving its manufacturing provenance in an anonymous fashion without reliance on a trusted third party, and for the device to be anonymously registered through the use of a blockchain system. We introduce the ChainAnchor architecture that provides device commissioning in a privacy-preserving fashion. The goal of ChainAnchor is (i) to support anonymous device commissioning, (ii) to support device-owners being remunerated for selling their device sensor-data to service providers, and (iii) to incentivize device-owners and service providers to share sensor-data in a privacy-preserving manner.",
"title": ""
},
{
"docid": "neg:1840517_15",
"text": "Voice SMS is an application developed in this work that allows a user to record and convert spoken messages into SMS text message. User can send messages to the entered phone number or the number of contact from the phonebook. Speech recognition is done via the Internet, connecting to Google's server. The application is adapted to input messages in English. Used tools are Android SDK and the installation is done on mobile phone with Android operating system. In this article we will give basic features of the speech recognition and used algorithm. Speech recognition for Voice SMS uses a technique based on hidden Markov models (HMM - Hidden Markov Model). It is currently the most successful and most flexible approach to speech recognition.",
"title": ""
},
{
"docid": "neg:1840517_16",
"text": "With the prevalence of server blades and systems-on-a-chip (SoCs), interconnection networks are becoming an important part of the microprocessor landscape. However, there is limited tool support available for their design. While performance simulators have been built that enable performance estimation while varying network parameters, these cover only one metric of interest in modern designs. System power consumption is increasingly becoming equally, if not more important than performance. It is now critical to get detailed power-performance tradeoff information early in the microarchitectural design cycle. This is especially so as interconnection networks consume a significant fraction of total system power. It is exactly this gap that the work presented in this paper aims to fill.We present Orion, a power-performance interconnection network simulator that is capable of providing detailed power characteristics, in addition to performance characteristics, to enable rapid power-performance trade-offs at the architectural-level. This capability is provided within a general framework that builds a simulator starting from a microarchitectural specification of the interconnection network. A key component of this construction is the architectural-level parameterized power models that we have derived as part of this effort. Using component power models and a synthesized efficient power (and performance) simulator, a microarchitect can rapidly explore the design space. As case studies, we demonstrate the use of Orion in determining optimal system parameters, in examining the effect of diverse traffic conditions, as well as evaluating new network microarchitectures. In each of the above, the ability to simultaneously monitor power and performance is key in determining suitable microarchitectures.",
"title": ""
},
{
"docid": "neg:1840517_17",
"text": "IQ heritability, the portion of a population's IQ variability attributable to the effects of genes, has been investigated for nearly a century, yet it remains controversial. Covariance between relatives may be due not only to genes, but also to shared environments, and most previous models have assumed different degrees of similarity induced by environments specific to twins, to non-twin siblings (henceforth siblings), and to parents and offspring. We now evaluate an alternative model that replaces these three environments by two maternal womb environments, one for twins and another for siblings, along with a common home environment. Meta-analysis of 212 previous studies shows that our ‘maternal-effects’ model fits the data better than the ‘family-environments’ model. Maternal effects, often assumed to be negligible, account for 20% of covariance between twins and 5% between siblings, and the effects of genes are correspondingly reduced, with two measures of heritability being less than 50%. The shared maternal environment may explain the striking correlation between the IQs of twins, especially those of adult twins that were reared apart. IQ heritability increases during early childhood, but whether it stabilizes thereafter remains unclear. A recent study of octogenarians, for instance, suggests that IQ heritability either remains constant through adolescence and adulthood, or continues to increase with age. Although the latter hypothesis has recently been endorsed, it gathers only modest statistical support in our analysis when compared to the maternal-effects hypothesis. Our analysis suggests that it will be important to understand the basis for these maternal effects if ways in which IQ might be increased are to be identified.",
"title": ""
},
{
"docid": "neg:1840517_18",
"text": "We report four experiments examining effects of instance similarity on the application of simple explicit rules. We found effects of similarity to illustrative exemplars in error patterns and reaction times. These effects arose even though participants were given perfectly predictive rules, the similarity manipulation depended entirely on rule-irrelevant features, and attention to exemplar similarity was detrimental to task performance. Comparison of results across studies suggests that the effects are mandatory, non-strategic and not subject to conscious control, and as a result, should be pervasive throughout categorization.",
"title": ""
},
{
"docid": "neg:1840517_19",
"text": "If self-regulation conforms to an energy or strength model, then self-control should be impaired by prior exertion. In Study 1, trying to regulate one's emotional response to an upsetting movie was followed by a decrease in physical stamina. In Study 2, suppressing forbidden thoughts led to a subsequent tendency to give up quickly on unsolvable anagrams. In Study 3, suppressing thoughts impaired subsequent efforts to control the expression of amusement and enjoyment. In Study 4, autobiographical accounts of successful versus failed emotional control linked prior regulatory demands and fatigue to self-regulatory failure. A strength model of self-regulation fits the data better than activation, priming, skill, or constant capacity models of self-regulation.",
"title": ""
}
] |
1840518 | Code Hot Spot: A tool for extraction and analysis of code change history | [
{
"docid": "pos:1840518_0",
"text": "High cohesion is a desirable property of software as it positively impacts understanding, reuse, and maintenance. Currently proposed measures for cohesion in Object-Oriented (OO) software reflect particular interpretations of cohesion and capture different aspects of it. Existing approaches are largely based on using the structural information from the source code, such as attribute references, in methods to measure cohesion. This paper proposes a new measure for the cohesion of classes in OO software systems based on the analysis of the unstructured information embedded in the source code, such as comments and identifiers. The measure, named the Conceptual Cohesion of Classes (C3), is inspired by the mechanisms used to measure textual coherence in cognitive psychology and computational linguistics. This paper presents the principles and the technology that stand behind the C3 measure. A large case study on three open source software systems is presented which compares the new measure with an extensive set of existing metrics and uses them to construct models that predict software faults. The case study shows that the novel measure captures different aspects of class cohesion compared to any of the existing cohesion measures. In addition, combining C3 with existing structural cohesion metrics proves to be a better predictor of faulty classes when compared to different combinations of structural cohesion metrics.",
"title": ""
},
{
"docid": "pos:1840518_1",
"text": "The size and high rate of change of source code comprising a software system make it difficult for software developers to keep up with who on the team knows about particular parts of the code. Existing approaches to this problem are based solely on authorship of code. In this paper, we present data from two professional software development teams to show that both authorship and interaction information about how a developer interacts with the code are important in characterizing a developer's knowledge of code. We introduce the degree-of-knowledge model that computes automatically a real value for each source code element based on both authorship and interaction information. We show that the degree-of-knowledge model can provide better results than an existing expertise finding approach and also report on case studies of the use of the model to support knowledge transfer and to identify changes of interest.",
"title": ""
}
] | [
{
"docid": "neg:1840518_0",
"text": "Large numbers of websites have started to markup their content using standards such as Microdata, Microformats, and RDFa. The marked-up content elements comprise descriptions of people, organizations, places, events, products, ratings, and reviews. This development has accelerated in last years as major search engines such as Google, Bing and Yahoo! use the markup to improve their search results. Embedding semantic markup facilitates identifying content elements on webpages. However, the markup is mostly not as fine-grained as desirable for applications that aim to integrate data from large numbers of websites. This paper discusses the challenges that arise in the task of integrating descriptions of electronic products from several thousand e-shops that offer Microdata markup. We present a solution for each step of the data integration process including Microdata extraction, product classification, product feature extraction, identity resolution, and data fusion. We evaluate our processing pipeline using 1.9 million product offers from 9240 e-shops which we extracted from the Common Crawl 2012, a large public Web corpus.",
"title": ""
},
{
"docid": "neg:1840518_1",
"text": "We present an active detection model for localizing objects in scenes. The model is class-specific and allows an agent to focus attention on candidate regions for identifying the correct location of a target object. This agent learns to deform a bounding box using simple transformation actions, with the goal of determining the most specific location of target objects following top-down reasoning. The proposed localization agent is trained using deep reinforcement learning, and evaluated on the Pascal VOC 2007 dataset. We show that agents guided by the proposed model are able to localize a single instance of an object after analyzing only between 11 and 25 regions in an image, and obtain the best detection results among systems that do not use object proposals for object localization.",
"title": ""
},
{
"docid": "neg:1840518_2",
"text": "By Ravindra K. Ahuja, Thomas L. Magnanti, James B. Orlin : Network Flows: Theory, Algorithms, and Applications bringing together the classic and the contemporary aspects of the field this comprehensive introduction to network flows provides an integrative view of theory network flows pearson new international edition theory algorithms and applications on amazon free shipping on qualifying offers Network Flows: Theory, Algorithms, and Applications:",
"title": ""
},
{
"docid": "neg:1840518_3",
"text": "Suppose fx, h , ■ • ■ , fk are polynomials in one variable with all coefficients integral and leading coefficients positive, their degrees being h\\ , h2, •• -, A* respectively. Suppose each of these polynomials is irreducible over the field of rational numbers and no two of them differ by a constant factor. Let Q(fx ,f2, • • • ,fk ; N) denote the number of positive integers n between 1 and N inclusive such that /i(n), f2(n), • ■ ■ , fk(n) are all primes. (We ignore the finitely many values of n for which some /,(n) is negative.) Then heuristically we would expect to have for N large",
"title": ""
},
{
"docid": "neg:1840518_4",
"text": "We present a simple hierarchical Bayesian approach to the modeling collections of texts and other large-scale data collections. For text collections, we posit that a document is generated by choosing a random set of multinomial probabilities for a set of possible “topics,” and then repeatedly generating words by sampling from the topic mixture. This model is intractable for exact probabilistic inference, but approximate posterior probabilities and marginal likelihoods can be obtained via fast variational methods. We also present extensions to coupled models for joint text/image data and multiresolution models for topic hierarchies.",
"title": ""
},
{
"docid": "neg:1840518_5",
"text": "In many applications based on the use of unmanned aerial vehicles (UAVs), it is possible to establish a cluster of UAVs in which each UAV knows the other vehicle's position. Assuming that the common channel condition between any two nodes of UAVs is line-of-sight (LOS), the time and energy consumption for data transmission on each path that connecting two nodes may be estimated by a node itself. In this paper, we use a modified Bellman-Ford algorithm to find the best selection of relay nodes in order to minimize the time and energy consumption for data transmission between any UAV node in the cluster and the UAV acting as the cluster head. This algorithm is applied with a proposed cooperative MAC protocol that is compatible with the IEEE 802.11 standard. The evaluations under data saturation conditions illustrate noticeable benefits in successful packet delivery ratio, average delay, and in particular the cost of time and energy.",
"title": ""
},
{
"docid": "neg:1840518_6",
"text": "The 15 kV SiC N-IGBT is the state-of-the-art high voltage power semiconductor device developed by Cree. The SiC IGBT is exposed to a peak stress of 10-11 kV in power converter systems, with punch-through turn-on dv/dt over 100 kV/μs and turn-off dv/dt about 35 kV/μs. Such high dv/dt requires ultralow coupling capacitance in the dc-dc isolation stage of the gate driver for maintaining fidelity of the signals on the control-supply ground side. Accelerated aging of the insulation in the isolation stage is another serious concern. In this paper, a simple transformer based isolation with a toroid core is investigated for the above requirements of the 15 kV IGBT. The gate driver prototype has been developed with over 100 kV dc insulation capability, and its inter-winding coupling capacitance has been found to be 3.4 pF and 13 pF at 50 MHz and 100 MHz respectively. The performance of the gate driver prototype has been evaluated up to the above mentioned specification using double-pulse tests on high-side IGBT in a half-bridge configuration. The continuous testing at 5 kHz has been performed till 8 kV, and turn-on dv/dt of 85 kV/μs on a buck-boost converter. The corresponding experimental results are presented. Also, the test methodology of evaluating the gate driver at such high voltage, without a high voltage power supply is discussed. Finally, experimental results validating fidelity of the signals on the control-ground side are provided to show the influence of increased inter-winding coupling capacitance on the performance of the gate driver.",
"title": ""
},
{
"docid": "neg:1840518_7",
"text": "Deep neural networks have proven to be particularly eective in visual and audio recognition tasks. Existing models tend to be computationally expensive and memory intensive, however, and so methods for hardwareoriented approximation have become a hot topic. Research has shown that custom hardware-based neural network accelerators can surpass their general-purpose processor equivalents in terms of both throughput and energy eciency. Application-tailored accelerators, when co-designed with approximation-based network training methods, transform large, dense and computationally expensive networks into small, sparse and hardware-ecient alternatives, increasing the feasibility of network deployment. In this article, we provide a comprehensive evaluation of approximation methods for high-performance network inference along with in-depth discussion of their eectiveness for custom hardware implementation. We also include proposals for future research based on a thorough analysis of current trends. is article represents the rst survey providing detailed comparisons of custom hardware accelerators featuring approximation for both convolutional and recurrent neural networks, through which we hope to inspire exciting new developments in the eld.",
"title": ""
},
{
"docid": "neg:1840518_8",
"text": "In a document retrieval, or other pattern matching environment where stored entities (documents) are compared with each other or with incoming patterns (search requests), it appears that the best indexing (property) space is one where each entity lies as far away from the others as possible; in these circumstances the value of an indexing system may be expressible as a function of the density of the object space; in particular, retrieval performance may correlate inversely with space density. An approach based on space density computations is used to choose an optimum indexing vocabulary for a collection of documents. Typical evaluation results are shown, demonstating the usefulness of the model.",
"title": ""
},
{
"docid": "neg:1840518_9",
"text": "Wireless communication by leveraging the use of low-altitude unmanned aerial vehicles (UAVs) has received significant interests recently due to its low-cost and flexibility in providing wireless connectivity in areas without infrastructure coverage. This paper studies a UAV-enabled mobile relaying system, where a high-mobility UAV is deployed to assist in the information transmission from a ground source to a ground destination with their direct link blocked. By assuming that the UAV adopts the energy-efficient circular trajectory and employs time-division duplexing (TDD) based decode-and-forward (DF) relaying, we maximize the spectrum efficiency (SE) in bits/second/Hz as well as energy efficiency (EE) in bits/Joule of the considered system by jointly optimizing the time allocations for the UAV's relaying together with its flying speed and trajectory. It is revealed that for UAV-enabled mobile relaying with the UAV propulsion energy consumption taken into account, there exists a trade-off between the maximum achievable SE and EE by exploiting the new degree of freedom of UAV trajectory design.",
"title": ""
},
{
"docid": "neg:1840518_10",
"text": "Although the accuracy of super-resolution (SR) methods based on convolutional neural networks (CNN) soars high, the complexity and computation also explode with the increased depth and width of the network. Thus, we propose the convolutional anchored regression network (CARN) for fast and accurate single image super-resolution (SISR). Inspired by locally linear regression methods (A+ and ARN), the new architecture consists of regression blocks that map input features from one feature space to another. Different from A+ and ARN, CARN is no longer relying on or limited by hand-crafted features. Instead, it is an end-to-end design where all the operations are converted to convolutions so that the key concepts, i.e., features, anchors, and regressors, are learned jointly. The experiments show that CARN achieves the best speed and accuracy trade-off among the SR methods. The code is available at https://github.com/ofsoundof/CARN.",
"title": ""
},
{
"docid": "neg:1840518_11",
"text": "This article addresses the position taken by Clark (1983) that media do not influence learning under any conditions. The article reframes the questions raised by Clark to explore the conditions under which media will influence learning. Specifically, it posits the need to consider the capabilities of media, and the methods that employ them, as they interact with the cognitive and social processes by which knowledge is constructed. This approach is examined within the context of two major media-based projects, one which uses computers and the other,video. The article discusses the implications of this approach for media theory, research and practice.",
"title": ""
},
{
"docid": "neg:1840518_12",
"text": "The strategic management of agricultural lands involves crop field monitoring each year. Crop discrimination via remote sensing is a complex task, especially if different crops have a similar spectral response and cropping pattern. In such cases, crop identification could be improved by combining object-based image analysis and advanced machine learning methods. In this investigation, we evaluated the C4.5 decision tree, logistic regression (LR), support vector machine (SVM) and multilayer perceptron (MLP) neural network methods, both as single classifiers and combined in a hierarchical classification, for the mapping of nine major summer crops (both woody and herbaceous) from ASTER satellite images captured in two different dates. Each method was built with different combinations of spectral and textural features obtained after the segmentation of the remote images in an object-based framework. As single classifiers, MLP and SVM obtained maximum overall accuracy of 88%, slightly higher than LR (86%) and notably higher than C4.5 (79%). The SVM+SVM classifier (best method) improved these results to 89%. In most cases, the hierarchical classifiers considerably increased the accuracy of the most poorly classified class (minimum sensitivity). The SVM+SVM method offered a significant improvement in classification accuracy for all of the studied crops compared to OPEN ACCESS Remote Sens. 2014, 6 5020 the conventional decision tree classifier, ranging between 4% for safflower and 29% for corn, which suggests the application of object-based image analysis and advanced machine learning methods in complex crop classification tasks.",
"title": ""
},
{
"docid": "neg:1840518_13",
"text": "Modeling how visual saliency guides the deployment of atten tion over visual scenes has attracted much interest recently — among both computer v ision and experimental/computational researchers — since visual attention is a key function of both machine and biological vision systems. Research efforts in compute r vision have mostly been focused on modeling bottom-up saliency. Strong influences o n attention and eye movements, however, come from instantaneous task demands. Here , w propose models of top-down visual guidance considering task influences. The n ew models estimate the state of a human subject performing a task (here, playing video gam es), and map that state to an eye position. Factors influencing state come from scene gi st, physical actions, events, and bottom-up saliency. Proposed models fall into two categ ori s. In the first category, we use classical discriminative classifiers, including Reg ression, kNN and SVM. In the second category, we use Bayesian Networks to combine all the multi-modal factors in a unified framework. Our approaches significantly outperfor m 15 competing bottom-up and top-down attention models in predicting future eye fixat ions on 18,000 and 75,00 video frames and eye movement samples from a driving and a flig ht combat video game, respectively. We further test and validate our approaches o n 1.4M video frames and 11M fixations samples and in all cases obtain higher prediction s c re that reference models.",
"title": ""
},
{
"docid": "neg:1840518_14",
"text": "Skyline queries ask for a set of interesting points from a potentially large set of data points. If we are traveling, for instance, a restaurant might be interesting if there is no other restaurant which is nearer, cheaper, and has better food. Skyline queries retrieve all such interesting restaurants so that the user can choose the most promising one. In this paper, we present a new online algorithm that computes the Skyline. Unlike most existing algorithms that compute the Skyline in a batch, this algorithm returns the first results immediately, produces more and more results continuously, and allows the user to give preferences during the running time of the algorithm so that the user can control what kind of results are produced next (e.g., rather cheap or rather near restaurants).",
"title": ""
},
{
"docid": "neg:1840518_15",
"text": "Women are generally more risk averse than men. We investigated whether between- and within-gender variation in financial risk aversion was accounted for by variation in salivary concentrations of testosterone and in markers of prenatal testosterone exposure in a sample of >500 MBA students. Higher levels of circulating testosterone were associated with lower risk aversion among women, but not among men. At comparably low concentrations of salivary testosterone, however, the gender difference in risk aversion disappeared, suggesting that testosterone has nonlinear effects on risk aversion regardless of gender. A similar relationship between risk aversion and testosterone was also found using markers of prenatal testosterone exposure. Finally, both testosterone levels and risk aversion predicted career choices after graduation: Individuals high in testosterone and low in risk aversion were more likely to choose risky careers in finance. These results suggest that testosterone has both organizational and activational effects on risk-sensitive financial decisions and long-term career choices.",
"title": ""
},
{
"docid": "neg:1840518_16",
"text": "We propose a multistart CMA-ES with equal budgets for two interlaced restart strategies, one with an increasing population size and one with varying small population sizes. This BI-population CMA-ES is benchmarked on the BBOB-2009 noiseless function testbed and could solve 23, 22 and 20 functions out of 24 in search space dimensions 10, 20 and 40, respectively, within a budget of less than $10^6 D$ function evaluations per trial.",
"title": ""
},
{
"docid": "neg:1840518_17",
"text": "In this article, we present some extensions of the rough set approach and we outline a challenge for the rough set based research. 2006 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840518_18",
"text": "Many clustering algorithms suffer from scalability problems on massive datasets and do not support any user interaction during runtime. To tackle these problems, anytime clustering algorithms are proposed. They produce a fast approximate result which is continuously refined during the further run. Also, they can be stopped or suspended anytime and provide an answer. In this paper, we propose a novel anytime clustering algorithm based on the density-based clustering paradigm. Our algorithm called A-DBSCAN is applicable to very high dimensional databases such as time series, trajectory, medical data, etc. The general idea of our algorithm is to use a sequence of lower-bounding functions (LBs) of the true similarity measure to produce multiple approximate results of the true density-based clusters. ADBSCAN operates in multiple levels w.r.t. the LBs and is mainly based on two algorithmic schemes: (1) an efficient distance upgrade scheme which restricts distance calculations to core-objects at each level of the LBs; (2) a local reclustering scheme which restricts update operations to the relevant objects only. Extensive experiments demonstrate that A-DBSCAN acquires very good clustering results at very early stages of execution thus saves a large amount of computational time. Even if it runs to the end, A-DBSCAN is still orders of magnitude faster than DBSCAN.",
"title": ""
}
] |
1840519 | Association between substandard classroom ventilation rates and students' academic achievement. | [
{
"docid": "pos:1840519_0",
"text": "This meta-analysis reviewed the literature on socioeconomic status (SES) and academic achievement in journal articles published between 1990 and 2000. The sample included 101,157 students, 6,871 schools, and 128 school districts gathered from 74 independent samples. The results showed a medium to strong SES–achievement relation. This relation, however, is moderated by the unit, the source, the range of SES variable, and the type of SES–achievement measure. The relation is also contingent upon school level, minority status, and school location. The author conducted a replica of White’s (1982) meta-analysis to see whether the SES–achievement correlation had changed since White’s initial review was published. The results showed a slight decrease in the average correlation. Practical implications for future research and policy are discussed.",
"title": ""
}
] | [
{
"docid": "neg:1840519_0",
"text": "Computer architects put significant efforts on the design space exploration of a new processor, as it determines the overall characteristics (e.g., performance, power, cost) of the final product. To thoroughly explore the space and achieve the best results, they need high design evaluation throughput – the ability to quickly assess a large number of designs with minimal costs. Unfortunately, the existing simulators and performance models are either too slow or too inaccurate to meet this demand. As a result, architects often sacrifice the design space coverage to end up with a sub-optimal product. To address this challenge, we propose RpStacks-MT, a methodology to evaluate multi-core processor designs with high throughput. First, we propose a graph-based multi-core performance model, which overcomes the limitations of the existing models to accurately describe a multi-core processor's key performance behaviors. Second, we propose a reuse distance-based memory system model and a dynamic scheduling reconstruction method, which help our graph model to quickly track the performance changes from processor design changes. Lastly, we combine these models with a state of the art design exploration idea to evaluate multiple processor designs in an efficient way. Our evaluations show that RpStacks-MT achieves extremely high design evaluation throughput – 88× higher versus a conventional cycle-level simulator and 18× higher versus an accelerated simulator (on average, for evaluating 10,000 designs) – while maintaining simulator-level accuracy.",
"title": ""
},
{
"docid": "neg:1840519_1",
"text": "In today’s modern communication industry, antennas are the most important components required to create a communication link. Microstrip antennas are the most suited for aerospace and mobile applications because of their low profile, light weight and low power handling capacity. They can be designed in a variety of shapes in order to obtain enhanced gain and bandwidth, dual band and circular polarization to even ultra wideband operation. The thesis provides a detailed study of the design of probe-fed Rectangular Microstrip Patch Antenna to facilitate dual polarized, dual band operation. The design parameters of the antenna have been calculated using the transmission line model and the cavity model. For the simulation process IE3D electromagnetic software which is based on method of moment (MOM) has been used. The effect of antenna dimensions and substrate parameters on the performance of antenna have been discussed. The antenna has been designed with embedded spur lines and integrated reactive loading for dual band operation with better impedance matching. The designed antenna can be operated at two frequency band with center frequencies 7.62 (with a bandwidth of 11.68%) and 9.37 GHz (with a bandwidth of 9.83%). A cross slot of unequal length has been inserted so as to have dual polarization. This results in a minor shift in the central frequencies of the two bands to 7.81 and 9.28 GHz. At a frequency of 9.16 GHz, circular polarization has been obtained. So the dual band and dual frequency operation has successfully incorporated into a single patch.",
"title": ""
},
{
"docid": "neg:1840519_2",
"text": "In this paper, we analyze if cascade usage of the context encoder with increasing input can improve the results of the inpainting. For this purpose, we train context encoder for 64x64 pixels images in a standard way and use its resized output to fill in the missing input region of the 128x128 context encoder, both in training and evaluation phase. As the result, the inpainting is visibly more plausible. In order to thoroughly verify the results, we introduce normalized squared-distortion, a measure for quantitative inpainting evaluation, and we provide its mathematical explanation. This is the first attempt to formalize the inpainting measure, which is based on the properties of latent feature representation, instead of L2 reconstruction loss.",
"title": ""
},
{
"docid": "neg:1840519_3",
"text": "A constructive algorithm is proposed for feed-forward neural networks which uses node-splitting in the hidden layers to build large networks from smaller ones. The small network forms an approximate model of a set of training data, and the split creates a larger, more powerful network which is initialised with the approximate solution already found. The insufficiency of the smaller network in modelling the system which generated the data leads to oscillation in those hidden nodes whose weight vectors cover regions in the input space where more detail is required in the model. These nodes are identified and split in two using principal component analysis, allowing the new nodes to cover the two main modes of the oscillating vector. Nodes are selected for splitting using principal component analysis on the oscillating weight vectors, or by examining the Hessian matrix of second derivatives of the network error with respect to the weights.",
"title": ""
},
{
"docid": "neg:1840519_4",
"text": "Online distance e-learning systems allow introducing innovative methods in pedagogy, along with studying their effectiveness. Assessing the system effectiveness is based on analyzing the log files to track the studying time, the number of connections, and earned game bonus points. This study is based on an example of the online application for practical foreign language speaking skills training between random users, which select the role of a teacher or a student on their own. The main features of the developed system include pre-defined synchronized teaching and learning materials displayed for both participants, along with user motivation by means of gamification. The actual percentage of successful connects between specifically unmotivated and unfamiliar with each other users was measured. The obtained result can be used for gauging the developed system success and the proposed teaching methodology in general. Keywords—elearning; gamification; marketing; monetization; viral marketing; virality",
"title": ""
},
{
"docid": "neg:1840519_5",
"text": "Wearable smart devices are already amongst us. Currently, smartwatches are one of the key drivers of the wearable technology and are being used by a large population of consumers. This paper takes a first look at this increasingly popular technology with a systematic characterization of the smartwatch app markets. We conduct a large scale analysis of three popular smartwatch app markets: Android Wear, Samsung, and Apple, and characterize more than 14,000 smartwatch apps in multiple aspects such as prices, number of developers and categories. Our analysis shows that approximately 41% and 30% of the apps in Android Wear and Samsung app markets are Personalization apps that provide watch faces. Further, we provide a generic taxonomy for apps on all three platforms based on their packaging and modes of communication, that allow us to investigate apps with respect to privacy and security. Finally, we study the privacy risks associated with the app usage by identifying third party trackers integrated into these apps and personal information leakage through network traffic analysis. We show that a higher percentage of Apple apps (62%) are connected to third party trackers compared to Samsung (36%) and Android Wear (46%).",
"title": ""
},
{
"docid": "neg:1840519_6",
"text": "Argumentative text has been analyzed both theoretically and computationally in terms of argumentative structure that consists of argument components (e.g., claims, premises) and their argumentative relations (e.g., support, attack). Less emphasis has been placed on analyzing the semantic types of argument components. We propose a two-tiered annotation scheme to label claims and premises and their semantic types in an online persuasive forum, Change My View, with the long-term goal of understanding what makes a message persuasive. Premises are annotated with the three types of persuasive modes: ethos, logos, pathos, while claims are labeled as interpretation, evaluation, agreement, or disagreement, the latter two designed to account for the dialogical nature of our corpus. We aim to answer three questions: 1) can humans reliably annotate the semantic types of argument components? 2) are types of premises/claims positioned in recurrent orders? and 3) are certain types of claims and/or premises more likely to appear in persuasive messages than in nonpersuasive messages?",
"title": ""
},
{
"docid": "neg:1840519_7",
"text": "This paper presents a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions. The method is based upon a mechanically accurate model for static elastic rods (Kirchhoff model), which accounts for the natural curliness of hair, as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. This yields various typical hair configurations that can be observed in the real world, such as ringlets. As our results show, the method can generate different hair types with a very few input parameters, and perform virtual hairdressing operations such as wetting, cutting and drying hair.",
"title": ""
},
{
"docid": "neg:1840519_8",
"text": "In this article, a novel Vertical Take-Off and Landing (VTOL) Single Rotor Unmanned Aerial Vehicle (SR-UAV) will be presented. The SRUAV's design properties will be analysed in detail, with respect to technical novelties outlining the merits of such a conceptual approach. The system's model will be mathematically formulated, while a cascaded P-PI and PID-based control structure will be utilized in extensive simulation trials for the preliminary evaluation of the SR-UAV's attitude and translational performance.",
"title": ""
},
{
"docid": "neg:1840519_9",
"text": "This paper presents LiteOS, a multi-threaded operating system that provides Unix-like abstractions for wireless sensor networks. Aiming to be an easy-to-use platform, LiteOS offers a number of novel features, including: (1) a hierarchical file system and a wireless shell interface for user interaction using UNIX-like commands; (2) kernel support for dynamic loading and native execution of multithreaded applications; and (3) online debugging, dynamic memory, and file system assisted communication stacks. LiteOS also supports software updates through a separation between the kernel and user applications, which are bridged through a suite of system calls. Besides the features that have been implemented, we also describe our perspective on LiteOS as an enabling platform. We evaluate the platform experimentally by measuring the performance of common tasks, and demonstrate its programmability through twenty-one example applications.",
"title": ""
},
{
"docid": "neg:1840519_10",
"text": "The landscape of computing capabilities within the home has seen a recent shift from persistent desktops to mobile platforms, which has led to the use of the cloud as the primary computing platform implemented by developers today. Cloud computing platforms, such as Amazon EC2 and Google App Engine, are popular for many reasons including their reliable, always on, and robust nature. The capabilities that centralized computing platforms provide are inherent to their implementation, and unmatched by previous platforms (e.g., Desktop applications). Thus, third-party developers have come to rely on cloud computing platforms to provide high quality services to their end-users.",
"title": ""
},
{
"docid": "neg:1840519_11",
"text": "Motivated by increased concern over energy consumption in modern data centers, we propose a new, distributed computing platform called Nano Data Centers (NaDa). NaDa uses ISP-controlled home gateways to provide computing and storage services and adopts a managed peer-to-peer model to form a distributed data center infrastructure. To evaluate the potential for energy savings in NaDa platform we pick Video-on-Demand (VoD) services. We develop an energy consumption model for VoD in traditional and in NaDa data centers and evaluate this model using a large set of empirical VoD access data. We find that even under the most pessimistic scenarios, NaDa saves at least 20% to 30% of the energy compared to traditional data centers. These savings stem from energy-preserving properties inherent to NaDa such as the reuse of already committed baseline power on underutilized gateways, the avoidance of cooling costs, and the reduction of network energy consumption as a result of demand and service co-localization in NaDa.",
"title": ""
},
{
"docid": "neg:1840519_12",
"text": "Radar theory and radar system have developed a lot for the last 50 years or so. Recently, a new concept in array radar has been introduced by the multiple-input multiple-output (MIMO) radar, which has the potential to dramatically improve the performance of radars in parameters estimation. While an earlier appeared concept, synthetic impulse and aperture radar (SIAR) is a typical kind of MIMO radar and probes a channel by transmitting multiple signals separated both spectrally and spatially. To the best knowledge of the authors, almost all the analyses available are based on the simple linear array while our SIAR system is based on a circular array. This paper first introduces the recent research and development in and the features of MIMO radars, then discusses our SIAR system as a specific example of MIMO system and finally the unique advantages of SIAR are listed",
"title": ""
},
{
"docid": "neg:1840519_13",
"text": "Thousands of unique non-coding RNA (ncRNA) sequences exist within cells. Work from the past decade has altered our perception of ncRNAs from 'junk' transcriptional products to functional regulatory molecules that mediate cellular processes including chromatin remodelling, transcription, post-transcriptional modifications and signal transduction. The networks in which ncRNAs engage can influence numerous molecular targets to drive specific cell biological responses and fates. Consequently, ncRNAs act as key regulators of physiological programmes in developmental and disease contexts. Particularly relevant in cancer, ncRNAs have been identified as oncogenic drivers and tumour suppressors in every major cancer type. Thus, a deeper understanding of the complex networks of interactions that ncRNAs coordinate would provide a unique opportunity to design better therapeutic interventions.",
"title": ""
},
{
"docid": "neg:1840519_14",
"text": "After being introduced in 2009, the first fully homomorphic encryption (FHE) scheme has created significant excitement in academia and industry. Despite rapid advances in the last 6 years, FHE schemes are still not ready for deployment due to an efficiency bottleneck. Here we introduce a custom hardware accelerator optimized for a class of reconfigurable logic to bring LTV based somewhat homomorphic encryption (SWHE) schemes one step closer to deployment in real-life applications. The accelerator we present is connected via a fast PCIe interface to a CPU platform to provide homomorphic evaluation services to any application that needs to support blinded computations. Specifically we introduce a number theoretical transform based multiplier architecture capable of efficiently handling very large polynomials. When synthesized for the Xilinx Virtex 7 family the presented architecture can compute the product of large polynomials in under 6.25 msec making it the fastest multiplier design of its kind currently available in the literature and is more than 102 times faster than a software implementation. Using this multiplier we can compute a relinearization operation in 526 msec. When used as an accelerator, for instance, to evaluate the AES block cipher, we estimate a per block homomorphic evaluation performance of 442 msec yielding performance gains of 28.5 and 17 times over similar CPU and GPU implementations, respectively.",
"title": ""
},
{
"docid": "neg:1840519_15",
"text": "In the last two decades, Computer Aided Detection (CAD) systems were developed to help radiologists analyse screening mammograms, however benefits of current CAD technologies appear to be contradictory, therefore they should be improved to be ultimately considered useful. Since 2012, deep convolutional neural networks (CNN) have been a tremendous success in image recognition, reaching human performance. These methods have greatly surpassed the traditional approaches, which are similar to currently used CAD solutions. Deep CNN-s have the potential to revolutionize medical image analysis. We propose a CAD system based on one of the most successful object detection frameworks, Faster R-CNN. The system detects and classifies malignant or benign lesions on a mammogram without any human intervention. The proposed method sets the state of the art classification performance on the public INbreast database, AUC = 0.95. The approach described here has achieved 2nd place in the Digital Mammography DREAM Challenge with AUC = 0.85. When used as a detector, the system reaches high sensitivity with very few false positive marks per image on the INbreast dataset. Source code, the trained model and an OsiriX plugin are published online at https://github.com/riblidezso/frcnn_cad.",
"title": ""
},
{
"docid": "neg:1840519_16",
"text": "This Paper reveals the information about Deep Neural Network (DNN) and concept of deep learning in field of natural language processing i.e. machine translation. Now day's DNN is playing major role in machine leaning technics. Recursive recurrent neural network (R2NN) is a best technic for machine learning. It is the combination of recurrent neural network and recursive neural network (such as Recursive auto encoder). This paper presents how to train the recurrent neural network for reordering for source to target language by using Semi-supervised learning methods. Word2vec tool is required to generate word vectors of source language and Auto encoder helps us in reconstruction of the vectors for target language in tree structure. Results of word2vec play an important role in word alignment of the input vectors. RNN structure is very complicated and to train the large data file on word2vec is also a time-consuming task. Hence, a powerful hardware support (GPU) is required. GPU improves the system performance by decreasing training time period.",
"title": ""
},
{
"docid": "neg:1840519_17",
"text": "Weighted median, in the form of either solver or filter, has been employed in a wide range of computer vision solutions for its beneficial properties in sparsity representation. But it is hard to be accelerated due to the spatially varying weight and the median property. We propose a few efficient schemes to reduce computation complexity from O(r2) to O(r) where r is the kernel size. Our contribution is on a new joint-histogram representation, median tracking, and a new data structure that enables fast data access. The effectiveness of these schemes is demonstrated on optical flow estimation, stereo matching, structure-texture separation, image filtering, to name a few. The running time is largely shortened from several minutes to less than 1 second. The source code is provided in the project website.",
"title": ""
},
{
"docid": "neg:1840519_18",
"text": "Gases for electrical insulation are essential for the operation of electric power equipment. This Review gives a brief history of gaseous insulation that involved the emergence of the most potent industrial greenhouse gas known today, namely sulfur hexafluoride. SF6 paved the way to space-saving equipment for the transmission and distribution of electrical energy. Its ever-rising usage in the electrical grid also played a decisive role in the continuous increase of atmospheric SF6 abundance over the last decades. This Review broadly covers the environmental concerns related to SF6 emissions and assesses the latest generation of eco-friendly replacement gases. They offer great potential for reducing greenhouse gas emissions from electrical equipment but at the same time involve technical trade-offs. The rumors of one or the other being superior seem premature, in particular because of the lack of dielectric, environmental, and chemical information for these relatively novel compounds and their dissociation products during operation.",
"title": ""
},
{
"docid": "neg:1840519_19",
"text": "With the rapid growth of product review forums, discussion groups, and Blogs, it is almost impossible for a customer to make an informed purchase decision. Different and possibly contradictory opinions written by different reviewers can even make customers more confused. In the last few years, mining customer reviews (opinion mining) has emerged as an interesting new research direction to address this need. One of the interesting problem in opinion mining is Opinion Question Answering (Opinion QA). While traditional QA can only answer factual questions, opinion QA aims to find the authors' sentimental opinions on a specific target. Current opinion QA systems suffers from several weaknesses. The main cause of these weaknesses is that these methods can only answer a question if they find a content similar to the given question in the given documents. As a result, they cannot answer majority questions like \"What is the best digital camera?\" nor comparative questions, e.g. \"Does SamsungY work better than CanonX?\". In this paper we address the problem of opinion question answering to answer opinion questions about products by using reviewers' opinions. Our proposed method, called Aspect-based Opinion Question Answering (AQA), support answering of opinion-based questions while improving the weaknesses of current techniques. AQA contains five phases: question analysis, question expansion, high quality review retrieval, subjective sentence extraction, and answer grouping. AQA adopts an opinion mining technique in the preprocessing phase to identify target aspects and estimate their quality. Target aspects are attributes or components of the target product that have been commented on in the review, e.g. 'zoom' and 'battery life' for a digital camera. We conduct experiments on a real life dataset, Epinions.com, demonstrating the improved effectiveness of the AQA in terms of the accuracy of the retrieved answers.",
"title": ""
}
] |
1840520 | Kinect v2 Sensor-Based Mobile Terrestrial Laser Scanner for Agricultural Outdoor Applications | [
{
"docid": "pos:1840520_0",
"text": "The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). The purpose of the adaptive approach (dispersion-based adaptive particle filter - DAPF) is to provide higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). With the aim of studying the dynamical PF behavior regarding others and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique named Kullback-Leiber distance.",
"title": ""
}
] | [
{
"docid": "neg:1840520_0",
"text": "We present a comprehensive study of evaluation methods for unsupervised embedding techniques that obtain meaningful representations of words from text. Different evaluations result in different orderings of embedding methods, calling into question the common assumption that there is one single optimal vector representation. We present new evaluation techniques that directly compare embeddings with respect to specific queries. These methods reduce bias, provide greater insight, and allow us to solicit data-driven relevance judgments rapidly and accurately through crowdsourcing.",
"title": ""
},
{
"docid": "neg:1840520_1",
"text": "How does scientific research affect the world around us? Being able to answer this question is of great importance in order to appropriately channel efforts and resources in science. The impact by scientists in academia is currently measured by citation based metrics such as h-index, i-index and citation counts. These academic metrics aim to represent the dissemination of knowledge among scientists rather than the impact of the research on the wider world. In this work we are interested in measuring scientific impact beyond academia, on the economy, society, health and legislation (comprehensive impact). Indeed scientists are asked to demonstrate evidence of such comprehensive impact by authoring case studies in the context of the Research Excellence Framework (REF). We first investigate the extent to which existing citation based metrics can be indicative of comprehensive impact. We have collected all recent REF impact case studies from 2014 and we have linked these to papers in citation networks that we constructed and derived from CiteSeerX, arXiv and PubMed Central using a number of text processing and information retrieval techniques. We have demonstrated that existing citation-based metrics for impact measurement do not correlate well with REF impact results. We also consider metrics of online attention surrounding scientific works, such as those provided by the Altmetric API. We argue that in order to be able to evaluate wider non-academic impact we need to mine information from a much wider set of resources, including social media posts, press releases, news articles and political debates stemming from academic work. We also provide our data as a free and reusable collection for further analysis, including the PubMed citation network and the correspondence between REF case studies, grant applications and the academic literature.",
"title": ""
},
{
"docid": "neg:1840520_2",
"text": "In this paper, we utilize tags in Twitter (the hashtags) as an indicator of events. We first study the properties of hashtags for event detection. Based on several observations, we proposed three attributes of hashtags, including (1) instability for temporal analysis, (2) Twitter meme possibility to distinguish social events from virtual topics or memes, and (3) authorship entropy for mining the most contributed authors. Based on these attributes, breaking events are discovered with hashtags, which cover a wide range of social events among different languages in the real world.",
"title": ""
},
{
"docid": "neg:1840520_3",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
},
{
"docid": "neg:1840520_4",
"text": "Essential oils are volatile, natural, complex mixtures of compounds characterized by a strong odour and formed by aromatic plants as secondary metabolites. The chemical composition of the essential oil obtained by hydrodistillation from the whole plant of Pulicaria inuloides grown in Yemen and collected at full flowering stage were analyzed by Gas chromatography-Mass spectrometry (GC-MS). Several oil components were identified based upon comparison of their mass spectral data with those of reference compounds. The main components identified in the oil were 47.34% of 2-Cyclohexen-1-one, 2-methyl-5-(1-methyl with Hexadecanoic acid (CAS) (12.82%) and Ethane, 1,2-diethoxy(9.613%). In this study, mineral contents of whole plant of P. inuloides were determined by atomic absorption spectroscopy. Highest level of K, Mg, Na, Fe and Ca of 159.5, 29.5, 14.2, 13.875 and 5.225 mg/100 g were found in P. inuloides.",
"title": ""
},
{
"docid": "neg:1840520_5",
"text": "Language and vision provide complementary information. Integrating both modalities in a single multimodal representation is an unsolved problem with wide-reaching applications to both natural language processing and computer vision. In this paper, we present a simple and effective method that learns a language-to-vision mapping and uses its output visual predictions to build multimodal representations. In this sense, our method provides a cognitively plausible way of building representations, consistent with the inherently reconstructive and associative nature of human memory. Using seven benchmark concept similarity tests we show that the mapped (or imagined) vectors not only help to fuse multimodal information, but also outperform strong unimodal baselines and state-of-the-art multimodal methods, thus exhibiting more human-like judgments. Ultimately, the present work sheds light on fundamental questions of natural language understanding concerning the fusion of vision and language such as the plausibility of more associative and reconstructive approaches.",
"title": ""
},
{
"docid": "neg:1840520_6",
"text": "We propose an approach to include contextual features for labeling images, in which each pixel is assigned to one of a finite set of labels. The features are incorporated into a probabilistic framework, which combines the outputs of several components. Components differ in the information they encode. Some focus on the image-label mapping, while others focus solely on patterns within the label field. Components also differ in their scale, as some focus on fine-resolution patterns while others on coarser, more global structure. A supervised version of the contrastive divergence algorithm is applied to learn these features from labeled image data. We demonstrate performance on two real-world image databases and compare it to a classifier and a Markov random field.",
"title": ""
},
{
"docid": "neg:1840520_7",
"text": "There is an industry need for wideband baluns to operate across several decades of bandwidth covering the HF, VHF, and UHF spectrum. For readers unfamiliar with the term \"balun,\" it is a compound word that combines the terms balanced and unbalanced. This is in reference to the conversion between a balanced source and an unbalanced load, often requiring an impedance transformation of some type. It's common in literature to see the terms \"balanced\" and \"unbalanced\" used interchangeably with the terms \"differential\" and \"single-ended,\" and this article will also share this naming convention. These devices are particularly useful in network matching applications and can be constructed at low cost and a relatively small bill of materials. Wideband baluns first found widespread use converting the balanced load of a dipole antenna to the unbalanced output of a single-ended amplifier. These devices can also be found in solid-state differential circuits such as amplifiers and mixers where network matching is required to achieve the maximum power transfer to the load. In the design of RF power amplifiers, wideband baluns play a critical role in an amplifier's performance, including its input and output impedances, gain flatness, linearity, power efficiency, and many other performance characteristics.This article describes the theory of operation, design procedure, and measured results of the winning wideband balun presented at the 2013 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2013), sponsored by the MTT-17 Technical Coordinating Committee on HF-VHF-UHF technology. The wideband balun was designed to deliver a 4:1 impedance transformation, converting a balanced 100 Ω source to an unbalanced 25 Ω load. It was constructed using a multiaperture ferrite core and a pair of bifilar wires with four parallel turns.",
"title": ""
},
{
"docid": "neg:1840520_8",
"text": "A new therapeutic approach to the rehabilitation of movement after stroke, termed constraint-induced (CI) movement therapy, has been derived from basic research with monkeys given somatosensory deafferentation. CI movement therapy consists of a family of therapies; their common element is that they induce stroke patients to greatly increase the use of an affected upper extremity for many hours a day over a period of 10 to 14 consecutive days. The signature intervention involves motor restriction of the contralateral upper extremity in a sling and training of the affected arm. The therapies result in large changes in amount of use of the affected arm in the activities of daily living outside of the clinic that have persisted for the 2 years measured to date. Patients who will benefit from Cl therapy can be identified before the beginning of treatment.",
"title": ""
},
{
"docid": "neg:1840520_9",
"text": "In this letter, a novel coaxial line to substrate integrated waveguide (SIW) broadband transition is presented. The transition is designed by connecting the inner conductor of a coaxial line to an open-circuited SIW. The configuration directly transforms the TEM mode of a coaxial line to the fundamental TE10 mode of the SIW. A prototype back-to-back transition is fabricated for X-band operation using a 0.508 mm thick RO 4003C substrate with dielectric constant 3.55. Comparison with other reported transitions shows that the present structure provides lower passband insertion loss, wider bandwidth and most compact. The area of each transition is 0.08λg2 where λg is the guided wavelength at passband center frequency of f0 = 10.5 GHz. Measured 15 dB and 20 dB matching bandwidths are over 48% and 20%, respectively, at f0.",
"title": ""
},
{
"docid": "neg:1840520_10",
"text": "Many software process methods and tools presuppose the existence of a formal model of a process. Unfortunately, developing a formal model for an on-going, complex process can be difficult, costly, and error prone. This presents a practical barrier to the adoption of process technologies, which would be lowered by automated assistance in creating formal models. To this end, we have developed a data analysis technique that we term process discovery. Under this technique, data describing process events are first captured from an on-going process and then used to generate a formal model of the behavior of that process. In this article we describe a Markov method that we developed specifically for process discovery, as well as describe two additional methods that we adopted from other domains and augmented for our purposes. The three methods range from the purely algorithmic to the purely statistical. We compare the methods and discuss their application in an industrial case study.",
"title": ""
},
{
"docid": "neg:1840520_11",
"text": "In this paper we propose a method for matching the scales of 3D point clouds. 3D point sets of the same scene obtained by 3D reconstruction techniques usually differ in scale. To match scales, we estimate the ratio of scales of two given 3D point clouds. By performing PCA of spin images over different scales of two point clouds, two sets of cumulative contribution rate curves are generated. Such sets of curves can be considered to characterize the scale of the given 3D point clouds. To find the scale ratio of two point clouds, we register the two sets of curves by using a variant of ICP that estimates the ratio of scales. Simulations with the Stanford bunny and experimental results with 3D reconstructions of artificial and real scenes demonstrate that the ratio of any 3D point clouds can be effectively used for scale matching.",
"title": ""
},
{
"docid": "neg:1840520_12",
"text": "What does this paper demonstrate. We show that a very simple 2D architecture (in the sense that it does not make any assumption or reasoning about the 3D information of the object) generally used for object classification, if properly adapted to the specific task, can provide top performance also for pose estimation. More specifically, we demonstrate how a 1-vs-all classification framework based on a Fisher Vector (FV) [1] pyramid or convolutional neural network (CNN) based features [2] can be used for pose estimation. In addition, suppressing neighboring viewpoints during training seems key to get good results.",
"title": ""
},
{
"docid": "neg:1840520_13",
"text": "An open-loop stereophonic acoustic echo suppression (SAES) method without preprocessing is presented for teleconferencing systems, where the Wiener filter in the short-time Fourier transform (STFT) domain is employed. Instead of identifying the echo path impulse responses with adaptive filters, the proposed algorithm estimates the echo spectra from the stereo signals using two weighting functions. The spectral modification technique originally proposed for noise reduction is adopted to remove the echo from the microphone signal. Moreover, a priori signal-to-echo ratio (SER) based Wiener filter is used as the gain function to achieve a trade-off between musical noise reduction and computational load for real-time operations. Computer simulation shows the effectiveness and the robustness of the proposed method in several different scenarios.",
"title": ""
},
{
"docid": "neg:1840520_14",
"text": "Lead time reduction is a key concern of many industrial buyers of capital facilities given current economic conditions. Supply chain initiatives in manufacturing settings have led owners to expect that dramatic reductions in lead time are possible in all phases of their business, including the delivery of capital materials. Further, narrowing product delivery windows and increasing pressure to be first-tomarket create significant external pressure to reduce lead time. In this paper, a case study is presented in which an owner entered the construction supply chain to procure and position key long-lead materials. The materials were held at a position in the supply chain selected to allow some flexibility for continued customization, but dramatic reduction in the time-to-site. Simulation was used as a tool to consider time-to-site tradeoffs for multiple inventory locations so as to better match the needs of the construction effort.",
"title": ""
},
{
"docid": "neg:1840520_15",
"text": "The paper attempts to describe the space of possible mind designs by first equating all minds to software. Next it proves some properties of the mind design space such as infinitude of minds, size and representation complexity of minds. A survey of mind design taxonomies is followed by a proposal for a new field of investigation devoted to study of minds, intellectology.",
"title": ""
},
{
"docid": "neg:1840520_16",
"text": "This paper focuses on modeling resolving and simulations of the inverse kinematics of an anthropomorphic redundant robotic structure with seven degrees of freedom and a workspace similar to human arm. Also the kinematical model and the kinematics equations of the robotic arm are presented. A method of resolving the redundancy of seven degrees of freedom robotic arm is presented using Fuzzy Logic toolbox from MATLAB®.",
"title": ""
},
{
"docid": "neg:1840520_17",
"text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1",
"title": ""
},
{
"docid": "neg:1840520_18",
"text": "In this paper we apply Conformal Prediction (CP) to the k -Nearest Neighbours Regression (k -NNR) algorithm and propose ways of extending the typical nonconformity measure used for regression so far. Unlike traditional regression methods which produce point predictions, Conformal Predictors output predictive regions that satisfy a given confidence level. The regions produced by any Conformal Predictor are automatically valid, however their tightness and therefore usefulness depends on the nonconformity measure used by each CP. In effect a nonconformity measure evaluates how strange a given example is compared to a set of other examples based on some traditional machine learning algorithm. We define six novel nonconformity measures based on the k -Nearest Neighbours Regression algorithm and develop the corresponding CPs following both the original (transductive) and the inductive CP approaches. A comparison of the predictive regions produced by our measures with those of the typical regression measure suggests that a major improvement in terms of predictive region tightness is achieved by the new measures.",
"title": ""
},
{
"docid": "neg:1840520_19",
"text": "Legislators, designers of legal information systems, as well as citizens face often problems due to the interdependence of the laws and the growing number of references needed to interpret them. In this paper, we introduce the ”Legislation Network” as a novel approach to address several quite challenging issues for identifying and quantifying the complexity inside the Legal Domain. We have collected an extensive data set of a more than 60-year old legislation corpus, as published in the Official Journal of the European Union, and we further analysed it as a complex network, thus gaining insight into its topological structure. Among other issues, we have performed a temporal analysis of the evolution of the Legislation Network, as well as a robust resilience test to assess its vulnerability under specific cases that may lead to possible breakdowns. Results are quite promising, showing that our approach can lead towards an enhanced explanation in respect to the structure and evolution of legislation properties.",
"title": ""
}
] |
1840521 | Sensing and coverage for a network of heterogeneous robots | [
{
"docid": "pos:1840521_0",
"text": "This paper presents deployment algorithms for multiple mobile robots with line-of-sight sensing and communication capabilities in a simple nonconvex polygonal environment. The objective of the proposed algorithms is to achieve full visibility of the environment. We solve the problem by constructing a novel data structure called the vertex-induced tree and designing schemes to deploy over the nodes of this tree by means of distributed algorithms. The agents are assumed to have access to a local memory and their operation is partially asynchronous",
"title": ""
}
] | [
{
"docid": "neg:1840521_0",
"text": "Copying an element from a photo and pasting it into a painting is a challenging task. Applying photo compositing techniques in this context yields subpar results that look like a collage — and existing painterly stylization algorithms, which are global, perform poorly when applied locally. We address these issues with a dedicated algorithm that carefully determines the local statistics to be transferred. We ensure both spatial and inter-scale statistical consistency and demonstrate that both aspects are key to generating quality results. To cope with the diversity of abstraction levels and types of paintings, we introduce a technique to adjust the parameters of the transfer depending on the painting. We show that our algorithm produces significantly better results than photo compositing or global stylization techniques and that it enables creative painterly edits that would be otherwise difficult to achieve. CCS Concepts •Computing methodologies → Image processing;",
"title": ""
},
{
"docid": "neg:1840521_1",
"text": "OBJECTIVE\nWe investigated the effect of low-fat (2.5%) dahi containing probiotic Lactobacillus acidophilus and Lactobacillus casei on progression of high fructose-induced type 2 diabetes in rats.\n\n\nMETHODS\nDiabetes was induced in male albino Wistar rats by feeding 21% fructose in water. The body weight, food and water intakes, fasting blood glucose, glycosylated hemoglobin, oral glucose tolerance test, plasma insulin, liver glycogen content, and blood lipid profile were recorded. The oxidative status in terms of thiobarbituric acid-reactive substances and reduced glutathione contents in liver and pancreatic tissues were also measured.\n\n\nRESULTS\nValues for blood glucose, glycosylated hemoglobin, glucose intolerance, plasma insulin, liver glycogen, plasma total cholesterol, triacylglycerol, low-density lipoprotein cholesterol, very low-density lipoprotein cholesterol, and blood free fatty acids were increased significantly after 8 wk of high fructose feeding; however, the dahi-supplemented diet restricted the elevation of these parameters in comparison with the high fructose-fed control group. In contrast, high-density lipoprotein cholesterol decreased slightly and was retained in the dahi-fed group. The dahi-fed group also exhibited lower values of thiobarbituric acid-reactive substances and higher values of reduced glutathione in liver and pancreatic tissues compared with the high fructose-fed control group.\n\n\nCONCLUSION\nThe probiotic dahi-supplemented diet significantly delayed the onset of glucose intolerance, hyperglycemia, hyperinsulinemia, dyslipidemia, and oxidative stress in high fructose-induced diabetic rats, indicating a lower risk of diabetes and its complications.",
"title": ""
},
{
"docid": "neg:1840521_2",
"text": "Blueberry, raspberry and strawberry may have evolved strategies for survival due to the different soil conditions available in their natural environment. Since this might be reflected in their response to rhizosphere pH and N form supplied, investigations were carried out in order to compare effects of nitrate and ammonium nutrition (the latter at two different pH regimes) on growth, CO2 gas exchange, and on the activity of key enzymes of the nitrogen metabolism of these plant species. Highbush blueberry (Vaccinium corymbosum L. cv. 13–16–A), raspberry (Rubus idaeus L. cv. Zeva II) and strawberry (Fragaria × ananassa Duch. cv. Senga Sengana) were grown in 10 L black polyethylene pots in quartz sand with and without 1% CaCO3 (w: v), respectively. Nutrient solutions supplied contained nitrate (6 mM) or ammonium (6 mM) as the sole nitrogen source. Compared with strawberries fed with nitrate nitrogen, supply of ammonium nitrogen caused a decrease in net photosynthesis and dry matter production when plants were grown in quartz sand without added CaCO3. In contrast, net photosynthesis and dry matter production increased in blueberries fed with ammonium nitrogen, while dry matter production of raspberries was not affected by the N form supplied. In quartz sand with CaCO3, ammonium nutrition caused less deleterious effects on strawberries, and net photosynthesis in raspberries increased as compared to plants grown in quartz sand without CaCO3 addition. Activity of nitrate reductase (NR) was low in blueberries and could only be detected in the roots of plants supplied with nitrate nitrogen. In contrast, NR activity was high in leaves, but low in roots of raspberry and strawberry plants. Ammonium nutrition caused a decrease in NR level in leaves. Activity of glutamine synthetase (GS) was high in leaves but lower in roots of blueberry, raspberry and strawberry plants. The GS level was not significantly affected by the nitrogen source supplied. The effects of nitrate or ammonium nitrogen on net photosynthesis, growth, and activity of enzymes in blueberry, raspberry and strawberry cultivars appear to reflect their different adaptability to soil pH and N form due to the conditions of their natural environment.",
"title": ""
},
{
"docid": "neg:1840521_3",
"text": "We propose a novel cancelable biometric approach, known as PalmHashing, to solve the non-revocable biometric proposed method hashes palmprint templates with a set of pseudo-random keys to obtain a unique code called palmhash. The palmhash code can be stored in portable devices such tokens and smartcards for verification. Multiple sets of palmha can be maintained in multiple applications. Thus the privacy and security of the applications can be greatly enhance compromised, revocation can also be achieved via direct replacement of a new set of palmhash code. In addition, PalmHashin offers several advantages over contemporary biometric approaches such as clear separation of the genuine-imposter and zero EER occurrences. In this paper, we outline the implementation details of this method and also highlight its p in security-critical applications. 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840521_4",
"text": "Stroke has now become the leading cause of severe disability. Rehabilitation robots are gradually becoming popular for stroke rehabilitation to improve motor recovery, as robotic technology can assist, enhance, and further quantify rehabilitation training for stroke patients. However, most of the available rehabilitation robots are complex and involve multiple degrees-of-freedom (DOFs) causing it to be very expensive and huge in size. Rehabilitation robots should be useful but also need to be affordable and portable enabling more patients to afford and train independently at home. This paper presents a development of an affordable, portable and compact rehabilitation robot that implements different rehabilitation strategies for stroke patient to train forearm and wrist movement in an enhanced virtual reality environment with haptic feedback.",
"title": ""
},
{
"docid": "neg:1840521_5",
"text": "Recent years have witnessed an increasing interest in image-based question-answering (QA) tasks. However, due to data limitations, there has been much less work on video-based QA. In this paper, we present TVQA, a largescale video QA dataset based on 6 popular TV shows. TVQA consists of 152,545 QA pairs from 21,793 clips, spanning over 460 hours of video. Questions are designed to be compositional in nature, requiring systems to jointly localize relevant moments within a clip, comprehend subtitle-based dialogue, and recognize relevant visual concepts. We provide analyses of this new dataset as well as several baselines and a multi-stream end-to-end trainable neural network framework for the TVQA task. The dataset is publicly available at http://tvqa.cs.unc.edu.",
"title": ""
},
{
"docid": "neg:1840521_6",
"text": "In this work, we present a true 3D 128 Gb 2 bit/cell vertical-NAND (V-NAND) Flash product for the first time. The use of barrier-engineered materials and gate all-around structure in the 3D V-NAND cell exhibits advantages over 1 × nm planar NAND, such as small Vth shift due to small cell coupling and narrow natural Vth distribution. Also, a negative counter-pulse scheme realizes a tightly programmed cell distribution. In order to reduce the effect of a large WL coupling, a glitch-canceling discharge scheme and a pre-offset control scheme is implemented. Furthermore, an external high-voltage supply scheme along with the proper protection scheme for a high-voltage failure is used to achieve low power consumption. The chip accomplishes 50 MB/s write throughput with 3 K endurance for typical embedded applications. Also, extended endurance of 35 K is achieved with 36 MB/s of write throughput for data center and enterprise SSD applications.",
"title": ""
},
{
"docid": "neg:1840521_7",
"text": "This paper announces and discusses the experimental results from the Noisy Iris Challenge Evaluation (NICE), an iris biometric evaluation initiative that received worldwide participation and whose main innovation is the use of heavily degraded data acquired in the visible wavelength and uncontrolled setups, with subjects moving and at widely varying distances. The NICE contest included two separate phases: 1) the NICE.I evaluated iris segmentation and noise detection techniques and 2) the NICE:II evaluated encoding and matching strategies for biometric signatures. Further, we give the performance values observed when fusing recognition methods at the score level, which was observed to outperform any isolated recognition strategy. These results provide an objective estimate of the potential of such recognition systems and should be regarded as reference values for further improvements of this technology, which-if successful-may significantly broaden the applicability of iris biometric systems to domains where the subjects cannot be expected to cooperate.",
"title": ""
},
{
"docid": "neg:1840521_8",
"text": "Reward models have become an important method for specifying performability models for many types of systems. Many methods have been proposed for solving reward models, but no method has proven itself to be applicable over all system classes and sizes. Furthermore, speci cation of reward models has usually been done at the state level, which can be extremely cumbersome for realistic models. We describe a method to specify reward models as stochastic activity networks (SANs) with impulse and rate rewards, and a method by which to solve these models via uniformization. The method is an extension of one proposed by de Souza e Silva and Gail in which impulse and rate rewards are speci ed at the SAN level, and solved in a single model. Furthermore, we propose a new technique for discarding paths in the uniformized process whose contribution to the reward variable is minimal, which greatly reduces the time and space required for a solution. A bound is calculated on the error introduced by this discarding, and its e ectiveness is illustrated through the study of the performability and availability of a degradable multi-processor system.",
"title": ""
},
{
"docid": "neg:1840521_9",
"text": "We extend the reach of functional encryption schemes that are provably secure under simple assumptions against unbounded collusion to include function-hiding inner product schemes. Our scheme is a private key functional encryption scheme, where ciphertexts correspond to vectors ~x, secret keys correspond to vectors ~y, and a decryptor learns 〈~x, ~y〉. Our scheme employs asymmetric bilinear maps and relies only on the SXDH assumption to satisfy a natural indistinguishability-based security notion where arbitrarily many key and ciphertext vectors can be simultaneously changed as long as the key-ciphertext dot product relationships are all preserved.",
"title": ""
},
{
"docid": "neg:1840521_10",
"text": "The flourishing synergy arising between organized crimes and the Internet has increased the insecurity of the digital world. How hackers frame their actions? What factors encourage and energize their behavior? These are very important but highly underresearched questions. We draw upon literatures on psychology, economics, international relation and warfare to propose a framework that addresses these questions. We found that countries across the world differ in terms of regulative, normative and cognitive legitimacy to different types of web attacks. Cyber wars and crimes are also functions of the stocks of hacking skills relative to the availability of economic opportunities. An attacking unit’s selection criteria for the target network include symbolic significance and criticalness, degree of digitization of values and weakness in defense mechanisms. Managerial and policy implications are discussed and directions for future research are suggested.",
"title": ""
},
{
"docid": "neg:1840521_11",
"text": "This paper aims to develop a load forecasting method for short-term load forecasting, based on an adaptive two-stage hybrid network with self-organized map (SOM) and support vector machine (SVM). In the first stage, a SOM network is applied to cluster the input data set into several subsets in an unsupervised manner. Then, groups of 24 SVMs for the next day's load profile are used to fit the training data of each subset in the second stage in a supervised way. The proposed structure is robust with different data types and can deal well with the nonstationarity of load series. In particular, our method has the ability to adapt to different models automatically for the regular days and anomalous days at the same time. With the trained network, we can straightforwardly predict the next-day hourly electricity load. To confirm the effectiveness, the proposed model has been trained and tested on the data of the historical energy load from New York Independent System Operator.",
"title": ""
},
{
"docid": "neg:1840521_12",
"text": "Natural locomotion in room-scale virtual reality (VR) is constrained by the user's immediate physical space. To overcome this obstacle, researchers have established the use of the impossible space design mechanic. This game illustrates the applied use of impossible spaces for enhancing the aesthetics of, and presence within, a room-scale VR game. This is done by creating impossible spaces with a gaming narrative intent. First, locomotion and impossible spaces in VR are surveyed; second, a VR game called Ares is put forth as a prototype; and third, a user study is briefly explored.",
"title": ""
},
{
"docid": "neg:1840521_13",
"text": "We introduce a novel scheme to train binary convolutional neural networks (CNNs) – CNNs with weights and activations constrained to {-1,+1} at run-time. It has been known that using binary weights and activations drastically reduce memory size and accesses, and can replace arithmetic operations with more efficient bitwise operations, leading to much faster test-time inference and lower power consumption. However, previous works on binarizing CNNs usually result in severe prediction accuracy degradation. In this paper, we address this issue with two major innovations: (1) approximating full-precision weights with the linear combination of multiple binary weight bases; (2) employing multiple binary activations to alleviate information loss. The implementation of the resulting binary CNN, denoted as ABC-Net, is shown to achieve much closer performance to its full-precision counterpart, and even reach the comparable prediction accuracy on ImageNet and forest trail datasets, given adequate binary weight bases and activations.",
"title": ""
},
{
"docid": "neg:1840521_14",
"text": "In vitro drug metabolism studies, which are inexpensive and readily carried out, serve as an adequate screening mechanism to characterize drug metabolites, elucidate their pathways, and make suggestions for further in vivo testing. This publication is a sequel to part I in a series and aims at providing a general framework to guide designs and protocols of the in vitro drug metabolism studies considered good practice in an efficient manner such that it would help researchers avoid common pitfalls and misleading results. The in vitro models include hepatic and non-hepatic microsomes, cDNA-expressed recombinant human CYPs expressed in insect cells or human B lymphoblastoid, chemical P450 inhibitors, S9 fraction, hepatocytes and liver slices. Important conditions for conducting the in vitro drug metabolism studies using these models are stated, including relevant concentrations of enzymes, co-factors, inhibitors and test drugs; time of incubation and sampling in order to establish kinetics of reactions; appropriate control settings, buffer selection and method validation. Separate in vitro data should be logically integrated to explain results from animal and human studies and to provide insights into the nature and consequences of in vivo drug metabolism. This article offers technical information and data and addresses scientific rationales and practical skills related to in vitro evaluation of drug metabolism to meet regulatory requirements for drug development.",
"title": ""
},
{
"docid": "neg:1840521_15",
"text": "According the universal serial cyclic redundancy check (CRC) technology, one of the new CRC algorithm based on matrix is referred, which describe an new parallel CRC coding circuit structure with r matrix transformation and pipeline technology. According to the method of parallel CRC coding in high-speed data transmitting, it requires a lot of artificial calculation. Due to the large amount of calculation, it is easy to produce some calculation error. According to the traditional thought of the serial CRC, the algorithm of parallel CRC based on the thought of matrix transformation and iterative has been deduced and expressed. The improved algorithm by pipeline technology has been applied in other systems which require high timing requirements of problem, The design has been implemented through Verilog hardware description language in FPGA device, which has achieved a good validation. It has become a very good method for high-speed CRC coding and decoding.",
"title": ""
},
{
"docid": "neg:1840521_16",
"text": "This paper presents a novel approach for inducing lexical taxonomies automatically from text. We recast the learning problem as that of inferring a hierarchy from a graph whose nodes represent taxonomic terms and edges their degree of relatedness. Our model takes this graph representation as input and fits a taxonomy to it via combination of a maximum likelihood approach with a Monte Carlo Sampling algorithm. Essentially, the method works by sampling hierarchical structures with probability proportional to the likelihood with which they produce the input graph. We use our model to infer a taxonomy over 541 nouns and show that it outperforms popular flat and hierarchical clustering algorithms.",
"title": ""
},
{
"docid": "neg:1840521_17",
"text": "An operational-transconductance-amplifier (OTA) design for ultra-low voltage ultra-low power applications is proposed. The input stage of the proposed OTA utilizes a bulk-driven pseudo-differential pair to allow minimum supply voltage while achieving a rail-to-rail input range. All the transistors in the proposed OTA operate in the subthreshold region. Using a novel self-biasing technique to bias the OTA obviates the need for extra biasing circuitry and enhances the performance of the OTA. The proposed technique ensures the OTA robustness to process variations and increases design feasibility under ultra-low-voltage conditions. Moreover, the proposed biasing technique significantly improves the common-mode and power-supply rejection of the OTA. To further enhance the bandwidth and allow the use of smaller compensation capacitors, a compensation network based on a damping-factor control circuit is exploited. The OTA is fabricated in a 65 nm CMOS technology. Measurement results show that the OTA provides a low-frequency gain of 46 dB and rail-to-rail input common-mode range with a supply voltage as low as 0.5 V. The dc gain of the OTA is greater than 42 dB for supply voltage as low as 0.35 V. The power dissipation is 182 μW at VDD=0.5 V and 17 μW at VDD=0.35 V.",
"title": ""
},
{
"docid": "neg:1840521_18",
"text": "The increasing complexity of Database Management Systems (DBMSs) and the dearth of their experienced administrators make an urgent call for an Autonomic DBMS that is capable of managing and maintaining itself. In this paper, we examine the characteristics that a DBMS should have in order to be considered autonomic and assess the position of today’s commercial DBMSs such as DB2, SQL Server, and Oracle.",
"title": ""
},
{
"docid": "neg:1840521_19",
"text": "We present a method for isotropic remeshing of arbitrary genus surfaces. The method is based on a mesh adaptation process, namely, a sequence of local modifications performed on a copy of the original mesh, while referring to the original mesh geometry. The algorithm has three stages. In the first stage the required number or vertices are generated by iterative simplification or refinement. The second stage performs an initial vertex partition using an area-based relaxation method. The third stage achieves precise isotropic vertex sampling prescribed by a given density function on the mesh. We use a modification of Lloyd’s relaxation method to construct a weighted centroidal Voronoi tessellation of the mesh. We apply these iterations locally on small patches of the mesh that are parameterized into the 2D plane. This allows us to handle arbitrary complex meshes with any genus and any number of boundaries. The efficiency and the accuracy of the remeshing process is achieved using a patch-wise parameterization technique. Key-words: Surface mesh generation, isotropic triangle meshing, centroidal Voronoi tessellation, local parameterization. ∗ Technion, Haifa, Israel † INRIA Sophia-Antipolis ‡ Technion, Haifa, Israel Remaillage isotrope de surfaces utilisant une paramétrisation locale Résumé : Cet article décrit une méthode de remaillage isotrope de surfaces triangulées. L’approche repose sur une technique d’adaptation locale du maillage. L’idée consiste à opérer une séquence d’opérations élémentaires sur une copie du maillage original, tout en faisant référence au maillage original pour la géométrie. L’algorithme comporte trois étapes. La première étape ramène la complexité du maillage au nombre de sommets désiré par raffinement ou décimation itérative. La seconde étape opère une première répartition des sommets via une technique de relaxation optimisant un équilibrage local des aires sur les triangles. La troisième étape opère un placement isotrope des sommets via une relaxation de Lloyd pour construire une tessellation de Voronoi centrée. Les itérations de relaxation de Lloyd sont appliquées localement dans un espace paramétrique 2D calculé à la volée sur un sous-ensemble de la triangulation originale de telle que sorte que les triangulations de complexité et de genre arbitraire puissent être efficacement remaillées. Mots-clés : Maillage de surfaces, maillage triangulaire isotrope, diagrammes de Voronoi centrés, paramétrisation locale. Isotropic Remeshing of Surfaces",
"title": ""
}
] |
1840522 | Similarity by Composition | [
{
"docid": "pos:1840522_0",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
}
] | [
{
"docid": "neg:1840522_0",
"text": "Northrop Grumman is developing an atom-based magnetometer technology that has the potential for providing a global position reference independent of GPS. The NAV-CAM sensor is a direct outgrowth of the Nuclear Magnetic Resonance Gyro under development by the same technical team. It is capable of providing simultaneous measurements of all 3 orthogonal axes of magnetic vector field components using a single compact vapor cell. The vector sum determination of the whole-field scalar measurement achieves similar precision to the individual vector components. By using a single sensitive element (vapor cell) this approach eliminates many of the problems encountered when using physically separate sensors or sensing elements.",
"title": ""
},
{
"docid": "neg:1840522_1",
"text": "A microwave sensor having features useful for the noninvasive determination of blood glucose levels is described. The sensor output is an amplitude only measurement of the standing wave versus frequency sampled at a fixed point on an open-terminated spiral-shaped microstrip line. Test subjects press their thumb against the line and apply contact pressure sufficient to fall within a narrow pressure range. Data are reported for test subjects whose blood glucose is independently measured using a commercial glucometer.",
"title": ""
},
{
"docid": "neg:1840522_2",
"text": "The unconstrained acquisition of facial data in real-world conditions may result in face images with significant pose variations, illumination changes, and occlusions, affecting the performance of facial landmark localization and recognition methods. In this paper, a novel method, robust to pose, illumination variations, and occlusions is proposed for joint face frontalization and landmark localization. Unlike the state-of-the-art methods for landmark localization and pose correction, where large amount of manually annotated images or 3D facial models are required, the proposed method relies on a small set of frontal images only. By observing that the frontal facial image of both humans and animals, is the one having the minimum rank of all different poses, a model which is able to jointly recover the frontalized version of the face as well as the facial landmarks is devised. To this end, a suitable optimization problem is solved, concerning minimization of the nuclear norm (convex surrogate of the rank function) and the matrix $$\\ell _1$$ ℓ 1 norm accounting for occlusions. The proposed method is assessed in frontal view reconstruction of human and animal faces, landmark localization, pose-invariant face recognition, face verification in unconstrained conditions, and video inpainting by conducting experiment on 9 databases. The experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods for the target problems.",
"title": ""
},
{
"docid": "neg:1840522_3",
"text": "In this paper we present a multivariate analysis of evoked hemodynamic responses and their spatiotemporal dynamics as measured with fast fMRI. This analysis uses standard multivariate statistics (MANCOVA) and the general linear model to make inferences about effects of interest and canonical variates analysis (CVA) to describe the important features of these effects. We have used these techniques to characterize the form of hemodynamic transients that are evoked during a cognitive or sensorimotor task. In particular we do not assume that the neural or hemodynamic response reaches some \"steady state\" but acknowledge that these physiological changes could show profound task-dependent adaptation and time-dependent changes during the task. To address this issue we have modeled hemodynamic responses using appropriate temporal basis functions and estimated their exact form within the general linear model using MANCOVA. We do not propose that this analysis is a particularly powerful way to make inferences about functional specialization (or more generally functional anatomy) because it only provides statistical inferences about the distributed (whole brain) responses evoked by different conditions. However, its application to characterizing the temporal aspects of evoked hemodynamic responses reveals some compelling and somewhat unexpected perspectives on transient but stereotyped responses to changes in cognitive or sensorimotor processing. The most remarkable observation is that these responses can be biphasic and show profound differences in their form depending on the extant task or condition. Furthermore these differences can be seen in the absence of changes in mean signal.",
"title": ""
},
{
"docid": "neg:1840522_4",
"text": "In this paper, we present an onboard monocular vision system for autonomous takeoff, hovering and landing of a Micro Aerial Vehicle (MAV). Since pose information with metric scale is critical for autonomous flight of a MAV, we present a novel solution to six degrees of freedom (DOF) pose estimation. It is based on a single image of a typical landing pad which consists of the letter “H” surrounded by a circle. A vision algorithm for robust and real-time landing pad recognition is implemented. Then the 5 DOF pose is estimated from the elliptic projection of the circle by using projective geometry. The remaining geometric ambiguity is resolved by incorporating the gravity vector estimated by the inertial measurement unit (IMU). The last degree of freedom pose, yaw angle of the MAV, is estimated from the ellipse fitted from the letter “H”. The efficiency of the presented vision system is demonstrated comprehensively by comparing it to ground truth data provided by a tracking system and by using its pose estimates as control inputs to autonomous flights of a quadrotor.",
"title": ""
},
{
"docid": "neg:1840522_5",
"text": "Joint torque sensory feedback is an effective technique for achieving high-performance robot force and motion control. However, most robots are not equipped with joint torque sensors, and it is difficult to add them without changing the joint's mechanical structure. A method for estimating joint torque that exploits the existing structural elasticity of robotic joints with harmonic drive transmission is proposed in this paper. In the presented joint torque estimation method, motor-side and link-side position measurements along with a proposed harmonic drive compliance model, are used to realize stiff and sensitive joint torque estimation, without the need for adding an additional elastic body and using strain gauges to measure the joint torque. The proposed method has been experimentally studied and its performance is compared with measurements of a commercial torque sensor. The results have attested the effectiveness of the proposed torque estimation method.",
"title": ""
},
{
"docid": "neg:1840522_6",
"text": "We develop a novel method for training of GANs for unsupervised and class conditional generation of images, called Linear Discriminant GAN (LD-GAN). The discriminator of an LD-GAN is trained to maximize the linear separability between distributions of hidden representations of generated and targeted samples, while the generator is updated based on the decision hyper-planes computed by performing LDA over the hidden representations. LD-GAN provides a concrete metric of separation capacity for the discriminator, and we experimentally show that it is possible to stabilize the training of LD-GAN simply by calibrating the update frequencies between generators and discriminators in the unsupervised case, without employment of normalization methods and constraints on weights. In the class conditional generation tasks, the proposed method shows improved training stability together with better generalization performance compared to WGAN (Arjovsky et al. 2017) that employs an auxiliary classifier.",
"title": ""
},
{
"docid": "neg:1840522_7",
"text": "For years we have known that cortical neurons collectively have synchronous or oscillatory patterns of activity, the frequencies and temporal dynamics of which are associated with distinct behavioural states. Although the function of these oscillations has remained obscure, recent experimental and theoretical results indicate that correlated fluctuations might be important for cortical processes, such as attention, that control the flow of information in the brain.",
"title": ""
},
{
"docid": "neg:1840522_8",
"text": "BACKGROUND\nBioengineered hyaluronic acid derivatives are currently available that provide for safe and effective soft-tissue augmentation in the comprehensive approach to nonsurgical facial rejuvenation. Current hyaluronic acid fillers do not require preinjection skin testing and produce reproducible, longer-lasting, nonpermanent results compared with other fillers, such as collagen.\n\n\nMETHODS\nA review of the authors' extensive experience at the University of Texas Southwestern Medical Center was conducted to formulate the salient requirements for successful utilization of hyaluronic acid fillers. Indications, technical refinements, and key components for optimized product administration categorized by anatomical location are described. The efficacy and longevity of results are also discussed.\n\n\nRESULTS\nBioengineered hyaluronic acid fillers allow for safe and effective augmentation of selected anatomical regions of the face, when properly administered. Combined treatment with botulinum toxin type A can enhance the effects and longevity by as much as 50 percent. Key components to optimal filler administration include proper anatomical evaluation, changing or combining various fillers based on particle size, altering the depth of injection, using different injection techniques, and coadministration of botulinum toxin type A when indicated. Concomitant administration of hyaluronic acid fillers along with surgical methods of facial rejuvenation can serve as a powerful tool in maximizing a comprehensive treatment plan.\n\n\nCONCLUSIONS\nCurrent techniques in nonsurgical facial rejuvenation and shaping with hyaluronic acid fillers are safe, effective, and long-lasting. Combination regimens that include surgical facial rejuvenation techniques and/or coadministration of botulinum toxin type A further optimize results, leading to greater patient satisfaction.",
"title": ""
},
{
"docid": "neg:1840522_9",
"text": "Focussed structured document retrieval employs the concept of best entry points (BEPs), which are intended to provide optimal starting-points from which users can browse to relevant document components. This paper describes two small-scale studies, using experimental data from the Shakespeare user study, which developed and evaluated different approaches to the problem of automatic identification of BEPs.",
"title": ""
},
{
"docid": "neg:1840522_10",
"text": "Insufficient supply of animal protein is a major problem in developing countries including Nigeria. Rabbits are adjudged to be a convenient source of palatable and nutritious meat, high in protein, and contain low fat and cholesterol. A doe can produce more than 15 times her own weight in offspring in a year. However, its productivity may be limited by inadequate nutrition. The objective of this study was to determine the effect of probiotic (Saccharomyces cerevisiae) supplementation on growth performance and some hematological parameters of rabbit. The appropriate level of the probiotic inclusion for excellent health status and optimum productivity was also determined. A total of 40 male rabbits were randomly divided into four groups (A–D) of ten rabbits each. Each group was subdivided into two replicates of five rabbits each. They were fed pelleted grower mash ad libitum. The feed for groups A to C were supplemented with bioactive yeast (probiotic) at inclusion levels of 0.08, 0.12, and 0.16 g yeast/kg diet, respectively. Group D had no yeast (control). Daily feed intake was determined. The rabbits were weighed weekly. The packed cell volume (PCV), hemoglobin concentration, white blood cell total, and differential counts were determined at the 8th week, 16th week, and 22nd week following standard procedures. The three results which did not have any significant difference were pooled together. Group A which had 0.08 g yeast/kg of diet had a significantly lower (P ≤ 0.05) PCV than groups B (which had 0.12 g yeast/kg of diet) and C (which had 0.16 g yeast/kg of diet) as well as D (the control). Total WBC count for groups B and C (14.35 ± 0.100 × 103/μl and 14.65 ± 0.786 × 103/μl, respectively) were significantly higher (P ≤ 0.05) than groups A and D (6.33 ± 0.335 × 103/μl and 10.40 ± 0.296 × 103/μl, respectively). Also the absolute neutrophils and lymphocytes counts were significantly higher (P ≤ 0.05) in groups B and C than in groups A and D. Group B had significantly higher (P ≤ 0.05) weight gain (1.025 ± 0.006 kg/rabbit) followed by group A (0.950 ± 0.092 kg/rabbit). The control (group D) had the least weight gain of 0.623 ± 0.0.099 kg/rabbit. These results showed that like most probiotics, bioactive yeast at an appropriate level of inclusion had a significant beneficial effect on health status and growth rate of rabbit. Probiotic supplementation level of 0.12 g yeast/kg of diet was recommended for optimum rabbit production.",
"title": ""
},
{
"docid": "neg:1840522_11",
"text": "We have built an anatomically correct testbed (ACT) hand with the purpose of understanding the intrinsic biomechanical and control features in human hands that are critical for achieving robust, versatile, and dexterous movements, as well as rich object and world exploration. By mimicking the underlying mechanics and controls of the human hand in a hardware platform, our goal is to achieve previously unmatched grasping and manipulation skills. In this paper, the novel constituting mechanisms, unique muscle to joint relationships, and movement demonstrations of the thumb, index finger, middle finger, and wrist of the ACT Hand are presented. The grasping and manipulation abilities of the ACT Hand are also illustrated. The fully functional ACT Hand platform allows for the possibility to design and experiment with novel control algorithms leading to a deeper understanding of human dexterity.",
"title": ""
},
{
"docid": "neg:1840522_12",
"text": "The potential of BIM is generally recognized in the construction industry, but the practical application of BIM for management purposes is, however, still limited among contractors. The objective of this study is to review the current scheduling process of construction in light of BIM-based scheduling, and to identify how it should be incorporated into current practice. The analysis of the current scheduling processes identifies significant discrepancies between the overall and the detailed levels of scheduling. The overall scheduling process is described as an individual endeavor with limited and unsystematic sharing of knowledge within and between projects. Thus, the reuse of scheduling data and experiences are inadequate, preventing continuous improvements of the overall schedules. Besides, the overall scheduling process suffers from lack of information, caused by uncoordinated and unsynchronized overlap of the design and construction processes. Consequently, the overall scheduling is primarily based on intuition and personal experiences, rather than well founded figures of the specific project. Finally, the overall schedule is comprehensive and complex, and consequently, difficult to overview and communicate. Scheduling on the detailed level, on the other hand, follows a stipulated approach to scheduling, i.e. the Last Planner System (LPS), which is characterized by involvement of all actors in the construction phase. Thus, the major challenge when implementing BIM-based scheduling is to improve overall scheduling, which in turn, can secure a better starting point of the LPS. The study points to the necessity of involving subcontractors and manufactures in the earliest phases of the project in order to create project specific information for the overall schedule. In addition, the design process should be prioritized and coordinated with each craft, a process library should be introduced to promote transfer of knowledge and continuous improvements, and information flow between design and scheduling processes must change from push to pull.",
"title": ""
},
{
"docid": "neg:1840522_13",
"text": "Finding sparse approximate solutions to large underdetermined linear systems of equations is a common problem in signal/image processing and statistics. Basis pursuit, the least absolute shrinkage and selection operator (LASSO), wavelet-based deconvolution and reconstruction, and compressed sensing (CS) are a few well-known areas in which problems of this type appear. One standard approach is to minimize an objective function that includes a quadratic (lscr 2) error term added to a sparsity-inducing (usually lscr1) regularizater. We present an algorithmic framework for the more general problem of minimizing the sum of a smooth convex function and a nonsmooth, possibly nonconvex regularizer. We propose iterative methods in which each step is obtained by solving an optimization subproblem involving a quadratic term with diagonal Hessian (i.e., separable in the unknowns) plus the original sparsity-inducing regularizer; our approach is suitable for cases in which this subproblem can be solved much more rapidly than the original problem. Under mild conditions (namely convexity of the regularizer), we prove convergence of the proposed iterative algorithm to a minimum of the objective function. In addition to solving the standard lscr2-lscr1 case, our framework yields efficient solution techniques for other regularizers, such as an lscrinfin norm and group-separable regularizers. It also generalizes immediately to the case in which the data is complex rather than real. Experiments with CS problems show that our approach is competitive with the fastest known methods for the standard lscr2-lscr1 problem, as well as being efficient on problems with other separable regularization terms.",
"title": ""
},
{
"docid": "neg:1840522_14",
"text": "Current Domain Adaptation (DA) methods based on deep architectures assume that the source samples arise from a single distribution. However, in practice most datasets can be regarded as mixtures of multiple domains. In these cases exploiting single-source DA methods for learning target classifiers may lead to sub-optimal, if not poor, results. In addition, in many applications it is difficult to manually provide the domain labels for all source data points, i.e. latent domains should be automatically discovered. This paper introduces a novel Convolutional Neural Network (CNN) architecture which (i) automatically discovers latent domains in visual datasets and (ii) exploits this information to learn robust target classifiers. Our approach is based on the introduction of two main components, which can be embedded into any existing CNN architecture: (i) a side branch that automatically computes the assignment of a source sample to a latent domain and (ii) novel layers that exploit domain membership information to appropriately align the distribution of the CNN internal feature representations to a reference distribution. We test our approach on publicly-available datasets, showing that it outperforms state-of-the-art multi-source DA methods by a large margin.",
"title": ""
},
{
"docid": "neg:1840522_15",
"text": "This paper proposes steganalysis methods for extensions of least-significant bit (LSB) overwriting to both of the two lowest bit planes in digital images: there are two distinct embedding paradigms. The author investigates how detectors for standard LSB replacement can be adapted to such embedding, and how the methods of \"structural steganalysis\", which gives the most sensitive detectors for standard LSB replacement, may be extended and applied to make more sensitive purpose-built detectors for two bit plane steganography. The literature contains only one other detector specialized to detect replacement multiple bits, and those presented here are substantially more sensitive. The author also compares the detectability of standard LSB embedding with the two methods of embedding in the lower two bit planes: although the novel detectors have a high accuracy from the steganographer's point of view, the empirical results indicate that embedding in the two lowest bit planes is preferable (in some cases, highly preferable) to embedding in one",
"title": ""
},
{
"docid": "neg:1840522_16",
"text": "Time series prediction techniques have been used in many real-world applications such as financial market prediction, electric utility load forecasting , weather and environmental state prediction, and reliability forecasting. The underlying system models and time series data generating processes are generally complex for these applications and the models for these systems are usually not known a priori. Accurate and unbiased estimation of the time series data produced by these systems cannot always be achieved using well known linear techniques, and thus the estimation process requires more advanced time series prediction algorithms. This paper provides a survey of time series prediction applications using a novel machine learning approach: support vector machines (SVM). The underlying motivation for using SVMs is the ability of this methodology to accurately forecast time series data when the underlying system processes are typically nonlinear, non-stationary and not defined a-priori. SVMs have also been proven to outperform other non-linear techniques including neural-network based non-linear prediction techniques such as multi-layer perceptrons.The ultimate goal is to provide the reader with insight into the applications using SVM for time series prediction, to give a brief tutorial on SVMs for time series prediction, to outline some of the advantages and challenges in using SVMs for time series prediction, and to provide a source for the reader to locate books, technical journals, and other online SVM research resources.",
"title": ""
},
{
"docid": "neg:1840522_17",
"text": "This letter proposes a novel wideband circularly polarized magnetoelectric dipole antenna. In the proposed antenna, a pair of rotationally symmetric horizontal patches functions as an electric dipole, and two vertical patches with the ground act as an equivalent magnetic dipole. A Γ-shaped probe is used to excite the antenna, and a metallic cavity with two gaps is designed for wideband and good performance in radiation. A prototype was fabricated and measured. The experimental results show that the proposed antenna has an impedance bandwidth of 65% for SWR≤2 from 1.76 to 3.46 GHz, a 3-dB axial-ratio bandwidth of 71.5% from 1.68 to 3.55 GHz, and a stable gain of 8 ± 1 dBi. Good unidirectional radiation characteristic and low back-lobe level are achieved over the whole operating frequency band.",
"title": ""
},
{
"docid": "neg:1840522_18",
"text": "P2P lending is a new form of lending where in the lenders and borrowers can meet at a common platform like Prosper and ZOPA and strike a best deal. While the borrower looks for a lender who offers the fund at a cheaper interest rate, the lender looks for a borrower whose probability of default is nil or minimal. The peer to peer lending sites can help the lenders judge the borrower by allowing the analysis of the social structures like friendship networks and group affiliations. A particular user can be judged based on his profile and on the information extracted from his social network like borrower's friend's profile and activities (like lending, borrowing and bidding activities). We are using classification algorithm to classify good and bad borrowers, where the input attributes consists of both core credit and social network information. Most of these algorithms only take a single table as input, whereas in the real world most data are stored in multiple tables and managed by relational database systems. Transferring data from multiple tables into a single table, especially merging the social network data causes problems like high redundancy. A simple classifier performs well on real single table data but when applied in a multi-relational (Multi table) setting; its accuracy suffers from the altered statistical information of individual attributes during “join”. Therefore we are using a multi relational Bayesian classification method to predict the default probabilities of borrowers.",
"title": ""
},
{
"docid": "neg:1840522_19",
"text": "Face recognition is challenge task which involves determining the identity of facial images. With availability of a massive amount of labeled facial images gathered from Internet, deep convolution neural networks(DCNNs) have achieved great success in face recognition tasks. Those images are gathered from unconstrain environment, which contain people with different ethnicity, age, gender and so on. However, in the actual application scenario, the target face database may be gathered under different conditions compered with source training dataset, e.g. different ethnicity, different age distribution, disparate shooting environment. These factors increase domain discrepancy between source training database and target application database which makes the learnt model degenerate in target database. Meanwhile, for the target database where labeled data are lacking or unavailable, directly using target data to fine-tune pre-learnt model becomes intractable and impractical. In this paper, we adopt unsupervised transfer learning methods to address this issue. To alleviate the discrepancy between source and target face database and ensure the generalization ability of the model, we constrain the maximum mean discrepancy (MMD) between source database and target database and utilize the massive amount of labeled facial images of source database to training the deep neural network at the same time. We evaluate our method on two face recognition benchmarks and significantly enhance the performance without utilizing the target label.",
"title": ""
}
] |
1840523 | Towards Music Imagery Information Retrieval: Introducing the OpenMIIR Dataset of EEG Recordings from Music Perception and Imagination | [
{
"docid": "pos:1840523_0",
"text": "Electroencephalographic measurements are commonly used in medical and research areas. This review article presents an introduction into EEG measurement. Its purpose is to help with orientation in EEG field and with building basic knowledge for performing EEG recordings. The article is divided into two parts. In the first part, background of the subject, a brief historical overview, and some EEG related research areas are given. The second part explains EEG recording.",
"title": ""
}
] | [
{
"docid": "neg:1840523_0",
"text": "Spectral graph partitioning provides a powerful approach to image segmentation. We introduce an alternate idea that finds partitions with a small isoperimetric constant, requiring solution to a linear system rather than an eigenvector problem. This approach produces the high quality segmentations of spectral methods, but with improved speed and stability.",
"title": ""
},
{
"docid": "neg:1840523_1",
"text": "OBJECTIVE\nThe aim of the study was to evaluate efficacy of fractional CO2 vaginal laser treatment (Laser, L) and compare it to local estrogen therapy (Estriol, E) and the combination of both treatments (Laser + Estriol, LE) in the treatment of vulvovaginal atrophy (VVA).\n\n\nMETHODS\nA total of 45 postmenopausal women meeting inclusion criteria were randomized in L, E, or LE groups. Assessments at baseline, 8 and 20 weeks, were conducted using Vaginal Health Index (VHI), Visual Analog Scale for VVA symptoms (dyspareunia, dryness, and burning), Female Sexual Function Index, and maturation value (MV) of Meisels.\n\n\nRESULTS\nForty-five women were included and 3 women were lost to follow-up. VHI average score was significantly higher at weeks 8 and 20 in all study arms. At week 20, the LE arm also showed incremental improvement of VHI score (P = 0.01). L and LE groups showed a significant improvement of dyspareunia, burning, and dryness, and the E arm only of dryness (P < 0.001). LE group presented significant improvement of total Female Sex Function Index (FSFI) score (P = 0.02) and individual domains of pain, desire, and lubrication. In contrast, the L group showed significant worsening of pain domain in FSFI (P = 0.04), but FSFI total scores were comparable in all treatment arms at week 20.\n\n\nCONCLUSIONS\nCO2 vaginal laser alone or in combination with topical estriol is a good treatment option for VVA symptoms. Sexual-related pain with vaginal laser treatment might be of concern.",
"title": ""
},
{
"docid": "neg:1840523_2",
"text": "This study explores the influence of wastewater feedstock composition on hydrothermal liquefaction (HTL) biocrude oil properties and physico-chemical characteristics. Spirulina algae, swine manure, and digested sludge were converted under HTL conditions (300°C, 10-12 MPa, and 30 min reaction time). Biocrude yields ranged from 9.4% (digested sludge) to 32.6% (Spirulina). Although similar higher heating values (32.0-34.7 MJ/kg) were estimated for all product oils, more detailed characterization revealed significant differences in biocrude chemistry. Feedstock composition influenced the individual compounds identified as well as the biocrude functional group chemistry. Molecular weights tracked with obdurate carbohydrate content and followed the order of Spirulina<swine manure<digested sludge. A similar trend was observed in boiling point distributions and the long branched aliphatic contents. These findings show the importance of HTL feedstock composition and highlight the need for better understanding of biocrude chemistries when considering bio-oil uses and upgrading requirements.",
"title": ""
},
{
"docid": "neg:1840523_3",
"text": "We discuss fast exponential time solutions for NP-complete problems. We survey known results and approaches, we provide pointers to the literature, and we discuss several open problems in this area. The list of discussed NP-complete problems includes the travelling salesman problem, scheduling under precedence constraints, satisfiability, knapsack, graph coloring, independent sets in graphs, bandwidth of a graph, and many more.",
"title": ""
},
{
"docid": "neg:1840523_4",
"text": "At the global level of the Big Five, Extraversion and Neuroticism are the strongest predictors of life satisfaction. However, Extraversion and Neuroticism are multifaceted constructs that combine more specific traits. This article examined the contribution of facets of Extraversion and Neuroticism to life satisfaction in four studies. The depression facet of Neuroticism and the positive emotions/cheerfulness facet of Extraversion were the strongest and most consistent predictors of life satisfaction. These two facets often accounted for more variance in life satisfaction than Neuroticism and Extraversion. The findings suggest that measures of depression and positive emotions/cheerfulness are necessary and sufficient to predict life satisfaction from personality traits. The results also lead to a more refined understanding of the specific personality traits that influence life satisfaction: Depression is more important than anxiety or anger and a cheerful temperament is more important than being active or sociable.",
"title": ""
},
{
"docid": "neg:1840523_5",
"text": "This paper presents a novel system to estimate body pose configuration from a single depth map. It combines both pose detection and pose refinement. The input depth map is matched with a set of pre-captured motion exemplars to generate a body configuration estimation, as well as semantic labeling of the input point cloud. The initial estimation is then refined by directly fitting the body configuration with the observation (e.g., the input depth). In addition to the new system architecture, our other contributions include modifying a point cloud smoothing technique to deal with very noisy input depth maps, a point cloud alignment and pose search algorithm that is view-independent and efficient. Experiments on a public dataset show that our approach achieves significantly higher accuracy than previous state-of-art methods.",
"title": ""
},
{
"docid": "neg:1840523_6",
"text": "Maneuvering vessel detection and tracking (VDT), incorporated with state estimation and trajectory prediction, are important tasks for vessel navigational systems (VNSs), as well as vessel traffic monitoring and information systems (VTMISs) to improve maritime safety and security in ocean navigation. Although conventional VNSs and VTMISs are equipped with maritime surveillance systems for the same purpose, intelligent capabilities for vessel detection, tracking, state estimation, and navigational trajectory prediction are underdeveloped. Therefore, the integration of intelligent features into VTMISs is proposed in this paper. The first part of this paper is focused on detecting and tracking of a multiple-vessel situation. An artificial neural network (ANN) is proposed as the mechanism for detecting and tracking multiple vessels. In the second part of this paper, vessel state estimation and navigational trajectory prediction of a single-vessel situation are considered. An extended Kalman filter (EKF) is proposed for the estimation of vessel states and further used for the prediction of vessel trajectories. Finally, the proposed VTMIS is simulated, and successful simulation results are presented in this paper.",
"title": ""
},
{
"docid": "neg:1840523_7",
"text": "In prior work we have demonstrated the noise robustness of a novel microphone solution, the PARAT earplug communication terminal. Here we extend that work with results for the ETSI Advanced Front-End and segmental cepstral mean and variance normalization (CMVN). We also propose a method for doing CMVN in the model domain. This removes the need to train models on normalized features, which may significantly extend the applicability of CMVN. The recognition results are comparable to those of the traditional approach.",
"title": ""
},
{
"docid": "neg:1840523_8",
"text": "Reinforcement learning schemes perform direct on-line search in control space. This makes them appropriate for modifying control rules to obtain improvements in the performance of a system. The effectiveness of a reinforcement learning strategy is studied here through the training of a learning classz$er system (LCS) that controls the movement of an autonomous vehicle in simulated paths including left and right turns. The LCS comprises a set of conditionaction rules (classifiers) that compete to control the system and evolve by means of a genetic algorithm (GA). Evolution and operation of classifiers depend upon an appropriate credit assignment mechanism based on reinforcement learning. Different design options and the role of various parameters have been investigated experimentally. The performance of vehicle movement under the proposed evolutionary approach is superior compared with that of other (neural) approaches based on reinforcement learning that have been applied previously to the same benchmark problem.",
"title": ""
},
{
"docid": "neg:1840523_9",
"text": "We propose a framework for automatic modeling, detection, and tracking of 3D objects with a Kinect. The detection part is mainly based on the recent template-based LINEMOD approach [1] for object detection. We show how to build the templates automatically from 3D models, and how to estimate the 6 degrees-of-freedom pose accurately and in real-time. The pose estimation and the color information allow us to check the detection hypotheses and improves the correct detection rate by 13% with respect to the original LINEMOD. These many improvements make our framework suitable for object manipulation in Robotics applications. Moreover we propose a new dataset made of 15 registered, 1100+ frame video sequences of 15 various objects for the evaluation of future competing methods. Fig. 1. 15 different texture-less 3D objects are simultaneously detected with our approach under different poses on heavy cluttered background with partial occlusion. Each detected object is augmented with its 3D model. We also show the corresponding coordinate systems.",
"title": ""
},
{
"docid": "neg:1840523_10",
"text": "GPU-based clusters are increasingly being deployed in HPC environments to accelerate a variety of scientific applications. Despite their growing popularity, the GPU devices themselves are under-utilized even for many computationally-intensive jobs. This stems from the fact that the typical GPU usage model is one in which a host processor periodically offloads computationally intensive portions of an application to the coprocessor. Since some portions of code cannot be offloaded to the GPU (for example, code performing network communication in MPI applications), this usage model results in periods of time when the GPU is idle. GPUs could be time-shared across jobs to \"fill\" these idle periods, but unlike CPU resources such as the cache, the effects of sharing the GPU are not well understood. Specifically, two jobs that time-share a single GPU will experience resource contention and interfere with each other. The resulting slow-down could lead to missed job deadlines. Current cluster managers do not support GPU-sharing, but instead dedicate GPUs to a job for the job's lifetime.\n In this paper, we present a framework to predict and handle interference when two or more jobs time-share GPUs in HPC clusters. Our framework consists of an analysis model, and a dynamic interference detection and response mechanism to detect excessive interference and restart the interfering jobs on different nodes. We implement our framework in Torque, an open-source cluster manager, and using real workloads on an HPC cluster, show that interference-aware two-job colocation (although our method is applicable to colocating more than two jobs) improves GPU utilization by 25%, reduces a job's waiting time in the queue by 39% and improves job latencies by around 20%.",
"title": ""
},
{
"docid": "neg:1840523_11",
"text": "Sign language, which is a medium of communication for deaf people, uses manual communication and body language to convey meaning, as opposed to using sound. This paper presents a prototype Malayalam text to sign language translation system. The proposed system takes Malayalam text as input and generates corresponding Sign Language. Output animation is rendered using a computer generated model. This system will help to disseminate information to the deaf people in public utility places like railways, banks, hospitals etc. This will also act as an educational tool in learning Sign Language.",
"title": ""
},
{
"docid": "neg:1840523_12",
"text": "Ultra wideband components have been developed using SIW technology. The various components including a GCPW transition with less than 0.4dB insertion loss are developed. In addition to, T and Y-junctions are optimized with relatively wide bandwidth of greater than 63% and 40% respectively that have less than 0.6 dB insertion loss. The developed transition was utilized to design an X-band 8 way power divider that demonstrated excellent performance over a 5 GHz bandwidth with less than ±4º and ±0.9 dB phase and amplitude imbalance, respectively. The developed SIW power divider has a low profile and is particularly suitable for circuits' integration.",
"title": ""
},
{
"docid": "neg:1840523_13",
"text": "In this work we present a method to improve the pruning step of the current state-of-the-art methodology to compress neural networks. The novelty of the proposed pruning technique is in its differentiability, which allows pruning to be performed during the backpropagation phase of the network training. This enables an end-to-end learning and strongly reduces the training time. The technique is based on a family of differentiable pruning functions and a new regularizer specifically designed to enforce pruning. The experimental results show that the joint optimization of both the thresholds and the network weights permits to reach a higher compression rate, reducing the number of weights of the pruned network by a further 14% to 33 % compared to the current state-of-the-art. Furthermore, we believe that this is the first study where the generalization capabilities in transfer learning tasks of the features extracted by a pruned network are analyzed. To achieve this goal, we show that the representations learned using the proposed pruning methodology maintain the same effectiveness and generality of those learned by the corresponding non-compressed network on a set of different recognition tasks.",
"title": ""
},
{
"docid": "neg:1840523_14",
"text": "This article investigates the problem of Simultaneous Localization and Mapping (SLAM) from the perspective of linear estimation theory. The problem is first formulated in terms of graph embedding: a graph describing robot poses at subsequent instants of time needs be embedded in a three-dimensional space, assuring that the estimated configuration maximizes measurement likelihood. Combining tools belonging to linear estimation and graph theory, a closed-form approximation to the full SLAM problem is proposed, under the assumption that the relative position and the relative orientation measurements are independent. The approach needs no initial guess for optimization and is formally proven to admit solution under the SLAM setup. The resulting estimate can be used as an approximation of the actual nonlinear solution or can be further refined by using it as an initial guess for nonlinear optimization techniques. Finally, the experimental analysis demonstrates that such refinement is often unnecessary, since the linear estimate is already accurate.",
"title": ""
},
{
"docid": "neg:1840523_15",
"text": "Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communitites. The experimental evaluation shows that substantial improvements in accucracy over existing methods and published results can be obtained.",
"title": ""
},
{
"docid": "neg:1840523_16",
"text": "A member of the Liliaceae family, garlic ( Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities. Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias) children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.",
"title": ""
},
{
"docid": "neg:1840523_17",
"text": "The wireless industry has been experiencing an explosion of data traffic usage in recent years and is now facing an even bigger challenge, an astounding 1000-fold data traffic increase in a decade. The required traffic increase is in bits per second per square kilometer, which is equivalent to bits per second per Hertz per cell × Hertz × cell per square kilometer. The innovations through higher utilization of the spectrum (bits per second per Hertz per cell) and utilization of more bandwidth (Hertz) are quite limited: spectral efficiency of a point-to-point link is very close to the theoretical limits, and utilization of more bandwidth is a very costly solution in general. Hyper-dense deployment of heterogeneous and small cell networks (HetSNets) that increase cells per square kilometer by deploying more cells in a given area is a very promising technique as it would provide a huge capacity gain by bringing small base stations closer to mobile devices. This article presents a holistic view on hyperdense HetSNets, which include fundamental preference in future wireless systems, and technical challenges and recent technological breakthroughs made in such networks. Advancements in modeling and analysis tools for hyper-dense HetSNets are also introduced with some additional interference mitigation and higher spectrum utilization techniques. This article ends with a promising view on the hyper-dense HetSNets to meet the upcoming 1000× data challenge.",
"title": ""
},
{
"docid": "neg:1840523_18",
"text": "In this paper, a prevalent type of zero-voltage- transition bidirectional converters is analyzed with the inclusion of the reverse recovery effect of the diodes. The main drawback of this type is missing the soft-switching condition of the main switches at operating duty cycles smaller than 0.5. As a result, soft-switching condition would be lost in one of the bidirectional converter operating modes (forward or reverse modes) since the duty cycles of the forward and reverse modes are complement of each other. Analysis shows that the rectifying diode reverse recovery would assist in providing the soft-switching condition for the duty cycles below 0.5, which is done by a proper design of the snubber capacitor and with no limitation on the rectifying diode current rate at turn-off. Hence, the problems associated with the soft-switching range and the reverse recovery of the rectifying diode are solved simultaneously, and soft-switching condition for both operating modes of the bidirectional converter is achieved with no extra auxiliary components and no complex control. The theoretical analysis for a bidirectional buck and boost converter is presented in detail, and the validity of the theoretical analysis is justified using the experimental results of a 250-W 135- to 200-V prototype converter.",
"title": ""
}
] |
1840524 | AGIL: Learning Attention from Human for Visuomotor Tasks | [
{
"docid": "pos:1840524_0",
"text": "A critical function in both machine vision and biological vision systems is attentional selection of scene regions worthy of further analysis by higher-level processes such as object recognition. Here we present the first model of spatial attention that (1) can be applied to arbitrary static and dynamic image sequences with interactive tasks and (2) combines a general computational implementation of both bottom-up (BU) saliency and dynamic top-down (TD) task relevance; the claimed novelty lies in the combination of these elements and in the fully computational nature of the model. The BU component computes a saliency map from 12 low-level multi-scale visual features. The TD component computes a low-level signature of the entire image, and learns to associate different classes of signatures with the different gaze patterns recorded from human subjects performing a task of interest. We measured the ability of this model to predict the eye movements of people playing contemporary video games. We found that the TD model alone predicts where humans look about twice as well as does the BU model alone; in addition, a combined BU*TD model performs significantly better than either individual component. Qualitatively, the combined model predicts some easy-to-describe but hard-to-compute aspects of attentional selection, such as shifting attention leftward when approaching a left turn along a racing track. Thus, our study demonstrates the advantages of integrating BU factors derived from a saliency map and TD factors learned from image and task contexts in predicting where humans look while performing complex visually-guided behavior.",
"title": ""
},
{
"docid": "pos:1840524_1",
"text": "Deep reinforcement learning (RL) has achieved several high profile successes in difficult decision-making problems. However, these algorithms typically require a huge amount of data before they reach reasonable performance. In fact, their performance during learning can be extremely poor. This may be acceptable for a simulator, but it severely limits the applicability of deep RL to many real-world tasks, where the agent must learn in the real environment. In this paper we study a setting where the agent may access data from previous control of the system. We present an algorithm, Deep Q-learning from Demonstrations (DQfD), that leverages small sets of demonstration data to massively accelerate the learning process even from relatively small amounts of demonstration data and is able to automatically assess the necessary ratio of demonstration data while learning thanks to a prioritized replay mechanism. DQfD works by combining temporal difference updates with supervised classification of the demonstrator’s actions. We show that DQfD has better initial performance than Prioritized Dueling Double Deep Q-Networks (PDD DQN) as it starts with better scores on the first million steps on 41 of 42 games and on average it takes PDD DQN 83 million steps to catch up to DQfD’s performance. DQfD learns to out-perform the best demonstration given in 14 of 42 games. In addition, DQfD leverages human demonstrations to achieve state-of-the-art results for 11 games. Finally, we show that DQfD performs better than three related algorithms for incorporating demonstration data into DQN.",
"title": ""
}
] | [
{
"docid": "neg:1840524_0",
"text": "1. Associate Professor of Oncology of the State University of Ceará; Clinical Director of the Cancer Hospital of Ceará 2. Resident in Urology of Urology Department of the Federal University of Ceará 3. Associate Professor of Urology of the State University of Ceará; Assistant of the Division of Uro-Oncology, Cancer Hospital of Ceará 4. Professor of Urology Department of the Federal University of Ceará; Chief of Division of Uro-Oncology, Cancer Hospital of Ceará",
"title": ""
},
{
"docid": "neg:1840524_1",
"text": "Count-based exploration algorithms are known to perform near-optimally when used in conjunction with tabular reinforcement learning (RL) methods for solving small discrete Markov decision processes (MDPs). It is generally thought that count-based methods cannot be applied in high-dimensional state spaces, since most states will only occur once. Recent deep RL exploration strategies are able to deal with high-dimensional continuous state spaces through complex heuristics, often relying on optimism in the face of uncertainty or intrinsic motivation. In this work, we describe a surprising finding: a simple generalization of the classic count-based approach can reach near state-of-the-art performance on various highdimensional and/or continuous deep RL benchmarks. States are mapped to hash codes, which allows to count their occurrences with a hash table. These counts are then used to compute a reward bonus according to the classic count-based exploration theory. We find that simple hash functions can achieve surprisingly good results on many challenging tasks. Furthermore, we show that a domaindependent learned hash code may further improve these results. Detailed analysis reveals important aspects of a good hash function: 1) having appropriate granularity and 2) encoding information relevant to solving the MDP. This exploration strategy achieves near state-of-the-art performance on both continuous control tasks and Atari 2600 games, hence providing a simple yet powerful baseline for solving MDPs that require considerable exploration.",
"title": ""
},
{
"docid": "neg:1840524_2",
"text": "Dempster-Shafer theory offers an alternative to traditional probabilistic theory for the mathematical representation of uncertainty. The significant innovation of this framework is that it allows for the allocation of a probability mass to sets or intervals. DempsterShafer theory does not require an assumption regarding the probability of the individual constituents of the set or interval. This is a potentially valuable tool for the evaluation of risk and reliability in engineering applications when it is not possible to obtain a precise measurement from experiments, or when knowledge is obtained from expert elicitation. An important aspect of this theory is the combination of evidence obtained from multiple sources and the modeling of conflict between them. This report surveys a number of possible combination rules for Dempster-Shafer structures and provides examples of the implementation of these rules for discrete and interval-valued data.",
"title": ""
},
{
"docid": "neg:1840524_3",
"text": "Emojis are small images that are commonly included in social media text messages. The combination of visual and textual content in the same message builds up a modern way of communication, that automatic systems are not used to deal with. In this paper we extend recent advances in emoji prediction by putting forward a multimodal approach that is able to predict emojis in Instagram posts. Instagram posts are composed of pictures together with texts which sometimes include emojis. We show that these emojis can be predicted by using the text, but also using the picture. Our main finding is that incorporating the two synergistic modalities, in a combined model, improves accuracy in an emoji prediction task. This result demonstrates that these two modalities (text and images) encode different information on the use of emojis and therefore can complement each other.",
"title": ""
},
{
"docid": "neg:1840524_4",
"text": "This paper introduces a new method to solve the cross-domain recognition problem. Different from the traditional domain adaption methods which rely on a global domain shift for all classes between the source and target domains, the proposed method is more flexible to capture individual class variations across domains. By adopting a natural and widely used assumption that the data samples from the same class should lay on an intrinsic low-dimensional subspace, even if they come from different domains, the proposed method circumvents the limitation of the global domain shift, and solves the cross-domain recognition by finding the joint subspaces of the source and target domains. Specifically, given labeled samples in the source domain, we construct a subspace for each of the classes. Then we construct subspaces in the target domain, called anchor subspaces, by collecting unlabeled samples that are close to each other and are highly likely to belong to the same class. The corresponding class label is then assigned by minimizing a cost function which reflects the overlap and topological structure consistency between subspaces across the source and target domains, and within the anchor subspaces, respectively. We further combine the anchor subspaces to the corresponding source subspaces to construct the joint subspaces. Subsequently, one-versus-rest support vector machine classifiers are trained using the data samples belonging to the same joint subspaces and applied to unlabeled data in the target domain. We evaluate the proposed method on two widely used datasets: 1) object recognition dataset for computer vision tasks and 2) sentiment classification dataset for natural language processing tasks. Comparison results demonstrate that the proposed method outperforms the comparison methods on both datasets.",
"title": ""
},
{
"docid": "neg:1840524_5",
"text": "Adult knowledge of a language involves correctly balancing lexically-based and more language-general patterns. For example, verb argument structures may sometimes readily generalize to new verbs, yet with particular verbs may resist generalization. From the perspective of acquisition, this creates significant learnability problems, with some researchers claiming a crucial role for verb semantics in the determination of when generalization may and may not occur. Similarly, there has been debate regarding how verb-specific and more generalized constraints interact in sentence processing and on the role of semantics in this process. The current work explores these issues using artificial language learning. In three experiments using languages without semantic cues to verb distribution, we demonstrate that learners can acquire both verb-specific and verb-general patterns, based on distributional information in the linguistic input regarding each of the verbs as well as across the language as a whole. As with natural languages, these factors are shown to affect production, judgments and real-time processing. We demonstrate that learners apply a rational procedure in determining their usage of these different input statistics and conclude by suggesting that a Bayesian perspective on statistical learning may be an appropriate framework for capturing our findings.",
"title": ""
},
{
"docid": "neg:1840524_6",
"text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.",
"title": ""
},
{
"docid": "neg:1840524_7",
"text": "The polar format algorithm (PFA) for spotlight synthetic aperture radar (SAR) is based on a linear approximation for the differential range to a scatterer. We derive a second-order Taylor series approximation of the differential range. We provide a simple and concise derivation of both the far-field linear approximation of the differential range, which forms the basis of the PFA, and the corresponding approximation limits based on the second-order terms of the approximation.",
"title": ""
},
{
"docid": "neg:1840524_8",
"text": "A brief account is given of the discovery of abscisic acid (ABA) in roots and root caps of higher plants as well as the techniques by which ABA may be demonstrated in these tissues. The remainder of the review is concerned with examining the rôle of ABA in the regulation of root growth. In this regard, it is well established that when ABA is supplied to roots their elongation is usually inhibited, although at low external concentrations a stimulation of growth may also be found. Fewer observations have been directed at exploring the connection between root growth and the level of naturally occurring, endogenous ABA. Nevertheless, the evidence here also suggests that ABA is an inhibitory regulator of root growth. Moreover, ABA appears to be involved in the differential growth that arises in response to a gravitational stimulus. Recent reports that deny a rôle for ABA in root gravitropism are considered inconclusive. The response of roots to osmotic stress and the changes in ABA levels which ensue, are summarised; so are the interrelations between ABA and other hormones, particularly auxin (e.g. indoleacetic acid); both are considered in the context of the root growth and development. Quantitative changes in auxin and ABA levels may together provide the root with a flexible means of regulating its growth.",
"title": ""
},
{
"docid": "neg:1840524_9",
"text": "In this paper, we present an extraction and characterization methodology which allows for the determination, from S-parameter measurements, of the threshold voltage, the gain factor, and the mobility degradation factor, neither requiring data regressions involving multiple devices nor DC measurements. This methodology takes into account the substrate effects occurring in MOSFETs built in bulk technology so that physically meaningful parameters can be obtained. Furthermore, an analysis of the substrate impedance is presented, showing that this parasitic component not only degrades the performance of a microwave MOSFET, but may also lead to determining unrealistic values for the model parameters when not considered during a high-frequency characterization process. Measurements were made on transistors of different lengths, the shortest being 80 nm, in the 10 MHz to 40 GHz frequency range. 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840524_10",
"text": "This paper proposes an acceleration-based robust controller for the motion control problem, i.e., position and force control problems, of a novel series elastic actuator (SEA). A variable stiffness SEA is designed by using soft and hard springs in series so as to relax the fundamental performance limitation of conventional SEAs. Although the proposed SEA intrinsically has several superiorities in force control, its motion control problem, especially position control problem, is harder than conventional stiff and SEAs due to its special mechanical structure. It is shown that the performance of the novel SEA is limited when conventional motion control methods are used. The performance of the steady-state response is significantly improved by using disturbance observer (DOb), i.e., improving the robustness; however, it degrades the transient response by increasing the vibration at tip point. The vibration of the novel SEA and external disturbances are suppressed by using resonance ratio control (RRC) and arm DOb, respectively. The proposed method can be used in the motion control problem of conventional SEAs as well. The intrinsically safe mechanical structure and high-performance motion control system provide several benefits in industrial applications, e.g., robots can perform dexterous and versatile industrial tasks alongside people in a factory setting. The experimental results show viability of the proposals.",
"title": ""
},
{
"docid": "neg:1840524_11",
"text": "In some people, problematic cell phone use can lead to situations in which they lose control, similar to those observed in other cases of addiction. Although different scales have been developed to assess its severity, we lack an instrument that is able to determine the desire or craving associated with it. Thus, with the objective of evaluating craving for cell phone use, in this study, we develop and present the Mobile Phone Addiction Craving Scale (MPACS). It consists of eight Likert-style items, with 10 response options, referring to possible situations in which the interviewee is asked to evaluate the degree of restlessness that he or she feels if the cell phone is unavailable at the moment. It can be self-administered or integrated in an interview when abuse or problems are suspected. With the existence of a single dimension, reflected in the exploratory factor analysis (EFA), the scale presents adequate reliability and internal consistency (α = 0.919). Simultaneously, we are able to show significantly increased correlations (r = 0.785, p = 0.000) with the Mobile Phone Problematic Use Scale (MPPUS) and state anxiety (r = 0.330, p = 0.000). We are also able to find associations with impulsivity, measured using the urgency, premeditation, perseverance, and sensation seeking scale, particularly in the dimensions of negative urgency (r = 0.303, p = 0.000) and positive urgency (r = 0.290, p = 0.000), which confirms its construct validity. The analysis of these results conveys important discriminant validity among the MPPUS user categories that are obtained using the criteria by Chow et al. (1). The MPACS demonstrates higher levels of craving in persons up to 35 years of age, reversing with age. In contrast, we do not find significant differences among the sexes. Finally, a receiver operating characteristic (ROC) analysis allows us to establish the scores from which we are able to determine the different levels of craving, from the absence of craving to that referred to as addiction. Based on these results, we can conclude that this scale is a reliable tool that complements ongoing studies on problematic cell phone use.",
"title": ""
},
{
"docid": "neg:1840524_12",
"text": "Cryptocurrencies (or digital tokens, digital currencies, e.g., BTC, ETH, XRP, NEO) have been rapidly gaining ground in use, value, and understanding among the public, bringing astonishing profits to investors. Unlike other money and banking systems, most digital tokens do not require central authorities. Being decentralized poses significant challenges for credit rating. Most ICOs are currently not subject to government regulations, which makes a reliable credit rating system for ICO projects necessary and urgent. In this paper, we introduce ICORATING, the first learning–based cryptocurrency rating system. We exploit natural-language processing techniques to analyze various aspects of 2,251 digital currencies to date, such as white paper content, founding teams, Github repositories, websites, etc. Supervised learning models are used to correlate the life span and the price change of cryptocurrencies with these features. For the best setting, the proposed system is able to identify scam ICO projects with 0.83 precision. We hope this work will help investors identify scam ICOs and attract more efforts in automatically evaluating and analyzing ICO projects. 1 2 Author contributions: J. Li designed research; Z. Sun, Z. Deng, F. Li and P. Shi prepared the data; S. Bian and A. Yuan contributed analytic tools; P. Shi and Z. Deng labeled the dataset; J. Li, W. Monroe and W. Wang designed the experiments; J. Li, W. Wu, Z. Deng and T. Zhang performed the experiments; J. Li and T. Zhang wrote the paper; W. Monroe and A. Yuan proofread the paper. Author Contacts: Figure 1: Market capitalization v.s. time. Figure 2: The number of new ICO projects v.s. time.",
"title": ""
},
{
"docid": "neg:1840524_13",
"text": "The present research tested the hypothesis that concepts of gratitude are prototypically organized and explored whether lay concepts of gratitude are broader than researchers' concepts of gratitude. In five studies, evidence was found that concepts of gratitude are indeed prototypically organized. In Study 1, participants listed features of gratitude. In Study 2, participants reliably rated the centrality of these features. In Studies 3a and 3b, participants perceived that a hypothetical other was experiencing more gratitude when they read a narrative containing central as opposed to peripheral features. In Study 4, participants remembered more central than peripheral features in gratitude narratives. In Study 5a, participants generated more central than peripheral features when they wrote narratives about a gratitude incident, and in Studies 5a and 5b, participants generated both more specific and more generalized types of gratitude in similar narratives. Throughout, evidence showed that lay conceptions of gratitude are broader than current research definitions.",
"title": ""
},
{
"docid": "neg:1840524_14",
"text": "We propose a deep feed-forward neural network architecture for pixel-wise semantic scene labeling. It uses a novel recursive neural network architecture for context propagation, referred to as rCPN. It first maps the local visual features into a semantic space followed by a bottom-up aggregation of local information into a global representation of the entire image. Then a top-down propagation of the aggregated information takes place that enhances the contextual information of each local feature. Therefore, the information from every location in the image is propagated to every other location. Experimental results on Stanford background and SIFT Flow datasets show that the proposed method outperforms previous approaches. It is also orders of magnitude faster than previous methods and takes only 0.07 seconds on a GPU for pixel-wise labeling of a 256 x 256 image starting from raw RGB pixel values, given the super-pixel mask that takes an additional 0.3 seconds using an off-the-shelf implementation.",
"title": ""
},
{
"docid": "neg:1840524_15",
"text": "Today, event logs contain vast amounts of data that can easily overwhelm a human. Therefore, mining patterns from event logs is an important system management task. This paper presents a novel clustering algorithm for log file data sets which helps one to detect frequent patterns from log files, to build log file profiles, and to identify anomalous log file lines. Keywords—system monitoring, data mining, data clustering",
"title": ""
},
{
"docid": "neg:1840524_16",
"text": "the rapid changing Internet environment has formed a competitive business setting, which provides opportunities for conducting businesses online. Availability of online transaction systems enable users to buy and make payment for products and services using the Internet platform. Thus, customers’ involvements in online purchasing have become an important trend. However, since the market is comprised of many different people and cultures, with diverse viewpoints, e-commerce businesses are being challenged by the reality of complex behavior of consumers. Therefore, it is vital to identify the factors that affect consumers purchasing decision through e-commerce in respective cultures and societies. In response to this claim, the purpose of this study is to explore the factors affecting customers’ purchasing decision through e-commerce (online shopping). Several factors such as trust, satisfaction, return policy, cash on delivery, after sale service, cash back warranty, business reputation, social and individual attitude, are considered. At this stage, the factors mentioned above, which are commonly considered influencing purchasing decision through online shopping in literature, are hypothesized to measure the causal relationship within the framework.",
"title": ""
},
{
"docid": "neg:1840524_17",
"text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.",
"title": ""
},
{
"docid": "neg:1840524_18",
"text": "BACKGROUND\nPrecise determination of breast volume facilitates reconstructive procedures and helps in the planning of tissue removal for breast reduction surgery. Various methods currently used to measure breast size are limited by technical drawbacks and unreliable volume determinations. The purpose of this study was to develop a formula to predict breast volume based on straightforward anthropomorphic measurements.\n\n\nMETHODS\nOne hundred one women participated in this study. Eleven anthropomorphic measurements were obtained on 202 breasts. Breast volumes were determined using a water displacement technique. Multiple stepwise linear regression was used to determine predictive variables and a unifying formula.\n\n\nRESULTS\nMean patient age was 37.7 years, with a mean body mass index of 31.8. Mean breast volumes on the right and left sides were 1328 and 1305 cc, respectively (range, 330 to 2600 cc). The final regression model incorporated the variables of breast base circumference in a standing position and a vertical measurement from the inframammary fold to a point representing the projection of the fold onto the anterior surface of the breast. The derived formula showed an adjusted R of 0.89, indicating that almost 90 percent of the variation in breast size was explained by the model.\n\n\nCONCLUSION\nSurgeons may find this formula a practical and relatively accurate method of determining breast volume.",
"title": ""
},
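The abstract above does not give the fitted coefficients, so the short sketch below only illustrates the general shape of such a two-predictor linear model; the variable names and numbers are made up for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical measurements: breast base circumference (standing) and the
# inframammary-fold-to-anterior-projection height, against water-displacement volume.
base_circumference = np.array([28.0, 31.0, 34.0, 36.0, 38.0])   # cm
fold_to_projection = np.array([7.0, 8.5, 9.5, 11.0, 12.0])      # cm
volume_cc = np.array([550.0, 800.0, 1100.0, 1400.0, 1700.0])    # cc

# Ordinary least squares with an intercept term.
X = np.column_stack([np.ones_like(base_circumference),
                     base_circumference, fold_to_projection])
coef, *_ = np.linalg.lstsq(X, volume_cc, rcond=None)
intercept, b_circ, b_fold = coef
print(f"volume ~ {intercept:.1f} + {b_circ:.1f}*circumference + {b_fold:.1f}*fold_height")
```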
{
"docid": "neg:1840524_19",
"text": "Monte Carlo Tree Search (MCTS) has produced many recent breakthroughs in game AI research, particularly in computer Go. In this paper we consider how MCTS can be applied to create engaging AI for a popular commercial mobile phone game: Spades by AI Factory, which has been downloaded more than 2.5 million times. In particular, we show how MCTS can be integrated with knowledge-based methods to create an interesting, fun and strong player which makes far fewer plays that could be perceived by human observers as blunders than MCTS without the injection of knowledge. These blunders are particularly noticeable for Spades, where a human player must co-operate with an AI partner. MCTS gives objectively stronger play than the knowledge-based approach used in previous versions of the game and offers the flexibility to customise behaviour whilst maintaining a reusable core, with a reduced development cycle compared to purely knowledge-based techniques. Monte Carlo Tree Search (MCTS) is a family of game tree search algorithms that have advanced the state-of-theart in AI for a variety of challenging games, as surveyed in (Browne et al. 2012). Of particular note is the success of MCTS in the Chinese board game Go (Lee, Müller, and Teytaud 2010). MCTS has many appealing properties for decision making in games. It is an anytime algorithm that can effectively use whatever computation time is available. It also often performs well without any special knowledge or tuning for a particular game, although knowledge can be injected if desired to improve the AI’s strength or modify its playing style. These properties are attractive to a developer of a commercial game, where an AI that is perceived as high quality by players can be developed with significantly less effort than using purely knowledge-based AI methods. This paper presents findings from a collaboration between academic researchers and an independent game development company to integrate MCTS into a highly successful commercial version of the card game Spades for mobile devices running the Android operating system. Most previous work on MCTS uses win rate against a fixed AI opponent as the key metric of success. This is apCopyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. propriate when the aim is to win tournaments or to demonstrate MCTS’s ability to approximate optimal play. However for a commercial game, actual win rate is less important than how engaging the AI is for the players. For example if the AI is generally strong but occasionally makes moves that appear weak to a competent player, then the player’s enjoyment of the game is diminished. This is particularly important for games such as Spades where the player must cooperate with an AI partner whose apparent errors result in losses for the human player. In this paper we combine MCTS with knowledge-based approaches with the goal of creating an AI player that is not only strong in objective terms but is also perceived as strong by players. AI Factory1 is an independent UK-based company, incorporated in April 2003. AI Factory has developed a successful implementation of the popular card game Spades, which to date has been downloaded more than 2.5 million times and has an average review score of 4.5/5 from more than 78 000 reviews on the Google Play store. The knowledge-based AI used in previous versions plays competitively and has been well reviewed by users. 
This AI was developed using expert knowledge of the game and contains a large number of heuristics developed and tested over a period of 10 years. Much of the decision making is governed by these heuristics which are used to decide bids, infer what cards other players may hold, predict what cards other players may be likely to play and to decide what card to play. In AI Factory Spades, players interact with two AI opponents and one AI partner. Players can select their partners and opponents from a number of AI characters, each with a strength rating from 1 to 5 stars. Gameplay data shows that relatively few players choose intermediate level opponents: occasional or beginning players tend to choose 1-star opponents, whereas those players who play the game most frequently play almost exclusively against 5-star opponents. Presumably these are experienced card game players seeking a challenge. However some have expressed disappointment with the 5-star AI: although strong overall, it occasionally makes apparently bad moves. Our work provides strong evidence for a belief commonly held amongst game developers: the objective measures of strength (such as win rate) often used in the academic study of AI do not necessarily provide a good metric for quality from a commercial AI perspective. The moves chosen by the AI may or may not be suboptimal in a game theoretic sense, but it is clear from player feedback that humans apply some intuition about which moves are good or bad. It is an unsatisfying experience when the AI makes moves which violate this intuition, except possibly where violating this intuition is a correct play, but even then this appears to lead to player dissatisfaction. The primary motivation for this work is to improve the strongest levels of AI play to satisfy experienced players, both in terms of the objective strength of the AI and in how convincing the chosen moves appear. Previous work has adapted MCTS to games which, like Spades, involve hidden information. This has led to the development of the Information Set Monte Carlo Tree Search (ISMCTS) family of algorithms (Cowling, Powley, and Whitehouse 2012). ISMCTS achieves a higher win rate than a knowledge-based AI developed by AI Factory for the Chinese card game Dou Di Zhu, and also performs well in other domains. ISMCTS uses determinizations, randomisations of the current game state which correspond to guessing hidden information. Each determinization is a game state that could conceivably be the actual current state, given the AI player’s observations so far. In Spades, a determinization is generated by randomly distributing the unseen cards amongst the other players. Each ISMCTS iteration is restricted to a newly generated determinization, resulting in a single tree that collects statistics from many determinizations. We demonstrate that the ISMCTS algorithm provides strong levels of play for Spades. However, previous work on ISMCTS has not dealt with the requirements for a commercially viable AI. Consequently, further research and development was needed in order to ensure the AI is perceived to be high quality by users. However, the effort required to inject knowledge into MCTS was small compared to the work needed to develop a heuristic-based AI from scratch. MCTS therefore shows great promise as a reusable basis for AI in commercial games.
The ISMCTS player described in this paper is used in the currently available version of AI Factory Spades for the 4and 5-star AI levels, and AI Factory have already begun using the same code and techniques in products under development. This paper is structured as follows. We begin by outlining the rules of Spades and describing the knowledge-based approach used in AI Factory Spades. We then discuss some of the issues encountered in integrating MCTS with an existing mature codebase, and in running MCTS on mobile platforms with limited processor power and memory. We assess our MCTS player in terms of both raw playing strength and player engagement. We conclude with some thoughts on the promise of MCTS for future commercial games.",
"title": ""
}
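The entry above describes ISMCTS, where each iteration samples a fresh determinization of the hidden information and all determinizations share a single search tree. The sketch below is a minimal single-tree determinized MCTS loop in that spirit; the game-state interface (sample_determinization, legal_actions, apply, is_terminal, reward) is assumed rather than taken from the paper, and refinements such as ISMCTS availability counts and any injected domain knowledge are omitted.

```python
import math
import random

class Node:
    def __init__(self, parent=None, action=None):
        self.parent, self.action = parent, action
        self.children = {}            # action -> Node
        self.visits, self.value = 0, 0.0

def ucb(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def determinized_mcts(info_state, player, iterations=1000):
    root = Node()
    for _ in range(iterations):
        state = info_state.sample_determinization()     # new determinization each iteration
        node = root
        # Selection / expansion, restricted to actions legal in THIS determinization.
        while not state.is_terminal():
            legal = state.legal_actions()
            untried = [a for a in legal if a not in node.children]
            if untried:
                a = random.choice(untried)
                node.children[a] = Node(node, a)
                node = node.children[a]
                state.apply(a)
                break
            a = max(legal, key=lambda a: ucb(node.children[a], node.visits))
            node = node.children[a]
            state.apply(a)
        # Random rollout to the end of the game.
        while not state.is_terminal():
            state.apply(random.choice(state.legal_actions()))
        reward = state.reward(player)
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Recommend the most-visited action at the root.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]
```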
] |
1840525 | Multi-task , Multi-Kernel Learning for Estimating Individual Wellbeing | [
{
"docid": "pos:1840525_0",
"text": "What can wearable sensors and usage of smart phones tell us about academic performance, self-reported sleep quality, stress and mental health condition? To answer this question, we collected extensive subjective and objective data using mobile phones, surveys, and wearable sensors worn day and night from 66 participants, for 30 days each, totaling 1,980 days of data. We analyzed daily and monthly behavioral and physiological patterns and identified factors that affect academic performance (GPA), Pittsburg Sleep Quality Index (PSQI) score, perceived stress scale (PSS), and mental health composite score (MCS) from SF-12, using these month-long data. We also examined how accurately the collected data classified the participants into groups of high/low GPA, good/poor sleep quality, high/low self-reported stress, high/low MCS using feature selection and machine learning techniques. We found associations among PSQI, PSS, MCS, and GPA and personality types. Classification accuracies using the objective data from wearable sensors and mobile phones ranged from 67-92%.",
"title": ""
},
{
"docid": "pos:1840525_1",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
}
] | [
{
"docid": "neg:1840525_0",
"text": "A multidatabase system provides integrated access to heterogeneous, autonomous local databases in a distributed system. An important problem in current multidatabase systems is identification of semantically similar data in different local databases. The Summary Schemas Model (SSM) is proposed as an extension to multidatabase systems to aid in semantic identification. The SSM uses a global data structure to abstract the information available in a multidatabase system. This abstracted form allows users to use their own terms (imprecise queries) when accessing data rather than being forced to use system-specified terms. The system uses the global data structure to match the user's terms to the semantically closest available system terms. A simulation of the SSM is presented to compare imprecise-query processing with corresponding query-processing costs in a standard multidatabase system. The costs and benefits of the SSM are discussed, and future research directions are presented.",
"title": ""
},
{
"docid": "neg:1840525_1",
"text": "Provides an abstract of the tutorial presentation and a brief professional biography of the presenter. The complete presentation was not made available for publication as part of the conference proceedings.",
"title": ""
},
{
"docid": "neg:1840525_2",
"text": "The neural network, using an unsupervised generalized Hebbian algorithm (GHA), is adopted to find the principal eigenvectors of a covariance matrix in different kinds of seismograms. We have shown that the extensive computer results of the principal components analysis (PCA) using the neural net of GHA can extract the information of seismic reflection layers and uniform neighboring traces. The analyzed seismic data are the seismic traces with 20-, 25-, and 30-Hz Ricker wavelets, the fault, the reflection and diffraction patterns after normal moveout (NMO) correction, the bright spot pattern, and the real seismogram at Mississippi Canyon. The properties of high amplitude, low frequency, and polarity reversal can be shown from the projections on the principal eigenvectors. For PCA, a theorem is proposed, which states that adding an extra point along the direction of the existing eigenvector can enhance that eigenvector. The theorem is applied to the interpretation of a fault seismogram and the uniform property of other seismograms. The PCA also provides a significant seismic data compression.",
"title": ""
},
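As a concrete reference for the unsupervised network mentioned above, here is a minimal NumPy sketch of Sanger's Generalized Hebbian Algorithm. The seismic-trace preprocessing from the passage is not reproduced; the input is just a zero-mean data matrix, and the learning rate and epoch count are arbitrary.

```python
import numpy as np

def gha(X, k=3, lr=1e-3, epochs=50, seed=0):
    """Estimate the top-k principal eigenvectors of cov(X) with Sanger's rule."""
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.normal(scale=0.1, size=(k, n_features))    # rows -> eigenvector estimates
    for _ in range(epochs):
        for x in X:
            y = W @ x                                   # outputs of the linear network
            # Sanger's rule: dW = lr * (y x^T - lower_triangular(y y^T) W)
            W += lr * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)
    return W

# Example: recover the principal directions of correlated Gaussian data.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    C = np.array([[3.0, 1.0], [1.0, 2.0]])
    X = rng.multivariate_normal(np.zeros(2), C, size=5000)
    X -= X.mean(axis=0)
    W = gha(X, k=2)
    print(W / np.linalg.norm(W, axis=1, keepdims=True))   # rows ~ unit eigenvectors
```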
{
"docid": "neg:1840525_3",
"text": "Social networks allow rapid spread of ideas and innovations while the negative information can also propagate widely. When the cascades with different opinions reaching the same user, the cascade arriving first is the most likely to be taken by the user. Therefore, once misinformation or rumor is detected, a natural containment method is to introduce a positive cascade competing against the rumor. Given a budget k, the rumor blocking problem asks for k seed users to trigger the spread of the positive cascade such that the number of the users who are not influenced by rumor can be maximized. The prior works have shown that the rumor blocking problem can be approximated within a factor of (1 − 1/e− δ) by a classic greedy algorithm combined with Monte Carlo simulation with the running time of O(k3 mn ln n/δ2), where n and m are the number of users and edges, respectively. Unfortunately, the Monte-Carlo-simulation-based methods are extremely time consuming and the existing algorithms either trade performance guarantees for practical efficiency or vice versa. In this paper, we present a randomized algorithm which runs in O(km ln n/δ2) expected time and provides a (1 − 1/e − δ)-approximation with a high probability. The experimentally results on both the real-world and synthetic social networks have shown that the proposed randomized rumor blocking algorithm is much more efficient than the state-of-the-art method and it is able to find the seed nodes which are effective in limiting the spread of rumor.",
"title": ""
},
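The "(1 − 1/e − δ)-approximate greedy with Monte Carlo simulation" baseline mentioned above follows the standard influence-maximization template sketched below. The diffusion simulator estimate_saved is left abstract because the passage does not fix the propagation model; graph is assumed to be something like a networkx.DiGraph, and the function names are placeholders, not the paper's.

```python
def greedy_rumor_blocking(graph, rumor_seeds, k, estimate_saved, n_sim=1000):
    """Pick k positive-cascade seeds greedily by estimated marginal gain.

    estimate_saved(graph, rumor_seeds, positive_seeds, n_sim) is assumed to
    simulate the two competing cascades (first arrival wins) n_sim times and
    return the expected number of users NOT influenced by the rumor.
    """
    positive_seeds = set()
    candidates = set(graph.nodes()) - set(rumor_seeds)
    for _ in range(k):
        base = estimate_saved(graph, rumor_seeds, positive_seeds, n_sim)
        best_node, best_gain = None, float("-inf")
        for v in candidates:
            gain = estimate_saved(graph, rumor_seeds, positive_seeds | {v}, n_sim) - base
            if gain > best_gain:
                best_node, best_gain = v, gain
        positive_seeds.add(best_node)
        candidates.discard(best_node)
    return positive_seeds
```

The nested loop over candidates and simulations is exactly what makes this baseline expensive, which is the cost the paper's randomized algorithm is designed to avoid.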
{
"docid": "neg:1840525_4",
"text": "T. Ribot's (1881) law of retrograde amnesia states that brain damage impairs recently formed memories to a greater extent than older memories, which is generally taken to imply that memories need time to consolidate. A. Jost's (1897) law of forgetting states that if 2 memories are of the same strength but different ages, the older will decay more slowly than the younger. The main theoretical implication of this venerable law has never been worked out, but it may be the same as that implied by Ribot's law. A consolidation interpretation of Jost's law implies an interference theory of forgetting that is altogether different from the cue-overload view that has dominated thinking in the field of psychology for decades.",
"title": ""
},
{
"docid": "neg:1840525_5",
"text": "Face detection techniques have been developed for decades, and one of remaining open challenges is detecting small faces in unconstrained conditions. The reason is that tiny faces are often lacking detailed information and blurring. In this paper, we proposed an algorithm to directly generate a clear high-resolution face from a blurry small one by adopting a generative adversarial network (GAN). Toward this end, the basic GAN formulation achieves it by super-resolving and refining sequentially (e.g. SR-GAN and cycle-GAN). However, we design a novel network to address the problem of super-resolving and refining jointly. We also introduce new training losses to guide the generator network to recover fine details and to promote the discriminator network to distinguish real vs. fake and face vs. non-face simultaneously. Extensive experiments on the challenging dataset WIDER FACE demonstrate the effectiveness of our proposed method in restoring a clear high-resolution face from a blurry small one, and show that the detection performance outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840525_6",
"text": "We study the emergence of communication in multiagent adversarial settings inspired by the classic Imitation game. A class of three player games is used to explore how agents based on sequence to sequence (Seq2Seq) models can learn to communicate information in adversarial settings. We propose a modeling approach, an initial set of experiments and use signaling theory to support our analysis. In addition, we describe how we operationalize the learning process of actor-critic Seq2Seq based agents in these communicational games.",
"title": ""
},
{
"docid": "neg:1840525_7",
"text": "We develop graphene-based devices fabricated by alternating current dielectrophoresis (ac-DEP) for highly sensitive nitric oxide (NO) gas detection. The novel device comprises the sensitive channels of palladium-decorated reduced graphene oxide (Pd-RGO) and the electrodes covered with chemical vapor deposition (CVD)-grown graphene. The highly sensitive, recoverable, and reliable detection of NO gas ranging from 2 to 420 ppb with response time of several hundred seconds has been achieved at room temperature. The facile and scalable route for high performance suggests a promising application of graphene devices toward the human exhaled NO and environmental pollutant detections.",
"title": ""
},
{
"docid": "neg:1840525_8",
"text": "Steganography, coming from the Greek words stegos, meaning roof or covered and graphia which means writing, is the art and science of hiding the fact that communication is taking place. Using steganography, you can embed a secret message inside a piece of unsuspicious information and send it without anyone knowing of the existence of the secret message. Steganography and cryptography are closely related. Cryptography scrambles messages so they cannot be understood. Steganography on the other hand, will hide the message so there is no knowledge of the existence of the message in the first place. In some situations, sending an encrypted message will arouse suspicion while an ”invisible” message wil not do so. Both sciences can be combined to produce better protection of the message. In this case, when the steganography fails and the message can be detected, it is still of no use as it is encrypted using cryptography techniques. Therefore, the principle defined once by Kerckhoffs for cryptography, also stands for steganography: the quality of a cryptographic system should only depend on a small part of information, namely the secret key. The same is valid for good steganographic systems: knowledge of the system that is used, should not give any information about the existence of hidden messages. Finding a message should only be possible with knowledge of the key that is required to uncover it.",
"title": ""
},
{
"docid": "neg:1840525_9",
"text": "We describe a new algorithm for robot localization, efficient both in terms of memory and processing time. It transforms a stream of laser range sensor data into a probabilistic calculation of the robot’s position, using a bidirectional Long Short-Term Memory (LSTM) recurrent neural network (RNN) to learn the structure of the environment and to answer queries such as: in which room is the robot? To achieve this, the RNN builds an implicit map of the environment.",
"title": ""
},
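As a concrete reference for the kind of model the entry above describes, here is a minimal PyTorch sketch of a bidirectional LSTM classifier that maps a sequence of laser range scans to a distribution over rooms. The input width, hidden size, and number of rooms are illustrative choices, not values from the paper.

```python
import torch
import torch.nn as nn

class RoomLocalizer(nn.Module):
    def __init__(self, n_beams=181, hidden=128, n_rooms=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_beams, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_rooms)

    def forward(self, scans):                  # scans: (batch, time, n_beams)
        out, _ = self.lstm(scans)              # (batch, time, 2 * hidden)
        return self.head(out[:, -1, :])        # logits for the current position

model = RoomLocalizer()
logits = model(torch.randn(4, 50, 181))        # 4 sequences of 50 range scans
probs = torch.softmax(logits, dim=-1)          # probabilistic answer to "which room?"
```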
{
"docid": "neg:1840525_10",
"text": "We explore dierent approaches to integrating a simple convolutional neural network (CNN) with the Lucene search engine in a multi-stage ranking architecture. Our models are trained using the PyTorch deep learning toolkit, which is implemented in C/C++ with a Python frontend. One obvious integration strategy is to expose the neural network directly as a service. For this, we use Apache ri, a soware framework for building scalable cross-language services. In exploring alternative architectures, we observe that once trained, the feedforward evaluation of neural networks is quite straightforward. erefore, we can extract the parameters of a trained CNN from PyTorch and import the model into Java, taking advantage of the Java Deeplearning4J library for feedforward evaluation. is has the advantage that the entire end-to-end system can be implemented in Java. As a third approach, we can extract the neural network from PyTorch and “compile” it into a C++ program that exposes a ri service. We evaluate these alternatives in terms of performance (latency and throughput) as well as ease of integration. Experiments show that feedforward evaluation of the convolutional neural network is signicantly slower in Java, while the performance of the compiled C++ network does not consistently beat the PyTorch implementation.",
"title": ""
},
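To make the "extract the parameters and re-implement the feedforward pass elsewhere" idea above concrete, the sketch below exports a toy PyTorch model's weights and reproduces its forward pass in NumPy. The actual paper targets Deeplearning4J (Java) and a C++ Thrift service rather than NumPy, so this is only an illustration of the principle that inference needs nothing more than the trained tensors.

```python
import numpy as np
import torch
import torch.nn as nn

# A toy stand-in for the trained ranking CNN.
model = nn.Sequential(nn.Linear(300, 100), nn.ReLU(), nn.Linear(100, 2))
model.eval()

# 1) Export trained parameters as plain arrays (these could be serialized to disk).
params = {name: p.detach().cpu().numpy() for name, p in model.state_dict().items()}

# 2) Re-implement the feedforward pass without PyTorch.
def forward_numpy(x):
    h = x @ params["0.weight"].T + params["0.bias"]
    h = np.maximum(h, 0.0)                                # ReLU
    return h @ params["2.weight"].T + params["2.bias"]

x = np.random.randn(1, 300).astype(np.float32)
with torch.no_grad():
    reference = model(torch.from_numpy(x)).numpy()
assert np.allclose(forward_numpy(x), reference, atol=1e-4)
```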
{
"docid": "neg:1840525_11",
"text": "The demographic change towards an ageing population is introducing significant impact and drastic challenge to our society. We therefore need to find ways to assist older people to stay independently and prevent social isolation of these population. Information and Communication Technologies (ICT) can provide various solutions to help older adults to improve their quality of life, stay healthier, and live independently for longer time. The term of Ambient Assist Living (AAL) becomes a field to investigate innovative technologies to provide assistance as well as healthcare and rehabilitation to senior people with impairment. The paper provides a review of research background and technologies of AAL.",
"title": ""
},
{
"docid": "neg:1840525_12",
"text": "Wireless sensor networks (WSNs) use the unlicensed industrial, scientific, and medical (ISM) band for transmissions. However, with the increasing usage and demand of these networks, the currently available ISM band does not suffice for their transmissions. This spectrum insufficiency problem has been overcome by incorporating the opportunistic spectrum access capability of cognitive radio (CR) into the existing WSN, thus giving birth to CR sensor networks (CRSNs). The sensor nodes in CRSNs depend on power sources that have limited power supply capabilities. Therefore, advanced and intelligent radio resource allocation schemes are very essential to perform dynamic and efficient spectrum allocation among sensor nodes and to optimize the energy consumption of each individual node in the network. Radio resource allocation schemes aim to ensure QoS guarantee, maximize the network lifetime, reduce the internode and internetwork interferences, etc. In this paper, we present a survey of the recent advances in radio resource allocation in CRSNs. Radio resource allocation schemes in CRSNs are classified into three major categories, i.e., centralized, cluster-based, and distributed. The schemes are further divided into several classes on the basis of performance optimization criteria that include energy efficiency, throughput maximization, QoS assurance, interference avoidance, fairness and priority consideration, and hand-off reduction. An insight into the related issues and challenges is provided, and future research directions are clearly identified.",
"title": ""
},
{
"docid": "neg:1840525_13",
"text": "This review covers recent developments in the social influence literature, focusing primarily on compliance and conformity research published between 1997 and 2002. The principles and processes underlying a target's susceptibility to outside influences are considered in light of three goals fundamental to rewarding human functioning. Specifically, targets are motivated to form accurate perceptions of reality and react accordingly, to develop and preserve meaningful social relationships, and to maintain a favorable self-concept. Consistent with the current movement in compliance and conformity research, this review emphasizes the ways in which these goals interact with external forces to engender social influence processes that are subtle, indirect, and outside of awareness.",
"title": ""
},
{
"docid": "neg:1840525_14",
"text": "This paper asks how internet use, citizen satisfaction with e-government and citizen trust in government are interrelated. Prior research has found that agencies stress information and service provision on the Web (oneway e-government strategy), but have generally ignore applications that would enhance citizen-government interaction (two-way e-government strategy). Based on a review of the literature, we develop hypotheses about how two facets of e-democracy – transparency and interactivity – may affect citizen trust in government. Using data obtained from the Council on Excellence in Government, we apply a two stage multiple equation model. Findings indicate that internet use is positively associated with transparency satisfaction but negatively associated with interactivity satisfaction, and that both interactivity and transparency are positively associated with citizen trust in government. We conclude that the one-way e-transparency strategy may be insufficient, and that in the future agencies should make and effort to enhance e-interactivity.",
"title": ""
},
{
"docid": "neg:1840525_15",
"text": "We study the parsing complexity of Combinatory Categorial Grammar (CCG) in the formalism of Vijay-Shanker and Weir (1994). As our main result, we prove that any parsing algorithm for this formalism will take in the worst case exponential time when the size of the grammar, and not only the length of the input sentence, is included in the analysis. This sets the formalism of Vijay-Shanker andWeir (1994) apart from weakly equivalent formalisms such as Tree-Adjoining Grammar (TAG), for which parsing can be performed in time polynomial in the combined size of grammar and input sentence. Our results contribute to a refined understanding of the class of mildly context-sensitive grammars, and inform the search for new, mildly context-sensitive versions of CCG.",
"title": ""
},
{
"docid": "neg:1840525_16",
"text": "This paper presents the various crop yield prediction methods using data mining techniques. Agricultural system is very complex since it deals with large data situation which comes from a number of factors. Crop yield prediction has been a topic of interest for producers, consultants, and agricultural related organizations. In this paper our focus is on the applications of data mining techniques in agricultural field. Different Data Mining techniques such as K-Means, K-Nearest Neighbor(KNN), Artificial Neural Networks(ANN) and Support Vector Machines(SVM) for very recent applications of data mining techniques in agriculture field. Data mining technology has received a great progress with the rapid development of computer science, artificial intelligence. Data Mining is an emerging research field in agriculture crop yield analysis. Data Mining is the process of identifying the hidden patterns from large amount of data. Yield prediction is a very important agricultural problem that remains to be solved based on the available data. The problem of yield prediction can be solved by employing data mining techniques.",
"title": ""
},
{
"docid": "neg:1840525_17",
"text": "Medicine may stand at the cusp of a mobile transformation. Mobile health, or “mHealth,” is the use of portable devices such as smartphones and tablets for medical purposes, including diagnosis, treatment, or support of general health and well-being. Users can interface with mobile devices through software applications (“apps”) that typically gather input from interactive questionnaires, separate medical devices connected to the mobile device, or functionalities of the device itself, such as its camera, motion sensor, or microphone. Apps may even process these data with the use of medical algorithms or calculators to generate customized diagnoses and treatment recommendations. Mobile devices make it possible to collect more granular patient data than can be collected from devices that are typically used in hospitals or physicians’ offices. The experiences of a single patient can then be measured against large data sets to provide timely recommendations about managing both acute symptoms and chronic conditions.1,2 To give but a few examples: One app allows users who have diabetes to plug glucometers into their iPhones as it tracks insulin doses and sends alerts for abnormally high or low blood sugar levels.3,4 Another app allows patients to use their smartphones to record electrocardiograms,5 using a single lead that snaps to the back of the phone. Users can hold the phone against their chests, record cardiac events, and transmit results to their cardiologists.6 An imaging app allows users to analyze diagnostic images in multiple modalities, including positronemission tomography, computed tomography, magnetic resonance imaging, and ultrasonography.7 An even greater number of mHealth products perform health-management functions, such as medication reminders and symptom checkers, or administrative functions, such as patient scheduling and billing. The volume and variety of mHealth products are already immense and defy any strict taxonomy. More than 97,000 mHealth apps were available as of March 2013, according to one estimate.8 The number of mHealth apps, downloads, and users almost doubles every year.9 Some observers predict that by 2018 there could be 1.7 billion mHealth users worldwide.8 Thus, mHealth technologies could have a profound effect on patient care. However, mHealth has also become a challenge for the Food and Drug Administration (FDA), the regulator responsible for ensuring that medical devices are safe and effective. The FDA’s oversight of mHealth devices has been controversial to members of Congress and industry,10 who worry that “applying a complex regulatory framework could inhibit future growth and innovation in this promising market.”11 But such oversight has become increasingly important. A bewildering array of mHealth products can make it difficult for individual patients or physicians to evaluate their quality or utility. In recent years, a number of bills have been proposed in Congress to change FDA jurisdiction over mHealth products, and in April 2014, a key federal advisory committee laid out its recommendations for regulating mHealth and other health-information technologies.12 With momentum toward legislation building, this article focuses on the public health benefits and risks of mHealth devices under FDA jurisdiction and considers how to best use the FDA’s authority.",
"title": ""
},
{
"docid": "neg:1840525_18",
"text": "Several methods have been proposed for automatic and objective monitoring of food intake, but their performance suffers in the presence of speech and motion artifacts. This paper presents a novel sensor system and algorithms for detection and characterization of chewing bouts from a piezoelectric strain sensor placed on the temporalis muscle. The proposed data acquisition device was incorporated into the temple of eyeglasses. The system was tested by ten participants in two part experiments, one under controlled laboratory conditions and the other in unrestricted free-living. The proposed food intake recognition method first performed an energy-based segmentation to isolate candidate chewing segments (instead of using epochs of fixed duration commonly reported in research literature), with the subsequent classification of the segments by linear support vector machine models. On participant level (combining data from both laboratory and free-living experiments), with ten-fold leave-one-out cross-validation, chewing were recognized with average F-score of 96.28% and the resultant area under the curve was 0.97, which are higher than any of the previously reported results. A multivariate regression model was used to estimate chew counts from segments classified as chewing with an average mean absolute error of 3.83% on participant level. These results suggest that the proposed system is able to identify chewing segments in the presence of speech and motion artifacts, as well as automatically and accurately quantify chewing behavior, both under controlled laboratory conditions and unrestricted free-living.",
"title": ""
},
{
"docid": "neg:1840525_19",
"text": "The problem of testing programs without test oracles is well known. A commonly used approach is to use special values in testing but this is often insufficient to ensure program correctness. This paper demonstrates the use of metamorphic testing to uncover faults in programs, which could not be detected by special test values. Metamorphic testing can be used as a complementary test method to special value testing. In this paper, the sine function and a search function are used as examples to demonstrate the usefulness of metamorphic testing. This paper also examines metamorphic relationships and the extent of their usefulness in program testing.",
"title": ""
}
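The sine and search examples named in the entry above lend themselves to a compact illustration. The metamorphic relations used below, sin(π − x) = sin(x), sin(−x) = −sin(x), and invariance of a search outcome under permutation or irrelevant extension of the input list, are common textbook relations and are not necessarily the exact relations used in the paper; the point is only that correctness is checked across related executions, with no oracle for any single output.

```python
import math
import random

def mr_sine(trials=1000, tol=1e-9):
    # Relations: sin(pi - x) == sin(x) and sin(-x) == -sin(x)
    for _ in range(trials):
        x = random.uniform(-100.0, 100.0)
        assert abs(math.sin(math.pi - x) - math.sin(x)) < tol
        assert abs(math.sin(-x) + math.sin(x)) < tol

def linear_search(items, target):
    for i, v in enumerate(items):
        if v == target:
            return i
    return -1

def mr_search(trials=1000):
    for _ in range(trials):
        items = [random.randint(0, 50) for _ in range(20)]
        target = random.randint(0, 50)
        found = linear_search(items, target) != -1
        # Appending an element that cannot equal the target must not change the outcome.
        assert (linear_search(items + [99], target) != -1) == found
        # Permuting the list must not change found/not-found status.
        shuffled = items[:]
        random.shuffle(shuffled)
        assert (linear_search(shuffled, target) != -1) == found

mr_sine()
mr_search()
```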
] |
1840526 | QoE and power efficiency tradeoff for fog computing networks with fog node cooperation | [
{
"docid": "pos:1840526_0",
"text": "The past 15 years have seen the rise of the Cloud, along with rapid increase in Internet backbone traffic and more sophisticated cellular core networks. There are three different types of “Clouds:” (1) data center, (2) backbone IP network and (3) cellular core network, responsible for computation, storage, communication and network management. Now the functions of these three types of Clouds are “descending” to be among or near the end users, i.e., to the edge of networks, as “Fog.”",
"title": ""
},
{
"docid": "pos:1840526_1",
"text": "Soldiers and front-line personnel operating in tactical environments increasingly make use of handheld devices to help with tasks such as face recognition, language translation, decision-making, and mission planning. These resource constrained edge environments are characterized by dynamic context, limited computing resources, high levels of stress, and intermittent network connectivity. Cyber-foraging is the leverage of external resource-rich surrogates to augment the capabilities of resource-limited devices. In cloudlet-based cyber-foraging, resource-intensive computation and data is offloaded to cloudlets. Forward-deployed, discoverable, virtual-machine-based tactical cloudlets can be hosted on vehicles or other platforms to provide infrastructure to offload computation, provide forward data staging for a mission, perform data filtering to remove unnecessary data from streams intended for dismounted users, and serve as collection points for data heading for enterprise repositories. This paper describes tactical cloudlets and presents experimentation results for five different cloudlet provisioning mechanisms. The goal is to demonstrate that cyber-foraging in tactical environments is possible by moving cloud computing concepts and technologies closer to the edge so that tactical cloudlets, even if disconnected from the enterprise, can provide capabilities that can lead to enhanced situational awareness and decision making at the edge.",
"title": ""
},
{
"docid": "pos:1840526_2",
"text": "Distributed optimization algorithms are highly attractive for solving big data problems. In particular, many machine learning problems can be formulated as the global consensus optimization problem, which can then be solved in a distributed manner by the alternating direction method of multipliers (ADMM) algorithm. However, this suffers from the straggler problem as its updates have to be synchronized. In this paper, we propose an asynchronous ADMM algorithm by using two conditions to control the asynchrony: partial barrier and bounded delay. The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate can be reduced to that of its synchronous counterpart). Experiments on different distributed ADMM applications show that asynchrony reduces the time on network waiting, and achieves faster convergence than its synchronous counterpart in terms of the wall clock time.",
"title": ""
}
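For reference, the synchronous consensus ADMM updates that the asynchronous algorithm above relaxes look as follows for quadratic local objectives; the partial-barrier and bounded-delay mechanisms themselves are not shown, and the problem split (least squares over m workers) is chosen here purely for illustration.

```python
import numpy as np

def consensus_admm(A_list, b_list, rho=1.0, iters=100):
    """Minimize sum_i 0.5*||A_i x - b_i||^2 via global consensus ADMM."""
    m, n = len(A_list), A_list[0].shape[1]
    x = [np.zeros(n) for _ in range(m)]    # local copies
    u = [np.zeros(n) for _ in range(m)]    # scaled dual variables
    z = np.zeros(n)                        # global consensus variable
    for _ in range(iters):
        # x_i-update (closed form for quadratic local objectives)
        for i in range(m):
            A, b = A_list[i], b_list[i]
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(n),
                                   A.T @ b + rho * (z - u[i]))
        # z-update: averaging step that enforces consensus
        z = np.mean([x[i] + u[i] for i in range(m)], axis=0)
        # dual update
        for i in range(m):
            u[i] += x[i] - z
    return z

# Example: the distributed solution should match the centralized least-squares fit.
rng = np.random.default_rng(0)
A_list = [rng.normal(size=(30, 5)) for _ in range(4)]
x_true = rng.normal(size=5)
b_list = [A @ x_true + 0.01 * rng.normal(size=30) for A in A_list]
print(np.round(consensus_admm(A_list, b_list) - x_true, 3))
```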
] | [
{
"docid": "neg:1840526_0",
"text": "Cloud computing has established itself as an alternative IT infrastructure and service model. However, as with all logically centralized resource and service provisioning infrastructures, cloud does not handle well local issues involving a large number of networked elements (IoTs) and it is not responsive enough for many applications that require immediate attention of a local controller. Fog computing preserves many benefits of cloud computing and it is also in a good position to address these local and performance issues because its resources and specific services are virtualized and located at the edge of the customer premise. However, data security is a critical challenge in fog computing especially when fog nodes and their data move frequently in its environment. This paper addresses the data protection and the performance issues by 1) proposing a Region-Based Trust-Aware (RBTA) model for trust translation among fog nodes of regions, 2) introducing a Fog-based Privacy-aware Role Based Access Control (FPRBAC) for access control at fog nodes, and 3) developing a mobility management service to handle changes of users and fog devices' locations. The implementation results demonstrate the feasibility and the efficiency of our proposed framework.",
"title": ""
},
{
"docid": "neg:1840526_1",
"text": "Julio Rodriguez∗ Faculté des Sciences et Techniques de l’Ingénieur (STI), Institut de Microtechnique (IMT), Laboratoire de Production Microtechnique (LPM), Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland and Fakultät für Physik, Universität Bielefeld, D-33501 Bielefeld, Germany ∗Corresponding author. Email: [email protected] and [email protected]",
"title": ""
},
{
"docid": "neg:1840526_2",
"text": "Ambient intelligence (AmI) deals with a new world of ubiquitous computing devices, where physical environments interact intelligently and unobtrusively with people. These environments should be aware of people's needs, customizing requirements and forecasting behaviors. AmI environments can be diverse, such as homes, offices, meeting rooms, schools, hospitals, control centers, vehicles, tourist attractions, stores, sports facilities, and music devices. Artificial intelligence research aims to include more intelligence in AmI environments, allowing better support for humans and access to the essential knowledge for making better decisions when interacting with these environments. This article, which introduces a special issue on AmI, views the area from an artificial intelligence perspective.",
"title": ""
},
{
"docid": "neg:1840526_3",
"text": "A new mechanism is proposed for exciting the magnetic state of a ferromagnet. Assuming ballistic conditions and using WKB wave functions, we predict that a transfer of vectorial spin accompanies an electric current flowing perpendicular to two parallel magnetic films connected by a normal metallic spacer. This spin transfer drives motions of the two magnetization vectors within their instantaneously common plane. Consequent new mesoscopic precession and switching phenomena with potential applications are predicted. PACS: 75.50.Rr; 75.70.Cn A magnetic multilayer (MML) is composed of alternating ferromagnetic and paramagnetic sublayers whose thicknesses usually range between 1 and l0 nm. The discovery in 1988 of gian t magne tore s i s tance (GMR) in such multilayers stimulates much current research [1]. Although the initial reports dealt with currents flowing in the layer planes (CIP), the magnetoresistive phenomenon is known to be even stronger for currents flowing perpendicular to the plane (CPP) [2]. We predict here that the spinpolarized nature of such a perpendicular current generally creates a mutual transference of spin angular momentum between the magnetic sublayers which is manifested in their dynamic response. This response, which occurs only for CPP geometry, we propose to characterize as spin transfer . It can dominate the Larmor response to the magnetic field induced by * Fax: + 1-914-945-3291; email: [email protected]. the current when the magnetic sublayer thickness is about 1 nm and the smaller of its other two dimensions is less than 10= to 10 3 r im. On this mesoscopic scale, two new phenomena become possible: a steady precession driven by a constant current, and alternatively a novel form of switching driven by a pulsed current. Other forms of current-driven magnetic response without the use of any electromagnetically induced magnetic field are already known. Reports of both theory and experiments show how the exchange effect of external current flowing through a ferromagnetic domain wall causes it to move [3]. Even closer to the present subject is the magnetic response to tunneling current in the case of the sandwich structure f e r r o m a g n e t / i n s u l a t o r / f e r r o m a g n e t ( F / I / F ) predicted previously [4]. Unfortunately, theoretical relations indicated that the dissipation of energy, and therefore temperature rise, needed to produce more than barely observable spin-transfer through a tunneling barrier is prohibitively large. 0304-8853/96/$15.00 Copyright © 1996 Elsevier Science B.V. All rights reserved. PH S0304-8853(96)00062-5 12 ,/.C, Slo,cgewski / Journal of Magnetism and Magnetic Materials 159 (1996) L/ L7 However. the advent of multilayers incorporating very thin paramagnetic metallic spacers, rather than a barrier, places the realization of spin transfer in a different light. In the first place, the metallic spacer implies a low resistance and therefore low Ohmic dissipation for a given current, to which spin-transfer effects are proportional. Secondly, numerous experiments [5] and theories [6] show that the fundamental interlayer exchange coupling of RKKY type diminishes in strength and varies in sign as spacer thickness increases. Indeed, there exist experimental spacers which are thick enough (e.g. 4 nm) for the exchange coupling to be negligible even though spin relaxation is too weak to significantly diminish the GMR effect which relies on preservation of spin direction during electron transit across the spacer. 
Moreover, the same fact of long spin relaxation time in magnetic multilayers is illustrated on an even larger distance scale, an order of magnitude greater than the circa 10 nm electron mean free path, by spin injection experiments [7]. It follows, as we show below, that interesting current-driven spin-transfer effects are expected under laboratory conditions involving very small distance scales. We begin with simple arguments to explain current-driven spin transfer and establish its physical scale. We then sketch a detailed treatment and summarize its results. Finally, we predict two spin-transfer phenomena: steady magnetic precession driven by a constant current and a novel form of magnetic switching. We consider the five metallic regions represented schematically in Fig. 1. Layers A, B, and C are paramagnetic, whilst F1 and F2 are ferromagnetic. The instantaneous macroscopic vectors ħS1 and ħS2, forming the included angle θ, represent the respective total spin momenta per unit area of the ferromagnets. Now consider a flow of electrons moving rightward through the sandwich. The works on spin injection [7] show that if the thickness of spacer B is less than the spin-diffusion length, usually at least 100 nm, then some degree of spin polarization along the instantaneous axis parallel to the vector S1 of local ferromagnetic polarization in F1 will be present in the electrons impinging on F2. This leads us to consider a three-layer (B, F2, C in Fig. 1) model in which an electron with initial spin state along the direction S1 is incident from the spacer B.",
"title": ""
},
{
"docid": "neg:1840526_4",
"text": "Cross-linguistically consistent annotation is necessary for sound comparative evaluation and cross-lingual learning experiments. It is also useful for multilingual system development and comparative linguistic studies. Universal Dependencies is an open community effort to create cross-linguistically consistent treebank annotation for many languages within a dependency-based lexicalist framework. In this paper, we describe v1 of the universal guidelines, the underlying design principles, and the currently available treebanks for 33 languages.",
"title": ""
},
{
"docid": "neg:1840526_5",
"text": "This study examined the process of how socioeconomic status, specifically parents' education and income, indirectly relates to children's academic achievement through parents' beliefs and behaviors. Data from a national, cross-sectional study of children were used for this study. The subjects were 868 8-12-year-olds, divided approximately equally across gender (436 females, 433 males). This sample was 49% non-Hispanic European American and 47% African American. Using structural equation modeling techniques, the author found that the socioeconomic factors were related indirectly to children's academic achievement through parents' beliefs and behaviors but that the process of these relations was different by racial group. Parents' years of schooling also was found to be an important socioeconomic factor to take into consideration in both policy and research when looking at school-age children.",
"title": ""
},
{
"docid": "neg:1840526_6",
"text": "Photovoltaic Thermal Collector (PVT) is a hybrid generator which converts solar radiation into useful electric and thermal energies simultaneously. This paper gathers all PVT sub-models in order to form a unique dynamic model that reveals PVT parameters interactions. As PVT is a multi-input/output/output system, a state space model based on energy balance equations is developed in order to analyze and assess the parameters behaviors and correlations of PVT constituents. The model simulation is performed using LabVIEW Software. The simulation shows the impact of the fluid flow rate variation on the collector efficiencies (thermal and electrical).",
"title": ""
},
{
"docid": "neg:1840526_7",
"text": "On many social networking web sites such as Facebook and Twitter, resharing or reposting functionality allows users to share others' content with their own friends or followers. As content is reshared from user to user, large cascades of reshares can form. While a growing body of research has focused on analyzing and characterizing such cascades, a recent, parallel line of work has argued that the future trajectory of a cascade may be inherently unpredictable. In this work, we develop a framework for addressing cascade prediction problems. On a large sample of photo reshare cascades on Facebook, we find strong performance in predicting whether a cascade will continue to grow in the future. We find that the relative growth of a cascade becomes more predictable as we observe more of its reshares, that temporal and structural features are key predictors of cascade size, and that initially, breadth, rather than depth in a cascade is a better indicator of larger cascades. This prediction performance is robust in the sense that multiple distinct classes of features all achieve similar performance. We also discover that temporal features are predictive of a cascade's eventual shape. Observing independent cascades of the same content, we find that while these cascades differ greatly in size, we are still able to predict which ends up the largest.",
"title": ""
},
{
"docid": "neg:1840526_8",
"text": "Low-power wireless networks are quickly becoming a critical part of our everyday infrastructure. Power consumption is a critical concern, but power measurement and estimation is a challenge. We present Powertrace, which to the best of our knowledge is the first system for network-level power profiling of low-power wireless systems. Powertrace uses power state tracking to estimate system power consumption and a structure called energy capsules to attribute energy consumption to activities such as packet transmissions and receptions. With Powertrace, the power consumption of a system can be broken down into individual activities which allows us to answer questions such as “How much energy is spent forwarding packets for node X?”, “How much energy is spent on control traffic and how much on critical data?”, and “How much energy does application X account for?”. Experiments show that Powertrace is accurate to 94% of the energy consumption of a device. To demonstrate the usefulness of Powertrace, we use it to experimentally analyze the power behavior of the proposed IETF standard IPv6 RPL routing protocol and a sensor network data collection protocol. Through using Powertrace, we find the highest power consumers and are able to reduce the power consumption of data collection with 24%. It is our hope that Powertrace will help the community to make empirical energy evaluation a widely used tool in the low-power wireless research community toolbox.",
"title": ""
},
{
"docid": "neg:1840526_9",
"text": "BACKGROUND\nDecreased systolic function is central to the pathogenesis of heart failure in millions of patients worldwide, but mechanism-related adverse effects restrict existing inotropic treatments. This study tested the hypothesis that omecamtiv mecarbil, a selective cardiac myosin activator, will augment cardiac function in human beings.\n\n\nMETHODS\nIn this dose-escalating, crossover study, 34 healthy men received a 6-h double-blind intravenous infusion of omecamtiv mecarbil or placebo once a week for 4 weeks. Each sequence consisted of three ascending omecamtiv mecarbil doses (ranging from 0·005 to 1·0 mg/kg per h) with a placebo infusion randomised into the sequence. Vital signs, blood samples, electrocardiographs (ECGs), and echocardiograms were obtained before, during, and after each infusion. The primary aim was to establish maximum tolerated dose (the highest infusion rate tolerated by at least eight participants) and plasma concentrations of omecamtiv mecarbil; secondary aims were evaluation of pharmacodynamic and pharmacokinetic characteristics, safety, and tolerability. This study is registered at ClinicalTrials.gov, number NCT01380223.\n\n\nFINDINGS\nThe maximum tolerated dose of omecamtiv mecarbil was 0·5 mg/kg per h. Omecamtiv mecarbil infusion resulted in dose-related and concentration-related increases in systolic ejection time (mean increase from baseline at maximum tolerated dose, 85 [SD 5] ms), the most sensitive indicator of drug effect (r(2)=0·99 by dose), associated with increases in stroke volume (15 [2] mL), fractional shortening (8% [1]), and ejection fraction (7% [1]; all p<0·0001). Omecamtiv mecarbil increased atrial contractile function, and there were no clinically relevant changes in diastolic function. There were no clinically significant dose-related adverse effects on vital signs, serum chemistries, ECGs, or adverse events up to a dose of 0·625 mg/kg per h. The dose-limiting toxic effect was myocardial ischaemia due to excessive prolongation of systolic ejection time.\n\n\nINTERPRETATION\nThese first-in-man data show highly dose-dependent augmentation of left ventricular systolic function in response to omecamtiv mecarbil and support potential clinical use of the drug in patients with heart failure.\n\n\nFUNDING\nCytokinetics Inc.",
"title": ""
},
{
"docid": "neg:1840526_10",
"text": "Design and analysis of ultrahigh-frequency (UHF) micropower rectifiers based on a diode-connected dynamic threshold MOSFET (DTMOST) is discussed. An analytical design model for DTMOST rectifiers is derived based on curve-fitted diode equation parameters. Several DTMOST six-stage charge-pump rectifiers were designed and fabricated using a CMOS 0.18-mum process with deep n-well isolation. Measured results verified the design model with average accuracy of 10.85% for an input power level between -4 and 0 dBm. At the same time, three other rectifiers based on various types of transistors were fabricated on the same chip. The measured results are compared with a Schottky diode solution.",
"title": ""
},
{
"docid": "neg:1840526_11",
"text": "In the past few years, cloud computing develops very quickly. A large amount of data are uploaded and stored in remote public cloud servers which cannot fully be trusted by users. Especially, more and more enterprises would like to manage their data by the aid of the cloud servers. However, when the data outsourced in the cloud are sensitive, the challenges of security and privacy becomes urgent for wide deployment of the cloud systems. This paper proposes a secure data sharing scheme to ensure the privacy of data owner and the security of the outsourced cloud data. The proposed scheme provides flexible utility of data while solving the privacy and security challenges for data sharing. The security and efficiency analysis demonstrate that the designed scheme is feasible and efficient. At last, we discuss its application in electronic health record.",
"title": ""
},
{
"docid": "neg:1840526_12",
"text": "A novel jitter equalization circuit is presented that addresses crosstalk-induced jitter in high-speed serial links. A simple model of electromagnetic coupling demonstrates the generation of crosstalk-induced jitter. The analysis highlights unique aspects of crosstalk-induced jitter that differ from far-end crosstalk. The model is used to predict the crosstalk-induced jitter in 2-PAM and 4-PAM, which is compared to measurement. Furthermore, the model suggests an equalizer that compensates for the data-induced electromagnetic coupling between adjacent links and is suitable for pre- or post-emphasis schemes. The circuits are implemented using 130-nm MOSFETs and operate at 5-10 Gb/s. The results demonstrate reduced deterministic jitter and lower bit-error rate (BER). At 10 Gb/s, the crosstalk-induced jitter equalizer opens the eye at 10/sup -12/ BER from 17 to 45 ps and lowers the rms jitter from 8.7 to 6.3 ps.",
"title": ""
},
{
"docid": "neg:1840526_13",
"text": "Generative adversarial networks (GANs) can implicitly learn rich distributions over images, audio, and data which are hard to model with an explicit likelihood. We present a practical Bayesian formulation for unsupervised and semi-supervised learning with GANs, in conjunction with stochastic gradient Hamiltonian Monte Carlo to marginalize the weights of the generator and discriminator networks. The resulting approach is straightforward and obtains good performance without any standard interventions such as feature matching, or mini-batch discrimination. By exploring an expressive posterior over the parameters of the generator, the Bayesian GAN avoids mode-collapse, produces interpretable candidate samples with notable variability, and in particular provides state-of-the-art quantitative results for semi-supervised learning on benchmarks including SVHN, CelebA, and CIFAR-10, outperforming DCGAN, Wasserstein GANs, and DCGAN ensembles.",
"title": ""
},
{
"docid": "neg:1840526_14",
"text": "The prevalence of mobile phones, the internet-of-things technology, and networks of sensors has led to an enormous and ever increasing amount of data that are now more commonly available in a streaming fashion [1]-[5]. Often, it is assumed - either implicitly or explicitly - that the process generating such a stream of data is stationary, that is, the data are drawn from a fixed, albeit unknown probability distribution. In many real-world scenarios, however, such an assumption is simply not true, and the underlying process generating the data stream is characterized by an intrinsic nonstationary (or evolving or drifting) phenomenon. The nonstationarity can be due, for example, to seasonality or periodicity effects, changes in the users' habits or preferences, hardware or software faults affecting a cyber-physical system, thermal drifts or aging effects in sensors. In such nonstationary environments, where the probabilistic properties of the data change over time, a non-adaptive model trained under the false stationarity assumption is bound to become obsolete in time, and perform sub-optimally at best, or fail catastrophically at worst.",
"title": ""
},
{
"docid": "neg:1840526_15",
"text": "According to the ways to see the real environments, mirror metaphor augmented reality systems can be classified into video see-through virtual mirror displays and reflective half-mirror displays. The two systems have distinctive characteristics and application fields with different types of complexity. In this paper, we introduce a system configuration to implement a prototype of a reflective half-mirror display-based augmented reality system. We also present a two-phase calibration method using an extra camera for the system. Finally, we describe three error sources in the proposed system and show the result of analysis of these errors with several experiments.",
"title": ""
},
{
"docid": "neg:1840526_16",
"text": "Severe adolescent female stress urinary incontinence (SAFSUI) can be defined as female adolescents between the ages of 12 and 17 years complaining of involuntary loss of urine multiple times each day during normal activities or sneezing or coughing rather than during sporting activities. An updated review of its likely prevalence, etiology, and management is required. The case of a 15-year-old female adolescent presenting with a 7-year history of SUI resistant to antimuscarinic medications and 18 months of intensive physiotherapy prompted this review. Issues of performing physical and urodynamic assessment at this young age were overcome in order to achieve the diagnosis of urodynamic stress incontinence (USI). Failed use of tampons was followed by the insertion of (retropubic) suburethral synthetic tape (SUST) under assisted local anesthetic into tissues deemed softer than the equivalent for an adult female. Whereas occasional urinary incontinence can occur in between 6 % and 45 % nulliparous adolescents, the prevalence of non‐neurogenic SAFSUI is uncertain but more likely rare. Risk factors for the occurrence of more severe AFSUI include obesity, athletic activities or high-impact training, and lung diseases such as cystic fibrosis (CF). This first reported use of a SUST in a patient with SAFSUI proved safe and completely curative. Artificial urinary sphincters, periurethral injectables and pubovaginal slings have been tried previously in equivalent patients. SAFSUI is a relatively rare but physically and emotionally disabling presentation. Multiple conservative options may fail, necessitating surgical management; SUST can prove safe and effective.",
"title": ""
},
{
"docid": "neg:1840526_17",
"text": "How can we learn a classier that is “fair” for a protected or sensitive group, when we do not know if the input to the classier belongs to the protected group? How can we train such a classier when data on the protected group is dicult to aain? In many settings, nding out the sensitive input aribute can be prohibitively expensive even during model training, and sometimes impossible during model serving. For example, in recommender systems, if we want to predict if a user will click on a given recommendation, we oen do not know many aributes of the user, e.g., race or age, and many aributes of the content are hard to determine, e.g., the language or topic. us, it is not feasible to use a dierent classier calibrated based on knowledge of the sensitive aribute. Here, we use an adversarial training procedure to remove information about the sensitive aribute from the latent representation learned by a neural network. In particular, we study how the choice of data for the adversarial training eects the resulting fairness properties. We nd two interesting results: a small amount of data is needed to train these adversarial models, and the data distribution empirically drives the adversary’s notion of fairness. ACM Reference format: Alex Beutel, Jilin Chen, Zhe Zhao, Ed H. Chi. 2017. Data Decisions and eoretical Implications when Adversarially Learning Fair Representations. In Proceedings of 2017Workshop on Fairness, Accountability, and Transparency in Machine Learning, Halifax, Canada, August 2017 (FAT/ML ’17), 5 pages.",
"title": ""
}
] |
1840527 | RECENT ADVANCES IN PERSONAL RECOMMENDER SYSTEMS | [
{
"docid": "pos:1840527_0",
"text": "Collaborative filtering aims at learning predictive models of user preferences, interests or behavior from community data, that is, a database of available user preferences. In this article, we describe a new family of model-based algorithms designed for this task. These algorithms rely on a statistical modelling technique that introduces latent class variables in a mixture model setting to discover user communities and prototypical interest profiles. We investigate several variations to deal with discrete and continuous response variables as well as with different objective functions. The main advantages of this technique over standard memory-based methods are higher accuracy, constant time prediction, and an explicit and compact model representation. The latter can also be used to mine for user communitites. The experimental evaluation shows that substantial improvements in accucracy over existing methods and published results can be obtained.",
"title": ""
}
] | [
{
"docid": "neg:1840527_0",
"text": "In this paper, we give an overview for the shared task at the 5th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2016): Chinese word segmentation for micro-blog texts. Different with the popular used newswire datasets, the dataset of this shared task consists of the relatively informal micro-texts. Besides, we also use a new psychometric-inspired evaluation metric for Chinese word segmentation, which addresses to balance the very skewed word distribution at different levels of difficulty. The data and evaluation codes can be downloaded from https://github.com/FudanNLP/ NLPCC-WordSeg-Weibo.",
"title": ""
},
{
"docid": "neg:1840527_1",
"text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.",
"title": ""
},
{
"docid": "neg:1840527_2",
"text": "We present a novel method for key term extraction from text documents. In our method, document is modeled as a graph of semantic relationships between terms of that document. We exploit the following remarkable feature of the graph: the terms related to the main topics of the document tend to bunch up into densely interconnected subgraphs or communities, while non-important terms fall into weakly interconnected communities, or even become isolated vertices. We apply graph community detection techniques to partition the graph into thematically cohesive groups of terms. We introduce a criterion function to select groups that contain key terms discarding groups with unimportant terms. To weight terms and determine semantic relatedness between them we exploit information extracted from Wikipedia.\n Using such an approach gives us the following two advantages. First, it allows effectively processing multi-theme documents. Second, it is good at filtering out noise information in the document, such as, for example, navigational bars or headers in web pages.\n Evaluations of the method show that it outperforms existing methods producing key terms with higher precision and recall. Additional experiments on web pages prove that our method appears to be substantially more effective on noisy and multi-theme documents than existing methods.",
"title": ""
},
{
"docid": "neg:1840527_3",
"text": "Rotator cuff disorders are considered to be among the most common causes of shoulder pain and disability encountered in both primary and secondary care. The general pathology of subacromial impingment generally relates to a chronic repetitive process in which the conjoint tendon of the rotator cuff undergoes repetitive compression and micro trauma as it passes under the coracoacromial arch. However acute traumatic injuries may also lead to this condition. Diagnosis remains a clinical one, however advances in imaging modalities have enabled clinicians to have an increased understanding of the pathological process. Ultrasound scanning appears to be a justifiable and cost effective assessment tool following plain radiographs in the assessment of shoulder impingment, with MRI scans being reserved for more complex cases. A period of observed conservative management including the use of NSAIDs, physiotherapy with or without the use of subacromial steroid injections is a well-established and accepted practice. However, in young patients or following any traumatic injury to the rotator cuff, surgery should be considered early. If surgery is to be performed this should be done arthroscopically and in the case of complete rotator cuff rupture the tendon should be repaired where possible.",
"title": ""
},
{
"docid": "neg:1840527_4",
"text": "Although backward folding of the epiglottis is one of the signal events of the mammalian adult swallow, the epiglottis does not fold during the infant swallow. How this functional change occurs is unknown, but we hypothesize that a change in swallow mechanism occurs with maturation, prior to weaning. Using videofluoroscopy, we found three characteristic patterns of swallowing movement at different ages in the pig: an infant swallow, a transitional swallow and a post-weaning (juvenile or adult) swallow. In animals of all ages, the dorsal region of the epiglottis and larynx was held in an intranarial position by a muscular sphincter formed by the palatopharyngeal arch. In the infant swallow, increasing pressure in the oropharynx forced a liquid bolus through the piriform recesses on either side of a relatively stationary epiglottis into the esophagus. As the infant matured, the palatopharyngeal arch and the soft palate elevated at the beginning of the swallow, so exposing a larger area of the epiglottis to bolus pressure. In transitional swallows, the epiglottis was tilted backward relatively slowly by a combination of bolus pressure and squeezing of the epiglottis by closure of the palatopharyngeal sphincter. The bolus, however, traveled alongside but never over the tip of the epiglottis. In the juvenile swallow, the bolus always passed over the tip of the epiglottis. The tilting of the epiglottis resulted from several factors, including the action of the palatopharyngeal sphincter, higher bolus pressure exerted on the epiglottis and the allometry of increased size. In both transitional and juvenile swallows, the subsequent relaxation of the palatopharyngeal sphincter released the epiglottis, which sprang back to its original intranarial position.",
"title": ""
},
{
"docid": "neg:1840527_5",
"text": "This paper proposes a two-axis-decoupled solar tracker based on parallel mechanism. Utilizing Grassmann line geometry, the type design of the two-axis solar tracker is investigated. Then, singularity is studied to obtain the workspace without singularities. By using the virtual work principle, the inverse dynamics is derived to find out the driving torque. Taking Beijing as a sample city where the solar tracker is placed, the motion trajectory of the tracker is planned to collect the maximum solar energy. The position of the mass center of the solar mirror on the platform is optimized to minimize the driving torque. The driving torque of the proposed tracker is compared with that of a conventional serial tracker, which shows that the proposed tracker can greatly reduce the driving torque and the reducers with large reduction ratio are not necessary. Thus, the complexity and power dissipation of the system can be reduced.",
"title": ""
},
{
"docid": "neg:1840527_6",
"text": "Despite of the fact that graph-based methods are gaining more and more popularity in different scientific areas, it has to be considered that the choice of an appropriate algorithm for a given application is still the most crucial task. The lack of a large database of graphs makes the task of comparing the performance of different graph matching algorithms difficult, and often the selection of an algorithm is made on the basis of a few experimental results available. In this paper we present an experimental comparative evaluation of the performance of four graph matching algorithms. In order to perform this comparison, we have built and made available a large database of graphs, which is also described in detail in this article. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840527_7",
"text": "In conventional HTTP-based adaptive streaming (HAS), a video source is encoded at multiple levels of constant bitrate representations, and a client makes its representation selections according to the measured network bandwidth. While greatly simplifying adaptation to the varying network conditions, this strategy is not the best for optimizing the video quality experienced by end users. Quality fluctuation can be reduced if the natural variability of video content is taken into consideration. In this work, we study the design of a client rate adaptation algorithm to yield consistent video quality. We assume that clients have visibility into incoming video within a finite horizon. We also take advantage of the client-side video buffer, by using it as a breathing room for not only network bandwidth variability, but also video bitrate variability. The challenge, however, lies in how to balance these two variabilities to yield consistent video quality without risking a buffer underrun. We propose an optimization solution that uses an online algorithm to adapt the video bitrate step-by-step, while applying dynamic programming at each step. We incorporate our solution into PANDA -- a practical rate adaptation algorithm designed for HAS deployment at scale.",
"title": ""
},
{
"docid": "neg:1840527_8",
"text": "In this paper we present the design, fabrication and demonstration of an X-band phased array capable of wide-angle scanning. A new non-symmetric element for wideband tightly coupled dipole arrays is integrated with a low-profile microstrip balun printed on the array ground plane. The feed connects to the array aperture with vertical twin-wire transmission lines that concurrently perform impedance matching. The proposed element arms are identical near the center feed portion but dissimilar towards the ends, forming a ball-and-cup. A 64 element array prototype is verified experimentally and compared to numerical simulation. The array aperture is placed λ/7 (at 8 GHz) above a ground plane and shown to maintain a VSWR < 2 from 8–12.5 GHz while scanning up to 75° and 60° in E and H-plane, respectively.",
"title": ""
},
{
"docid": "neg:1840527_9",
"text": "The risk for multifactorial diseases is determined by risk factors that frequently apply across disorders (universal risk factors). To investigate unresolved issues on etiology of and individual’s susceptibility to multifactorial diseases, research focus should shift from single determinant-outcome relations to effect modification of universal risk factors. We present a model to investigate universal risk factors of multifactorial diseases, based on a single risk factor, a single outcome measure, and several effect modifiers. Outcome measures can be disease overriding, such as clustering of disease, frailty and quality of life. “Life course epidemiology” can be considered as a specific application of the proposed model, since risk factors and effect modifiers of multifactorial diseases typically have a chronic aspect. Risk factors are categorized into genetic, environmental, or complex factors, the latter resulting from interactions between (multiple) genetic and environmental factors (an example of a complex factor is overweight). The proposed research model of multifactorial diseases assumes that determinant-outcome relations differ between individuals because of modifiers, which can be divided into three categories. First, risk-factor modifiers that determine the effect of the determinant (such as factors that modify gene-expression in case of a genetic determinant). Second, outcome modifiers that determine the expression of the studied outcome (such as medication use). Third, generic modifiers that determine the susceptibility for multifactorial diseases (such as age). A study to assess disease risk during life requires phenotype and outcome measurements in multiple generations with a long-term follow up. Multiple generations will also enable to separate genetic and environmental factors. Traditionally, representative individuals (probands) and their first-degree relatives have been included in this type of research. We put forward that a three-generation design is the optimal approach to investigate multifactorial diseases. This design has statistical advantages (precision, multiple-informants, separation of non-genetic and genetic familial transmission, direct haplotype assessment, quantify genetic effects), enables unique possibilities to study social characteristics (socioeconomic mobility, partner preferences, between-generation similarities), and offers practical benefits (efficiency, lower non-response). LifeLines is a study based on these concepts. It will be carried out in a representative sample of 165,000 participants from the northern provinces of the Netherlands. LifeLines will contribute to the understanding of how universal risk factors are modified to influence the individual susceptibility to multifactorial diseases, not only at one stage of life but cumulatively over time: the lifeline.",
"title": ""
},
{
"docid": "neg:1840527_10",
"text": "In my commentary in response to the 3 articles (McKenzie & Lounsbery, 2013; Rink, 2013; Ward, 2013), I focus on 3 areas: (a) content knowledge, (b) a holistic approach to physical education, and (c) policy impact. I use the term quality teaching rather than \"teacher effectiveness.\" Quality teaching is a term with the potential to move our attention beyond a focus merely on issues of effectiveness relating to the achievement of prespecified objectives. I agree with Ward that teacher content knowledge is limited in physical education, and I argue that if the student does not have a connection to or relationship with the content, this will diminish their learning gains. I also argue for a more holistic approach to physical education coming from a broader conception. Physical educators who teach the whole child advocate for a plethora of physical activity, skills, knowledge, and positive attitudes that foster healthy and active playful lifestyles. Play is a valuable educational experience. I also endorse viewing assessment from different perspectives and discuss assessment through a social-critical political lens. The 3 articles also have implications for policy. Physical education is much broader than just physical activity, and we harm the future potential of our field if we adopt a narrow agenda. Looking to the future, I propose that we broaden the kinds of research that we value, support, and appreciate in our field.",
"title": ""
},
{
"docid": "neg:1840527_11",
"text": "In this work, we present an interactive system for visual analysis of urban traffic congestion based on GPS trajectories. For these trajectories we develop strategies to extract and derive traffic jam information. After cleaning the trajectories, they are matched to a road network. Subsequently, traffic speed on each road segment is computed and traffic jam events are automatically detected. Spatially and temporally related events are concatenated in, so-called, traffic jam propagation graphs. These graphs form a high-level description of a traffic jam and its propagation in time and space. Our system provides multiple views for visually exploring and analyzing the traffic condition of a large city as a whole, on the level of propagation graphs, and on road segment level. Case studies with 24 days of taxi GPS trajectories collected in Beijing demonstrate the effectiveness of our system.",
"title": ""
},
{
"docid": "neg:1840527_12",
"text": "In this paper, we present an optimization of Odlyzko and Schönhage algorithm that computes efficiently Zeta function at large height on the critical line, together with computation of zeros of the Riemann Zeta function thanks to an implementation of this technique. The first family of computations consists in the verification of the Riemann Hypothesis on all the first 10 non trivial zeros. The second family of computations consists in verifying the Riemann Hypothesis at very large height for different height, while collecting statistics in these zones. For example, we were able to compute two billion zeros from the 10-th zero of the Riemann Zeta function.",
"title": ""
},
{
"docid": "neg:1840527_13",
"text": "Electronic healthcare (eHealth) systems have replaced paper-based medical systems due to the attractive features such as universal accessibility, high accuracy, and low cost. As a major component of eHealth systems, mobile healthcare (mHealth) applies mobile devices, such as smartphones and tablets, to enable patient-to-physician and patient-to-patient communications for better healthcare and quality of life (QoL). Unfortunately, patients' concerns on potential leakage of personal health records (PHRs) is the biggest stumbling block. In current eHealth/mHealth networks, patients' medical records are usually associated with a set of attributes like existing symptoms and undergoing treatments based on the information collected from portable devices. To guarantee the authenticity of those attributes, PHRs should be verifiable. However, due to the linkability between identities and PHRs, existing mHealth systems fail to preserve patient identity privacy while providing medical services. To solve this problem, we propose a decentralized system that leverages users' verifiable attributes to authenticate each other while preserving attribute and identity privacy. Moreover, we design authentication strategies with progressive privacy requirements in different interactions among participating entities. Finally, we have thoroughly evaluated the security and computational overheads for our proposed schemes via extensive simulations and experiments.",
"title": ""
},
{
"docid": "neg:1840527_14",
"text": "This article extends psychological methods and concepts into a domain that is as profoundly consequential as it is poorly understood: intelligence analysis. We report findings from a geopolitical forecasting tournament that assessed the accuracy of more than 150,000 forecasts of 743 participants on 199 events occurring over 2 years. Participants were above average in intelligence and political knowledge relative to the general population. Individual differences in performance emerged, and forecasting skills were surprisingly consistent over time. Key predictors were (a) dispositional variables of cognitive ability, political knowledge, and open-mindedness; (b) situational variables of training in probabilistic reasoning and participation in collaborative teams that shared information and discussed rationales (Mellers, Ungar, et al., 2014); and (c) behavioral variables of deliberation time and frequency of belief updating. We developed a profile of the best forecasters; they were better at inductive reasoning, pattern detection, cognitive flexibility, and open-mindedness. They had greater understanding of geopolitics, training in probabilistic reasoning, and opportunities to succeed in cognitively enriched team environments. Last but not least, they viewed forecasting as a skill that required deliberate practice, sustained effort, and constant monitoring of current affairs.",
"title": ""
},
{
"docid": "neg:1840527_15",
"text": "In this paper we present a methodology for analyzing polyphonic musical passages comprised by notes that exhibit a harmonically fixed spectral profile (such as piano notes). Taking advantage of this unique note structure we can model the audio content of the musical passage by a linear basis transform and use non-negative matrix decomposition methods to estimate the spectral profile and the temporal information of every note. This approach results in a very simple and compact system that is not knowledge-based, but rather learns notes by observation.",
"title": ""
},
{
"docid": "neg:1840527_16",
"text": "Annona muricata is a member of the Annonaceae family and is a fruit tree with a long history of traditional use. A. muricata, also known as soursop, graviola and guanabana, is an evergreen plant that is mostly distributed in tropical and subtropical regions of the world. The fruits of A. muricata are extensively used to prepare syrups, candies, beverages, ice creams and shakes. A wide array of ethnomedicinal activities is contributed to different parts of A. muricata, and indigenous communities in Africa and South America extensively use this plant in their folk medicine. Numerous investigations have substantiated these activities, including anticancer, anticonvulsant, anti-arthritic, antiparasitic, antimalarial, hepatoprotective and antidiabetic activities. Phytochemical studies reveal that annonaceous acetogenins are the major constituents of A. muricata. More than 100 annonaceous acetogenins have been isolated from leaves, barks, seeds, roots and fruits of A. muricata. In view of the immense studies on A. muricata, this review strives to unite available information regarding its phytochemistry, traditional uses and biological activities.",
"title": ""
},
{
"docid": "neg:1840527_17",
"text": "INTRODUCTION\nNumerous methods for motor unit number estimation (MUNE) have been developed. The objective of this article is to summarize and compare the major methods and the available data regarding their reproducibility, validity, application, refinement, and utility.\n\n\nMETHODS\nUsing specified search criteria, a systematic review of the literature was performed. Reproducibility, normative data, application to specific diseases and conditions, technical refinements, and practicality were compiled into a comprehensive database and analyzed.\n\n\nRESULTS\nThe most commonly reported MUNE methods are the incremental, multiple-point stimulation, spike-triggered averaging, and statistical methods. All have established normative data sets and high reproducibility. MUNE provides quantitative assessments of motor neuron loss and has been applied successfully to the study of many clinical conditions, including amyotrophic lateral sclerosis and normal aging.\n\n\nCONCLUSIONS\nMUNE is an important research technique in human subjects, providing important data regarding motor unit populations and motor unit loss over time.",
"title": ""
},
{
"docid": "neg:1840527_18",
"text": "In less than half a century, molecular markers have totally changed our view of nature, and in the process they have evolved themselves. However, all of the molecular methods developed over the years to detect variation do so in one of only three conceptually different classes of marker: protein variants (allozymes), DNA sequence polymorphism and DNA repeat variation. The latest techniques promise to provide cheap, high-throughput methods for genotyping existing markers, but might other traditional approaches offer better value for some applications?",
"title": ""
}
] |
1840528 | IR-UWB Radar Demonstrator for Ultra-Fine Movement Detection and Vital-Sign Monitoring | [
{
"docid": "pos:1840528_0",
"text": "Antennas are mandatory system components for UWB communication systems. The paper presents a comprehensive approach for the characterization of UWB antenna concepts. Measurements of the transient responses of a LPDA and a Vivaldi antenna prove the effectivity of the presented model.",
"title": ""
}
] | [
{
"docid": "neg:1840528_0",
"text": "The cities of Paris, London, Chicago, and New York (among others) have recently launched large-scale bike-share systems to facilitate the use of bicycles for urban commuting. This paper estimates the relationship between aspects of bike-share system design and ridership. Specifically, we estimate the effects on ridership of station accessibility (how far the commuter must walk to reach a station) and of bike-availability (the likelihood of finding a bike at the station). Our analysis is based on a structural demand model that considers the random-utility maximizing choices of spatially distributed commuters, and it is estimated using highfrequency system-use data from the bike-share system in Paris. The role of station accessibility is identified using cross-sectional variation in station location and high -frequency changes in commuter choice sets; bike-availability effects are identified using longitudinal variation. Because the scale of our data, (in particular the high-frequency changes in choice sets) render traditional numerical estimation techniques infeasible, we develop a novel transformation of our estimation problem: from the time domain to the “station stockout state” domain. We find that a 10% reduction in distance traveled to access bike-share stations (about 13 meters) can increase system-use by 6.7% and that a 10% increase in bikeavailability can increase system-use by nearly 12%. Finally, we use our estimates to develop a calibrated counterfactual simulation demonstrating that the bike-share system in central Paris would have 29.41% more ridership if its station network design had incorporated our estimates of commuter preferences—with no additional spending on bikes or docking points.",
"title": ""
},
{
"docid": "neg:1840528_1",
"text": "Notes: (1) These questions require thought, but do not require long answers. Please be as concise as possible. (2) If you have a question about this homework, we encourage you to post your question on our Piazza forum, at https://piazza.com/stanford/fall2014/cs229. (3) If you missed the first lecture or are unfamiliar with the collaboration or honor code policy, please read the policy on Handout #1 (available from the course website) before starting work. (4) For problems that require programming, please include in your submission a printout of your code (with comments) and any figures that you are asked to plot. (5) If you are an on-campus (non-SCPD) student, please print, fill out, and include a copy of the cover sheet (enclosed as the final page of this document), and include the cover sheet as the first page of your submission. as a single PDF file under 20MB in size. If you have trouble submitting online, you can also email your submission to [email protected]. However, we strongly recommend using the website submission method as it will provide confirmation of submission, and also allow us to track and return your graded homework to you more easily. If you are scanning your document by cellphone, please check the Piazza forum for recommended cellphone scanning apps and best practices.",
"title": ""
},
{
"docid": "neg:1840528_2",
"text": "Data mining techniques are used to extract useful knowledge from raw data. The extracted knowledge is valuable and significantly affects the decision maker. Educational data mining (EDM) is a method for extracting useful information that could potentially affect an organization. The increase of technology use in educational systems has led to the storage of large amounts of student data, which makes it important to use EDM to improve teaching and learning processes. EDM is useful in many different areas including identifying at-risk students, identifying priority learning needs for different groups of students, increasing graduation rates, effectively assessing institutional performance, maximizing campus resources, and optimizing subject curriculum renewal. This paper surveys the relevant studies in the EDM field and includes the data and methodologies used in those studies.",
"title": ""
},
{
"docid": "neg:1840528_3",
"text": "The augmented Lagrangian method (ALM) is a benchmark for solving a convex minimization model with linear constraints. We consider the special case where the objective is the sum of m functions without coupled variables. For solving this separable convex minimization model, it is usually required to decompose the ALM subproblem at each iteration into m smaller subproblems, each of which only involves one function in the original objective. Easier subproblems capable of taking full advantage of the functions’ properties individually could thus be generated. In this paper, we focus on the case where full Jacobian decomposition is applied to ALM subproblems, i.e., all the decomposed ALM subproblems are eligible for parallel computation at each iteration. For the first time, we show by an example that the ALM with full Jacobian decomposition could be divergent. To guarantee the convergence, we suggest combining an under-relaxation step and the output of the ALM with full Jacobian decomposition. A novel analysis is presented to illustrate how to choose refined step sizes for this under-relaxation step. Accordingly, a new splitting version of the ALM with full Jacobian decomposition is proposed. We derive the worst-case O(1/k) convergence rate measured by the iteration complexity (where k represents the iteration counter) in both the ergodic and a nonergodic senses for the new algorithm. Finally, an assignment problem is tested to illustrate the efficiency of the new algorithm.",
"title": ""
},
{
"docid": "neg:1840528_4",
"text": "This paper examines prospects and limitations of citation studies in the humanities. We begin by presenting an overview of bibliometric analysis, noting several barriers to applying this method in the humanities. Following that, we present an experimental tool for extracting and classifying citation contexts in humanities journal articles. This tool reports the bibliographic information about each reference, as well as three features about its context(s): frequency, locationin-document, and polarity. We found that extraction was highly successful (above 85%) for three of the four journals, and statistics for the three citation figures were broadly consistent with previous research. We conclude by noting several limitations of the sentiment classifier and suggesting future areas for refinement. .................................................................................................................................................................................",
"title": ""
},
{
"docid": "neg:1840528_5",
"text": "Road condition data are important in transportation management systems. Over the last decades, significant progress has been made and new approaches have been proposed for efficient collection of pavement condition data. However, the assessment of unpaved road conditions has been rarely addressed in transportation research. Unpaved roads constitute approximately 40% of the U.S. road network, and are the lifeline in rural areas. Thus, it is important for timely identification and rectification of deformation on such roads. This article introduces an innovative Unmanned Aerial Vehicle (UAV)-based digital imaging system focusing on efficient collection of surface condition data over rural roads. In contrast to other approaches, aerial assessment is proposed by exploring aerial imagery acquired from an unpiloted platform to derive a threedimensional (3D) surface model over a road distress area for distress measurement. The system consists of a lowcost model helicopter equipped with a digital camera, a Global Positioning System (GPS) receiver and an Inertial Navigation System (INS), and a geomagnetic sensor. A set of image processing algorithms has been developed for precise orientation of the acquired images, and generation of 3D road surface models and orthoimages, which allows for accurate measurement of the size and the dimension of the road surface distresses. The developed system has been tested over several test sites ∗To whom correspondence should be addressed. E-mail: chunsunz@ unimelb.edu.au. with roads of various surface distresses. The experiments show that the system is capable for providing 3D information of surface distresses for road condition assessment. Experiment results demonstrate that the system is very promising and provides high accuracy and reliable results. Evaluation of the system using 2D and 3D models with known dimensions shows that subcentimeter measurement accuracy is readily achieved. The comparison of the derived 3D information with the onsite manual measurements of the road distresses reveals differences of 0.50 cm, demonstrating the potential of the presented system for future practice.",
"title": ""
},
{
"docid": "neg:1840528_6",
"text": "We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible both in the input and novel views and then casts the remaining synthesis problem as image completion. Specifically, we both predict a flow to move the pixels from the input to the novel view along with a novel visibility map that helps deal with occulsion/disocculsion. Next, conditioned on those intermediate results, we hallucinate (infer) parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes, while successfully generating high frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show our method achieves significantly better results compared to existing methods.",
"title": ""
},
{
"docid": "neg:1840528_7",
"text": "Brain mapping transforms the brain cortical surface to canonical planar domains, which plays a fundamental role in morphological study. Most existing brain mapping methods are based on angle preserving maps, which may introduce large area distortions. This work proposes an area preserving brain mapping method based on Monge-Brenier theory. The brain mapping is intrinsic to the Riemannian metric, unique, and diffeomorphic. The computation is equivalent to convex energy minimization and power Voronoi diagram construction. Comparing to the existing approaches based on Monge-Kantorovich theory, the proposed one greatly reduces the complexity (from n2 unknowns to n ), and improves the simplicity and efficiency. Experimental results on caudate nucleus surface mapping and cortical surface mapping demonstrate the efficacy and efficiency of the proposed method. Conventional methods for caudate nucleus surface mapping may suffer from numerical instability, in contrast, current method produces diffeomorpic mappings stably. In the study of cortical surface classification for recognition of Alzheimer's Disease, the proposed method outperforms some other morphometry features.",
"title": ""
},
{
"docid": "neg:1840528_8",
"text": "Built to Last's answer is to consciously build a compmy with even more care than the hotels, airplanes, or computers from which the company earns revenue. Building a company requires much more than hiring smart employees and aggressive salespeople. Visionary companies consider the personality of their potential employees and how they will fare in the company culture. They treasure employees dedicated to the company's mission, while those that don't are \" ejected like a virus. \" They carefully choose goals and develop cultures that encourage innovation and experimentation. Visionary companies plan for the future, measure their current production, and revise plans when conditions change. Much like the TV show Biography, Built to Last gives fascinating historical insight into the birth and growth of The most radical of the three books I reviewed, The Fifth Discipline, can fundamentally change the way you view the world. The Flremise is that businesses, schools, gopernments, and other organizations can best succeed if they are learning organizations. The Fifth Discipline is Peter Senge's vehicle for explaining how five complementary components-systems thinking, personal mastery, mental models, shared vision, and team learning-can support continuous learning and therefore sustainable iniprovement. Senge, a professor a t MIT's Sloan School of Government and a director of the Society for Organizational Learning, looks beyont: simple cause-and-effect explanation:j and instead advocates \" systems thinking \" to discover a more complete understanding of how and why events occur. Systems thinkers go beyond the data readily available, question assumptions, and try to identify the many types of activities that can occur simultaneously. The need for such a worldview is made clear early in the book with the role-playing \" beer game. \" In this game, three participants play the roles of store manager, beverage distributor, and beer brewer. Each has information that would typically he available: the store manager knows how many cases of beer are in inventory , how many are on order, and how many were sold in the last week. The distributor tracks the orders placed with the brewery, inventory, orders received this week from each store, and so on. As the customers' demands vary, the manager, distributor, and brewer make what seem to be reasonable decisions to change the amount they order or brew. Thousands of people have played this and, unfortunately, the results are extremely consistent. As each player tries to maximize profits, each fails to consider how his …",
"title": ""
},
{
"docid": "neg:1840528_9",
"text": "Autonomous indoor navigation of Micro Aerial Vehicles (MAVs) possesses many challenges. One main reason is because GPS has limited precision in indoor environments. The additional fact that MAVs are not able to carry heavy weight or power consuming sensors, such as range finders, makes indoor autonomous navigation a challenging task. In this paper, we propose a practical system in which a quadcopter autonomously navigates indoors and finds a specific target, i.e. a book bag, by using a single camera. A deep learning model, Convolutional Neural Network (ConvNet), is used to learn a controller strategy that mimics an expert pilot’s choice of action. We show our system’s performance through real-time experiments in diverse indoor locations. To understand more about our trained network, we use several visualization techniques.",
"title": ""
},
{
"docid": "neg:1840528_10",
"text": "We address the problem of finding realistic geometric corrections to a foreground object such that it appears natural when composited into a background image. To achieve this, we propose a novel Generative Adversarial Network (GAN) architecture that utilizes Spatial Transformer Networks (STNs) as the generator, which we call Spatial Transformer GANs (ST-GANs). ST-GANs seek image realism by operating in the geometric warp parameter space. In particular, we exploit an iterative STN warping scheme and propose a sequential training strategy that achieves better results compared to naive training of a single generator. One of the key advantages of ST-GAN is its applicability to high-resolution images indirectly since the predicted warp parameters are transferable between reference frames. We demonstrate our approach in two applications: (1) visualizing how indoor furniture (e.g. from product images) might be perceived in a room, (2) hallucinating how accessories like glasses would look when matched with real portraits.",
"title": ""
},
{
"docid": "neg:1840528_11",
"text": "Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs. This adversarial example phenomenon has been explained as originating from deep networks being “too linear” (Goodfellow et al., 2014). We show here that the linear explanation of adversarial examples presents a number of limitations: the formal argument is not convincing; linear classifiers do not always suffer from the phenomenon, and when they do their adversarial examples are different from the ones affecting deep networks. We propose a new perspective on the phenomenon. We argue that adversarial examples exist when the classification boundary lies close to the submanifold of sampled data, and present a mathematical analysis of this new perspective in the linear case. We define the notion of adversarial strength and show that it can be reduced to the deviation angle between the classifier considered and the nearest centroid classifier. Then, we show that the adversarial strength can be made arbitrarily high independently of the classification performance due to a mechanism that we call boundary tilting. This result leads us to defining a new taxonomy of adversarial examples. Finally, we show that the adversarial strength observed in practice is directly dependent on the level of regularisation used and the strongest adversarial examples, symptomatic of overfitting, can be avoided by using a proper level of regularisation.",
"title": ""
},
{
"docid": "neg:1840528_12",
"text": "Objective: To study the efficacy and safety of an indigenously designed low cost nasal bubble continuous positive airway pressure (NB-CPAP) in neonates admitted with respiratory distress. Study Design: A descriptive study. Place and Duration of Study: Combined Military Hospital (CMH), Peshawar from Jan 2014 to May 2014. Material and Methods: Fifty neonates who developed respiratory distress within 6 hours of life were placed on an indigenous NB-CPAP device (costing 220 PKR) and evaluated for gestational age, weight, indications, duration on NB-CPAP, pre-defined outcomes and complications. Results: A total of 50 consecutive patients with respiratory distress were placed on NB-CPAP. Male to Female ratio was 2.3:1. Mean weight was 2365.85 ± 704 grams and mean gestational age was 35.41 ± 2.9 weeks. Indications for applying NB-CPAP were transient tachypnea of the newborn (TTN, 52%) and respiratory distress syndrome (RDS, 44%). Most common complications were abdominal distension (15.6%) and pulmonary hemorrhage (6%). Out of 50 infants placed on NB-CPAP, 35 (70%) were managed on NB-CPAP alone while 15 (30%) needed mechanical ventilation following a trial of NB-CPAP. Conclusion: In 70% of babies invasive mechanical ventilation was avoided using NB-CPAP.",
"title": ""
},
{
"docid": "neg:1840528_13",
"text": "Two experiments demonstrated that self-perceptions and social perceptions may persevere after the initial basis for such perceptions has been completely discredited. In both studies subjects first received false feedback, indicating that they had either succeeded or failed on a novel discrimination task and then were thoroughly debriefed concerning the predetermined and random nature of this outcome manipulation. In experiment 2, both the initial outcome manipulation and subsequent debriefing were watched and overheard by observers. Both actors and observers showed substantial perseverance of initial impressions concerning the actors' performance and abilities following a standard \"outcome\" debriefing. \"Process\" debriefing, in which explicit discussion of the perseverance process was provided, generally proved sufficient to eliminate erroneous self-perceptions. Biased attribution processes that might underlie perserverance phenomena and the implications of the present data for the ethical conduct of deception research are discussed.",
"title": ""
},
{
"docid": "neg:1840528_14",
"text": "Popular domain adaptation (DA) techniques learn a classifier for the target domain by sampling relevant data points from the source and combining it with the target data. We present a Support Vector Machine (SVM) based supervised DA technique, where the similarity between source and target domains is modeled as the similarity between their SVM decision boundaries. We couple the source and target SVMs and reduce the model to a standard single SVM. We test the Coupled-SVM on multiple datasets and compare our results with other popular SVM based DA approaches.",
"title": ""
},
{
"docid": "neg:1840528_15",
"text": "The process of translation is ambiguous, in that there are typically many valid translations for a given sentence. This gives rise to significant variation in parallel corpora, however, most current models of machine translation do not account for this variation, instead treating the problem as a deterministic process. To this end, we present a deep generative model of machine translation which incorporates a chain of latent variables, in order to account for local lexical and syntactic variation in parallel corpora. We provide an indepth analysis of the pitfalls encountered in variational inference for training deep generative models. Experiments on several different language pairs demonstrate that the model consistently improves over strong baselines.",
"title": ""
},
{
"docid": "neg:1840528_16",
"text": "Wireless sensor networks (WSNs) are autonomous networks of spatially distributed sensor nodes that are capable of wirelessly communicating with each other in a multihop fashion. Among different metrics, network lifetime and utility, and energy consumption in terms of carbon footprint are key parameters that determine the performance of such a network and entail a sophisticated design at different abstraction levels. In this paper, wireless energy harvesting (WEH), wake-up radio (WUR) scheme, and error control coding (ECC) are investigated as enabling solutions to enhance the performance of WSNs while reducing its carbon footprint. Specifically, a utility-lifetime maximization problem incorporating WEH, WUR, and ECC, is formulated and solved using distributed dual subgradient algorithm based on the Lagrange multiplier method. Discussion and verification through simulation results show how the proposed solutions improve network utility, prolong the lifetime, and pave the way for a greener WSN by reducing its carbon footprint.",
"title": ""
},
{
"docid": "neg:1840528_17",
"text": "c∈T ∑ u∈Sc log fu,c(X), where Sc – set of locations, which were identified as a class c ∈ C by the weak localization procedure. 2 Expansion principle • Expansion loss incorporates a prior knowledge about object sizes. • The characteristic size of any class c is controlled by a decay parameter dc. • We use decay d+ for all classes, which present in the image, and decay d− for all classes, which are absent. I = {i1, . . . , in} defines descending order for class scores: fi1,c(x) ≥ · · · ≥ fin,c(x) Gc(f(X);dc) = 1 Z(dc) n ∑",
"title": ""
},
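The truncated passage above (docid neg:1840528_17) lists a seeding term over weakly localized positions and a decay-weighted expansion score G_c. Below is a minimal numpy sketch of those two quantities under stated assumptions: the mean negative log-likelihood normalization of the seeding term and the (d_c)^(j-1) rank weighting inside G_c follow the commonly cited seed-and-expand formulation and are not spelled out in the truncated text; function names and array shapes are illustrative.

```python
import numpy as np

def seeding_loss(f, seeds, eps=1e-12):
    """Average negative log-likelihood of the class scores at the weak-localization seeds.

    f     : (n_locations, n_classes) array of per-location class scores (softmax outputs).
    seeds : dict mapping class index c -> list of location indices S_c.
    """
    n_seeds = sum(len(locs) for locs in seeds.values())
    loss = 0.0
    for c, locs in seeds.items():
        loss -= np.log(f[locs, c] + eps).sum()
    return loss / max(n_seeds, 1)

def expansion_score(scores_c, d_c):
    """Decay-weighted, rank-ordered aggregate G_c(f(X); d_c) of the class-c scores.

    scores_c : (n_locations,) scores f_{u,c}(X) for one class c.
    d_c      : decay parameter in (0, 1].
    """
    ordered = np.sort(scores_c)[::-1]          # f_{i_1,c} >= ... >= f_{i_n,c}
    weights = d_c ** np.arange(len(ordered))   # (d_c)^(j-1), j = 1..n
    return np.dot(weights, ordered) / weights.sum()
```

With d_c = 1 the expansion score reduces to a global average of the class scores, while small d_c approaches a global maximum, which is how the decay parameter encodes the expected object size in this sketch.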
{
"docid": "neg:1840528_18",
"text": "Two single-pole, double-throw transmit/receive switches were designed and fabricated with different substrate resistances using a 0.18-/spl mu/m p/sup $/substrate CMOS process. The switch with low substrate resistances exhibits 0.8-dB insertion loss and 17-dBm P/sub 1dB/ at 5.825 GHz, whereas the switch with high substrate resistances has 1-dB insertion loss and 18-dBm P/sub 1dB/. These results suggest that the optimal insertion loss can be achieved with low substrate resistances and 5.8-GHz T/R switches with excellent insertion loss and reasonable power handling capability can be implemented in a 0.18-/spl mu/m CMOS process.",
"title": ""
},
{
"docid": "neg:1840528_19",
"text": "Precision measurement of dc high current is usually realized by second harmonic fluxgate current transducers, but the complicated modulation and demodulation circuits with high cost have been limiting their applications. This paper presents a low-cost transducer that can substitute the traditional ones for precision measurement of high current. The new transducer, based on the principle of zero-flux, is the combination of an improved self-oscillating fluxgate sensor with a magnetic integrator in a common feedback loop. The transfer function of the zero-flux control strategy of the transducer is established to verify the validity of the qualitative analysis on operating principle. Origins and major influence factors of the modulation ripple, respectively, caused by the useful signal extraction circuit and the transformer effect are studied, and related suppression methods are proposed, which can be considered as one of the major technical modifications for performance improvement. As verification, a prototype is realized, and several key specifications, including the linearity, small-signal bandwidth, modulation ripple, ratio stability under full load, power-on repeatability, magnetic error, and temperature coefficient, are characterized. Measurement results show that the new transducer with the maximum output ripple 0.3 μA can measure dc current up to ±600 A with a relative accuracy 1.3 ppm in the full scale, and it also can measure ac current and has a -3 dB bandwidth greater than 100 kHz.",
"title": ""
}
] |
1840529 | Japanese Society for Cancer of the Colon and Rectum (JSCCR) Guidelines 2014 for treatment of colorectal cancer | [
{
"docid": "pos:1840529_0",
"text": "Five cases are described where minute foci of adenocarcinoma have been demonstrated in the mesorectum several centimetres distal to the apparent lower edge of a rectal cancer. In 2 of these there was no other evidence of lymphatic spread of the tumour. In orthodox anterior resection much of this tissue remains in the pelvis, and its is suggested that these foci might lead to suture-line or pelvic recurrence. Total excision of the mesorectum has, therefore, been carried out as a part of over 100 consecutive anterior resections. Fifty of these, which were classified as 'curative' or 'conceivably curative' operations, have now been followed for over 2 years with no pelvic or staple-line recurrence.",
"title": ""
},
{
"docid": "pos:1840529_1",
"text": "BACKGROUND\nShort-term preoperative radiotherapy and total mesorectal excision have each been shown to improve local control of disease in patients with resectable rectal cancer. We conducted a multicenter, randomized trial to determine whether the addition of preoperative radiotherapy increases the benefit of total mesorectal excision.\n\n\nMETHODS\nWe randomly assigned 1861 patients with resectable rectal cancer either to preoperative radiotherapy (5 Gy on each of five days) followed by total mesorectal excision (924 patients) or to total mesorectal excision alone (937 patients). The trial was conducted with the use of standardization and quality-control measures to ensure the consistency of the radiotherapy, surgery, and pathological techniques.\n\n\nRESULTS\nOf the 1861 patients randomly assigned to one of the two treatment groups, 1805 were eligible to participate. The overall rate of survival at two years among the eligible patients was 82.0 percent in the group assigned to both radiotherapy and surgery and 81.8 percent in the group assigned to surgery alone (P=0.84). Among the 1748 patients who underwent a macroscopically complete local resection, the rate of local recurrence at two years was 5.3 percent. The rate of local recurrence at two years was 2.4 percent in the radiotherapy-plus-surgery group and 8.2 percent in the surgery-only group (P<0.001).\n\n\nCONCLUSIONS\nShort-term preoperative radiotherapy reduces the risk of local recurrence in patients with rectal cancer who undergo a standardized total mesorectal excision.",
"title": ""
}
] | [
{
"docid": "neg:1840529_0",
"text": "Data domain description concerns the characterization of a data set. A good description covers all target data but includes no superfluous space. The boundary of a dataset can be used to detect novel data or outliers. We will present the Support Vector Data Description (SVDD) which is inspired by the Support Vector Classifier. It obtains a spherically shaped boundary around a dataset and analogous to the Support Vector Classifier it can be made flexible by using other kernel functions. The method is made robust against outliers in the training set and is capable of tightening the description by using negative examples. We show characteristics of the Support Vector Data Descriptions using artificial and real data.",
"title": ""
},
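The passage above (docid neg:1840529_0) describes the Support Vector Data Description: a spherical boundary around the target data, made flexible with kernel functions and robust to outliers. As a hedged, practical stand-in (SVDD with a Gaussian kernel is closely related to, but not identical to, the one-class SVM), the sketch below uses scikit-learn's OneClassSVM to fit a data description and flag points that fall outside it; the toy dataset and the nu and gamma values are made up for illustration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(0.0, 1.0, size=(500, 2))            # target data to be described
X_test = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),   # target-like points
                    rng.uniform(-6.0, 6.0, size=(50, 2))])# likely novelties/outliers

# nu bounds the fraction of training points allowed outside the description;
# gamma controls how tightly the RBF boundary wraps the data.
svdd_like = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(X_train)

labels = svdd_like.predict(X_test)            # +1 = inside the description, -1 = outside
scores = svdd_like.decision_function(X_test)  # signed distance to the boundary
print("points flagged as outliers:", int((labels == -1).sum()))
```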
{
"docid": "neg:1840529_1",
"text": "A three-phase current source gate turn-off (GTO) thyristor rectifier is described with a high power factor, low line current distortion, and a simple main circuit. It adopts pulse-width modulation (PWM) control techniques obtained by analyzing the PWM patterns of three-phase current source rectifiers/inverters, and it uses a method of generating such patterns. In addition, by using an optimum set-up of the circuit constants, the GTO switching frequency is reduced to 500 Hz. This rectifier is suitable for large power conversion, because it can reduce GTO switching loss and its snubber loss.<<ETX>>",
"title": ""
},
{
"docid": "neg:1840529_2",
"text": "Abstract. We present a detailed workload characterization of a multi-tiered system that hosts an e-commerce site. Using the TPC-W workload and via experimental measurements, we illustrate how workload characteristics affect system behavior and operation, focusing on the statistical properties of dynamic page generation. This analysis allows to identify bottlenecks and the system conditions under which there is degradation in performance. Consistent with the literature, we find that the distribution of the dynamic page generation is heavy-tailed, which is caused by the interaction of the database server with the storage system. Furthermore, by examining the queuing behavior at the database server, we present experimental evidence of the existence of statistical correlation in the distribution of dynamic page generation times, especially under high load conditions. We couple this observation with the existence (and switching) of bottlenecks in the system.",
"title": ""
},
{
"docid": "neg:1840529_3",
"text": "Background: Silver nanoparticles (SNPs) are used extensively in areas such as medicine, catalysis, electronics, environmental science, and biotechnology. Therefore, facile synthesis of SNPs from an eco-friendly, inexpensive source is a prerequisite. In the present study, fabrication of SNPs from the leaf extract of Butea monosperma (Flame of Forest) has been performed. SNPs were synthesized from 1% leaf extract solution and characterized by ultraviolet-visible (UV-vis) spectroscopy and transmission electron microscopy (TEM). The mechanism of SNP formation was studied by Fourier transform infrared (FTIR), and anti-algal properties of SNPs on selected toxic cyanobacteria were evaluated. Results: TEM analysis indicated that size distribution of SNPs was under 5 to 30 nm. FTIR analysis indicated the role of amide I and II linkages present in protein in the reduction of silver ions. SNPs showed potent anti-algal properties on two cyanobacteria, namely, Anabaena spp. and Cylindrospermum spp. At a concentration of 800 μg/ml of SNPs, maximum anti-algal activity was observed in both cyanobacteria. Conclusions: This study clearly demonstrates that small-sized, stable SNPs can be synthesized from the leaf extract of B. monosperma. SNPs can be effectively employed for removal of toxic cyanobacteria.",
"title": ""
},
{
"docid": "neg:1840529_4",
"text": "The demand for greater battery life in low-power consumer electronics and implantable medical devices presents a need for improved energy efficiency in the management of small rechargeable cells. This paper describes an ultra-compact analog lithium-ion (Li-ion) battery charger with high energy efficiency. The charger presented here utilizes the tanh basis function of a subthreshold operational transconductance amplifier to smoothly transition between constant-current and constant-voltage charging regimes without the need for additional area- and power-consuming control circuitry. Current-domain circuitry for end-of-charge detection negates the need for precision-sense resistors in either the charging path or control loop. We show theoretically and experimentally that the low-frequency pole-zero nature of most battery impedances leads to inherent stability of the analog control loop. The circuit was fabricated in an AMI 0.5-μm complementary metal-oxide semiconductor process, and achieves 89.7% average power efficiency and an end voltage accuracy of 99.9% relative to the desired target 4.2 V, while consuming 0.16 mm2 of chip area. To date and to the best of our knowledge, this design represents the most area-efficient and most energy-efficient battery charger circuit reported in the literature.",
"title": ""
},
{
"docid": "neg:1840529_5",
"text": "As the extension of Distributed Denial-of-Service (DDoS) attacks to application layer in recent years, researchers pay much interest in these new variants due to a low-volume and intermittent pattern with a higher level of stealthiness, invaliding the state-of-the-art DDoS detection/defense mechanisms. We describe a new type of low-volume application layer DDoS attack--Tail Attacks on Web Applications. Such attack exploits a newly identified system vulnerability of n-tier web applications (millibottlenecks with sub-second duration and resource contention with strong dependencies among distributed nodes) with the goal of causing the long-tail latency problem of the target web application (e.g., 95th percentile response time > 1 second) and damaging the long-term business of the service provider, while all the system resources are far from saturation, making it difficult to trace the cause of performance degradation.\n We present a modified queueing network model to analyze the impact of our attacks in n-tier architecture systems, and numerically solve the optimal attack parameters. We adopt a feedback control-theoretic (e.g., Kalman filter) framework that allows attackers to fit the dynamics of background requests or system state by dynamically adjusting attack parameters. To evaluate the practicality of such attacks, we conduct extensive validation through not only analytical, numerical, and simulation results but also real cloud production setting experiments via a representative benchmark website equipped with state-of-the-art DDoS defense tools. We further proposed a solution to detect and defense the proposed attacks, involving three stages: fine-grained monitoring, identifying bursts, and blocking bots.",
"title": ""
},
{
"docid": "neg:1840529_6",
"text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.",
"title": ""
},
{
"docid": "neg:1840529_7",
"text": "The economic theory of the consumer is a combination of positive and normative theories. Since it is based on a rational maximizing model it describes how consumers should choose, but it is alleged to also describe how they do choose. This paper argues that in certain well-defined situations many consumers act in a manner that is inconsistent with economic theory. In these situations economic theory will make systematic errors in predicting behavior. Kahneman and Tversky's prospect theory is proposed as the basis for an alternative descriptive theory. Topics discussed are: underweighting of opportunity costs, failure to ignore sunk costs, search behavior, choosing not to choose and regret, and precommitment and self-control.",
"title": ""
},
{
"docid": "neg:1840529_8",
"text": "This paper describes our approach on “Information Extraction from Microblogs Posted during Disasters”as an attempt in the shared task of the Microblog Track at Forum for Information Retrieval Evaluation (FIRE) 2016 [2]. Our method uses vector space word embeddings to extract information from microblogs (tweets) related to disaster scenarios, and can be replicated across various domains. The system, which shows encouraging performance, was evaluated on the Twitter dataset provided by the FIRE 2016 shared task. CCS Concepts •Computing methodologies→Natural language processing; Information extraction;",
"title": ""
},
{
"docid": "neg:1840529_9",
"text": "This paper presents inexpensive computer vision techniques allowing to measure the texture characteristics of woven fabric, such as weave repeat and yarn counts, and the surface roughness. First, we discuss the automatic recognition of weave pattern and the accurate measurement of yarn counts by analyzing fabric sample images. We propose a surface roughness indicator FDFFT, which is the 3-D surface fractal dimension measurement calculated from the 2-D fast Fourier transform of high-resolution 3-D surface scan. The proposed weave pattern recognition method was validated by using computer-simulated woven samples and real woven fabric images. All weave patterns of the tested fabric samples were successfully recognized, and computed yarn counts were consistent with the manual counts. The rotation invariance and scale invariance of FDFFT were validated with fractal Brownian images. Moreover, to evaluate the correctness of FDFFT, we provide a method of calculating standard roughness parameters from the 3-D fabric surface. According to the test results, we demonstrated that FDFFT is a fast and reliable parameter for fabric roughness measurement based on 3-D surface data.",
"title": ""
},
{
"docid": "neg:1840529_10",
"text": "A simple, low-cost, and compact printed dual-band fork-shaped monopole antenna for Bluetooth and ultrawideband (UWB) applications is proposed. Dual-band operation covering 2.4-2.484 GHz (Bluetooth) and 3.1-10.6 GHz (UWB) frequency bands are obtained by using a fork-shaped radiating patch and a rectangular ground patch. The proposed antenna is fed by a 50-Ω microstrip line and fabricated on a low-cost FR4 substrate having dimensions 42 (<i>L</i><sub>sub</sub>) × 24 (<i>W</i><sub>sub</sub>) × 1.6 (<i>H</i>) mm<sup>3</sup>. The antenna structure is fabricated and tested. Measured <i>S</i><sub>11</sub> is ≤ -10 dB over 2.3-2.5 and 3.1-12 GHz. The antenna shows acceptable gain flatness with nearly omnidirectional radiation patterns over both Bluetooth and UWB bands.",
"title": ""
},
{
"docid": "neg:1840529_11",
"text": "The presence of buried landmines is a serious threat in many areas around the World. Despite various techniques have been proposed in the literature to detect and recognize buried objects, automatic and easy to use systems providing accurate performance are still under research. Given the incredible results achieved by deep learning in many detection tasks, in this paper we propose a pipeline for buried landmine detection based on convolutional neural networks (CNNs) applied to ground-penetrating radar (GPR) images. The proposed algorithm is capable of recognizing whether a B-scan profile obtained from GPR acquisitions contains traces of buried mines. Validation of the presented system is carried out on real GPR acquisitions, albeit system training can be performed simply relying on synthetically generated data. Results show that it is possible to reach 95% of detection accuracy without training in real acquisition of landmine profiles.",
"title": ""
},
{
"docid": "neg:1840529_12",
"text": "Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. However, as intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbances, 590 nm over 450 nm, is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantitation down to 50 ng of bovine serum albumin. Furthermore, protein assay in presence of up to 35-fold weight excess of sodium dodecyl sulfate (detergent) over bovine serum albumin (protein) can be performed. A linear equation that perfectly fits the experimental data is provided on the basis of mass action and Beer's law.",
"title": ""
},
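The passage above (docid neg:1840529_12) reports that the ratio of absorbances at 590 nm and 450 nm is linear in protein concentration under standard Bradford assay conditions. The following sketch shows the arithmetic only: fit a straight line to the ratio for calibration standards, then invert it for an unknown sample; all absorbance and concentration values are hypothetical.

```python
import numpy as np

# Hypothetical calibration standards (ng of BSA) and their measured absorbances.
conc_ng = np.array([0, 100, 250, 500, 750, 1000], dtype=float)
a590 = np.array([0.466, 0.512, 0.585, 0.700, 0.810, 0.915])
a450 = np.array([0.800, 0.785, 0.760, 0.722, 0.690, 0.660])

ratio = a590 / a450  # the quantity reported to be linear in protein concentration

slope, intercept = np.polyfit(conc_ng, ratio, 1)  # least-squares straight line
print(f"ratio = {slope:.6f} * ng + {intercept:.3f}")

# Invert the calibration for an unknown sample (absorbances again hypothetical).
a590_u, a450_u = 0.640, 0.742
protein_ng = (a590_u / a450_u - intercept) / slope
print(f"estimated protein: {protein_ng:.0f} ng")
```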
{
"docid": "neg:1840529_13",
"text": "Numerous studies report that standard volatility models have low explanatory power, leading some researchers to question whether these models have economic value. We examine this question by using conditional mean-variance analysis to assess the value of volatility timing to short-horizon investors. We nd that the volatility timing strategies outperform the unconditionally e cient static portfolios that have the same target expected return and volatility. This nding is robust to estimation risk and transaction costs.",
"title": ""
},
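The passage above (docid neg:1840529_13) evaluates volatility timing with conditional mean-variance analysis. The sketch below illustrates only the general idea (weights of the form w_t = (1/gamma) * Sigma_t^{-1} * mu, recomputed as a rolling covariance estimate changes) and is not the authors' estimator, data, or evaluation; the window length, risk aversion, and expected returns are arbitrary assumptions.

```python
import numpy as np

def volatility_timed_weights(returns, mu, window=60, risk_aversion=5.0):
    """Recompute mean-variance weights each period from a rolling covariance estimate.

    returns : (T, N) array of asset returns.
    mu      : (N,) assumed constant expected excess returns; the "timing" in this
              sketch comes only from the changing covariance estimate.
    """
    T, N = returns.shape
    weights = np.full((T, N), np.nan)
    for t in range(window, T):
        sigma = np.cov(returns[t - window:t].T)          # conditional covariance estimate
        w = np.linalg.solve(sigma, mu) / risk_aversion   # w_t = (1/gamma) Sigma_t^{-1} mu
        weights[t] = w
    return weights

# Toy usage with simulated returns for three assets.
rng = np.random.default_rng(1)
rets = rng.normal(0.0005, 0.01, size=(500, 3))
w = volatility_timed_weights(rets, mu=np.array([0.0004, 0.0005, 0.0006]))
print(w[-1])
```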
{
"docid": "neg:1840529_14",
"text": "The networks of intelligent building are usually consist of a great number of smart devices. Since many smart devices only support on-site configuration and upgrade, and communication between devices could be observed and even altered by attackers, efficiency and security are two key concerns in maintaining and managing the devices used in intelligent building networks. In this paper, the authors apply the technology of software defined networking to satisfy the requirement for efficiency in intelligent building networks. More specific, a protocol stack in smart devices that support OpenFlow is designed. In addition, the authors designed the lightweight security mechanism with two foundation protocols and a full protocol that uses the foundation protocols as example. Performance and session key establishment for the security mechanism are also discussed.",
"title": ""
},
{
"docid": "neg:1840529_15",
"text": "Crowdfunding systems are social media websites that allow people to donate small amounts of money that add up to fund valuable larger projects. These websites are structured around projects: finite campaigns with welldefined goals, end dates, and completion criteria. We use a dataset from an existing crowdfunding website — the school charity Donors Choose — to understand the value of completing projects. We find that completing a project is an important act that leads to larger donations (over twice as large), greater likelihood of returning to donate again, and few projects that expire close but not complete. A conservative estimate suggests that this completion bias led to over $15 million in increased donations to Donors Choose, representing approximately 16% of the total donations for the period under study. This bias suggests that structuring many types of collaborative work as a series of projects might increase contribution significantly. Many social media creators find it rather difficult to motivate users to actively participate and contribute their time, energy, or money to make a site valuable to others. The value in social media largely derives from interactions between and among people who are working together to achieve common goals. To encourage people to participate and contribute, social media creators regularly look for different ways of structuring participation. Some use a blog-type format, such as Facebook, Twitter, or Tumblr. Some use a collaborative document format like Wikipedia. And some use a project-based format. A project is a well-defined set of tasks that needs to be accomplished. Projects usually have a well-defined end goal — something that needs to be accomplished for the project to be considered a success — and an end date — a day by which the project needs to be completed. Much work in society is structured around projects; for example, Hollywood makes movies by organizing each movie’s production as a project, hiring a new crew for each movie. Construction companies organize their work as a sequence of projects. And projects are common in knowledge-work based businesses (?). Copyright c © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Another important place we see project-based organization is in crowdfunding websites. Crowdfunding is a relatively new phenomenon that merges modern social web technologies with project-based fundraising. It is a new form of social media that publicizes projects that need money, and allows the crowd to each make a small contribution toward the larger project. By aggregating many small donations, crowdfunding websites can fund large and interesting projects of all kinds. Kickstarter, IndieGoGo, Spot.Us, and Donors Choose are examples of crowdfunding websites targeted at specific types of projects (creative, entrepreneurial, journalism, and classroom projects respectively). Crowdfunding is becoming an increasingly popular tool for enabling project-based work. Kickstarter, for example, has raised over $400 million for over 35,000 creative projects, and Donors Choose has raised over $90 million for over 200,000 classroom projects. Additionally, crowdfunding websites represent potential new business models for a number of industries, including some struggling to find viable revenue streams: Sellaband has proven successful in helping musicians fund the creation and distribution of their music; and Spot.Us enables journalists to fund and publish investigative news. 
In this paper, I seek to understand why crowdfunding systems that are organized around projects are successful. Using a dataset from Donors Choose, a crowdfunding charity that funds classroom projects for K–12 school teachers, I find that completing a project is a powerful motivator that helps projects succeed in the presence of a crowd: donations that complete a project are over twice as large as normal donations. People who make these donations are more likely to return and donate in the future, and their future donations are larger. And few projects get close to completion but fail. Together, these results suggest that completing the funding for a project is an important act for the crowd, and structuring the fundraising around completable projects helps enable success. This also has implications for other types of collaborative technologies.",
"title": ""
},
{
"docid": "neg:1840529_16",
"text": "An electroneurographic study performed on the peripheral nerves of 25 patients with severe cirrhosis following viral hepatitis showed slight slowing (P > 0.05) of motor conduction velocity (CV) and significant diminution (P < 0.001) of sensory CV and mixed sensorimotor-evoked potentials, associated with a significant decrease in the amplitude of sensory evoked potentials. The slowing was about equal in the distal (digital) and in the proximal segments of the same nerve. A mixed axonal degeneration and segmental demyelination is presumed to explain these findings. The CV measurements proved helpful for an early diagnosis of hepatic polyneuropathy showing subjective symptoms in the subclinical stage. Elektroneurographische Untersuchungen der peripheren Nerven bei 25 Patienten mit postviralen Leberzirrhosen ergaben folgendes: geringe Verminderung (P > 0.05) der motorischen Leitgeschwindigkeit (LG) und eine signifikant verlangsamte LG in sensiblen Fasern (P < 0.001), in beiden proximalen und distalen Fasern. Es wurde in den gemischten evozierten Potentialen eine Verlangsamung der LG festgestellt, zwischen den Werten der motorischen und sensiblen Fasern. Gleichzeitig wurde eine Minderung der Amplitude des NAP beobachtet. Diese Befunde sprechen für eine axonale Degeneration und eine Demyelinisierung in den meisten untersuchten peripheren Nerven. Elektroneurographische Untersuchungen erlaubten den funktionellen Zustand des peripheren Nervens abzuschätzen und bestimmte Veränderungen bereits im Initialstadium der Erkrankung aufzudecken, wenn der Patient noch keine klinischen Zeichen einer peripheren Neuropathie bietet.",
"title": ""
},
{
"docid": "neg:1840529_17",
"text": "Empirical scoring functions based on either molecular force fields or cheminformatics descriptors are widely used, in conjunction with molecular docking, during the early stages of drug discovery to predict potency and binding affinity of a drug-like molecule to a given target. These models require expert-level knowledge of physical chemistry and biology to be encoded as hand-tuned parameters or features rather than allowing the underlying model to select features in a data-driven procedure. Here, we develop a general 3-dimensional spatial convolution operation for learning atomic-level chemical interactions directly from atomic coordinates and demonstrate its application to structure-based bioactivity prediction. The atomic convolutional neural network is trained to predict the experimentally determined binding affinity of a protein-ligand complex by direct calculation of the energy associated with the complex, protein, and ligand given the crystal structure of the binding pose. Non-covalent interactions present in the complex that are absent in the protein-ligand sub-structures are identified and the model learns the interaction strength associated with these features. We test our model by predicting the binding free energy of a subset of protein-ligand complexes found in the PDBBind dataset and compare with state-of-the-art cheminformatics and machine learning-based approaches. We find that all methods achieve experimental accuracy (less than 1 kcal/mol mean absolute error) and that atomic convolutional networks either outperform or perform competitively with the cheminformatics based methods. Unlike all previous protein-ligand prediction systems, atomic convolutional networks are end-to-end and fully-differentiable. They represent a new data-driven, physics-based deep learning model paradigm that offers a strong foundation for future improvements in structure-based bioactivity prediction.",
"title": ""
},
{
"docid": "neg:1840529_18",
"text": "Cloud computing enables access to the widespread services and resources in cloud datacenters for mitigating resource limitations in low-potential client devices. Computational cloud is an attractive platform for computational offloading due to the attributes of scalability and availability of resources. Therefore, mobile cloud computing (MCC) leverages the application processing services of computational clouds for enabling computational-intensive and ubiquitous mobile applications on smart mobile devices (SMDs). Computational offloading frameworks focus on offloading intensive mobile applications at different granularity levels which involve resource-intensive mechanism of application profiling and partitioning at runtime. As a result, the energy consumption cost (ECC) and turnaround time of the application is increased. This paper proposes an active service migration (ASM) framework for computational offloading to cloud datacenters, which employs lightweight procedure for the deployment of runtime distributed platform. The proposed framework employs coarse granularity level and simple developmental and deployment procedures for computational offloading in MCC. ASM is evaluated by benchmarking prototype application on the Android devices in the real MCC environment. It is found that the turnaround time of the application reduces up to 45 % and ECC of the application reduces up to 33 % in ASM-based computational offloading as compared to traditional offloading techniques which shows the lightweight nature of the proposed framework for computational offloading.",
"title": ""
},
{
"docid": "neg:1840529_19",
"text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.",
"title": ""
}
] |
1840530 | Diverse synaptic plasticity mechanisms orchestrated to form and retrieve memories in spiking neural networks | [
{
"docid": "pos:1840530_0",
"text": "Electrophysiological connectivity patterns in cortex often have a few strong connections, which are sometimes bidirectional, among a lot of weak connections. To explain these connectivity patterns, we created a model of spike timing–dependent plasticity (STDP) in which synaptic changes depend on presynaptic spike arrival and the postsynaptic membrane potential, filtered with two different time constants. Our model describes several nonlinear effects that are observed in STDP experiments, as well as the voltage dependence of plasticity. We found that, in a simulated recurrent network of spiking neurons, our plasticity rule led not only to development of localized receptive fields but also to connectivity patterns that reflect the neural code. For temporal coding procedures with spatio-temporal input correlations, strong connections were predominantly unidirectional, whereas they were bidirectional under rate-coded input with spatial correlations only. Thus, variable connectivity patterns in the brain could reflect different coding principles across brain areas; moreover, our simulations suggested that plasticity is fast.",
"title": ""
}
] | [
{
"docid": "neg:1840530_0",
"text": "Data-driven techniques for interactive narrative generation are the subject of growing interest. Reinforcement learning (RL) offers significant potential for devising data-driven interactive narrative generators that tailor players’ story experiences by inducing policies from player interaction logs. A key open question in RL-based interactive narrative generation is how to model complex player interaction patterns to learn effective policies. In this paper we present a deep RL-based interactive narrative generation framework that leverages synthetic data produced by a bipartite simulated player model. Specifically, the framework involves training a set of Q-networks to control adaptable narrative event sequences with long short-term memory network-based simulated players. We investigate the deep RL framework’s performance with an educational interactive narrative, CRYSTAL ISLAND. Results suggest that the deep RL-based narrative generation framework yields effective personalized interactive narratives.",
"title": ""
},
{
"docid": "neg:1840530_1",
"text": "0950-7051/$ see front matter 2010 Elsevier B.V. A doi:10.1016/j.knosys.2010.07.006 * Corresponding author. Tel.: +886 3 5712121x573 E-mail addresses: [email protected] (Y.-S (L.-I. Tong). The autoregressive integrated moving average (ARIMA), which is a conventional statistical method, is employed in many fields to construct models for forecasting time series. Although ARIMA can be adopted to obtain a highly accurate linear forecasting model, it cannot accurately forecast nonlinear time series. Artificial neural network (ANN) can be utilized to construct more accurate forecasting model than ARIMA for nonlinear time series, but explaining the meaning of the hidden layers of ANN is difficult and, moreover, it does not yield a mathematical equation. This study proposes a hybrid forecasting model for nonlinear time series by combining ARIMA with genetic programming (GP) to improve upon both the ANN and the ARIMA forecasting models. Finally, some real data sets are adopted to demonstrate the effectiveness of the proposed forecasting model. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
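The passage above (docid neg:1840530_1) proposes a hybrid of ARIMA and genetic programming for nonlinear series. The sketch below follows the usual two-stage hybrid pattern (fit ARIMA, then model its residuals with a nonlinear learner on lagged residuals), but it substitutes a gradient-boosting regressor for the genetic-programming stage, since the paper's GP setup is not given here; the ARIMA order, lag count, and toy series are assumptions for illustration.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import GradientBoostingRegressor

def fit_hybrid(y, order=(2, 1, 2), n_lags=4):
    """Fit ARIMA to the series, then model its residuals with a nonlinear learner
    (a stand-in for the genetic-programming stage) using lagged residuals as inputs."""
    arima = ARIMA(y, order=order).fit()
    resid = np.asarray(arima.resid)

    X = np.column_stack([resid[i:len(resid) - n_lags + i] for i in range(n_lags)])
    target = resid[n_lags:]
    nonlinear = GradientBoostingRegressor(random_state=0).fit(X, target)
    return arima, nonlinear

def forecast_next(arima, nonlinear, n_lags=4):
    """One-step-ahead forecast = linear ARIMA forecast + predicted residual correction."""
    linear_part = float(arima.forecast(steps=1)[0])
    last_resid = np.asarray(arima.resid)[-n_lags:].reshape(1, -1)
    return linear_part + float(nonlinear.predict(last_resid)[0])

# Toy usage on a simulated nonlinear series.
rng = np.random.default_rng(2)
t = np.arange(300)
y = 0.05 * t + np.sin(t / 8.0) ** 2 + rng.normal(0.0, 0.2, size=300)
models = fit_hybrid(y)
print("hybrid one-step forecast:", forecast_next(*models))
```

The design choice is the standard one for such hybrids: the linear stage captures trend and autocorrelation, and the second stage only has to learn the structure left in the residuals.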
{
"docid": "neg:1840530_2",
"text": "The fabrication and characterization of magnetic sensors consisting of double magnetic layers are described. Both thin film based material and wire based materials were used for the double layers. The sensor elements were fabricated by patterning NiFe/CoFe multilayer thin films. This thin film based sensor exhibited a constant output voltage per excitation magnetic field at frequencies down to 0.1 Hz. The magnetic sensor using a twisted FeCoV wire, the conventional material for the Wiegand effect, had the disadvantage of an asymmetric output voltage generated by an alternating magnetic field. It was found that the magnetic wire whose ends were both slightly etched exhibited a symmetric output voltage.",
"title": ""
},
{
"docid": "neg:1840530_3",
"text": "Recently Adleman has shown that a small traveling salesman problem can be solved by molecular operations. In this paper we show how the same principles can be applied to breaking the Data Encryption Standard (DES). We describe in detail a library of operations which are useful when working with a molecular computer. We estimate that given one arbitrary (plain-text, cipher-text) pair, one can recover the DES key in about 4 months of work. Furthermore, we show that under chosen plain-text attack it is possible to recover the DES key in one day using some preprocessing. Our method can be generalized to break any cryptosystem which uses keys of length less than 64 bits.",
"title": ""
},
{
"docid": "neg:1840530_4",
"text": "Over the past years, the computing industry has started various initiatives announced to increase computer security by means of new hardware architectures. The most notable effort is the Trusted Computing Group (TCG) and the Next-Generation Secure Computing Base (NGSCB). This technology offers useful new functionalities as the possibility to verify the integrity of a platform (attestation) or binding quantities on a specific platform (sealing).In this paper, we point out the deficiencies of the attestation and sealing functionalities proposed by the existing specification of the TCG: we show that these mechanisms can be misused to discriminate certain platforms, i.e., their operating systems and consequently the corresponding vendors. A particular problem in this context is that of managing the multitude of possible configurations. Moreover, we highlight other shortcomings related to the attestation, namely system updates and backup. Clearly, the consequences caused by these problems lead to an unsatisfactory situation both for the private and business branch, and to an unbalanced market when such platforms are in wide use.To overcome these problems generally, we propose a completely new approach: the attestation of a platform should not depend on the specific software or/and hardware (configuration) as it is today's practice but only on the \"properties\" that the platform offers. Thus, a property-based attestation should only verify whether these properties are sufficient to fulfill certain (security) requirements of the party who asks for attestation. We propose and discuss a variety of solutions based on the existing Trusted Computing (TC) functionality. We also demonstrate, how a property-based attestation protocol can be realized based on the existing TC hardware such as a Trusted Platform Module (TPM).",
"title": ""
},
{
"docid": "neg:1840530_5",
"text": "This study presents novel coplanar waveguide (CPW) power splitters comprising a CPW T-junction with outputs attached to phase-adjusting circuits, i.e., the composite right/left-handed (CRLH) CPW and the conventional CPW, to achieve a constant phase difference with arbitrary value over a wide bandwidth. To demonstrate the proposed technique, a 180/spl deg/ CRLH CPW power splitter with a phase error of less than 10/spl deg/ and a magnitude difference of below 1.5 dB within 2.4 to 5.22 GHz is experimentally demonstrated. Compared with the conventional 180/spl deg/ delay-line power splitter, the proposed structure possesses not only superior phase and magnitude performances but also a 37% size reduction. The equivalent circuit of the CRLH CPW, which represents the left-handed (LH), right-handed (RH), and lossy characteristics, is constructed and the results obtained are in good agreement with the full-wave simulation and measurement. Applications involving the wideband coplanar waveguide-to-coplanar stripline (CPW-to-CPS) transition and the tapered loop antenna are presented to stress the practicality of the 180/spl deg/ CRLH CPW power splitter. The 3-dB insertion loss bandwidth is measured as 98% for the case of a back-to-back CPW-to-CPS transition. The tapered loop antenna fed by the proposed transition achieves a measured 10-dB return loss bandwidth of 114%, and shows similar radiation patterns and 6-9 dBi antenna gain in its operating band.",
"title": ""
},
{
"docid": "neg:1840530_6",
"text": "In this paper a novel application of multimodal emotion recognition algorithms in software engineering is described. Several application scenarios are proposed concerning program usability testing and software process improvement. Also a set of emotional states relevant in that application area is identified. The multimodal emotion recognition method that integrates video and depth channels, physiological signals and input devices usage patterns is proposed and some preliminary results on learning set creation are described.",
"title": ""
},
{
"docid": "neg:1840530_7",
"text": "The rapid progress in nanoelectronics showed an urgent need for microwave measurement of impedances extremely different from the 50Ω reference impedance of measurement instruments. In commonly used methods input impedance or admittance of a device under test (DUT) is derived from measured value of its reflection coefficient causing serious accuracy problems for very high and very low impedances due to insufficient sensitivity of the reflection coefficient to impedance of the DUT. This paper brings theoretical description and experimental verification of a method developed especially for measurement of extreme impedances. The method can significantly improve measurement sensitivity and reduce errors caused by the VNA. It is based on subtraction (or addition) of a reference reflection coefficient and the reflection coefficient of the DUT by a passive network, amplifying the resulting signal by an amplifier and measuring the amplified signal as a transmission coefficient by a common vector network analyzer (VNA). A suitable calibration technique is also presented.",
"title": ""
},
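The passage above (docid neg:1840530_7) notes that deriving a DUT impedance from its reflection coefficient loses accuracy for impedances far from the 50 Ω reference. The snippet below is only a numeric illustration of that sensitivity problem via the standard relation Gamma = (Z - Z0)/(Z + Z0); it is not the paper's measurement method, and the example impedance values are arbitrary.

```python
import numpy as np

Z0 = 50.0  # reference impedance of the instrument, in ohms

def gamma(Z, Z0=Z0):
    """Reflection coefficient of a load Z against reference impedance Z0."""
    return (Z - Z0) / (Z + Z0)

# A 10% change around a moderate impedance moves Gamma noticeably...
print(abs(gamma(55.0) - gamma(50.0)))        # about 4.8e-2

# ...but the same relative change around extreme impedances barely moves it,
# which is why extreme-impedance DUTs are hard to resolve from Gamma alone.
print(abs(gamma(110e3) - gamma(100e3)))      # about 9e-5
print(abs(gamma(0.55) - gamma(0.5)))         # about 2e-3
```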
{
"docid": "neg:1840530_8",
"text": "This article seeks to reconceptualize the relationship between flexibility and efficiency. Much organization theory argues that efficiency requires bureaucracy, that bureaucracy impedes flexibility, and that organizations therefore confront a tradeoff between efficiency and flexibility. Some researchers have challenged this line of reasoning, arguing that organizations can shift the efficiency/flexibility tradeoff to attain both superior efficiency and superior flexibility. Others have pointed out numerous obstacles to successfully shifting the tradeoff. Seeking to advance our understanding of these obstacles and how they might be overcome, we analyze an auto assembly plant that appears to be far above average industry performance in both efficiency and flexibility. NUMMI, a Toyota subsidiary located in Fremont, California, relied on a highly bureaucratic organization to achieve its high efficiency. Analyzing two recent major model changes, we find that NUMMI used four mechanisms to support its exceptional flexibility/efficiency combination. First, metaroutines (routines for changing other routines) facilitated the efficient performance of nonroutine tasks. Second, both workers and suppliers contributed to nonroutine tasks while they worked in routine production. Third, routine and nonroutine tasks were separated temporally, and workers switched sequentially between them. Finally, novel forms of organizational partitioning enabled differentiated subunits to work in parallel on routine and nonroutine tasks. NUMMI’s success with these four mechanisms depended on several features of the broader organizational context, most notably training, trust, and leadership. (Flexibility; Bureaucracy; Tradeoffs; Routines; Metaroutines; Ambidexterity; Switching; Partitioning; Trust) Introduction The postulate of a tradeoff between efficiency and flexibility is one of the more enduring ideas in organizational theory. Thompson (1967, p. 15) described it as a central “paradox of administration.” Managers must choose between organization designs suited to routine, repetitive tasks and those suited to nonroutine, innovative tasks. However, as competitive rivalry intensifies, a growing number of firms are trying to improve simultaneously in efficiencyand flexibility-related dimensions (de Meyer et al. 1989, Volberda 1996, Organization Science 1996). How can firms shift the terms of the efficiency-flexibility tradeoff? To explore how firms can create simultaneously superior efficiency and superior flexibility, we examine an exceptional auto assembly plant, NUMMI, a joint venture of Toyota and GM whose day-to-day operations were unD ow nl oa de d fr om in fo rm s. or g by [ 12 8. 32 .7 5. 11 8] o n 28 A pr il 20 14 , a t 1 0: 21 . Fo r pe rs on al u se o nl y, a ll ri gh ts r es er ve d. PAUL S. ADLER, BARBARA GOLDOFTAS AND DAVID I. LEVINE Flexibility Versus Efficiency? 44 ORGANIZATION SCIENCE/Vol. 10, No. 1, January–February 1999 der Toyota control. Like other Japanese auto transplants in the U.S., NUMMI far outpaced its Big Three counterparts simultaneously in efficiency and quality and in model change flexibility (Womack et al. 1990, Business Week 1994). In the next section we set the theoretical stage by reviewing prior research on the efficiency/flexibility tradeoff. Prior research suggests four mechanisms by which organizations can shift the tradeoff as well as some potentially serious impediments to each mechanism. We then describe our research methods and the NUMMI organization. 
The following sections first outline in summary form the results of this investigation, then provide the supporting evidence in our analysis of two major model changeovers at NUMMI and how they differed from traditional U.S. Big Three practice. A discussion section identifies some conditions underlying NUMMI’s success in shifting the tradeoff and in overcoming the potential impediments to the four trade-off shifting mechanisms. Flexibility Versus Efficiency? There are many kinds of flexibility and indeed a sizable literature devoted to competing typologies of the various kinds of flexibility (see overview by Sethi and Sethi 1990). However, from an organizational point of view, all forms of flexibility present a common challenge: efficiency requires a bureaucratic form of organization with high levels of standardization, formalization, specialization, hierarchy, and staffs; but these features of bureaucracy impede the fluid process of mutual adjustment required for flexibility; and organizations therefore confront a tradeoff between efficiency and flexibility (Knott 1996, Kurke 1988). Contingency theory argues that organizations will be more effective if they are designed to fit the nature of their primary task. Specifically, organizations should adopt a mechanistic form if their task is simple and stable and their goal is efficiency, and they should adopt an organic form if their task is complex and changing and their goal is therefore flexibility (Burns and Stalker 1961). Organizational theory presents a string of contrasts reflecting this mechanistic/organic polarity: machine bureaucracies vs. adhocracies (Mintzberg 1979); adaptive learning based on formal rules and hierarchical controls versus generative learning relying on shared values, teams, and lateral communication (McGill et al. 1992); generalists who pursue opportunistic r-strategies and rely on excess capacity to do well in open environments versus specialists that are more likely to survive in competitive environments by pursuing k-strategies that trade less flexibility for greater efficiency (Hannan and Freeman 1977, 1989). March (1991) and Levinthal and March (1993) make the parallel argument that organizations must choose between structures that facilitate exploration—the search for new knowledge—and those that facilitate exploitation—the use of existing knowledge. Social-psychological theories provide a rationale for this polarization. Merton (1958) shows how goal displacement in bureaucratic organizations generates rigidity. Argyris and Schon (1978) show how defensiveness makes single-loop learning—focused on pursuing given goals more effectively (read: efficiency)—an impediment to double-loop learning—focused on defining new task goals (read: flexibility). Thus, argues Weick (1969), adaptation precludes adaptability. This tradeoff view has been echoed in other disciplines. Standard economic theory postulates a tradeoff between flexibility and average costs (e.g., Stigler 1939, Hart 1942). Further extending this line of thought, Klein (1984) contrasts static and dynamic efficiency. Operations management researchers have long argued that productivity and flexibility or innovation trade off against each other in manufacturing plant performance (Abernathy 1978; see reviews by Gerwin 1993, Suárez et al. 1996, Corrêa 1994). Hayes and Wheelwright’s (1984) product/process matrix postulates a close correspondence between product variety and process efficiency (see Safizadeh et al. 1996). 
Strategy researchers such as Ghemawat and Costa (1993) argue that firms must choose between a strategy of dynamic effectiveness through flexibility and static efficiency through more rigid discipline. In support of a key corollary of the tradeoff postulate articulated in the organization theory literature, they argue that in general the optimal choice is at one end or the other of the spectrum, since a firm pursuing both goals simultaneously would have to mix organizational elements appropriate to each strategy and thus lose the benefit of the complementarities that typically obtain between the various elements of each type of organization. They would thus be “stuck in the middle” (Porter 1980). Beyond the Tradeoff? Empirical evidence for the tradeoff postulate is, however, remarkably weak. Take, for example, product mix flexibility. On the one hand, Hayes and Wheelwright (1984) and Skinner (1985) provide anecdotal evidence that more focused factories—ones producing a narrower range of products—are more efficient. In their survey of plants across a range of manufacturing industries, Safizadeh et al. (1996) confirmed that in general more product variety was associated with reliance on job-shop rather than continuous processes. On the other hand, Kekre and Srinivasan’s (1990) study of companies selling industrial products found that a broader product line was significantly associated with lower manufacturing costs. MacDuffie et al. (1996) found that greater product variety had no discernible effect on auto assembly plant productivity. Suárez et al. (1996) found that product mix flexibility had no discernible relationship to costs or quality in printed circuit board assembly. Brush and Karnani (1996) found only three out of 19 manufacturing industries showed statistically significant productivity returns to narrower product lines, while two industries showed significant returns to broader product lines. Research by Fleischman (1996) on employment flexibility revealed a similar pattern: within 2-digit SIC code industries that face relatively homogeneous levels of expected volatility of employment, the employment adjustment costs of the least flexible 4-digit industries were anywhere between 4 and 10 times greater than the adjustment costs found in the most flexible 4-digit industries. Some authors argue that the era of tradeoffs is behind us (Ferdows and de Meyer 1990). Hypercompetitive environments force firms to compete on several dimensions at once (Organization Science 1996), and flexible technologies enable firms to shift the tradeoff curve just as quickly as they could move to a different point on the existing tr",
"title": ""
},
{
"docid": "neg:1840530_9",
"text": "A novel ultra-wideband (UWB) bandpass filter (BPF) with improved upper stopband performance using a defected ground structure (DGS) is presented in this letter. The proposed BPF is composed of seven DGSs that are positioned under the input and output microstrip line and coupled double step impedance resonator (CDSIR). By using CDSIR and open loop defected ground structure (OLDGS), we can achieve UWB BPF characteristics, and by using the conventional CDGSs under the input and output microstrip line, we can improve the upper stopband performance. Simulated and measured results are found in good agreement with each other, showing a wide passband from 3.4 to 10.9 GHz, minimum insertion loss of 0.61 dB at 7.02 GHz, a group delay variation of less than 0.4 ns in the operating band, and a wide upper stopband with more than 30 dB attenuation up to 20 GHz. In addition, the proposed UWB BPF has a compact size (0.27¿g ~ 0.29¿g , ¿g : guided wavelength at the central frequency of 6.85 GHz).",
"title": ""
},
{
"docid": "neg:1840530_10",
"text": "Steganography has been proposed as a new alternative technique to enforce data security. Lately, novel and versatile audio steganographic methods have been proposed. A perfect audio Steganographic technique aim at embedding data in an imperceptible, robust and secure way and then extracting it by authorized people. Hence, up to date the main challenge in digital audio steganography is to obtain robust high capacity steganographic systems. Leaning towards designing a system that ensures high capacity or robustness and security of embedded data has led to great diversity in the existing steganographic techniques. In this paper, we present a current state of art literature in digital audio steganographic techniques. We explore their potentials and limitations to ensure secure communication. A comparison and an evaluation for the reviewed techniques is also presented in this paper.",
"title": ""
},
{
"docid": "neg:1840530_11",
"text": "Video super-resolution (SR) aims to generate a highresolution (HR) frame from multiple low-resolution (LR) frames in a local temporal window. The inter-frame temporal relation is as crucial as the intra-frame spatial relation for tackling this problem. However, how to utilize temporal information efficiently and effectively remains challenging since complex motion is difficult to model and can introduce adverse effects if not handled properly. We address this problem from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependency. Filters on various temporal scales are applied to the input LR sequence before their responses are adaptively aggregated. Second, we reduce the complexity of motion between neighboring frames using a spatial alignment network which is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner. Our proposed models with learned temporal dynamics are systematically evaluated on public video datasets and achieve state-of-the-art SR results compared with other recent video SR approaches. Both of the temporal adaptation and the spatial alignment modules are demonstrated to considerably improve SR quality over their plain counterparts.",
"title": ""
},
{
"docid": "neg:1840530_12",
"text": "This paper examines the causes of conflict in Burundi and discusses strategies for building peace. The analysis of the complex relationships between distribution and group dynamics reveals that these relationships are reciprocal, implying that distribution and group dynamics are endogenous. The nature of endogenously generated group dynamics determines the type of preferences (altruistic or exclusionist), which in turn determines the type of allocative institutions and policies that prevail in the political and economic system. While unequal distribution of resources may be socially inefficient, it nonetheless can be rational from the perspective of the ruling elite, especially because inequality perpetuates dominance. However, as the unequal distribution of resources generates conflict, maintaining a system based on inequality is difficult because it requires ever increasing investments in repression. It is therefore clear that if the new Burundian leadership is serious about building peace, it must engineer institutions that uproot the legacy of discrimination and promote equal opportunity for social mobility for all members of ethnic groups and regions.",
"title": ""
},
{
"docid": "neg:1840530_13",
"text": "3 chapter This chapter examines the effects of fiscal consolidation —tax hikes and government spending cuts—on economic activity. Based on a historical analysis of fiscal consolidation in advanced economies, and on simulations of the IMF's Global Integrated Monetary and Fiscal Model (GIMF), it finds that fiscal consolidation typically reduces output and raises unemployment in the short term. At the same time, interest rate cuts, a fall in the value of the currency, and a rise in net exports usually soften the contractionary impact. Consolidation is more painful when it relies primarily on tax hikes; this occurs largely because central banks typically provide less monetary stimulus during such episodes, particularly when they involve indirect tax hikes that raise inflation. Also, fiscal consolidation is more costly when the perceived risk of sovereign default is low. These findings suggest that budget deficit cuts are likely to be more painful if they occur simultaneously across many countries, and if monetary policy is not in a position to offset them. Over the long term, reducing government debt is likely to raise output, as real interest rates decline and the lighter burden of interest payments permits cuts to distortionary taxes. Budget deficits and government debt soared during the Great Recession. In 2009, the budget deficit averaged about 9 percent of GDP in advanced economies, up from only 1 percent of GDP in 2007. 1 By the end of 2010, government debt is expected to reach about 100 percent of GDP—its highest level in 50 years. Looking ahead, population aging could create even more serious problems for public finances. In response to these worrisome developments, virtually all advanced economies will face the challenge of fiscal consolidation. Indeed, many governments are already undertaking or planning The main authors of this chapter are Daniel Leigh (team leader), Advanced economies are defined as the 33 economies so designated based on the World Economic Outlook classification described in the Statistical Appendix. large spending cuts and tax hikes. An important and timely question is, therefore, whether fiscal retrenchment will hurt economic performance. Although there is widespread agreement that reducing debt has important long-term benefits, there is no consensus regarding the short-term effects of fiscal austerity. On the one hand, the conventional Keynesian view is that cutting spending or raising taxes reduces economic activity in the short term. On the other hand, a number of studies present evidence that cutting budget deficits can …",
"title": ""
},
{
"docid": "neg:1840530_14",
"text": "Applications in radar systems and communications systems require very often antennas with beam steering or multi beam capabilities. For the millimeter frequency range Rotman lenses can be useful as multiple beam forming networks for linear antennas providing the advantage of broadband performance. The design and development of Rotman lens at 220 GHz feeding an antenna array for beam steering applications is presented. The construction is completely realized in waveguide technology. Experimental results are compared with theoretical considerations and electromagnetic simulations.",
"title": ""
},
{
"docid": "neg:1840530_15",
"text": "This paper examines factors that influence prices of most common five cryptocurrencies such Bitcoin, Ethereum, Dash, Litecoin, and Monero over 20102018 using weekly data. The study employs ARDL technique and documents several findings. First, cryptomarket-related factors such as market beta, trading volume, and volatility appear to be significant determinant for all five cryptocurrencies both in shortand long-run. Second, attractiveness of cryptocurrencies also matters in terms of their price determination, but only in long-run. This indicates that formation (recognition) of the attractiveness of cryptocurrencies are subjected to time factor. In other words, it travels slowly within the market. Third, SP500 index seems to have weak positive long-run impact on Bitcoin, Ethereum, and Litcoin, while its sign turns to negative losing significance in short-run, except Bitcoin that generates an estimate of -0.20 at 10% significance level. Lastly, error-correction models for Bitcoin, Etherem, Dash, Litcoin, and Monero show that cointegrated series cannot drift too far apart, and converge to a longrun equilibrium at a speed of 23.68%, 12.76%, 10.20%, 22.91%, and 14.27% respectively.",
"title": ""
},
{
"docid": "neg:1840530_16",
"text": "Pedestrian detection in real world scenes is a challenging problem. In recent years a variety of approaches have been proposed, and impressive results have been reported on a variety of databases. This paper systematically evaluates (1) various local shape descriptors, namely Shape Context and Local Chamfer descriptor and (2) four different interest point detectors for the detection of pedestrians. Those results are compared to the standard global Chamfer matching approach. A main result of the paper is that Shape Context trained on real edge images rather than on clean pedestrian silhouettes combined with the Hessian-Laplace detector outperforms all other tested approaches.",
"title": ""
},
{
"docid": "neg:1840530_17",
"text": "Link prediction for knowledge graphs is the task of predicting missing relationships between entities. Previous work on link prediction has focused on shallow, fast models which can scale to large knowledge graphs. However, these models learn less expressive features than deep, multi-layer models – which potentially limits performance. In this work we introduce ConvE, a multi-layer convolutional network model for link prediction, and report state-of-the-art results for several established datasets. We also show that the model is highly parameter efficient, yielding the same performance as DistMult and R-GCN with 8x and 17x fewer parameters. Analysis of our model suggests that it is particularly effective at modelling nodes with high indegree – which are common in highlyconnected, complex knowledge graphs such as Freebase and YAGO3. In addition, it has been noted that the WN18 and FB15k datasets suffer from test set leakage, due to inverse relations from the training set being present in the test set – however, the extent of this issue has so far not been quantified. We find this problem to be severe: a simple rule-based model can achieve state-of-the-art results on both WN18 and FB15k. To ensure that models are evaluated on datasets where simply exploiting inverse relations cannot yield competitive results, we investigate and validate several commonly used datasets – deriving robust variants where necessary. We then perform experiments on these robust datasets for our own and several previously proposed models, and find that ConvE achieves state-of-the-art Mean Reciprocal Rank across all datasets.",
"title": ""
},
{
"docid": "neg:1840530_18",
"text": "Purpose – This survey aims to study and analyze current techniques and methods for context-aware web service systems, to discuss future trends and propose further steps on making web services systems context-aware. Design/methodology/approach – The paper analyzes and compares existing context-aware web service-based systems based on techniques they support, such as context information modeling, context sensing, distribution, security and privacy, and adaptation techniques. Existing systems are also examined in terms of application domains, system type, mobility support, multi-organization support and level of web services implementation. Findings – Supporting context-aware web service-based systems is increasing. It is hard to find a truly context-aware web service-based system that is interoperable and secure, and operates on multi-organizational environments. Various issues, such as distributed context management, context-aware service modeling and engineering, context reasoning and quality of context, security and privacy issues have not been well addressed. Research limitations/implications – The number of systems analyzed is limited. Furthermore, the survey is based on published papers. Therefore, up-to-date information and development might not be taken into account. Originality/value – Existing surveys do not focus on context-awareness techniques for web services. This paper helps to understand the state of the art in context-aware techniques for web services that can be employed in the future of services which is built around, amongst others, mobile devices, web services, and pervasive environments.",
"title": ""
},
{
"docid": "neg:1840530_19",
"text": "In this paper, we study beam selection for millimeter-wave (mm-wave) multiuser multiple input multiple output (MIMO) systems where a base station (BS) and users are equipped with antenna arrays. Exploiting a certain sparsity of mm-wave channels, a low-complexity beam selection method for beamforming by low-cost analog beamformers is derived. It is shown that beam selection can be carried out without explicit channel estimation using the notion of compressive sensing (CS). Due to various reasons (e.g., the background noise and interference), some users may choose the same BS beam, which results in high inter-user interference. To overcome this problem, we further consider BS beam selection by users. Through simulations, we show that the performance gap between the proposed approach and the optimal beamforming approach, which requires full channel state information (CSI), becomes narrower for a larger number of users at a moderate/low signal-to-noise ratio (SNR). Since the optimal beamforming approach is difficult to be used due to prohibitively high computational complexity for large antenna arrays with a large number of users, the proposed approach becomes attractive for BSs and users in mm-wave systems where large antenna arrays can be employed.",
"title": ""
}
] |
1840531 | Understanding the requirements for developing open source software systems | [
{
"docid": "pos:1840531_0",
"text": "According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine the development process of a major open source application, the Apache web server. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution interval for this OSS project. This analysis reveals a unique process, which performs well on important measures. We conclude that hybrid forms of development that borrow the most effective techniques from both the OSS and commercial worlds may lead to high performance software processes.",
"title": ""
}
] | [
{
"docid": "neg:1840531_0",
"text": "First-order methods play a central role in large-scale machine learning. Even though many variations exist, each suited to a particular problem, almost all such methods fundamentally rely on two types of algorithmic steps: gradient descent, which yields primal progress, and mirror descent, which yields dual progress. We observe that the performances of gradient and mirror descent are complementary, so that faster algorithms can be designed by linearly coupling the two. We show how to reconstruct Nesterov’s accelerated gradient methods using linear coupling, which gives a cleaner interpretation than Nesterov’s original proofs. We also discuss the power of linear coupling by extending it to many other settings that Nesterov’s methods cannot apply to. 1998 ACM Subject Classification G.1.6 Optimization, F.2 Analysis of Algorithms and Problem Complexity",
"title": ""
},
{
"docid": "neg:1840531_1",
"text": "The aim of this work is to design a SLAM algorithm for localization and mapping of aerial platform for ocean observation. The aim is to determine the direction of travel, given that the aerial platform flies over the water surface and in an environment with few static features and dynamic background. This approach is inspired by the bird techniques which use landmarks as navigation direction. In this case, the blimp is chosen as the platform, therefore the payload is the most important concern in the design so that the desired lift can be achieved. The results show the improved SLAM is were able to achieve the desired waypoint.",
"title": ""
},
{
"docid": "neg:1840531_2",
"text": "Combination therapies exploit the chances for better efficacy, decreased toxicity, and reduced development of drug resistance and owing to these advantages, have become a standard for the treatment of several diseases and continue to represent a promising approach in indications of unmet medical need. In this context, studying the effects of a combination of drugs in order to provide evidence of a significant superiority compared to the single agents is of particular interest. Research in this field has resulted in a large number of papers and revealed several issues. Here, we propose an overview of the current methodological landscape concerning the study of combination effects. First, we aim to provide the minimal set of mathematical and pharmacological concepts necessary to understand the most commonly used approaches, divided into effect-based approaches and dose-effect-based approaches, and introduced in light of their respective practical advantages and limitations. Then, we discuss six main common methodological issues that scientists have to face at each step of the development of new combination therapies. In particular, in the absence of a reference methodology suitable for all biomedical situations, the analysis of drug combinations should benefit from a collective, appropriate, and rigorous application of the concepts and methods reviewed here.",
"title": ""
},
{
"docid": "neg:1840531_3",
"text": "Multivariate pattern recognition methods are increasingly being used to identify multiregional brain activity patterns that collectively discriminate one cognitive condition or experimental group from another, using fMRI data. The performance of these methods is often limited because the number of regions considered in the analysis of fMRI data is large compared to the number of observations (trials or participants). Existing methods that aim to tackle this dimensionality problem are less than optimal because they either over-fit the data or are computationally intractable. Here, we describe a novel method based on logistic regression using a combination of L1 and L2 norm regularization that more accurately estimates discriminative brain regions across multiple conditions or groups. The L1 norm, computed using a fast estimation procedure, ensures a fast, sparse and generalizable solution; the L2 norm ensures that correlated brain regions are included in the resulting solution, a critical aspect of fMRI data analysis often overlooked by existing methods. We first evaluate the performance of our method on simulated data and then examine its effectiveness in discriminating between well-matched music and speech stimuli. We also compared our procedures with other methods which use either L1-norm regularization alone or support vector machine-based feature elimination. On simulated data, our methods performed significantly better than existing methods across a wide range of contrast-to-noise ratios and feature prevalence rates. On experimental fMRI data, our methods were more effective in selectively isolating a distributed fronto-temporal network that distinguished between brain regions known to be involved in speech and music processing. These findings suggest that our method is not only computationally efficient, but it also achieves the twin objectives of identifying relevant discriminative brain regions and accurately classifying fMRI data.",
"title": ""
},
{
"docid": "neg:1840531_4",
"text": "We present a multi-instance object segmentation algorithm to tackle occlusions. As an object is split into two parts by an occluder, it is nearly impossible to group the two separate regions into an instance by purely bottomup schemes. To address this problem, we propose to incorporate top-down category specific reasoning and shape prediction through exemplars into an intuitive energy minimization framework. We perform extensive evaluations of our method on the challenging PASCAL VOC 2012 segmentation set. The proposed algorithm achieves favorable results on the joint detection and segmentation task against the state-of-the-art method both quantitatively and qualitatively.",
"title": ""
},
{
"docid": "neg:1840531_5",
"text": "We have conducted a comprehensive search for conserved elements in vertebrate genomes, using genome-wide multiple alignments of five vertebrate species (human, mouse, rat, chicken, and Fugu rubripes). Parallel searches have been performed with multiple alignments of four insect species (three species of Drosophila and Anopheles gambiae), two species of Caenorhabditis, and seven species of Saccharomyces. Conserved elements were identified with a computer program called phastCons, which is based on a two-state phylogenetic hidden Markov model (phylo-HMM). PhastCons works by fitting a phylo-HMM to the data by maximum likelihood, subject to constraints designed to calibrate the model across species groups, and then predicting conserved elements based on this model. The predicted elements cover roughly 3%-8% of the human genome (depending on the details of the calibration procedure) and substantially higher fractions of the more compact Drosophila melanogaster (37%-53%), Caenorhabditis elegans (18%-37%), and Saccharaomyces cerevisiae (47%-68%) genomes. From yeasts to vertebrates, in order of increasing genome size and general biological complexity, increasing fractions of conserved bases are found to lie outside of the exons of known protein-coding genes. In all groups, the most highly conserved elements (HCEs), by log-odds score, are hundreds or thousands of bases long. These elements share certain properties with ultraconserved elements, but they tend to be longer and less perfectly conserved, and they overlap genes of somewhat different functional categories. In vertebrates, HCEs are associated with the 3' UTRs of regulatory genes, stable gene deserts, and megabase-sized regions rich in moderately conserved noncoding sequences. Noncoding HCEs also show strong statistical evidence of an enrichment for RNA secondary structure.",
"title": ""
},
{
"docid": "neg:1840531_6",
"text": "The purposes of this study are to construct an instrument to evaluate service quality of mobile value-added services and have a further discussion of the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention. Structural equation modeling and multiple regression analysis were used to analyze the data collected from college and graduate students of fifteen major universities in Taiwan. The main findings are as follows: (1) service quality positively influences both perceived value and customer satisfaction; (2) perceived value positively influences on both customer satisfaction and post-purchase intention; (3) customer satisfaction positively influences post-purchase intention; (4) service quality has an indirect positive influence on post-purchase intention through customer satisfaction or perceived value; (5) among the dimensions of service quality, “customer service and system reliability” is most influential on perceived value and customer satisfaction, and the influence of “content quality” ranks second; (6) the proposed model is proven with the effectiveness in explaining the relationships among service quality, perceived value, customer satisfaction, and post-purchase intention in mobile added-value services.",
"title": ""
},
{
"docid": "neg:1840531_7",
"text": "The design of legged robots is often inspired by animals evolved to excel at different tasks. However, while mimicking morphological features seen in nature can be very powerful, robots may need to perform motor tasks that their living counterparts do not. In the absence of designs that can be mimicked, an alternative is to resort to mathematical models that allow the relationship between a robot's form and function to be explored. In this paper, we propose such a model to co-design the motion and leg configurations of a robot such that a measure of performance is optimized. The framework begins by planning trajectories for a simplified model consisting of the center of mass and feet. The framework then optimizes the length of each leg link while solving for associated full-body motions. Our model was successfully used to find optimized designs for legged robots performing tasks that include jumping, walking, and climbing up a step. Although our results are preliminary and our analysis makes a number of simplifying assumptions, our findings indicate that the cost function, the sum of squared joint torques over the duration of a task, varies substantially as the design parameters change.",
"title": ""
},
{
"docid": "neg:1840531_8",
"text": "The problem of domain generalization is to learn from multiple training domains, and extract a domain-agnostic model that can then be applied to an unseen domain. Domain generalization (DG) has a clear motivation in contexts where there are target domains with distinct characteristics, yet sparse data for training. For example recognition in sketch images, which are distinctly more abstract and rarer than photos. Nevertheless, DG methods have primarily been evaluated on photo-only benchmarks focusing on alleviating the dataset bias where both problems of domain distinctiveness and data sparsity can be minimal. We argue that these benchmarks are overly straightforward, and show that simple deep learning baselines perform surprisingly well on them. In this paper, we make two main contributions: Firstly, we build upon the favorable domain shift-robust properties of deep learning methods, and develop a low-rank parameterized CNN model for end-to-end DG learning. Secondly, we develop a DG benchmark dataset covering photo, sketch, cartoon and painting domains. This is both more practically relevant, and harder (bigger domain shift) than existing benchmarks. The results show that our method outperforms existing DG alternatives, and our dataset provides a more significant DG challenge to drive future research.",
"title": ""
},
{
"docid": "neg:1840531_9",
"text": "Ontology provides a shared and reusable piece of knowledge about a specific domain, and has been applied in many fields, such as semantic Web, e-commerce and information retrieval, etc. However, building ontology by hand is a very hard and error-prone task. Learning ontology from existing resources is a good solution. Because relational database is widely used for storing data and OWL is the latest standard recommended by W3C, this paper proposes an approach of learning OWL ontology from data in relational database. Compared with existing methods, the approach can acquire ontology from relational database automatically by using a group of learning rules instead of using a middle model. In addition, it can obtain OWL ontology, including the classes, properties, properties characteristics, cardinality and instances, while none of existing methods can acquire all of them. The proposed learning rules have been proven to be correct by practice.",
"title": ""
},
{
"docid": "neg:1840531_10",
"text": "Recurrent Neural Networks (RNNs), and specifically a variant with Long ShortTerm Memory (LSTM), are enjoying renewed interest as a result of successful applications in a wide range of machine learning problems that involve sequential data. However, while LSTMs provide exceptional results in practice, the source of their performance and their limitations remain rather poorly understood. Using character-level language models as an interpretable testbed, we aim to bridge this gap by providing an analysis of their representations, predictions and error types. In particular, our experiments reveal the existence of interpretable cells that keep track of long-range dependencies such as line lengths, quotes and brackets. Moreover, our comparative analysis with finite horizon n-gram models traces the source of the LSTM improvements to long-range structural dependencies. Finally, we provide analysis of the remaining errors and suggests areas for further study.",
"title": ""
},
{
"docid": "neg:1840531_11",
"text": "Edmodo is simply a controlled online networking application that can be used by teachers and students to communicate and remain connected. This paper explores the experiences from a group of students who were using Edmodo platform in their course work. It attempts to use the SAMR (Substitution, Augmentation, Modification and Redefinition) framework of technology integration in education to access and evaluate technology use in the classroom. The respondents were a group of 62 university students from a Kenyan University whose lecturer had created an Edmodo account and introduced the students to participate in their course work during the September to December 2015 semester. More than 82% of the students found that they had a personal stake in the quality of work presented through the platforms and that they were able to take on different subtopics and collaborate to create one final product. This underscores the importance of Edmodo as an environment with skills already in the hands of the students that we can use to integrate technology in the classroom.",
"title": ""
},
{
"docid": "neg:1840531_12",
"text": "There is a tension between user and author control of narratives in multimedia systems and virtual environments. Reducing the interactivity gives the author more control over when and how users experience key events in a narrative, but may lead to less immersion and engagement. Allowing the user to freely explore the virtual space introduces the risk that important narrative events will never be experienced. One approach to striking a balance between user freedom and author control is adaptation of narrative event presentation (i.e. changing the time, location, or method of presentation of a particular event in order to better communicate with the user). In this paper, we describe the architecture of a system capable of dynamically supporting narrative event adaptation. We also report results from two studies comparing adapted narrative presentation with two other forms of unadapted presentation - events with author selected views (movie), and events with user selected views (traditional VE). An analysis of user performance and feedback offers support for the hypothesis that adaptation can improve comprehension of narrative events in virtual environments while maintaining a sense of user control.",
"title": ""
},
{
"docid": "neg:1840531_13",
"text": "Recent developments in web technologies including evolution of web standards, improvements in browser performance, and the emergence of free and open-source software (FOSS) libraries are driving a general shift from server-side to client-side web applications where a greater share of the computational load is transferred to the browser. Modern client-side approaches allow for improved user interfaces that rival traditional desktop software, as well as the ability to perform simulations and visualizations within the browser. We demonstrate the use of client-side technologies to create an interactive web application for a simulation model of biochemical oxygen demand and dissolved oxygen in rivers called the Webbased Interactive River Model (WIRM). We discuss the benefits, limitations and potential uses of client-side web applications, and provide suggestions for future research using new and upcoming web technologies such as offline access and local data storage to create more advanced client-side web applications for environmental simulation modeling. 2014 Elsevier Ltd. All rights reserved. Software availability Product Title: Web-based Interactive River Model (WIRM) Developer: Jeffrey D. Walker Contact Address: Dept. of Civil and Environmental Engineering, Tufts University, 200 College Ave, Medford, MA 02155 Contact E-mail: [email protected] Available Since: 2013 Programming Language: JavaScript, Python Availability: http://wirm.walkerjeff.com/ Cost: Free",
"title": ""
},
{
"docid": "neg:1840531_14",
"text": "Platt’s probabilistic outputs for Support Vector Machines (Platt, J. in Smola, A., et al. (eds.) Advances in large margin classifiers. Cambridge, 2000) has been popular for applications that require posterior class probabilities. In this note, we propose an improved algorithm that theoretically converges and avoids numerical difficulties. A simple and ready-to-use pseudo code is included.",
"title": ""
},
{
"docid": "neg:1840531_15",
"text": "This article presents the 1:4 wideband balun based on transmission lines that was awarded the first prize in the Wideband Baluns Student Design Competition. The competition was held during the 2014 IEEE Microwave Theory and Techniques Society (MTT-S) International Microwave Symposium (IMS2014). It was initiated in 2011 and is sponsored by the MTT-17 Technical Coordinating Committee. The winner must implement and measure a wideband balun of his or her own design and achieve the highest possible operational frequency from at least 1 MHz (or below) while meeting the following conditions: ? female subminiature version A (SMA) connectors are used to terminate all ports ? a minimum impedance transformation ratio of two ? a maximum voltage standing wave ratio (VSWR) of 2:1 at all ports ? an insertion loss of less than 1 dB ? a common-mode rejection ratio (CMRR) of more than 25 dB ? imbalance of less than 1 dB and 2.5?.",
"title": ""
},
{
"docid": "neg:1840531_16",
"text": "With respect to the \" influence on the development and practice of science and engineering in the 20th century \" , Krylov space methods are considered as one of the ten most important classes of numerical methods [1]. Large sparse linear systems of equations or large sparse matrix eigenvalue problems appear in most applications of scientific computing. Sparsity means that most elements of the matrix involved are zero. In particular, discretization of PDEs with the finite element method (FEM) or with the finite difference method (FDM) leads to such problems. In case the original problem is nonlinear, linearization by Newton's method or a Newton-type method leads again to a linear problem. We will treat here systems of equations only, but many of the numerical methods for large eigenvalue problems are based on similar ideas as the related solvers for equations. Sparse linear systems of equations can be solved by either so-called sparse direct solvers, which are clever variations of Gauss elimination, or by iterative methods. In the last thirty years, sparse direct solvers have been tuned to perfection: on the one hand by finding strategies for permuting equations and unknowns to guarantee a stable LU decomposition and small fill-in in the triangular factors, and on the other hand by organizing the computation so that optimal use is made of the hardware, which nowadays often consists of parallel computers whose architecture favors block operations with data that are locally stored or cached. The iterative methods that are today applied for solving large-scale linear systems are mostly preconditioned Krylov (sub)space solvers. Classical methods that do not belong to this class, like the successive overrelaxation (SOR) method, are no longer competitive. However, some of the classical matrix splittings, e.g. the one of SSOR (the symmetric version of SOR), are still used for preconditioning. Multigrid is in theory a very effective iterative method, but normally it is now applied as an inner iteration with a Krylov space solver as outer iteration; then, it can also be considered as a preconditioner. In the past, Krylov space solvers were referred to also by other names such as semi-iterative methods and polynomial acceleration methods. Some",
"title": ""
},
{
"docid": "neg:1840531_17",
"text": "Faces in natural images are often occluded by a variety of objects. We propose a fully automated, probabilistic and occlusion-aware 3D morphable face model adaptation framework following an analysis-by-synthesis setup. The key idea is to segment the image into regions explained by separate models. Our framework includes a 3D morphable face model, a prototype-based beard model and a simple model for occlusions and background regions. The segmentation and all the model parameters have to be inferred from the single target image. Face model adaptation and segmentation are solved jointly using an expectation–maximization-like procedure. During the E-step, we update the segmentation and in the M-step the face model parameters are updated. For face model adaptation we apply a stochastic sampling strategy based on the Metropolis–Hastings algorithm. For segmentation, we apply loopy belief propagation for inference in a Markov random field. Illumination estimation is critical for occlusion handling. Our combined segmentation and model adaptation needs a proper initialization of the illumination parameters. We propose a RANSAC-based robust illumination estimation technique. By applying this method to a large face image database we obtain a first empirical distribution of real-world illumination conditions. The obtained empirical distribution is made publicly available and can be used as prior in probabilistic frameworks, for regularization or to synthesize data for deep learning methods.",
"title": ""
},
{
"docid": "neg:1840531_18",
"text": "This thesis presents an approach to robot arm control exploiting natural dynamics. The approach consists of using a compliant arm whose joints are controlled with simple non-linear oscillators. The arm has special actuators which makes it robust to collisions and gives it a smooth compliant, motion. The oscillators produce rhythmic commands of the joints of the arm, and feedback of the joint motions is used to modify the oscillator behavior. The oscillators enable the resonant properties of the arm to be exploited to perform a variety of rhythmic and discrete tasks. These tasks include tuning into the resonant frequencies of the arm itself, juggling, turning cranks, playing with a Slinky toy, sawing wood, throwing balls, hammering nails and drumming. For most of these tasks, the controllers at each joint are completely independent, being coupled by mechanical coupling through the physical arm of the robot. The thesis shows that this mechanical coupling allows the oscillators to automatically adjust their commands to be appropriate for the arm dynamics and the task. This coordination is robust to large changes in the oscillator parameters, and large changes in the dynamic properties of the arm. As well as providing a wealth of experimental data to support this approach, the thesis also provides a range of analysis tools, both approximate and exact. These can be used to understand and predict the behavior of current implementations, and design new ones. These analysis techniques improve the value of oscillator solutions. The results in the thesis suggest that the general approach of exploiting natural dynamics is a powerful method for obtaining coordinated dynamic behavior of robot arms. Thesis Supervisor: Rodney A. Brooks Title: Professor of Electrical Engineering and Computer Science, MIT 5.4. CASE (C): MODIFYING THE NATURAL DYNAMICS 95",
"title": ""
},
{
"docid": "neg:1840531_19",
"text": "PURPOSE\nThis article provides a critical overview of problem-based learning (PBL), its effectiveness for knowledge acquisition and clinical performance, and the underlying educational theory. The focus of the paper is on (1) the credibility of claims (both empirical and theoretical) about the ties between PBL and educational outcomes and (2) the magnitude of the effects.\n\n\nMETHOD\nThe author reviewed the medical education literature, starting with three reviews published in 1993 and moving on to research published from 1992 through 1998 in the primary sources for research in medical education. For each study the author wrote a summary, which included study design, outcome measures, effect sizes, and any other information relevant to the research conclusion.\n\n\nRESULTS AND CONCLUSION\nThe review of the literature revealed no convincing evidence that PBL improves knowledge base and clinical performance, at least not of the magnitude that would be expected given the resources required for a PBL curriculum. The results were considered in light of the educational theory that underlies PBL and its basic research. The author concludes that the ties between educational theory and research (both basic and applied) are loose at best.",
"title": ""
}
] |
1840532 | Control-flow integrity principles, implementations, and applications | [
{
"docid": "pos:1840532_0",
"text": "We introduceprogram shepherding, a method for monitoring control flow transfers during program execution to enforce a security policy. Shepherding ensures that malicious code masquerading as data is never executed, thwarting a large class of security attacks. Shepherding can also enforce entry points as the only way to execute shared library code. Furthermore, shepherding guarantees that sandboxing checks around any type of program operation will never be bypassed. We have implemented these capabilities efficiently in a runtime system with minimal or no performance penalties. This system operates on unmodified native binaries, requires no special hardware or operating system support, and runs on existing IA-32 machines.",
"title": ""
}
] | [
{
"docid": "neg:1840532_0",
"text": "Generative Adversarial Network (GAN) is a prominent generative model that are widely used in various applications. Recent studies have indicated that it is possible to obtain fake face images with a high visual quality based on this novel model. If those fake faces are abused in image tampering, it would cause some potential moral, ethical and legal problems. In this paper, therefore, we first propose a Convolutional Neural Network (CNN) based method to identify fake face images generated by the current best method [20], and provide experimental evidences to show that the proposed method can achieve satisfactory results with an average accuracy over 99.4%. In addition, we provide comparative results evaluated on some variants of the proposed CNN architecture, including the high pass filter, the number of the layer groups and the activation function, to further verify the rationality of our method.",
"title": ""
},
{
"docid": "neg:1840532_1",
"text": "As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.",
"title": ""
},
{
"docid": "neg:1840532_2",
"text": "The world has revolutionized and phased into a new era, an era which upholds the true essence of technology and digitalization. As the market has evolved at a staggering scale, it is must to exploit and inherit the advantages and opportunities, it provides. With the advent of web 2.0, considering the scalability and unbounded reach that it provides, it is detrimental for an organization to not to adopt the new techniques in the competitive stakes that this emerging virtual world has set along with its advantages. The transformed and highly intelligent data mining approaches now allow organizations to collect, categorize, and analyze users’ reviews and comments from micro-blogging sites regarding their services and products. This type of analysis makes those organizations capable to assess, what the consumers want, what they disapprove of, and what measures can be taken to sustain and improve the performance of products and services. This study focuses on critical analysis of the literature from year 2012 to 2017 on sentiment analysis by using SVM (support vector machine). SVM is one of the widely used supervised machine learning techniques for text classification. This systematic review will serve the scholars and researchers to analyze the latest work of sentiment analysis with SVM as well as provide them a baseline for future trends and comparisons. Keywords—Sentiment analysis; polarity detection; machine learning; support vector machine (SVM); support vector machine; SLR; systematic literature review",
"title": ""
},
{
"docid": "neg:1840532_3",
"text": "Concerted research effort since the nineteen fifties has lead to effective methods for retrieval of relevant documents from homogeneous collections of text, such as newspaper archives, scientific abstracts and CD-ROM encyclopaedias. However, the triumph of the Web in the nineteen nineties forced a significant paradigm shift in the Information Retrieval field because of the need to address the issues of enormous scale, fluid collection definition, great heterogeneity, unfettered interlinking, democratic publishing, the presence of adversaries and most of all the diversity of purposes for which Web search may be used. Now, the IR field is confronted with a challenge of similarly daunting dimensions – how to bring highly effective search to the complex information spaces within enterprises. Overcoming the challenge would bring massive economic benefit, but victory is far from assured. The present work characterises enterprise search, hints at its economic magnitude, states some of the unsolved research questions in the domain of enterprise search need, proposes an enterprise search test collection and presents results for a small but interesting subproblem.",
"title": ""
},
{
"docid": "neg:1840532_4",
"text": "Blockchain is a distributed database which is cryptographically protected against malicious modifications. While promising for a wide range of applications, current blockchain platforms rely on digital signatures, which are vulnerable to attacks by means of quantum computers. The same, albeit to a lesser extent, applies to cryptographic hash functions that are used in preparing new blocks, so parties with access to quantum computation would have unfair advantage in procuring mining rewards. Here we propose a possible solution to the quantum-era blockchain challenge and report an experimental realization of a quantum-safe blockchain platform that utilizes quantum key distribution across an urban fiber network for information-theoretically secure authentication. These results address important questions about realizability and scalability of quantum-safe blockchains for commercial and governmental applications.",
"title": ""
},
{
"docid": "neg:1840532_5",
"text": "We present a sequence-to-action parsing approach for the natural language to SQL task that incrementally fills the slots of a SQL query with feasible actions from a pre-defined inventory. To account for the fact that typically there are multiple correct SQL queries with the same or very similar semantics, we draw inspiration from syntactic parsing techniques and propose to train our sequence-to-action models with non-deterministic oracles. We evaluate our models on the WikiSQL dataset and achieve an execution accuracy of 83.7% on the test set, a 2.1% absolute improvement over the models trained with traditional static oracles assuming a single correct target SQL query. When further combined with the executionguided decoding strategy, our model sets a new state-of-the-art performance at an execution accuracy of 87.1%.",
"title": ""
},
{
"docid": "neg:1840532_6",
"text": "One trend in the implementation of modern web systems is the use of activity data in the form of log or event messages that capture user and server activity. This data is at the heart of many internet systems in the domains of advertising, relevance, search, recommendation systems, and security, as well as continuing to fulfill its traditional role in analytics and reporting. Many of these uses place real-time demands on data feeds. Activity data is extremely high volume and real-time pipelines present new design challenges. This paper discusses the design and engineering problems we encountered in moving LinkedIn’s data pipeline from a batch-oriented file aggregation mechanism to a real-time publish-subscribe system called Kafka. This pipeline currently runs in production at LinkedIn and handles more than 10 billion message writes each day with a sustained peak of over 172,000 messages per second. Kafka supports dozens of subscribing systems and delivers more than 55 billion messages to these consumer processing each day. We discuss the origins of this systems, missteps on the path to real-time, and the design and engineering problems we encountered along the way.",
"title": ""
},
{
"docid": "neg:1840532_7",
"text": "In this paper, we introduce the Reinforced Mnemonic Reader for machine comprehension (MC) task, which aims to answer a query about a given context document. We propose several novel mechanisms that address critical problems in MC that are not adequately solved by previous works, such as enhancing the capacity of encoder, modeling long-term dependencies of contexts, refining the predicted answer span, and directly optimizing the evaluation metric. Extensive experiments on TriviaQA and Stanford Question Answering Dataset (SQuAD) show that our model achieves state-of-theart results.",
"title": ""
},
{
"docid": "neg:1840532_8",
"text": "This paper presents a new physically-based method for predicting natural hairstyles in the presence of gravity and collisions. The method is based upon a mechanically accurate model for static elastic rods (Kirchhoff model), which accounts for the natural curliness of hair, as well as for hair ellipticity. The equilibrium shape is computed in a stable and easy way by energy minimization. This yields various typical hair configurations that can be observed in the real world, such as ringlets. As our results show, the method can generate different hair types with a very few input parameters, and perform virtual hairdressing operations such as wetting, cutting and drying hair.",
"title": ""
},
{
"docid": "neg:1840532_9",
"text": "This paper explores the current affordances and limitations of video game genre from a library and information science perspective with an emphasis on classification theory. We identify and discuss various purposes of genre relating to video games, including identity, collocation and retrieval, commercial marketing, and educational instruction. Through the use of examples, we discuss the ways in which these purposes are supported by genre classification and conceptualization, and the implications for video games. Suggestions for improved conceptualizations such as family resemblances, prototype theory, faceted classification, and appeal factors for video game genres are considered, with discussions of strengths and weaknesses. This analysis helps inform potential future practical applications for describing video games at cultural heritage institutions such as libraries, museums, and archives, as well as furthering the understanding of video game genre and genre classification for game studies at large. 3 Running head: WHY VIDEO GAME GENRES FAIL",
"title": ""
},
{
"docid": "neg:1840532_10",
"text": "Entropy, as it relates to dynamical systems, is the rate of information production. Methods for estimation of the entropy of a system represented by a time series are not, however, well suited to analysis of the short and noisy data sets encountered in cardiovascular and other biological studies. Pincus introduced approximate entropy (ApEn), a set of measures of system complexity closely related to entropy, which is easily applied to clinical cardiovascular and other time series. ApEn statistics, however, lead to inconsistent results. We have developed a new and related complexity measure, sample entropy (SampEn), and have compared ApEn and SampEn by using them to analyze sets of random numbers with known probabilistic character. We have also evaluated cross-ApEn and cross-SampEn, which use cardiovascular data sets to measure the similarity of two distinct time series. SampEn agreed with theory much more closely than ApEn over a broad range of conditions. The improved accuracy of SampEn statistics should make them useful in the study of experimental clinical cardiovascular and other biological time series.",
"title": ""
},
{
"docid": "neg:1840532_11",
"text": "BACKGROUND\nThe discovery of abnormal synchronization of neuronal activity in the basal ganglia in Parkinson's disease (PD) has prompted the development of novel neuromodulation paradigms. Coordinated reset neuromodulation intends to specifically counteract excessive synchronization and to induce cumulative unlearning of pathological synaptic connectivity and neuronal synchrony.\n\n\nMETHODS\nIn this prospective case series, six PD patients were evaluated before and after coordinated reset neuromodulation according to a standardized protocol that included both electrophysiological recordings and clinical assessments.\n\n\nRESULTS\nCoordinated reset neuromodulation of the subthalamic nucleus (STN) applied to six PD patients in an externalized setting during three stimulation days induced a significant and cumulative reduction of beta band activity that correlated with a significant improvement of motor function.\n\n\nCONCLUSIONS\nThese results highlight the potential effects of coordinated reset neuromodulation of the STN in PD patients and encourage further development of this approach as an alternative to conventional high-frequency deep brain stimulation in PD.",
"title": ""
},
{
"docid": "neg:1840532_12",
"text": "In this paper we describe a method that can be used for Minimum Bayes Risk (MBR) decoding for speech recognition. Our algorithm can take as input either a single lattice, or multiple lattices for system combination. It has similar functionality to the widely used Consensus method, but has a clearer theoretical basis and appears to give better results both for MBR decoding and system combination. Many different approximations have been described to solve the MBR decoding problem, which is very difficult from an optimization point of view. Our proposed method solves the problem through a novel forward–backward recursion on the lattice, not requiring time markings. We prove that our algorithm iteratively improves a bound on the Bayes risk. © 2011 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840532_13",
"text": "This paper describes the organization and results of the automatic keyphrase extraction task held at the workshop on Semantic Evaluation 2010 (SemEval-2010). The keyphrase extraction task was specifically geared towards scientific articles. Systems were automatically evaluated by matching their extracted keyphrases against those assigned by the authors as well as the readers to the same documents. We outline the task, present the overall ranking of the submitted systems, and discuss the improvements to the state-of-the-art in keyphrase extraction.",
"title": ""
},
{
"docid": "neg:1840532_14",
"text": "The aim of this paper is to discuss about various feature selection algorithms applied on different datasets to select the relevant features to classify data into binary and multi class in order to improve the accuracy of the classifier. Recent researches in medical diagnose uses the different kind of classification algorithms to diagnose the disease. For predicting the disease, the classification algorithm produces the result as binary class. When there is a multiclass dataset, the classification algorithm reduces the dataset into a binary class for simplification purpose by using any one of the data reduction methods and the algorithm is applied for prediction. When data reduction on original dataset is carried out, the quality of the data may degrade and the accuracy of an algorithm will get affected. To maintain the effectiveness of the data, the multiclass data must be treated with its original form without maximum reduction, and the algorithm can be applied on the dataset for producing maximum accuracy. Dataset with maximum number of attributes like thousands must incorporate the best feature selection algorithm for selecting the relevant features to reduce the space and time complexity. The performance of Classification algorithm is estimated by how accurately it predicts the individual class on particular dataset. The accuracy constrain mainly depends on the selection of appropriate features from the original dataset. The feature selection algorithms play an important role in classification for better performance. The feature selection is one of",
"title": ""
},
{
"docid": "neg:1840532_15",
"text": "Dimensionality reduction has attracted increasing attention, because high-dimensional data have arisen naturally in numerous domains in recent years. As one popular dimensionality reduction method, nonnegative matrix factorization (NMF), whose goal is to learn parts-based representations, has been widely studied and applied to various applications. In contrast to the previous approaches, this paper proposes a novel semisupervised NMF learning framework, called robust structured NMF, that learns a robust discriminative representation by leveraging the block-diagonal structure and the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (especially when <inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function. Specifically, the problems of noise and outliers are well addressed by the <inline-formula> <tex-math notation=\"LaTeX\">$\\ell _{2,p}$ </tex-math></inline-formula>-norm (<inline-formula> <tex-math notation=\"LaTeX\">$0<p\\leq 1$ </tex-math></inline-formula>) loss function, while the discriminative representations of both the labeled and unlabeled data are simultaneously learned by explicitly exploring the block-diagonal structure. The proposed problem is formulated as an optimization problem with a well-defined objective function solved by the proposed iterative algorithm. The convergence of the proposed optimization algorithm is analyzed both theoretically and empirically. In addition, we also discuss the relationships between the proposed method and some previous methods. Extensive experiments on both the synthetic and real-world data sets are conducted, and the experimental results demonstrate the effectiveness of the proposed method in comparison to the state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840532_16",
"text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n \\u2013 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.",
"title": ""
},
{
"docid": "neg:1840532_17",
"text": "The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.",
"title": ""
},
{
"docid": "neg:1840532_18",
"text": "The term ‘‘urban stream syndrome’’ describes the consistently observed ecological degradation of streams draining urban land. This paper reviews recent literature to describe symptoms of the syndrome, explores mechanisms driving the syndrome, and identifies appropriate goals and methods for ecological restoration of urban streams. Symptoms of the urban stream syndrome include a flashier hydrograph, elevated concentrations of nutrients and contaminants, altered channel morphology, and reduced biotic richness, with increased dominance of tolerant species. More research is needed before generalizations can be made about urban effects on stream ecosystem processes, but reduced nutrient uptake has been consistently reported. The mechanisms driving the syndrome are complex and interactive, but most impacts can be ascribed to a few major large-scale sources, primarily urban stormwater runoff delivered to streams by hydraulically efficient drainage systems. Other stressors, such as combined or sanitary sewer overflows, wastewater treatment plant effluents, and legacy pollutants (long-lived pollutants from earlier land uses) can obscure the effects of stormwater runoff. Most research on urban impacts to streams has concentrated on correlations between instream ecological metrics and total catchment imperviousness. Recent research shows that some of the variance in such relationships can be explained by the distance between the stream reach and urban land, or by the hydraulic efficiency of stormwater drainage. The mechanisms behind such patterns require experimentation at the catchment scale to identify the best management approaches to conservation and restoration of streams in urban catchments. Remediation of stormwater impacts is most likely to be achieved through widespread application of innovative approaches to drainage design. Because humans dominate urban ecosystems, research on urban stream ecology will require a broadening of stream ecological research to integrate with social, behavioral, and economic research.",
"title": ""
},
{
"docid": "neg:1840532_19",
"text": "Most literature on time series classification assumes that the beginning and ending points of the pattern of interest can be correctly identified, both during the training phase and later deployment. In this work, we argue that this assumption is unjustified, and this has in many cases led to unwarranted optimism about the performance of the proposed algorithms. As we shall show, the task of correctly extracting individual gait cycles, heartbeats, gestures, behaviors, etc., is generally much more difficult than the task of actually classifying those patterns. We propose to mitigate this problem by introducing an alignment-free time series classification framework. The framework requires only very weakly annotated data, such as “in this ten minutes of data, we see mostly normal heartbeats...,” and by generalizing the classic machine learning idea of data editing to streaming/continuous data, allows us to build robust, fast and accurate classifiers. We demonstrate on several diverse real-world problems that beyond removing unwarranted assumptions and requiring essentially no human intervention, our framework is both significantly faster and significantly more accurate than current state-of-the-art approaches.",
"title": ""
}
] |
1840533 | A comprehensive study of the predictive accuracy of dynamic change-impact analysis | [
{
"docid": "pos:1840533_0",
"text": "A key issue in software evolution analysis is the identification of particular changes that occur across several versions of a program. We present change distilling, a tree differencing algorithm for fine-grained source code change extraction. For that, we have improved the existing algorithm by Chawathe et al. for extracting changes in hierarchically structured data. Our algorithm extracts changes by finding both a match between the nodes of the compared two abstract syntax trees and a minimum edit script that can transform one tree into the other given the computed matching. As a result, we can identify fine-grained change types between program versions according to our taxonomy of source code changes. We evaluated our change distilling algorithm with a benchmark that we developed, which consists of 1,064 manually classified changes in 219 revisions of eight methods from three different open source projects. We achieved significant improvements in extracting types of source code changes: Our algorithm approximates the minimum edit script 45 percent better than the original change extraction approach by Chawathe et al. We are able to find all occurring changes and almost reach the minimum conforming edit script, that is, we reach a mean absolute percentage error of 34 percent, compared to the 79 percent reached by the original algorithm. The paper describes both our change distilling algorithm and the results of our evolution.",
"title": ""
}
] | [
{
"docid": "neg:1840533_0",
"text": "Male (N = 248) and female (N = 282) subjects were given the Personal Attributes Questionnaire consisting of 55 bipolar attributes drawn from the Sex Role Stereotype Questionnaire by Rosenkrantz, Vogel, Bee, Broverman, and Broverman and were asked to rate themselves and then to compare directly the typical male and female college student. Self-ratings were divided into male-valued (stereotypically masculine attributes judged more desirable for both sexes), female-valued, and sex-specific items. Also administered was the Attitudes Toward Women Scale and a measure of social self-esteem. Correlations of the self-ratings with stereotype scores and the Attitudes Toward Women Scale were low in magnitude, suggesting that sex role expectations do not distort self-concepts. For both men and women, \"femininity\" on the female-valued self items and \"masculinity\" on the male-valued items were positively correlated, and both significantly related to self-esteem. The implications of the results for a concept of masculinity and femininity as a duality, characteristic of all individuals, and the use of the self-rating scales for measuring masculinity, femininity, and androgyny were discussed.",
"title": ""
},
{
"docid": "neg:1840533_1",
"text": "Software applications continue to grow in terms of the number of features they offer, making personalization increasingly important. Research has shown that most users prefer the control afforded by an adaptable approach to personalization rather than a system-controlled adaptive approach. Both types of approaches offer advantages and disadvantages. No study, however, has compared the efficiency of the two approaches. In two controlled lab studies, we measured the efficiency of static, adaptive and adaptable interfaces in the context of pull-down menus. These menu conditions were implemented as split menus, in which the top four items remained static, were adaptable by the subject, or adapted according to the subject’s frequently and recently used items. The results of Study 1 showed that a static split menu was significantly faster than an adaptive split menu. Also, when the adaptable split menu was not the first condition presented to subjects, it was significantly faster than the adaptive split menu, and not significantly different from the static split menu. The majority of users preferred the adaptable menu overall. Several implications for personalizing user interfaces based on these results are discussed. One question which arose after Study 1 was whether prior exposure to the menus and task has an effect on the efficiency of the adaptable menus. A second study was designed to follow-up on the theory that prior exposure to different types of menu layouts influences a user’s willingness to customize. Though the observed power of this study was low and no statistically significant effect of type of exposure was found, a possible trend arose: that exposure to an adaptive interface may have a positive impact on the user’s willingness to customize. This and other secondary results are discussed, along with several areas for future work. The research presented in this thesis should be seen as an initial step towards a more thorough comparison of adaptive and adaptable interfaces, and should provide motivation for further development of adaptable interaction techniques.",
"title": ""
},
{
"docid": "neg:1840533_2",
"text": "Network monitoring guides network operators in understanding the current behavior of a network. Therefore, accurate and efficient monitoring is vital to ensure that the network operates according to the intended behavior and then to troubleshoot any deviations. However, the current practice of network-monitoring largely depends on manual operations, and thus enterprises spend a significant portion of their budgets on the workforce that monitor their networks. We analyze present network-monitoring technologies, identify open problems, and suggest future directions. In particular, our findings are based on two different analyses. The first analysis assesses how well present technologies integrate with the entire cycle of network-management operations: design, deployment, and monitoring. Network operators first design network configurations, given a set of requirements, then they deploy the new design, and finally they verify it by continuously monitoring the network’s behavior. One of our observations is that the efficiency of this cycle can be greatly improved by automated deployment of pre-designed configurations, in response to changes in monitored network behavior. Our second analysis focuses on network-monitoring technologies and group issues in these technologies into five categories. Such grouping leads to the identification of major problem groups in network monitoring, e.g., efficient management of increasing amounts of measurements for storage, analysis, and presentation. We argue that continuous effort is needed in improving network-monitoring since the presented problems will become even more serious in the future, as networks grow in size and carry more data. 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840533_3",
"text": "The American College of Prosthodontists (ACP) has developed a classification system for partial edentulism based on diagnostic findings. This classification system is similar to the classification system for complete edentulism previously developed by the ACP. These guidelines are intended to help practitioners determine appropriate treatments for their patients. Four categories of partial edentulism are defined, Class I to Class IV, with Class I representing an uncomplicated clinical situation and class IV representing a complex clinical situation. Each class is differentiated by specific diagnostic criteria. This system is designed for use by dental professionals involved in the diagnosis and treatment of partially edentulous patients. Potential benefits of the system include (1) improved intraoperator consistency, (2) improved professional communication, (3) insurance reimbursement commensurate with complexity of care, (4) improved screening tool for dental school admission clinics, (5) standardized criteria for outcomes assessment and research, (6) enhanced diagnostic consistency, and (7) simplified aid in the decision to refer a patient.",
"title": ""
},
{
"docid": "neg:1840533_4",
"text": "E-learning is emerging as the new paradigm of modern education. Worldwide, the e-learning market has a growth rate of 35.6%, but failures exist. Little is known about why many users stop their online learning after their initial experience. Previous research done under different task environments has suggested a variety of factors affecting user satisfaction with e-Learning. This study developed an integrated model with six dimensions: learners, instructors, courses, technology, design, and environment. A survey was conducted to investigate the critical factors affecting learners’ satisfaction in e-Learning. The results revealed that learner computer anxiety, instructor attitude toward e-Learning, e-Learning course flexibility, e-Learning course quality, perceived usefulness, perceived ease of use, and diversity in assessments are the critical factors affecting learners’ perceived satisfaction. The results show institutions how to improve learner satisfaction and further strengthen their e-Learning implementation. 2006 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840533_5",
"text": "This chapter is a review of conceptions of knowledge as they appear in selected bodies of research on teaching. Writing as a philosopher of education, my interest is in how notions of knowledge are used and analyzed in a number of research programs that study teachers and their teaching. Of particular interest is the growing research literature on the knowledge that teachers generate as a result of their experience as teachers, in contrast to the knowledge of teaching that is generated by those who specialize in research on teaching. This distinction, as will become apparent, is one that divides more conventional scientific approaches to the study of teaching from what might be thought of as alternative approaches.",
"title": ""
},
{
"docid": "neg:1840533_6",
"text": "In recent years, theoretical and computational linguistics has paid much attention to linguistic items that form scales. In NLP, much research has focused on ordering adjectives by intensity (tiny < small). Here, we address the task of automatically ordering English adverbs by their intensifying or diminishing effect on adjectives (e.g. extremely small < very small). We experiment with 4 different methods: 1) using the association strength between adverbs and adjectives; 2) exploiting scalar patterns (such as not only X but Y); 3) using the metadata of product reviews; 4) clustering. The method that performs best is based on the use of metadata and ranks adverbs by their scaling factor relative to unmodified adjectives.",
"title": ""
},
{
"docid": "neg:1840533_7",
"text": "This paper presents power-control strategies of a grid-connected hybrid generation system with versatile power transfer. The hybrid system is the combination of photovoltaic (PV) array, wind turbine, and battery storage via a common dc bus. Versatile power transfer was defined as multimodes of operation, including normal operation without use of battery, power dispatching, and power averaging, which enables grid- or user-friendly operation. A supervisory control regulates power generation of the individual components so as to enable the hybrid system to operate in the proposed modes of operation. The concept and principle of the hybrid system and its control were described. A simple technique using a low-pass filter was introduced for power averaging. A modified hysteresis-control strategy was applied in the battery converter. Modeling and simulations were based on an electromagnetic-transient-analysis program. A 30-kW hybrid inverter and its control system were developed. The simulation and experimental results were presented to evaluate the dynamic performance of the hybrid system under the proposed modes of operation.",
"title": ""
},
{
"docid": "neg:1840533_8",
"text": "We tested the hypothesis derived from eye blink literature that when liars experience cognitive demand, their lies would be associated with a decrease in eye blinks, directly followed by an increase in eye blinks when the demand has ceased after the lie is told. A total of 13 liars and 13 truth tellers lied or told the truth in a target period; liars and truth tellers both told the truth in two baseline periods. Their eye blinks during the target and baseline periods and directly after the target period (target offset period) were recorded. The predicted pattern (compared to the baseline periods, a decrease in eye blinks during the target period and an increase in eye blinks during the target offset period) was found in liars and was strikingly different from the pattern obtained in truth tellers. They showed an increase in eye blinks during the target period compared to the baseline periods, whereas their pattern of eye blinks in the target offset period did not differ from baseline periods. The implications for lie detection are discussed.",
"title": ""
},
{
"docid": "neg:1840533_9",
"text": "Cyclic GMP (cGMP) modulates important cerebral processes including some forms of learning and memory. cGMP pathways are strongly altered in hyperammonemia and hepatic encephalopathy (HE). Patients with liver cirrhosis show reduced intracellular cGMP in lymphocytes, increased cGMP in plasma and increased activation of soluble guanylate cyclase by nitric oxide (NO) in lymphocytes, which correlates with minimal HE assessed by psychometric tests. Activation of soluble guanylate cyclase by NO is also increased in cerebral cortex, but reduced in cerebellum, from patients who died with HE. This opposite alteration is reproduced in vivo in rats with chronic hyperammonemia or HE. A main pathway modulating cGMP levels in brain is the glutamate-NO-cGMP pathway. The function of this pathway is impaired both in cerebellum and cortex of rats with hyperammonemia or HE. Impairment of this pathway is responsible for reduced ability to learn some types of tasks. Restoring the pathway and cGMP levels in brain restores learning ability. This may be achieved by administering phosphodiesterase inhibitors (zaprinast, sildenafil), cGMP, anti-inflammatories (ibuprofen) or antagonists of GABAA receptors (bicuculline). These data support that increasing cGMP by safe pharmacological means may be a new therapeutic approach to improve cognitive function in patients with minimal or clinical HE.",
"title": ""
},
{
"docid": "neg:1840533_10",
"text": "This study examined linkages between divorce, depressive/withdrawn parenting, and child adjustment problems at home and school. Middle class divorced single mother families (n = 35) and 2-parent families (n = 174) with a child in the fourth grade participated. Mothers and teachers completed yearly questionnaires and children were interviewed when they were in the fourth, fifth, and sixth grades. Structural equation modeling suggested that the association between divorce and child externalizing and internalizing behavior was partially mediated by depressive/withdrawn parenting when the children were in the fourth and fifth grades.",
"title": ""
},
{
"docid": "neg:1840533_11",
"text": "We propose a new approach for locating forged regions in a video using correlation of noise residue. In our method, block-level correlation values of noise residual are extracted as a feature for classification. We model the distribution of correlation of temporal noise residue in a forged video as a Gaussian mixture model (GMM). We propose a two-step scheme to estimate the model parameters. Consequently, a Bayesian classifier is used to find the optimal threshold value based on the estimated parameters. Two video inpainting schemes are used to simulate two different types of forgery processes for performance evaluation. Simulation results show that our method achieves promising accuracy in video forgery detection.",
"title": ""
},
{
"docid": "neg:1840533_12",
"text": "Studies show that attractive women demonstrate stronger preferences for masculine men than relatively unattractive women do. Such condition-dependent preferences may occur because attractive women can more easily offset the costs associated with choosing a masculine partner, such as lack of commitment and less interest in parenting. Alternatively, if masculine men display negative characteristics less to attractive women than to unattractive women, attractive women may perceive masculine men to have more positive personality traits than relatively unattractive women do. We examined how two indices of women’s attractiveness, body mass index (BMI) and waist–hip ratio (WHR), relate to perceptions of both the attractiveness and trustworthiness of masculinized versus feminized male faces. Consistent with previous studies, women with a low (attractive) WHR had stronger preferences for masculine male faces than did women with a relatively high (unattractive) WHR. This relationship remained significant when controlling for possible effects of BMI. Neither WHR nor BMI predicted perceptions of trustworthiness. These findings present converging evidence for condition-dependent mate preferences in women and suggest that such preferences do not reflect individual differences in the extent to which pro-social traits are ascribed to feminine versus masculine men. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840533_13",
"text": "Congruence, the state in which a software development organization harbors sufficient coordination capabilities to meet the coordination demands of the technical products under development, is increasingly recognized as critically important to the performance of an organization. To date, it has been shown that a variety of states of incongruence may exist in an organization, with possibly serious negative effects on product quality, development progress, cost, and so on. Exactly how to achieve congruence, or knowing what steps to take to achieve congruence, is less understood. In this paper, we introduce a series of key challenges that we believe must be comprehensively addressed in order for congruence research to result in wellunderstood approaches, tactics, and tools – so these can be infused in the day-to-day practices of development organizations to improve their coordination capabilities with better aligned social and technical structures. This effort is partially funded by the National Science Foundation under grant number IIS-0534775, IIS0329090, and the Software Industry Center and its sponsors, particularly the Alfred P. Sloan Foundation. Effort also supported by a 2007 Jazz Faculty Grant. The views and conclusions are those of the authors and do not reflect the opinions of any sponsoring organizations/agencies.",
"title": ""
},
{
"docid": "neg:1840533_14",
"text": "Speech recognition systems have used the concept of states as a way to decompose words into sub-word units for decades. As the number of such states now reaches the number of words used to train acoustic models, it is interesting to consider approaches that relax the assumption that words are made of states. We present here an alternative construction, where words are projected into a continuous embedding space where words that sound alike are nearby in the Euclidean sense. We show how embeddings can still allow to score words that were not in the training dictionary. Initial experiments using a lattice rescoring approach and model combination on a large realistic dataset show improvements in word error rate.",
"title": ""
},
{
"docid": "neg:1840533_15",
"text": "Zero-shot learning has received increasing interest as a means to alleviate the often prohibitive expense of annotating training data for large scale recognition problems. These methods have achieved great success via learning intermediate semantic representations in the form of attributes and more recently, semantic word vectors. However, they have thus far been constrained to the single-label case, in contrast to the growing popularity and importance of more realistic multi-label data. In this paper, for the first time, we investigate and formalise a general framework for multi-label zero-shot learning, addressing the unique challenge therein: how to exploit multi-label correlation at test time with no training data for those classes? In particular, we propose (1) a multi-output deep regression model to project an image into a semantic word space, which explicitly exploits the correlations in the intermediate semantic layer of word vectors; (2) a novel zero-shot learning algorithm for multi-label data that exploits the unique compositionality property of semantic word vector representations; and (3) a transductive learning strategy to enable the regression model learned from seen classes to generalise well to unseen classes. Our zero-shot learning experiments on a number of standard multi-label datasets demonstrate that our method outperforms a variety of baselines.",
"title": ""
},
{
"docid": "neg:1840533_16",
"text": "Background: Many popular educational programmes claim to be ‘brain-based’, despite pleas from the neuroscience community that these neuromyths do not have a basis in scientific evidence about the brain. Purpose: The main aim of this paper is to examine several of the most popular neuromyths in the light of the relevant neuroscientific and educational evidence. Examples of neuromyths include: 10% brain usage, leftand right-brained thinking, VAK learning styles and multiple intelligences Sources of evidence: The basis for the argument put forward includes a literature review of relevant cognitive neuroscientific studies, often involving neuroimaging, together with several comprehensive education reviews of the brain-based approaches under scrutiny. Main argument: The main elements of the argument are as follows. We use most of our brains most of the time, not some restricted 10% brain usage. This is because our brains are densely interconnected, and we exploit this interconnectivity to enable our primitively evolved primate brains to live in our complex modern human world. Although brain imaging delineates areas of higher (and lower) activation in response to particular tasks, thinking involves coordinated interconnectivity from both sides of the brain, not separate leftand right-brained thinking. High intelligence requires higher levels of inter-hemispheric and other connected activity. The brain’s interconnectivity includes the senses, especially vision and hearing. We do not learn by one sense alone, hence VAK learning styles do not reflect how our brains actually learn, nor the individual differences we observe in classrooms. Neuroimaging studies do not support multiple intelligences; in fact, the opposite is true. Through the activity of its frontal cortices, among other areas, the human brain seems to operate with general intelligence, applied to multiple areas of endeavour. Studies of educational effectiveness of applying any of these ideas in the classroom have failed to find any educational benefits. Conclusions: The main conclusions arising from the argument are that teachers should seek independent scientific validation before adopting brain-based products in their classrooms. A more sceptical approach to educational panaceas could contribute to an enhanced professionalism of the field.",
"title": ""
},
{
"docid": "neg:1840533_17",
"text": "How does the brain cause positive affective reactions to sensory pleasure? An answer to pleasure causation requires knowing not only which brain systems are activated by pleasant stimuli, but also which systems actually cause their positive affective properties. This paper focuses on brain causation of behavioral positive affective reactions to pleasant sensations, such as sweet tastes. Its goal is to understand how brain systems generate 'liking,' the core process that underlies sensory pleasure and causes positive affective reactions. Evidence suggests activity in a subcortical network involving portions of the nucleus accumbens shell, ventral pallidum, and brainstem causes 'liking' and positive affective reactions to sweet tastes. Lesions of ventral pallidum also impair normal sensory pleasure. Recent findings regarding this subcortical network's causation of core 'liking' reactions help clarify how the essence of a pleasure gloss gets added to mere sensation. The same subcortical 'liking' network, via connection to brain systems involved in explicit cognitive representations, may also in turn cause conscious experiences of sensory pleasure.",
"title": ""
},
{
"docid": "neg:1840533_18",
"text": "Objective. This prospective open trial aimed to evaluate the efficacy and safety of isotretinoin (13-cis-retinoic acid) in patients with Cushing's disease (CD). Methods. Sixteen patients with CD and persistent or recurrent hypercortisolism after transsphenoidal surgery were given isotretinoin orally for 6-12 months. The drug was started on 20 mg daily and the dosage was increased up to 80 mg daily if needed and tolerated. Clinical, biochemical, and hormonal parameters were evaluated at baseline and monthly for 6-12 months. Results. Of the 16 subjects, 4% (25%) persisted with normal urinary free cortisol (UFC) levels at the end of the study. UFC reductions of up to 52.1% were found in the rest. Only patients with UFC levels below 2.5-fold of the upper limit of normal achieved sustained UFC normalization. Improvements of clinical and biochemical parameters were also noted mostly in responsive patients. Typical isotretinoin side-effects were experienced by 7 patients (43.7%), though they were mild and mostly transient. We also observed that the combination of isotretinoin with cabergoline, in relatively low doses, may occasionally be more effective than either drug alone. Conclusions. Isotretinoin may be an effective and safe therapy for some CD patients, particularly those with mild hypercortisolism.",
"title": ""
},
{
"docid": "neg:1840533_19",
"text": "Lenz-Majewski hyperostotic dwarfism (LMHD) is an ultra-rare Mendelian craniotubular dysostosis that causes skeletal dysmorphism and widely distributed osteosclerosis. Biochemical and histopathological characterization of the bone disease is incomplete and nonexistent, respectively. In 2014, a publication concerning five unrelated patients with LMHD disclosed that all carried one of three heterozygous missense mutations in PTDSS1 encoding phosphatidylserine synthase 1 (PSS1). PSS1 promotes the biosynthesis of phosphatidylserine (PTDS), which is a functional constituent of lipid bilayers. In vitro, these PTDSS1 mutations were gain-of-function and increased PTDS production. Notably, PTDS binds calcium within matrix vesicles to engender hydroxyapatite crystal formation, and may enhance mesenchymal stem cell differentiation leading to osteogenesis. We report an infant girl with LMHD and a novel heterozygous missense mutation (c.829T>C, p.Trp277Arg) within PTDSS1. Bone turnover markers suggested that her osteosclerosis resulted from accelerated formation with an unremarkable rate of resorption. Urinary amino acid quantitation revealed a greater than sixfold elevation of phosphoserine. Our findings affirm that PTDSS1 defects cause LMHD and support enhanced biosynthesis of PTDS in the pathogenesis of LMHD.",
"title": ""
}
] |
1840534 | An $X$ -Band Lumped-Element Wilkinson Combiner With Embedded Impedance Transformation | [
{
"docid": "pos:1840534_0",
"text": "This paper proposes a ultra compact Wilkinson power combiner (WPC) incorporating synthetic transmission lines at K-band in CMOS technology. The 50 % improvement on the size reduction can be achieved by increasing the slow-wave factor of synthetic transmission line. The presented Wilkinson power combiner design is analyzed and fabricated by using standard 0.18 µm 1P6M CMOS technology. The prototype has only a chip size of 480 µm × 90 µm, corresponding to 0.0002λ02 at 21.5 GHz. The measured insertion losses and return losses are less and higher than 4 dB and 17.5 dB from 16 GHz to 27 GHz, respectively. Furthermore, the proposed WPC is also integrated into the phase shifter to confirm its feasibility. The prototype of phase shifter shows 15 % size reduction and on-wafer measurements show good linearity of full 360-degree phase shifting from 21 GHz to 27 GHz.",
"title": ""
}
] | [
{
"docid": "neg:1840534_0",
"text": "This paper reports on the design of a low phase noise 76.8 MHz AlN-on-silicon reference oscillator using SiO2 as temperature compensation material. The paper presents profound theoretical optimization of all the important parameters for AlN-on-silicon width extensional mode resonators, filling into the knowledge gap targeting the tens of megahertz frequency range for this type of resonators. Low loading CMOS cross coupled series resonance oscillator is used to reach the-state-of-the-art LTE phase noise specifications. Phase noise of 123 dBc/Hz at 1 kHz, and 162 dBc/Hz at 1 MHz offset is achieved. The oscillator's integrated root mean square RMS jitter is 106 fs (10 kHz to 20 MHz), consuming 850 μA, with startup time of 250 μs, and a figure-of-merit FOM of 216 dB. This work offers a platform for high performance MEMS reference oscillators; where, it shows the applicability of replacing bulky quartz with MEMS resonators in cellular platforms. & 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840534_1",
"text": "This article presents two designs of power amplifiers to be used with piezo-electric actuators in diesel injectors. The topologies as well as the controller approach and implementation are discussed.",
"title": ""
},
{
"docid": "neg:1840534_2",
"text": "This paper introduces methods to compute impulse responses without specification and estimation of the underlying multivariate dynamic system. The central idea consists in estimating local projections at each period of interest rather than extrapolating into increasingly distant horizons from a given model, as it is done with vector autoregressions (VAR). The advantages of local projections are numerous: (1) they can be estimated by simple regression techniques with standard regression packages; (2) they are more robust to misspecification; (3) joint or point-wise analytic inference is simple; and (4) they easily accommodate experimentation with highly non-linear and flexible specifications that may be impractical in a multivariate context. Therefore, these methods are a natural alternative to estimating impulse responses from VARs. Monte Carlo evidence and an application to a simple, closed-economy, new-Keynesian model clarify these numerous advantages. •",
"title": ""
},
{
"docid": "neg:1840534_3",
"text": "MicroRNAs (miRNAs) are endogenous approximately 22 nt RNAs that can play important regulatory roles in animals and plants by targeting mRNAs for cleavage or translational repression. Although they escaped notice until relatively recently, miRNAs comprise one of the more abundant classes of gene regulatory molecules in multicellular organisms and likely influence the output of many protein-coding genes.",
"title": ""
},
{
"docid": "neg:1840534_4",
"text": "In the last decades, a lot of 3D face recognition techniques have been proposed. They can be divided into three parts, holistic matching techniques, feature-based techniques and hybrid techniques. In this paper, a hybrid technique is used, where, a prototype of a new hybrid face recognition technique depends on 3D face scan images are designed, simulated and implemented. Some geometric rules are used for analyzing and mapping the face. Image processing is used to get the twodimensional values of predetermined and specific facial points, software programming is used to perform a three-dimensional coordinates of the predetermined points and to calculate several geometric parameter ratios and relations. Neural network technique is used for processing the calculated geometric parameters and then performing facial recognition. The new design is not affected by variant pose, illumination and expression and has high accurate level compared with the 2D analysis. Moreover, the proposed algorithm is of higher performance than latest’s published biometric recognition algorithms in terms of cost, confidentiality of results, and availability of design tools.",
"title": ""
},
{
"docid": "neg:1840534_5",
"text": "Despite the fact that MRI has evolved to become the standard method for diagnosis and monitoring of patients with brain tumours, conventional MRI sequences have two key limitations: the inability to show the full extent of the tumour and the inability to differentiate neoplastic tissue from nonspecific, treatment-related changes after surgery, radiotherapy, chemotherapy or immunotherapy. In the past decade, PET involving the use of radiolabelled amino acids has developed into an important diagnostic tool to overcome some of the shortcomings of conventional MRI. The Response Assessment in Neuro-Oncology working group — an international effort to develop new standardized response criteria for clinical trials in brain tumours — has recommended the additional use of amino acid PET imaging for brain tumour management. Concurrently, a number of advanced MRI techniques such as magnetic resonance spectroscopic imaging and perfusion weighted imaging are under clinical evaluation to target the same diagnostic problems. This Review summarizes the clinical role of amino acid PET in relation to advanced MRI techniques for differential diagnosis of brain tumours; delineation of tumour extent for treatment planning and biopsy guidance; post-treatment differentiation between tumour progression or recurrence versus treatment-related changes; and monitoring response to therapy. An outlook for future developments in PET and MRI techniques is also presented.",
"title": ""
},
{
"docid": "neg:1840534_6",
"text": "iii Acknowledgements iv Chapter",
"title": ""
},
{
"docid": "neg:1840534_7",
"text": "STUDY DESIGN\nThis study is a retrospective review of the initial enrollment data from a prospective multicentered study of adult spinal deformity.\n\n\nOBJECTIVES\nThe purpose of this study is to correlate radiographic measures of deformity with patient-based outcome measures in adult scoliosis.\n\n\nSUMMARY OF BACKGROUND DATA\nPrior studies of adult scoliosis have attempted to correlate radiographic appearance and clinical symptoms, but it has proven difficult to predict health status based on radiographic measures of deformity alone. The ability to correlate radiographic measures of deformity with symptoms would be useful for decision-making and surgical planning.\n\n\nMETHODS\nThe study correlates radiographic measures of deformity with scores on the Short Form-12, Scoliosis Research Society-29, and Oswestry profiles. Radiographic evaluation was performed according to an established positioning protocol for anteroposterior and lateral 36-inch standing radiographs. Radiographic parameters studied were curve type, curve location, curve magnitude, coronal balance, sagittal balance, apical rotation, and rotatory subluxation.\n\n\nRESULTS\nThe 298 patients studied include 172 with no prior surgery and 126 who had undergone prior spine fusion. Positive sagittal balance was the most reliable predictor of clinical symptoms in both patient groups. Thoracolumbar and lumbar curves generated less favorable scores than thoracic curves in both patient groups. Significant coronal imbalance of greater than 4 cm was associated with deterioration in pain and function scores for unoperated patients but not in patients with previous surgery.\n\n\nCONCLUSIONS\nThis study suggests that restoration of a more normal sagittal balance is the critical goal for any reconstructive spine surgery. The study suggests that magnitude of coronal deformity and extent of coronal correction are less critical parameters.",
"title": ""
},
{
"docid": "neg:1840534_8",
"text": "Knowledge and lessons from past accidental exposures in radiotherapy are very helpful in finding safety provisions to prevent recurrence. Disseminating lessons is necessary but not sufficient. There may be additional latent risks for other accidental exposures, which have not been reported or have not occurred, but are possible and may occur in the future if not identified, analyzed, and prevented by safety provisions. Proactive methods are available for anticipating and quantifying risk from potential event sequences. In this work, proactive methods, successfully used in industry, have been adapted and used in radiotherapy. Risk matrix is a tool that can be used in individual hospitals to classify event sequences in levels of risk. As with any anticipative method, the risk matrix involves a systematic search for potential risks; that is, any situation that can cause an accidental exposure. The method contributes new insights: The application of the risk matrix approach has identified that another group of less catastrophic but still severe single-patient events may have a higher probability, resulting in higher risk. The use of the risk matrix approach for safety assessment in individual hospitals would provide an opportunity for self-evaluation and managing the safety measures that are most suitable to the hospital's own conditions.",
"title": ""
},
{
"docid": "neg:1840534_9",
"text": "The paper overviews the 11th evaluation campaign organized by the IWSLT workshop. The 2014 evaluation offered multiple tracks on lecture transcription and translation based on the TED Talks corpus. In particular, this year IWSLT included three automatic speech recognition tracks, on English, German and Italian, five speech translation tracks, from English to French, English to German, German to English, English to Italian, and Italian to English, and five text translation track, also from English to French, English to German, German to English, English to Italian, and Italian to English. In addition to the official tracks, speech and text translation optional tracks were offered, globally involving 12 other languages: Arabic, Spanish, Portuguese (B), Hebrew, Chinese, Polish, Persian, Slovenian, Turkish, Dutch, Romanian, Russian. Overall, 21 teams participated in the evaluation, for a total of 76 primary runs submitted. Participants were also asked to submit runs on the 2013 test set (progress test set), in order to measure the progress of systems with respect to the previous year. All runs were evaluated with objective metrics, and submissions for two of the official text translation tracks were also evaluated with human post-editing.",
"title": ""
},
{
"docid": "neg:1840534_10",
"text": "Digital game play is becoming increasingly prevalent. Its participant-players number in the millions and its revenues are in billions of dollars. As they grow in popularity, digital games are also growing in complexity, depth and sophistication. This paper presents reasons why games and game play matter to the future of education. Drawing upon these works, the potential for instruction in digital games is recognised. Previous works in the area were also analysed with respect to their theoretical findings. We then propose a framework for digital Game-based Learning approach for adoption in education setting.",
"title": ""
},
{
"docid": "neg:1840534_11",
"text": "The optimisation of a tail-sitter UAV (Unmanned Aerial Vehicle) that uses a stall-tumble manoeuvre to transition from vertical to horizontal flight and a pull-up manoeuvre to regain the vertical is investigated. The tandem wing vehicle is controlled in the hover and vertical flight phases by prop-wash over wing mounted control surfaces. It represents an innovative and potentially simple solution to the dual requirements of VTOL (Vertical Take-off and Landing) and high speed forward flight by obviating the need for complex mechanical systems such as rotor heads or tilt-rotor systems.",
"title": ""
},
{
"docid": "neg:1840534_12",
"text": "The modernization of the US electric power infrastructure, especially in lieu of its aging, overstressed networks; shifts in social, energy and environmental policies, and also new vulnerabilities, is a national concern. Our system are required to be more adaptive and secure more than every before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities are discussed herein. This reference paper also outlines research focus for developing next generation of advance tools for efficient and flexible power systems operation and control.",
"title": ""
},
{
"docid": "neg:1840534_13",
"text": "characteristics, burnout, and (other-ratings of) performance (N 146). We hypothesized that job demands (e.g., work pressure and emotional demands) would be the most important antecedents of the exhaustion component of burnout, which, in turn, would predict in-role performance (hypothesis 1). In contrast, job resources (e.g., autonomy and social support) were hypothesized to be the most important predictors of extra-role performance, through their relationship with the disengagement component of burnout (hypothesis 2). In addition, we predicted that job resources would buffer the relationship between job demands and exhaustion (hypothesis 3), and that exhaustion would be positively related to disengagement (hypothesis 4). The results of structural equation modeling analyses provided strong support for hypotheses 1, 2, and 4, but rejected hypothesis 3. These findings support the JD-R model’s claim that job demands and job resources initiate two psychological processes, which eventually affect organizational outcomes. © 2004 Wiley Periodicals, Inc.",
"title": ""
},
{
"docid": "neg:1840534_14",
"text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in the real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergencehardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combination of the three primary constraint functions with different parameters can lead to construct a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameter specifying the level of each primary difficulty type respectively. Furthermore, the number of objectives in this toolkit are able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs MOEA/D-CDP and NSGA-II-CDP are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. The experiment results demonstrate that none of them can solve these problems efficiently, which stimulate us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.",
"title": ""
},
{
"docid": "neg:1840534_15",
"text": "ABSTRACT: Five questionnaires for assessing the usability of a website were compared in a study with 123 participants. The questionnaires studied were SUS, QUIS, CSUQ, a variant of Microsoft’s Product Reaction Cards, and one that we have used in our Usability Lab for several years. Each participant performed two tasks on each of two websites: finance.yahoo.com and kiplinger.com. All five questionnaires revealed that one site was significantly preferred over the other. The data were analyzed to determine what the results would have been at different sample sizes from 6 to 14. At a sample size of 6, only 30-40% of the samples would have identified that one of the sites was significantly preferred. Most of the data reach an apparent asymptote at a sample size of 12, where two of the questionnaires (SUS and CSUQ) yielded the same conclusion as the full dataset at least 90% of the time.",
"title": ""
},
{
"docid": "neg:1840534_16",
"text": "Some mathematical and natural objects (a random sequence, a sequence of zeros, a perfect crystal, a gas) are intuitively trivial, while others (e.g. the human body, the digits of π) contain internal evidence of a nontrivial causal history. We formalize this distinction by defining an object’s “logical depth” as the time required by a standard universal Turing machine to generate it from an input that is algorithmically random (i.e. Martin-Löf random). This definition of depth is shown to be reasonably machineindependent, as well as obeying a slow-growth law: deep objects cannot be quickly produced from shallow ones by any deterministic process, nor with much probability by a probabilistic process, but can be produced slowly. Next we apply depth to the physical problem of “self-organization,” inquiring in particular under what conditions (e.g. noise, irreversibility, spatial and other symmetries of the initial conditions and equations of motion) statistical-mechanical model systems can imitate computers well enough to undergo unbounded increase of depth in the limit of infinite space and time.",
"title": ""
},
{
"docid": "neg:1840534_17",
"text": "When operating in partially-known environments, autonomous vehicles must constantly update their maps and plans based on new sensor information. Much focus has been placed on developing efficient incremental planning algorithms that are able to efficiently replan when the map and associated cost function changes. However, much less attention has been placed on efficiently updating the cost function used by these planners, which can represent a significant portion of the time spent replanning. In this paper, we present the Limited Incremental Distance Transform algorithm, which can be used to efficiently update the cost function used for planning when changes in the environment are observed. Using this algorithm it is possible to plan paths in a completely incremental way starting from a list of changed obstacle classifications. We present results comparing the algorithm to the Euclidean distance transform and a mask-based incremental distance transform algorithm. Computation time is reduced by an order of magnitude for a UAV application. We also provide example results from an autonomous micro aerial vehicle with on-board sensing and computing.",
"title": ""
},
{
"docid": "neg:1840534_18",
"text": "This paper provides an objective evaluation of the performance impacts of binary XML encodings, using a fast stream-based XQuery processor as our representative application. Instead of proposing one binary format and comparing it against standard XML parsers, we investigate the individual effects of several binary encoding techniques that are shared by many proposals. Our goal is to provide a deeper understanding of the performance impacts of binary XML encodings in order to clarify the ongoing and often contentious debate over their merits, particularly in the domain of high performance XML stream processing.",
"title": ""
}
] |
1840535 | Texture-aware ASCII art synthesis with proportional fonts | [
{
"docid": "pos:1840535_0",
"text": "Image quality assessment (IQA) aims to use computational models to measure the image quality consistently with subjective evaluations. The well-known structural similarity index brings IQA from pixel- to structure-based stage. In this paper, a novel feature similarity (FSIM) index for full reference IQA is proposed based on the fact that human visual system (HVS) understands an image mainly according to its low-level features. Specifically, the phase congruency (PC), which is a dimensionless measure of the significance of a local structure, is used as the primary feature in FSIM. Considering that PC is contrast invariant while the contrast information does affect HVS' perception of image quality, the image gradient magnitude (GM) is employed as the secondary feature in FSIM. PC and GM play complementary roles in characterizing the image local quality. After obtaining the local quality map, we use PC again as a weighting function to derive a single quality score. Extensive experiments performed on six benchmark IQA databases demonstrate that FSIM can achieve much higher consistency with the subjective evaluations than state-of-the-art IQA metrics.",
"title": ""
},
{
"docid": "pos:1840535_1",
"text": "Many state-of-the-art perceptual image quality assessment (IQA) algorithms share a common two-stage structure: local quality/distortion measurement followed by pooling. While significant progress has been made in measuring local image quality/distortion, the pooling stage is often done in ad-hoc ways, lacking theoretical principles and reliable computational models. This paper aims to test the hypothesis that when viewing natural images, the optimal perceptual weights for pooling should be proportional to local information content, which can be estimated in units of bit using advanced statistical models of natural images. Our extensive studies based upon six publicly-available subject-rated image databases concluded with three useful findings. First, information content weighting leads to consistent improvement in the performance of IQA algorithms. Second, surprisingly, with information content weighting, even the widely criticized peak signal-to-noise-ratio can be converted to a competitive perceptual quality measure when compared with state-of-the-art algorithms. Third, the best overall performance is achieved by combining information content weighting with multiscale structural similarity measures.",
"title": ""
}
] | [
{
"docid": "neg:1840535_0",
"text": "BACKGROUND\nAlzheimer's disease (AD) causes considerable distress in caregivers who are continuously required to deal with requests from patients. Coping strategies play a fundamental role in modulating the psychologic impact of the disease, although their role is still debated. The present study aims to evaluate the burden and anxiety experienced by caregivers, the effectiveness of adopted coping strategies, and their relationships with burden and anxiety.\n\n\nMETHODS\nEighty-six caregivers received the Caregiver Burden Inventory (CBI) and the State-Trait Anxiety Inventory (STAI Y-1 and Y-2). The coping strategies were assessed by means of the Coping Inventory for Stressful Situations (CISS), according to the model proposed by Endler and Parker in 1990.\n\n\nRESULTS\nThe CBI scores (overall and single sections) were extremely high and correlated with dementia severity. Women, as well as older caregivers, showed higher scores. The trait anxiety (STAI-Y-2) correlated with the CBI overall score. The CISS showed that caregivers mainly adopted task-focused strategies. Women mainly adopted emotion-focused strategies and this style was related to a higher level of distress.\n\n\nCONCLUSION\nAD is associated with high distress among caregivers. The burden strongly correlates with dementia severity and is higher in women and in elderly subjects. Chronic anxiety affects caregivers who mainly rely on emotion-oriented coping strategies. The findings suggest providing support to families of patients with AD through tailored strategies aimed to reshape the dysfunctional coping styles.",
"title": ""
},
{
"docid": "neg:1840535_1",
"text": "Artificial bee colony algorithm simulating the intelligent foraging behavior of honey bee swarms is one of the most popular swarm based optimization algorithms. It has been introduced in 2005 and applied in several fields to solve different problems up to date. In this paper, an artificial bee colony algorithm, called as Artificial Bee Colony Programming (ABCP), is described for the first time as a new method on symbolic regression which is a very important practical problem. Symbolic regression is a process of obtaining a mathematical model using given finite sampling of values of independent variables and associated values of dependent variables. In this work, a set of symbolic regression benchmark problems are solved using artificial bee colony programming and then its performance is compared with the very well-known method evolving computer programs, genetic programming. The simulation results indicate that the proposed method is very feasible and robust on the considered test problems of symbolic regression. 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840535_2",
"text": "ÐGraphs are a powerful and universal data structure useful in various subfields of science and engineering. In this paper, we propose a new algorithm for subgraph isomorphism detection from a set of a priori known model graphs to an input graph that is given online. The new approach is based on a compact representation of the model graphs that is computed offline. Subgraphs that appear multiple times within the same or within different model graphs are represented only once, thus reducing the computational effort to detect them in an input graph. In the extreme case where all model graphs are highly similar, the run-time of the new algorithm becomes independent of the number of model graphs. Both a theoretical complexity analysis and practical experiments characterizing the performance of the new approach will be given. Index TermsÐGraph matching, graph isomorphism, subgraph isomorphism, preprocessing.",
"title": ""
},
{
"docid": "neg:1840535_3",
"text": "In commercial-off-the-shelf (COTS) multi-core systems, a task running on one core can be delayed by other tasks running simultaneously on other cores due to interference in the shared DRAM main memory. Such memory interference delay can be large and highly variable, thereby posing a significant challenge for the design of predictable real-time systems. In this paper, we present techniques to provide a tight upper bound on the worst-case memory interference in a COTS-based multi-core system. We explicitly model the major resources in the DRAM system, including banks, buses and the memory controller. By considering their timing characteristics, we analyze the worst-case memory interference delay imposed on a task by other tasks running in parallel. To the best of our knowledge, this is the first work bounding the request re-ordering effect of COTS memory controllers. Our work also enables the quantification of the extent by which memory interference can be reduced by partitioning DRAM banks. We evaluate our approach on a commodity multi-core platform running Linux/RK. Experimental results show that our approach provides an upper bound very close to our measured worst-case interference.",
"title": ""
},
{
"docid": "neg:1840535_4",
"text": "Gesturing and motion control have become common as interaction methods for video games since the advent of the Nintendo Wii game console. Despite the growing number of motion-based control platforms for video games, no set of shared design heuristics for motion control across the platforms has been published. Our approach in this paper combines analysis of player experiences across platforms. We work towards a collection of design heuristics for motion-based control by studying game reviews in two motion-based control platforms, Xbox 360 Kinect and PlayStation 3 Move. In this paper we present an analysis of player problems within 256 game reviews, on which we ground a set of heuristics for motion-controlled games.",
"title": ""
},
{
"docid": "neg:1840535_5",
"text": "Chemical fiber paper tubes are the essential spinning equipment on filament high-speed spinning and winding machine of the chemical fiber industry. The precision of its application directly impacts on the formation of the silk, determines the cost of the spinning industry. Due to the accuracy of its application requirements, the paper tubes with defects must be detected and removed. Traditional industrial defect detection methods are usually carried out using the target operator's characteristics, only to obtain surface information, not only the detection efficiency and accuracy is difficult to improve, due to human judgment, it's difficult to give effective algorithm for some targets. And the existing learning algorithms are also difficult to use the deep features, so they can not get good results. Based on the Faster-RCNN method in depth learning, this paper extracts the deep features of the defective target by Convolutional Neural Network (CNN), which effectively solves the internal joint defects that the traditional algorithm can not effectively detect. As to the external joints and damaged flaws that the traditional algorithm can detect, this algorithm has better results, the experimental accuracy rate can be raised up to 98.00%. At the same time, it can be applied to a variety of lighting conditions, reducing the pretreatment steps and improving efficiency. The experimental results show that the method is effective and worthy of further research.",
"title": ""
},
{
"docid": "neg:1840535_6",
"text": "Due to the advantages of deep learning, in this paper, a regularized deep feature extraction (FE) method is presented for hyperspectral image (HSI) classification using a convolutional neural network (CNN). The proposed approach employs several convolutional and pooling layers to extract deep features from HSIs, which are nonlinear, discriminant, and invariant. These features are useful for image classification and target detection. Furthermore, in order to address the common issue of imbalance between high dimensionality and limited availability of training samples for the classification of HSI, a few strategies such as L2 regularization and dropout are investigated to avoid overfitting in class data modeling. More importantly, we propose a 3-D CNN-based FE model with combined regularization to extract effective spectral-spatial features of hyperspectral imagery. Finally, in order to further improve the performance, a virtual sample enhanced method is proposed. The proposed approaches are carried out on three widely used hyperspectral data sets: Indian Pines, University of Pavia, and Kennedy Space Center. The obtained results reveal that the proposed models with sparse constraints provide competitive results to state-of-the-art methods. In addition, the proposed deep FE opens a new window for further research.",
"title": ""
},
{
"docid": "neg:1840535_7",
"text": "Artificial bee colony (ABC), an optimization algorithm is a recent addition to the family of population based search algorithm. ABC has taken its inspiration from the collective intelligent foraging behavior of honey bees. In this study we have incorporated golden section search mechanism in the structure of basic ABC to improve the global convergence and prevent to stick on a local solution. The proposed variant is termed as ILS-ABC. Comparative numerical results with the state-of-art algorithms show the performance of the proposal when applied to the set of unconstrained engineering design problems. The simulated results show that the proposed variant can be successfully applied to solve real life problems.",
"title": ""
},
{
"docid": "neg:1840535_8",
"text": "In this paper, we investigate resource block (RB) assignment and modulation-and-coding scheme (MCS) selection to maximize downlink throughput of long-term evolution (LTE) systems, where all RB's assigned to the same user in any given transmission time interval (TTI) must use the same MCS. We develop several effective MCS selection schemes by using the effective packet-level SINR based on exponential effective SINR mapping (EESM), arithmetic mean, geometric mean, and harmonic mean. From both analysis and simulation results, we show that the system throughput of all the proposed schemes are better than that of the scheme in [7]. Furthermore, the MCS selection scheme using harmonic mean based effective packet-level SINR almost reaches the optimal performance and significantly outperforms the other proposed schemes.",
"title": ""
},
{
"docid": "neg:1840535_9",
"text": "A stress-detection system is proposed based on physiological signals. Concretely, galvanic skin response (GSR) and heart rate (HR) are proposed to provide information on the state of mind of an individual, due to their nonintrusiveness and noninvasiveness. Furthermore, specific psychological experiments were designed to induce properly stress on individuals in order to acquire a database for training, validating, and testing the proposed system. Such system is based on fuzzy logic, and it described the behavior of an individual under stressing stimuli in terms of HR and GSR. The stress-detection accuracy obtained is 99.5% by acquiring HR and GSR during a period of 10 s, and what is more, rates over 90% of success are achieved by decreasing that acquisition period to 3-5 s. Finally, this paper comes up with a proposal that an accurate stress detection only requires two physiological signals, namely, HR and GSR, and the fact that the proposed stress-detection system is suitable for real-time applications.",
"title": ""
},
{
"docid": "neg:1840535_10",
"text": "The discovery of ammonia oxidation by mesophilic and thermophilic Crenarchaeota and the widespread distribution of these organisms in marine and terrestrial environments indicated an important role for them in the global nitrogen cycle. However, very little is known about their physiology or their contribution to nitrification. Here we report oligotrophic ammonia oxidation kinetics and cellular characteristics of the mesophilic crenarchaeon ‘Candidatus Nitrosopumilus maritimus’ strain SCM1. Unlike characterized ammonia-oxidizing bacteria, SCM1 is adapted to life under extreme nutrient limitation, sustaining high specific oxidation rates at ammonium concentrations found in open oceans. Its half-saturation constant (Km = 133 nM total ammonium) and substrate threshold (≤10 nM) closely resemble kinetics of in situ nitrification in marine systems and directly link ammonia-oxidizing Archaea to oligotrophic nitrification. The remarkably high specific affinity for reduced nitrogen (68,700 l per g cells per h) of SCM1 suggests that Nitrosopumilus-like ammonia-oxidizing Archaea could successfully compete with heterotrophic bacterioplankton and phytoplankton. Together these findings support the hypothesis that nitrification is more prevalent in the marine nitrogen cycle than accounted for in current biogeochemical models.",
"title": ""
},
{
"docid": "neg:1840535_11",
"text": "Fleet management systems are commonly used to coordinate mobility and delivery services in a broad variety of domains. However, their traditional top-down control architecture becomes a bottleneck in open and dynamic environments, where scalability, proactiveness, and autonomy are becoming key factors for their success. Here, the authors present an abstract event-based architecture for fleet management systems that supports tailoring dynamic control regimes for coordinating fleet vehicles, and illustrate it for the case of medical emergency management. Then, they go one step ahead in the transition toward automatic or driverless fleets, by conceiving fleet management systems in terms of cyber-physical systems, and putting forward the notion of cyber fleets.",
"title": ""
},
{
"docid": "neg:1840535_12",
"text": "Over the last decade blogs became an important part of the Web, where people can announce anything that is on their mind. Due to their high popularity blogs have great potential to mine public opinions regarding products. Such knowledge is very valuable as it could be used to adjust marketing campaigns or advertisement of products accordingly. In this paper we investigate how the blogosphere can be used to predict the success of products in the domain of music and movies. We analyze and characterize the blogging behavior in both domains particularly around product releases, propose different methods for extracting characteristic features from the blogosphere, and show that our predictions correspond to the real world measures Sales Rank and box office revenue respectively.",
"title": ""
},
{
"docid": "neg:1840535_13",
"text": "We introduce a method for learning to generate the surface of 3D shapes. Our approach represents a 3D shape as a collection of parametric surface elements and, in contrast to methods generating voxel grids or point clouds, naturally infers a surface representation of the shape. Beyond its novelty, our new shape generation framework, AtlasNet, comes with significant advantages, such as improved precision and generalization capabilities, and the possibility to generate a shape of arbitrary resolution without memory issues. We demonstrate these benefits and compare to strong baselines on the ShapeNet benchmark for two applications: (i) autoencoding shapes, and (ii) single-view reconstruction from a still image. We also provide results showing its potential for other applications, such as morphing, parametrization, super-resolution, matching, and co-segmentation.",
"title": ""
},
{
"docid": "neg:1840535_14",
"text": "UNLABELLED\nThe replacement of crowns and bridges is a common procedure for many dental practitioners. When correctly planned and executed, fixed prostheses will provide predictable function, aesthetics and value for money. However, when done poorly, they are more likely to fail prematurely and lead to irreversible damage to the teeth and supporting structures beneath. Sound diagnosis, assessment and technical skills are essential when dealing with failed or failing fixed restorations. These skills are essential for the 21st century dentist. This paper, with treated clinical examples, illustrates the areas of technical skill and clinical decisions needed for this type of work. It also provides advice on how the risk of premature failure can, in general, be further reduced. The article also confirms the very real risk in the UK of dento-legal problems when patients experience unexpected problems with their crowns and bridges.\n\n\nCLINICAL RELEVANCE\nThis paper outlines clinical implications of failed fixed prosthodontics to the dental surgeon. It also discusses factors that we can all use to predict and reduce the risk of premature restoration failure. Restoration design, clinical execution and patient factors are the most frequent reasons for premature problems. It is worth remembering (and informing patients) that the health of the underlying supporting dental tissue is often irreversibly compromised at the time of fixed restoration failure.",
"title": ""
},
{
"docid": "neg:1840535_15",
"text": "Compared to supervised feature selection, unsupervised feature selection tends to be more challenging due to the lack of guidance from class labels. Along with the increasing variety of data sources, many datasets are also equipped with certain side information of heterogeneous structure. Such side information can be critical for feature selection when class labels are unavailable. In this paper, we propose a new feature selection method, SideFS, to exploit such rich side information. We model the complex side information as a heterogeneous network and derive instance correlations to guide subsequent feature selection. Representations are learned from the side information network and the feature selection is performed in a unified framework. Experimental results show that the proposed method can effectively enhance the quality of selected features by incorporating heterogeneous side information.",
"title": ""
},
{
"docid": "neg:1840535_16",
"text": "CRYPTONITE is a programmable processor tailored to the needs of crypto algorithms. The design of CRYPTONITE was based on an in-depth application analysis in which standard crypto algorithms (AES, DES, MD5, SHA-1, etc) were distilled down to their core functionality. We describe this methodology and use AES as a central example. Starting with a functional description of AES, we give a high level account of how to implement AES efficiently in hardware, and present several novel optimizations (which are independent of CRYPTONITE).We then describe the CRYPTONITE architecture, highlighting how AES implementation issues influenced the design of the processor and its instruction set. CRYPTONITE is designed to run at high clock rates and be easy to implement in silicon while providing a significantly better performance/area/power tradeoff than general purpose processors.",
"title": ""
},
{
"docid": "neg:1840535_17",
"text": "We present a formulation of deep learning that aims at producing a large margin classifier. The notion of margin, minimum distance to a decision boundary, has served as the foundation of several theoretically profound and empirically successful results for both classification and regression tasks. However, most large margin algorithms are applicable only to shallow models with a preset feature representation; and conventional margin methods for neural networks only enforce margin at the output layer. Such methods are therefore not well suited for deep networks. In this work, we propose a novel loss function to impose a margin on any chosen set of layers of a deep network (including input and hidden layers). Our formulation allows choosing any lp norm (p ≥ 1) on the metric measuring the margin. We demonstrate that the decision boundary obtained by our loss has nice properties compared to standard classification loss functions. Specifically, we show improved empirical results on the MNIST, CIFAR-10 and ImageNet datasets on multiple tasks: generalization from small training sets, corrupted labels, and robustness against adversarial perturbations. The resulting loss is general and complementary to existing data augmentation (such as random/adversarial input transform) and regularization techniques such as weight decay, dropout, and batch norm. 2",
"title": ""
},
{
"docid": "neg:1840535_18",
"text": "Deep learning has delivered its powerfulness in many application domains, especially in image and speech recognition. As the backbone of deep learning, deep neural networks (DNNs) consist of multiple layers of various types with hundreds to thousands of neurons. Embedded platforms are now becoming essential for deep learning deployment due to their portability, versatility, and energy efficiency. The large model size of DNNs, while providing excellent accuracy, also burdens the embedded platforms with intensive computation and storage. Researchers have investigated on reducing DNN model size with negligible accuracy loss. This work proposes a Fast Fourier Transform (FFT)-based DNN training and inference model suitable for embedded platforms with reduced asymptotic complexity of both computation and storage, making our approach distinguished from existing approaches. We develop the training and inference algorithms based on FFT as the computing kernel and deploy the FFT-based inference model on embedded platforms achieving extraordinary processing speed.",
"title": ""
},
{
"docid": "neg:1840535_19",
"text": "Automated Guided Vehicles (AGVs) are being increasingly used for intelligent transportation and distribution of materials in warehouses and auto-production lines. In this paper, a preliminary hazard analysis of an AGV’s critical components is conducted by the approach of Failure Modes Effects and Criticality Analysis (FMECA). To implement this research, a particular AGV transport system is modelled as a phased mission. Then, Fault Tree Analysis (FTA) is adopted to model the causes of phase failure, enabling the probability of success in each phase and hence mission success to be determined. Through this research, a promising technical approach is established, which allows the identification of the critical AGV components and crucial mission phases of AGVs at the design stage. 1998 ACM Subject Classification B.8 Performance and Reliability",
"title": ""
}
] |
1840536 | A big data architecture for managing oceans of data and maritime applications | [
{
"docid": "pos:1840536_0",
"text": "As the challenge of our time, Big Data still has many research hassles, especially the variety of data. The high diversity of data sources often results in information silos, a collection of non-integrated data management systems with heterogeneous schemas, query languages, and APIs. Data Lake systems have been proposed as a solution to this problem, by providing a schema-less repository for raw data with a common access interface. However, just dumping all data into a data lake without any metadata management, would only lead to a 'data swamp'. To avoid this, we propose Constance, a Data Lake system with sophisticated metadata management over raw data extracted from heterogeneous data sources. Constance discovers, extracts, and summarizes the structural metadata from the data sources, and annotates data and metadata with semantic information to avoid ambiguities. With embedded query rewriting engines supporting structured data and semi-structured data, Constance provides users a unified interface for query processing and data exploration. During the demo, we will walk through each functional component of Constance. Constance will be applied to two real-life use cases in order to show attendees the importance and usefulness of our generic and extensible data lake system.",
"title": ""
},
{
"docid": "pos:1840536_1",
"text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as could computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine the several representative applications of big data, including enterprise management, Internet of Things, online social networks, medial applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big-picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.",
"title": ""
}
] | [
{
"docid": "neg:1840536_0",
"text": "We propose a new end-to-end single image dehazing method, called Densely Connected Pyramid Dehazing Network (DCPDN), which can jointly learn the transmission map, atmospheric light and dehazing all together. The end-to-end learning is achieved by directly embedding the atmospheric scattering model into the network, thereby ensuring that the proposed method strictly follows the physics-driven scattering model for dehazing. Inspired by the dense network that can maximize the information flow along features from different levels, we propose a new edge-preserving densely connected encoder-decoder structure with multi-level pyramid pooling module for estimating the transmission map. This network is optimized using a newly introduced edge-preserving loss function. To further incorporate the mutual structural information between the estimated transmission map and the dehazed result, we propose a joint-discriminator based on generative adversarial network framework to decide whether the corresponding dehazed image and the estimated transmission map are real or fake. An ablation study is conducted to demonstrate the effectiveness of each module evaluated at both estimated transmission map and dehazed result. Extensive experiments demonstrate that the proposed method achieves significant improvements over the state-of-the-art methods. Code and dataset is made available at: https://github.com/hezhangsprinter/DCPDN",
"title": ""
},
{
"docid": "neg:1840536_1",
"text": "Water molecules can be affected by magnetic fields (MF) due to their bipolar characteristics. In the present study maize plants, from sowing to the end period of generative stage, were irrigated with magnetically treated water (MTW).Tap water was treated with MF by passing through a locally designed alternative magnetic field generating apparatus (110 mT). Irrigation with MTW increased the ear length and fresh weight, 100-grain fresh and dry weights, and water productivity (119.5%, 119.1%, 114.2%, 116.6% and 122.3%, respectively), compared with the control groups. Levels of photosynthetic pigments i.e. chlorophyll a and b, and the contents of anthocyanin and flavonoids of the leaves were increased compared to those of non-treated ones. Increase of the activity of superoxide dismutase (SOD) and ascorbate peroxidase (APX) in leaves of the treated plants efficiently scavenged active oxygen species and resulted in the maintenance of photosynthetic membranes and reduction of malondealdehyde. Total ferritin, sugar, iron and calcium contents of kernels of MTW-irrigated plants were respectively 122.9%, 167.4%, 235% and 185% of the control ones. From the results presented here it can be concluded that the influence of MF on living plant cells, at least in part, is mediated by water. The results also suggest that irrigation of maize plant with MTW can be applied as a useful method for improvement of quantity and quality of it.",
"title": ""
},
{
"docid": "neg:1840536_2",
"text": "Methods for embedding secret data are more sophisticated than their ancient predecessors, but the basic principles remain unchanged.",
"title": ""
},
{
"docid": "neg:1840536_3",
"text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",
"title": ""
},
{
"docid": "neg:1840536_4",
"text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23",
"title": ""
},
{
"docid": "neg:1840536_5",
"text": "We present NeuroSAT, a message passing neural network that learns to solve SAT problems after only being trained as a classifier to predict satisfiability. Although it is not competitive with state-of-the-art SAT solvers, NeuroSAT can solve problems that are substantially larger and more difficult than it ever saw during training by simply running for more iterations. Moreover, NeuroSAT generalizes to novel distributions; after training only on random SAT problems, at test time it can solve SAT problems encoding graph coloring, clique detection, dominating set, and vertex cover problems, all on a range of distributions over small random graphs.",
"title": ""
},
{
"docid": "neg:1840536_6",
"text": "Helpfulness of online reviews is a multi-faceted concept that can be driven by several types of factors. This study was designed to extend existing research on online review helpfulness by looking at not just the quantitative factors (such as word count), but also qualitative aspects of reviewers (including reviewer experience, reviewer impact, reviewer cumulative helpfulness). This integrated view uncovers some insights that were not available before. Our findings suggest that word count has a threshold in its effects on review helpfulness. Beyond this threshold, its effect diminishes significantly or becomes near non-existent. Reviewer experience and their impact were not statistically significant predictors of helpfulness, but past helpfulness records tended to predict future helpfulness ratings. Review framing was also a strong predictor of helpfulness. As a result, characteristics of reviewers and review messages have a varying degree of impact on review helpfulness. Theoretical and practical implications are discussed. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840536_7",
"text": "Female genital cosmetic surgery procedures have gained popularity in the West in recent years. Marketing by surgeons promotes the surgeries, but professional organizations have started to question the promotion and practice of these procedures. Despite some surgeon claims of drastic transformations of psychological, emotional, and sexual life associated with the surgery, little reliable evidence of such effects exists. This article achieves two objectives. First, reviewing the published academic work on the topic, it identifies the current state of knowledge around female genital cosmetic procedures, as well as limitations in our knowledge. Second, examining a body of critical scholarship that raises sociological and psychological concerns not typically addressed in medical literature, it summarizes broader issues and debates. Overall, the article demonstrates a paucity of scientific knowledge and highlights a pressing need to consider the broader ramifications of surgical practices. \"Today we have a whole society held in thrall to the drastic plastic of labial rejuvenation.\"( 1 ) \"At the present time, the field of female cosmetic genital surgery is like the old Wild, Wild West: wide open and unregulated\"( 2 ).",
"title": ""
},
{
"docid": "neg:1840536_8",
"text": "OBJECTIVE\nIn multidisciplinary prenatal diagnosis centers, the search for a tetrasomy 12p mosaic is requested following the discovery of a diaphragmatic hernia in the antenatal period. Thus, the series of Pallister Killian syndromes (PKS: OMIM 601803) probably overestimate the prevalence of diaphragmatic hernia in this syndrome to the detriment of other morphological abnormalities.\n\n\nMETHODS\nA multicenter retrospective study was conducted with search for assistance from members of the French society for Fetal Pathology. For each identified case, we collected all antenatal and postnatal data. Antenatal data were compared with data from the clinicopathological examination to assess the adequacy of sonographic signs of PKS. A review of the literature on antenatal morphological anomalies in case of PKS completed the study.\n\n\nRESULTS\nTen cases were referred to us: 7 had cytogenetic confirmation and 6 had ultrasound screening. In the prenatal as well as post mortem period, the most common sign is facial dysmorphism (5 cases/6). A malformation of limbs is reported in half of the cases (3 out of 6). Ultrasound examination detected craniofacial dysmorphism in 5 cases out of 6. We found 1 case of left diaphragmatic hernia. Our results are in agreement with the malformation spectrum described in the literature.\n\n\nCONCLUSION\nSome malformation associations could evoke a SPK without classical diaphragmatic hernia.",
"title": ""
},
{
"docid": "neg:1840536_9",
"text": "Nonlinear dynamical systems are ubiquitous in science and engineering, yet many issues still exist related to the analysis and prediction of these systems. Koopman theory circumvents these issues by transforming the finite-dimensional nonlinear dynamics to a linear dynamical system of functions in an infinite-dimensional Hilbert space of observables. The eigenfunctions of the Koopman operator evolve linearly in time and thus provide a natural coordinate system for simplifying the dynamical behaviors of the system. We consider a family of observable functions constructed by projecting the delay coordinates of the system onto the eigenvectors of the autocorrelation function, which can be regarded as continuous SVD basis vectors for time-delay observables. We observe that these functions are the most parsimonious basis of observables for a system with Koopman mode decomposition of order N , in the sense that the associated Koopman eigenfunctions are guaranteed to lie in the span of the first N of these coordinates. We conjecture and prove a number of theoretical results related to the quality of these approximations in the more general setting where the system has mixed spectra or the coordinates are otherwise insufficient to capture the full spectral information. We prove a related and very general result that the dynamics of the observables generated by projecting delay coordinates onto an arbitrary orthonormal basis are systemindependent and depend only on the choice of basis, which gives a highly efficient way of computing representations of the Koopman operator in these coordinates. We show that this formalism provides a theoretical underpinning for the empirical results in [8], which found that chaotic dynamical systems can be approximately factored into intermittently forced linear systems when viewed in delay coordinates. Finally, we compute these time delay observables for a number of example dynamical systems and show that empirical results match our theory.",
"title": ""
},
{
"docid": "neg:1840536_10",
"text": "Physical activity has a positive impact on people's well-being, and it may also decrease the occurrence of chronic diseases. Activity recognition with wearable sensors can provide feedback to the user about his/her lifestyle regarding physical activity and sports, and thus, promote a more active lifestyle. So far, activity recognition has mostly been studied in supervised laboratory settings. The aim of this study was to examine how well the daily activities and sports performed by the subjects in unsupervised settings can be recognized compared to supervised settings. The activities were recognized by using a hybrid classifier combining a tree structure containing a priori knowledge and artificial neural networks, and also by using three reference classifiers. Activity data were collected for 68 h from 12 subjects, out of which the activity was supervised for 21 h and unsupervised for 47 h. Activities were recognized based on signal features from 3-D accelerometers on hip and wrist and GPS information. The activities included lying down, sitting and standing, walking, running, cycling with an exercise bike, rowing with a rowing machine, playing football, Nordic walking, and cycling with a regular bike. The total accuracy of the activity recognition using both supervised and unsupervised data was 89% that was only 1% unit lower than the accuracy of activity recognition using only supervised data. However, the accuracy decreased by 17% unit when only supervised data were used for training and only unsupervised data for validation, which emphasizes the need for out-of-laboratory data in the development of activity-recognition systems. The results support a vision of recognizing a wider spectrum, and more complex activities in real life settings.",
"title": ""
},
{
"docid": "neg:1840536_11",
"text": "Deep Learning’s recent successes have mostly relied on Convolutional Networks, which exploit fundamental statistical properties of images, sounds and video data: the local stationarity and multi-scale compositional structure, that allows expressing long range interactions in terms of shorter, localized interactions. However, there exist other important examples, such as text documents or bioinformatic data, that may lack some or all of these strong statistical regularities. In this paper we consider the general question of how to construct deep architectures with small learning complexity on general non-Euclidean domains, which are typically unknown and need to be estimated from the data. In particular, we develop an extension of Spectral Networks which incorporates a Graph Estimation procedure, that we test on large-scale classification problems, matching or improving over Dropout Networks with far less parameters to estimate.",
"title": ""
},
{
"docid": "neg:1840536_12",
"text": "Conventional topic modeling schemes, such as Latent Dirichlet Allocation, are known to perform inadequately when applied to tweets, due to the sparsity of short documents. To alleviate these disadvantages, we apply several pooling techniques, aggregating similar tweets into individual documents, and specifically study the aggregation of tweets sharing authors or hashtags. The results show that aggregating similar tweets into individual documents significantly increases topic coherence.",
"title": ""
},
{
"docid": "neg:1840536_13",
"text": "Modern cryptographic practice rests on the use of one-way functions, which are easy to evaluate but difficult to invert. Unfortunately, commonly used one-way functions are either based on unproven conjectures or have known vulnerabilities. We show that instead of relying on number theory, the mesoscopic physics of coherent transport through a disordered medium can be used to allocate and authenticate unique identifiers by physically reducing the medium's microstructure to a fixed-length string of binary digits. These physical one-way functions are inexpensive to fabricate, prohibitively difficult to duplicate, admit no compact mathematical representation, and are intrinsically tamper-resistant. We provide an authentication protocol based on the enormous address space that is a principal characteristic of physical one-way functions.",
"title": ""
},
{
"docid": "neg:1840536_14",
"text": "For several decades now, there has been sporadic interest in automatically characterizing the speech impairment due to Parkinson's disease (PD). Most early studies were confined to quantifying a few speech features that were easy to compute. More recent studies have adopted a machine learning approach where a large number of potential features are extracted and the models are learned automatically from the data. In the same vein, here we characterize the disease using a relatively large cohort of 168 subjects, collected from multiple (three) clinics. We elicited speech using three tasks - the sustained phonation task, the diadochokinetic task and a reading task, all within a time budget of 4 minutes, prompted by a portable device. From these recordings, we extracted 1582 features for each subject using openSMILE, a standard feature extraction tool. We compared the effectiveness of three strategies for learning a regularized regression and find that ridge regression performs better than lasso and support vector regression for our task. We refine the feature extraction to capture pitch-related cues, including jitter and shimmer, more accurately using a time-varying harmonic model of speech. Our results show that the severity of the disease can be inferred from speech with a mean absolute error of about 5.5, explaining 61% of the variance and consistently well-above chance across all clinics. Of the three speech elicitation tasks, we find that the reading task is significantly better at capturing cues than diadochokinetic or sustained phonation task. In all, we have demonstrated that the data collection and inference can be fully automated, and the results show that speech-based assessment has promising practical application in PD. The techniques reported here are more widely applicable to other paralinguistic tasks in clinical domain.",
"title": ""
},
{
"docid": "neg:1840536_15",
"text": "Recent introduction of all-oral direct-acting antiviral (DAA) treatment has revolutionized care of patients with chronic hepatitis C virus infection. Because patients with different liver disease stages have been treated with great success including those awaiting liver transplantation, therapy has been extended to patients with hepatocellular carcinoma as well. From observational studies among compensated cirrhotic hepatitis C patients treated with interferon-containing regimens, it would have been expected that the rate of hepatocellular carcinoma occurrence is markedly decreased after a sustained virological response. However, recently 2 studies have been published reporting markedly increased rates of tumor recurrence and occurrence after viral clearance with DAA agents. Over the last decades, it has been established that chronic antigen stimulation during persistent infection with hepatitis C virus is associated with continuous activation and impaired function of several immune cell populations, such as natural killer cells and virus-specific T cells. This review therefore focuses on recent studies evaluating the restoration of adaptive and innate immune cell populations after DAA therapy in patients with chronic hepatitis C virus infection in the context of the immune responses in hepatocarcinogenesis.",
"title": ""
},
{
"docid": "neg:1840536_16",
"text": "We present a comprehensive overview of the stereoscopic Intel RealSense RGBD imaging systems. We discuss these systems’ mode-of-operation, functional behavior and include models of their expected performance, shortcomings, and limitations. We provide information about the systems’ optical characteristics, their correlation algorithms, and how these properties can affect different applications, including 3D reconstruction and gesture recognition. Our discussion covers the Intel RealSense R200 and the Intel RealSense D400 (formally RS400).",
"title": ""
},
{
"docid": "neg:1840536_17",
"text": "We develop theory that distinguishes trust among employees in typical task contexts (marked by low levels of situational unpredictability and danger) from trust in “highreliability” task contexts (those marked by high levels of situational unpredictability and danger). A study of firefighters showed that trust in high-reliability task contexts was based on coworkers’ integrity, whereas trust in typical task contexts was also based on benevolence and identification. Trust in high-reliability contexts predicted physical symptoms, whereas trust in typical contexts predicted withdrawal. Job demands moderated linkages with performance: trust in high-reliability task contexts was a more positive predictor of performance when unpredictable and dangerous calls were more frequent.",
"title": ""
},
{
"docid": "neg:1840536_18",
"text": " Fig.1にマスタリーラーニングのアウトラインを示す。 初めに教師はカリキュラムや教材をコンセプトやアイディアが重要であるためレビューする必要がある。 次に教師による診断手段や診断プロセスという形式的評価の計画である。また学習エラーを改善するための Corrective Activitiesの計画の主要な援助でもある。 Corrective Activites 矯正活動にはさまざまな形がとられる。Peer Cross-age Tutoring、コンピュータ支援レッスンなど Enrichment Activities 問題解決練習の特別なtutoringであり、刺激的で早熟な学習者に実りのある学習となっている。 Formative Assesment B もしCorrective Activitiesが学習者を改善しているのならばこの2回目の評価では体得を行っている。 この2回目の評価は学習者に改善されていることや良い学習者になっていることを示し、強力なモチベーショ ンのデバイスとなる。最後は累積的試験または評価の開発がある。",
"title": ""
},
{
"docid": "neg:1840536_19",
"text": "In this paper, Principal Component Analysis (PCA), Most Discriminant Features (MDF), and Regularized-Direct Linear Discriminant Analysis (RD-LDA) - based feature extraction approaches are tested and compared in an experimental personal recognition system. The system is multimodal and bases on features extracted from nine regions of an image of the palmar surface of the hand. For testing purposes 10 gray-scale images of right hand of 184 people were acquired. The experiments have shown that the best results are obtained with the RD-LDA - based features extraction approach (100% correctness for 920 identification tests and EER = 0.01% for 64170 verification tests).",
"title": ""
}
] |
1840537 | Assessing and moving on from the dominant project management discourse in the light of project overruns | [
{
"docid": "pos:1840537_0",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "pos:1840537_1",
"text": "This publication contains reprint articles for which IEEE does not hold copyright. Full text is not available on IEEE Xplore for these articles.",
"title": ""
}
] | [
{
"docid": "neg:1840537_0",
"text": "An evolutionary optimization method over continuous search spaces, differential evolution, has recently been successfully applied to real world and artificial optimization problems and proposed also for neural network training. However, differential evolution has not been comprehensively studied in the context of training neural network weights, i.e., how useful is differential evolution in finding the global optimum for expense of convergence speed. In this study, differential evolution has been analyzed as a candidate global optimization method for feed-forward neural networks. In comparison to gradient based methods, differential evolution seems not to provide any distinct advantage in terms of learning rate or solution quality. Differential evolution can rather be used in validation of reached optima and in the development of regularization terms and non-conventional transfer functions that do not necessarily provide gradient information.",
"title": ""
},
{
"docid": "neg:1840537_1",
"text": "Data mining, the extraction of hidden predictive information from large databases, is a powerful new technology with great potential to help companies focus on the most important information in their data warehouses. Nowadays, large amount of data and information are available, Data can now be stored in many different kinds of databases and information repositories, being available on the Internet. There is a need for powerful techniques for better interpretation of these data that exceeds the human's ability for comprehension and making decision in a better way. There are data mining, web mining and knowledge discovery tools and software packages such as WEKA Tool and RapidMiner tool. The work deals with analysis of WEKA, RapidMiner and NetTools spider tools KNIME and Orange. There are various tools available for data mining and web mining. Therefore awareness is required about the quantitative investigation of these tools. This paper focuses on various functional, practical, cognitive as well as analysis aspects that users may be looking for in the tools. Complete study addresses the usefulness and importance of these tools including various aspects. Analysis presents various benefits of these data mining tools along with desired aspects and the features of current tools. KEYWORDSData Mining, KDD, Data Mining Tools.",
"title": ""
},
{
"docid": "neg:1840537_2",
"text": "Higher-level semantics such as visual attributes are crucial for fundamental multimedia applications. We present a novel attribute discovery approach that can automatically identify, model and name attributes from an arbitrary set of image and text pairs that can be easily gathered on the Web. Different from conventional attribute discovery methods, our approach does not rely on any pre-defined vocabularies and human labeling. Therefore, we are able to build a large visual knowledge base without any human efforts. The discovery is based on a novel deep architecture, named Independent Component Multimodal Autoencoder (ICMAE), that can continually learn shared higher-level representations across the visual and textual modalities. With the help of the resultant representations encoding strong visual and semantic evidences, we propose to (a) identify attributes and their corresponding high-quality training images, (b) iteratively model them with maximum compactness and comprehensiveness, and (c) name the attribute models with human understandable words. To date, the proposed system has discovered 1,898 attributes over 1.3 million pairs of image and text. Extensive experiments on various real-world multimedia datasets demonstrate the quality and effectiveness of the discovered attributes, facilitating multimedia applications such as image annotation and retrieval as compared to the state-of-the-art approaches.",
"title": ""
},
{
"docid": "neg:1840537_3",
"text": "Jpred (http://www.compbio.dundee.ac.uk/jpred) is a secondary structure prediction server powered by the Jnet algorithm. Jpred performs over 1000 predictions per week for users in more than 50 countries. The recently updated Jnet algorithm provides a three-state (alpha-helix, beta-strand and coil) prediction of secondary structure at an accuracy of 81.5%. Given either a single protein sequence or a multiple sequence alignment, Jpred derives alignment profiles from which predictions of secondary structure and solvent accessibility are made. The predictions are presented as coloured HTML, plain text, PostScript, PDF and via the Jalview alignment editor to allow flexibility in viewing and applying the data. The new Jpred 3 server includes significant usability improvements that include clearer feedback of the progress or failure of submitted requests. Functional improvements include batch submission of sequences, summary results via email and updates to the search databases. A new software pipeline will enable Jnet/Jpred to continue to be updated in sync with major updates to SCOP and UniProt and so ensures that Jpred 3 will maintain high-accuracy predictions.",
"title": ""
},
{
"docid": "neg:1840537_4",
"text": "It has recently been reported that dogs affected by canine heartworm disease (Dirofilaria immitis) can show an increase in plasma levels of myoglobin and cardiac troponin I, two markers of muscle/myocardial injury. In order to determine if this increase is due to myocardial damage, the right ventricle of 24 naturally infected dogs was examined by routine histology and immunohistochemistry with anti-myoglobin and anti-cardiac troponin I antibodies. Microscopic lesions included necrosis and myocyte vacuolization, and were associated with loss of staining for one or both proteins. Results confirm that increased levels of myoglobin and cardiac troponin I are indicative of myocardial damage in dogs affected by heartworm disease.",
"title": ""
},
{
"docid": "neg:1840537_5",
"text": "Camera tracking is an important issue in many computer vision and robotics applications, such as, augmented reality and Simultaneous Localization And Mapping (SLAM). In this paper, a feature-based technique for monocular camera tracking is proposed. The proposed approach is based on tracking a set of sparse features, which are successively tracked in a stream of video frames. In the developed system, camera initially views a chessboard with known cell size for few frames to be enabled to construct initial map of the environment. Thereafter, Camera pose estimation for each new incoming frame is carried out in a framework that is merely working with a set of visible natural landmarks. Estimation of 6-DOF camera pose parameters is performed using a particle filter. Moreover, recovering depth of newly detected landmarks, a linear triangulation method is used. The proposed method is applied on real world videos and positioning error of the camera pose is less than 3 cm in average that indicates effectiveness and accuracy of the proposed method.",
"title": ""
},
{
"docid": "neg:1840537_6",
"text": "While some unmanned aerial vehicles (UAVs) have the capacity to carry mechanically stabilized camera equipment, weight limits or other problems may make mechanical stabilization impractical. As a result many UAVs rely on fixed cameras to provide a video stream to an operator or observer. With a fixed camera, the video stream is often unsteady due to the multirotor's movement from wind and acceleration. These video streams are often analyzed by both humans and machines, and the unwanted camera movement can cause problems for both. For a human observer, unwanted movement may simply make it harder to follow the video, while for computer algorithms, it may severely impair the algorithm's intended function. There has been significant research on how to stabilize videos using feature tracking to determine camera movement, which in turn is used to manipulate frames and stabilize the camera stream. We believe, however, that this process could be greatly simplified by using data from a UAV's on-board inertial measurement unit (IMU) to stabilize the camera feed. In this paper we present an algorithm for video stabilization based only on IMU data from a UAV platform. Our results show that our algorithm successfully stabilizes the camera stream with the added benefit of requiring less computational power.",
"title": ""
},
{
"docid": "neg:1840537_7",
"text": "With the advent of social networks and micro-blogging systems, the way of communicating with other people and spreading information has changed substantially. Persons with different backgrounds, age and education exchange information and opinions, spanning various domains and topics, and have now the possibility to directly interact with popular users and authoritative information sources usually unreachable before the advent of these environments. As a result, the mechanism of information propagation changed deeply, the study of which is indispensable for the sake of understanding the evolution of information networks. To cope up with this intention, in this paper, we propose a novel model which enables to delve into the spread of information over a social network along with the change in the user relationships with respect to the domain of discussion. For this, considering Twitter as a case study, we aim at analyzing the multiple paths the information follows over the network with the goal of understanding the dynamics of the information contagion with respect to the change of the topic of discussion. We then provide a method for estimating the influence among users by evaluating the nature of the relationship among them with respect to the topic of discussion they share. Using a vast sample of the Twitter network, we then present various experiments that illustrate our proposal and show the efficacy of the proposed approach in modeling this information spread.",
"title": ""
},
{
"docid": "neg:1840537_8",
"text": "In this paper, the problem of switching stabilization for a class of switched nonlinear systems is studied by using average dwell time (ADT) switching, where the subsystems are possibly all unstable. First, a new concept of ADT is given, which is different from the traditional definition of ADT. Based on the new proposed switching signals, a sufficient condition of stabilization for switched nonlinear systems with unstable subsystems is derived. Then, the T-S fuzzy modeling approach is applied to represent the underlying nonlinear system to make the obtained condition easily verified. A novel multiple quadratic Lyapunov function approach is also proposed, by which some conditions are provided in terms of a set of linear matrix inequalities to guarantee the derived T-S fuzzy system to be asymptotically stable. Finally, a numerical example is given to demonstrate the effectiveness of our developed results.",
"title": ""
},
{
"docid": "neg:1840537_9",
"text": "NoSQL and especially graph databases are constantly gaining popularity among developers of Web 2.0 applications as they promise to deliver superior performance when handling highly interconnected data compared to traditional relational databases. Apache Shindig is the reference implementation for OpenSocial with its highly interconnected data model. However, the default back-end is based on a relational database. In this paper we describe our experiences with a different back-end based on the graph database Neo4j and compare the alternatives for querying data with each other and the JPA-based sample back-end running on MySQL. Moreover, we analyze why the different approaches often may yield such diverging results concerning throughput. The results show that the graph-based back-end can match and even outperform the traditional JPA implementation and that Cypher is a promising candidate for a standard graph query language, but still leaves room for improvements.",
"title": ""
},
{
"docid": "neg:1840537_10",
"text": "We are at a key juncture in history where biodiversity loss is occurring daily and accelerating in the face of population growth, climate change, and rampant development. Simultaneously, we are just beginning to appreciate the wealth of human health benefits that stem from experiencing nature and biodiversity. Here we assessed the state of knowledge on relationships between human health and nature and biodiversity, and prepared a comprehensive listing of reported health effects. We found strong evidence linking biodiversity with production of ecosystem services and between nature exposure and human health, but many of these studies were limited in rigor and often only correlative. Much less information is available to link biodiversity and health. However, some robust studies indicate that exposure to microbial biodiversity can improve health, specifically in reducing certain allergic and respiratory diseases. Overall, much more research is needed on mechanisms of causation. Also needed are a reenvisioning of land-use planning that places human well-being at the center and a new coalition of ecologists, health and social scientists and planners to conduct research and develop policies that promote human interaction with nature and biodiversity. Improvements in these areas should enhance human health and ecosystem, community, as well as human resilience. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).",
"title": ""
},
{
"docid": "neg:1840537_11",
"text": "Most research in reading comprehension has focused on answering questions based on individual documents or even single paragraphs. We introduce a method which integrates and reasons relying on information spread within documents and across multiple documents. We frame it as an inference problem on a graph. Mentions of entities are nodes of this graph where edges encode relations between different mentions (e.g., withinand cross-document co-references). Graph convolutional networks (GCNs) are applied to these graphs and trained to perform multi-step reasoning. Our Entity-GCN method is scalable and compact, and it achieves state-of-the-art results on the WIKIHOP dataset (Welbl et al., 2017).",
"title": ""
},
{
"docid": "neg:1840537_12",
"text": "Fatma Özcan IBM Almaden Research Center San Jose, CA [email protected] Nesime Tatbul Intel Labs and MIT Cambridge, MA [email protected] Daniel J. Abadi Yale University New Haven, CT [email protected] Marcel Kornacker Cloudera San Francisco, CA [email protected] C Mohan IBM Almaden Research Center San Jose, CA [email protected] Karthik Ramasamy Twitter, Inc. San Francisco, CA [email protected] Janet Wiener Facebook, Inc. Menlo Park, CA [email protected]",
"title": ""
},
{
"docid": "neg:1840537_13",
"text": "Overfitting is one of the most critical challenges in deep neural networks, and there are various types of regularization methods to improve generalization performance. Injecting noises to hidden units during training, e.g., dropout, is known as a successful regularizer, but it is still not clear enough why such training techniques work well in practice and how we can maximize their benefit in the presence of two conflicting objectives—optimizing to true data distribution and preventing overfitting by regularization. This paper addresses the above issues by 1) interpreting that the conventional training methods with regularization by noise injection optimize the lower bound of the true objective and 2) proposing a technique to achieve a tighter lower bound using multiple noise samples per training example in a stochastic gradient descent iteration. We demonstrate the effectiveness of our idea in several computer vision applications.",
"title": ""
},
{
"docid": "neg:1840537_14",
"text": "Code-mixing is a linguistic phenomenon where multiple languages are used in the same occurrence that is increasingly common in multilingual societies. Codemixed content on social media is also on the rise, prompting the need for tools to automatically understand such content. Automatic Parts-of-Speech (POS) tagging is an essential step in any Natural Language Processing (NLP) pipeline, but there is a lack of annotated data to train such models. In this work, we present a unique language tagged and POS-tagged dataset of code-mixed English-Hindi tweets related to five incidents in India that led to a lot of Twitter activity. Our dataset is unique in two dimensions: (i) it is larger than previous annotated datasets and (ii) it closely resembles typical real-world tweets. Additionally, we present a POS tagging model that is trained on this dataset to provide an example of how this dataset can be used. The model also shows the efficacy of our dataset in enabling the creation of codemixed social media POS taggers.",
"title": ""
},
{
"docid": "neg:1840537_15",
"text": "The literature examining the relationship between cardiorespiratory fitness and the brain in older adults has increased rapidly, with 30 of 34 studies published since 2008. Here we review cross-sectional and exercise intervention studies in older adults examining the relationship between cardiorespiratory fitness and brain structure and function, typically assessed using Magnetic Resonance Imaging (MRI). Studies of patients with Alzheimer's disease are discussed when available. The structural MRI studies revealed a consistent positive relationship between cardiorespiratory fitness and brain volume in cortical regions including anterior cingulate, lateral prefrontal, and lateral parietal cortex. Support for a positive relationship between cardiorespiratory fitness and medial temporal lobe volume was less consistent, although evident when a region-of-interest approach was implemented. In fMRI studies, cardiorespiratory fitness in older adults was associated with activation in similar regions as those identified in the structural studies, including anterior cingulate, lateral prefrontal, and lateral parietal cortex, despite heterogeneity among the functional tasks implemented. This comprehensive review highlights the overlap in brain regions showing a positive relationship with cardiorespiratory fitness in both structural and functional imaging modalities. The findings suggest that aerobic exercise and cardiorespiratory fitness contribute to healthy brain aging, although additional studies in Alzheimer's disease are needed.",
"title": ""
},
{
"docid": "neg:1840537_16",
"text": "Ensuring reliable access to clean and affordable water is one of the greatest global challenges of this century. As the world's population increases, water pollution becomes more complex and difficult to remove, and global climate change threatens to exacerbate water scarcity in many areas, the magnitude of this challenge is rapidly increasing. Wastewater reuse is becoming a common necessity, even as a source of potable water, but our separate wastewater collection and water supply systems are not designed to accommodate this pressing need. Furthermore, the aging centralized water and wastewater infrastructure in the developed world faces growing demands to produce higher quality water using less energy and with lower treatment costs. In addition, it is impractical to establish such massive systems in developing regions that currently lack water and wastewater infrastructure. These challenges underscore the need for technological innovation to transform the way we treat, distribute, use, and reuse water toward a distributed, differential water treatment and reuse paradigm (i.e., treat water and wastewater locally only to the required level dictated by the intended use). Nanotechnology offers opportunities to develop next-generation water supply systems. This Account reviews promising nanotechnology-enabled water treatment processes and provides a broad view on how they could transform our water supply and wastewater treatment systems. The extraordinary properties of nanomaterials, such as high surface area, photosensitivity, catalytic and antimicrobial activity, electrochemical, optical, and magnetic properties, and tunable pore size and surface chemistry, provide useful features for many applications. These applications include sensors for water quality monitoring, specialty adsorbents, solar disinfection/decontamination, and high performance membranes. More importantly, the modular, multifunctional and high-efficiency processes enabled by nanotechnology provide a promising route both to retrofit aging infrastructure and to develop high performance, low maintenance decentralized treatment systems including point-of-use devices. Broad implementation of nanotechnology in water treatment will require overcoming the relatively high costs of nanomaterials by enabling their reuse and mitigating risks to public and environmental health by minimizing potential exposure to nanoparticles and promoting their safer design. The development of nanotechnology must go hand in hand with environmental health and safety research to alleviate unintended consequences and contribute toward sustainable water management.",
"title": ""
},
{
"docid": "neg:1840537_17",
"text": "An increasing interest in understanding human perception in social media has led to the study of the processes of personality self-presentation and impression formation based on user profiles and text blogs. However, despite the popularity of online video, we do not know of any attempt to study personality impressions that go beyond the use of text and still photos. In this paper, we analyze one facet of YouTube as a repository of brief behavioral slices in the form of personal conversational vlogs, which are a unique medium for selfpresentation and interpersonal perception. We investigate the use of nonverbal cues as descriptors of vloggers’ behavior and find significant associations between automatically extracted nonverbal cues for several personality judgments. As one notable result, audio and visual cues together can be used to predict 34% of the variance of the Extraversion trait of the Big Five model. In addition, we explore the associations between vloggers’ personality scores and the level of social attention that their videos received in YouTube. Our study is conducted on a dataset of 442 YouTube vlogs and 2,210 annotations collected using Amazon’s Mechanical Turk.",
"title": ""
},
{
"docid": "neg:1840537_18",
"text": "Stability in cluster analysis is strongly dependent on the data set, especially on how well separated and how homogeneous the clusters are. In the same clustering, some clusters may be very stable and others may be extremely unstable. The Jaccard coefficient, a similarity measure between sets, is used as a clusterwise measure of cluster stability, which is assessed by the bootstrap distribution of the Jaccard coefficient for every single cluster of a clustering compared to the most similar cluster in the bootstrapped data sets. This can be applied to very general cluster analysis methods. Some alternative resampling methods are investigated as well, namely subsetting, jittering the data points and replacing some data points by artificial noise points. The different methods are compared by means of a simulation study. A data example illustrates the use of the cluster-wise stability assessment to distinguish between meaningful stable and spurious clusters, but it is also shown that clusters are sometimes only stable because of the inflexibility of certain clustering methods.",
"title": ""
},
{
"docid": "neg:1840537_19",
"text": "The purpose of this article is to introduce evidence-based concepts and demonstrate how to find valid evidence to answer clinical questions. Evidence-based decision making (EBDM) requires understanding new concepts and developing new skills including how to: ask good clinical questions, conduct a computerized search, critically appraise the evidence, apply the results in clinical practice, and evaluate the process. This approach recognizes that clinicians can never be completely current with all conditions, medications, materials, or available products. Thus EBDM provides a mechanism for addressing these gaps in knowledge in order to provide the best care possible. In Part 1, a case scenario demonstrates the application of the skills involved in structuring a clinical question and conducting an online search using PubMed. Practice tips are provided along with online resources related to the evidence-based process.",
"title": ""
}
] |
1840538 | BackFi: High Throughput WiFi Backscatter | [
{
"docid": "pos:1840538_0",
"text": "This paper presents a 116nW wake-up radio complete with crystal reference, interference compensation, and baseband processing, such that a selectable 31-bit code is required to toggle a wake-up signal. The front-end operates over a broad frequency range, tuned by an off-chip band-select filter and matching network, and is demonstrated in the 402-405MHz MICS band and the 915MHz and 2.4GHz ISM bands with sensitivities of -45.5dBm, -43.4dBm, and -43.2dBm, respectively. Additionally, the baseband processor implements automatic threshold feedback to detect the presence of interferers and dynamically adjust the receiver's sensitivity, mitigating the jamming problem inherent to previous energy-detection wake-up radios. The wake-up radio has a raw OOK chip-rate of 12.5kbps, an active area of 0.35mm2 and operates using a 1.2V supply for the crystal reference and RF demodulation, and a 0.5V supply for subthreshold baseband processing.",
"title": ""
}
] | [
{
"docid": "neg:1840538_0",
"text": "Paraphrases are sentences or phrases that convey the same meaning using different wording. Although the logical definition of paraphrases requires strict semantic equivalence, linguistics accepts a broader, approximate, equivalence—thereby allowing far more examples of “quasi-paraphrase.” But approximate equivalence is hard to define. Thus, the phenomenon of paraphrases, as understood in linguistics, is difficult to characterize. In this article, we list a set of 25 operations that generate quasi-paraphrases. We then empirically validate the scope and accuracy of this list by manually analyzing random samples of two publicly available paraphrase corpora. We provide the distribution of naturally occurring quasi-paraphrases in English text.",
"title": ""
},
{
"docid": "neg:1840538_1",
"text": "Resource allocation efficiency and energy consumption are among the top concerns to today's Cloud data center. Finding the optimal point where users' multiple job requests can be accomplished timely with minimum electricity and hardware cost is one of the key factors for system designers and managers to optimize the system configurations. Understanding the characteristics of the distribution of user task is an essential step for this purpose. At large-scale Cloud Computing data centers, a precise workload prediction will significantly help designers and operators to schedule hardware/software resources and power supplies in a more efficient manner, and make appropriate decisions to upgrade the Cloud system when the workload grows. While a lot of study has been conducted for hypervisor-based Cloud, container-based virtualization is becoming popular because of the low overhead and high efficiency in utilizing computing resources. In this paper, we have studied a set of real-world container data center traces from part of Google's cluster. We investigated the distribution of job duration, waiting time and machine utilization and the number of jobs submitted in a fix time period. Based on the quantitative study, an Ensemble Workload Prediction (EnWoP) method and a novel prediction evaluation parameter called Cloud Workload Correction Rate (C-Rate) have been proposed. The experimental results have verified that the EnWoP method achieved high prediction accuracy and the C-Rate evaluates the prediction methods more objective.",
"title": ""
},
{
"docid": "neg:1840538_2",
"text": "The capability of Corynebacterium glutamicum for glucose-based synthesis of itaconate was explored, which can serve as building block for production of polymers, chemicals, and fuels. C. glutamicum was highly tolerant to itaconate and did not metabolize it. Expression of the Aspergillus terreus CAD1 gene encoding cis-aconitate decarboxylase (CAD) in strain ATCC13032 led to the production of 1.4mM itaconate in the stationary growth phase. Fusion of CAD with the Escherichia coli maltose-binding protein increased its activity and the itaconate titer more than two-fold. Nitrogen-limited growth conditions boosted CAD activity and itaconate titer about 10-fold to values of 1440 mU mg(-1) and 30 mM. Reduction of isocitrate dehydrogenase activity via exchange of the ATG start codon to GTG or TTG resulted in maximal itaconate titers of 60 mM (7.8 g l(-1)), a molar yield of 0.4 mol mol(-1), and a volumetric productivity of 2.1 mmol l(-1) h(-1).",
"title": ""
},
{
"docid": "neg:1840538_3",
"text": "The ambitious goals set for 5G wireless networks, which are expected to be introduced around 2020, require dramatic changes in the design of different layers for next generation communications systems. Massive MIMO systems, filter bank multi-carrier modulation, relaying technologies, and millimeter-wave communications have been considered as some of the strong candidates for the physical layer design of 5G networks. In this article, we shed light on the potential and implementation of IM techniques for MIMO and multi-carrier communications systems, which are expected to be two of the key technologies for 5G systems. Specifically, we focus on two promising applications of IM: spatial modulation and orthogonal frequency-division multiplexing with IM, and discuss the recent advances and future research directions in IM technologies toward spectrum- and energy-efficient 5G wireless networks.",
"title": ""
},
{
"docid": "neg:1840538_4",
"text": "This paper analyzes the Sampled Value (SV) Process Bus concept that was recently introduced by the IEC 61850-9-2 standard. This standard proposes that the Current and Voltage Transformer (CT, PT) outputs that are presently hard wired to various devices (relays, meters, IED, and SCADA) be digitized at the source and then communicated to those devices using an Ethernet-Based Local Area Network (LAN). The approach is especially interesting for modern optical CT/PT devices that possess high quality information about the primary voltage/current waveforms, but are often forced to degrade output signal accuracy in order to meet traditional analog interface requirements (5 A/120 V). While very promising, the SV-based process bus brings along a distinct set of issues regarding the overall reliability of the new Ethernet communications-based protection and control system. This paper looks at the Merging Unit Concept, analyzes the protection system reliability in the process bus environment, and proposes an alternate approach that can be used to successfully deploy this technology. Multiple scenarios used with the associated equipment configurations are compared. Additional issues that need to be addressed by various standards bodies and interoperability challenges posed by the SV process bus LAN on real-time monitoring and control applications (substation HMI, SCADA, engineering access) are also identified.",
"title": ""
},
{
"docid": "neg:1840538_5",
"text": "The glucagon-like peptides include glucagon, GLP-1, and GLP-2, and exert diverse actions on nutrient intake, gastrointestinal motility, islet hormone secretion, cell proliferation and apoptosis, nutrient absorption, and nutrient assimilation. GIP, a related member of the glucagon peptide superfamily, also regulates nutrient disposal via stimulation of insulin secretion. The actions of these peptides are mediated by distinct members of the glucagon receptor superfamily of G protein-coupled receptors. These receptors exhibit unique patterns of tissue-specific expression, exhibit considerable amino acid sequence identity, and share similar structural and functional properties with respect to ligand binding and signal transduction. This article provides an overview of the biology of these receptors with an emphasis on understanding the unique actions of glucagon-related peptides through studies of the biology of their cognate receptors.",
"title": ""
},
{
"docid": "neg:1840538_6",
"text": "Mapping word embeddings of different languages into a single space has multiple applications. In order to map from a source space into a target space, a common approach is to learn a linear mapping that minimizes the distances between equivalences listed in a bilingual dictionary. In this paper, we propose a framework that generalizes previous work, provides an efficient exact method to learn the optimal linear transformation and yields the best bilingual results in translation induction while preserving monolingual performance in an analogy task.",
"title": ""
},
{
"docid": "neg:1840538_7",
"text": "A knowledgeable observer of a game of football (soccer) can make a subjective evaluation of the quality of passes made between players during the game, such as rating them as Good, OK, or Bad. In this article, we consider the problem of producing an automated system to make the same evaluation of passes and present a model to solve this problem.\n Recently, many professional football leagues have installed object tracking systems in their stadiums that generate high-resolution and high-frequency spatiotemporal trajectories of the players and the ball. Beginning with the thesis that much of the information required to make the pass ratings is available in the trajectory signal, we further postulated that using complex data structures derived from computational geometry would enable domain football knowledge to be included in the model by computing metric variables in a principled and efficient manner. We designed a model that computes a vector of predictor variables for each pass made and uses machine learning techniques to determine a classification function that can accurately rate passes based only on the predictor variable vector.\n Experimental results show that the learned classification functions can rate passes with 90.2% accuracy. The agreement between the classifier ratings and the ratings made by a human observer is comparable to the agreement between the ratings made by human observers, and suggests that significantly higher accuracy is unlikely to be achieved. Furthermore, we show that the predictor variables computed using methods from computational geometry are among the most important to the learned classifiers.",
"title": ""
},
{
"docid": "neg:1840538_8",
"text": "INTRODUCTION\nSevere motion sickness is easily identifiable with sufferers showing obvious behavioral signs, including emesis (vomiting). Mild motion sickness and sopite syndrome lack such clear and objective behavioral markers. We postulate that yawning may have the potential to be used in operational settings as such a marker. This study assesses the utility of yawning as a behavioral marker for the identification of soporific effects by investigating the association between yawning and mild motion sickness/sopite syndrome in a controlled environment.\n\n\nMETHODS\nUsing a randomized motion-counterbalanced design, we collected yawning and motion sickness data from 39 healthy individuals (34 men and 5 women, ages 27-59 yr) in static and motion conditions. Each individual participated in two 1-h sessions. Each session consisted of six 10-min blocks. Subjects performed a multitasking battery on a head mounted display while seated on the moving platform. The occurrence and severity of symptoms were assessed with the Motion Sickness Assessment Questionnaire (MSAQ).\n\n\nRESULTS\nYawning occurred predominantly in the motion condition. All yawners in motion (N = 5) were symptomatic. Compared to nonyawners (MSAQ indices: Total = 14.0, Sopite = 15.0), subjects who yawned in motion demonstrated increased severity of motion sickness and soporific symptoms (MSAQ indices: Total = 17.2, Sopite = 22.4), and reduced multitasking cognitive performance (Composite score: nonyawners = 1348; yawners = 1145).\n\n\nDISCUSSION\nThese results provide evidence that yawning may be a viable behavioral marker to recognize the onset of soporific effects and their concomitant reduction in cognitive performance.",
"title": ""
},
{
"docid": "neg:1840538_9",
"text": "This paper presents a fast and robust level set method for image segmentation. To enhance the robustness against noise, we embed a Markov random field (MRF) energy function to the conventional level set energy function. This MRF energy function builds the correlation of a pixel with its neighbors and encourages them to fall into the same region. To obtain a fast implementation of the MRF embedded level set model, we explore algebraic multigrid (AMG) and sparse field method (SFM) to increase the time step and decrease the computation domain, respectively. Both AMG and SFM can be conducted in a parallel fashion, which facilitates the processing of our method for big image databases. By comparing the proposed fast and robust level set method with the standard level set method and its popular variants on noisy synthetic images, synthetic aperture radar (SAR) images, medical images, and natural images, we comprehensively demonstrate the new method is robust against various kinds of noises. In particular, the new level set method can segment an image of size 500 × 500 within 3 s on MATLAB R2010b installed in a computer with 3.30-GHz CPU and 4-GB memory.",
"title": ""
},
{
"docid": "neg:1840538_10",
"text": "A design of a novel wireless implantable blood pressure sensing microsystem for advanced biological research is presented. The system employs a miniature instrumented elastic cuff, wrapped around a blood vessel, for small laboratory animal real-time blood pressure monitoring. The elastic cuff is made of biocompatible soft silicone material by a molding process and is filled by insulating silicone oil with an immersed MEMS capacitive pressure sensor interfaced with low-power integrated electronic system. This technique avoids vessel penetration and substantially minimizes vessel restriction due to the soft cuff elasticity, and is thus attractive for long-term implant. The MEMS pressure sensor detects the coupled blood pressure waveform caused by the vessel expansion and contraction, followed by amplification, 11-bit digitization, and wireless FSK data transmission to an external receiver. The integrated electronics are designed with capability of receiving RF power from an external power source and converting the RF signal to a stable 2 V DC supply in an adaptive manner to power the overall implant system, thus enabling the realization of stand-alone batteryless implant microsystem. The electronics are fabricated in a 1.5 μm CMOS process and occupy an area of 2 mm × 2 mm. The prototype monitoring cuff is wrapped around the right carotid artery of a laboratory rat to measure real-time blood pressure waveform. The measured in vivo blood waveform is compared with a reference waveform recorded simultaneously using a commercial catheter-tip transducer inserted into the left carotid artery. The two measured waveforms are closely matched with a constant scaling factor. The ASIC is interfaced with a 5-mm-diameter RF powering coil with four miniature surface-mounted components (one inductor and three capacitors) over a thin flexible substrate by bond wires, followed by silicone coating and packaging with the prototype blood pressure monitoring cuff. The overall system achieves a measured average sensitivity of 7 LSB/ mmHg, a nonlinearity less than 2.5% of full scale, and a hysteresis less than 1% of full scale. From noise characterization, a blood vessel pressure change sensing resolution 328 of 1 mmHg can be expected. The system weighs 330 mg, representing an order of magnitude mass reduction compared with state-of-the-art commercial technology.",
"title": ""
},
{
"docid": "neg:1840538_11",
"text": "We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions MAPLE estimates the expected learning gains for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of question in the target set and updates it in real-time according to students’ progress. We show in simulations that MAPLE was able to improve students’ learning gains compared to approaches that sequence questions in increasing level of difficulty, or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.",
"title": ""
},
{
"docid": "neg:1840538_12",
"text": "A 1.2 V 4 Gb DDR4 SDRAM is presented in a 30 nm CMOS technology. DDR4 SDRAM is developed to raise memory bandwidth with lower power consumption compared with DDR3 SDRAM. Various functions and circuit techniques are newly adopted to reduce power consumption and secure stable transaction. First, dual error detection scheme is proposed to guarantee the reliability of signals. It is composed of cyclic redundancy check (CRC) for DQ channel and command-address (CA) parity for command and address channel. For stable reception of high speed signals, a gain enhanced buffer and PVT tolerant data fetch scheme are adopted for CA and DQ respectively. To reduce the output jitter, the type of delay line is selected depending on data rate at initial stage. As a result, test measurement shows 3.3 Gb/s DDR operation at 1.14 V.",
"title": ""
},
{
"docid": "neg:1840538_13",
"text": "The problem of creating fair ship design curves is of major importance in Computer Aided Ship Design environment. The fairness of these curves is generally considered a subjective notion depending on the judgement of the designer (eg., visually pleasing, minimum variation of curvature, devoid of unnecessary bumps or wiggles, satisfying certain continuity requirements). Thus an automated fairing process based on objective criteria is clearly desirable. This paper presents an automated fairing algorithm for ship curves to satisfy objective geometric constraints. This procedure is based on the use of optimisation tools and cubic B-spline functions. The aim is to produce curves with a more gradual variation of curvature without deteriorating initial shapes. The optimisation based fairing procedure is applied to a variety of plane ship sections to demonstrate the capability and flexibility of the methodology. The resulting curves, with their corresponding curvature plots indicate that, provided that the designer can specify his objectives and constraints clearly, the procedure will generate fair ship definition curves within the constrained design space.",
"title": ""
},
{
"docid": "neg:1840538_14",
"text": "This paper presents new image sensors with multi- bucket pixels that enable time-multiplexed exposure, an alter- native imaging approach. This approach deals nicely with scene motion, and greatly improves high dynamic range imaging, structured light illumination, motion corrected photography, etc. To implement an in-pixel memory or a bucket, the new image sensors incorporate the virtual phase CCD concept into a standard 4-transistor CMOS imager pixel. This design allows us to create a multi-bucket pixel which is compact, scalable, and supports true correlated double sampling to cancel kTC noise. Two image sensors with dual and quad-bucket pixels have been designed and fabricated. The dual-bucket sensor consists of a 640H × 576V array of 5.0 μm pixel in 0.11 μm CMOS technology while the quad-bucket sensor comprises 640H × 512V array of 5.6 μm pixel in 0.13 μm CMOS technology. Some computational photography applications were implemented using the two sensors to demonstrate their values in eliminating artifacts that currently plague computational photography.",
"title": ""
},
{
"docid": "neg:1840538_15",
"text": "PURPOSE\nThe purpose of this article is to provide an overview of our previous work on roll-over shapes, which are the effective rocker shapes that the lower limb systems conform to during walking.\n\n\nMETHOD\nThis article is a summary of several recently published articles from the Northwestern University Prosthetics Research Laboratory and Rehabilitation Engineering Research Program on the topic of roll-over shapes. The roll-over shape is a measurement of centre of pressure of the ground reaction force in body-based coordinates. This measurement is interpreted as the effective rocker shape created by lower limb systems during walking.\n\n\nRESULTS\nOur studies have shown that roll-over shapes in able-bodied subjects do not change appreciably for conditions of level ground walking, including walking at different speeds, while carrying different amounts of weight, while wearing shoes of different heel heights, or when wearing shoes with different rocker radii. In fact, results suggest that able-bodied humans will actively change their ankle movements to maintain the same roll-over shapes.\n\n\nCONCLUSIONS\nThe consistency of the roll-over shapes to level surface walking conditions has provided insight for design, alignment and evaluation of lower limb prostheses and orthoses. Changes to ankle-foot and knee-ankle-foot roll-over shapes for ramp walking conditions have suggested biomimetic (i.e. mimicking biology) strategies for adaptable ankle-foot prostheses and orthoses.",
"title": ""
},
{
"docid": "neg:1840538_16",
"text": "Introduction Vision is the primary sensory modality for humans—and most other mammals—by which they perceive the world. In humans, vision-related areas occupy about 30% of the neocortex. Light rays are projected upon the retina, and the brain tries to make sense of the world by means of interpreting the visual input pattern. The sensitivity and specificity with which the brain solves this computationally complex problem cannot yet be replicated on a computer. The most imposing of these problems is that of invariant visual pattern recognition. Recently it has been said that the prediction of future sensory input from salient features of current input is the keystone of intelligence. The neocortex is the structure in the brain which is assumed to be responsible for the evolution of intelligence. Current sensory input patterns activate stored traces of previous inputs which then generate top-down expectations, which are verified against the bottom-up input signals. If the verification succeeds, the predicted pattern is recognised. This theory explains how humans, and mammals in general, can recognise images despite changes in location, size and lighting conditions, and in the presence of deformations and large amounts of noise. Parts of this theory, known as the memory-prediction theory (MPT), are modelled in the Hierarchical Temporal Memory or HTM technology developed by a company called Numenta; the model is an attempt to replicate the structural and algorithmic properties of the neocortex. Spatial and temporal relations between features of the sensory signals are formed in an hierarchical memory architecture during a learning process. When a new pattern arrives, the recognition process can be viewed as choosing the stored representation that best predicts the pattern. Hierarchical Temporal Memory has been successfully applied to the recognition of relatively simple images, showing invariance across several transformations and robustness with respect to noisy patterns. We have applied the concept of HTM, as implemented by Numenta, to land-use recognition, by building and testing a system to learn to recognise five different types of land use. Overview of the HTM learning algorithm Hierarchical Temporal Memory can be considered a form of a Bayesian network, where the network consists of a collection of nodes arranged in a tree-shaped hierarchy. Each node in the hierarchy self-discovers a set of causes in its input, through a process of finding common spatial patterns and then detecting common temporal patterns. Unlike many Bayesian networks, HTMs are self-training, have a well-defined parent/child relationship between each node, inherently handle time-varying data and afford mechanisms for covert attention. Sensory data are presented at the bottom of the hierarchy. To train an HTM, it is necessary to present continuous, time-varying, sensory inputs while the causes underlying the same sensory data persist in the environment. In other words, you either move the senses of the HTM through the world, or the objects in the world move relative to the HTM’s senses. Time is the fundamental component of an HTM, and can be thought of as a learning supervisor. Hierarchical Temporal Memory networks are made of nodes; each node receives as input a temporal sequence of patterns. The goal of each node is to group input patterns that are likely to have the same cause, thereby forming invariant representations of extrinsic causes. An HTM node uses two grouping mechanisms to form invariants (Fig. 1). 
The first mechanism is called spatial pooling, in which raw data are received by the sensor; spatial poolers of higher nodes receive the outputs from their child nodes. The input of the spatial pooler in higher layers is the fixed-order concatenation of the output of its children. This input is represented by row vectors, and the role of the spatial pooler is to build a matrix (the coincidence matrix) from input vectors that occur frequently. There are multiple spatial pooler algorithms, e.g. Gaussian and Product. The Gaussian spatial pooler algorithm is used for nodes at the input layer, whereas the nodes higher up the hierarchy use the Product spatial pooler. The Gaussian spatial pooler algorithm compares the raw input vectors with the existing coincidences in the coincidence matrix. If the Euclidean distance between an input vector and an existing coincidence is small enough, the input is considered to be the same coincidence, and the count for that coincidence is incremented and stored in memory. 370 South African Journal of Science 105, September/October 2009 Research Articles",
"title": ""
},
{
"docid": "neg:1840538_17",
"text": "Stochastic local search (SLS) algorithms are well known for their ability to efficiently find models of random instances of the Boolean satisfiability (SAT) problem. One of the most famous SLS algorithms for SAT is WalkSAT, which is an initial algorithm that has wide influence and performs very well on random 3-SAT instances. However, the performance of WalkSAT on random k-SAT instances with k > 3 lags far behind. Indeed, there are limited works on improving SLS algorithms for such instances. This work takes a good step toward this direction. We propose a novel concept namely multilevel make. Based on this concept, we design a scoring function called linear make, which is utilized to break ties in WalkSAT, leading to a new algorithm called WalkSATlm. Our experimental results show that WalkSATlm improves WalkSAT by orders of magnitude on random k-SAT instances with k > 3 near the phase transition. Additionally, we propose an efficient implementation for WalkSATlm, which leads to a speedup of 100%. We also give some insights on different forms of linear make functions, and show the limitation of the linear make function on random 3-SAT through theoretical analysis.",
"title": ""
},
{
"docid": "neg:1840538_18",
"text": "This paper presents a robot aimed to assist the shoulder movements of stroke patients during their rehabilitation process. This robot has the general form of an exoskeleton, but is characterized by an action principle on the patient no longer requiring a tedious and accurate alignment of the robot and patient's joints. It is constituted of a poly-articulated structure whose actuation is deported and transmission is ensured by Bowden cables. It manages two of the three rotational degrees of freedom (DOFs) of the shoulder. Quite light and compact, its proximal end can be rigidly fixed to the patient's back on a rucksack structure. As for its distal end, it is connected to the arm through passive joints and a splint guaranteeing the robot action principle, i.e. exert a force perpendicular to the patient's arm, whatever its configuration. This paper also presents a first prototype of this robot and some experimental results such as the arm angular excursions reached with the robot in the three joint planes.",
"title": ""
},
{
"docid": "neg:1840538_19",
"text": "It is widely believed that the employee participation may affect employee’s job satisfaction; employee productivity, employee commitment and they all can create comparative advantage for the organization. The main intention of this study was to find out relationship among employee participation, job satisfaction, employee productivity and employee commitment. For the matter 34 organizations from Oil & Gas, Banking and Telecommunication sectors were contacted, of which 15 responded back. The findings of this study are that employee participation not only an important determinant of job satisfaction components. Increasing employee participation will have a positive effect on employee’s job satisfaction, employee commitment and employee productivity. Naturally increasing employee participation is a long-term process, which demands both attention from management side and initiative from the employee side.",
"title": ""
}
] |
1840539 | A Fine-Grained Performance Model of Cloud Computing Centers | [
{
"docid": "pos:1840539_0",
"text": "Reliable performance evaluations require the use of representative workloads. This is no easy task because modern computer systems and their workloads are complex, with many interrelated attributes and complicated structures. Experts often use sophisticated mathematics to analyze and describe workload models, making these models difficult for practitioners to grasp. This book aims to close this gap by emphasizing the intuition and the reasoning behind the definitions and derivations related to the workload models. It provides numerous examples from real production systems, with hundreds of graphs. Using this book, readers will be able to analyze collected workload data and clean it if necessary, derive statistical models that include skewed marginal distributions and correlations, and consider the need for generative models and feedback from the system. The descriptive statistics techniques covered are also useful for other domains.",
"title": ""
}
] | [
{
"docid": "neg:1840539_0",
"text": "Timely and accurate classification and interpretation of high-resolution images are very important for urban planning and disaster rescue. However, as spatial resolution gets finer, it is increasingly difficult to recognize complex patterns in high-resolution remote sensing images. Deep learning offers an efficient strategy to fill the gap between complex image patterns and their semantic labels. However, due to the hierarchical abstract nature of deep learning methods, it is difficult to capture the precise outline of different objects at the pixel level. To further reduce this problem, we propose an object-based deep learning method to accurately classify the high-resolution imagery without intensive human involvement. In this study, high-resolution images were used to accurately classify three different urban scenes: Beijing (China), Pavia (Italy), and Vaihingen (Germany). The proposed method is built on a combination of a deep feature learning strategy and an object-based classification for the interpretation of high-resolution images. Specifically, high-level feature representations extracted through the convolutional neural networks framework have been systematically investigated over five different layer configurations. Furthermore, to improve the classification accuracy, an object-based classification method also has been integrated with the deep learning strategy for more efficient image classification. Experimental results indicate that with the combination of deep learning and object-based classification, it is possible to discriminate different building types in Beijing Scene, such as commercial buildings and residential buildings with classification accuracies above 90%.",
"title": ""
},
{
"docid": "neg:1840539_1",
"text": "In this paper, we use time-series modeling to forecast taxi travel demand, in the context of a mobile application-based taxi hailing service. In particular, we model the passenger demand density at various locations in the city of Bengaluru, India. Using the data, we first shortlist time-series models that suit our application. We then analyse the performance of these models by using Mean Absolute Percentage Error (MAPE) as the performance metric. In order to improve the model performance, we employ a multi-level clustering technique where we aggregate demand over neighboring cells/geohashes. We observe that the improved model based on clustering leads to a forecast accuracy of 80% per km2. In addition, our technique obtains an accuracy of 89% per km2 for the most frequently occurring use case.",
"title": ""
},
{
"docid": "neg:1840539_2",
"text": "Optimal real-time distributed V2G and G2V management of electric vehicles Sonja Stüdli, Emanuele Crisostomi, Richard Middleton & Robert Shorten a Centre for Complex Dynamic Systems and Control, The University of Newcastle, New South Wales, Australia b Department of Energy, Systems, Territory and Constructions, University of Pisa, Pisa, Italy c IBM Research, Dublin, Ireland Accepted author version posted online: 10 Dec 2013.Published online: 05 Feb 2014.",
"title": ""
},
{
"docid": "neg:1840539_3",
"text": "BACKGROUND\nOutcomes are poor for patients with previously treated, advanced or metastatic non-small-cell lung cancer (NSCLC). The anti-programmed death ligand 1 (PD-L1) antibody atezolizumab is clinically active against cancer, including NSCLC, especially cancers expressing PD-L1 on tumour cells, tumour-infiltrating immune cells, or both. We assessed efficacy and safety of atezolizumab versus docetaxel in previously treated NSCLC, analysed by PD-L1 expression levels on tumour cells and tumour-infiltrating immune cells and in the intention-to-treat population.\n\n\nMETHODS\nIn this open-label, phase 2 randomised controlled trial, patients with NSCLC who progressed on post-platinum chemotherapy were recruited in 61 academic medical centres and community oncology practices across 13 countries in Europe and North America. Key inclusion criteria were Eastern Cooperative Oncology Group performance status 0 or 1, measurable disease by Response Evaluation Criteria In Solid Tumors version 1.1 (RECIST v1.1), and adequate haematological and end-organ function. Patients were stratified by PD-L1 tumour-infiltrating immune cell status, histology, and previous lines of therapy, and randomly assigned (1:1) by permuted block randomisation (with a block size of four) using an interactive voice or web system to receive intravenous atezolizumab 1200 mg or docetaxel 75 mg/m(2) once every 3 weeks. Baseline PD-L1 expression was scored by immunohistochemistry in tumour cells (as percentage of PD-L1-expressing tumour cells TC3≥50%, TC2≥5% and <50%, TC1≥1% and <5%, and TC0<1%) and tumour-infiltrating immune cells (as percentage of tumour area: IC3≥10%, IC2≥5% and <10%, IC1≥1% and <5%, and IC0<1%). The primary endpoint was overall survival in the intention-to-treat population and PD-L1 subgroups at 173 deaths. Biomarkers were assessed in an exploratory analysis. We assessed safety in all patients who received at least one dose of study drug. This study is registered with ClinicalTrials.gov, number NCT01903993.\n\n\nFINDINGS\nPatients were enrolled between Aug 5, 2013, and March 31, 2014. 144 patients were randomly allocated to the atezolizumab group, and 143 to the docetaxel group. 142 patients received at least one dose of atezolizumab and 135 received docetaxel. Overall survival in the intention-to-treat population was 12·6 months (95% CI 9·7-16·4) for atezolizumab versus 9·7 months (8·6-12·0) for docetaxel (hazard ratio [HR] 0·73 [95% CI 0·53-0·99]; p=0·04). Increasing improvement in overall survival was associated with increasing PD-L1 expression (TC3 or IC3 HR 0·49 [0·22-1·07; p=0·068], TC2/3 or IC2/3 HR 0·54 [0·33-0·89; p=0·014], TC1/2/3 or IC1/2/3 HR 0·59 [0·40-0·85; p=0·005], TC0 and IC0 HR 1·04 [0·62-1·75; p=0·871]). In our exploratory analysis, patients with pre-existing immunity, defined by high T-effector-interferon-γ-associated gene expression, had improved overall survival with atezolizumab. 11 (8%) patients in the atezolizumab group discontinued because of adverse events versus 30 (22%) patients in the docetaxel group. 16 (11%) patients in the atezolizumab group versus 52 (39%) patients in the docetaxel group had treatment-related grade 3-4 adverse events, and one (<1%) patient in the atezolizumab group versus three (2%) patients in the docetaxel group died from a treatment-related adverse event.\n\n\nINTERPRETATION\nAtezolizumab significantly improved survival compared with docetaxel in patients with previously treated NSCLC. 
Improvement correlated with PD-L1 immunohistochemistry expression on tumour cells and tumour-infiltrating immune cells, suggesting that PD-L1 expression is predictive for atezolizumab benefit. Atezolizumab was well tolerated, with a safety profile distinct from chemotherapy.\n\n\nFUNDING\nF Hoffmann-La Roche/Genentech Inc.",
"title": ""
},
{
"docid": "neg:1840539_4",
"text": "Search engines are increasingly relying on large knowledge bases of facts to provide direct answers to users’ queries. However, the construction of these knowledge bases is largely manual and does not scale to the long and heavy tail of facts. Open information extraction tries to address this challenge, but typically assumes that facts are expressed with verb phrases, and therefore has had difficulty extracting facts for noun-based relations. We describe ReNoun, an open information extraction system that complements previous efforts by focusing on nominal attributes and on the long tail. ReNoun’s approach is based on leveraging a large ontology of noun attributes mined from a text corpus and from user queries. ReNoun creates a seed set of training data by using specialized patterns and requiring that the facts mention an attribute in the ontology. ReNoun then generalizes from this seed set to produce a much larger set of extractions that are then scored. We describe experiments that show that we extract facts with high precision and for attributes that cannot be extracted with verb-based techniques.",
"title": ""
},
{
"docid": "neg:1840539_5",
"text": "Due to the shared responsibility model of clouds, tenants have to manage the security of their workloads and data. Developing security solutions using VMs or containers creates further problems as these resources also need to be secured. In this paper, we advocate for taking a serverless approach by proposing six serverless design patterns to build security services in the cloud. For each design pattern, we describe the key advantages and present applications and services utilizing the pattern. Using the proposed patterns as building blocks, we introduce a threat-intelligence platform that collects logs from various sources, alerts malicious activities, and takes actions against such behaviors. We also discuss the limitations of serverless design and how future implementations can overcome those limitations.",
"title": ""
},
{
"docid": "neg:1840539_6",
"text": "This paper reviews the state-of-the-art research on multi-robot systems, with a focus on multi-robot cooperation and coordination. By primarily classifying multi-robot systems into active and passive cooperative systems, three main research topics of multi-robot systems are focused on: task allocation, multi-sensor fusion and localization. In addition, formation control and coordination methods for multi-robots are reviewed.",
"title": ""
},
{
"docid": "neg:1840539_7",
"text": "With the increasing popularity of herbomineral preparations in healthcare, a new proprietary herbomineral formulation was formulated with ashwagandha root extract and three minerals viz. zinc, magnesium, and selenium. The aim of the study was to evaluate the immunomodulatory potential of Biofield Energy Healing (The Trivedi Effect ® ) on the herbomineral formulation using murine splenocyte cells. The test formulation was divided into two parts. One was the control without the Biofield Energy Treatment. The other part was labelled the Biofield Energy Treated sample, which received the Biofield Energy Healing Treatment remotely by twenty renowned Biofield Energy Healers. Through MTT assay, all the test formulation concentrations from 0.00001053 to 10.53 μg/mL were found to be safe with cell viability ranging from 102.61% to 194.57% using splenocyte cells. The Biofield Treated test formulation showed a significant (p≤0.01) inhibition of TNF-α expression by 15.87%, 20.64%, 18.65%, and 20.34% at 0.00001053, 0.0001053, 0.01053, and 0.1053, μg/mL, respectively as compared to the vehicle control (VC) group. The level of TNF-α was reduced by 8.73%, 19.54%, and 14.19% at 0.001053, 0.01053, and 0.1053 μg/mL, respectively in the Biofield Treated test formulation compared to the untreated test formulation. The expression of IL-1β reduced by 22.08%, 23.69%, 23.00%, 16.33%, 25.76%, 16.10%, and 23.69% at 0.00001053, 0.0001053, 0.001053, 0.01053, 0.1053, 1.053 and 10.53 μg/mL, respectively compared to the VC. Additionally, the expression of MIP-1α significantly (p≤0.001) reduced by 13.35%, 22.96%, 25.11%, 22.71%, and 21.83% at 0.00001053, 0.0001053, 0.01053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation significantly down-regulated the MIP-1α expression by 10.75%, 9.53%, 9.57%, and 10.87% at 0.00001053, 0.01053, 0.1053 and 1.053 μg/mL, respectively compared to the untreated test formulation. The results showed the IFN-γ expression was also significantly (p≤0.001) reduced by 39.16%, 40.34%, 27.57%, 26.06%, 42.53%, and 48.91% at 0.0001053, 0.001053, 0.01053, 0.1053, 1.053, and 10.53 μg/mL, respectively in the Biofield Treated test formulation compared to the VC. The Biofield Treated test formulation showed better suppression of IFN-γ expression by 15.46%, 13.78%, International Journal of Biomedical Engineering and Clinical Science 2016; 2(1): 8-17 9 17.14%, and 13.11% at concentrations 0.001053, 0.01053, 0.1053, and 10.53 μg/mL, respectively compared to the untreated test formulation. Overall, the results demonstrated that The Trivedi Effect ® Biofield Energy Healing (TEBEH) has the capacity to potentiate the immunomodulatory and anti-inflammatory activity of the test formulation. Biofield Energy may also be useful in organ transplants, anti-aging, and stress management by improving overall health and quality of life.",
"title": ""
},
{
"docid": "neg:1840539_8",
"text": "CONTEXT\nMedical issues are widely reported in the mass media. These reports influence the general public, policy makers and health-care professionals. This information should be valid, but is often criticized for being speculative, inaccurate and misleading. An understanding of the obstacles medical reporters meet in their work can guide strategies for improving the informative value of medical journalism.\n\n\nOBJECTIVE\nTo investigate constraints on improving the informative value of medical reports in the mass media and elucidate possible strategies for addressing these.\n\n\nDESIGN\nWe reviewed the literature and organized focus groups, a survey of medical journalists in 37 countries, and semi-structured telephone interviews.\n\n\nRESULTS\nWe identified nine barriers to improving the informative value of medical journalism: lack of time, space and knowledge; competition for space and audience; difficulties with terminology; problems finding and using sources; problems with editors and commercialism. Lack of time, space and knowledge were the most common obstacles. The importance of different obstacles varied with the type of media and experience. Many health reporters feel that it is difficult to find independent experts willing to assist journalists, and also think that editors need more education in critical appraisal of medical news. Almost all of the respondents agreed that the informative value of their reporting is important. Nearly everyone wanted access to short, reliable and up-to-date background information on various topics available on the Internet. A majority (79%) was interested in participating in a trial to evaluate strategies to overcome identified constraints.\n\n\nCONCLUSIONS\nMedical journalists agree that the validity of medical reporting in the mass media is important. A majority acknowledge many constraints. Mutual efforts of health-care professionals and journalists employing a variety of strategies will be needed to address these constraints.",
"title": ""
},
{
"docid": "neg:1840539_9",
"text": "In needle-based medical procedures, beveled-tip flexible needles are steered inside soft tissue with the aim of reaching pre-defined target locations. The efficiency of needle-based interventions depends on accurate control of the needle tip. This paper presents a comprehensive mechanics-based model for simulation of planar needle insertion in soft tissue. The proposed model for needle deflection is based on beam theory, works in real-time, and accepts the insertion velocity as an input that can later be used as a control command for needle steering. The model takes into account the effects of tissue deformation, needle-tissue friction, tissue cutting force, and needle bevel angle on needle deflection. Using a robot that inserts a flexible needle into a phantom tissue, various experiments are conducted to separately identify different subsets of the model parameters. The validity of the proposed model is verified by comparing the simulation results to the empirical data. The results demonstrate the accuracy of the proposed model in predicting the needle tip deflection for different insertion velocities.",
"title": ""
},
{
"docid": "neg:1840539_10",
"text": "This paper proposes a novel method to detect fire and/or flames in real-time by processing the video data generated by an ordinary camera monitoring a scene. In addition to ordinary motion and color clues, flame and fire flicker is detected by analyzing the video in the wavelet domain. Quasi-periodic behavior in flame boundaries is detected by performing temporal wavelet transform. Color variations in flame regions are detected by computing the spatial wavelet transform of moving fire-colored regions. Another clue used in the fire detection algorithm is the irregularity of the boundary of the fire-colored region. All of the above clues are combined to reach a final decision. Experimental results show that the proposed method is very successful in detecting fire and/or flames. In addition, it drastically reduces the false alarms issued to ordinary fire-colored moving objects as compared to the methods using only motion and color clues. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840539_11",
"text": "While deep reinforcement learning (deep RL) agents are effective at maximizing rewards, it is often unclear what strategies they use to do so. In this paper, we take a step toward explaining deep RL agents through a case study using Atari 2600 environments. In particular, we focus on using saliency maps to understand how an agent learns and executes a policy. We introduce a method for generating useful saliency maps and use it to show 1) what strong agents attend to, 2) whether agents are making decisions for the right or wrong reasons, and 3) how agents evolve during learning. We also test our method on non-expert human subjects and find that it improves their ability to reason about these agents. Overall, our results show that saliency information can provide significant insight into an RL agent’s decisions and learning behavior.",
"title": ""
},
{
"docid": "neg:1840539_12",
"text": "Retrieval is the key process for understanding learning and for promoting learning, yet retrieval is not often granted the central role it deserves. Learning is typically identified with the encoding or construction of knowledge, and retrieval is considered merely the assessment of learning that occurred in a prior experience. The retrieval-based learning perspective outlined here is grounded in the fact that all expressions of knowledge involve retrieval and depend on the retrieval cues available in a given context. Further, every time a person retrieves knowledge, that knowledge is changed, because retrieving knowledge improves one’s ability to retrieve it again in the future. Practicing retrieval does not merely produce rote, transient learning; it produces meaningful, long-term learning. Yet retrieval practice is a tool many students lack metacognitive awareness of and do not use as often as they should. Active retrieval is an effective but undervalued strategy for promoting meaningful learning.",
"title": ""
},
{
"docid": "neg:1840539_13",
"text": "Cellular electron cryo-tomography enables the 3D visualization of cellular organization in the near-native state and at submolecular resolution. However, the contents of cellular tomograms are often complex, making it difficult to automatically isolate different in situ cellular components. In this paper, we propose a convolutional autoencoder-based unsupervised approach to provide a coarse grouping of 3D small subvolumes extracted from tomograms. We demonstrate that the autoencoder can be used for efficient and coarse characterization of features of macromolecular complexes and surfaces, such as membranes. In addition, the autoencoder can be used to detect non-cellular features related to sample preparation and data collection, such as carbon edges from the grid and tomogram boundaries. The autoencoder is also able to detect patterns that may indicate spatial interactions between cellular components. Furthermore, we demonstrate that our autoencoder can be used for weakly supervised semantic segmentation of cellular components, requiring a very small amount of manual annotation.",
"title": ""
},
{
"docid": "neg:1840539_14",
"text": "This exploratory research investigates how students and professionals use social network sites (SNSs) in the setting of developing and emerging countries. Data collection included focus groups consisting of medical students and faculty as well as the analysis of a Facebook site centred on medical and clinical topics. The findings show how users, both students and professionals, appropriate social network sites from their mobile phones as rich educational tools in informal learning contexts. First, unlike in previous studies, the analysis revealed explicit forms of educational content embedded in Facebook, such as quizzes and case presentations and associated deliberate (e-)learning practices which are typically found in (more) formal educational settings. Second, from a socio-cultural learning perspective, it is shown how the participation in such virtual professional communities across national boundaries permits the announcement and negotiation of occupational status and professional identities. Introduction and background Technologies for development and health in \"resource-limited\" environments Technological innovations have given hope that new ICT tools will result in the overall progress and well-being of developing countries, in particular with respect to health and education services. Great expectations are attached to the spread of mobile communication technologies. The number of mobile cellular subscriptions worldwide is currently 4.7 billion and increasing. This includes people in remote and rural areas and \"resource-limited\" settings (The World Bank, 2011). To a much lesser extent there is also a discussion on affordances of social network sites (SNSs) in such contexts (Marcelo, Adejumo, & Luna, 2011). Discourses and projects on ICT(4)D (information technology for development) or mHealth (mobile technology for health) tend to be based on techno-centric and deterministic approaches where learning materials, either software or hardware, are distributed by central authorities or knowledge is \"delivered\" according to \"push-strategies\"; or, using the words of Traxler, information is pumped through the infrastructure, often in \"educationally naïve\" ways (in press). Similarly, the main direction of techno-centric and transmissional approaches appears to be from developed to \"developing\" countries, respectively from experts to novices. In spite of all efforts the situation is still problematic and ambitious visions have been only realised to a limited extent. For example, the goal of providing every person worldwide with access to an informed and educated healthcare provider by 2015 is unlikely to be realised. In particular, little progress has been made in meeting the information needs of frontline healthcare providers and ordinary citizens in low resource settings (Smith & Koehlmoos, 2011). Very often it is basic knowledge that is needed, related for example to the treatment of childhood pneumonia or diarrhoea, which cannot be accessed by healthcare providers such as family caregivers or health workers (HIFA Report, 2010). With this research we attempt to shed light on aspects of technology use, such as engagement with SNSs and mobile phones, in the context of health education in developing countries which, we would argue, have been widely neglected. In doing so, we hope to contribute to the academic discourses on SNSs and mobile learning. Since our approach follows the principles of case study research, the remainder of this paper is structured as follows. 
We continue with a brief and, admittedly, selective characterization of two underlying academic discourses that can inform this research, namely mobile learning and research on SNSs. After presenting our methodological approach and results we discuss the findings in the light of multiple theoretical concepts and empirical studies from these fields. We conclude with some practical considerations, limitations and directions for further research. Educational discourses on mobile learning and social network sites In the field of mobile learning, a small, yet rapidly growing research community, recent work has considered the (educational) use of mobile phones as an appropriation of cultural resources (Pachler, Cook, & Bachmair, 2010). In contrast to the classical binary and quantitative model of adoption, appropriation is centred on the question of how people use mobile phones once they have adopted them (Wirth, Von Pape, & Karnowski, 2008). Researchers define appropriation as the emerging \"processes of the internalization of the pre-given world of cultural products\" by the engagement of learners in the form of social practices with particular settings inside and outside of formal educational settings (Pachler, et al., 2010). While mobile learning research tend to focus on learning in schools, universities, workplaces or on life-long learning in industrialised countries (Frohberg, Göth, & Schwabe, 2009; Pachler, Pimmer, & Seipold, 2011; Pimmer, Pachler, & Attwell, 2010), some attention has also been paid to developing countries (see for example Traxler & Kukulska-Hulme, 2005). Research on SNSs is becoming increasingly popular not only in industrialised nations (boyd & Ellison, 2007) but, to a lesser extent, also in developing countries (Kolko, Rose, & Johnson, 2007). Increasing importance is attached to educational aspects of SNSs (Selwyn, 2009), though there is relatively little theoretical and empirical attention paid by social researchers to the form and nature of that learning in general (Merchant, 2011). Socio-cultural approaches to learning in general, and to social networks and mobile learning in particular are based on the notions of participation, belonging, communities and identity construction. It was suggested, for example, that such networks create a \"sense of place in a social world\" (Merchant, 2011) and can be considered as \"multi-audience identity production sites\" (Zhao, Grasmuck, & Martin, 2008). By documenting daily episodes by means of mobiles and social networks, such tools are said to contribute to the formation of (multiple) identities related to the live-worlds of users. In this sense, learning is considered as situated meaning-making and identity formation (Pachler, et al., 2010). The influence of SNSs on community practices was also discussed. An empirical study suggested, for example, that social network sites helped maintain relations as people move across different offline communities (Ellison, Steinfield, & Lampe, 2007). Also in formal educational environments, when social networks were deliberately used in order to support classroom-based teaching and learning, (unintended) community building was observed (Arnold & Paulus, 2010). However, research has little to say with respect to vocational and professional aspects of the use of SNSs. 
One study reported that a company's internal social network site supported professionals in building stronger relations with their weak ties and in getting in touch with professionals they did not know before (DiMicco et al., 2008). Another study that observed the use of mobiles and social software for the compilation of e-portfolios witnessed influences on identity trajectory according to the concepts of belonging to a workplace, becoming and then being a professional (Chan, 2011).",
"title": ""
},
{
"docid": "neg:1840539_15",
"text": "1Student, Department of Computer Science & Engineering, G.H.Raisoni Institute of Engineering & Management, Jalgaon, Maharashtra, India. 2Assistant Professor, Department of Information and Technology , G.H.Raisoni Institute of Engineering & Management, Jalgaon, Maharashtra, India. ---------------------------------------------------------------------***--------------------------------------------------------------------Abstract As deep web grows at a very fast pace, there has been increased interest in techniques that help efficiently locate deep-web interfaces. However, due to the large volume of web resources and the dynamic nature of deep web, achieving wide coverage and high efficiency is a challenging issue. Therefore a two-stage enhanced web crawler framework is proposed for efficiently harvesting deep web interfaces. The proposed enhanced web crawler is divided into two stages. In the first stage, site locating is performed by using reverse searching which finds relevant content. In the second stage, enhanced web crawler achieves fast in site searching by excavating most relevant links of site. It uses a novel deep web crawling framework based on reinforcement learning which is effective for crawling the deep web. The experimental results show that the method outperforms the state of art methods in terms of crawling capability and achieves higher harvest rates than other crawlers.",
"title": ""
},
{
"docid": "neg:1840539_16",
"text": "One objective of the French-funded (ANR-2006-SECU-006) ISyCri Project (ISyCri stands for Interoperability of Systems in Crisis situation) is to provide the crisis cell in charge of the situation management with an information system (IS) able to support the interoperability of partners involved in this collaborative situation. Such a system is called Mediation Information System (MIS). This system must be in charge of (i) information exchange, (ii) services sharing and (iii) behavior orchestration. This paper presents the first step of the MIS engineering, the deduction of a collaborative process used to coordinate actors of the crisis cell. Especially, this paper give a formal definition of the deduction rules used to deduce the collaborative process.",
"title": ""
},
{
"docid": "neg:1840539_17",
"text": "MAX NEUENDORF,1 AES Member, MARKUS MULTRUS,1 AES Member, NIKOLAUS RETTELBACH1, GUILLAUME FUCHS1, JULIEN ROBILLIARD1, JÉRÉMIE LECOMTE1, STEPHAN WILDE1, STEFAN BAYER,10 AES Member, SASCHA DISCH1, CHRISTIAN HELMRICH10, ROCH LEFEBVRE,2 AES Member, PHILIPPE GOURNAY2, BRUNO BESSETTE2, JIMMY LAPIERRE,2 AES Student Member, KRISTOFER KJÖRLING3, HEIKO PURNHAGEN,3 AES Member, LARS VILLEMOES,3 AES Associate Member, WERNER OOMEN,4 AES Member, ERIK SCHUIJERS4, KEI KIKUIRI5, TORU CHINEN6, TAKESHI NORIMATSU1, KOK SENG CHONG7, EUNMI OH,8 AES Member, MIYOUNG KIM8, SCHUYLER QUACKENBUSH,9 AES Fellow, AND BERNHARD GRILL1",
"title": ""
},
{
"docid": "neg:1840539_18",
"text": "In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires to generate semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates contents for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses and a semantic parsing loss, which ensures pixel faithfulness and local-global contents consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.",
"title": ""
}
] |
1840540 | Insights into deep neural networks for speaker recognition | [
{
"docid": "pos:1840540_0",
"text": "We propose a novel framework for speaker recognition in which extraction of sufficient statistics for the state-of-the-art i-vector model is driven by a deep neural network (DNN) trained for automatic speech recognition (ASR). Specifically, the DNN replaces the standard Gaussian mixture model (GMM) to produce frame alignments. The use of an ASR-DNN system in the speaker recognition pipeline is attractive as it integrates the information from speech content directly into the statistics, allowing the standard backends to remain unchanged. Improvement from the proposed framework compared to a state-of-the-art system are of 30% relative at the equal error rate when evaluated on the telephone conditions from the 2012 NIST speaker recognition evaluation (SRE). The proposed framework is a successful way to efficiently leverage transcribed data for speaker recognition, thus opening up a wide spectrum of research directions.",
"title": ""
},
{
"docid": "pos:1840540_1",
"text": "We describe the neural-network training framework used in the Kaldi speech recognition toolkit, which is geared towards training DNNs with large amounts of training data using multiple GPU-equipped or multicore machines. In order to be as hardwareagnostic as possible, we needed a way to use multiple machines without generating excessive network traffic. Our method is to average the neural network parameters periodically (typically every minute or two), and redistribute the averaged parameters to the machines for further training. Each machine sees different data. By itself, this method does not work very well. However, we have another method, an approximate and efficient implementation of Natural Gradient for Stochastic Gradient Descent (NG-SGD), which seems to allow our periodic-averaging method to work well, as well as substantially improving the convergence of SGD on a single machine.",
"title": ""
}
] | [
{
"docid": "neg:1840540_0",
"text": "In restorative dentistry, the non-vital tooth and its restoration have been extensively studied from both its structural and esthetic aspects. The restoration of endodontically treated teeth has much in common with modern implantology: both must include multifaceted biological, biomechanical and esthetic considerations with a profound understanding of materials and techniques; both are technique sensitive and both require a multidisciplinary approach. And for both, two fundamental principles from team sports apply well: firstly, the weakest link determines the limits, and secondly, it is a very long way to the top, but a very short way to failure. Nevertheless, there is one major difference: if the tooth fails, there is the option of the implant, but if the implant fails, there is only another implant or nothing. The aim of this essay is to try to answer some clinically relevant conceptual questions and to give some clinical guidelines regarding the reconstructive aspects, based on scientific evidence and clinical expertise.",
"title": ""
},
{
"docid": "neg:1840540_1",
"text": "Electronic concept mapping tools provide a flexible vehicle for constructing concept maps, linking concept maps to other concept maps and related resources, and distributing concept maps to others. As electronic concept maps are constructed, it is often helpful for users to consult additional resources, in order to jog their memories or to locate resources to link to the map under construction. The World Wide Web provides a rich range of resources for these tasks—if the right resources can be found. This paper presents ongoing research on how to automatically generate Web queries from concept maps under construction, in order to proactively suggest related information to aid concept mapping. First, it examines how concept map structure and content can be exploited to automatically select terms to include in initial queries, based on studies of (1) how concept map structure influences human judgments of concept importance, and (2) the relative value of including information from concept labels and linking phrases. Second, it examines how a concept map can be used to refine future queries by reinforcing the weights of terms that have proven to be good discriminators for the topic of the concept map. The described methods are being applied to developing “intelligent suggesters” to support the concept mapping process.",
"title": ""
},
{
"docid": "neg:1840540_2",
"text": "This study aims at investigating alcoholic inpatients' attachment system by combining a measurement of adult attachment style (AAQ, Hazan and Shaver, 1987. Journal of Personality and Social Psychology, 52(3): 511-524) and the degree of alexithymia (BVAQ, Bermond and Vorst, 1998. Bermond-Vorst Alexithymia Questionnaire, Unpublished data). Data were collected from 101 patients (71 men, 30 women) admitted to a psychiatric hospital in Belgium for alcohol use-related problems, between September 2003 and December 2004. To investigate the research question, cluster analyses and regression analyses are performed. We found that it makes sense to distinguish three subgroups of alcoholic inpatients with different degrees of impairment of the attachment system. Our results also reveal a pattern of correspondence between the severity of psychiatric symptoms-personality disorder traits (ADP-IV), anxiety (STAI), and depression (BDI-II-Nl)-and the severity of the attachment system's impairment. Limitations of the study and suggestions for further research are highlighted and implications for diagnosis and treatment are discussed.",
"title": ""
},
{
"docid": "neg:1840540_3",
"text": "Julius Bernstein belonged to the Berlin school of “organic physicists” who played a prominent role in creating modern physiology and biophysics during the second half of the nineteenth century. He trained under du Bois-Reymond in Berlin, worked with von Helmholtz in Heidelberg, and finally became Professor of Physiology at the University of Halle. Nowadays his name is primarily associated with two discoveries: (1) The first accurate description of the action potential in 1868. He developed a new instrument, a differential rheotome (= current slicer) that allowed him to resolve the exact time course of electrical activity in nerve and muscle and to measure its conduction velocity. (2) His ‘Membrane Theory of Electrical Potentials’ in biological cells and tissues. This theory, published by Bernstein in 1902, provided the first plausible physico-chemical model of bioelectric events; its fundamental concepts remain valid to this day. Bernstein pursued an intense and long-range program of research in which he achieved a new level of precision and refinement by formulating quantitative theories supported by exact measurements. The innovative design and application of his electromechanical instruments were milestones in the development of biomedical engineering techniques. His seminal work prepared the ground for hypotheses and experiments on the conduction of the nervous impulse and ultimately the transmission of information in the nervous system. Shortly after his retirement, Bernstein (1912) summarized his electrophysiological work and extended his theoretical concepts in a book Elektrobiologie that became a classic in its field. The Bernstein Centers for Computational Neuroscience recently established at several universities in Germany were named to honor the person and his work.",
"title": ""
},
{
"docid": "neg:1840540_4",
"text": "Neuropathic pain refers to pain that originates from pathology of the nervous system. Diabetes, infection (herpes zoster), nerve compression, nerve trauma, \"channelopathies,\" and autoimmune disease are examples of diseases that may cause neuropathic pain. The development of both animal models and newer pharmacological strategies has led to an explosion of interest in the underlying mechanisms. Neuropathic pain reflects both peripheral and central sensitization mechanisms. Abnormal signals arise not only from injured axons but also from the intact nociceptors that share the innervation territory of the injured nerve. This review focuses on how both human studies and animal models are helping to elucidate the mechanisms underlying these surprisingly common disorders. The rapid gain in knowledge about abnormal signaling promises breakthroughs in the treatment of these often debilitating disorders.",
"title": ""
},
{
"docid": "neg:1840540_5",
"text": "Spliddit is a first-of-its-kind fair division website, which offers provably fair solutions for the division of rent, goods, and credit. In this note, we discuss Spliddit's goals, methods, and implementation.",
"title": ""
},
{
"docid": "neg:1840540_6",
"text": "In the current context of increased surveillance and security, more sophisticated and robust surveillance systems are needed. One idea relies on the use of pairs of video (visible spectrum) and thermal infrared (IR) cameras located around premises of interest. To automate the system, a robust person detection algorithm and the development of an efficient technique enabling the fusion of the information provided by the two sensors becomes necessary and these are described in this chapter. Recently, multi-sensor based image fusion system is a challenging task and fundamental to several modern day image processing applications, such as security systems, defence applications, and intelligent machines. Image fusion techniques have been actively investigated and have wide application in various fields. It is often a vital pre-processing procedure to many computer vision and image processing tasks which are dependent on the acquisition of imaging data via sensors, such as IR and visible. One such task is that of human detection. To detect humans with an artificial system is difficult for a number of reasons as shown in Figure 1 (Gavrila, 2001). The main challenge for a vision-based pedestrian detector is the high degree of variability with the human appearance due to articulated motion, body size, partial occlusion, inconsistent cloth texture, highly cluttered backgrounds and changing lighting conditions.",
"title": ""
},
{
"docid": "neg:1840540_7",
"text": "Most successful computational approaches for protein function prediction integrate multiple genomics and proteomics data sources to make inferences about the function of unknown proteins. The most accurate of these algorithms have long running times, making them unsuitable for real-time protein function prediction in large genomes. As a result, the predictions of these algorithms are stored in static databases that can easily become outdated. We propose a new algorithm, GeneMANIA, that is as accurate as the leading methods, while capable of predicting protein function in real-time. We use a fast heuristic algorithm, derived from ridge regression, to integrate multiple functional association networks and predict gene function from a single process-specific network using label propagation. Our algorithm is efficient enough to be deployed on a modern webserver and is as accurate as, or more so than, the leading methods on the MouseFunc I benchmark and a new yeast function prediction benchmark; it is robust to redundant and irrelevant data and requires, on average, less than ten seconds of computation time on tasks from these benchmarks. GeneMANIA is fast enough to predict gene function on-the-fly while achieving state-of-the-art accuracy. A prototype version of a GeneMANIA-based webserver is available at http://morrislab.med.utoronto.ca/prototype .",
"title": ""
},
{
"docid": "neg:1840540_8",
"text": "Although insecticide resistance is a widespread problem for most insect pests, frequently the assessment of resistance occurs over a limited geographic range. Herein, we report the first widespread survey of insecticide resistance in the USA ever undertaken for the house fly, Musca domestica, a major pest in animal production facilities. The levels of resistance to six different insecticides were determined (using discriminating concentration bioassays) in 10 collections of house flies from dairies in nine different states. In addition, the frequencies of Vssc and CYP6D1 alleles that confer resistance to pyrethroid insecticides were determined for each fly population. Levels of resistance to the six insecticides varied among states and insecticides. Resistance to permethrin was highest overall and most consistent across the states. Resistance to methomyl was relatively consistent, with 65-91% survival in nine of the ten collections. In contrast, resistance to cyfluthrin and pyrethrins + piperonyl butoxide varied considerably (2.9-76% survival). Resistance to imidacloprid was overall modest and showed no signs of increasing relative to collections made in 2004, despite increasing use of this insecticide. The frequency of Vssc alleles that confer pyrethroid resistance was variable between locations. The highest frequencies of kdr, kdr-his and super-kdr were found in Minnesota, North Carolina and Kansas, respectively. In contrast, the New Mexico population had the highest frequency (0.67) of the susceptible allele. The implications of these results to resistance management and to the understanding of the evolution of insecticide resistance are discussed.",
"title": ""
},
{
"docid": "neg:1840540_9",
"text": "The problem of assessing the significance of data mining results on high-dimensional 0-1 data sets has been studied extensively in the literature. For problems such as mining frequent sets and finding correlations, significance testing can be done by, e.g., chi-square tests, or many other methods. However, the results of such tests depend only on the specific attributes and not on the dataset as a whole. Moreover, the tests are more difficult to apply to sets of patterns or other complex results of data mining. In this paper, we consider a simple randomization technique that deals with this shortcoming. The approach consists of producing random datasets that have the same row and column margins with the given dataset, computing the results of interest on the randomized instances, and comparing them against the results on the actual data. This randomization technique can be used to assess the results of many different types of data mining algorithms, such as frequent sets, clustering, and rankings. To generate random datasets with given margins, we use variations of a Markov chain approach, which is based on a simple swap operation. We give theoretical results on the efficiency of different randomization methods, and apply the swap randomization method to several well-known datasets. Our results indicate that for some datasets the structure discovered by the data mining algorithms is a random artifact, while for other datasets the discovered structure conveys meaningful information.",
"title": ""
},
{
"docid": "neg:1840540_10",
"text": "Motivated by a project to create a system for people who are deaf or hard-of-hearing that would use automatic speech recognition (ASR) to produce real-time text captions of spoken English during in-person meetings with hearing individuals, we have augmented a transcript of the Switchboard conversational dialogue corpus with an overlay of word-importance annotations, with a numeric score for each word, to indicate its importance to the meaning of each dialogue turn. Further, we demonstrate the utility of this corpus by training an automatic word importance labeling model; our best performing model has an F-score of 0.60 in an ordinal 6-class word-importance classification task with an agreement (concordance correlation coefficient) of 0.839 with the human annotators (agreement score between annotators is 0.89). Finally, we discuss our intended future applications of this resource, particularly for the task of evaluating ASR performance, i.e. creating metrics that predict ASR-output caption text usability for DHH users better than Word Error Rate (WER).",
"title": ""
},
{
"docid": "neg:1840540_11",
"text": "We propose a novel group regularization which we call exclusive lasso. Unlike the group lasso regularizer that assumes covarying variables in groups, the proposed exclusive lasso regularizer models the scenario when variables in the same group compete with each other. Analysis is presented to illustrate the properties of the proposed regularizer. We present a framework of kernel based multi-task feature selection algorithm based on the proposed exclusive lasso regularizer. An efficient algorithm is derived to solve the related optimization problem. Experiments with document categorization show that our approach outperforms state-of-theart algorithms for multi-task feature selection.",
"title": ""
},
{
"docid": "neg:1840540_12",
"text": "This paper presents a multilayer aperture coupled microstrip antenna with a non symmetric U-shaped feed line. The antenna structure consists of a rectangular patch which is excited through two slots on the ground plane. A parametric study is presented on the effects of the position and dimensions of the slots. Results show that the antenna has VSWR < 2 from 2.6 GHz to 5.4 GHz (70%) and the gain of the structure is more than 7 dB from 2.7 GHz to 4.4 GHz (48%).",
"title": ""
},
{
"docid": "neg:1840540_13",
"text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. To resolve the problem, in this paper we conduct a case study of 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages with even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.",
"title": ""
},
{
"docid": "neg:1840540_14",
"text": "This work introduces a set of scalable algorithms to identify patterns of human daily behaviors. These patterns are extracted from multivariate temporal data that have been collected from smartphones. We have exploited sensors that are available on these devices, and have identified frequent behavioral patterns with a temporal granularity, which has been inspired by the way individuals segment time into events. These patterns are helpful to both end-users and third parties who provide services based on this information. We have demonstrated our approach on two real-world datasets and showed that our pattern identification algorithms are scalable. This scalability makes analysis on resource constrained and small devices such as smartwatches feasible. Traditional data analysis systems are usually operated in a remote system outside the device. This is largely due to the lack of scalability originating from software and hardware restrictions of mobile/wearable devices. By analyzing the data on the device, the user has the control over the data, i.e., privacy, and the network costs will also be removed.",
"title": ""
},
{
"docid": "neg:1840540_15",
"text": "Leishmaniasis is caused by protozoa of the genus Leishmania, with the presentation restricted to the mucosa being infrequent. Although the nasal mucosa is the main site affected in this form of the disease, it is also possible the involvement of the lips, mouth, pharynx and larynx. The lesions are characteristically ulcerative-vegetative, with granulation tissue formation. Patients usually complain of pain, dysphagia and odynophagia. Differential diagnosis should include cancer, infectious diseases and granulomatous diseases. We present a case of a 64-year-old male patient, coming from an endemic area for American Tegumentary Leishmaniasis (ATL), with a chief complaint of persistent dysphagia and nasal obstruction for 6 months. The lesion was ulcerative with a purulent infiltration into the soft palate and uvula. After excluding other diseases, ATL was suggested as a hypothesis, having been requested serology and biopsy of the lesions. Was started the treatment with pentavalent antimony and the patient presented regression of the lesions in 30 days, with no other complications.",
"title": ""
},
{
"docid": "neg:1840540_16",
"text": "This paper focuses on biomimetic design in the field of technical textiles / smart fabrics. Biologically inspired design is a very promising approach that has provided many elegant solutions. Firstly, a few bio-inspired innovations are presented, followed the introduction of trans-disciplinary research as a useful tool for defining the design problem and giving solutions. Furthermore, the required methods for identifying and applying biological analogies are analysed. Finally, the bio-mimetic approach is questioned and the difficulties, limitations and errors that a designer might face when adopting it are discussed. Researchers and product developers that use this approach should also problematize on the role of biomimetic design: is it a practice that redirects us towards a new model of sustainable development or is it just another tool for generating product ideas in order to increase a company’s competitiveness in the global market? Author",
"title": ""
},
{
"docid": "neg:1840540_17",
"text": "Continuous-time Markov chains (CTMCs) have been widely used to determine system performance and dependability characteristics. Their analysis most often concerns the computation of steady-state and transient-state probabilities. This paper introduces a branching temporal logic for expressing real-time probabilistic properties on CTMCs and presents approximate model checking algorithms for this logic. The logic, an extension of the continuous stochastic logic CSL of Aziz et al., contains a time-bounded until operator to express probabilistic timing properties over paths as well as an operator to express steady-state probabilities. We show that the model checking problem for this logic reduces to a system of linear equations (for unbounded until and the steady-state operator) and a Volterra integral equation system (for time-bounded until). We then show that the problem of model-checking timebounded until properties can be reduced to the problem of computing transient state probabilities for CTMCs. This allows the verification of probabilistic timing properties by efficient techniques for transient analysis for CTMCs such as uniformization. Finally, we show that a variant of lumping equivalence (bisimulation), a well-known notion for aggregating CTMCs, preserves the validity of all formulas in the logic.",
"title": ""
},
{
"docid": "neg:1840540_18",
"text": "Deep convolutional neural networks (CNNs) have shown excellent performance in object recognition tasks and dense classification problems such as semantic segmentation. However, training deep neural networks on large and sparse datasets is still challenging and can require large amounts of computation and memory. In this work, we address the task of performing semantic segmentation on large data sets, such as three-dimensional medical images. We propose an adaptive sampling scheme that uses a-posterior error maps, generated throughout training, to focus sampling on difficult regions, resulting in improved learning. Our contribution is threefold: 1) We give a detailed description of the proposed sampling algorithm to speed up and improve learning performance on large images. 2) We propose a deep dual path CNN that captures information at fine and coarse scales, resulting in a network with a large field of view and high resolution outputs. 3) We show that our method is able to attain new state-of-the-art results on the VISCERAL Anatomy benchmark.",
"title": ""
}
] |
1840541 | Individual differences in executive control relate to metaphor processing: an eye movement study of sentence reading | [
{
"docid": "pos:1840541_0",
"text": "Metaphors are fundamental to creative thought and expression. Newly coined metaphors regularly infiltrate our collective vocabulary and gradually become familiar, but it is unclear how this shift from novel to conventionalized meaning happens in the brain. We investigated the neural career of metaphors in a functional magnetic resonance imaging study using extensively normed new metaphors and simulated the ordinary, gradual experience of metaphor conventionalization by manipulating participants' exposure to these metaphors. Results showed that the conventionalization of novel metaphors specifically tunes activity within bilateral inferior prefrontal cortex, left posterior middle temporal gyrus, and right postero-lateral occipital cortex. These results support theoretical accounts attributing a role for the right hemisphere in processing novel, low salience figurative meanings, but also show that conventionalization of metaphoric meaning is a bilaterally-mediated process. Metaphor conventionalization entails a decreased neural load within semantic networks rather than a hemispheric or regional shift across brain areas.",
"title": ""
},
{
"docid": "pos:1840541_1",
"text": "Recent studies of eye movements in reading and other information processing tasks, such as music reading, typing, visual search, and scene perception, are reviewed. The major emphasis of the review is on reading as a specific example of cognitive processing. Basic topics discussed with respect to reading are (a) the characteristics of eye movements, (b) the perceptual span, (c) integration of information across saccades, (d) eye movement control, and (e) individual differences (including dyslexia). Similar topics are discussed with respect to the other tasks examined. The basic theme of the review is that eye movement data reflect moment-to-moment cognitive processes in the various tasks examined. Theoretical and practical considerations concerning the use of eye movement data are also discussed.",
"title": ""
}
] | [
{
"docid": "neg:1840541_0",
"text": "In this work we present a novel approach to predict the function of proteins in protein-protein interaction (PPI) networks. We classify existing approaches into inductive and transductive approaches, and into local and global approaches. As of yet, among the group of inductive approaches, only local ones have been proposed for protein function prediction. We here introduce a protein description formalism that also includes global information, namely information that locates a protein relative to specific important proteins in the network. We analyze the effect on function prediction accuracy of selecting a different number of important proteins. With around 70 important proteins, even in large graphs, our method makes good and stable predictions. Furthermore, we investigate whether our method also classifies proteins accurately on more detailed function levels. We examined up to five different function levels. The method is benchmarked on four datasets where we found classification performance according to F-measure values indeed improves by 9 percent over the benchmark methods employed.",
"title": ""
},
{
"docid": "neg:1840541_1",
"text": "In order to fully understand the sensory, perceptual, and cognitive issues associated with helmet-/head-mounted displays (HMDs), it is essential to possess an understanding of exactly what constitutes an HMD, the various design types, their advantages and limitations, and their applications. It also is useful to explore the developmental history of these systems. Such an exploration can reveal the major engineering, human factors, and ergonomic issues encountered in the development cycle. These identified issues usually are indicators of where the most attention needs to be placed when evaluating the usefulness of such systems. New HMD systems are implemented because they are intended to provide some specific capability or performance enhancement. However, these improvements always come at a cost. In reality, the introduction of technology is a tradeoff endeavor. It is necessary to identify and assess the tradeoffs that impact overall system and user sensory systems performance. HMD developers have often and incorrectly assumed that the human visual and auditory systems are fully capable of accepting the added sensory and cognitive demands of an HMD system without incurring performance degradation or introducing perceptual illusions. Situation awareness (SA), essential in preventing actions or inactions that lead to catastrophic outcomes, may be degraded if the HMD interferes with normal perceptual processes, resulting in misinterpretations or misperceptions (illusions). As HMD applications increase, it is important to maintain an awareness of both current and future programs. Unfortunately, in these developmental programs, one factor still is often minimized. This factor is how the user accepts and eventually uses the HMD. In the demanding rigors of warfare, the user rapidly decides whether using a new HMD, intended to provide tactical and other information, outweighs the impact the HMD has on survival and immediate mission success. If the system requires an unacceptable compromise in any aspect of mission completion deemed critical to the Warfighter, the HMD will not be used. Technology in which the Warfighter does have confidence or determines to be a liability will go unused.",
"title": ""
},
{
"docid": "neg:1840541_2",
"text": "Cell-cell communication is critical across an assortment of physiological and pathological processes. Extracellular vesicles (EVs) represent an integral facet of intercellular communication largely through the transfer of functional cargo such as proteins, messenger RNAs (mRNAs), microRNA (miRNAs), DNAs and lipids. EVs, especially exosomes and shed microvesicles, represent an important delivery medium in the tumour micro-environment through the reciprocal dissemination of signals between cancer and resident stromal cells to facilitate tumorigenesis and metastasis. An important step of the metastatic cascade is the reprogramming of cancer cells from an epithelial to mesenchymal phenotype (epithelial-mesenchymal transition, EMT), which is associated with increased aggressiveness, invasiveness and metastatic potential. There is now increasing evidence demonstrating that EVs released by cells undergoing EMT are reprogrammed (protein and RNA content) during this process. This review summarises current knowledge of EV-mediated functional transfer of proteins and RNA species (mRNA, miRNA, long non-coding RNA) between cells in cancer biology and the EMT process. An in-depth understanding of EVs associated with EMT, with emphasis on molecular composition (proteins and RNA species), will provide fundamental insights into cancer biology.",
"title": ""
},
{
"docid": "neg:1840541_3",
"text": "Background:There are currently two million cancer survivors in the United Kingdom, and in recent years this number has grown by 3% per annum. The aim of this paper is to provide long-term projections of cancer prevalence in the United Kingdom.Methods:National cancer registry data for England were used to estimate cancer prevalence in the United Kingdom in 2009. Using a model of prevalence as a function of incidence, survival and population demographics, projections were made to 2040. Different scenarios of future incidence and survival, and their effects on cancer prevalence, were also considered. Colorectal, lung, prostate, female breast and all cancers combined (excluding non-melanoma skin cancer) were analysed separately.Results:Assuming that existing trends in incidence and survival continue, the number of cancer survivors in the United Kingdom is projected to increase by approximately one million per decade from 2010 to 2040. Particularly large increases are anticipated in the oldest age groups, and in the number of long-term survivors. By 2040, almost a quarter of people aged at least 65 will be cancer survivors.Conclusion:Increasing cancer survival and the growing/ageing population of the United Kingdom mean that the population of survivors is likely to grow substantially in the coming decades, as are the related demands upon the health service. Plans must, therefore, be laid to ensure that the varied needs of cancer survivors can be met in the future.",
"title": ""
},
{
"docid": "neg:1840541_4",
"text": "An original differential structure using exclusively MOS devices working in the saturation region will be further presented. Performing the great advantage of an excellent linearity, obtained by a proper biasing of the differential core (using original translation and arithmetical mean blocks), the proposed circuit is designed for low-voltage low- power operation. The estimated linearity is obtained for an extended range of the differential input voltage and in the worst case of considering second-order effects that affect MOS transistors operation. The frequency response of the new differential structure is strongly increased by operating all MOS devices in the saturation region. The circuit is implemented in 0.35 mum CMOS technology, SPICE simulations confirming the theoretical estimated results.",
"title": ""
},
{
"docid": "neg:1840541_5",
"text": "This paper addresses the problem of learning similarity-preserving binary codes for efficient similarity search in large-scale image collections. We formulate this problem in terms of finding a rotation of zero-centered data so as to minimize the quantization error of mapping this data to the vertices of a zero-centered binary hypercube, and propose a simple and efficient alternating minimization algorithm to accomplish this task. This algorithm, dubbed iterative quantization (ITQ), has connections to multiclass spectral clustering and to the orthogonal Procrustes problem, and it can be used both with unsupervised data embeddings such as PCA and supervised embeddings such as canonical correlation analysis (CCA). The resulting binary codes significantly outperform several other state-of-the-art methods. We also show that further performance improvements can result from transforming the data with a nonlinear kernel mapping prior to PCA or CCA. Finally, we demonstrate an application of ITQ to learning binary attributes or \"classemes\" on the ImageNet data set.",
"title": ""
},
{
"docid": "neg:1840541_6",
"text": "This paper presents Latent Sampling-based Motion Planning (L-SBMP), a methodology towards computing motion plans for complex robotic systems by learning a plannable latent representation. Recent works in control of robotic systems have effectively leveraged local, low-dimensional embeddings of high-dimensional dynamics. In this paper we combine these recent advances with techniques from samplingbased motion planning (SBMP) in order to design a methodology capable of planning for high-dimensional robotic systems beyond the reach of traditional approaches (e.g., humanoids, or even systems where planning occurs in the visual space). Specifically, the learned latent space is constructed through an autoencoding network, a dynamics network, and a collision checking network, which mirror the three main algorithmic primitives of SBMP, namely state sampling, local steering, and collision checking. Notably, these networks can be trained through only raw data of the system’s states and actions along with a supervising collision checker. Building upon these networks, an RRT-based algorithm is used to plan motions directly in the latent space – we refer to this exploration algorithm as Learned Latent RRT (L2RRT). This algorithm globally explores the latent space and is capable of generalizing to new environments. The overall methodology is demonstrated on two planning problems, namely a visual planning problem, whereby planning happens in the visual (pixel) space, and a humanoid robot planning problem.",
"title": ""
},
{
"docid": "neg:1840541_7",
"text": "In this paper we propose a state space modeling approach for trust evaluation in wireless sensor networks. In our state space trust model (SSTM), each sensor node is associated with a trust metric, which measures to what extent the data transmitted from this node would better be trusted by the server node. Given the SSTM, we translate the trust evaluation problem to be a nonlinear state filtering problem. To estimate the state based on the SSTM, a component-wise iterative state inference procedure is proposed to work in tandem with the particle filter, and thus the resulting algorithm is termed as iterative particle filter (IPF). The computational complexity of the IPF algorithm is theoretically linearly related with the dimension of the state. This property is desirable especially for high dimensional trust evaluation and state filtering problems. The performance of the proposed algorithm is evaluated by both simulations and real data analysis. Index Terms state space trust model, wireless sensor network, trust evaluation, particle filter, high dimensional. ✦",
"title": ""
},
{
"docid": "neg:1840541_8",
"text": "............................................................................................................................................... 4",
"title": ""
},
{
"docid": "neg:1840541_9",
"text": "Clustering is a powerful tool which has been used in several forecasting works, such as time series forecasting, real time storm detection, flood forecasting and so on. In this paper, a generic methodology for weather forecasting is proposed by the help of incremental K-means clustering algorithm. Weather forecasting plays an important role in day to day applications.Weather forecasting of this paper is done based on the incremental air pollution database of west Bengal in the years of 2009 and 2010. This paper generally uses typical Kmeans clustering on the main air pollution database and a list of weather category will be developed based on the maximum mean values of the clusters.Now when the new data are coming, the incremental K-means is used to group those data into those clusters whose weather category has been already defined. Thus it builds up a strategy to predict the weather of the upcoming data of the upcoming days. This forecasting database is totally based on the weather of west Bengal and this forecasting methodology is developed to mitigating the impacts of air pollutions and launch focused modeling computations for prediction and forecasts of weather events. Here accuracy of this approach is also measured.",
"title": ""
},
{
"docid": "neg:1840541_10",
"text": "The Class II division 2 (Class II/2) malocclusion as originally defined by E.H. Angle is relatively rare. The orthodontic literature does not agree on the skeletal characteristics of this malocclusion. Several researchers claim that it is characterized by an orthognathic facial pattern and that the malocclusion is dentoalveolar per se. Others claim that the Class II/2 malocclusion has unique skeletal and dentoalveolar characteristics. The present study describes the skeletal and dentoalveolar cephalometric characteristics of 50 patients clinically diagnosed as having Class II/2 malocclusion according to Angle's original criteria. The study compares the findings with those of both a control group of 54 subjects with Class II division I (Class II/1) malocclusion and a second control group of 34 subjects with Class I (Class I) malocclusion. The findings demonstrate definite skeletal and dentoalveolar patterns with the following characteristics: (1) the maxilla is orthognathic, (2) the mandible has relatively short and retrognathic parameters, (3) the chin is relatively prominent, (4) the facial pattern is hypodivergent, (5) the upper central incisors are retroclined, and (6) the overbite is deep. The results demonstrate that, in a sagittal direction, the entity of Angle Class II/2 malocclusion might actually be located between the Angle Class I and the Angle Class II/1 malocclusions. with unique vertical skeletal characteristics.",
"title": ""
},
{
"docid": "neg:1840541_11",
"text": "A growing body of evidence suggests that empathy for pain is underpinned by neural structures that are also involved in the direct experience of pain. In order to assess the consistency of this finding, an image-based meta-analysis of nine independent functional magnetic resonance imaging (fMRI) investigations and a coordinate-based meta-analysis of 32 studies that had investigated empathy for pain using fMRI were conducted. The results indicate that a core network consisting of bilateral anterior insular cortex and medial/anterior cingulate cortex is associated with empathy for pain. Activation in these areas overlaps with activation during directly experienced pain, and we link their involvement to representing global feeling states and the guidance of adaptive behavior for both self- and other-related experiences. Moreover, the image-based analysis demonstrates that depending on the type of experimental paradigm this core network was co-activated with distinct brain regions: While viewing pictures of body parts in painful situations recruited areas underpinning action understanding (inferior parietal/ventral premotor cortices) to a stronger extent, eliciting empathy by means of abstract visual information about the other's affective state more strongly engaged areas associated with inferring and representing mental states of self and other (precuneus, ventral medial prefrontal cortex, superior temporal cortex, and temporo-parietal junction). In addition, only the picture-based paradigms activated somatosensory areas, indicating that previous discrepancies concerning somatosensory activity during empathy for pain might have resulted from differences in experimental paradigms. We conclude that social neuroscience paradigms provide reliable and accurate insights into complex social phenomena such as empathy and that meta-analyses of previous studies are a valuable tool in this endeavor.",
"title": ""
},
{
"docid": "neg:1840541_12",
"text": "Interest in meat fatty acid composition stems mainly from the need to find ways to produce healthier meat, i.e. with a higher ratio of polyunsaturated (PUFA) to saturated fatty acids and a more favourable balance between n-6 and n-3 PUFA. In pigs, the drive has been to increase n-3 PUFA in meat and this can be achieved by feeding sources such as linseed in the diet. Only when concentrations of α-linolenic acid (18:3) approach 3% of neutral lipids or phospholipids are there any adverse effects on meat quality, defined in terms of shelf life (lipid and myoglobin oxidation) and flavour. Ruminant meats are a relatively good source of n-3 PUFA due to the presence of 18:3 in grass. Further increases can be achieved with animals fed grain-based diets by including whole linseed or linseed oil, especially if this is \"protected\" from rumen biohydrogenation. Long-chain (C20-C22) n-3 PUFA are synthesised from 18:3 in the animal although docosahexaenoic acid (DHA, 22:6) is not increased when diets are supplemented with 18:3. DHA can be increased by feeding sources such as fish oil although too-high levels cause adverse flavour and colour changes. Grass-fed beef and lamb have naturally high levels of 18:3 and long chain n-3 PUFA. These impact on flavour to produce a 'grass fed' taste in which other components of grass are also involved. Grazing also provides antioxidants including vitamin E which maintain PUFA levels in meat and prevent quality deterioration during processing and display. In pork, beef and lamb the melting point of lipid and the firmness/hardness of carcass fat is closely related to the concentration of stearic acid (18:0).",
"title": ""
},
{
"docid": "neg:1840541_13",
"text": "Automatic inspection of Mura defects is a challenging task in thin-film transistor liquid crystal display (TFT-LCD) defect detection, which is critical for LCD manufacturers to guarantee high standard quality control. In this paper, we propose a set of automatic procedures to detect mura defects by using image processing and computer vision techniques. Singular Value Decomposition (SVD) and Discrete Cosine Transformation(DCT) techniques are employed to conduct image reconstruction, based on which we are able to obtain the differential image of LCD Cells. In order to detect different types of mura defects accurately, we then design a method that employs different detection modules adaptively, which can overcome the disadvantage of simply using a single threshold value. Finally, we provide the experimental results to validate the effectiveness of the proposed method in mura detection.",
"title": ""
},
{
"docid": "neg:1840541_14",
"text": "The standard serial algorithm for strongly connected components is based on depth rst search, which is di cult to parallelize. We describe a divide-and-conquer algorithm for this problem which has signi cantly greater potential for parallelization. For a graph with n vertices in which degrees are bounded by a constant, we show the expected serial running time of our algorithm to be O(n log n).",
"title": ""
},
{
"docid": "neg:1840541_15",
"text": "This paper presents a new approach for facial attribute classification using a multi-task learning approach. Unlike other approaches that uses hand engineered features, our model learns a shared feature representation that is wellsuited for multiple attribute classification. Learning a joint feature representation enables interaction between different tasks. For learning this shared feature representation we use a Restricted Boltzmann Machine (RBM) based model, enhanced with a factored multi-task component to become Multi-Task Restricted Boltzmann Machine (MT-RBM). Our approach operates directly on faces and facial landmark points to learn a joint feature representation over all the available attributes. We use an iterative learning approach consisting of a bottom-up/top-down pass to learn the shared representation of our multi-task model and at inference we use a bottom-up pass to predict the different tasks. Our approach is not restricted to any type of attributes, however, for this paper we focus only on facial attributes. We evaluate our approach on three publicly available datasets, the Celebrity Faces (CelebA), the Multi-task Facial Landmarks (MTFL), and the ChaLearn challenge dataset. We show superior classification performance improvement over the state-of-the-art.",
"title": ""
},
{
"docid": "neg:1840541_16",
"text": "Recurrent Neural Networks (RNNs) have obtained excellent result in many natural language processing (NLP) tasks. However, understanding and interpreting the source of this success remains a challenge. In this paper, we propose Recurrent Memory Network (RMN), a novel RNN architecture, that not only amplifies the power of RNN but also facilitates our understanding of its internal functioning and allows us to discover underlying patterns in data. We demonstrate the power of RMN on language modeling and sentence completion tasks. On language modeling, RMN outperforms Long Short-Term Memory (LSTM) network on three large German, Italian, and English dataset. Additionally we perform indepth analysis of various linguistic dimensions that RMN captures. On Sentence Completion Challenge, for which it is essential to capture sentence coherence, our RMN obtains 69.2% accuracy, surpassing the previous state of the art by a large margin.1",
"title": ""
},
{
"docid": "neg:1840541_17",
"text": "The problem of designing, coordinating, and managing complex systems has been central to the management and organizations literature. Recent writings have tended to offer modularity as, at least, a partial solution to this design problem. However, little attention has been paid to the problem of identifying what constitutes an appropriate modularization of a complex system. We develop a formal simulation model that allows us to carefully examine the dynamics of innovation and performance in complex systems. The model points to the trade-off between the destabilizing effects of overly refined modularization and the modest levels of search and a premature fixation on inferior designs that can result from excessive levels of integration. The analysis highlights an asymmetry in this trade-off, with excessively refined modules leading to cycling behavior and a lack of performance improvement. We discuss the implications of these arguments for product and organization design.",
"title": ""
},
{
"docid": "neg:1840541_18",
"text": "In power transistor switching circuits, shunt snubbers (dv/dt limiting capacitors) are often used to reduce the turn-off switching loss or prevent reverse-biased second breakdown. Similarly, series snubbers (di/dt limiting inductors) are used to reduce the turn-on switching loss or prevent forward-biased second breakdown. In both cases energy is stored in the reactive element of the snubber and is dissipated during its discharge. If the circuit includes a transformer, a voltage clamp across the transistor may be needed to absorb the energy trapped in the leakage inductance. The action of these typical snubber and clamp arrangements is analyzed and applied to optimize the design of a flyback converter used as a battery charger.",
"title": ""
},
{
"docid": "neg:1840541_19",
"text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.",
"title": ""
}
] |
1840542 | Swing-up of the double pendulum on a cart by feedforward and feedback control with experimental validation | [
{
"docid": "pos:1840542_0",
"text": "This paper presents the control of an underactuated two-link robot called the Pendubot. We propose a controller for swinging the linkage and rise it to its uppermost unstable equilibrium position. The balancing control is based on an energy approach and the passivity properties of the system.",
"title": ""
}
] | [
{
"docid": "neg:1840542_0",
"text": "Multi-Processor Systems-on-Chip (MPSoCs) are increasingly popular in embedded systems. Due to their complexity and huge design space to explore for such systems, CAD tools and frameworks to customize MPSoCs are mandatory. Some academic and industrial frameworks are available to support bus-based MPSoCs, but few works target NoCs as underlying communication architecture. A framework targeting MPSoC customization must provide abstract models to enable fast design space exploration, flexible application mapping strategies, all coupled to features to evaluate the performance of running applications. This paper proposes a framework to customize NoC-based MPSoCs with support to static and dynamic task mapping and C/SystemC simulation models for processors and memories. A simple, specifically designed microkernel executes in each processor, enabling multitasking at the processor level. Graphical tools enable debug and system verification, individualizing data for each task. Practical results highlight the benefit of using dynamic mapping strategies (total execution time reduction) and abstract models (total simulation time reduction without losing accuracy).",
"title": ""
},
{
"docid": "neg:1840542_1",
"text": "Bacopa monnieri (L.) Wettst., a traditional Indian medicinal plant with high commercial potential, is used as a potent nervine tonic. A slow growth protocol was developed for medium-term conservation using mineral oil (MO) overlay. Nodal segments of B. monnieri (two genotypes; IC249250, IC468878) were conserved using MO for 24 months. Single node explants were implanted on MS medium supplemented with 0.2 mg l−1 BA and were covered with MO. Subculture duration could be significantly enhanced from 6 to 24 months, on the above medium. Normal plants regenerated from conserved cultures were successfully established in soil. On the basis of 20 random amplified polymorphic DNA and 5 inter-simple sequence repeat primers analyses and bacoside A content using HPLC, no significant reproducible variation was observed between the controls and in vitro-conserved plants. The results demonstrate the feasibility of using MO for medium-term conservation of B. monnieri germplasm without any adverse genetical and biochemical effects.",
"title": ""
},
{
"docid": "neg:1840542_2",
"text": "Lactose (milk sugar) is a fermentable substrate. It can be fermented outside of the body to produce cheeses, yoghurts and acidified milks. It can be fermented within the large intestine in those people who have insufficient expression of lactase enzyme on the intestinal mucosa to ferment this disaccharide to its absorbable, simple hexose sugars: glucose and galactose. In this way, the issues of lactose intolerance and of fermented foods are joined. It is only at the extremes of life, in infancy and old age, in which severe and life-threatening consequences from lactose maldigestion may occur. Fermentation as part of food processing can be used for preservation, for liberation of pre-digested nutrients, or to create ethanolic beverages. Almost all cultures and ethnic groups have developed some typical forms of fermented foods. Lessons from fermentation of non-dairy items may be applicable to fermentation of milk, and vice versa.",
"title": ""
},
{
"docid": "neg:1840542_3",
"text": "The use of brushless permanent magnet DC drive motors in racing motorcycles is discussed in this paper. The application requirements are highlighted and the characteristics of the load demand and drive converter outlined. The possible topologies of the machine are investigated and a design for a internal permanent magnet is developed. This is a 6-pole machine with 18 stator slots and coils of one stator tooth pitch. The performance predictions are put forward and these are obtained from design software. Cooling is vital for these machines and this is briefly discussed.",
"title": ""
},
{
"docid": "neg:1840542_4",
"text": "BACKGROUND\nGreen tea (GT) extract may play a role in body weight regulation. Suggested mechanisms are decreased fat absorption and increased energy expenditure.\n\n\nOBJECTIVE\nWe examined whether GT supplementation for 12 wk has beneficial effects on weight control via a reduction in dietary lipid absorption as well as an increase in resting energy expenditure (REE).\n\n\nMETHODS\nSixty Caucasian men and women [BMI (in kg/m²): 18-25 or >25; age: 18-50 y] were included in a randomized placebo-controlled study in which fecal energy content (FEC), fecal fat content (FFC), resting energy expenditure, respiratory quotient (RQ), body composition, and physical activity were measured twice (baseline vs. week 12). For 12 wk, subjects consumed either GT (>0.56 g/d epigallocatechin gallate + 0.28-0.45 g/d caffeine) or placebo capsules. Before the measurements, subjects recorded energy intake for 4 consecutive days and collected feces for 3 consecutive days.\n\n\nRESULTS\nNo significant differences between groups and no significant changes over time were observed for the measured variables. Overall means ± SDs were 7.2 ± 3.8 g/d, 6.1 ± 1.2 MJ/d, 67.3 ± 14.3 kg, and 29.8 ± 8.6% for FFC, REE, body weight, and body fat percentage, respectively.\n\n\nCONCLUSION\nGT supplementation for 12 wk in 60 men and women did not have a significant effect on FEC, FFC, REE, RQ, and body composition.",
"title": ""
},
{
"docid": "neg:1840542_5",
"text": "We introduce a novel learning method for 3D pose estimation from color images. While acquiring annotations for color images is a difficult task, our approach circumvents this problem by learning a mapping from paired color and depth images captured with an RGB-D camera. We jointly learn the pose from synthetic depth images that are easy to generate, and learn to align these synthetic depth images with the real depth images. We show our approach for the task of 3D hand pose estimation and 3D object pose estimation, both from color images only. Our method achieves performances comparable to state-of-the-art methods on popular benchmark datasets, without requiring any annotations for the color images.",
"title": ""
},
{
"docid": "neg:1840542_6",
"text": "Bar charts are an effective way to convey numeric information, but today's algorithms cannot parse them. Existing methods fail when faced with even minor variations in appearance. Here, we present DVQA, a dataset that tests many aspects of bar chart understanding in a question answering framework. Unlike visual question answering (VQA), DVQA requires processing words and answers that are unique to a particular bar chart. State-of-the-art VQA algorithms perform poorly on DVQA, and we propose two strong baselines that perform considerably better. Our work will enable algorithms to automatically extract numeric and semantic information from vast quantities of bar charts found in scientific publications, Internet articles, business reports, and many other areas.",
"title": ""
},
{
"docid": "neg:1840542_7",
"text": "The concept of presortedness and its use in sorting are studied. Natural ways to measure presortedness are given and some general properties necessary for a measure are proposed. A concept of a sorting algorithm optimal with respect to a measure of presortedness is defined, and examples of such algorithms are given. A new insertion sort algorithm is shown to be optimal with respect to three natural measures. The problem of finding an optimal algorithm for an arbitrary measure is studied, and partial results are proven.",
"title": ""
},
{
"docid": "neg:1840542_8",
"text": "ACC: allergic contact cheilitis Bronopol: 2-Bromo-2-nitropropane-1,3-diol MI: methylisothiazolinone MCI: methylchloroisothiazolinone INTRODUCTION Pediatric cheilitis can be a debilitating condition for the child and parents. Patch testing can help isolate allergens to avoid. Here we describe a 2-yearold boy with allergic contact cheilitis improving remarkably after prudent avoidance of contactants and food avoidance.",
"title": ""
},
{
"docid": "neg:1840542_9",
"text": "The paper presents the power amplifier design. The introduction of a practical harmonic balance capability at the device measurement stage brings a number of advantages and challenges. Breaking down this traditional barrier means that the test-bench engineer needs to become more aware of the design process and requirements. The inverse is also true, as the measurement specifications for a harmonically tuned amplifier are a bit more complex than just the measurement of load-pull contours. We hope that the new level of integration between both will also result in better exchanges between both sides and go beyond showing either very accurate, highly tuned device models, or using the device model as the traditional scapegoat for unsuccessful PA designs. A nonlinear model and its quality can now be diagnosed through direct comparison of simulated and measured wave forms. The quality of a PA design can be verified by placing the device within the measurement system, practical harmonic balance emulator into the same impedance state in which it will operate in the actual realized design.",
"title": ""
},
{
"docid": "neg:1840542_10",
"text": "We have been developing human mimetic musculoskeletal humanoids from the view point of human-inspired design approach. Kengoro is our latest version of musculoskeletal humanoid designed to achieve physically interactive actions in real world. This study presents the design concept, body characteristics, and motion achievements of Kengoro. In the design process of Kengoro, we adopted the novel idea of multifunctional skeletal structures to achieve both humanoid performance and humanlike proportions. We adopted the sensor-driver integrated muscle modules for improved muscle control. In order to demonstrate the effectiveness of these body structures, we conducted several preliminary movements using Kengoro.",
"title": ""
},
{
"docid": "neg:1840542_11",
"text": "Dysregulated expression of microRNAs (miRNAs) in various tissues has been associated with a variety of diseases, including cancers. Here we demonstrate that miRNAs are present in the serum and plasma of humans and other animals such as mice, rats, bovine fetuses, calves, and horses. The levels of miRNAs in serum are stable, reproducible, and consistent among individuals of the same species. Employing Solexa, we sequenced all serum miRNAs of healthy Chinese subjects and found over 100 and 91 serum miRNAs in male and female subjects, respectively. We also identified specific expression patterns of serum miRNAs for lung cancer, colorectal cancer, and diabetes, providing evidence that serum miRNAs contain fingerprints for various diseases. Two non-small cell lung cancer-specific serum miRNAs obtained by Solexa were further validated in an independent trial of 75 healthy donors and 152 cancer patients, using quantitative reverse transcription polymerase chain reaction assays. Through these analyses, we conclude that serum miRNAs can serve as potential biomarkers for the detection of various cancers and other diseases.",
"title": ""
},
{
"docid": "neg:1840542_12",
"text": "This paper outlines and tests two agency models of dividends. According to the “outcome” model, dividends are the result of effective pressure by minority shareholders to force corporate insiders to disgorge cash. According to the “substitute” model, insiders interested in issuing equity in the future choose to pay dividends to establish a reputation for decent treatment of minority shareholders. The first model predicts that stronger minority shareholder rights should be associated with higher dividend payouts; the second model predicts the opposite. Tests on a cross-section of 4,000 companies from 33 countries with different levels of minority shareholder rights support the outcome agency model of dividends. The authors are from Harvard University, Harvard University, Harvard University and University of Chicago, respectively. They are grateful to Alexander Aganin for excellent research assistance, and to Lucian Bebchuk, Mihir Desai, Edward Glaeser, Denis Gromb, Oliver Hart, James Hines, Kose John, James Poterba, Roberta Romano, Raghu Rajan, Lemma Senbet, René Stulz, Daniel Wolfenzohn, Luigi Zingales, and two anonymous referees for helpful comments. 2 The so-called dividend puzzle (Black 1976) has preoccupied the attention of financial economists at least since Modigliani and Miller’s (1958, 1961) seminal work. This work established that, in a frictionless world, when the investment policy of a firm is held constant, its dividend payout policy has no consequences for shareholder wealth. Higher dividend payouts lead to lower retained earnings and capital gains, and vice versa, leaving total wealth of the shareholders unchanged. Contrary to this prediction, however, corporations follow extremely deliberate dividend payout strategies (Lintner (1956)). This evidence raises a puzzle: how do firms choose their dividend policies? In the United States and other countries, the puzzle is even deeper since many shareholders are taxed more heavily on their dividend receipts than on capital gains. The actual magnitude of this tax burden is debated (see Poterba and Summers (1985) and Allen and Michaely (1997)), but taxes generally make it even harder to explain dividend policies of firms. Economists have proposed a number of explanations of the dividend puzzle. Of these, particularly popular is the idea that firms can signal future profitability by paying dividends (Bhattacharya (1979), John and Williams (1985), Miller and Rock (1985), Ambarish, John, and Williams (1987)). Empirically, this theory had considerable initial success, since firms that initiate (or raise) dividends experience share price increases, and the converse is true for firms that eliminate (or cut) dividends (Aharony and Swary (1980), Asquith and Mullins (1983)). Recent results are more mixed, since current dividend changes do not help predict firms’ future earnings growth (DeAngelo, DeAngelo, and Skinner (1996) and Benartzi, Michaely, and Thaler (1997)). Another idea, which has received only limited attention until recently (e.g., Easterbrook (1984), Jensen (1986), Fluck (1998a, 1998b), Myers (1998), Gomes (1998), Zwiebel (1996)), is 3 that dividend policies address agency problems between corporate insiders and outside shareholders. According to these theories, unless profits are paid out to shareholders, they may be diverted by the insiders for personal use or committed to unprofitable projects that provide private benefits for the insiders. 
As a consequence, outside shareholders have a preference for dividends over retained earnings. Theories differ on how outside shareholders actually get firms to disgorge cash. The key point, however, is that failure to disgorge cash leads to its diversion or waste, which is detrimental to outside shareholders’ interest. The agency approach moves away from the assumptions of the Modigliani-Miller theorem by recognizing two points. First, the investment policy of the firm cannot be taken as independent of its dividend policy, and, in particular, paying out dividends may reduce the inefficiency of marginal investments. Second, and more subtly, the allocation of all the profits of the firm to shareholders on a pro-rata basis cannot be taken for granted, and in particular the insiders may get preferential treatment through asset diversion, transfer prices and theft, even holding the investment policy constant. In so far as dividends are paid on a pro-rata basis, they benefit outside shareholders relative to the alternative of expropriation of retained earnings. In this paper, we attempt to identify some of the basic elements of the agency approach to dividends, to understand its key implications, and to evaluate them on a cross-section of over 4,000 firms from 33 countries around the world. The reason for looking around the world is that the severity of agency problems to which minority shareholders are exposed differs greatly across countries, in part because legal protection of these shareholders vary (La Porta et al. (1997, 1998)). Empirically, we find that dividend policies vary across legal regimes in ways consistent with a particular version of the agency theory of dividends. Specifically, firms in common law 4 countries, where investor protection is typically better, make higher dividend payouts than firms in civil law countries do. Moreover, in common but not civil law countries, high growth firms make lower dividend payouts than low growth firms. These results support the version of the agency theory in which investors in good legal protection countries use their legal powers to extract dividends from firms, especially when reinvestment opportunities are poor. Section I of the paper summarizes some of the theoretical arguments. Section II describes the data. Section III presents our empirical findings. Section IV concludes. I. Theoretical Issues. A. Agency Problems and Legal Regimes Conflicts of interest between corporate insiders, such as managers and controlling shareholders, on the one hand, and outside investors, such as minority shareholders, on the other hand, are central to the analysis of the modern corporation (Berle and Means (1932), Jensen and Meckling (1976)). The insiders who control corporate assets can use these assets for a range of purposes that are detrimental to the interests of the outside investors. Most simply, they can divert corporate assets to themselves, through outright theft, dilution of outside investors through share issues to the insiders, excessive salaries, asset sales to themselves or other corporations they control at favorable prices, or transfer pricing with other entities they control (see Shleifer and Vishny (1997) for a discussion). Alternatively, insiders can use corporate assets to pursue investment strategies that yield them personal benefits of control, such as growth or diversification, without benefitting outside investors (e.g., Baumol (1959), Jensen (1986)). What is meant by insiders varies from country to country. 
In the United States, U.K., 5 Canada, and Australia, where ownership in large corporations is relatively dispersed, most large corporations are to a significant extent controlled by their managers. In most other countries, large firms typically have shareholders that own a significant fraction of equity, such as the founding families (La Porta, Lopez-de-Silanes, and Shleifer (1999)). The controlling shareholders can effectively determine the decisions of the managers (indeed, managers typically come from the controlling family), and hence the problem of managerial control per se is not as severe as it is in the rich common law countries. On the other hand, the controlling shareholders can implement policies that benefit themselves at the expense of minority shareholders. Regardless of the identity of the insiders, the victims of insider control are minority shareholders. It is these minority shareholders that would typically have a taste for dividends. One of the principal remedies to agency problems is the law. Corporate and other law gives outside investors, including shareholders, certain powers to protect their investment against expropriation by insiders. These powers in the case of shareholders range from the right to receive the same per share dividends as the insiders, to the right to vote on important corporate matters, including the election of directors, to the right to sue the company for damages. The very fact that this legal protection exists probably explains why becoming a minority shareholder is a viable investment strategy, as opposed to just being an outright giveaway of money to strangers who are under few if any obligations to give it back. As pointed out by La Porta et al. (1998), the extent of legal protection of outside investors differs enormously across countries. Legal protection consists of both the content of the laws and the quality of their enforcement. Some countries, including most notably the wealthy common law countries such as the U.S. and the U.K., provide effective protection of minority shareholders 6 so that the outright expropriation of corporate assets by the insiders is rare. Agency problems manifest themselves primarily through non-value-maximizing investment choices. In many other countries, the condition of outside investors is a good deal more precarious, but even there some protection does exist. La Porta et al. (1998) show in particular that common law countries appear to have the best legal protection of minority shareholders, whereas civil law countries, and most conspicuously the French civil law countries, have the weakest protection. The quality of investor protection, viewed as a proxy for lower agency costs, has been shown to matter for a number of important issues in corporate finance. For example, corporate ownership is more concentrated in countries with inferior shareholder protection (La Porta et al. (1998), La Porta, Lopez-de-Silanes, and Shleifer (1999)). The valuation and breadth of cap",
"title": ""
},
{
"docid": "neg:1840542_13",
"text": "The demonstration that dopamine loss is the key pathological feature of Parkinson's disease (PD), and the subsequent introduction of levodopa have revolutionalized the field of PD therapeutics. This review will discuss the significant progress that has been made in the development of new pharmacological and surgical tools to treat PD motor symptoms since this major breakthrough in the 1960s. However, we will also highlight some of the challenges the field of PD therapeutics has been struggling with during the past decades. The lack of neuroprotective therapies and the limited treatment strategies for the nonmotor symptoms of the disease (ie, cognitive impairments, autonomic dysfunctions, psychiatric disorders, etc.) are among the most pressing issues to be addressed in the years to come. It appears that the combination of early PD nonmotor symptoms with imaging of the nigrostriatal dopaminergic system offers a promising path toward the identification of PD biomarkers, which, once characterized, will set the stage for efficient use of neuroprotective agents that could slow down and alter the course of the disease.",
"title": ""
},
{
"docid": "neg:1840542_14",
"text": "The constant increase in global energy demand, together with the awareness of the finite supply of fossil fuels, has brought about an imperious need to take advantage of renewable energy sources. At the same time, concern over CO(2) emissions and future rises in the cost of gasoline has boosted technological efforts to make hybrid and electric vehicles available to the general public. Energy storage is a vital issue to be addressed within this scenario, and batteries are certainly a key player. In this tutorial review, the most recent and significant scientific advances in the field of rechargeable batteries, whose performance is dependent on their underlying chemistry, are covered. In view of its utmost current significance and future prospects, special emphasis is given to progress in lithium-based technologies.",
"title": ""
},
{
"docid": "neg:1840542_15",
"text": "For years, researchers in face recognition area have been representing and recognizing faces based on subspace discriminant analysis or statistical learning. Nevertheless, these approaches are always suffering from the generalizability problem. This paper proposes a novel non-statistics based face representation approach, local Gabor binary pattern histogram sequence (LGBPHS), in which training procedure is unnecessary to construct the face model, so that the generalizability problem is naturally avoided. In this approach, a face image is modeled as a \"histogram sequence\" by concatenating the histograms of all the local regions of all the local Gabor magnitude binary pattern maps. For recognition, histogram intersection is used to measure the similarity of different LGBPHSs and the nearest neighborhood is exploited for final classification. Additionally, we have further proposed to assign different weights for each histogram piece when measuring two LGBPHSes. Our experimental results on AR and FERET face database show the validity of the proposed approach especially for partially occluded face images, and more impressively, we have achieved the best result on FERET face database.",
"title": ""
},
{
"docid": "neg:1840542_16",
"text": "Despite a variety of new communication technologies, loneliness is prevalent in Western countries. Boosting emotional communication through intimate connections has the potential to reduce loneliness. New technologies might exploit biosignals as intimate emotional cues because of their strong relationship to emotions. Through two studies, we investigate the possibilities of heartbeat communication as an intimate cue. In the first study (N = 32), we demonstrate, using self-report and behavioral tracking in an immersive virtual environment, that heartbeat perception influences social behavior in a similar manner as traditional intimate signals such as gaze and interpersonal distance. In the second study (N = 34), we demonstrate that a sound of the heartbeat is not sufficient to cause the effect; the stimulus must be attributed to the conversational partner in order to have influence. Together, these results show that heartbeat communication is a promising way to increase intimacy. Implications and possibilities for applications are discussed.",
"title": ""
},
{
"docid": "neg:1840542_17",
"text": "With the rapid development of modern wireless communication systems, the desirable miniaturization, multifunctionality strong harmonic suppression, and enhanced bandwidth of the rat-race coupler has generated much interest and continues to be a focus of research. Whether the current rat-race coupler is sufficient to adapt to the future development of microwave systems has become a heated topic.",
"title": ""
},
{
"docid": "neg:1840542_18",
"text": "Sparse representation of information provides a powerful means to perform feature extraction on high-dimensional data and is of broad interest for applications in signal processing, computer vision, object recognition and neurobiology. Sparse coding is also believed to be a key mechanism by which biological neural systems can efficiently process a large amount of complex sensory data while consuming very little power. Here, we report the experimental implementation of sparse coding algorithms in a bio-inspired approach using a 32 × 32 crossbar array of analog memristors. This network enables efficient implementation of pattern matching and lateral neuron inhibition and allows input data to be sparsely encoded using neuron activities and stored dictionary elements. Different dictionary sets can be trained and stored in the same system, depending on the nature of the input signals. Using the sparse coding algorithm, we also perform natural image processing based on a learned dictionary.",
"title": ""
},
{
"docid": "neg:1840542_19",
"text": "The cognitive processes in a widely used, nonverbal test of analytic intelligence, the Raven Progressive Matrices Test (Raven, 1962), are analyzed in terms of which processes distinguish between higher scoring and lower scoring subjects and which processes are common to all subjects and all items on the test. The analysis is based on detailed performance characteristics, such as verbal protocols, eye-fixation patterns, and errors. The theory is expressed as a pair of computer simulation models that perform like the median or best college students in the sample. The processing characteristic common to all subjects is an incremental, reiterative strategy for encoding and inducing the regularities in each problem. The processes that distinguish among individuals are primarily the ability to induce abstract relations and the ability to dynamically manage a large set of problem-solving goals in working memory.",
"title": ""
}
] |
1840543 | 3D Printing Your Wireless Coverage | [
{
"docid": "pos:1840543_0",
"text": "Information on site-specific spectrum characteristics is essential to evaluate and improve the performance of wireless networks. However, it is usually very costly to obtain accurate spectrum-condition information in heterogeneous wireless environments. This paper presents a novel spectrum-survey system, called Sybot (Spectrum survey robot), that guides network engineers to efficiently monitor the spectrum condition (e.g., RSS) of WiFi networks. Sybot effectively controls mobility and employs three disparate monitoring techniques - complete, selective, and diagnostic - that help produce and maintain an accurate spectrum-condition map for challenging indoor WiFi networks. By adaptively triggering the most suitable of the three techniques, Sybot captures spatio-temporal changes in spectrum condition. Moreover, based on the monitoring results, Sybot automatically determines several key survey parameters, such as site-specific measurement time and space granularities. Sybot has been prototyped with a commodity IEEE 802.11 router and Linux OS, and experimentally evaluated, demonstrating its ability to generate accurate spectrum-condition maps while reducing the measurement effort (space, time) by more than 56%.",
"title": ""
},
{
"docid": "pos:1840543_1",
"text": "In this work, we investigate the use of directional antennas and beam steering techniques to improve performance of 802.11 links in the context of communication between amoving vehicle and roadside APs. To this end, we develop a framework called MobiSteer that provides practical approaches to perform beam steering. MobiSteer can operate in two modes - cached mode - where it uses prior radiosurvey data collected during \"idle\" drives, and online mode, where it uses probing. The goal is to select the best AP and beam combination at each point along the drive given the available information, so that the throughput can be maximized. For the cached mode, an optimal algorithm for AP and beam selection is developed that factors in all overheads.\n We provide extensive experimental results using a commercially available eight element phased-array antenna. In the experiments, we use controlled scenarios with our own APs, in two different multipath environments, as well as in situ scenarios, where we use APs already deployed in an urban region - to demonstrate the performance advantage of using MobiSteer over using an equivalent omni-directional antenna. We show that MobiSteer improves the connectivity duration as well as PHY-layer data rate due to better SNR provisioning. In particular, MobiSteer improves the throughput in the controlled experiments by a factor of 2 - 4. In in situ experiments, it improves the connectivity duration by more than a factor of 2 and average SNR by about 15 dB.",
"title": ""
}
] | [
{
"docid": "neg:1840543_0",
"text": "An important difference between traditional AI systems and human intelligence is the human ability to harness commonsense knowledge gleaned from a lifetime of learning and experience to make informed decisions. This allows humans to adapt easily to novel situations where AI fails catastrophically due to a lack of situation-specific rules and generalization capabilities. Commonsense knowledge also provides background information that enables humans to successfully operate in social situations where such knowledge is typically assumed. Since commonsense consists of information that humans take for granted, gathering it is an extremely difficult task. Previous versions of SenticNet were focused on collecting this kind of knowledge for sentiment analysis but they were heavily limited by their inability to generalize. SenticNet 4 overcomes such limitations by leveraging on conceptual primitives automatically generated by means of hierarchical clustering and dimensionality reduction.",
"title": ""
},
{
"docid": "neg:1840543_1",
"text": "Prior research has demonstrated a clear relationship between experiences of racial microaggressions and various indicators of psychological unwellness. One concern with these findings is that the role of negative affectivity, considered a marker of neuroticism, has not been considered. Negative affectivity has previously been correlated to experiences of racial discrimination and psychological unwellness and has been suggested as a cause of the observed relationship between microaggressions and psychopathology. We examined the relationships between self-reported frequency of experiences of microaggressions and several mental health outcomes (i.e., anxiety [Beck Anxiety Inventory], stress [General Ethnic and Discrimination Scale], and trauma symptoms [Trauma Symptoms of Discrimination Scale]) in 177 African American and European American college students, controlling for negative affectivity (the Positive and Negative Affect Schedule) and gender. Results indicated that African Americans experience more racial discrimination than European Americans. Negative affectivity in African Americans appears to be significantly related to some but not all perceptions of the experience of discrimination. A strong relationship between racial mistreatment and symptoms of psychopathology was evident, even after controlling for negative affectivity. In summary, African Americans experience clinically measurable anxiety, stress, and trauma symptoms as a result of racial mistreatment, which cannot be wholly explained by individual differences in negative affectivity. Future work should examine additional factors in these relationships, and targeted interventions should be developed to help those suffering as a result of racial mistreatment and to reduce microaggressions.",
"title": ""
},
{
"docid": "neg:1840543_2",
"text": "LLC resonant DC/DC converters are becoming popular in computing applications, such as telecom, server systems. For these applications, it is required to meet the EMI standard. In this paper, novel EMI noise transferring path and EMI model for LLC resonant DC/DC converters are proposed. DM and CM noise of LLC resonant converter are analyzed. Several EMI noise reduction approaches are proposed. Shield layers are applied to reduce CM noise. By properly choosing the ground point of shield layer, significant noise reduction can be obtained. With extra EMI balance capacitor, CM noise can be reduced further. Two channel interleaving LLC resonant converters are proposed to cancel the CM current. Conceptually, when two channels operate with 180 degree phase shift, CM current can be canceled. Therefore, the significant EMI noise reduction can be achieved.",
"title": ""
},
{
"docid": "neg:1840543_3",
"text": "In this paper, we introduce a new technology, which allows people to share taste and smell sensations digitally with a remote person through existing networking technologies such as the Internet. By introducing this technology, we expect people to share their smell and taste experiences with their family and friends remotely. Sharing these senses are immensely beneficial since those are strongly associated with individual memories, emotions, and everyday experiences. As the initial step, we developed a control system, an actuator, which could digitally stimulate the sense of taste remotely. The system uses two approaches to stimulate taste sensations digitally: the electrical and thermal stimulations on tongue. Primary results suggested that sourness and saltiness are the main sensations that could be evoked through this device. Furthermore, this paper focuses on future aspects of such technology for remote smell actuation followed by applications and possibilities for further developments.",
"title": ""
},
{
"docid": "neg:1840543_4",
"text": "This paper mainly deals with various classification algorithms namely, Bayes. NaiveBayes, Bayes. BayesNet, Bayes. NaiveBayesUpdatable, J48, Randomforest, and Multi Layer Perceptron. It analyzes the hepatitis patients from the UC Irvine machine learning repository. The results of the classification model are accuracy and time. Finally, it concludes that the Naive Bayes performance is better than other classification techniques for hepatitis patients.",
"title": ""
},
{
"docid": "neg:1840543_5",
"text": "Workflow technologies have become a major vehicle for easy and efficient development of scientific applications. In the meantime, state-of-the-art resource provisioning technologies such as cloud computing enable users to acquire computing resources dynamically and elastically. A critical challenge in integrating workflow technologies with resource provisioning technologies is to determine the right amount of resources required for the execution of workflows in order to minimize the financial cost from the perspective of users and to maximize the resource utilization from the perspective of resource providers. This paper suggests an architecture for the automatic execution of large-scale workflow-based applications on dynamically and elastically provisioned computing resources. Especially, we focus on its core algorithm named PBTS (Partitioned Balanced Time Scheduling), which estimates the minimum number of computing hosts required to execute a workflow within a user-specified finish time. The PBTS algorithm is designed to fit both elastic resource provisioning models such as Amazon EC2 and malleable parallel application models such as MapReduce. The experimental results with a number of synthetic workflows and several real science workflows demonstrate that PBTS estimates the resource capacity close to the theoretical low bound. © 2011 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840543_6",
"text": "It is shown by an extensive benchmark on molecular energy data that the mathematical form of the damping function in DFT-D methods has only a minor impact on the quality of the results. For 12 different functionals, a standard \"zero-damping\" formula and rational damping to finite values for small interatomic distances according to Becke and Johnson (BJ-damping) has been tested. The same (DFT-D3) scheme for the computation of the dispersion coefficients is used. The BJ-damping requires one fit parameter more for each functional (three instead of two) but has the advantage of avoiding repulsive interatomic forces at shorter distances. With BJ-damping better results for nonbonded distances and more clear effects of intramolecular dispersion in four representative molecular structures are found. For the noncovalently-bonded structures in the S22 set, both schemes lead to very similar intermolecular distances. For noncovalent interaction energies BJ-damping performs slightly better but both variants can be recommended in general. The exception to this is Hartree-Fock that can be recommended only in the BJ-variant and which is then close to the accuracy of corrected GGAs for non-covalent interactions. According to the thermodynamic benchmarks BJ-damping is more accurate especially for medium-range electron correlation problems and only small and practically insignificant double-counting effects are observed. It seems to provide a physically correct short-range behavior of correlation/dispersion even with unmodified standard functionals. In any case, the differences between the two methods are much smaller than the overall dispersion effect and often also smaller than the influence of the underlying density functional.",
"title": ""
},
{
"docid": "neg:1840543_7",
"text": "The foundation of a process model lies in its structural specifications. Using a generic process modeling language for workflows, we show how a structural specification may contain deadlock and lack of synchronization conflicts that could compromise the correct execution of workflows. In general, identification of such conflicts is a computationally complex problem and requires development of effective algorithms specific for the target modeling language. We present a visual verification approach and algorithm that employs a set of graph reduction rules to identify structural conflicts in process models for the given workflow modeling language. We also provide insights into the correctness and complexity of the reduction process. Finally, we show how the reduction algorithm may be used to count possible instance subgraphs of a correct process model. The main contribution of the paper is a new technique for satisfying well-defined correctness criteria in process models.",
"title": ""
},
{
"docid": "neg:1840543_8",
"text": "The class scheduling problem can be modeled by a graph where the vertices and edges represent the courses and the common students, respectively. The problem is to assign the courses a given number of time slots (colors), where each time slot can be used for a given number of class rooms. The Vertex Coloring (VC) algorithm is a polynomial time algorithm which produces a conflict free solution using the least number of colors [9]. However, the VC solution may not be implementable because it uses a number of time slots that exceed the available ones with unbalanced use of class rooms. We propose a heuristic approach VC* to (1) promote uniform distribution of courses over the colors and to (2) balance course load for each time slot over the available class rooms. The performance function represents the percentage of students in all courses that could not be mapped to time slots or to class rooms. A randomized simulation of registration of four departments with up to 1200 students is used to evaluate the performance of proposed heuristic.",
"title": ""
},
{
"docid": "neg:1840543_9",
"text": "The proliferation of mobile computing and smartphone technologies has resulted in an increasing number and range of services from myriad service providers. These mobile service providers support numerous emerging services with differing quality metrics but similar functionality. Facilitating an automated service workflow requires fast selection and composition of services from the services pool. The mobile environment is ambient and dynamic in nature, requiring more efficient techniques to deliver the required service composition promptly to users. Selecting the optimum required services in a minimal time from the numerous sets of dynamic services is a challenge. This work addresses the challenge as an optimization problem. An algorithm is developed by combining particle swarm optimization and k-means clustering. It runs in parallel using MapReduce in the Hadoop platform. By using parallel processing, the optimum service composition is obtained in significantly less time than alternative algorithms. This is essential for handling large amounts of heterogeneous data and services from various sources in the mobile environment. The suitability of this proposed approach for big data-driven service composition is validated through modeling and simulation.",
"title": ""
},
{
"docid": "neg:1840543_10",
"text": "Purpose – The purpose of this paper is to highlight the potential role that the so-called “toxic triangle” (Padilla et al., 2007) can play in undermining the processes around effectiveness. It is the interaction between leaders, organisational members, and the environmental context in which those interactions occur that has the potential to generate dysfunctional behaviours and processes. The paper seeks to set out a set of issues that would seem to be worthy of further consideration within the Journal and which deal with the relationships between organisational effectiveness and the threats from insiders. Design/methodology/approach – The paper adopts a systems approach to the threats from insiders and the manner in which it impacts on organisation effectiveness. The ultimate goal of the paper is to stimulate further debate and discussion around the issues. Findings – The paper adds to the discussions around effectiveness by highlighting how senior managers can create the conditions in which failure can occur through the erosion of controls, poor decision making, and the creation of a culture that has the potential to generate failure. Within this setting, insiders can serve to trigger a series of failures by their actions and for which the controls in place are either ineffective or have been by-passed as a result of insider knowledge. Research limitations/implications – The issues raised in this paper need to be tested empirically as a means of providing a clear evidence base in support of their relationships with the generation of organisational ineffectiveness. Practical implications – The paper aims to raise awareness and stimulate thinking by practising managers around the role that the “toxic triangle” of issues can play in creating the conditions by which organisations can incubate the potential for crisis. Originality/value – The paper seeks to bring together a disparate body of published work within the context of “organisational effectiveness” and sets out a series of dark characteristics that organisations need to consider if they are to avoid failure. The paper argues the case that effectiveness can be a fragile construct and that the mechanisms that generate failure also need to be actively considered when discussing what effectiveness means in practice.",
"title": ""
},
{
"docid": "neg:1840543_11",
"text": "We study the adaptation of convolutional neural networks to the complex temporal radio signal domain. We compare the efficacy of radio modulation classification using naively learned features against using expert features, which are currently used widely and well regarded in the field and we show significant performance improvements. We show that blind temporal learning on large and densely encoded time series using deep convolutional neural networks is viable and a strong candidate approach for this task.",
"title": ""
},
{
"docid": "neg:1840543_12",
"text": "A millimeter-wave filtering monopulse antenna array based on substrate integrated waveguide (SIW) technology is proposed, manufactured, and tested in this communication. The proposed antenna array consists of a filter, a monopulse comparator, a feed network, and four antennas. A square dual-mode SIW cavity is designed to realize the monopulse comparator, in which internal coupling slots are located at its diagonal lines for the purpose of meeting the internal coupling coefficiencies in both sum and difference channels. Then, a four-output filter including the monopulse comparator is synthesized efficiently by modifying the coupling matrix of a single-ended filter. Finally, each SIW resonator coupled with those four outputs of the filter is replaced by a cavity-backed slot antenna so as to form the proposed filtering antenna array. A prototype is demonstrated at Ka band with a center frequency of 29.25 GHz and fractional bandwidth of 1.2%. Our measurement shows that, for the H-plane, the sidelobe levels of the sum pattern are less than -15 dB and the null depths of the difference pattern are less than -28 dB. The maximum measured gain of the sum beam at the center operating frequency is 8.1 dBi.",
"title": ""
},
{
"docid": "neg:1840543_13",
"text": "Automatically constructing a complete documentary or educational film from scattered pieces of images and knowledge is a significant challenge. Even when this information is provided in an annotated format, the problems of ordering, structuring and animating sequences of images, and producing natural language descriptions that correspond to those images within multiple constraints, are each individually difficult tasks. This paper describes an approach for tackling these problems through a combination of rhetorical structures with narrative and film theory to produce movie-like visual animations from still images along with natural language generation techniques needed to produce text descriptions of what is being seen in the animations. The use of rhetorical structures from NLG is used to integrate separate components for video creation and script generation. We further describe an implementation, named GLAMOUR, that produces actual, short video documentaries, focusing on a cultural heritage domain, and that have been evaluated by professional filmmakers. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840543_14",
"text": "In the research reported here, we investigated the debiasing effect of mindfulness meditation on the sunk-cost bias. We conducted four studies (one correlational and three experimental); the results suggest that increased mindfulness reduces the tendency to allow unrecoverable prior costs to influence current decisions. Study 1 served as an initial correlational demonstration of the positive relationship between trait mindfulness and resistance to the sunk-cost bias. Studies 2a and 2b were laboratory experiments examining the effect of a mindfulness-meditation induction on increased resistance to the sunk-cost bias. In Study 3, we examined the mediating mechanisms of temporal focus and negative affect, and we found that the sunk-cost bias was attenuated by drawing one's temporal focus away from the future and past and by reducing state negative affect, both of which were accomplished through mindfulness meditation.",
"title": ""
},
{
"docid": "neg:1840543_15",
"text": "This paper considers innovative marketing within the context of a micro firm, exploring how such firm’s marketing practices can take advantage of digital media. Factors that influence a micro firm’s innovative activities are examined and the development and implementation of digital media in the firm’s marketing practice is explored. Despite the significance of marketing and innovation to SMEs, a lack of literature and theory on innovation in marketing theory exists. Research suggests that small firms’ marketing practitioners and entrepreneurs have identified their marketing focus on the 4Is. This paper builds on knowledge in innovation and marketing and examines the process in a micro firm. A qualitative approach is applied using action research and case study approach. The relevant literature is reviewed as the starting point to diagnose problems and issues anticipated by business practitioners. A longitudinal study is used to illustrate the process of actions taken with evaluations and reflections presented. The exploration illustrates that in practice much of the marketing activities within micro firms are driven by incremental innovation. This research emphasises that integrating Information Communication Technologies (ICTs) successfully in marketing requires marketers to take an active managerial role far beyond their traditional areas of competence and authority.",
"title": ""
},
{
"docid": "neg:1840543_16",
"text": "As computational learning agents move into domains that incur real costs (e.g., autonomous driving or financial investment), it will be necessary to learn good policies without numerous high-cost learning trials. One promising approach to reducing sample complexity of learning a task is knowledge transfer from humans to agents. Ideally, methods of transfer should be accessible to anyone with task knowledge, regardless of that person's expertise in programming and AI. This paper focuses on allowing a human trainer to interactively shape an agent's policy via reinforcement signals. Specifically, the paper introduces \"Training an Agent Manually via Evaluative Reinforcement,\" or TAMER, a framework that enables such shaping. Differing from previous approaches to interactive shaping, a TAMER agent models the human's reinforcement and exploits its model by choosing actions expected to be most highly reinforced. Results from two domains demonstrate that lay users can train TAMER agents without defining an environmental reward function (as in an MDP) and indicate that human training within the TAMER framework can reduce sample complexity over autonomous learning algorithms.",
"title": ""
},
{
"docid": "neg:1840543_17",
"text": "To bridge the gap between humans and machines in image understanding and describing, we need further insight into how people describe a perceived scene. In this paper, we study the agreement between bottom-up saliency-based visual attention and object referrals in scene description constructs. We investigate the properties of human-written descriptions and machine-generated ones. We then propose a saliency-boosted image captioning model in order to investigate benefits from low-level cues in language models. We learn that (1) humans mention more salient objects earlier than less salient ones in their descriptions, (2) the better a captioning model performs, the better attention agreement it has with human descriptions, (3) the proposed saliencyboosted model, compared to its baseline form, does not improve significantly on the MS COCO database, indicating explicit bottom-up boosting does not help when the task is well learnt and tuned on a data, (4) a better generalization ability is, however, observed for the saliency-boosted model on unseen data.",
"title": ""
},
{
"docid": "neg:1840543_18",
"text": "This paper proposes a novel reference signal generation method for the unified power quality conditioner (UPQC) adopted to compensate current and voltage-quality problems of sensitive loads. The UPQC consists of a shunt and series converter having a common dc link. The shunt converter eliminates current harmonics originating from the nonlinear load side and the series converter mitigates voltage sag/swell originating from the supply side. The developed controllers for shunt and series converters are based on an enhanced phase-locked loop and nonlinear adaptive filter. The dc link control strategy is based on the fuzzy-logic controller. A fast sag/swell detection method is also presented. The efficacy of the proposed system is tested through simulation studies using the Power System Computer Aided Design/Electromagnetic Transients dc analysis program. The proposed UPQC achieves superior capability of mitigating the effects of voltage sag/swell and suppressing the load current harmonics under distorted supply conditions.",
"title": ""
}
] |
1840544 | Diet eyeglasses: Recognising food chewing using EMG and smart eyeglasses | [
{
"docid": "pos:1840544_0",
"text": "Maintaining appropriate levels of food intake anddeveloping regularity in eating habits is crucial to weight lossand the preservation of a healthy lifestyle. Moreover, maintainingawareness of one's own eating habits is an important steptowards portion control and ultimately, weight loss. Though manysolutions have been proposed in the area of physical activitymonitoring, few works attempt to monitor an individual's foodintake by means of a noninvasive, wearable platform. In thispaper, we introduce a novel nutrition-intake monitoring systembased around a wearable, mobile, wireless-enabled necklacefeaturing an embedded piezoelectric sensor. We also propose aframework capable of estimating volume of meals, identifyinglong-term trends in eating habits, and providing classificationbetween solid foods and liquids with an F-Measure of 85% and86% respectively. The data is presented to the user in the formof a mobile application.",
"title": ""
}
] | [
{
"docid": "neg:1840544_0",
"text": "Modelling and exploiting teammates’ policies in cooperative multi-agent systems have long been an interest and also a big challenge for the reinforcement learning (RL) community. The interest lies in the fact that if the agent knows the teammates’ policies, it can adjust its own policy accordingly to arrive at proper cooperations; while the challenge is that the agents’ policies are changing continuously due to they are learning concurrently, which imposes difficulty to model the dynamic policies of teammates accurately. In this paper, we present ATTention Multi-Agent Deep Deterministic Policy Gradient (ATT-MADDPG) to address this challenge. ATT-MADDPG extends DDPG, a single-agent actor-critic RL method, with two special designs. First, in order to model the teammates’ policies, the agent should get access to the observations and actions of teammates. ATT-MADDPG adopts a centralized critic to collect such information. Second, to model the teammates’ policies using the collected information in an effective way, ATT-MADDPG enhances the centralized critic with an attention mechanism. This attention mechanism introduces a special structure to explicitly model the dynamic joint policy of teammates, making sure that the collected information can be processed efficiently. We evaluate ATT-MADDPG on both benchmark tasks and the real-world packet routing tasks. Experimental results show that it not only outperforms the state-of-the-art RL-based methods and rule-based methods by a large margin, but also achieves better performance in terms of scalability and robustness.",
"title": ""
},
{
"docid": "neg:1840544_1",
"text": "A burgeoning interest in the intersection of neuroscience and architecture promises to offer biologically inspired insights into the design of spaces. The goal of such interdisciplinary approaches to architecture is to motivate construction of environments that would contribute to peoples' flourishing in behavior, health, and well-being. We suggest that this nascent field of neuroarchitecture is at a pivotal point in which neuroscience and architecture are poised to extend to a neuroscience of architecture. In such a research program, architectural experiences themselves are the target of neuroscientific inquiry. Here, we draw lessons from recent developments in neuroaesthetics to suggest how neuroarchitecture might mature into an experimental science. We review the extant literature and offer an initial framework from which to contextualize such research. Finally, we outline theoretical and technical challenges that lie ahead.",
"title": ""
},
{
"docid": "neg:1840544_2",
"text": "Although there is no analytical framework for assessing the organizational benefits of ERP systems, several researchers have indicated that the balanced scorecard (BSC) approach may be an appropriate technique for evaluating the performance of ERP systems. This paper fills this gap in the literature by providing a balanced-scorecard based framework for valuing the strategic contributions of an ERP system. Using a successful SAP implementation by a major international aircraft engine manufacturing and service organization as a case study, this paper illustrates that an ERP system does indeed impacts the business objectives of the firm and derives a new innovative ERP framework for valuing the strategic impacts of ERP systems. The ERP valuation framework, called here an ERP scorecard, integrates the four Kaplan and Norton’s balanced scorecard dimensions with Zuboff’s automate, informate and transformate goals of information systems to provide a practical approach for measuring the contributions and impacts of ERP systems on the strategic goals of the company. # 2005 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "neg:1840544_3",
"text": "A method is presented to assess stability changes in waves in early-stage ship design. The method is practical: the calculations can be completed quickly and can be applied as soon as lines are available. The intended use of the described method is for preliminary analysis. If stability changes that result in large roll motion are indicated early in the design process, this permits planning and budgeting for direct assessments using numerical simulations and/or model experiments. The main use of the proposed method is for the justification for hull form shape modification or for necessary additional analysis to better quantify potentially increased stability risk. The method is based on the evaluation of changing stability in irregular seas and can be applied to any type of ship. To demonstrate the robustness of the method, results for ten naval ship types are presented and discussed. The proposed method is shown to identify ships with known risk for large stability changes in waves.",
"title": ""
},
{
"docid": "neg:1840544_4",
"text": "This study assessed embodied simulation via electromyography (EMG) as participants first encoded emotionally ambiguous faces with emotion concepts (i.e., \"angry,\"\"happy\") and later passively viewed the faces without the concepts. Memory for the faces was also measured. At initial encoding, participants displayed more smiling-related EMG activity in response to faces paired with \"happy\" than in response to faces paired with \"angry.\" Later, in the absence of concepts, participants remembered happiness-encoded faces as happier than anger-encoded faces. Further, during passive reexposure to the ambiguous faces, participants' EMG indicated spontaneous emotion-specific mimicry, which in turn predicted memory bias. No specific EMG activity was observed when participants encoded or viewed faces with non-emotion-related valenced concepts, or when participants encoded or viewed Chinese ideographs. From an embodiment perspective, emotion simulation is a measure of what is currently perceived. Thus, these findings provide evidence of genuine concept-driven changes in emotion perception. More generally, the findings highlight embodiment's role in the representation and processing of emotional information.",
"title": ""
},
{
"docid": "neg:1840544_5",
"text": "Smart parking is a typical IoT application that can benefit from advances in sensor, actuator and RFID technologies to provide many services to its users and parking owners of a smart city. This paper considers a smart parking infrastructure where sensors are laid down on the parking spots to detect car presence and RFID readers are embedded into parking gates to identify cars and help in the billing of the smart parking. Both types of devices are endowed with wired and wireless communication capabilities for reporting to a gateway where the situation recognition is performed. The sensor devices are tasked to play one of the three roles: (1) slave sensor nodes located on the parking spot to detect car presence/absence; (2) master nodes located at one of the edges of a parking lot to detect presence and collect the sensor readings from the slave nodes; and (3) repeater sensor nodes, also called \"anchor\" nodes, located strategically at specific locations in the parking lot to increase the coverage and connectivity of the wireless sensor network. While slave and master nodes are placed based on geographic constraints, the optimal placement of the relay/anchor sensor nodes in smart parking is an important parameter upon which the cost and efficiency of the parking system depends. We formulate the optimal placement of sensors in smart parking as an integer linear programming multi-objective problem optimizing the sensor network engineering efficiency in terms of coverage and lifetime maximization, as well as its economic gain in terms of the number of sensors deployed for a specific coverage and lifetime. We propose an exact solution to the node placement problem using single-step and two-step solutions implemented in the Mosel language based on the Xpress-MPsuite of libraries. Experimental results reveal the relative efficiency of the single-step compared to the two-step model on different performance parameters. These results are consolidated by simulation results, which reveal that our solution outperforms a random placement in terms of both energy consumption, delay and throughput achieved by a smart parking network.",
"title": ""
},
{
"docid": "neg:1840544_6",
"text": "Peer-to-peer (P2P) lending is a fast growing financial technology (FinTech) trend that is displacing traditional retail banking. Studies on P2P lending have focused on predicting individual interest rates or default probabilities. However, the relationship between aggregated P2P interest rates and the general economy will be of interest to investors and borrowers as the P2P credit market matures. We show that the variation in P2P interest rates across grade types are determined by three macroeconomic latent factors formed by Canonical Correlation Analysis (CCA) — macro default, investor uncertainty, and the fundamental value of the market. However, the variation in P2P interest rates across term types cannot be explained by the general economy.",
"title": ""
},
{
"docid": "neg:1840544_7",
"text": "1,2,3,4 Department of Information Technology, Matoshri Collage of Engineering & Reasearch Centre Eklahare, Nashik, India ---------------------------------------------------------------------***--------------------------------------------------------------------Abstract Waste management is one of the primary problem that the world faces irrespective of the case of developed or developing country. The key issue in the waste management is that the garbage bin at public places gets overflowed well in advance before the commencement of the next cleaning process. It in turn leads to various hazards such as bad odor & ugliness to that place which may be the root cause for spread of various diseases. To avoid all such hazardous scenario and maintain public cleanliness and health this work is mounted on a smart garbage system. The main theme of the work is to develop a smart intelligent garbage alert system for a proper garbage management .This paper proposes a smart alert system for garbage clearance by giving an alert signal to the municipal web server for instant cleaning of dustbin with proper verification based on level of garbage filling. This process is aided by the ultrasonic sensor which is interfaced with Arduino UNO to check the level of garbage filled in the dustbin and sends the alert to the municipal web server once if garbage is filled . After cleaning the dustbin, the driver confirms the task of emptying the garbage with the aid of RFID Tag. RFID is a computing technology that is used for verification process and in addition, it also enhances the smart garbage alert system by providing automatic identification of garbage filled in the dustbin and sends the status of clean-up to the server affirming that the work is done. The whole process is upheld by an embedded module integrated with RF ID and IOT Facilitation. The real time status of how waste collection is being done could be monitored and followed up by the municipality authority with the aid of this system. In addition to this the necessary remedial / alternate measures could be adapted. An Android application is developed and linked to a web server to intimate the alerts from the microcontroller to the urban office and to perform the remote monitoring of the cleaning process, done by the workers, thereby reducing the manual process of monitoring and verification. The notifications are sent to the Android application using Wi-Fi module.",
"title": ""
},
{
"docid": "neg:1840544_8",
"text": "In the research of rule extraction from neural networks,fidelity describes how well the rules mimic the behavior of a neural network whileaccuracy describes how well the rules can be generalized. This paper identifies thefidelity-accuracy dilemma. It argues to distinguishrule extraction using neural networks andrule extraction for neural networks according to their different goals, where fidelity and accuracy should be excluded from the rule quality evaluation framework, respectively.",
"title": ""
},
{
"docid": "neg:1840544_9",
"text": "In this paper we present an automated way of using spare CPU resources within a shared memory multi-processor or multi-core machine. Our approach is (i) to profile the execution of a program, (ii) from this to identify pieces of work which are promising sources of parallelism, (iii) recompile the program with this work being performed speculatively via a work-stealing system and then (iv) to detect at run-time any attempt to perform operations that would reveal the presence of speculation.\n We assess the practicality of the approach through an implementation based on GHC 6.6 along with a limit study based on the execution profiles we gathered. We support the full Concurrent Haskell language compiled with traditional optimizations and including I/O operations and synchronization as well as pure computation. We use 20 of the larger programs from the 'nofib' benchmark suite. The limit study shows that programs vary a lot in the parallelism we can identify: some have none, 16 have a potential 2x speed-up, 4 have 32x. In practice, on a 4-core processor, we get 10-80% speed-ups on 7 programs. This is mainly achieved at the addition of a second core rather than beyond this.\n This approach is therefore not a replacement for manual parallelization, but rather a way of squeezing extra performance out of the threads of an already-parallel program or out of a program that has not yet been parallelized.",
"title": ""
},
{
"docid": "neg:1840544_10",
"text": "Building a computationally creative system is a challenging undertaking. While such systems are beginning to proliferate, and a good number of them have been reasonably well-documented, it may seem, especially to newcomers to the field, that each system is a bespoke design that bears little chance of revealing any general knowledge about CC system building. This paper seeks to dispel this concern by presenting an abstract CC system description, or, in other words a practical, general approach for constructing CC systems.",
"title": ""
},
{
"docid": "neg:1840544_11",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "neg:1840544_12",
"text": "Educational data mining and learning analytics promise better understanding of student behavior and knowledge, as well as new information on the tacit factors that contribute to student actions. This knowledge can be used to inform decisions related to course and tool design and pedagogy, and to further engage students and guide those at risk of failure. This working group report provides an overview of the body of knowledge regarding the use of educational data mining and learning analytics focused on the teaching and learning of programming. In a literature survey on mining students' programming processes for 2005-2015, we observe a significant increase in work related to the field. However, the majority of the studies focus on simplistic metric analysis and are conducted within a single institution and a single course. This indicates the existence of further avenues of research and a critical need for validation and replication to better understand the various contributing factors and the reasons why certain results occur. We introduce a novel taxonomy to analyse replicating studies and discuss the importance of replicating and reproducing previous work. We describe what is the state of the art in collecting and sharing programming data. To better understand the challenges involved in replicating or reproducing existing studies, we report our experiences from three case studies using programming data. Finally, we present a discussion of future directions for the education and research community.",
"title": ""
},
{
"docid": "neg:1840544_13",
"text": "Convolutional Neural Networks (CNN) have been successfully applied to autonomous driving tasks, many in an end-to-end manner. Previous end-to-end steering control methods take an image or an image sequence as the input and directly predict the steering angle with CNN. Although single task learning on steering angles has reported good performances, the steering angle alone is not sufficient for vehicle control. In this work, we propose a multi-task learning framework to predict the steering angle and speed control simultaneously in an end-to-end manner. Since it is nontrivial to predict accurate speed values with only visual inputs, we first propose a network to predict discrete speed commands and steering angles with image sequences. Moreover, we propose a multi-modal multi-task network to predict speed values and steering angles by taking previous feedback speeds and visual recordings as inputs. Experiments are conducted on the public Udacity dataset and a newly collected SAIC dataset. Results show that the proposed model predicts steering angles and speed values accurately. Furthermore, we improve the failure data synthesis methods to solve the problem of error accumulation in real road tests.",
"title": ""
},
{
"docid": "neg:1840544_14",
"text": "The functions of rewards are based primarily on their effects on behavior and are less directly governed by the physics and chemistry of input events as in sensory systems. Therefore, the investigation of neural mechanisms underlying reward functions requires behavioral theories that can conceptualize the different effects of rewards on behavior. The scientific investigation of behavioral processes by animal learning theory and economic utility theory has produced a theoretical framework that can help to elucidate the neural correlates for reward functions in learning, goal-directed approach behavior, and decision making under uncertainty. Individual neurons can be studied in the reward systems of the brain, including dopamine neurons, orbitofrontal cortex, and striatum. The neural activity can be related to basic theoretical terms of reward and uncertainty, such as contiguity, contingency, prediction error, magnitude, probability, expected value, and variance.",
"title": ""
},
{
"docid": "neg:1840544_15",
"text": "We present a system for Answer Selection that integrates fine-grained Question Classification with a Deep Learning model designed for Answer Selection. We detail the necessary changes to the Question Classification taxonomy and system, the creation of a new Entity Identification system and methods of highlighting entities to achieve this objective. Our experiments show that Question Classes are a strong signal to Deep Learning models for Answer Selection, and enable us to outperform the current state of the art in all variations of our experiments except one. In the best configuration, our MRR and MAP scores outperform the current state of the art by between 3 and 5 points on both versions of the TREC Answer Selection test set, a standard dataset for this task.",
"title": ""
},
{
"docid": "neg:1840544_16",
"text": "Dynamic imaging is a recently proposed action description paradigm for simultaneously capturing motion and temporal evolution information, particularly in the context of deep convolutional neural networks (CNNs). Compared with optical flow for motion characterization, dynamic imaging exhibits superior efficiency and compactness. Inspired by the success of dynamic imaging in RGB video, this study extends it to the depth domain. To better exploit three-dimensional (3D) characteristics, multi-view dynamic images are proposed. In particular, the raw depth video is densely projected with ∗Corresponding author. Tel.: +86 27 87558918 Email addresses: Yang [email protected] (Yang Xiao), [email protected] (Jun Chen), yancheng [email protected] (Yancheng Wang), [email protected] (Zhiguo Cao), [email protected] (Joey Tianyi Zhou), [email protected] (Xiang Bai) Preprint submitted to Information Sciences December 31, 2018 ar X iv :1 80 6. 11 26 9v 3 [ cs .C V ] 2 7 D ec 2 01 8 respect to different virtual imaging viewpoints by rotating the virtual camera within the 3D space. Subsequently, dynamic images are extracted from the obtained multi-view depth videos and multi-view dynamic images are thus constructed from these images. Accordingly, more view-tolerant visual cues can be involved. A novel CNN model is then proposed to perform feature learning on multi-view dynamic images. Particularly, the dynamic images from different views share the same convolutional layers but correspond to different fully connected layers. This is aimed at enhancing the tuning effectiveness on shallow convolutional layers by alleviating the gradient vanishing problem. Moreover, as the spatial occurrence variation of the actions may impair the CNN, an action proposal approach is also put forth. In experiments, the proposed approach can achieve state-of-the-art performance on three challenging datasets.",
"title": ""
},
{
"docid": "neg:1840544_17",
"text": "Progress in science has advanced the development of human society across history, with dramatic revolutions shaped by information theory, genetic cloning, and artificial intelligence, among the many scientific achievements produced in the 20th century. However, the way that science advances itself is much less well-understood. In this work, we study the evolution of scientific development over the past century by presenting an anatomy of 89 million digitalized papers published between 1900 and 2015. We find that science has benefited from the shift from individual work to collaborative effort, with over 90% of the world-leading innovations generated by collaborations in this century, nearly four times higher than they were in the 1900s. We discover that rather than the frequent myopic- and self-referencing that was common in the early 20th century, modern scientists instead tend to look for literature further back and farther around. Finally, we also observe the globalization of scientific development from 1900 to 2015, including 25-fold and 7-fold increases in international collaborations and citations, respectively, as well as a dramatic decline in the dominant accumulation of citations by the US, the UK, and Germany, from ~95% to ~50% over the same period. Our discoveries are meant to serve as a starter for exploring the visionary ways in which science has developed throughout the past century, generating insight into and an impact upon the current scientific innovations and funding policies.",
"title": ""
},
{
"docid": "neg:1840544_18",
"text": "This report describes the difficulties of training neural networks and in particular deep neural networks. It then provides a literature review of training methods for deep neural networks, with a focus on pre-training. It focuses on Deep Belief Networks composed of Restricted Boltzmann Machines and Stacked Autoencoders and provides an outreach on further and alternative approaches. It also includes related practical recommendations from the literature on training them. In the second part, initial experiments using some of the covered methods are performed on two databases. In particular, experiments are performed on the MNIST hand-written digit dataset and on facial emotion data from a Kaggle competition. The results are discussed in the context of results reported in other research papers. An error rate lower than the best contribution to the Kaggle competition is achieved using an optimized Stacked Autoencoder.",
"title": ""
},
{
"docid": "neg:1840544_19",
"text": "This paper presents a non-intrusive approach for monitoring driver drowsiness using the fusion of several optimized indicators based on driver physical and driving performance measures, obtained from ADAS (Advanced Driver Assistant Systems) in simulated conditions. The paper is focused on real-time drowsiness detection technology rather than on long-term sleep/awake regulation prediction technology. We have developed our own vision system in order to obtain robust and optimized driver indicators able to be used in simulators and future real environments. These indicators are principally based on driver physical and driving performance skills. The fusion of several indicators, proposed in the literature, is evaluated using a neural network and a stochastic optimization method to obtain the best combination. We propose a new method for ground-truth generation based on a supervised Karolinska Sleepiness Scale (KSS). An extensive evaluation of indicators, derived from trials over a third generation simulator with several test subjects during different driving sessions, was performed. The main conclusions about the performance of single indicators and the best combinations of them are included, as well as the future works derived from this study.",
"title": ""
}
] |
1840545 | LightBox: SGX-assisted Secure Network Functions at Near-native Speed | [
{
"docid": "pos:1840545_0",
"text": "Intel is developing the Intel® Software Guard Extensions (Intel® SGX) technology, an extension to Intel® Architecture for generating protected software containers. The container is referred to as an enclave. Inside the enclave, software’s code, data, and stack are protected by hardware enforced access control policies that prevent attacks against the enclave’s content. In an era where software and services are deployed over the Internet, it is critical to be able to securely provision enclaves remotely, over the wire or air, to know with confidence that the secrets are protected and to be able to save secrets in non-volatile memory for future use. This paper describes the technology components that allow provisioning of secrets to an enclave. These components include a method to generate a hardware based attestation of the software running inside an enclave and a means for enclave software to seal secrets and export them outside of the enclave (for example store them in non-volatile memory) such that only the same enclave software would be able un-seal them back to their original form.",
"title": ""
},
{
"docid": "pos:1840545_1",
"text": "Many systems run rich analytics on sensitive data in the cloud, but are prone to data breaches. Hardware enclaves promise data confidentiality and secure execution of arbitrary computation, yet still suffer from access pattern leakage. We propose Opaque, a distributed data analytics platform supporting a wide range of queries while providing strong security guarantees. Opaque introduces new distributed oblivious relational operators that hide access patterns, and new query planning techniques to optimize these new operators. Opaque is implemented on Spark SQL with few changes to the underlying system. Opaque provides data encryption, authentication and computation verification with a performance ranging from 52% faster to 3.3x slower as compared to vanilla Spark SQL; obliviousness comes with a 1.6–46x overhead. Opaque provides an improvement of three orders of magnitude over state-of-the-art oblivious protocols, and our query optimization techniques improve performance by 2–5x.",
"title": ""
}
] | [
{
"docid": "neg:1840545_0",
"text": "The call-by-need lambda calculus provides an equational framework for reasoning syntactically about lazy evaluation. This paper examines its operational characteristics.\n By a series of reasoning steps, we systematically unpack the standard-order reduction relation of the calculus and discover a novel abstract machine definition which, like the calculus, goes \"under lambdas.\" We prove that machine evaluation is equivalent to standard-order evaluation.\n Unlike traditional abstract machines, delimited control plays a significant role in the machine's behavior. In particular, the machine replaces the manipulation of a heap using store-based effects with disciplined management of the evaluation stack using control-based effects. In short, state is replaced with control.\n To further articulate this observation, we present a simulation of call-by-need in a call-by-value language using delimited control operations.",
"title": ""
},
{
"docid": "neg:1840545_1",
"text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.",
"title": ""
},
{
"docid": "neg:1840545_2",
"text": "WiFi network traffics will be expected to increase sharply in the coming years, since WiFi network is commonly used for local area connectivity. Unfortunately, there are difficulties in WiFi network research beforehand, since there is no common dataset between researchers on this area. Recently, AWID dataset was published as a comprehensive WiFi network dataset, which derived from real WiFi traces. The previous work on this AWID dataset was unable to classify Impersonation Attack sufficiently. Hence, we focus on optimizing the Impersonation Attack detection. Feature selection can overcome this problem by selecting the most important features for detecting an arbitrary class. We leverage Artificial Neural Network (ANN) for the feature selection and apply Stacked Auto Encoder (SAE), a deep learning algorithm as a classifier for AWID Dataset. Our experiments show that the reduced input features have significantly improved to detect the Impersonation Attack.",
"title": ""
},
{
"docid": "neg:1840545_3",
"text": "We compare nonreturn-to-zero (NRZ) with return-to-zero (RZ) modulation format for wavelength-division-multiplexed systems operating at data rates up to 40 Gb/s. We find that in 10-40-Gb/s dispersion-managed systems (single-mode fiber alternating with dispersion compensating fiber), NRZ is more adversely affected by nonlinearities, whereas RZ is more affected by dispersion. In this dispersion map, 10- and 20-Gb/s systems operate better using RZ modulation format because nonlinearity dominates. However, 40-Gb/s systems favor the usage of NRZ because dispersion becomes the key limiting factor at 40 Gb/s.",
"title": ""
},
{
"docid": "neg:1840545_4",
"text": "We focus on the role that community plays in the continuum of disaster preparedness, response and recovery, and we explore where community fits in conceptual frameworks concerning disaster decision-making. We offer an overview of models developed in the literature as well as insights drawn from research related to Hurricane Katrina. Each model illustrates some aspect of the spectrum of disaster preparedness and recovery, beginning with risk perception and vulnerability assessments, and proceeding to notions of resiliency and capacity building. Concepts like social resilience are related to theories of ‘‘social capital,’’ which stress the importance of social networks, reciprocity, and interpersonal trust. These allow individuals and groups to accomplish greater things than they could by their isolated efforts. We trace two contrasting notions of community to Tocqueville. On the one hand, community is simply an aggregation of individual persons, that is, a population. As individuals, they have only limited capacity to act effectively or make decisions for themselves, and they are strongly subject to administrative decisions that authorities impose on them. On the other hand, community is an autonomous actor, with its own interests, preferences, resources, and capabilities. This definition of community has also been embraced by community-based participatory researchers and has been thought to offer an approach that is more active and advocacy oriented. We conclude with a discussion of the strengths and weaknesses of community in disaster response and in disaster research.",
"title": ""
},
{
"docid": "neg:1840545_5",
"text": "In recent years, real-time processing and analytics systems for big data--in the context of Business Intelligence (BI)--have received a growing attention. The traditional BI platforms that perform regular updates on daily, weekly or monthly basis are no longer adequate to satisfy the fast-changing business environments. However, due to the nature of big data, it has become a challenge to achieve the real-time capability using the traditional technologies. The recent distributed computing technology, MapReduce, provides off-the-shelf high scalability that can significantly shorten the processing time for big data; Its open-source implementation such as Hadoop has become the de-facto standard for processing big data, however, Hadoop has the limitation of supporting real-time updates. The improvements in Hadoop for the real-time capability, and the other alternative real-time frameworks have been emerging in recent years. This paper presents a survey of the open source technologies that support big data processing in a real-time/near real-time fashion, including their system architectures and platforms.",
"title": ""
},
{
"docid": "neg:1840545_6",
"text": "Variable speed operation is essential for large wind turbines in order to optimize the energy capture under variable wind speed conditions. Variable speed wind turbines require a power electronic interface converter to permit connection with the grid. The power electronics can be either partially-rated or fully-rated [1]. A popular interface method for large wind turbines that is based on a partiallyrated interface is the doubly-fed induction generator (DFIG) system [2]. In the DFIG system, the power electronic interface controls the rotor currents in order to control the electrical torque and thus the rotational speed. Because the power electronics only process the rotor power, which is typically less than 25% of the overall output power, the DFIG offers the advantages of speed control for a reduction in cost and power losses. This report presents a DFIG wind turbine system that is modeled in PLECS and Simulink. A full electrical model that includes the switching converter implementation for the rotor-side power electronics and a dq model of the induction machine is given. The aerodynamics of the wind turbine and the mechanical dynamics of the induction machine are included to extend the use of the model to simulating system operation under variable wind speed conditions. For longer simulations that include these slower mechanical and wind dynamics, an averaged PWM converter model is presented. The averaged electrical model offers improved simulation speed at the expense of neglecting converter switching detail.",
"title": ""
},
{
"docid": "neg:1840545_7",
"text": "Visual Question Answering (VQA) is the task of taking as input an image and a free-form natural language question about the image, and producing an accurate answer. In this work we view VQA as a “feature extraction” module to extract image and caption representations. We employ these representations for the task of image-caption ranking. Each feature dimension captures (imagines) whether a fact (question-answer pair) could plausibly be true for the image and caption. This allows the model to interpret images and captions from a wide variety of perspectives. We propose score-level and representation-level fusion models to incorporate VQA knowledge in an existing state-of-the-art VQA-agnostic image-caption ranking model. We find that incorporating and reasoning about consistency between images and captions significantly improves performance. Concretely, our model improves state-of-the-art on caption retrieval by 7.1% and on image retrieval by 4.4% on the MSCOCO dataset.",
"title": ""
},
{
"docid": "neg:1840545_8",
"text": "Code obfuscation is a technique to transform a program into an equivalent one that is harder to be reverse engineered and understood. On Android, well-known obfuscation techniques are shrinking, optimization, renaming, string encryption, control flow transformation, etc. On the other hand, adversaries may also maliciously use obfuscation techniques to hide pirated or stolen software. If pirated software were obfuscated, it would be difficult to detect software theft. To detect illegal software transformed by code obfuscation, one possible approach is to measure software similarity between original and obfuscated programs and determine whether the obfuscated version is an illegal copy of the original version. In this paper, we analyze empirically the effects of code obfuscation on Android app similarity analysis. The empirical measurements were done on five different Android apps with DashO obfuscator. Experimental results show that similarity measures at bytecode level are more effective than those at source code level to analyze software similarity.",
"title": ""
},
{
"docid": "neg:1840545_9",
"text": "PATIENT\nMale, 70 • Male, 84.\n\n\nFINAL DIAGNOSIS\nAppendiceal mucocele and pseudomyxoma peritonei.\n\n\nSYMPTOMS\n-.\n\n\nMEDICATION\n-.\n\n\nCLINICAL PROCEDURE\n-.\n\n\nSPECIALTY\nSurgery.\n\n\nOBJECTIVE\nRare disease.\n\n\nBACKGROUND\nMucocele of the appendix is an uncommon cystic lesion characterized by distension of the appendiceal lumen with mucus. Most commonly, it is the result of epithelial proliferation, but it can also be caused by inflammation or obstruction of the appendix. When an underlying mucinous cystadenocarcinoma exists, spontaneous or iatrogenic rupture of the mucocele can lead to mucinous intraperitoneal ascites, a syndrome known as pseudomyxoma peritonei.\n\n\nCASE REPORT\nWe report 2 cases that represent the clinical extremities of this heterogeneous disease; an asymptomatic mucocele of the appendix in a 70-year-old female and a case of pseudomyxoma peritonei in an 84-year-old male. Subsequently, we review the current literature focusing to the optimal management of both conditions.\n\n\nCONCLUSIONS\nMucocele of the appendix is a rare disease, usually diagnosed on histopathologic examination of appendectomized specimens. Due to the existing potential for malignant transformation and pseudomyxoma peritonei caused by rupture of the mucocele, extensive preoperative evaluation and thorough intraoperative gastrointestinal and peritoneal examination is required.",
"title": ""
},
{
"docid": "neg:1840545_10",
"text": "A fully integrated low-dropout-regulated step-down multiphase-switched-capacitor DC-DC converter (a.k.a. charge pump, CP) with a fast-response adaptive-phase (Fast-RAP) digital controller is designed using a 65-nm CMOS process. Different from conventional designs, a low-dropout regulator (LDO) with an NMOS power stage is used without the need for an additional stepup CP for driving. A clock tripler and a pulse divider are proposed to enable the Fast-RAP control. As the Fast-RAP digital controller is designed to be able to respond faster than the cascaded linear regulator, transient response will not be affected by the adaptive scheme. Thus, light-load efficiency is improved without sacrificing the response time. When the CP operates at 90 MHz with 80.3% CP efficiency, only small ripples would appear on the CP output with the 18-phase interleaving scheme, and be further attenuated at VOUT by the 50-mV dropout regulator with only 4.1% efficiency overhead and 6.5% area overhead. The output ripple is less than 2 mV for a load current of 20 mA.",
"title": ""
},
{
"docid": "neg:1840545_11",
"text": "One area of positive psychology analyzes subjective well-being (SWB), people's cognitive and affective evaluations of their lives. Progress has been made in understanding the components of SWB, the importance of adaptation and goals to feelings of well-being, the temperament underpinnings of SWB, and the cultural influences on well-being. Representative selection of respondents, naturalistic experience sampling measures, and other methodological refinements are now used to study SWB and could be used to produce national indicators of happiness.",
"title": ""
},
{
"docid": "neg:1840545_12",
"text": "In this paper we address the problem of grounding distributional representations of lexical meaning. We introduce a new model which uses stacked autoencoders to learn higher-level embeddings from textual and visual input. The two modalities are encoded as vectors of attributes and are obtained automatically from text and images, respectively. We evaluate our model on its ability to simulate similarity judgments and concept categorization. On both tasks, our approach outperforms baselines and related models.",
"title": ""
},
{
"docid": "neg:1840545_13",
"text": "The ever-increasing representativeness of software maintenance in the daily effort of software team requires initiatives for enhancing the activities accomplished to provide a good service for users who request a software improvement. This article presents a quantitative approach for evaluating software maintenance services based on cluster analysis techniques. The proposed approach provides a compact characterization of the services delivered by a maintenance organization, including characteristics such as service, waiting, and queue time. The ultimate goal is to help organizations to better understand, manage, and improve their current software maintenance process. We also report in this paper the usage of the proposed approach in a medium-sized organization throughout 2010. This case study shows that 72 software maintenance requests can be grouped in seven distinct clusters containing requests with similar characteristics. The in-depth analysis of the clusters found with our approach can foster the understanding of the nature of the requests and, consequently, it may improve the process followed by the software maintenance team.",
"title": ""
},
{
"docid": "neg:1840545_14",
"text": "or as ventricular fibrillation, the circulation must be restored promptly; otherwise anoxia will result in irreversible damage. There are two techniques that may be used to meet the emergency: one is to open the chest and massage the heart directly and the other is to accomplish the same end by a new method of closed-chest cardiac massage. The latter method is described in this communication. The closed-chest alternating current defibrillator ' that",
"title": ""
},
{
"docid": "neg:1840545_15",
"text": "LEGO is a globally popular toy composed of colorful interlocking plastic bricks that can be assembled in many ways; however, this special feature makes designing a LEGO sculpture particularly challenging. Building a stable sculpture is not easy for a beginner; even an experienced user requires a good deal of time to build one. This paper provides a novel approach to creating a balanced LEGO sculpture for a 3D model in any pose, using centroid adjustment and inner engraving. First, the input 3D model is transformed into a voxel data structure. Next, the model’s centroid is adjusted to an appropriate position using inner engraving to ensure that the model stands stably. A model can stand stably without any struts when the center of mass is moved to the ideal position. Third, voxels are merged into layer-by-layer brick layout assembly instructions. Finally, users will be able to build a LEGO sculpture by following these instructions. The proposed method is demonstrated with a number of LEGO sculptures and the results of the physical experiments are presented.",
"title": ""
},
{
"docid": "neg:1840545_16",
"text": "This paper reviews prior research in management accounting innovations covering the period 1926-2008. Management accounting innovations refer to the adoption of “newer” or modern forms of management accounting systems such as activity-based costing, activity-based management, time-driven activity-based costing, target costing, and balanced scorecards. Although some prior reviews, covering the period until 2000, place emphasis on modern management accounting techniques, however, we believe that the time gap between 2000 and 2008 could entail many new or innovative accounting issues. We find that research in management accounting innovations has intensified during the period 2000-2008, with the main focus has been on explaining various factors associated with the implementation and the outcome of an innovation. In addition, research in management accounting innovations indicates the dominant use of sociological-based theories and increasing use of field studies. We suggest some directions for future research pertaining to management accounting innovations.",
"title": ""
},
{
"docid": "neg:1840545_17",
"text": "The authors examined how networks of teams integrate their efforts to succeed collectively. They proposed that integration processes used to align efforts among multiple teams are important predictors of multiteam performance. The authors used a multiteam system (MTS) simulation to assess how both cross-team and within-team processes relate to MTS performance over multiple performance episodes that differed in terms of required interdependence levels. They found that cross-team processes predicted MTS performance beyond that accounted for by within-team processes. Further, cross-team processes were more important for MTS effectiveness when there were high cross-team interdependence demands as compared with situations in which teams could work more independently. Results are discussed in terms of extending theory and applications from teams to multiteam systems.",
"title": ""
},
{
"docid": "neg:1840545_18",
"text": "OBJECTIVE\nTo assess for the first time the morphology of the lymphatic system in patients with lipedema and lipo-lymphedema of the lower extremities by MR lymphangiography.\n\n\nMATERIALS AND METHODS\n26 lower extremities in 13 consecutive patients (5 lipedema, 8 lipo-lymphedema) were examined by MR lymphangiography. 18 mL of gadoteridol and 1 mL of mepivacainhydrochloride 1% were subdivided into 10 portions and injected intracutaneously in the forefoot. MR imaging was performed with a 1.5-T system equipped with high-performance gradients. For MR lymphangiography, a 3D-spoiled gradient-echo sequence was used. For evaluation of the lymphedema a heavily T2-weighted 3D-TSE sequence was performed.\n\n\nRESULTS\nIn all 16 lower extremities (100%) with lipo-lymphedema, high signal intensity areas in the epifascial region could be detected on the 3D-TSE sequence. In the 16 examined lower extremities with lipo-lymphedema, 8 lower legs and 3 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 3 mm. In two lower legs with lipo-lymphedema, an area of dermal back-flow was seen, indicating lymphatic outflow obstruction. In the 10 examined lower extremities with clinically pure lipedema, 4 lower legs and 2 upper legs demonstrated enlarged lymphatic vessels up to a diameter of 2 mm, indicating a subclinical status of lymphedema. In all examined extremities, the inguinal lymph nodes demonstrated a contrast material enhancement in the first image acquisition 15 min after injection.\n\n\nCONCLUSION\nMR lymphangiography is a safe and accurate minimal-invasive imaging modality for the evaluation of the lymphatic circulation in patients with lipedema and lipo-lymphedema of the lower extremities. If the extent of lymphatic involvement is unclear at the initial clinical examination or requires a better definition for optimal therapeutic planning, MR lymphangiography is able to identify the anatomic and physiological derangements and to establish an objective baseline.",
"title": ""
},
{
"docid": "neg:1840545_19",
"text": "In 1994, amongst a tide of popular books on virtual reality, Grigore Burdea and Philippe Coiffet published a well researched review of the field. Their book, “Virtual Reality Technology,” was notable because it was the first to contain detailed information on force and tactile feedback, areas in which both the authors have conducted extensive research. The book became a classic, and although not intended as such was adopted as the textbook of choice for many university classes in virtual reality. This was due in part to its broad review of the virtual reality technologies based on a strong engineering and scientific focus. Almost ten years later and Burdea and Coiffet have returned with a second edition that builds on the success of the first. While the content of the second edition is largely the same as the first, with almost identical chapter headings, there is a change in focus towards making this more of an educational tool. From their introduction on, it is clear that the authors intend for this to be used as a textbook. Each chapter is filled with definitions, graphs and equations, and ends with a set of review questions. More significantly the book has an accompanying CD which contains a number of excellent video clips and a complete laboratory manual with instruction on how to build desktop VR interfaces using VRML and Java 3D libraries. The manual is a 120 page book with 18 programming assignments and further homework questions. This book provides the instructor with almost all the material they might need for a course in virtual reality. The content itself is well written and researched. The authors have taken the material of the first book and updated much of it to reflect a decade of growth in the VR field. A strong theme running through the book is the rising dominance of PC-based virtual reality platforms, particularly in the chapter on computing architectures. Readers will be exposed to discussion on graphics rendering pipelines, PC graphics architecture, and clusters. In the fast changing world of PC hardware some of the hardware mentioned has already become dated, but the content still gives an essential grounding in the technological principles. Discussion of hardware architectures is also complemented by chapters on input and display devices, modeling, and programming toolkits. These were also in the original addition, but have been updated to reflect the invention of devices such as the Phantom force-feedback arm, or new software toolkits such as Java 3D. Interestingly, rather than having a whole chapter on force feedback, this now becomes part of a more general chapter on output devices. Burdea’s own work on the Rutgers Master glove with force feedback is barely mentioned at all. As with any book on a field as rich as virtual reality it is impossible to cover all possible topics in significant depth. The authors handle this by providing hundreds of references to the relevant technical literature, enabling readers to study topics in as much depth as they are interested in. In the first book a separate bibliography and list of VR companies and laboratories was provided at the end of the book. In the second edition, references are provided at the end of each chapter. This makes each chapter more self contained and suitable for studying in almost any order, once the introduction has been read. In this way the book provides an ideal introduction to a student or researcher who will want to know where to find out more. 
Despite its considerable strengths there are a number of weaknesses the authors might want to address when they produce a third edition. Some of these are minor. For example, the first edition had a collection of color photographs showing a variety of VR technologies and environments. Unfortunately these are missing from the second edition, and although the many black and white pictures are excellent, there are aspects of the technology that can be best understood by seeing it in color. As a teaching tool, it would have been good for the authors to provide more code samples on the enclosed CD.",
"title": ""
}
] |
1840546 | Prenatal developmental origins of behavior and mental health: The influence of maternal stress in pregnancy | [
{
"docid": "pos:1840546_0",
"text": "OBJECTIVE\nPrenatal exposure to inappropriate levels of glucocorticoids (GCs) and maternal stress are putative mechanisms for the fetal programming of later health outcomes. The current investigation examined the influence of prenatal maternal cortisol and maternal psychosocial stress on infant physiological and behavioral responses to stress.\n\n\nMETHODS\nThe study sample comprised 116 women and their full term infants. Maternal plasma cortisol and report of stress, anxiety and depression were assessed at 15, 19, 25, 31 and 36 + weeks' gestational age. Infant cortisol and behavioral responses to the painful stress of a heel-stick blood draw were evaluated at 24 hours after birth. The association between prenatal maternal measures and infant cortisol and behavioral stress responses was examined using hierarchical linear growth curve modeling.\n\n\nRESULTS\nA larger infant cortisol response to the heel-stick procedure was associated with exposure to elevated concentrations of maternal cortisol during the late second and third trimesters. Additionally, a slower rate of behavioral recovery from the painful stress of a heel-stick blood draw was predicted by elevated levels of maternal cortisol early in pregnancy as well as prenatal maternal psychosocial stress throughout gestation. These associations could not be explained by mode of delivery, prenatal medical history, socioeconomic status or child race, sex or birth order.\n\n\nCONCLUSIONS\nThese data suggest that exposure to maternal cortisol and psychosocial stress exerts programming influences on the developing fetus with consequences for infant stress regulation.",
"title": ""
}
] | [
{
"docid": "neg:1840546_0",
"text": "Methods of alloplastic forehead augmentation using soft expanded polytetrafluoroethylene (ePTFE) and silicone implants are described. Soft ePTFE forehead implantation has the advantage of being technically simpler, with better fixation. The disadvantages are a limited degree of forehead augmentation and higher chance of infection. Properly fabricated soft silicone implants provide potential for larger degree of forehead silhouette augmentation with less risk of infection. The corrugated edge and central perforations of the implant minimize mobility and capsule contraction.",
"title": ""
},
{
"docid": "neg:1840546_1",
"text": "We present a flexible method for fusing information from optical and range sensors based on an accelerated high-dimensional filtering approach. Our system takes as input a sequence of monocular camera images as well as a stream of sparse range measurements as obtained from a laser or other sensor system. In contrast with existing approaches, we do not assume that the depth and color data streams have the same data rates or that the observed scene is fully static. Our method produces a dense, high-resolution depth map of the scene, automatically generating confidence values for every interpolated depth point. We describe how to integrate priors on object motion and appearance and how to achieve an efficient implementation using parallel processing hardware such as GPUs.",
"title": ""
},
{
"docid": "neg:1840546_2",
"text": "My 1971 Turing Award Lecture was entitled \"Generality in Artificial Intelligence.\" The topic turned out to have been overambitious in that I discovered I was unable to put my thoughts on the subject in a satisfactory written form at that time. It would have been better to have reviewed my previous work rather than attempt something new, but such was not my custom at that time.\nI am grateful to ACM for the opportunity to try again. Unfortunately for our science, although perhaps fortunately for this project, the problem of generality in artificial intelligence (AI) is almost as unsolved as ever, although we now have many ideas not available in 1971. This paper relies heavily on such ideas, but it is far from a full 1987 survey of approaches for achieving generality. Ideas are therefore discussed at a length proportional to my familiarity with them rather than according to some objective criterion.\nIt was obvious in 1971 and even in 1958 that AI programs suffered from a lack of generality. It is still obvious; there are many more details. The first gross symptom is that a small addition to the idea of a program often involves a complete rewrite beginning with the data structures. Some progress has been made in modularizing data structures, but small modifications of the search strategies are even less likely to be accomplished without rewriting.\nAnother symptom is no one knows how to make a general database of commonsense knowledge that could be used by any program that needed the knowledge. Along with other information, such a database would contain what a robot would need to know about the effects of moving objects around, what a person can be expected to know about his family, and the facts about buying and selling. This does not depend on whether the knowledge is to be expressed in a logical language or in some other formalism. When we take the logic approach to AI, lack of generality shows up in that the axioms we devise to express commonsense knowledge are too restricted in their applicability for a general commonsense database. In my opinion, getting a language for expressing general commonsense knowledge for inclusion in a general database is the key problem of generality in AI.\nHere are some ideas for achieving generality proposed both before and after 1971. I repeat my disclaimer of comprehensiveness.",
"title": ""
},
{
"docid": "neg:1840546_3",
"text": "This Paper investigate action recognition by using Extreme Gradient Boosting (XGBoost). XGBoost is a supervised classification technique using an ensemble of decision trees. In this study, we also compare the performance of Xboost using another machine learning techniques Support Vector Machine (SVM) and Naive Bayes (NB). The experimental study on the human action dataset shows that XGBoost better as compared to SVM and NB in classification accuracy. Although takes more computational time the XGBoost performs good classification on action recognition.",
"title": ""
},
{
"docid": "neg:1840546_4",
"text": "This paper presents a large and systematic body of data on the relative effectiveness of mutation, crossover, and combinations of mutation and crossover in genetic programming (GP). The literature of traditional genetic algorithms contains related studies, but mutation and crossover in GP differ from their traditional counterparts in significant ways. In this paper we present the results from a very large experimental data set, the equivalent of approximately 12,000 typical runs of a GP system, systematically exploring a range of parameter settings. The resulting data may be useful not only for practitioners seeking to optimize parameters for GP runs, but also for theorists exploring issues such as the role of “building blocks” in GP.",
"title": ""
},
{
"docid": "neg:1840546_5",
"text": "The input to a neural sequence-tosequence model is often determined by an up-stream system, e.g. a word segmenter, part of speech tagger, or speech recognizer. These up-stream models are potentially error-prone. Representing inputs through word lattices allows making this uncertainty explicit by capturing alternative sequences and their posterior probabilities in a compact form. In this work, we extend the TreeLSTM (Tai et al., 2015) into a LatticeLSTM that is able to consume word lattices, and can be used as encoder in an attentional encoderdecoder model. We integrate lattice posterior scores into this architecture by extending the TreeLSTM’s child-sum and forget gates and introducing a bias term into the attention mechanism. We experiment with speech translation lattices and report consistent improvements over baselines that translate either the 1-best hypothesis or the lattice without posterior scores.",
"title": ""
},
{
"docid": "neg:1840546_6",
"text": "While recent deep monocular depth estimation approaches based on supervised regression have achieved remarkable performance, costly ground truth annotations are required during training. To cope with this issue, in this paper we present a novel unsupervised deep learning approach for predicting depth maps and show that the depth estimation task can be effectively tackled within an adversarial learning framework. Specifically, we propose a deep generative network that learns to predict the correspondence field (i.e. the disparity map) between two image views in a calibrated stereo camera setting. The proposed architecture consists of two generative sub-networks jointly trained with adversarial learning for reconstructing the disparity map and organized in a cycle such as to provide mutual constraints and supervision to each other. Extensive experiments on the publicly available datasets KITTI and Cityscapes demonstrate the effectiveness of the proposed model and competitive results with state of the art methods. The code is available at https://github.com/andrea-pilzer/unsup-stereo-depthGAN",
"title": ""
},
{
"docid": "neg:1840546_7",
"text": "The nonlinear Fourier transform is a transmission and signal processing technique that makes positive use of the Kerr nonlinearity in optical fibre channels. I will overview recent advances and some of challenges in this field.",
"title": ""
},
{
"docid": "neg:1840546_8",
"text": "1. SLICED PROGRAMMABLE NETWORKS OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies.",
"title": ""
},
{
"docid": "neg:1840546_9",
"text": "According to the distributional inclusion hypothesis, entailment between words can be measured via the feature inclusions of their distributional vectors. In recent work, we showed how this hypothesis can be extended from words to phrases and sentences in the setting of compositional distributional semantics. This paper focuses on inclusion properties of tensors; its main contribution is a theoretical and experimental analysis of how feature inclusion works in different concrete models of verb tensors. We present results for relational, Frobenius, projective, and holistic methods and compare them to the simple vector addition, multiplication, min, and max models. The degrees of entailment thus obtained are evaluated via a variety of existing wordbased measures, such as Weed’s and Clarke’s, KL-divergence, APinc, balAPinc, and two of our previously proposed metrics at the phrase/sentence level. We perform experiments on three entailment datasets, investigating which version of tensor-based composition achieves the highest performance when combined with the sentence-level measures.",
"title": ""
},
{
"docid": "neg:1840546_10",
"text": "Rationale: The imidazopyridine hypnotic zolpidem may produce less memory and cognitive impairment than classic benzodiazepines, due to its relatively low binding affinity for the benzodiazepine receptor subtypes found in areas of the brain which are involved in learning and memory. Objectives: The study was designed to compare the acute effects of single oral doses of zolpidem (5, 10, 20 mg/70 kg) and the benzodiazepine hypnotic triazolam (0.125, 0.25, and 0.5 mg/70 kg) on specific memory and attentional processes. Methods: Drug effects on memory for target (i.e., focal) information and contextual information (i.e., peripheral details surrounding a target stimulus presentation) were evaluated using a source monitoring paradigm, and drug effects on selective attention mechanisms were evaluated using a negative priming paradigm, in 18 healthy volunteers in a double-blind, placebo-controlled, crossover design. Results: Triazolam and zolpidem produced strikingly similar dose-related effects on memory for target information. Both triazolam and zolpidem impaired subjects’ ability to remember whether a word stimulus had been presented to them on the computer screen or whether they had been asked to generate the stimulus based on an antonym cue (memory for the origin of a stimulus, which is one type of contextual information). The results suggested that triazolam, but not zolpidem, impaired memory for the screen location of picture stimuli (spatial contextual information). Although both triazolam and zolpidem increased overall reaction time in the negative priming task, only triazolam increased the magnitude of negative priming relative to placebo. Conclusions: The observed differences between triazolam and zolpidem have implications for the cognitive and pharmacological mechanisms underlying drug-induced deficits in specific memory and attentional processes, as well for the cognitive and brain mechanisms underlying these processes.",
"title": ""
},
{
"docid": "neg:1840546_11",
"text": "Transfer learning algorithms are used when one has sufficient training data for one supervised learning task (the source task) but only very limited training data for a second task (the target task) that is similar but not identical to the first. These algorithms use varying assumptions about the similarity between the tasks to carry information from the source to the target task. Common assumptions are that only certain specific marginal or conditional distributions have changed while all else remains the same. Moreover, not much work on transfer learning has considered the case when a few labels in the test domain are available. Alternatively, if one has only the target task, but also has the ability to choose a limited amount of additional training data to collect, then active learning algorithms are used to make choices which will most improve performance on the target task. These algorithms may be combined into active transfer learning, but previous efforts have had to apply the two methods in sequence or use restrictive transfer assumptions. This thesis focuses on active transfer learning under the model shift assumption. We start by proposing two transfer learning algorithms that allow changes in all marginal and conditional distributions but assume the changes are smooth in order to achieve transfer between the tasks. We then propose an active learning algorithm for the second method that yields a combined active transfer learning algorithm. By analyzing the risk bounds for the proposed transfer learning algorithms, we show that when the conditional distribution changes, we are able to obtain a generalization error bound of O( 1 λ∗ √ nl ) with respect to the labeled target sample size nl, modified by the smoothness of the change (λ∗) across domains. Our analysis also sheds light on conditions when transfer learning works better than no-transfer learning (learning by labeled target data only). Furthermore, we consider a general case where both the support and the model change across domains. We transform both X (features) and Y (labels) by a parameterized-location-scale shift to achieve transfer between tasks. On the other hand, multi-task learning attempts to simultaneously leverage data from multiple domains in order to estimate related functions on each domain. Similar to transfer learning, multi-task problems are also solved by imposing some kind of “smooth” relationship among/between tasks. We study how different smoothness assumptions on task relations affect the upper bounds of algorithms proposed for these problems under different settings. Finally, we propose methods to predict the entire distribution P (Y ) and P (Y |X) by transfer, while allowing both marginal and conditional distributions to change. Moreover, we extend this framework to multi-source distribution transfer. We demonstrate the effectiveness of our methods on both synthetic examples and real-world applications, including yield estimation on the grape image dataset, predicting air-quality from Weibo posts for cities, predicting whether a robot successfully climbs over an obstacle, examination score prediction for schools, and location prediction for taxis. Acknowledgments First and foremost, I would like to express my sincere gratitude to my advisor Jeff Schneider, who has been the biggest help during my whole PhD life. His brilliant insights have helped me formulate the problems of this thesis, brainstorm on new ideas and exciting algorithms. 
I have learnt many things about research from him, including how to organize ideas in a paper, how to design experiments, and how to give a good academic talk. This thesis would not have been possible without his guidance, advice, patience and encouragement. I would like to thank my thesis committee members Christos Faloutsos, Geoff Gordon and Jerry Zhu for providing great insights and feedbacks on my thesis. Christos has been very nice and he always finds time to talk to me even if he is very busy. Geoff has provided great insights on extending my work to classification and helped me clarified many notations/descriptions in my thesis. Jerry has been very helpful in extending my work on the text data and providing me the air quality dataset. I feel very fortunate to have them as my committee members. I would also like to thank Professor Barnabás Póczos, Professor Roman Garnett and Professor Artur Dubrawski, for providing very helpful suggestions and collaborations during my PhD. I am very grateful to many of the faculty members at Carnegie Mellon. Eric Xing’s Machine Learning course has been my introduction course for Machine Learning at Carnegie Mellon and it has taught me a lot about the foundations of machine learning, including all the inspiring machine learning algorithms and the theories behind them. Larry Wasserman’s Intermediate Statistics and Statistical Machine Learning are both wonderful courses and have been keys to my understanding of the statistical perspective of many machine learning algorithms. Geoff Gordon and Ryan Tibshirani’s Convex Optimization course has been a great tutorial for me to develop all the efficient optimizing techniques for the algorithms I have proposed. Further I want to thank all my colleagues and friends at Carnegie Mellon, especially people from the Auton Lab and the Computer Science Department at CMU. I would like to thank Dougal Sutherland, Yifei Ma, Junier Oliva, Tzu-Kuo Huang for insightful discussions and advices for my research. I would also like to thank all my friends who have provided great support and help during my stay at Carnegie Mellon, and to name a few, Nan Li, Junchen Jiang, Guangyu Xia, Zi Yang, Yixin Luo, Lei Li, Lin Xiao, Liu Liu, Yi Zhang, Liang Xiong, Ligia Nistor, Kirthevasan Kandasamy, Madalina Fiterau, Donghan Wang, Yuandong Tian, Brian Coltin. I would also like to thank Prof. Alon Halevy, who has been a great mentor during my summer internship at google research and also has been a great help in my job searching process. Finally I would like to thank my family, my parents Sisi and Tiangui, for their unconditional love, endless support, and unwavering faith in me. I truly thank them for shaping who I am, for teaching me to be a person who would never lose hope and give up.",
"title": ""
},
{
"docid": "neg:1840546_12",
"text": "Background learning is a pre-processing of motion detection which is a basis step of video analysis. For the static background, many previous works have already achieved good performance. However, the results on learning dynamic background are still much to be improved. To address this challenge, in this paper, a novel and practical method is proposed based on deep auto-encoder networks. Firstly, dynamic background images are extracted through a deep auto-encoder network (called Background Extraction Network) from video frames containing motion objects. Then, a dynamic background model is learned by another deep auto-encoder network (called Background Learning Network) using the extracted background images as the input. To be more flexible, our background model can be updated on-line to absorb more training samples. Our main contributions are 1) a cascade of two deep auto-encoder networks which can deal with the separation of dynamic background and foregrounds very efficiently; 2) a method of online learning is adopted to accelerate the training of Background Extraction Network. Compared with previous algorithms, our approach obtains the best performance over six benchmark data sets. Especially, the experiments show that our algorithm can handle large variation background very well.",
"title": ""
},
{
"docid": "neg:1840546_13",
"text": "Precision and planning are key to reconstructive surgery. Augmented reality (AR) can bring the information within preoperative computed tomography angiography (CTA) imaging to life, allowing the surgeon to 'see through' the patient's skin and appreciate the underlying anatomy without making a single incision. This work has demonstrated that AR can assist the accurate identification, dissection and execution of vascular pedunculated flaps during reconstructive surgery. Separate volumes of osseous, vascular, skin, soft tissue structures and relevant vascular perforators were delineated from preoperative CTA scans to generate three-dimensional images using two complementary segmentation software packages. These were converted to polygonal models and rendered by means of a custom application within the HoloLens™ stereo head-mounted display. Intraoperatively, the models were registered manually to their respective subjects by the operating surgeon using a combination of tracked hand gestures and voice commands; AR was used to aid navigation and accurate dissection. Identification of the subsurface location of vascular perforators through AR overlay was compared to the positions obtained by audible Doppler ultrasound. Through a preliminary HoloLens-assisted case series, the operating surgeon was able to demonstrate precise and efficient localisation of perforating vessels.",
"title": ""
},
{
"docid": "neg:1840546_14",
"text": "An observation one can make when reviewing the literature on physical activity is that health-enhancing exercise habits tend to wear off as soon as individuals enter adolescence. Therefore, exercise habits should be promoted and preserved early in life. This article focuses on the formation of physical exercise habits. First, the literature on motivational determinants of habitual exercise and related behaviours is discussed, and the concept of habit is further explored. Based on this literature, a theoretical model of exercise habit formation is proposed. More specifically, expanding on the idea that habits are the result of automated cognitive processes, it is argued that physical exercise habits are capable of being automatically activated by the situational features that normally precede these behaviours. These habits may enhance health as a result of consistent performance over a long period of time. Subsequently, obstacles to the formation of exercise habits are discussed and interventions that may anticipate these obstacles are presented. Finally, implications for theory and practice are briefly discussed.",
"title": ""
},
{
"docid": "neg:1840546_15",
"text": "In 2003, psychology professor and sex researcher J. Michael Bailey published a book entitled The Man Who Would Be Queen: The Science of Gender-Bending and Transsexualism. The book's portrayal of male-to-female (MTF) transsexualism, based on a theory developed by sexologist Ray Blanchard, outraged some transgender activists. They believed the book to be typical of much of the biomedical literature on transsexuality-oppressive in both tone and claims, insulting to their senses of self, and damaging to their public identities. Some saw the book as especially dangerous because it claimed to be based on rigorous science, was published by an imprint of the National Academy of Sciences, and argued that MTF sex changes are motivated primarily by erotic interests and not by the problem of having the gender identity common to one sex in the body of the other. Dissatisfied with the option of merely criticizing the book, a small number of transwomen (particularly Lynn Conway, Andrea James, and Deirdre McCloskey) worked to try to ruin Bailey. Using published and unpublished sources as well as original interviews, this essay traces the history of the backlash against Bailey and his book. It also provides a thorough exegesis of the book's treatment of transsexuality and includes a comprehensive investigation of the merit of the charges made against Bailey that he had behaved unethically, immorally, and illegally in the production of his book. The essay closes with an epilogue that explores what has happened since 2003 to the central ideas and major players in the controversy.",
"title": ""
},
{
"docid": "neg:1840546_16",
"text": "Fog/edge computing, function as a service, and programmable infrastructures, like software-defined networking or network function virtualisation, are becoming ubiquitously used in modern Information Technology infrastructures. These technologies change the characteristics and capabilities of the underlying computational substrate where services run (e.g. higher volatility, scarcer computational power, or programmability). As a consequence, the nature of the services that can be run on them changes too (smaller codebases, more fragmented state, etc.). These changes bring new requirements for service orchestrators, which need to evolve so as to support new scenarios where a close interaction between service and infrastructure becomes essential to deliver a seamless user experience. Here, we present the challenges brought forward by this new breed of technologies and where current orchestration techniques stand with regards to the new challenges. We also present a set of promising technologies that can help tame this brave new world.",
"title": ""
},
{
"docid": "neg:1840546_17",
"text": "OBJECTIVES\nThe study sought to evaluate clinical outcomes in clinical practice with rhythm control versus rate control strategy for management of atrial fibrillation (AF).\n\n\nBACKGROUND\nRandomized trials have not demonstrated significant differences in stroke, heart failure, or mortality between rhythm and rate control strategies. The comparative outcomes in contemporary clinical practice are not well described.\n\n\nMETHODS\nPatients managed with a rhythm control strategy targeting maintenance of sinus rhythm were retrospectively compared with a strategy of rate control alone in a AF registry across various U.S. practice settings. Unadjusted and adjusted (inverse-propensity weighted) outcomes were estimated.\n\n\nRESULTS\nThe overall study population (N = 6,988) had a median of 74 (65 to 81) years of age, 56% were males, 77% had first detected or paroxysmal AF, and 68% had CHADS2 score ≥2. In unadjusted analyses, rhythm control was associated with lower all-cause death, cardiovascular death, first stroke/non-central nervous system systemic embolization/transient ischemic attack, or first major bleeding event (all p < 0.05); no difference in new onset heart failure (p = 0.28); and more frequent cardiovascular hospitalizations (p = 0.0006). There was no difference in the incidence of pacemaker, defibrillator, or cardiac resynchronization device implantations (p = 0.99). In adjusted analyses, there were no statistical differences in clinical outcomes between rhythm control and rate control treated patients (all p > 0.05); however, rhythm control was associated with more cardiovascular hospitalizations (hazard ratio: 1.24; 95% confidence interval: 1.10 to 1.39; p = 0.0003).\n\n\nCONCLUSIONS\nAmong patients with AF, rhythm control was not superior to rate control strategy for outcomes of stroke, heart failure, or mortality, but was associated with more cardiovascular hospitalizations.",
"title": ""
},
{
"docid": "neg:1840546_18",
"text": "The needs of the child are paramount. The clinician’s first task is to diagnose the cause of symptoms and signs whether accidental, inflicted or the result of an underlying medical condition. Where abuse is diagnosed the task is to safeguard the child and treat the physical and psychological effects of maltreatment. A child is one who has not yet reached his or her 18th birthday. Child abuse is any action by another person that causes significant harm to a child or fails to meet a basic need. It involves acts of both commission and omission with effects on the child’s physical, developmental, and psychosocial well-being. The vast majority of carers from whatever walk of life, love, nurture and protect their children. A very few, in a momentary loss of control in an otherwise caring parent, cause much regretted injury. An even smaller number repeatedly maltreat their children in what becomes a pattern of abuse. One parent may harm, the other may fail to protect by omitting to seek help. Child abuse whether physical or psychological is unlawful.",
"title": ""
}
] |
1840547 | Mindfulness-based stress reduction for stress management in healthy people: a review and meta-analysis. | [
{
"docid": "pos:1840547_0",
"text": "The inability to cope successfully with the enormous stress of medical education may lead to a cascade of consequences at both a personal and professional level. The present study examined the short-term effects of an 8-week meditation-based stress reduction intervention on premedical and medical students using a well-controlled statistical design. Findings indicate that participation in the intervention can effectively (1) reduce self-reported state and trait anxiety, (2) reduce reports of overall psychological distress including depression, (3) increase scores on overall empathy levels, and (4) increase scores on a measure of spiritual experiences assessed at termination of intervention. These results (5) replicated in the wait-list control group, (6) held across different experiments, and (7) were observed during the exam period. Future research should address potential long-term effects of mindfulness training for medical and premedical students.",
"title": ""
},
{
"docid": "pos:1840547_1",
"text": "A previous study of 22 medical patients with DSM-III-R-defined anxiety disorders showed clinically and statistically significant improvements in subjective and objective symptoms of anxiety and panic following an 8-week outpatient physician-referred group stress reduction intervention based on mindfulness meditation. Twenty subjects demonstrated significant reductions in Hamilton and Beck Anxiety and Depression scores postintervention and at 3-month follow-up. In this study, 3-year follow-up data were obtained and analyzed on 18 of the original 22 subjects to probe long-term effects. Repeated measures analysis showed maintenance of the gains obtained in the original study on the Hamilton [F(2,32) = 13.22; p < 0.001] and Beck [F(2,32) = 9.83; p < 0.001] anxiety scales as well as on their respective depression scales, on the Hamilton panic score, the number and severity of panic attacks, and on the Mobility Index-Accompanied and the Fear Survey. A 3-year follow-up comparison of this cohort with a larger group of subjects from the intervention who had met criteria for screening for the original study suggests generalizability of the results obtained with the smaller, more intensively studied cohort. Ongoing compliance with the meditation practice was also demonstrated in the majority of subjects at 3 years. We conclude that an intensive but time-limited group stress reduction intervention based on mindfulness meditation can have long-term beneficial effects in the treatment of people diagnosed with anxiety disorders.",
"title": ""
},
{
"docid": "pos:1840547_2",
"text": "OBJECTIVES\nTo review and synthesize the state of research on a variety of meditation practices, including: the specific meditation practices examined; the research designs employed and the conditions and outcomes examined; the efficacy and effectiveness of different meditation practices for the three most studied conditions; the role of effect modifiers on outcomes; and the effects of meditation on physiological and neuropsychological outcomes.\n\n\nDATA SOURCES\nComprehensive searches were conducted in 17 electronic databases of medical and psychological literature up to September 2005. Other sources of potentially relevant studies included hand searches, reference tracking, contact with experts, and gray literature searches.\n\n\nREVIEW METHODS\nA Delphi method was used to develop a set of parameters to describe meditation practices. Included studies were comparative, on any meditation practice, had more than 10 adult participants, provided quantitative data on health-related outcomes, and published in English. Two independent reviewers assessed study relevance, extracted the data and assessed the methodological quality of the studies.\n\n\nRESULTS\nFive broad categories of meditation practices were identified (Mantra meditation, Mindfulness meditation, Yoga, Tai Chi, and Qi Gong). Characterization of the universal or supplemental components of meditation practices was precluded by the theoretical and terminological heterogeneity among practices. Evidence on the state of research in meditation practices was provided in 813 predominantly poor-quality studies. The three most studied conditions were hypertension, other cardiovascular diseases, and substance abuse. Sixty-five intervention studies examined the therapeutic effect of meditation practices for these conditions. Meta-analyses based on low-quality studies and small numbers of hypertensive participants showed that TM(R), Qi Gong and Zen Buddhist meditation significantly reduced blood pressure. Yoga helped reduce stress. Yoga was no better than Mindfulness-based Stress Reduction at reducing anxiety in patients with cardiovascular diseases. No results from substance abuse studies could be combined. The role of effect modifiers in meditation practices has been neglected in the scientific literature. The physiological and neuropsychological effects of meditation practices have been evaluated in 312 poor-quality studies. Meta-analyses of results from 55 studies indicated that some meditation practices produced significant changes in healthy participants.\n\n\nCONCLUSIONS\nMany uncertainties surround the practice of meditation. Scientific research on meditation practices does not appear to have a common theoretical perspective and is characterized by poor methodological quality. Firm conclusions on the effects of meditation practices in healthcare cannot be drawn based on the available evidence. Future research on meditation practices must be more rigorous in the design and execution of studies and in the analysis and reporting of results.",
"title": ""
}
] | [
{
"docid": "neg:1840547_0",
"text": "This paper introduces a new approach, named micro-crowdfunding, for motivating people to participate in achieving a sustainable society. Increasing people's awareness of how they participate in maintaining the sustainability of common resources, such as public sinks, toilets, shelves, and office areas, is central to achieving a sustainable society. Micro-crowdfunding, as proposed in the paper, is a new type of community-based crowdsourcing architecture that is based on the crowdfunding concept and uses the local currency idea as a tool for encouraging people who live in urban environments to increase their awareness of how important it is to sustain small, common resources through their minimum efforts. Because our approach is lightweight and uses a mobile phone, people can participate in micro-crowdfunding activities with little effort anytime and anywhere.\n We present the basic concept of micro-crowdfunding and a prototype system. We also describe our experimental results, which show how economic and social factors are effective in facilitating micro-crowdfunding. Our results show that micro-crowdfunding increases the awareness about social sustainability, and we believe that micro-crowdfunding makes it possible to motivate people for achieving a sustainable society.",
"title": ""
},
{
"docid": "neg:1840547_1",
"text": "(1) Disregard pseudo-queries that do not retrieve their pseudo-relevant document in the top nrank. (2) Select the top nneg retrieved documents are negative training examples. General Approach: Generate mock interaction embeddings and filter training examples down to those the most nearly match a set of template query-document pairs (given a distance function). Since interaction embeddings specific to what a model “sees,” interaction filters are model-specific.",
"title": ""
},
{
"docid": "neg:1840547_2",
"text": "In the paper, we describe analysis of Vivaldi antenna array aimed for microwave image application and SAR application operating at Ka band. The antenna array is fed by a SIW feed network for its low insertion loss and broadband performances in millimeter wave range. In our proposal we have replaced the large feed network by a simple relatively broadband network of compact size to reduce the losses in substrate integrated waveguide (SIW) and save space on PCB. The feed network is power 8-way divider fed by a wideband SIW-GCPW transition and directly connected to the antenna elements. The final antenna array will be designed, fabricated and obtained measured results will be compared with numerical ones.",
"title": ""
},
{
"docid": "neg:1840547_3",
"text": "This paper presents a novel algorithm aiming at analysis and identification of faces viewed from different poses and illumination conditions. Face analysis from a single image is performed by recovering the shape and textures parameters of a 3D Morphable Model in an analysis-by-synthesis fashion. The shape parameters are computed from a shape error estimated by optical flow and the texture parameters are obtained from a texture error. The algorithm uses linear equations to recover the shape and texture parameters irrespective of pose and lighting conditions of the face image. Identification experiments are reported on more than 5000 images from the publicly available CMU-PIE database which includes faces viewed from 13 different poses and under 22 different illuminations. Extensive identification results are available on our web page for future comparison with novel algorithms.",
"title": ""
},
{
"docid": "neg:1840547_4",
"text": "Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. While this chapter cannot replace such books, it aims to provide a self-contained mathematical introduction to optimal control theory that is su¢ ciently broad and yet su¢ ciently detailed when it comes to key concepts. The text is not tailored to the
eld of motor control (apart from the last section, and the overall emphasis on systems with continuous state) so it will hopefully be of interest to a wider audience. Of special interest in the context of this book is the material on the duality of optimal control and probabilistic inference; such duality suggests that neural information processing in sensory and motor areas may be more similar than currently thought. The chapter is organized in the following sections:",
"title": ""
},
{
"docid": "neg:1840547_5",
"text": "Pythium species were isolated from seedlings of strawberry with root and crown rot. The isolates were identified as P. helicoides on the basis of morphological characteristics and sequences of the ribosomal DNA internal transcribed spacer regions. In pathogenicity tests, the isolates caused root and crown rot similar to the original disease symptoms. Multiplex PCR was used to survey pathogen occurrence in strawberry production areas of Japan. Pythium helicoides was detected in 11 of 82 fields. The pathogen is distributed over six prefectures.",
"title": ""
},
{
"docid": "neg:1840547_6",
"text": "Intrusive multi-step attacks, such as Advanced Persistent Threat (APT) attacks, have plagued enterprises with significant financial losses and are the top reason for enterprises to increase their security budgets. Since these attacks are sophisticated and stealthy, they can remain undetected for years if individual steps are buried in background \"noise.\" Thus, enterprises are seeking solutions to \"connect the suspicious dots\" across multiple activities. This requires ubiquitous system auditing for long periods of time, which in turn causes overwhelmingly large amount of system audit events. Given a limited system budget, how to efficiently handle ever-increasing system audit logs is a great challenge. This paper proposes a new approach that exploits the dependency among system events to reduce the number of log entries while still supporting high-quality forensic analysis. In particular, we first propose an aggregation algorithm that preserves the dependency of events during data reduction to ensure the high quality of forensic analysis. Then we propose an aggressive reduction algorithm and exploit domain knowledge for further data reduction. To validate the efficacy of our proposed approach, we conduct a comprehensive evaluation on real-world auditing systems using log traces of more than one month. Our evaluation results demonstrate that our approach can significantly reduce the size of system logs and improve the efficiency of forensic analysis without losing accuracy.",
"title": ""
},
{
"docid": "neg:1840547_7",
"text": "Brain-computer interfacing (BCI) is a steadily growing area of research. While initially BCI research was focused on applications for paralyzed patients, increasingly more alternative applications in healthy human subjects are proposed and investigated. In particular, monitoring of mental states and decoding of covert user states have seen a strong rise of interest. Here, we present some examples of such novel applications which provide evidence for the promising potential of BCI technology for non-medical uses. Furthermore, we discuss distinct methodological improvements required to bring non-medical applications of BCI technology to a diversity of layperson target groups, e.g., ease of use, minimal training, general usability, short control latencies.",
"title": ""
},
{
"docid": "neg:1840547_8",
"text": "An overview of the current design practices in the field of Renewable Energy (RE) is presented; also paper delineates the background to the development of unique and novel techniques for power generation using the kinetic energy of tidal streams and other marine currents. Also this study focuses only on vertical axis tidal turbine. Tidal stream devices have been developed as an alternative method of extracting the energy from the tides. This form of tidal power technology poses less threat to the environment and does not face the same limiting factors associated with tidal barrage schemes, therefore making it a more feasible method of electricity production. Large companies are taking interest in this new source of power. There is a rush to research and work with this new energy source. Marine scientists are looking into how much these will affect the environment, while engineers are developing turbines that are harmless for the environment. In addition, the progression of technological advancements tracing several decades of R & D efforts on vertical axis turbines is highlighted.",
"title": ""
},
{
"docid": "neg:1840547_9",
"text": "The content of images users post to their social media is driven in part by personality. In this study, we analyze how Twitter profile images vary with the personality of the users posting them. In our main analysis, we use profile images from over 66,000 users whose personality we estimate based on their tweets. To facilitate interpretability, we focus our analysis on aesthetic and facial features and control for demographic variation in image features and personality. Our results show significant differences in profile picture choice between personality traits, and that these can be harnessed to predict personality traits with robust accuracy. For example, agreeable and conscientious users display more positive emotions in their profile pictures, while users high in openness prefer more aesthetic photos.",
"title": ""
},
{
"docid": "neg:1840547_10",
"text": "The algorithmic Markov condition states that the most likely causal direction between two random variables X and Y can be identified as the direction with the lowest Kolmogorov complexity. This notion is very powerful as it can detect any causal dependency that can be explained by a physical process. However, due to the halting problem, it is also not computable. In this paper we propose an computable instantiation that provably maintains the key aspects of the ideal. We propose to approximate Kolmogorov complexity via the Minimum Description Length (MDL) principle, using a score that is mini-max optimal with regard to the model class under consideration. This means that even in an adversarial setting, the score degrades gracefully, and we are still maximally able to detect dependencies between the marginal and the conditional distribution. As a proof of concept, we propose CISC, a linear-time algorithm for causal inference by stochastic complexity, for pairs of univariate discrete variables. Experiments show that CISC is highly accurate on synthetic, benchmark, as well as real-world data, outperforming the state of the art by a margin, and scales extremely well with regard to sample and domain sizes.",
"title": ""
},
{
"docid": "neg:1840547_11",
"text": "Parametric methods are commonly used despite evidence that model assumptions are often violated. Various statistical procedures have been suggested for analyzing data from multiple-group repeated measures (i.e., split-plot) designs when parametric model assumptions are violated (e.g., Akritas and Arnold (J. Amer. Statist. Assoc. 89 (1994) 336); Brunner and Langer (Biometrical J. 42 (2000) 663)), including the use of Friedman ranks. The e8ects of Friedman ranking on data and the resultant test statistics for single sample repeated measures designs have been examined (e.g., Harwell and Serlin (Comput. Statist. Data Anal. 17 (1994) 35; Comm. Statist. Simulation Comput. 26 (1997) 605); Zimmerman and Zumbo (J. Experiment. Educ. 62 (1993) 75)). However, there have been fewer investigations concerning Friedman ranks applied to multiple groups of repeated measures data (e.g., Beasley (J. Educ. Behav. Statist. 25 (2000) 20); Rasmussen (British J. Math. Statist. Psych. 42 (1989) 91)). We investigate the use of Friedman ranks for testing the interaction in a split-plot design as a robust alternative to parametric procedures. We demonstrated that the presence of a repeated measures main e8ect may reduce the power of interaction tests performed on Friedman ranks. Aligning the data before applying Friedman ranks was shown to produce more statistical power than simply analyzing Friedman ranks. Results from a simulation study showed that aligning the data (i.e., removing main e8ects) before applying Friedman ranks and then performing either a univariate or multivariate test can provide more statistical power than parametric tests if the error distributions are skewed. c © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840547_12",
"text": "In this paper, the authors introduce a type of transverse flux reluctance machines. These machines work without permanent magnets or electric rotor excitation and hold several advantages, including a high power density, high torque, and compact design. Disadvantages are a high fundamental frequency and a high torque ripple that complicates the control of the motor. The device uses soft magnetic composites (SMCs) for the magnetic circuit, which allows complex stator geometries with 3-D magnetic flux paths. The winding is made from hollow copper tubes, which also form the main heat sink of the machine by using water as a direct copper coolant. Models concerning the design and computation of the magnetic circuit, torque, and the power output are described. A crucial point in this paper is the determination of hysteresis and eddy-current losses in the SMC and the calculation of power losses and current displacement in the copper winding. These are calculated with models utilizing a combination of analytic approaches and finite-element method simulations. Finally, a thermal model based on lumped parameters is introduced, and calculated temperature rises are presented.",
"title": ""
},
{
"docid": "neg:1840547_13",
"text": "Silicon carbide (SiC) power devices have been investigated extensively in the past two decades, and there are many devices commercially available now. Owing to the intrinsic material advantages of SiC over silicon (Si), SiC power devices can operate at higher voltage, higher switching frequency, and higher temperature. This paper reviews the technology progress of SiC power devices and their emerging applications. The design challenges and future trends are summarized at the end of the paper.",
"title": ""
},
{
"docid": "neg:1840547_14",
"text": "Ridesharing platforms match drivers and riders to trips, using dynamic prices to balance supply and demand. A challenge is to set prices that are appropriately smooth in space and time, so that drivers will choose to accept their dispatched trips, rather than drive to another area or wait for higher prices or a better trip. We work in a complete information, discrete time, multiperiod, multi-location model, and introduce the Spatio-Temporal Pricing (STP) mechanism. The mechanism is incentive-aligned, in that it is a subgame-perfect equilibrium for drivers to accept their dispatches. The mechanism is also welfare-optimal, envy-free, individually rational, budget balanced and core-selecting from any history onward. The proof of incentive alignment makes use of the M ♮ concavity of min-cost flow objectives. We also give an impossibility result, that there can be no dominant-strategy mechanism with the same economic properties. An empirical analysis conducted in simulation suggests that the STP mechanism can achieve significantly higher social welfare than a myopic pricing mechanism.",
"title": ""
},
{
"docid": "neg:1840547_15",
"text": "Experimental evidence has pointed toward a negative effect of violent video games on social behavior. Given that the availability and presence of video games is pervasive, negative effects from playing them have potentially large implications for public policy. It is, therefore, important that violent video game effects are thoroughly and experimentally explored, with the current experiment focusing on prosocial behavior. 120 undergraduate volunteers (Mage = 19.01, 87.5% male) played an ultra-violent, violent, or non-violent video game and were then assessed on two distinct measures of prosocial behavior: how much they donated to a charity and how difficult they set a task for an ostensible participant. It was hypothesized that participants playing the ultra-violent games would show the least prosocial behavior and those playing the non-violent game would show the most. These hypotheses were not supported, with participants responding in similar ways, regardless of the type of game played. While null effects are difficult to interpret, samples of this nature (undergraduate volunteers, high male skew) may be problematic, and participants were possibly sensitive to the hypothesis at some level, this experiment adds to the growing body of evidence suggesting that violent video game effects are less clear than initially",
"title": ""
},
{
"docid": "neg:1840547_16",
"text": "Our brain is a network. It consists of spatially distributed, but functionally linked regions that continuously share information with each other. Interestingly, recent advances in the acquisition and analysis of functional neuroimaging data have catalyzed the exploration of functional connectivity in the human brain. Functional connectivity is defined as the temporal dependency of neuronal activation patterns of anatomically separated brain regions and in the past years an increasing body of neuroimaging studies has started to explore functional connectivity by measuring the level of co-activation of resting-state fMRI time-series between brain regions. These studies have revealed interesting new findings about the functional connections of specific brain regions and local networks, as well as important new insights in the overall organization of functional communication in the brain network. Here we present an overview of these new methods and discuss how they have led to new insights in core aspects of the human brain, providing an overview of these novel imaging techniques and their implication to neuroscience. We discuss the use of spontaneous resting-state fMRI in determining functional connectivity, discuss suggested origins of these signals, how functional connections tend to be related to structural connections in the brain network and how functional brain communication may form a key role in cognitive performance. Furthermore, we will discuss the upcoming field of examining functional connectivity patterns using graph theory, focusing on the overall organization of the functional brain network. Specifically, we will discuss the value of these new functional connectivity tools in examining believed connectivity diseases, like Alzheimer's disease, dementia, schizophrenia and multiple sclerosis.",
"title": ""
},
{
"docid": "neg:1840547_17",
"text": "DistributedLog is a high performance, strictly ordered, durably replicated log. It is multi-tenant, designed with a layered architecture that allows reads and writes to be scaled independently and supports OLTP, stream processing and batch workloads. It also supports a globally synchronous consistent replicated log spanning multiple geographically separated regions. This paper describes how DistributedLog is structured, its components and the rationale underlying various design decisions. We have been using DistributedLog in production for several years, supporting applications ranging from transactional database journaling, real-time data ingestion, and analytics to general publish-subscribe messaging.",
"title": ""
},
{
"docid": "neg:1840547_18",
"text": "The Internet of things (IoT) is still in its infancy and has attracted much interest in many industrial sectors including medical fields, logistics tracking, smart cities and automobiles. However as a paradigm, it is susceptible to a range of significant intrusion threats. This paper presents a threat analysis of the IoT and uses an Artificial Neural Network (ANN) to combat these threats. A multi-level perceptron, a type of supervised ANN, is trained using internet packet traces, then is assessed on its ability to thwart Distributed Denial of Service (DDoS/DoS) attacks. This paper focuses on the classification of normal and threat patterns on an IoT Network. The ANN procedure is validated against a simulated IoT network. The experimental results demonstrate 99.4% accuracy and can successfully detect various DDoS/DoS attacks.",
"title": ""
}
] |
1840548 | Extending the road beyond CMOS - IEEE Circuits and Devices Magazine | [
{
"docid": "pos:1840548_0",
"text": "Quantum computers promise to exceed the computational efficiency of ordinary classical machines because quantum algorithms allow the execution of certain tasks in fewer steps. But practical implementation of these machines poses a formidable challenge. Here I present a scheme for implementing a quantum-mechanical computer. Information is encoded onto the nuclear spins of donor atoms in doped silicon electronic devices. Logical operations on individual spins are performed using externally applied electric fields, and spin measurements are made using currents of spin-polarized electrons. The realization of such a computer is dependent on future refinements of conventional silicon electronics.",
"title": ""
}
] | [
{
"docid": "neg:1840548_0",
"text": "In this paper, we develop a cooperative mechanism, RELICS, to combat selfishness in DTNs. In DTNs, nodes belong to self-interested individuals. A node may be selfish in expending resources, such as energy, on forwarding messages from others, unless offered incentives. We devise a rewarding scheme that provides incentives to nodes in a physically realizable way in that the rewards are reflected into network operation. We call it in-network realization of incentives. We introduce explicit ranking of nodes depending on their transit behavior, and translate those ranks into message priority. Selfishness drives each node to set its energy depletion rate as low as possible while maintaining its own delivery ratio above some threshold. We show that our cooperative mechanism compels nodes to cooperate and also achieves higher energy-economy compared to other previous results.",
"title": ""
},
{
"docid": "neg:1840548_1",
"text": "BACKGROUND\nAsthma guidelines indicate that the goal of treatment should be optimum asthma control. In a busy clinic practice with limited time and resources, there is need for a simple method for assessing asthma control with or without lung function testing.\n\n\nOBJECTIVES\nThe objective of this article was to describe the development of the Asthma Control Test (ACT), a patient-based tool for identifying patients with poorly controlled asthma.\n\n\nMETHODS\nA 22-item survey was administered to 471 patients with asthma in the offices of asthma specialists. The specialist's rating of asthma control after spirometry was also collected. Stepwise regression methods were used to select a subset of items that showed the greatest discriminant validity in relation to the specialist's rating of asthma control. Internal consistency reliability was computed, and discriminant validity tests were conducted for ACT scale scores. The performance of ACT was investigated by using logistic regression methods and receiver operating characteristic analyses.\n\n\nRESULTS\nFive items were selected from regression analyses. The internal consistency reliability of the 5-item ACT scale was 0.84. ACT scale scores discriminated between groups of patients differing in the specialist's rating of asthma control (F = 34.5, P <.00001), the need for change in patient's therapy (F = 40.3, P <.00001), and percent predicted FEV(1) (F = 4.3, P =.0052). As a screening tool, the overall agreement between ACT and the specialist's rating ranged from 71% to 78% depending on the cut points used, and the area under the receiver operating characteristic curve was 0.77.\n\n\nCONCLUSION\nResults reinforce the usefulness of a brief, easy to administer, patient-based index of asthma control.",
"title": ""
},
{
"docid": "neg:1840548_2",
"text": "BACKGROUND\nThere was less than satisfactory progress, especially in sub-Saharan Africa, towards child and maternal mortality targets of Millennium Development Goals (MDGs) 4 and 5. The main aim of this study was to describe the prevalence and determinants of essential new newborn care practices in the Lawra District of Ghana.\n\n\nMETHODS\nA cross-sectional study was carried out in June 2014 on a sample of 422 lactating mothers and their children aged between 1 and 12 months. A systematic random sampling technique was used to select the study participants who attended post-natal clinic in the Lawra district hospital.\n\n\nRESULTS\nOf the 418 newborns, only 36.8% (154) was judged to have had safe cord care, 34.9% (146) optimal thermal care, and 73.7% (308) were considered to have had adequate neonatal feeding. The overall prevalence of adequate new born care comprising good cord care, optimal thermal care and good neonatal feeding practices was only 15.8%. Mothers who attained at least Senior High Secondary School were 20.5 times more likely to provide optimal thermal care [AOR 22.54; 95% CI (2.60-162.12)], compared to women had no formal education at all. Women who received adequate ANC services were 4.0 times (AOR = 4.04 [CI: 1.53, 10.66]) and 1.9 times (AOR = 1.90 [CI: 1.01, 3.61]) more likely to provide safe cord care and good neonatal feeding as compared to their counterparts who did not get adequate ANC. However, adequate ANC services was unrelated to optimal thermal care. Compared to women who delivered at home, women who delivered their index baby in a health facility were 5.6 times more likely of having safe cord care for their babies (AOR = 5.60, Cl: 1.19-23.30), p = 0.03.\n\n\nCONCLUSIONS\nThe coverage of essential newborn care practices was generally low. Essential newborn care practices were positively associated with high maternal educational attainment, adequate utilization of antenatal care services and high maternal knowledge of newborn danger signs. Therefore, greater improvement in essential newborn care practices could be attained through proven low-cost interventions such as effective ANC services, health and nutrition education that should span from community to health facility levels.",
"title": ""
},
{
"docid": "neg:1840548_3",
"text": "Decision Tree induction is commonly used classification algorithm. One of the important problems is how to use records with unknown values from training as well as testing data. Many approaches have been proposed to address the impact of unknown values at training on accuracy of prediction. However, very few techniques are there to address the problem in testing data. In our earlier work, we discussed and summarized these strategies in details. In Lazy Decision Tree, the problem of unknown attribute values in test instance is completely eliminated by delaying the construction of tree till the classification time and using only known attributes for classification. In this paper we present novel algorithm ‘Eager Decision Tree’ which constructs a single prediction model at the time of training which considers all possibilities of unknown attribute values from testing data. It naturally removes the problem of handing unknown values in testing data in Decision Tree induction like Lazy Decision Tree.",
"title": ""
},
{
"docid": "neg:1840548_4",
"text": "Dimensionality reduction is an important aspect in the pattern classification literature, and linear discriminant analysis (LDA) is one of the most widely studied dimensionality reduction technique. The application of variants of LDA technique for solving small sample size (SSS) problem can be found in many research areas e.g. face recognition, bioinformatics, text recognition, etc. The improvement of the performance of variants of LDA technique has great potential in various fields of research. In this paper, we present an overview of these methods. We covered the type, characteristics and taxonomy of these methods which can overcome SSS problem. We have also highlighted some important datasets and software/ packages.",
"title": ""
},
{
"docid": "neg:1840548_5",
"text": "With the rapid development of cloud storage, data security in storage receives great attention and becomes the top concern to block the spread development of cloud service. In this paper, we systematically study the security researches in the storage systems. We first present the design criteria that are used to evaluate a secure storage system and summarize the widely adopted key technologies. Then, we further investigate the security research in cloud storage and conclude the new challenges in the cloud environment. Finally, we give a detailed comparison among the selected secure storage systems and draw the relationship between the key technologies and the design criteria.",
"title": ""
},
{
"docid": "neg:1840548_6",
"text": "Recent advances in far-field fluorescence microscopy have led to substantial improvements in image resolution, achieving a near-molecular resolution of 20 to 30 nanometers in the two lateral dimensions. Three-dimensional (3D) nanoscale-resolution imaging, however, remains a challenge. We demonstrated 3D stochastic optical reconstruction microscopy (STORM) by using optical astigmatism to determine both axial and lateral positions of individual fluorophores with nanometer accuracy. Iterative, stochastic activation of photoswitchable probes enables high-precision 3D localization of each probe, and thus the construction of a 3D image, without scanning the sample. Using this approach, we achieved an image resolution of 20 to 30 nanometers in the lateral dimensions and 50 to 60 nanometers in the axial dimension. This development allowed us to resolve the 3D morphology of nanoscopic cellular structures.",
"title": ""
},
{
"docid": "neg:1840548_7",
"text": "The theory of embodied cognition can provide HCI practitioners and theorists with new ideas about interaction and new principles for better designs. I support this claim with four ideas about cognition: (1) interacting with tools changes the way we think and perceive -- tools, when manipulated, are soon absorbed into the body schema, and this absorption leads to fundamental changes in the way we perceive and conceive of our environments; (2) we think with our bodies not just with our brains; (3) we know more by doing than by seeing -- there are times when physically performing an activity is better than watching someone else perform the activity, even though our motor resonance system fires strongly during other person observation; (4) there are times when we literally think with things. These four ideas have major implications for interaction design, especially the design of tangible, physical, context aware, and telepresence systems.",
"title": ""
},
{
"docid": "neg:1840548_8",
"text": "Android is the most widely used smartphone OS with 82.8% market share in 2015 (IDC, 2015). It is therefore the most widely targeted system by malware authors. Researchers rely on dynamic analysis to extract malware behaviors and often use emulators to do so. However, using emulators lead to new issues. Malware may detect emulation and as a result it does not execute the payload to prevent the analysis. Dealing with virtual device evasion is a never-ending war and comes with a non-negligible computation cost (Lindorfer et al., 2014). To overcome this state of affairs, we propose a system that does not use virtual devices for analysing malware behavior. Glassbox is a functional prototype for the dynamic analysis of malware applications. It executes applications on real devices in a monitored and controlled environment. It is a fully automated system that installs, tests and extracts features from the application for further analysis. We present the architecture of the platform and we compare it with existing Android dynamic analysis platforms. Lastly, we evaluate the capacity of Glassbox to trigger application behaviors by measuring the average coverage of basic blocks on the AndroCoverage dataset (AndroCoverage, 2016). We show that it executes on average 13.52% more basic blocks than the Monkey program.",
"title": ""
},
{
"docid": "neg:1840548_9",
"text": "Disturbance regimes are changing rapidly, and the consequences of such changes for ecosystems and linked social-ecological systems will be profound. This paper synthesizes current understanding of disturbance with an emphasis on fundamental contributions to contemporary landscape and ecosystem ecology, then identifies future research priorities. Studies of disturbance led to insights about heterogeneity, scale, and thresholds in space and time and catalyzed new paradigms in ecology. Because they create vegetation patterns, disturbances also establish spatial patterns of many ecosystem processes on the landscape. Drivers of global change will produce new spatial patterns, altered disturbance regimes, novel trajectories of change, and surprises. Future disturbances will continue to provide valuable opportunities for studying pattern-process interactions. Changing disturbance regimes will produce acute changes in ecosystems and ecosystem services over the short (years to decades) and long-term (centuries and beyond). Future research should address questions related to (1) disturbances as catalysts of rapid ecological change, (2) interactions among disturbances, (3) relationships between disturbance and society, especially the intersection of land use and disturbance, and (4) feedbacks from disturbance to other global drivers. Ecologists should make a renewed and concerted effort to understand and anticipate the causes and consequences of changing disturbance regimes.",
"title": ""
},
{
"docid": "neg:1840548_10",
"text": "We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning — answering image-related questions which require a multi-step, high-level process — a task which has proven difficult for standard deep learning methods that do not explicitly model reasoning. Specifically, we show on visual reasoning tasks that FiLM layers 1) halve state-of-theart error for the CLEVR benchmark, 2) modulate features in a coherent manner, 3) are robust to ablations and architectural modifications, and 4) generalize well to challenging, new data from few examples or even zero-shot.",
"title": ""
},
{
"docid": "neg:1840548_11",
"text": "Language modeling is a prototypical unsupervised task of natural language processing (NLP). It has triggered the developments of essential bricks of models used in speech recognition, translation or summarization. More recently, language modeling has been shown to give a sensible loss function for learning high-quality unsupervised representations in tasks like text classification (Howard & Ruder, 2018), sentiment detection (Radford et al., 2017) or word vector learning (Peters et al., 2018) and there is thus a revived interest in developing better language models. More generally, improvement in sequential prediction models are believed to be beneficial for a wide range of applications like model-based planning or reinforcement learning whose models have to encode some form of memory.",
"title": ""
},
{
"docid": "neg:1840548_12",
"text": "Subband adaptive filtering (SAF) techniques play a prominent role in designing active noise control (ANC) systems. They reduce the computational complexity of ANC algorithms, particularly, when the acoustic noise is a broadband signal and the system models have long impulse responses. In the commonly used uniform-discrete Fourier transform (DFT)-modulated (UDFTM) filter banks, increasing the number of subbands decreases the computational burden but can introduce excessive distortion, degrading performance of the ANC system. In this paper, we propose a new UDFTM-based adaptive subband filtering method that alleviates the degrading effects of the delay and side-lobe distortion introduced by the prototype filter on the system performance. The delay in filter bank is reduced by prototype filter design and the side-lobe distortion is compensated for by oversampling and appropriate stacking of subband weights. Experimental results show the improvement of performance and computational complexity of the proposed method in comparison to two commonly used subband and block adaptive filtering algorithms.",
"title": ""
},
{
"docid": "neg:1840548_13",
"text": "In this article, we present a novel class of robots that are able to move by growing and building their own structure. In particular, taking inspiration by the growing abilities of plant roots, we designed and developed a plant root-like robot that creates its body through an additive manufacturing process. Each robotic root includes a tubular body, a growing head, and a sensorized tip that commands the robot behaviors. The growing head is a customized three-dimensional (3D) printer-like system that builds the tubular body of the root in the format of circular layers by fusing and depositing a thermoplastic material (i.e., polylactic acid [PLA] filament) at the tip level, thus obtaining movement by growing. A differential deposition of the material can create an asymmetry that results in curvature of the built structure, providing the possibility of root bending to follow or escape from a stimulus or to reach a desired point in space. Taking advantage of these characteristics, the robotic roots are able to move inside a medium by growing their body. In this article, we describe the design of the growing robot together with the modeling of the deposition process and the description of the implemented growing movement strategy. Experiments were performed in air and in an artificial medium to verify the functionalities and to evaluate the robot performance. The results showed that the robotic root, with a diameter of 50 mm, grows with a speed of up to 4 mm/min, overcoming medium pressure of up to 37 kPa (i.e., it is able to lift up to 6 kg) and bending with a minimum radius of 100 mm.",
"title": ""
},
{
"docid": "neg:1840548_14",
"text": "Hindi is very complex language with large number of phonemes and being used with various ascents in different regions in India. In this manuscript, speaker dependent and independent isolated Hindi word recognizers using the Hidden Markov Model (HMM) is implemented, under noisy environment. For this study, a set of 10 Hindi names has been chosen as a test set for which the training and testing is performed. The scheme instigated here implements the Mel Frequency Cepstral Coefficients (MFCC) in order to compute the acoustic features of the speech signal. Then, K-means algorithm is used for the codebook generation by performing clustering over the obtained feature space. Baum Welch algorithm is used for re-estimating the parameters, and finally for deciding the recognized Hindi word whose model likelihood is highest, Viterbi algorithm has been implemented; for the given HMM. This work resulted in successful recognition with 98. 6% recognition rate for speaker dependent recognition, for total of 10 speakers (6 male, 4 female) and 97. 5% for speaker independent isolated word recognizer for 10 speakers (male).",
"title": ""
},
{
"docid": "neg:1840548_15",
"text": "The conventional border patrol systems suffer from intensive human involvement. Recently, unmanned border patrol systems employ high-tech devices, such as unmanned aerial vehicles, unattended ground sensors, and surveillance towers equipped with camera sensors. However, any single technique encounters inextricable problems, such as high false alarm rate and line-of-sight-constraints. There lacks a coherent system that coordinates various technologies to improve the system accuracy. In this paper, the concept of BorderSense, a hybrid wireless sensor network architecture for border patrol systems, is introduced. BorderSense utilizes the most advanced sensor network technologies, including the wireless multimedia sensor networks and the wireless underground sensor networks. The framework to deploy and operate BorderSense is developed. Based on the framework, research challenges and open research issues are discussed. 2010 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840548_16",
"text": "Online, reverse auctions are increasingly being utilized in industrial sourcing activities. This phenomenon represents a novel, emerging area of inquiry with significant implications for sourcing strategies. However, there is little systematic thinking or empirical evidence on the topic. In this paper, the use of these auctions in sourcing activities is reviewed and four key aspects are highlighted: (i) the differences from physical auctions or those of the theoretical literature, (ii) the conditions for using online, reverse auctions, (iii) methods for structuring the auctions, and (iv) evaluations of auction performance. Some empirical evidence on these issues is also provided. ONLINE, REVERSE AUCTIONS: ISSUES, THEMES, AND PROSPECTS FOR THE FUTURE INTRODUCTION For nearly the past decade, managers, analysts, researchers, and the business press have been remarking that, “The Internet will change everything.” And since the advent of the Internet, we have seen it challenge nearly every aspect of marketing practice. This raises the obligation to consider the consequences of the Internet to management practices, the theme of this special issue. Yet, it may take decades to fully understand the impact of the Internet on marketing practice, in general. This paper is one step in that direction. Specifically, I consider the impact of the Internet in a business-to-business context, the sourcing of direct and indirect materials from a supply base. It has been predicted that the Internet will bring about $1 trillion in efficiencies to the annual $7 trillion that is spent on the procurement of goods and services worldwide (USA Today, 2/7/00, B1). How and when this will happen remains an open question. However, one trend that is showing increasing promise is the use of online, reverse auctions. Virtually every major industry has begun to use and adopt these auctions on a regular basis (Smith 2002). During the late 1990s, slow-growth, manufacturing firms such as Boeing, SPX/Eaton, United Technologies, and branches of the United States military, utilized these auctions. Since then, consumer product companies such as Emerson Electronics, Nestle, and Quaker have followed suit. Even high-tech firms such as Dell, Hewlett-Packard, Intel, and Sun Microsystems have increased their usage of auctions in sourcing activities. And the intention and potential for the use of these auctions to continue to grow in the future is clear. In their annual survey of purchasing managers, Purchasing magazine found that 25% of its respondents expected to use reverse auctions in their sourcing efforts. Currently, the annual throughput in these auctions is estimated to be $40 billion; however, the addressable spend of the Global 500 firms is potentially $6.3 trillion.",
"title": ""
},
{
"docid": "neg:1840548_17",
"text": "The combination of visual and inertial sensors has proved to be very popular in robot navigation and, in particular, Micro Aerial Vehicle (MAV) navigation due the flexibility in weight, power consumption and low cost it offers. At the same time, coping with the big latency between inertial and visual measurements and processing images in real-time impose great research challenges. Most modern MAV navigation systems avoid to explicitly tackle this by employing a ground station for off-board processing. In this paper, we propose a navigation algorithm for MAVs equipped with a single camera and an Inertial Measurement Unit (IMU) which is able to run onboard and in real-time. The main focus here is on the proposed speed-estimation module which converts the camera into a metric body-speed sensor using IMU data within an EKF framework. We show how this module can be used for full self-calibration of the sensor suite in real-time. The module is then used both during initialization and as a fall-back solution at tracking failures of a keyframe-based VSLAM module. The latter is based on an existing high-performance algorithm, extended such that it achieves scalable 6DoF pose estimation at constant complexity. Fast onboard speed control is ensured by sole reliance on the optical flow of at least two features in two consecutive camera frames and the corresponding IMU readings. Our nonlinear observability analysis and our real experiments demonstrate that this approach can be used to control a MAV in speed, while we also show results of operation at 40Hz on an onboard Atom computer 1.6 GHz.",
"title": ""
},
{
"docid": "neg:1840548_18",
"text": "In this paper, we consider the directional multigigabit (DMG) transmission problem in IEEE 802.11ad wireless local area networks (WLANs) and design a random-access-based medium access control (MAC) layer protocol incorporated with a directional antenna and cooperative communication techniques. A directional cooperative MAC protocol, namely, D-CoopMAC, is proposed to coordinate the uplink channel access among DMG stations (STAs) that operate in an IEEE 802.11ad WLAN. Using a 3-D Markov chain model with consideration of the directional hidden terminal problem, we develop a framework to analyze the performance of the D-CoopMAC protocol and derive a closed-form expression of saturated system throughput. Performance evaluations validate the accuracy of the theoretical analysis and show that the performance of D-CoopMAC varies with the number of DMG STAs or beam sectors. In addition, the D-CoopMAC protocol can significantly improve system performance, as compared with the traditional IEEE 802.11ad MAC protocol.",
"title": ""
},
{
"docid": "neg:1840548_19",
"text": "The application potential of very high resolution (VHR) remote sensing imagery has been boosted by recent developments in the data acquisition and processing ability of aerial photogrammetry. However, shadows in images contribute to problems such as incomplete spectral information, lower intensity brightness, and fuzzy boundaries, which seriously affect the efficiency of the image interpretation. In this paper, to address these issues, a simple and automatic method of shadow detection is presented. The proposed method combines the advantages of the property-based and geometric-based methods to automatically detect the shadowed areas in VHR imagery. A geometric model of the scene and the solar position are used to delineate the shadowed and non-shadowed areas in the VHR image. A matting method is then applied to the image to refine the shadow mask. Different types of shadowed aerial orthoimages were used to verify the effectiveness of the proposed shadow detection method, and the results were compared with the results obtained by two state-of-the-art methods. The overall accuracy of the proposed method on the three tests was around 90%, confirming the effectiveness and robustness of the new method for detecting fine shadows, without any human input. The proposed method also performs better in detecting shadows in areas with water than the other two methods.",
"title": ""
}
] |
1840549 | Top-down control of visual attention | [
{
"docid": "pos:1840549_0",
"text": "Single cells were recorded in the visual cortex of monkeys trained to attend to stimuli at one location in the visual field and ignore stimuli at another. When both locations were within the receptive field of a cell in prestriate area V4 or the inferior temporal cortex, the response to the unattended stimulus was dramatically reduced. Cells in the striate cortex were unaffected by attention. The filtering of irrelevant information from the receptive fields of extrastriate neurons may underlie the ability to identify and remember the properties of a particular object out of the many that may be represented on the retina.",
"title": ""
}
] | [
{
"docid": "neg:1840549_0",
"text": "BACKGROUND\nNumerous studies report an association between social support and protection from depression, but no systematic review or meta-analysis exists on this topic.\n\n\nAIMS\nTo review systematically the characteristics of social support (types and source) associated with protection from depression across life periods (childhood and adolescence; adulthood; older age) and by study design (cross-sectional v cohort studies).\n\n\nMETHOD\nA systematic literature search conducted in February 2015 yielded 100 eligible studies. Study quality was assessed using a critical appraisal checklist, followed by meta-analyses.\n\n\nRESULTS\nSources of support varied across life periods, with parental support being most important among children and adolescents, whereas adults and older adults relied more on spouses, followed by family and then friends. Significant heterogeneity in social support measurement was noted. Effects were weaker in both magnitude and significance in cohort studies.\n\n\nCONCLUSIONS\nKnowledge gaps remain due to social support measurement heterogeneity and to evidence of reverse causality bias.",
"title": ""
},
{
"docid": "neg:1840549_1",
"text": "Do the languages that we speak affect how we experience the world? This question was taken up in a linguistic survey and two non-linguistic psychophysical experiments conducted in native speakers of English, Indonesian, Greek, and Spanish. All four of these languages use spatial metaphors to talk about time, but the particular metaphoric mappings between time and space vary across languages. A linguistic corpus study revealed that English and Indonesian tend to map duration onto linear distance (e.g., a long time), whereas Greek and Spanish preferentially map duration onto quantity (e.g., much time). Two psychophysical time estimation experiments were conducted to determine whether this cross-linguistic difference has implications for speakers’ temporal thinking. Performance on the psychophysical tasks reflected the relative frequencies of the ‘time as distance’ and ‘time as quantity’ metaphors in English, Indonesian, Greek, and Spanish. This was true despite the fact that the tasks used entirely nonlinguistic stimuli and responses. Results suggest that: (1.) The spatial metaphors in our native language may profoundly influence the way we mentally represent time. (2.) Language can shape even primitive, low-level mental processes such as estimating brief durations – an ability we share with babies and non-human animals.",
"title": ""
},
{
"docid": "neg:1840549_2",
"text": "In this article, we introduce an explicit count-based strategy to build word space models with syntactic contexts (dependencies). A filtering method is defined to reduce explicit word-context vectors. This traditional strategy is compared with a neural embedding (predictive) model also based on syntactic dependencies. The comparison was performed using the same parsed corpus for both models. Besides, the dependency-based methods are also compared with bag-of-words strategies, both count-based and predictive ones. The results show that our traditional countbased model with syntactic dependencies outperforms other strategies, including dependency-based embeddings, but just for the tasks focused on discovering similarity between words with the same function (i.e. near-synonyms).",
"title": ""
},
{
"docid": "neg:1840549_3",
"text": "Named Entity Recognition (NER) is a subtask of information extraction and aims to identify atomic entities in text that fall into predefined categories such as person, location, organization, etc. Recent efforts in NER try to extract entities and link them to linked data entities. Linked data is a term used for data resources that are created using semantic web standards such as DBpedia. There are a number of online tools that try to identify named entities in text and link them to linked data resources. Although one can use these tools via their APIs and web interfaces, they use different data resources and different techniques to identify named entities and not all of them reveal this information. One of the major tasks in NER is disambiguation that is identifying the right entity among a number of entities with the same names; for example \"apple\" standing for both \"Apple, Inc.\" the company and the fruit. We developed a similar tool called NERSO, short for Named Entity Recognition Using Semantic Open Data, to automatically extract named entities, disambiguating and linking them to DBpedia entities. Our disambiguation method is based on constructing a graph of linked data entities and scoring them using a graph-based centrality algorithm. We evaluate our system by comparing its performance with two publicly available NER tools. The results show that NERSO performs better.",
"title": ""
},
{
"docid": "neg:1840549_4",
"text": "Followership has been an understudied topic in the academic literature and an underappreciated topic among practitioners. Although it has always been important, the study of followership has become even more crucial with the advent of the information age and dramatic changes in the workplace. This paper provides a fresh look at followership by providing a synthesis of the literature and presents a new model for matching followership styles to leadership styles. The model’s practical value lies in its usefulness for describing how leaders can best work with followers, and how followers can best work with leaders.",
"title": ""
},
{
"docid": "neg:1840549_5",
"text": "Predictive state representations (PSRs) have recently been proposed as an alternative to partially observable Markov decision processes (POMDPs) for representing the state of a dynamical system (Littman et al., 2001). We present a learning algorithm that learns a PSR from observational data. Our algorithm produces a variant of PSRs called transformed predictive state representations (TPSRs). We provide an efficient principal-components-based algorithm for learning a TPSR, and show that TPSRs can perform well in comparison to Hidden Markov Models learned with Baum-Welch in a real world robot tracking task for low dimensional representations and long prediction horizons.",
"title": ""
},
{
"docid": "neg:1840549_6",
"text": "This paper presents a newly defined set-based concurrent engineering process, which the authors believe addresses some of the key challenges faced by engineering enterprises in the 21 century. The main principles of Set-Based Concurrent Engineering (SBCE) have been identified via an extensive literature review. Based on these principles the SBCE baseline model was developed. The baseline model defines the stages and activities which represent the product development process to be employed in the LeanPPD (lean product and process development) project. The LeanPPD project is addressing the needs of European manufacturing companies for a new model that extends beyond lean manufacturing, and incorporates lean thinking in the product design development process.",
"title": ""
},
{
"docid": "neg:1840549_7",
"text": "We are living in a world where there is an increasing need for evidence in organizations. Good digital evidence is becoming a business enabler. Very few organizations have the structures (management and infrastructure) in place to enable them to conduct cost effective, low-impact and fficient digital investigations [1]. Digital Forensics (DF) is a vehicle that organizations use to provide good and trustworthy evidence and processes. The current DF models concentrate on reactive investigations, with limited reference to DF readiness and live investigations. However, organizations use DF for other purposes for example compliance testing. The paper proposes that DF consists of three components: Pro-active (ProDF), Active (ActDF) and Re-active (ReDF). ProDF concentrates on DF readiness and the proactive responsible use of DF to demonstrate good governance and enhance governance structures. ActDF considers the gathering of live evidence during an ongoing attack with a limited live investigation element whilst ReDF deals with the traditional DF investigation. The paper discusses each component and the relationship between the components.",
"title": ""
},
{
"docid": "neg:1840549_8",
"text": "Electrospun membranes are gaining interest for use in membrane distillation (MD) due to their high porosity and interconnected pore structure; however, they are still susceptible to wetting during MD operation because of their relatively low liquid entry pressure (LEP). In this study, post-treatment had been applied to improve the LEP, as well as its permeation and salt rejection efficiency. The post-treatment included two continuous procedures: heat-pressing and annealing. In this study, annealing was applied on the membranes that had been heat-pressed. It was found that annealing improved the MD performance as the average flux reached 35 L/m2·h or LMH (>10% improvement of the ones without annealing) while still maintaining 99.99% salt rejection. Further tests on LEP, contact angle, and pore size distribution explain the improvement due to annealing well. Fourier transform infrared spectroscopy and X-ray diffraction analyses of the membranes showed that there was an increase in the crystallinity of the polyvinylidene fluoride-co-hexafluoropropylene (PVDF-HFP) membrane; also, peaks indicating the α phase of polyvinylidene fluoride (PVDF) became noticeable after annealing, indicating some β and amorphous states of polymer were converted into the α phase. The changes were favorable for membrane distillation as the non-polar α phase of PVDF reduces the dipolar attraction force between the membrane and water molecules, and the increase in crystallinity would result in higher thermal stability. The present results indicate the positive effect of the heat-press followed by an annealing post-treatment on the membrane characteristics and MD performance.",
"title": ""
},
{
"docid": "neg:1840549_9",
"text": "Time series is an important class of temporal data objects and it can be easily obtained from scientific and financial applications, and anomaly detection for time series is becoming a hot research topic recently. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. In this paper, we have discussed the definition of anomaly and grouped existing techniques into different categories based on the underlying approach adopted by each technique. And for each category, we identify the advantages and disadvantages of the techniques in that category. Then, we provide a briefly discussion on the representative methods recently. Furthermore, we also point out some key issues about multivariate time series anomaly. Finally, some suggestions about anomaly detection are discussed and future research trends are also summarized, which is hopefully beneficial to the researchers of time series and other relative domains.",
"title": ""
},
{
"docid": "neg:1840549_10",
"text": "Low-resolution face recognition (LRFR) has received increasing attention over the past few years. Its applications lie widely in the real-world environment when highresolution or high-quality images are hard to capture. One of the biggest demands for LRFR technologies is video surveillance. As the the number of surveillance cameras in the city increases, the videos that captured will need to be processed automatically. However, those videos or images are usually captured with large standoffs, arbitrary illumination condition, and diverse angles of view. Faces in these images are generally small in size. Several studies addressed this problem employed techniques like super resolution, deblurring, or learning a relationship between different resolution domains. In this paper, we provide a comprehensive review of approaches to low-resolution face recognition in the past five years. First, a general problem definition is given. Later, systematically analysis of the works on this topic is presented by catogory. In addition to describing the methods, we also focus on datasets and experiment settings. We further address the related works on unconstrained lowresolution face recognition and compare them with the result that use synthetic low-resolution data. Finally, we summarized the general limitations and speculate a priorities for the future effort.",
"title": ""
},
{
"docid": "neg:1840549_11",
"text": "Context: The processes of estimating, planning and managing are crucial for software development projects, since the results must be related to several business strategies. The broad expansion of the Internet and the global and interconnected economy make Web development projects be often characterized by expressions like delivering as soon as possible, reducing time to market and adapting to undefined requirements. In this kind of environment, traditional methodologies based on predictive techniques sometimes do not offer very satisfactory results. The rise of Agile methodologies and practices has provided some useful tools that, combined with Web Engineering techniques, can help to establish a framework to estimate, manage and plan Web development projects. Objective: This paper presents a proposal for estimating, planning and managing Web projects, by combining some existing Agile techniques with Web Engineering principles, presenting them as an unified framework which uses the business value to guide the delivery of features. Method: The proposal is analyzed by means of a case study, including a real-life project, in order to obtain relevant conclusions. Results: The results achieved after using the framework in a development project are presented, including interesting results on project planning and estimation, as well as on team productivity throughout the project. Conclusion: It is concluded that the framework can be useful in order to better manage Web-based projects, through a continuous value-based estimation and management process.",
"title": ""
},
{
"docid": "neg:1840549_12",
"text": "Training large-scale question answering systems is complicated because training sources usually cover a small portion of the range of possible questions. This paper studies the impact of multitask and transfer learning for simple question answering ; a setting for which the reasoning required to answer is quite easy, as long as one can retrieve the correct evidence given a question, which can be difficult in large-scale conditions. To this end, we introduce a new dataset of 100k questions that we use in conjunction with existing benchmarks. We conduct our study within the framework of Memory Networks (Weston et al., 2015) because this perspective allows us to eventually scale up to more complex reasoning, and show that Memory Networks can be successfully trained to achieve excellent performance .",
"title": ""
},
{
"docid": "neg:1840549_13",
"text": "This article examines cognitive, affective, and behavioral measures of motivation and reviews their use throughout the discipline of experimental social psychology. We distinguish between two dimensions of motivation (outcome-focused motivation and process-focused motivation). We discuss circumstances under which measures may help distinguish between different dimensions of motivation, as well as circumstances under which measures may capture different dimensions of motivation in similar ways. Furthermore, we examine situations in which various measures may capture fluctuations in nonmotivational factors, such as learning or physiological depletion. This analysis seeks to advance research in experimental social psychology by highlighting the need for caution when selecting measures of motivation and when interpreting fluctuations captured by these measures. Motivation – the psychological force that enables action – has long been the object of scientific inquiry (Carver & Scheier, 1998; Festinger, 1957; Fishbein & Ajzen, 1974; Hull, 1932; Kruglanski, 1996; Lewin, 1935; Miller, Galanter, & Pribram, 1960; Mischel, Shoda, & Rodriguez, 1989; Zeigarnik, 1927). Because motivation is a psychological construct that cannot be observed or recorded directly, studying it raises an important question: how to measure motivation? Researchers measure motivation in terms of observable cognitive (e.g., recall, perception), affective (e.g., subjective experience), behavioral (e.g., performance), and physiological (e.g., brain activation) responses and using self-reports. Furthermore, motivation is measured in relative terms: compared to previous or subsequent levels of motivation or to motivation in a different goal state (e.g., salient versus non-salient goal). For example, following exposure to a health-goal prime (e.g., gymmembership card), an individual might be more motivated to exercise now than she was 20minutes ago (before exposure to the prime), or than another person who was not exposed to the same prime. An important aspect of determining how to measure motivation is understanding what type of motivation one is attempting to capture. Thus, in exploring the measures of motivation, the present article takes into account different dimensions of motivation. In particular, we highlight the distinction between the outcome-focused motivation to complete a goal (Brehm & Self, 1989; Locke & Latham, 1990; Powers, 1973) and the process-focused motivation to attend to elements related to the process of goal pursuit – with less emphasis on the outcome. Process-related elements may include using “proper” means during goal pursuit (means-focused motivation; Higgins, Idson, Freitas, Spiegel, & Molden, 2003; Touré-Tillery & Fishbach, 2012) and enjoying the experience of goal pursuit (intrinsic motivation; Deci & Ryan, 1985; Fishbach & Choi, 2012; Sansone & Harackiewicz, 1996; Shah & Kruglanski, 2000). In some cases, particular measures of motivation may help distinguish between these different dimensions of motivation, whereas other measures may not. For example, the measured speed at which a person works on a task can have several interpretations. 
© 2014 John Wiley & Sons Ltd How to Measure Motivation 329 Working slowly could mean (a) that the individual’s motivation to complete the task is low (outcome-focused motivation); or (b) that her motivation to engage in the task is high such that she is “savoring” the task (intrinsic motivation); or (c) that her motivation to “do it right” and use proper means is high such that she is applying herself (means-focused motivation); or even (d) that she is tired (diminished physiological resources). In this case, additional measures (e.g., accuracy in performance) and manipulations (e.g., task difficulty) may help tease apart these various potential interpretations. Thus, experimental researchers must exercise caution when selecting measures of motivation and when interpreting the fluctuations captured by these measures. This review provides a guide for how to measure fluctuations in motivation in experimental settings. One approach is to ask people to rate their motivation (i.e., “how motivated are you?”). However, such an approach is limited to people’s conscious understanding of their own psychological states and can further be biased by social desirability concerns; hence, research in experimental social psychology developed a variety of cognitive and behavioral paradigms to assess motivation without relying on self-reports. We focus on these objective measures of situational fluctuations in motivation. We note that other fields of psychological research commonly use physiological measures (e.g., brain activation, skin conductance), self-report measures (i.e., motivation scales), or measure motivation as a stable trait. These physiological, self-report, and trait measures of motivation are beyond the scope our review. In the sections that follow, we start with a discussion of measures researchers commonly use to capture motivation. We review cognitive measures such as memory accessibility, evaluations, and perceptions of goal-relevant objects, as well as affective measures such as subjective experience. Next, we examine the use of behavioral measures such as speed, performance, and choice to capture fluctuations in motivational strength. In the third section, we discuss the outcomeand process-focused dimensions of motivation and examine specific measures of process-focused motivation, including measures of intrinsic motivation and means-focused motivation. We then discuss how different measures may help distinguish between the outcomeand process-focused dimensions. In the final section, we explore circumstances under which measures may capture fluctuations in learning and physiological resources, rather than changes in motivation. We conclude with some implications of this analysis for the measurement and study of motivation. Cognitive and Affective Measures of Motivation Experimental social psychologists conceptualize a goal as the cognitive representation of a desired end state (Fishbach & Ferguson, 2007; Kruglanski, 1996). According to this view, goals are organized in associative memory networks connecting each goal to corresponding constructs. Goal-relevant constructs could be activities or objects that contribute to goal attainment (i.e., means; Kruglanski et al., 2002), as well as activities or objects that hinder goal attainment (i.e., temptations; Fishbach, Friedman, & Kruglanski, 2003). For example, the goal to eat healthily may be associated with constructs such as apple, doctor (facilitating means), or French fries (hindering temptation). 
Cognitive and affective measures of motivation include the activation, evaluation, and perception of these goal-related constructs and the subjective experience they evoke. Goal activation: Memory, accessibility, and inhibition of goal-related constructs Constructs related to a goal can activate or prime the pursuit of that goal. For example, the presence of one’s study partner or the word “exam” in a game of scrabble can activate a student’s academic goal and hence increase her motivation to study. Once a goal is active, Social and Personality Psychology Compass 8/7 (2014): 328–341, 10.1111/spc3.12110 © 2014 John Wiley & Sons Ltd 330 How to Measure Motivation the motivational system prepares the individual for action by activating goal-relevant information (Bargh & Barndollar, 1996; Gollwitzer, 1996; Kruglanski, 1996). Thus, motivation manifests itself in terms of how easily goal-related constructs are brought tomind (i.e., accessibility; Aarts, Dijksterhuis, & De Vries, 2001; Higgins & King, 1981; Wyer & Srull, 1986). The activation and subsequent pursuit of a goal can be conscious, such that one is aware of the cues that led to goal-related judgments and behaviors. This activation can also be non-conscious, such that a one is unaware of the goal prime or that one is even exhibiting goal-related judgments and behaviors. Whether goals are conscious or non-conscious, a fundamental characteristic of goal-driven processes is the persistence of the accessibility of goal-related constructs for as long as the goal is active or until an individual disengages from the goal (Bargh, Gollwitzer, Lee-Chai, Barndollar, & Trotschel, 2001; Goschke & Kuhl, 1993). Upon goal completion, motivation diminishes and accessibility is inhibited (Liberman & Förster, 2000; Marsh, Hicks, & Bink, 1998). This active reduction in accessibility allows individuals to direct their cognitive resources to other tasks at hand without being distracted by thoughts of a completed goal. Thus, motivation can be measured by the degree to which goal-related concepts are accessible inmemory. Specifically, the greater the motivation to pursue/achieve a goal, the more likely individuals are to remember, notice, or recognize concepts, objects, or persons related to that goal. For example, in a classic study, Zeigarnik (1927) instructed participants to perform 20 short tasks, ten of which they did not get a chance to finish because the experimenter interrupted them. At the end of the study, Zeigarnik inferred the strength of motivation by asking participants to recall as many of the tasks as possible. Consistent with the notion that unfulfilled goals are associated with heightened motivational states, whereas fulfilled goals inhibit motivation, the results show that participants recalled more uncompleted tasks (i.e., unfulfilled goals) than completed tasks (i.e., fulfilled goals; the Zeigarnik effect). More recently, Förster, Liberman, and Higgins (2005) replicated these findings; inferring motivation from performance on a lexical decision task. Their study assessed the speed of recognizing – i.e., identifying as words versus non-words –words related to a focal goal prior to (versus after) completing that goal. A related measure of motivation is the inhibition of conflicting constructs. In",
"title": ""
},
{
"docid": "neg:1840549_14",
"text": "We present the use of an oblique angle physical vapor deposition OAPVDd technique with substrate rotation to obtain conformal thin films with enhanced step coverage on patterned surfaces. We report the results of rutheniumsRud films sputter deposited on trench structures with aspect ratio ,2 and show that OAPVD with an incidence angle less that 30° with respect to the substrate surface normal one can create a more conformal coating without overhangs and voids compared to that obtained by normal incidence deposition. A simple geometrical shadowing effect is presented to explain the results. The technique has the potential of extending the present PVD technique to future chip interconnect fabrication. ©2005 American Institute of Physics . fDOI: 10.1063/1.1937476 g",
"title": ""
},
{
"docid": "neg:1840549_15",
"text": "As an initial assessment, over 480,000 labeled virtual images of normal highway driving were readily generated in Grand Theft Auto V's virtual environment. Using these images, a CNN was trained to detect following distance to cars/objects ahead, lane markings, and driving angle (angular heading relative to lane centerline): all variables necessary for basic autonomous driving. Encouraging results were obtained when tested on over 50,000 labeled virtual images from substantially different GTA-V driving environments. This initial assessment begins to define both the range and scope of the labeled images needed for training as well as the range and scope of labeled images needed for testing the definition of boundaries and limitations of trained networks. It is the efficacy and flexibility of a\"GTA-V\"-like virtual environment that is expected to provide an efficient well-defined foundation for the training and testing of Convolutional Neural Networks for safe driving. Additionally, described is the Princeton Virtual Environment (PVE) for the training, testing and enhancement of safe driving AI, which is being developed using the video-game engine Unity. PVE is being developed to recreate rare but critical corner cases that can be used in re-training and enhancing machine learning models and understanding the limitations of current self driving models. The Florida Tesla crash is being used as an initial reference.",
"title": ""
},
{
"docid": "neg:1840549_16",
"text": "The cellular concept applied in mobile communication systems enables significant increase of overall system capacity, but requires careful radio network planning and dimensioning. Wireless and mobile network operators typically rely on various commercial radio network planning and dimensioning tools, which incorporate different radio signal propagation models. In this paper we present the use of open-source Geographical Resources Analysis Support System (GRASS) for the calculation of radio signal coverage. We developed GRASS modules for radio coverage prediction for a number of different radio channel models, with antenna radiation patterns given in the standard MSI format. The results are stored in a data base (e.g. MySQL, PostgreSQL) for further processing and in a simplified form as a bit-map file for displaying in GRASS. The accuracy of prediction was confirmed by comparison with results obtained by a dedicated professional prediction tool as well as with measurement results. Key-Words: network planning tool, open-source, GRASS GIS, path loss, raster, clutter, radio signal coverage",
"title": ""
},
{
"docid": "neg:1840549_17",
"text": "This study examined the effects of self-presentation goals on the amount and type of verbal deception used by participants in same-gender and mixed-gender dyads. Participants were asked to engage in a conversation that was secretly videotaped. Self-presentational goal was manipulated, where one member of the dyad (the self-presenter) was told to either appear (a) likable, (b) competent, or (c) was told to simply get to know his or her partner (control condition). After the conversation, self-presenters were asked to review a video recording of the interaction and identify the instances in which they had deceived the other person. Overall, participants told more lies when they had a goal to appear likable or competent compared to participants in the control condition, and the content of the lies varied according to self-presentation goal. In addition, lies told by men and women differed in content, although not in quantity.",
"title": ""
},
{
"docid": "neg:1840549_18",
"text": "We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ~ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.",
"title": ""
}
] |
1840550 | High-order Graph-based Neural Dependency Parsing | [
{
"docid": "pos:1840550_0",
"text": "In recent years, variants of a neural network architecture for statistical language modeling have been proposed and successfully applied, e.g. in the language modeling component of speech recognizers. The main advantage of these architectures is that they learn an embedding for words (or other symbols) in a continuous space that helps to smooth the language model and provide good generalization even when the number of training examples is insufficient. However, these models are extremely slow in comparison to the more commonly used n-gram models, both for training and recognition. As an alternative to an importance sampling method proposed to speed-up training, we introduce a hierarchical decomposition of the conditional probabilities that yields a speed-up of about 200 both during training and recognition. The hierarchical decomposition is a binary hierarchical clustering constrained by the prior knowledge extracted from the WordNet semantic hierarchy.",
"title": ""
},
{
"docid": "pos:1840550_1",
"text": "We explore the application of neural language models to machine translation. We develop a new model that combines the neural probabilistic language model of Bengio et al., rectified linear units, and noise-contrastive estimation, and we incorporate it into a machine translation system both by reranking k-best lists and by direct integration into the decoder. Our large-scale, large-vocabulary experiments across four language pairs show that our neural language model improves translation quality by up to 1.1 Bleu.",
"title": ""
},
{
"docid": "pos:1840550_2",
"text": "A goal of statistical language modeling is to learn the joint probability function of sequences of words in a language. This is intrinsically difficult because of the curse of dimensionality: a word sequence on which the model will be tested is likely to be different from all the word sequences seen during training. Traditional but very successful approaches based on n-grams obtain generalization by concatenating very short overlapping sequences seen in the training set. We propose to fight the curse of dimensionality by learning a distributed representation for words which allows each training sentence to inform the model about an exponential number of semantically neighboring sentences. The model learns simultaneously (1) a distributed representation for each word along with (2) the probability function for word sequences, expressed in terms of these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar (in the sense of having a nearby representation) to words forming an already seen sentence. Training such large models (with millions of parameters) within a reasonable time is itself a significant challenge. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach significantly improves on state-of-the-art n-gram models, and that the proposed approach allows to take advantage of longer contexts.",
"title": ""
}
] | [
{
"docid": "neg:1840550_0",
"text": "While machine learning systems have recently achieved impressive, (super)human-level performance in several tasks, they have often relied on unnatural amounts of supervision – e.g. large numbers of labeled images or continuous scores in video games. In contrast, human learning is largely unsupervised, driven by observation and interaction with the world. Emulating this type of learning in machines is an open challenge, and one that is critical for general artificial intelligence. Here, we explore prediction of future frames in video sequences as an unsupervised learning rule. A key insight here is that in order to be able to predict how the visual world will change over time, an agent must have at least some implicit model of object structure and the possible transformations objects can undergo. To this end, we have designed several models capable of accurate prediction in complex sequences. Our first model consists of a recurrent extension to the standard autoencoder framework. Trained end-to-end to predict the movement of synthetic stimuli, we find that the model learns a representation of the underlying latent parameters of the 3D objects themselves. Importantly, we find that this representation is naturally tolerant to object transformations, and generalizes well to new tasks, such as classification of static images. Similar models trained solely with a reconstruction loss fail to generalize as effectively. In addition, we explore the use of an adversarial loss, as in a Generative Adversarial Network, illustrating its complementary effects to traditional pixel losses for the task of next-frame prediction.",
"title": ""
},
{
"docid": "neg:1840550_1",
"text": "Recent changes in the Music Encoding Initiative (MEI) have transformed it into an extensible platform from which new notation encoding schemes can be produced. This paper introduces MEI as a document-encoding framework, and illustrates how it can be extended to encode new types of notation, eliminating the need for creating specialized and potentially incompatible notation encoding standards.",
"title": ""
},
{
"docid": "neg:1840550_2",
"text": "If we are to achieve natural human–robot interaction, we may need to complement current vision and speech interfaces. Touch may provide us with an extra tool in this quest. In this paper we demonstrate the role of touch in interaction between a robot and a human. We show how infrared sensors located on robots can be easily used to detect and distinguish human interaction, in this case interaction with individual children. This application of infrared sensors potentially has many uses; for example, in entertainment or service robotics. This system could also benefit therapy or rehabilitation, where the observation and recording of movement and interaction is important. In the long term, this technique might enable robots to adapt to individuals or individual types of user. c © 2006 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "neg:1840550_3",
"text": "Given the resources needed to launch a retail store on the Internet or change an existing online storefront design, it is important to allocate product development resources to interface features that actually improve store traffic and sales. We identified features that impact store traffic and sales using regression models of 1996 store traffic and dollar sales as dependent variables and interface design features such as number of links into the store, hours of promotional ads, number of products, and store navigation features as the independent variables. Product list navigation features that reduce the time to purchase products online account for 61% of the variance in monthly sales. Other factors explaining the variance in monthly sales include: number of hyperlinks into the store (10%), hours of promotion (4%) and customer service feedback (1%). These findings demonstrate that the user interface is an essential link between the customer and the retail store in Web-based shopping environments.",
"title": ""
},
{
"docid": "neg:1840550_4",
"text": "Gradient-based policy search is an alternative to value-function-based methods for reinforcement learning in non-Markovian domains. One apparent drawback of policy search is its requirement that all actions be \\on-policy\"; that is, that there be no explicit exploration. In this paper, we provide a method for using importance sampling to allow any well-behaved directed exploration policy during learning. We show both theoretically and experimentally that using this method can achieve dramatic performance improvements. During this work, Nicolas Meuleau was at the MIT Arti cial Intelligence laboratory, supported in part by a research grant from NTT; Leonid Peshkin by grants from NSF and NTT; and Kee-Eung Kim in part by AFOSR/RLF 30602-95-1-0020.",
"title": ""
},
{
"docid": "neg:1840550_5",
"text": "Network traffic prediction aims at predicting the subsequent network traffic by using the previous network traffic data. This can serve as a proactive approach for network management and planning tasks. The family of recurrent neural network (RNN) approaches is known for time series data modeling which aims to predict the future time series based on the past information with long time lags of unrevealed size. RNN contains different network architectures like simple RNN, long short term memory (LSTM), gated recurrent unit (GRU), identity recurrent unit (IRNN) which is capable to learn the temporal patterns and long range dependencies in large sequences of arbitrary length. To leverage the efficacy of RNN approaches towards traffic matrix estimation in large networks, we use various RNN networks. The performance of various RNN networks is evaluated on the real data from GÉANT backbone networks. To identify the optimal network parameters and network structure of RNN, various experiments are done. All experiments are run up to 200 epochs with learning rate in the range [0.01-0.5]. LSTM has performed well in comparison to the other RNN and classical methods. Moreover, the performance of various RNN methods is comparable to LSTM.",
"title": ""
},
{
"docid": "neg:1840550_6",
"text": "Well-designed games are good motivators by nature, as they imbue players with clear goals and a sense of reward and fulfillment, thus encouraging them to persist and endure in their quests. Recently, this motivational power has started to be applied to non- game contexts, a practice known as Gamification. This adds gaming elements to non-game processes, motivating users to adopt new behaviors, such as improving their physical condition, working more, or learning something new. This paper describes an experiment in which game-like elements were used to improve the delivery of a Master's level College course, including scoring, levels, leaderboards, challenges and badges. To assess how gamification impacted the learning experience, we compare the gamified course to its non-gamified version from the previous year, using different performance measures. We also assessed student satisfaction as compared to other regular courses in the same academic context. Results were very encouraging, showing significant increases ranging from lecture attendance to online participation, proactive behaviors and perusing the course reference materials. Moreover, students considered the gamified instance to be more motivating, interesting and easier to learn as compared to other courses. We finalize by discussing the implications of these results on the design of future gamified learning experiences.",
"title": ""
},
{
"docid": "neg:1840550_7",
"text": "Hardware accelerators are being increasingly deployed to boost the performance and energy efficiency of deep neural network (DNN) inference. In this paper we propose Thundervolt, a new framework that enables aggressive voltage underscaling of high-performance DNN accelerators without compromising classification accuracy even in the presence of high timing error rates. Using post-synthesis timing simulations of a DNN acceleratormodeled on theGoogle TPU,we show that Thundervolt enables between 34%-57% energy savings on stateof-the-art speech and image recognition benchmarks with less than 1% loss in classification accuracy and no performance loss. Further, we show that Thundervolt is synergistic with and can further increase the energy efficiency of commonly used run-timeDNNpruning techniques like Zero-Skip.",
"title": ""
},
{
"docid": "neg:1840550_8",
"text": "A 50-year-old man developed numerous pustules and bullae on the trunk and limbs 15 days after anal fissure surgery. The clinicopathological diagnosis was iododerma induced by topical povidone-iodine sitz baths postoperatively. Complete resolution occurred within 3 weeks using systemic corticosteroids and forced diuresis.",
"title": ""
},
{
"docid": "neg:1840550_9",
"text": "How we design and evaluate for emotions depends crucially on what we take emotions to be. In affective computing, affect is often taken to be another kind of information discrete units or states internal to an individual that can be transmitted in a loss-free manner from people to computational systems and back. While affective computing explicitly challenges the primacy of rationality in cognitivist accounts of human activity, at a deeper level it often relies on and reproduces the same information-processing model of cognition. Drawing on cultural, social, and interactional critiques of cognition which have arisen in HCI, as well as anthropological and historical accounts of emotion, we explore an alternative perspective on emotion as interaction: dynamic, culturally mediated, and socially constructed and experienced. We demonstrate how this model leads to new goals for affective systems instead of sensing and transmitting emotion, systems should support human users in understanding, interpreting, and experiencing emotion in its full complexity and ambiguity. In developing from emotion as objective, externally measurable unit to emotion as experience, evaluation, too, alters focus from externally tracking the circulation of emotional information to co-interpreting emotions as they are made in interaction.",
"title": ""
},
{
"docid": "neg:1840550_10",
"text": "This work, set in the context of the apparel industry, proposes an action-oriented disclosure tool to help solve the sustainability challenges of complex fast-fashion supply chains (SCs). In a search for effective disclosure, it focusses on actions towards sustainability instead of the measurements and indicators of its impacts. We applied qualitative and quantitative content analysis to the sustainability reporting of the world’s two largest fast-fashion companies in three phases. First, we searched for the challenges that the organisations report they are currently facing. Second, we introduced the United Nations’ Sustainable Development Goals (SDGs) framework to overcome the voluntary reporting drawback of ‘choosing what to disclose’, and revealed orphan issues. This broadened the scope from internal corporate challenges to issues impacting the ecosystems in which companies operate. Third, we analysed the reported sustainability actions and decomposed them into topics, instruments, and actors. The results showed that fast-fashion reporting has a broadly developed analysis base, but lacks action orientation. This has led us to propose the ‘Fast-Fashion Sustainability Scorecard’ as a universal disclosure framework that shifts the focus from (i) reporting towards action; (ii) financial performance towards sustainable value creation; and (iii) corporate boundaries towards value creation for the broader SC ecosystem.",
"title": ""
},
{
"docid": "neg:1840550_11",
"text": "The resurgence of effort within computational semantics has led to increased interest in various types of relation extraction and semantic parsing. While various manually annotated resources exist for enabling this work, these materials have been developed with different standards and goals in mind. In an effort to develop better general understanding across these resources, we provide a summary overview of the standards underlying ACE, ERE, TAC-KBP Slot-filling, and FrameNet.",
"title": ""
},
{
"docid": "neg:1840550_12",
"text": "Accurate project effort prediction is an important goal for the software engineering community. To date most work has focused upon building algorithmic models of effort, for example COCOMO. These can be calibrated to local environments. We describe an alternative approach to estimation based upon the use of analogies. The underlying principle is to characterize projects in terms of features (for example, the number of interfaces, the development method or the size of the functional requirements document). Completed projects are stored and then the problem becomes one of finding the most similar projects to the one for which a prediction is required. Similarity is defined as Euclidean distance in n-dimensional space where n is the number of project features. Each dimension is standardized so all dimensions have equal weight. The known effort values of the nearest neighbors to the new project are then used as the basis for the prediction. The process is automated using a PC-based tool known as ANGEL. The method is validated on nine different industrial datasets (a total of 275 projects) and in all cases analogy outperforms algorithmic models based upon stepwise regression. From this work we argue that estimation by analogy is a viable technique that, at the very least, can be used by project managers to complement current estimation techniques.",
"title": ""
},
{
"docid": "neg:1840550_13",
"text": "PURPOSE\nTo report a novel method for measuring the degree of inferior oblique muscle overaction and to investigate the correlation with other factors.\n\n\nDESIGN\nCross-sectional diagnostic study.\n\n\nMETHODS\nOne hundred and forty-two eyes (120 patients) were enrolled in this study. Subjects underwent a full orthoptic examination and photographs were obtained in the cardinal positions of gaze. The images were processed using Photoshop and analyzed using the ImageJ program to measure the degree of inferior oblique muscle overaction. Reproducibility or interobserver variability was assessed by Bland-Altman plots and by calculation of the intraclass correlation coefficient (ICC). The correlation between the degree of inferior oblique muscle overaction and the associated factors was estimated with linear regression analysis.\n\n\nRESULTS\nThe mean angle of inferior oblique muscle overaction was 17.8 ± 10.1 degrees (range, 1.8-54.1 degrees). The 95% limit of agreement of interobserver variability for the degree of inferior oblique muscle overaction was ±1.76 degrees, and ICC was 0.98. The angle of inferior oblique muscle overaction showed significant correlation with the clinical grading scale (R = 0.549, P < .001) and with hypertropia in the adducted position (R = 0.300, P = .001). The mean angles of inferior oblique muscle overaction classified into grades 1, 2, 3, and 4 according to the clinical grading scale were 10.5 ± 9.1 degrees, 16.8 ± 7.8 degrees, 24.3 ± 8.8 degrees, and 40.0 ± 12.2 degrees, respectively (P < .001).\n\n\nCONCLUSIONS\nWe describe a new method for measuring the degree of inferior oblique muscle overaction using photographs of the cardinal positions. It has the potential to be a diagnostic tool that measures inferior oblique muscle overaction with minimal observer dependency.",
"title": ""
},
{
"docid": "neg:1840550_14",
"text": "Fashion markets are synonymous with rapid change and, as a result, commercial success or failure in those markets is largely determined by the organisation’s flexibility and responsiveness. Responsiveness is characterised by short time-to-market, the ability to scale up (or down) quickly and the rapid incorporation of consumer preferences into the design process. In this paper it is argued that conventional organisational structures and forecast-driven supply chains are not adequate to meet the challenges of volatile and turbulent demand which typify fashion markets today. Instead, the requirement is for the creation of an agile organisation embedded within an agile supply chain INTRODUCTION Fashion markets have long attracted the interest of researchers. More often the focus of their work was the psychology and sociology of fashion and with the process by which fashions were adopted across populations (see for example Wills and Midgley, 1973). In parallel with this, a body of work has developed seeking to identify cycles in fashions (e.g. Carman, 1966). Much of this earlier work was intended to create insights and even tools to help improve the demand forecasting of fashion products. However, the reality that is now gradually being accepted both by those who work in the industry and those who study it, is that the demand for fashion products cannot be forecast. Instead, we need to recognise that fashion markets are complex open systems that frequently demonstrate high levels of ‘chaos’. In such conditions managerial effort may be better expended on devising strategies",
"title": ""
},
{
"docid": "neg:1840550_15",
"text": "Abstract In the present study biodiesel was synthesized from Waste Cook Oil (WCO) by three-step method and regressive analyzes of the process was done. The raw oil, containing 1.9wt% Free Fatty Acid (FFA) and viscosity was 47.6mm/s. WCO was collected from local restaurant of Sylhet city in Bangladesh. Transesterification method gives lower yield than three-step method. In the three-step method, the first step is saponification of the oil followed by acidification to produce FFA and finally esterification of FFA to produce biodiesel. In the saponification reaction, various reaction parameters such as oil to sodium hydroxide molar ratio and reaction time were optimized and the oil to NaOH molar ratio was 1:2, In the esterification reaction, the reaction parameters such as methanol to FFA molar ratio, catalyst concentration and reaction temperature were optimized. Silica gel was used during esterification reaction to adsorb water produced in the reaction. Hence the reaction rate was increased and finally the FFA was reduced to 0.52wt%. A factorial design was studied for esterification reaction based on yield of biodiesel. Finally various properties of biodiesel such as FFA, viscosity, specific gravity, cetane index, pour point, flash point etc. were measured and compared with biodiesel and petro-diesel standard. The reaction yield was 79%.",
"title": ""
},
{
"docid": "neg:1840550_16",
"text": "In the light of evidence from about 200 studies showing gender symmetry in perpetration of partner assault, research can now focus on why gender symmetry is predominant and on the implications of symmetry for primary prevention and treatment of partner violence. Progress in such research is handicapped by a number of problems: (1) Insufficient empirical research and a surplus of discussion and theory, (2) Blinders imposed by commitment to a single causal factor theory-patriarchy and male dominance-in the face of overwhelming evidence that this is only one of a multitude of causes, (3) Research purporting to investigate gender differences but which obtains data on only one gender, (4) Denial of research grants to projects that do not assume most partner violence is by male perpetrators, (5) Failure to investigate primary prevention and treatment programs for female offenders, and (6) Suppression of evidence on female perpetration by both researchers and agencies.",
"title": ""
},
{
"docid": "neg:1840550_17",
"text": "This work presents a systematic study toward the design and first demonstration of high-performance n-type monolayer tungsten diselenide (WSe2) field effect transistors (FET) by selecting the contact metal based on understanding the physics of contact between metal and monolayer WSe2. Device measurements supported by ab initio density functional theory (DFT) calculations indicate that the d-orbitals of the contact metal play a key role in forming low resistance ohmic contacts with monolayer WSe2. On the basis of this understanding, indium (In) leads to small ohmic contact resistance with WSe2 and consequently, back-gated In-WSe2 FETs attained a record ON-current of 210 μA/μm, which is the highest value achieved in any monolayer transition-metal dichalcogenide- (TMD) based FET to date. An electron mobility of 142 cm(2)/V·s (with an ON/OFF current ratio exceeding 10(6)) is also achieved with In-WSe2 FETs at room temperature. This is the highest electron mobility reported for any back gated monolayer TMD material till date. The performance of n-type monolayer WSe2 FET was further improved by Al2O3 deposition on top of WSe2 to suppress the Coulomb scattering. Under the high-κ dielectric environment, electron mobility of Ag-WSe2 FET reached ~202 cm(2)/V·s with an ON/OFF ratio of over 10(6) and a high ON-current of 205 μA/μm. In tandem with a recent report of p-type monolayer WSe2 FET ( Fang , H . et al. Nano Lett. 2012 , 12 , ( 7 ), 3788 - 3792 ), this demonstration of a high-performance n-type monolayer WSe2 FET corroborates the superb potential of WSe2 for complementary digital logic applications.",
"title": ""
},
{
"docid": "neg:1840550_18",
"text": "Delay Tolerant Networks (DTN) are networks of self-organizing wireless nodes, where end-to-end connectivity is intermittent. In these networks, forwarding decisions are made using locally collected knowledge about node behavior (e.g., past contacts between nodes) to predict which nodes are likely to deliver a content or bring it closer to the destination. One promising way of predicting future contact opportunities is to aggregate contacts seen in the past to a social graph and use metrics from complex network analysis (e.g., centrality and similarity) to assess the utility of a node to carry a piece of content. This aggregation presents an inherent tradeoff between the amount of time-related information lost during this mapping and the predictive capability of complex network analysis in this context. In this paper, we use two recent DTN routing algorithms that rely on such complex network analysis, to show that contact aggregation significantly affects the performance of these protocols. We then propose simple contact mapping algorithms that demonstrate improved performance up to a factor of 4 in delivery ratio, and robustness to various connectivity scenarios for both protocols.",
"title": ""
},
{
"docid": "neg:1840550_19",
"text": "Despite the growing popularity of digital imaging devices, the problem of accurately estimating the spatial frequency response or optical transfer function (OTF) of these devices has been largely neglected. Traditional methods for estimating OTFs were designed for film cameras and other devices that form continuous images. These traditional techniques do not provide accurate OTF estimates for typical digital image acquisition devices because they do not account for the fixed sampling grids of digital devices . This paper describes a simple method for accurately estimating the OTF of a digital image acquisition device. The method extends the traditional knife-edge technique''3 to account for sampling. One of the principal motivations for digital imaging systems is the utility of digital image processing algorithms, many of which require an estimate of the OTF. Algorithms for enhancement, spatial registration, geometric transformations, and other purposes involve restoration—removing the effects of the image acquisition device. Nearly all restoration algorithms (e.g., the",
"title": ""
}
] |
1840551 | Meta-Unsupervised-Learning: A supervised approach to unsupervised learning | [
{
"docid": "pos:1840551_0",
"text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.",
"title": ""
},
{
"docid": "pos:1840551_1",
"text": "Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.",
"title": ""
},
{
"docid": "pos:1840551_2",
"text": "Problems of clustering data from pairwise similarity information are ubiquitous in Computer Science. Theoretical treatments typically view the similarity information as ground-truth and then design algorithms to (approximately) optimize various graph-based objective functions. However, in most applications, this similarity information is merely based on some heuristic; the ground truth is really the unknown correct clustering of the data points and the real goal is to achieve low error on the data. In this work, we develop a theoretical approach to clustering from this perspective. In particular, motivated by recent work in learning theory that asks \"what natural properties of a similarity (or kernel) function are sufficient to be able to learn well?\" we ask \"what natural properties of a similarity function are sufficient to be able to cluster well?\"\n To study this question we develop a theoretical framework that can be viewed as an analog of the PAC learning model for clustering, where the object of study, rather than being a concept class, is a class of (concept, similarity function) pairs, or equivalently, a property the similarity function should satisfy with respect to the ground truth clustering. We then analyze both algorithmic and information theoretic issues in our model. While quite strong properties are needed if the goal is to produce a single approximately-correct clustering, we find that a number of reasonable properties are sufficient under two natural relaxations: (a) list clustering: analogous to the notion of list-decoding, the algorithm can produce a small list of clusterings (which a user can select from) and (b) hierarchical clustering: the algorithm's goal is to produce a hierarchy such that desired clustering is some pruning of this tree (which a user could navigate). We develop a notion of the clustering complexity of a given property (analogous to notions of capacity in learning theory), that characterizes its information-theoretic usefulness for clustering. We analyze this quantity for several natural game-theoretic and learning-theoretic properties, as well as design new efficient algorithms that are able to take advantage of them. Our algorithms for hierarchical clustering combine recent learning-theoretic approaches with linkage-style methods. We also show how our algorithms can be extended to the inductive case, i.e., by using just a constant-sized sample, as in property testing. The analysis here uses regularity-type results of [FK] and [AFKK].",
"title": ""
}
] | [
{
"docid": "neg:1840551_0",
"text": "Visual-to-auditory Sensory Substitution Devices (SSDs) are non-invasive sensory aids that provide visual information to the blind via their functioning senses, such as audition. For years SSDs have been confined to laboratory settings, but we believe the time has come to use them also for their original purpose of real-world practical visual rehabilitation. Here we demonstrate this potential by presenting for the first time new features of the EyeMusic SSD, which gives the user whole-scene shape, location & color information. These features include higher resolution and attempts to overcome previous stumbling blocks by being freely available to download and run from a smartphone platform. We demonstrate with use the EyeMusic the potential of SSDs in noisy real-world scenarios for tasks such as identifying and manipulating objects. We then discuss the neural basis of using SSDs, and conclude by discussing other steps-in-progress on the path to making their practical use more widespread.",
"title": ""
},
{
"docid": "neg:1840551_1",
"text": "Cloud computing is emerging as a viable platform for scientific exploration. Elastic and on-demand access to resources (and other services), the abstraction of “unlimited” resources, and attractive pricing models provide incentives for scientists to move their workflows into clouds. Generalizing these concepts beyond a single virtualized datacenter, it is possible to create federated marketplaces where different types of resources (e.g., clouds, HPC grids, supercomputers) that may be geographically distributed, are collectively exposed as a single elastic infrastructure. This presents opportunities for optimizing the execution of application workflows with heterogeneous and dynamic requirements, and tackling larger scale problems. In this paper, we introduce a framework to manage the end-to-end execution of data-intensive application workflows in dynamic software-defined resource federation. This framework enables the autonomic execution of workflows by elastically provisioning an appropriate set of resources that meet application requirements, and by adapting this set of resources at runtime as the requirements change. It also allows users to customize scheduling policies that drive the way resources federated and used. To demonstrate the benefits of our approach, we study the execution of two different data-intensive scientific workflows in a multi-cloud federation using different policies and objective functions.",
"title": ""
},
{
"docid": "neg:1840551_2",
"text": "Recurrent neural networks (RNNs) form an important class of architectures among neural networks useful for language modeling and sequential prediction. However, optimizing RNNs is known to be harder compared to feed-forward neural networks. A number of techniques have been proposed in literature to address this problem. In this paper we propose a simple technique called fraternal dropout that takes advantage of dropout to achieve this goal. Specifically, we propose to train two identical copies of an RNN (that share parameters) with different dropout masks while minimizing the difference between their (pre-softmax) predictions. In this way our regularization encourages the representations of RNNs to be invariant to dropout mask, thus being robust. We show that our regularization term is upper bounded by the expectation-linear dropout objective which has been shown to address the gap due to the difference between the train and inference phases of dropout. We evaluate our model and achieve state-of-the-art results in sequence modeling tasks on two benchmark datasets – Penn Treebank and Wikitext-2. We also show that our approach leads to performance improvement by a significant margin in image captioning (Microsoft COCO) and semi-supervised (CIFAR-10) tasks.",
"title": ""
},
{
"docid": "neg:1840551_3",
"text": "In our 1990 paper, we showed that managers concerned with their reputations might choose to mimic the behavior of other managers and ignore their own information. We presented a model in which “smart” managers receive correlated, informative signals, whereas “dumb” managers receive independent, uninformative signals. Managers have an incentive to follow the herd to indicate to the labor market that they have received the same signal as others, and hence are likely to be smart. This model of reputational herding has subsequently found empirical support in a number of recent papers, including Judith A. Chevalier and Glenn D. Ellison’s (1999) study of mutual fund managers and Harrison G. Hong et al.’s (2000) study of equity analysts. We argued in our 1990 paper that reputational herding “requires smart managers’ prediction errors to be at least partially correlated with each other” (page 468). In their Comment, Marco Ottaviani and Peter Sørensen (hereafter, OS) take issue with this claim. They write: “correlation is not necessary for herding, other than in degenerate cases.” It turns out that the apparent disagreement hinges on how strict a definition of herding one adopts. In particular, we had defined a herding equilibrium as one in which agentB alwaysignores his own information and follows agent A. (See, e.g., our Propositions 1 and 2.) In contrast, OS say that there is herding when agent B sometimesignores his own information and follows agent A. The OS conclusion is clearly correct given their weaker definition of herding. At the same time, however, it also seems that for the stricter definition that we adopted in our original paper, correlated errors on the part of smart managers are indeed necessary for a herding outcome—even when one considers the expanded parameter space that OS do. We will try to give some intuition for why the different definitions of herding lead to different conclusions about the necessity of correlated prediction errors. Along the way, we hope to convince the reader that our stricter definition is more appropriate for isolating the economic effects at work in the reputational herding model. An example is helpful in illustrating what is going on. Consider a simple case where the parameter values are as follows: p 5 3⁄4; q 5 1⁄4; z 5 1⁄2, andu 5 1⁄2. In our 1990 paper, we also imposed the constraint that z 5 ap 1 (1 2 a)q, which further implies thata 5 1⁄2. The heart of the OS Comment is the idea that this constraint should be disposed of—i.e., we should look at other values of a. Without loss of generality, we will consider values of a above 1⁄2, and distinguish two cases.",
"title": ""
},
{
"docid": "neg:1840551_4",
"text": "Advances in artificial impedance surface conformal antennas are presented. A detailed conical impedance modulation is proposed for the first time. By coating an artificial impedance surface on a cone, we can control the conical surface wave radiating at the desired direction. The surface impedance is constructed by printing a dense texture of sub wavelength metal patches on a grounded dielectric slab. The effective surface impedance depends on the size of the patches, and can be varied as a function of position. The final devices are conical conformal antennas with simple layout and feeding. Simulated results are presented, and better aperture efficiency and lower side lobe level are obtained than our predecessors [2].",
"title": ""
},
{
"docid": "neg:1840551_5",
"text": "Nontechnical losses, particularly due to electrical theft, have been a major concern in power system industries for a long time. Large-scale consumption of electricity in a fraudulent manner may imbalance the demand-supply gap to a great extent. Thus, there arises the need to develop a scheme that can detect these thefts precisely in the complex power networks. So, keeping focus on these points, this paper proposes a comprehensive top-down scheme based on decision tree (DT) and support vector machine (SVM). Unlike existing schemes, the proposed scheme is capable enough to precisely detect and locate real-time electricity theft at every level in power transmission and distribution (T&D). The proposed scheme is based on the combination of DT and SVM classifiers for rigorous analysis of gathered electricity consumption data. In other words, the proposed scheme can be viewed as a two-level data processing and analysis approach, since the data processed by DT are fed as an input to the SVM classifier. Furthermore, the obtained results indicate that the proposed scheme reduces false positives to a great extent and is practical enough to be implemented in real-time scenarios.",
"title": ""
},
{
"docid": "neg:1840551_6",
"text": "Active learning (AL) is an increasingly popular strategy for mitigating the amount of labeled data required to train classifiers, thereby reducing annotator effort. We describe a real-world, deployed application of AL to the problem of biomedical citation screening for systematic reviews at the Tufts Medical Center's Evidence-based Practice Center. We propose a novel active learning strategy that exploits a priori domain knowledge provided by the expert (specifically, labeled features)and extend this model via a Linear Programming algorithm for situations where the expert can provide ranked labeled features. Our methods outperform existing AL strategies on three real-world systematic review datasets. We argue that evaluation must be specific to the scenario under consideration. To this end, we propose a new evaluation framework for finite-pool scenarios, wherein the primary aim is to label a fixed set of examples rather than to simply induce a good predictive model. We use a method from medical decision theory for eliciting the relative costs of false positives and false negatives from the domain expert, constructing a utility measure of classification performance that integrates the expert preferences. Our findings suggest that the expert can, and should, provide more information than instance labels alone. In addition to achieving strong empirical results on the citation screening problem, this work outlines many important steps for moving away from simulated active learning and toward deploying AL for real-world applications.",
"title": ""
},
{
"docid": "neg:1840551_7",
"text": "We prove that there are arbitrarily long arithmetic progressions of primes. There are three major ingredients. [. . . ] [. . . ] for all x ∈ ZN (here (m0, t0, L0) = (3, 2, 1)) and E ( ν((x− y)/2)ν((x− y + h2)/2)ν(−y)ν(−y − h1)× × ν((x− y′)/2)ν((x− y′ + h2)/2)ν(−y)ν(−y − h1)× × ν(x)ν(x + h1)ν(x + h2)ν(x + h1 + h2) ∣∣∣∣ x, h1, h2, y, y′ ∈ ZN) = 1 + o(1) (0.1) (here (m0, t0, L0) = (12, 5, 2)). [. . . ] Proposition 0.1 (Generalised von Neumann). Suppose that ν is k-pseudorandom. Let f0, . . . , fk−1 ∈ L(ZN) be functions which are pointwise bounded by ν+νconst, or in other words |fj(x)| 6 ν(x) + 1 for all x ∈ ZN , 0 6 j 6 k − 1. (0.2) Let c0, . . . , ck−1 be a permutation of {0, 1, . . . , k − 1} (in practice we will take cj := j). Then E ( k−1 ∏ j=0 fj(x + cjr) ∣∣∣∣ x, r ∈ ZN) = O( inf 06j6k−1 ‖fj‖Uk−1) + o(1).",
"title": ""
},
{
"docid": "neg:1840551_8",
"text": "Land surface temperature and emissivity (LST&E) products are generated by the Moderate Resolution Imaging Spectroradiometer (MODIS) and Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) on the National Aeronautics and Space Administration's Terra satellite. These products are generated at different spatial, spectral, and temporal resolutions, resulting in discrepancies between them that are difficult to quantify, compounded by the fact that different retrieval algorithms are used to produce them. The highest spatial resolution MODIS emissivity product currently produced is from the day/night algorithm, which has a spatial resolution of 5 km. The lack of a high-spatial-resolution emissivity product from MODIS limits the usefulness of the data for a variety of applications and limits utilization with higher resolution products such as those from ASTER. This paper aims to address this problem by using the ASTER Temperature Emissivity Separation (TES) algorithm, combined with an improved atmospheric correction method, to generate the LST&E products for MODIS at 1-km spatial resolution and for ASTER in a consistent manner. The rms differences between the ASTER and MODIS emissivities generated from TES over the southwestern U.S. were 0.013 at 8.6 μm and 0.0096 at 11 μm, with good correlations of up to 0.83. The validation with laboratory-measured sand samples from the Algodones and Kelso Dunes in CA showed a good agreement in spectral shape and magnitude, with mean emissivity differences in all bands of 0.009 and 0.010 for MODIS and ASTER, respectively. These differences are equivalent to approximately 0.6 K in the LST for a material at 300 K and at 11 μm.",
"title": ""
},
{
"docid": "neg:1840551_9",
"text": "Projects in the area of architectural design and urban planning typically engage several architects as well as experts from other professions. While the design and review meetings thus often involve a large number of cooperating participants, the actual design is still done by the individuals in the time between those meetings using desktop PCs and CAD applications. A real collaborative approach to architectural design and urban planning is often limited to early paper-based sketches. In order to overcome these limitations we designed and realized the Augmented Round Table, a new approach to support complex design and planning decisions for architects. While AR has been applied to this area earlier, our approach does not try to replace the use of CAD systems but rather integrates them seamlessly into the collaborative AR environment. The approach is enhanced by intuitive interaction mechanisms that can be easily configured for different application scenarios.",
"title": ""
},
{
"docid": "neg:1840551_10",
"text": "The article is related to the development of techniques for automatic recognition of bird species by their sounds. It has been demonstrated earlier that a simple model of one time-varying sinusoid is very useful in classification and recognition of typical bird sounds. However, a large class of bird sounds are not pure sinusoids but have a clear harmonic spectrum structure. We introduce a way to classify bird syllables into four classes by their harmonic structure.",
"title": ""
},
{
"docid": "neg:1840551_11",
"text": "Today’s huge volumes of data, heterogeneous information and communication technologies, and borderless cyberinfrastructures create new challenges for security experts and law enforcement agencies investigating cybercrimes. The future of digital forensics is explored, with an emphasis on these challenges and the advancements needed to effectively protect modern societies and pursue cybercriminals.",
"title": ""
},
{
"docid": "neg:1840551_12",
"text": "Human pose estimation using deep neural networks aims to map input images with large variations into multiple body keypoints, which must satisfy a set of geometric constraints and interdependence imposed by the human body model. This is a very challenging nonlinear manifold learning process in a very high dimensional feature space. We believe that the deep neural network, which is inherently an algebraic computation system, is not the most efficient way to capture highly sophisticated human knowledge, for example those highly coupled geometric characteristics and interdependence between keypoints in human poses. In this work, we propose to explore how external knowledge can be effectively represented and injected into the deep neural networks to guide its training process using learned projections that impose proper prior. Specifically, we use the stacked hourglass design and inception-resnet module to construct a fractal network to regress human pose images into heatmaps with no explicit graphical modeling. We encode external knowledge with visual features, which are able to characterize the constraints of human body models and evaluate the fitness of intermediate network output. We then inject these external features into the neural network using a projection matrix learned using an auxiliary cost function. The effectiveness of the proposed inception-resnet module and the benefit in guided learning with knowledge projection is evaluated on two widely used human pose estimation benchmarks. Our approach achieves state-of-the-art performance on both datasets.",
"title": ""
},
{
"docid": "neg:1840551_13",
"text": "With the rapid expansion of new available information presented to us online on a daily basis, text classification becomes imperative in order to classify and maintain it. Word2vec offers a unique perspective to the text mining community. By converting words and phrases into a vector representation, word2vec takes an entirely new approach on text classification. Based on the assumption that word2vec brings extra semantic features that helps in text classification, our work demonstrates the effectiveness of word2vec by showing that tf-idf and word2vec combined can outperform tf-idf because word2vec provides complementary features (e.g. semantics that tf-idf can't capture) to tf-idf. Our results show that the combination of word2vec weighted by tf-idf and tf-idf does not outperform tf-idf consistently. It is consistent enough to say the combination of the two can outperform either individually.",
"title": ""
},
{
"docid": "neg:1840551_14",
"text": "Recently, Long Term Evolution (LTE) has developed a femtocell for indoor coverage extension. However, interference problem between the femtocell and the macrocell should be solved in advance. In this paper, we propose an interference management scheme in the LTE femtocell systems using Fractional Frequency Reuse (FFR). Under the macrocell allocating frequency band by the FFR, the femtocell chooses sub-bands which are not used in the macrocell sub-area to avoid interference. Simulation results show that proposed scheme enhances total/edge throughputs and reduces the outage probability in overall network, especially for the cell edge users.",
"title": ""
},
{
"docid": "neg:1840551_15",
"text": "We present CHARAGRAM embeddings, a simple approach for learning character-based compositional models to embed textual sequences. A word or sentence is represented using a character n-gram count vector, followed by a single nonlinear transformation to yield a low-dimensional embedding. We use three tasks for evaluation: word similarity, sentence similarity, and part-of-speech tagging. We demonstrate that CHARAGRAM embeddings outperform more complex architectures based on character-level recurrent and convolutional neural networks, achieving new state-of-the-art performance on several similarity tasks. 1",
"title": ""
},
{
"docid": "neg:1840551_16",
"text": "This article surveys the literature on analyses of mobile traffic collected by operators within their network infrastructure. This is a recently emerged research field, and, apart from a few outliers, relevant works cover the period from 2005 to date, with a sensible densification over the last three years. We provide a thorough review of the multidisciplinary activities that rely on mobile traffic datasets, identifying major categories and sub-categories in the literature, so as to outline a hierarchical classification of research lines. When detailing the works pertaining to each class, we balance a comprehensive view of state-of-the-art results with punctual focuses on the methodological aspects. Our approach provides a complete introductory guide to the research based on mobile traffic analysis. It allows summarizing the main findings of the current state-of-the-art, as well as pinpointing important open research directions.",
"title": ""
},
{
"docid": "neg:1840551_17",
"text": "A residual network (or ResNet) is a standard deep neural net architecture, with stateof-the-art performance across numerous applications. The main premise of ResNets is that they allow the training of each layer to focus on fitting just the residual of the previous layer’s output and the target output. Thus, we should expect that the trained network is no worse than what we can obtain if we remove the residual layers and train a shallower network instead. However, due to the non-convexity of the optimization problem, it is not at all clear that ResNets indeed achieve this behavior, rather than getting stuck at some arbitrarily poor local minimum. In this paper, we rigorously prove that arbitrarily deep, nonlinear residual units indeed exhibit this behavior, in the sense that the optimization landscape contains no local minima with value above what can be obtained with a linear predictor (namely a 1-layer network). Notably, we show this under minimal or no assumptions on the precise network architecture, data distribution, or loss function used. We also provide a quantitative analysis of approximate stationary points for this problem. Finally, we show that with a certain tweak to the architecture, training the network with standard stochastic gradient descent achieves an objective value close or better than any linear predictor.",
"title": ""
},
{
"docid": "neg:1840551_18",
"text": "Multipath is exploited to image targets that are hidden due to lack of line of sight (LOS) path in urban environments. Urban radar scenes include building walls, therefore creating reflections causing multipath returns. Conventional processing via synthetic aperture beamforming algorithms do not detect or localize the target at its true position. To remove these limitations, two multipath exploitation techniques to image a hidden target at its true location are presented under the assumptions that the locations of the reflecting walls are known and that the target multipath is resolvable and detectable. The first technique directly operates on the radar returns, whereas the second operates on the traditional beamformed image. Both these techniques mitigate the false alarms arising from the multipath while simultaneously permitting the shadowed target to be detected at its true location. While these techniques are general, they are examined for two important urban radar applications: detecting shadowed targets in an urban canyon, and detecting shadowed targets around corners.",
"title": ""
},
{
"docid": "neg:1840551_19",
"text": "Cloud applications are increasingly built from a mixture of runtime technologies. Hosted functions and service-oriented web hooks are among the most recent ones which are natively supported by cloud platforms. They are collectively referred to as serverless computing by application engineers due to the transparent on-demand instance activation and microbilling without the need to provision infrastructure explicitly. This half-day tutorial explains the use cases for serverless computing and the drivers and existing software solutions behind the programming and deployment model also known as Function-as-a-Service in the overall cloud computing stack. Furthermore, it presents practical open source tools for deriving functions from legacy code and for the management and execution of functions in private and public clouds.",
"title": ""
}
] |
1840552 | Towards Creation of a Corpus for Argumentation Mining the Biomedical Genetics Research Literature | [
{
"docid": "pos:1840552_0",
"text": "This paper describes recent approaches using text-mining to automatically profile and extract arguments from legal cases. We outline some of the background context and motivations. We then turn to consider issues related to the construction and composition of a corpora of legal cases. We show how a Context-Free Grammar can be used to extract arguments, and how ontologies and Natural Language Processing can identify complex information such as case factors and participant roles. Together the results bring us closer to automatic identification of legal arguments.",
"title": ""
}
] | [
{
"docid": "neg:1840552_0",
"text": "REQUIRED) In this paper, we present a social/behavioral study of individual information security practices of internet users in Latin America, specifically presenting the case of Bolivia. The research model uses social cognitive theory in order to explain the individual cognitive factors that influence information security behavior. The model includes individuals’ beliefs about their abilities to competently use computer information security tools and information security awareness in the determination of effective information security practices. The operationalization of constructs that are part of our research model, such as information security practice as the dependent variable, self-efficacy and information security awareness as independent variables , are presented both in Spanish and English. In this study, we offer the analysis of a survey of 255 Internet users from Bolivia who replied to our survey and provided responses about their information security behavior. A discussion about information security awareness and practices is presented.",
"title": ""
},
{
"docid": "neg:1840552_1",
"text": "Deep networks have recently enjoyed enormous success when applied to recognition and classification problems in computer vision [22, 33], but their use in graphics problems has been limited ([23, 7] are notable recent exceptions). In this work, we present a novel deep architecture that performs new view synthesis directly from pixels, trained from a large number of posed image sets. In contrast to traditional approaches, which consist of multiple complex stages of processing, each of which requires careful tuning and can fail in unexpected ways, our system is trained end-to-end. The pixels from neighboring views of a scene are presented to the network, which then directly produces the pixels of the unseen view. The benefits of our approach include generality (we only require posed image sets and can easily apply our method to different domains), and high quality results on traditionally difficult scenes. We believe this is due to the end-to-end nature of our system, which is able to plausibly generate pixels according to color, depth, and texture priors learnt automatically from the training data. We show view interpolation results on imagery from the KITTI dataset [12], from data from [1] as well as on Google Street View images. To our knowledge, our work is the first to apply deep learning to the problem of new view synthesis from sets of real-world, natural imagery.",
"title": ""
},
{
"docid": "neg:1840552_2",
"text": "This paper describes recent work on the “Crosswatch” project, which is a computer vision-based smartphone system developed for providing guidance to blind and visually impaired travelers at traffic intersections. A key function of Crosswatch is self-localization - the estimation of the user's location relative to the crosswalks in the current traffic intersection. Such information may be vital to users with low or no vision to ensure that they know which crosswalk they are about to enter, and are properly aligned and positioned relative to the crosswalk. However, while computer vision-based methods have been used for finding crosswalks and helping blind travelers align themselves to them, these methods assume that the entire crosswalk pattern can be imaged in a single frame of video, which poses a significant challenge for a user who lacks enough vision to know where to point the camera so as to properly frame the crosswalk. In this paper we describe work in progress that tackles the problem of crosswalk detection and self-localization, building on recent work describing techniques enabling blind and visually impaired users to acquire 360° image panoramas while turning in place on a sidewalk. The image panorama is converted to an aerial (overhead) view of the nearby intersection, centered on the location that the user is standing at, so as to facilitate matching with a template of the intersection obtained from Google Maps satellite imagery. The matching process allows crosswalk features to be detected and permits the estimation of the user's precise location relative to the crosswalk of interest. We demonstrate our approach on intersection imagery acquired by blind users, thereby establishing the feasibility of the approach.",
"title": ""
},
{
"docid": "neg:1840552_3",
"text": "J.E. Dietrich (ed.), Female Puberty: A Comprehensive Guide for Clinicians, DOI 10.1007/978-1-4939-0912-4_2, © Springer Science+Business Media New York 2014 Abstract The development of a female child into an adult woman is a complex process. Puberty, and the hormones that fuel the physical and psychological changes which are its hallmarks, is generally viewed as a rough and often unpredictable storm that must be weathered by the surrounding adults. The more we learn, however, about the intricate interplay between the endocrine regulators and the endorgan responses to this hormonal symphony, puberty seems less like chaos, and more of an incredible metamorphosis that leads to reproductive capacity and psychosocial maturation. Physically, female puberty is marked by accelerated growth and the development of secondary sexual characteristics. Secondary sexual characteristics are those that distinguish two different sexes in a species, but are not directly part of the reproductive system. Analogies from the animal kingdom include manes in male lions and the elaborate tails of male peacocks. The visible/external sequence of events is generally: breast budding (thelarche), onset of pubic hair (pubarche), maximal growth velocity, menarche, development of axillary hair, attainment of the adult breast type, adult pubic hair pattern. Underlying these external developments is the endocrine axis orchestrating the increase in gonadal steroid production (gonadarche), the increase in adrenal androgen production (adrenarche) and the associated changes in the reproductive tract that allow fertility. Meanwhile, the brain is rapidly adapting to the new hormonal milieu. The extent of variation in this scenario is enormous. On average, the process from accelerated growth and breast budding to menarche is approximately 4.5 years with a range from 1.5 to 6 years. There are differences in timing and expression of maturation based on ethnicity, geography, and genetics. Being familiar with the spectrum that encompasses normal development is Chapter 2 Normal Pubertal Physiology in Females",
"title": ""
},
{
"docid": "neg:1840552_4",
"text": "Adult patients seeking orthodontic treatment are increasingly motivated by esthetic considerations. The majority of these patients reject wearing labial fixed appliances and are looking instead to more esthetic treatment options, including lingual orthodontics and Invisalign appliances. Since Align Technology introduced the Invisalign appliance in 1999 in an extensive public campaign, the appliance has gained tremendous attention from adult patients and dental professionals. The transparency of the Invisalign appliance enhances its esthetic appeal for those adult patients who are averse to wearing conventional labial fixed orthodontic appliances. Although guidelines about the types of malocclusions that this technique can treat exist, few clinical studies have assessed the effectiveness of the appliance. A few recent studies have outlined some of the limitations associated with this technique that clinicians should recognize early before choosing treatment options.",
"title": ""
},
{
"docid": "neg:1840552_5",
"text": "Hypertension — the chronic elevation of blood pressure — is a major human health problem. In most cases, the root cause of the disease remains unknown, but there is mounting evidence that many forms of hypertension are initiated and maintained by an elevated sympathetic tone. This review examines how the sympathetic tone to cardiovascular organs is generated, and discusses how elevated sympathetic tone can contribute to hypertension.",
"title": ""
},
{
"docid": "neg:1840552_6",
"text": "Restricted Boltzmann machines (RBMs) are powerful machine learning models, but learning and some kinds of inference in the model require sampling-based approximations, which, in classical digital computers, are implemented using expensive MCMC. Physical computation offers the opportunity to reduce the cost of sampling by building physical systems whose natural dynamics correspond to drawing samples from the desired RBM distribution. Such a system avoids the burn-in and mixing cost of a Markov chain. However, hardware implementations of this variety usually entail limitations such as low-precision and limited range of the parameters and restrictions on the size and topology of the RBM. We conduct software simulations to determine how harmful each of these restrictions is. Our simulations are based on the D-Wave Two computer, but the issues we investigate arise in most forms of physical computation. Our findings suggest that designers of new physical computing hardware and algorithms for physical computers should focus their efforts on overcoming the limitations imposed by the topology restrictions of currently existing physical computers.",
"title": ""
},
{
"docid": "neg:1840552_7",
"text": "Risks have a significant impact on a construction project’s performance in terms of cost, time and quality. As the size and complexity of the projects have increased, an ability to manage risks throughout the construction process has become a central element preventing unwanted consequences. How risks are shared between the project actors is to a large extent governed by the procurement option and the content of the related contract documents. Therefore, selecting an appropriate project procurement option is a key issue for project actors. The overall aim of this research is to increase the understanding of risk management in the different procurement options: design-bid-build contracts, designbuild contracts and collaborative form of partnering. Deeper understanding is expected to contribute to a more effective risk management and, therefore, a better project output and better value for both clients and contractors. The study involves nine construction projects recently performed in Sweden and comprises a questionnaire survey and a series of interviews with clients, contractors and consultants involved in these construction projects. The findings of this work show a lack of an iterative approach to risk management, which is a weakness in current procurement practices. This aspect must be addressed if the risk management process is to serve projects and, thus, their clients. The absence of systematic risk management is especially noted in the programme phase, where it arguably has the greatest potential impact. The production phase is where most interest and activity are to be found. As a matter of practice, the communication of risks between the actors simply does not work to the extent that it must if projects are to be delivered with certainty, irrespective of the form of procurement. A clear connection between the procurement option and risk management in construction projects has been found. Traditional design-bid-build contracts do not create opportunities for open discussion of project risks and joint risk management. A number of drivers of and obstacles to effective risk management have been explored in the study. Every actor’s involvement in dialogue, effective communication and information exchange, open attitudes and trustful relationship are the factors that support open discussion of project risks and, therefore, contribute to successful risk management. Based on the findings, a number of recommendations facilitating more effective risk management have been developed for the industry practitioners. Keywords--Risk Management, Risk Allocation, Construction Project, Construction Contract, Design-BidBuild, Design-Build, Partnering",
"title": ""
},
{
"docid": "neg:1840552_8",
"text": "Being grateful has been associated with many positive outcomes, including greater happiness, positive affect, optimism, and self-esteem. There is limited research, however, on the associations between gratitude and different domains of life satisfaction across cultures. The current study examined the associations between gratitude and three domains of life satisfaction, including satisfaction in relationships, work, and health, and overall life satisfaction, in the United States and Japan. A total of 945 participants were drawn from two samples of middle aged and older adults, the Midlife Development in the United States and the Midlife Development in Japan. There were significant positive bivariate associations between gratitude and all four measures of life satisfaction. In addition, after adjusting for demographics, neuroticism, extraversion, and the other measures of satisfaction, gratitude was uniquely and positively associated with satisfaction with relationships and life overall but not with satisfaction with work or health. Furthermore, results indicated that women and individuals who were more extraverted and lived in the United States were more grateful and individuals with less than a high school degree were less grateful. The findings from this study suggest that gratitude is uniquely associated with specific domains of life satisfaction. Results are discussed with respect to future research and the design and implementation of gratitude interventions, particularly when including individuals from different cultures.",
"title": ""
},
{
"docid": "neg:1840552_9",
"text": "In this article, we present an image-based modeling and rendering system, which we call pop-up light field, that models a sparse light field using a set of coherent layers. In our system, the user specifies how many coherent layers should be modeled or popped up according to the scene complexity. A coherent layer is defined as a collection of corresponding planar regions in the light field images. A coherent layer can be rendered free of aliasing all by itself, or against other background layers. To construct coherent layers, we introduce a Bayesian approach, coherence matting, to estimate alpha matting around segmented layer boundaries by incorporating a coherence prior in order to maintain coherence across images.We have developed an intuitive and easy-to-use user interface (UI) to facilitate pop-up light field construction. The key to our UI is the concept of human-in-the-loop where the user specifies where aliasing occurs in the rendered image. The user input is reflected in the input light field images where pop-up layers can be modified. The user feedback is instant through a hardware-accelerated real-time pop-up light field renderer. Experimental results demonstrate that our system is capable of rendering anti-aliased novel views from a sparse light field.",
"title": ""
},
{
"docid": "neg:1840552_10",
"text": "In this paper, a new design of mm-Wave phased array 5G antenna for multiple-input multiple-output (MIMO) applications has been introduced. Two identical linear phased arrays with eight leaf-shaped bow-tie antenna elements have been used at different sides of the mobile-phone PCB. An Arlon AR 350 dielectric with properties of h=0.5 mm, ε=3.5, and δ=0.0026 has been used as a substrate of the proposed design. The antenna is working in the frequency range of 25 to 40 GHz (more than 45% FBW) and can be easily fit into current handheld devices. The proposed MIMO antenna has good radiation performances at 28 and 38 GHz which both are powerful candidates to be the carrier frequency of the future 5G cellular networks.",
"title": ""
},
{
"docid": "neg:1840552_11",
"text": "Wall‐climbing welding robots (WCWRs) can replace workers in manufacturing and maintaining large unstructured equipment, such as ships. The adhesion mechanism is the key component of WCWRs. As it is directly related to the robot’s ability in relation to adsorbing, moving flexibly and obstacle‐passing. In this paper, a novel non‐contact adjustably magnetic adhesion mechanism is proposed. The magnet suckers are mounted under the robot’s axils and the sucker and wall are in non‐contact. In order to pass obstacles, the sucker and the wheel unit can be pulled up and pushed down by a lifting mechanism. The magnetic adhesion force can be adjusted by changing the height of the gap between the sucker and the wall by the lifting mechanism. In order to increase the adhesion force, the value of the sucker’s magnetic energy density (MED) is maximized by optimizing the magnet sucker’s structure parameters with a finite element method. Experiments prove that the magnetic adhesion mechanism has enough adhesion force and that the WCWR can complete wall‐climbing work within a large unstructured environment.",
"title": ""
},
{
"docid": "neg:1840552_12",
"text": "The trend of bring your own device (BYOD) has been rapidly adopted by organizations. Despite the pros and cons of BYOD adoption, this trend is expected to inevitably keep increasing. Yet, BYOD has raised significant concerns about information system security as employees use their personal devices to access organizational resources. This study aims to examine employees' intention to comply with an organization’s IS security policy in the context of BYOD. We derived our research model from reactance, protection motivation and organizational justice theories. The results of this study demonstrate that an employee’s perceived response efficacy and perceived justice positively affect an employee’s intention to comply with BYOD security policy. Perceived security threat appraisal was found to marginally promote the intention to comply. Conversely, perceived freedom threat due to imposed security policy negatively affects an employee’s intention to comply with the security policy. We also found that an employee’s perceived cost associated with compliance behavior positively affects an employee’s perceptions of threat to an individual freedom. An interesting double-edged sword effect of a security awareness program was confirmed by the results. BYOD security awareness program increases an employee’s response efficacy (a positive effect) and response cost (a negative effect). The study also demonstrates the importance of having an IT support team for BYOD, as it increases an employee’s response-efficacy and perceived justice.",
"title": ""
},
{
"docid": "neg:1840552_13",
"text": "Deep learning models have lately shown great performance in various fields such as computer vision, speech recognition, speech translation, and natural language processing. However, alongside their state-of-the-art performance, it is still generally unclear what is the source of their generalization ability. Thus, an important question is what makes deep neural networks able to generalize well from the training set to new data. In this article, we provide an overview of the existing theory and bounds for the characterization of the generalization error of deep neural networks, combining both classical and more recent theoretical and empirical results.",
"title": ""
},
{
"docid": "neg:1840552_14",
"text": "Two images of a single scene/object are related by the epipolar geometry, which can be described by a 3×3 singular matrix called the essential matrix if images' internal parameters are known, or the fundamental matrix otherwise. It captures all geometric information contained in two images, and its determination is very important in many applications such as scene modeling and vehicle navigation. This paper gives an introduction to the epipolar geometry, and provides a complete review of the current techniques for estimating the fundamental matrix and its uncertainty. A well-founded measure is proposed to compare these techniques. Projective reconstruction is also reviewed. The software which we have developed for this review is available on the Internet.",
"title": ""
},
{
"docid": "neg:1840552_15",
"text": "We present simulations and demonstrate experimentally a new concept in winding a planar induction heater. The winding results in minimal ac magnetic field below the plane of the heater, while concentrating the flux above. Ferrites and other types of magnetic shielding are typically not required. The concept of a one-sided ac field can generalized to other geometries as well.",
"title": ""
},
{
"docid": "neg:1840552_16",
"text": "Hope is the sum of goal thoughts as tapped by pathways and agency. Pathways reflect the perceived capability to produce goal routes; agency reflects the perception that one can initiate action along these pathways. Using trait and state hope scales, studies explored hope in college student athletes. In Study 1, male and female athletes were higher in trait hope than nonathletes; moreover, hope significantly predicted semester grade averages beyond cumulative grade point average and overall self-worth. In Study 2, with female cross-country athletes, trait hope predicted athletic outcomes; further, weekly state hope tended to predict athletic outcomes beyond dispositional hope, training, and self-esteem, confidence, and mood. In Study 3, with female track athletes, dispositional hope significantly predicted athletic outcomes beyond variance related to athletic abilities and affectivity; moreover, athletes had higher hope than nonathletes.",
"title": ""
},
{
"docid": "neg:1840552_17",
"text": "Published scientific articles are linked together into a graph, the citation graph, through their citations. This paper explores the notion of similarity based on connectivity alone, and proposes several algorithms to quantify it. Our metrics take advantage of the local neighborhoods of the nodes in the citation graph. Two variants of link-based similarity estimation between two nodes are described, one based on the separate local neighborhoods of the nodes, and another based on the joint local neighborhood expanded from both nodes at the same time. The algorithms are implemented and evaluated on a subgraph of the citation graph of computer science in a retrieval context. The results are compared with text-based similarity, and demonstrate the complementarity of link-based and text-based retrieval.",
"title": ""
},
{
"docid": "neg:1840552_18",
"text": "Employee turnover has been identified as a key issue for organizations because of its adverse impact on work place productivity and long term growth strategies. To solve this problem, organizations use machine learning techniques to predict employee turnover. Accurate predictions enable organizations to take action for retention or succession planning of employees. However, the data for this modeling problem comes from HR Information Systems (HRIS); these are typically under-funded compared to the Information Systems of other domains in the organization which are directly related to its priorities. This leads to the prevalence of noise in the data that renders predictive models prone to over-fitting and hence inaccurate. This is the key challenge that is the focus of this paper, and one that has not been addressed historically. The novel contribution of this paper is to explore the application of Extreme Gradient Boosting (XGBoost) technique which is more robust because of its regularization formulation. Data from the HRIS of a global retailer is used to compare XGBoost against six historically used supervised classifiers and demonstrate its significantly higher accuracy for predicting employee turnover. Keywords—turnover prediction; machine learning; extreme gradient boosting; supervised classification; regularization",
"title": ""
},
{
"docid": "neg:1840552_19",
"text": "Object viewpoint classification aims at predicting an approximate 3D pose of objects in a scene and is receiving increasing attention. State-of-the-art approaches to viewpoint classification use generative models to capture relations between object parts. In this work we propose to use a mixture of holistic templates (e.g. HOG) and discriminative learning for joint viewpoint classification and category detection. Inspired by the work of Felzenszwalb et al 2009, we discriminatively train multiple components simultaneously for each object category. A large number of components are learned in the mixture and they are associated with canonical viewpoints of the object through different levels of supervision, being fully supervised, semi-supervised, or unsupervised. We show that discriminative learning is capable of producing mixture components that directly provide robust viewpoint classification, significantly outperforming the state of the art: we improve the viewpoint accuracy on the Savarese et al 3D Object database from 57% to 74%, and that on the VOC 2006 car database from 73% to 86%. In addition, the mixture-of-templates approach to object viewpoint/pose has a natural extension to the continuous case by discriminatively learning a linear appearance model locally at each discrete view. We evaluate continuous viewpoint estimation on a dataset of everyday objects collected using IMUs for groundtruth annotation: our mixture model shows great promise comparing to a number of baselines including discrete nearest neighbor and linear regression.",
"title": ""
}
] |
1840553 | Obstacle detection with ultrasonic sensors and signal analysis metrics | [
{
"docid": "pos:1840553_0",
"text": "This paper demonstrates an innovative and simple solution for obstacle detection and collision avoidance of unmanned aerial vehicles (UAVs) optimized for and evaluated with quadrotors. The sensors exploited in this paper are low-cost ultrasonic and infrared range finders, which are much cheaper though noisier than more expensive sensors such as laser scanners. This needs to be taken into consideration for the design, implementation, and parametrization of the signal processing and control algorithm for such a system, which is the topic of this paper. For improved data fusion, inertial and optical flow sensors are used as a distance derivative for reference. As a result, a UAV is capable of distance controlled collision avoidance, which is more complex and powerful than comparable simple solutions. At the same time, the solution remains simple with a low computational burden. Thus, memory and time-consuming simultaneous localization and mapping is not required for collision avoidance.",
"title": ""
}
] | [
{
"docid": "neg:1840553_0",
"text": "Scheduling in the context of parallel systems is often thought of in terms of assigning tasks in a program to processors, so as to minimize the makespan. This formulation assumes that the processors are dedicated to the program in question. But when the parallel system is shared by a number of users, this is not necessarily the case. In the context of multiprogrammed parallel machines, scheduling refers to the execution of threads from competing programs. This is an operating system issue, involved with resource allocation, not a program development issue. Scheduling schemes for multiprogrammed parallel systems can be classi ed as one or two leveled. Single-level scheduling combines the allocation of processing power with the decision of which thread will use it. Two level scheduling decouples the two issues: rst, processors are allocated to the job, and then the job's threads are scheduled using this pool of processors. The processors of a parallel system can be shared in two basic ways, which are relevant for both one-level and two-level scheduling. One approach is to use time slicing, e.g. when all the processors in the system (or all the processors in the pool) service a global queue of ready threads. The other approach is to use space slicing, and partition the processors statically or dynamically among the di erent jobs. As these approaches are orthogonal to each other, it is also possible to combine them in various ways; for example, this is often done in gang scheduling. Systems using the various approaches are described, and the implications of the di erent mechanisms are discussed. The goals of this survey are to describe the many di erent approaches within a uni ed framework based on the mechanisms used to achieve multiprogramming, and at the same time document commercial systems that have not been described in the open literature.",
"title": ""
},
{
"docid": "neg:1840553_1",
"text": "Deep Learning has emerged as a new area in machine learning and is applied to a number of signal and image applications.The main purpose of the work presented in this paper, is to apply the concept of a Deep Learning algorithm namely, Convolutional neural networks (CNN) in image classification. The algorithm is tested on various standard datasets, like remote sensing data of aerial images (UC Merced Land Use Dataset) and scene images from SUN database. The performance of the algorithm is evaluated based on the quality metric known as Mean Squared Error (MSE) and classification accuracy. The graphical representation of the experimental results is given on the basis of MSE against the number of training epochs. The experimental result analysis based on the quality metrics and the graphical representation proves that the algorithm (CNN) gives fairly good classification accuracy for all the tested datasets.",
"title": ""
},
{
"docid": "neg:1840553_2",
"text": "Collecting quality data from software projects can be time-consuming and expensive. Hence, some researchers explore âunsupervisedâ approaches to quality prediction that does not require labelled data. An alternate technique is to use âsupervisedâ approaches that learn models from project data labelled with, say, âdefectiveâ or ânot-defectiveâ. Most researchers use these supervised models since, it is argued, they can exploit more knowledge of the projects. \nAt FSEâ16, Yang et al. reported startling results where unsupervised defect predictors outperformed supervised predictors for effort-aware just-in-time defect prediction. If confirmed, these results would lead to a dramatic simplification of a seemingly complex task (data mining) that is widely explored in the software engineering literature. \nThis paper repeats and refutes those results as follows. (1) There is much variability in the efficacy of the Yang et al. predictors so even with their approach, some supervised data is required to prune weaker predictors away. (2) Their findings were grouped across N projects. When we repeat their analysis on a project-by-project basis, supervised predictors are seen to work better. \nEven though this paper rejects the specific conclusions of Yang et al., we still endorse their general goal. In our our experiments, supervised predictors did not perform outstandingly better than unsupervised ones for effort-aware just-in-time defect prediction. Hence, they may indeed be some combination of unsupervised learners to achieve comparable performance to supervised ones. We therefore encourage others to work in this promising area.",
"title": ""
},
{
"docid": "neg:1840553_3",
"text": "In this paper we present Sentimentor, a tool for sentiment analysis of Twitter data. Sentimentor utilises the naive Bayes Classifier to classify Tweets into positive, negative or objective sets. We present experimental evaluation of our dataset and classification results, our findings are not contridictory with existing work.",
"title": ""
},
{
"docid": "neg:1840553_4",
"text": "Detecting frauds in credit card transactions is perhaps one of the best testbeds for computational intelligence algorithms. In fact, this problem involves a number of relevant challenges, namely: concept drift (customers’ habits evolve and fraudsters change their strategies over time), class imbalance (genuine transactions far outnumber frauds), and verification latency (only a small set of transactions are timely checked by investigators). However, the vast majority of learning algorithms that have been proposed for fraud detection rely on assumptions that hardly hold in a real-world fraud-detection system (FDS). This lack of realism concerns two main aspects: 1) the way and timing with which supervised information is provided and 2) the measures used to assess fraud-detection performance. This paper has three major contributions. First, we propose, with the help of our industrial partner, a formalization of the fraud-detection problem that realistically describes the operating conditions of FDSs that everyday analyze massive streams of credit card transactions. We also illustrate the most appropriate performance measures to be used for fraud-detection purposes. Second, we design and assess a novel learning strategy that effectively addresses class imbalance, concept drift, and verification latency. Third, in our experiments, we demonstrate the impact of class unbalance and concept drift in a real-world data stream containing more than 75 million transactions, authorized over a time window of three years.",
"title": ""
},
{
"docid": "neg:1840553_5",
"text": "Xiaoming Zhai is a doctoral student in the Department of Physics, Beijing Normal University, and is a visiting scholar in the College of Education, University of Washington. His research interests include physics assessment and evaluation, as well as technology-supported physics instruction. He has been a distinguished high school physics teacher who won numerous nationwide instructional awards. Meilan Zhang is an instructor in the Department of Teacher Education at University of Texas at El Paso. Her research focuses on improving student learning using mobile technology, understanding Internet use and the digital divide using big data from Internet search trends and Web analytics. Min Li is an Associate Professor in the College of Education, University of Washington. Her expertise is science assessment and evaluation, and quantitative methods. Address for correspondence: Xiaoming Zhai, Department of Physics, Beijing Normal University, Room A321, No. 19 Xinjiekouwai Street, Haidian District, Beijing 100875, China. Email: [email protected]",
"title": ""
},
{
"docid": "neg:1840553_6",
"text": "As the urban population is increasing, more and more cars are circulating in the city to search for parking spaces which contributes to the global problem of traffic congestion. To alleviate the parking problems, smart parking systems must be implemented. In this paper, the background on parking problems is introduced and relevant algorithms, systems, and techniques behind the smart parking are reviewed and discussed. This paper provides a good insight into the guidance, monitoring and reservations components of the smart car parking and directions to the future development.",
"title": ""
},
{
"docid": "neg:1840553_7",
"text": "Face anti-spoofing (a.k.a. presentation attack detection) has recently emerged as an active topic with great significance for both academia and industry due to the rapidly increasing demand in user authentication on mobile phones, PCs, tablets, and so on. Recently, numerous face spoofing detection schemes have been proposed based on the assumption that training and testing samples are in the same domain in terms of the feature space and marginal probability distribution. However, due to unlimited variations of the dominant conditions (illumination, facial appearance, camera quality, and so on) in face acquisition, such single domain methods lack generalization capability, which further prevents them from being applied in practical applications. In light of this, we introduce an unsupervised domain adaptation face anti-spoofing scheme to address the real-world scenario that learns the classifier for the target domain based on training samples in a different source domain. In particular, an embedding function is first imposed based on source and target domain data, which maps the data to a new space where the distribution similarity can be measured. Subsequently, the Maximum Mean Discrepancy between the latent features in source and target domains is minimized such that a more generalized classifier can be learned. State-of-the-art representations including both hand-crafted and deep neural network learned features are further adopted into the framework to quest the capability of them in domain adaptation. Moreover, we introduce a new database for face spoofing detection, which contains more than 4000 face samples with a large variety of spoofing types, capture devices, illuminations, and so on. Extensive experiments on existing benchmark databases and the new database verify that the proposed approach can gain significantly better generalization capability in cross-domain scenarios by providing consistently better anti-spoofing performance.",
"title": ""
},
{
"docid": "neg:1840553_8",
"text": "Code-switching refers to the phenomena of mixing of words or phrases from foreign languages while communicating in a native language by the multilingual speakers. Codeswitching is a global phenomenon and is widely accepted in multilingual communities. However, for training the language model (LM) for such tasks, a very limited code-switched textual resources are available as yet. In this work, we present an approach to reduce the perplexity (PPL) of Hindi-English code-switched data when tested over the LM trained on purely native Hindi data. For this purpose, we propose a novel textual feature which allows the LM to predict the code-switching instances. The proposed feature is referred to as code-switching factor (CS-factor). Also, we developed a tagger that facilitates the automatic tagging of the code-switching instances. This tagger is trained on a development data and assigns an equivalent class of foreign (English) words to each of the potential native (Hindi) words. For this study, the textual resource has been created by crawling the blogs from a couple of websites educating about the usage of the Internet. In the context of recognition of the code-switching data, the proposed technique is found to yield a substantial improvement in terms of PPL.",
"title": ""
},
{
"docid": "neg:1840553_9",
"text": "This paper presents a review of thermal energy storage system design methodologies and the factors to be considered at different hierarchical levels for concentrating solar power (CSP) plants. Thermal energy storage forms a key component of a power plant for improvement of its dispatchability. Though there have been many reviews of storage media, there are not many that focus on storage system design along with its integration into the power plant. This paper discusses the thermal energy storage system designs presented in the literature along with thermal and exergy efficiency analyses of various thermal energy storage systems integrated into the power plant. Economic aspects of these systems and the relevant publications in literature are also summarized in this effort. 2013 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840553_10",
"text": "This paper concentrated on a new application of Deep Neural Network (DNN) approach. The DNN, also widely known as Deep Learning(DL), has been the most popular topic in research community recently. Through the DNN, the original data set can be represented in a new feature space with machine learning algorithms, and intelligence models may have the chance to obtain a better performance in the “learned” feature space. Scientists have achieved encouraging results by employing DNN in some research fields, including Computer Vision, Speech Recognition, Natural Linguistic Programming and Bioinformation Processing. However, as an approach mainly functioned for learning features, DNN is reasonably believed to be a more universal approach: it may have the potential in other data domains and provide better feature spaces for other type of problems. In this paper, we present some initial investigations on applying DNN to deal with the time series problem in meteorology field. In our research, we apply DNN to process the massive weather data involving millions of atmosphere records provided by The Hong Kong Observatory (HKO)1. The obtained features are employed to predict the weather change in the next 24 hours. The results show that the DNN is able to provide a better feature space for weather data sets, and DNN is also a potential tool for the feature fusion of time series problems.",
"title": ""
},
{
"docid": "neg:1840553_11",
"text": "Nonnegative matrix factorization (NMF) is a popular method for multivariate analysis of nonnegative data, the goal of which is to decompose a data matrix into a product of two factor matrices with all entries in factor matrices restricted to be nonnegative. NMF was shown to be useful in a task of clustering (especially document clustering), but in some cases NMF produces the results inappropriate to the clustering problems. In this paper, we present an algorithm for orthogonal nonnegative matrix factorization, where an orthogonality constraint is imposed on the nonnegative decomposition of a term-document matrix. The result of orthogonal NMF can be clearly interpreted for the clustering problems, and also the performance of clustering is usually better than that of the NMF. We develop multiplicative updates directly from true gradient on Stiefel manifold, whereas existing algorithms consider additive orthogonality constraints. Experiments on several different document data sets show our orthogonal NMF algorithms perform better in a task of clustering, compared to the standard NMF and an existing orthogonal NMF.",
"title": ""
},
{
"docid": "neg:1840553_12",
"text": "Creating graphic designs can be challenging for novice users. This paper presents DesignScape, a system which aids the design process by making interactive layout suggestions, i.e., changes in the position, scale, and alignment of elements. The system uses two distinct but complementary types of suggestions: refinement suggestions, which improve the current layout, and brainstorming suggestions, which change the style. We investigate two interfaces for interacting with suggestions. First, we develop a suggestive interface, where suggestions are previewed and can be accepted. Second, we develop an adaptive interface where elements move automatically to improve the layout. We compare both interfaces with a baseline without suggestions, and show that for novice designers, both interfaces produce significantly better layouts, as evaluated by other novices.",
"title": ""
},
{
"docid": "neg:1840553_13",
"text": "This paper is intended to investigate the copper-graphene surface plasmon resonance (SPR)-based biosensor by considering the high adsorption efficiency of graphene. Copper (Cu) is used as a plasmonic material whereas graphene is used to prevent Cu from oxidation and enhance the reflectance intensity. Numerical investigation is performed using finite-difference-time-domain (FDTD) method by comparing the sensing performance such as reflectance intensity that explains the sensor sensitivity and the full-width-at-half-maximum (FWHM) of the spectrum for detection accuracy. The measurements were observed with various Cu thin film thicknesses ranging from 20nm to 80nm with 785nm operating wavelength. The proposed sensor shows that the 40nm-thick Cu-graphene (1 layer) SPR-based sensor gave better performance with narrower plasmonic spectrum line width (reflectance intensity of 91.2%) and better FWHM of 3.08°. The measured results also indicate that the Cu-graphene SPR-based sensor is suitable for detecting urea with refractive index of 1.49 in dielectric medium.",
"title": ""
},
{
"docid": "neg:1840553_14",
"text": "Memory encoding occurs rapidly, but the consolidation of memory in the neocortex has long been held to be a more gradual process. We now report, however, that systems consolidation can occur extremely quickly if an associative \"schema\" into which new information is incorporated has previously been created. In experiments using a hippocampal-dependent paired-associate task for rats, the memory of flavor-place associations became persistent over time as a putative neocortical schema gradually developed. New traces, trained for only one trial, then became assimilated and rapidly hippocampal-independent. Schemas also played a causal role in the creation of lasting associative memory representations during one-trial learning. The concept of neocortical schemas may unite psychological accounts of knowledge structures with neurobiological theories of systems memory consolidation.",
"title": ""
},
{
"docid": "neg:1840553_15",
"text": "Simplification of IT services is an imperative of the times we are in. Large legacy behemoths that exist at financial institutions are a result of years of patch work development on legacy landscapes that have developed in silos at various lines of businesses (LOBs). This increases costs -- for running financial services, changing the services as well as providing services to customers. We present here a basic guide to what constitutes complexity of IT landscape at financial institutions, what simplification means, and opportunities for simplification and how it can be carried out. We also explain a 4-phase approach to planning and executing Simplification of IT services at financial institutions.",
"title": ""
},
{
"docid": "neg:1840553_16",
"text": "A query over RDF data is usually expressed in terms of matching between a graph representing the target and a huge graph representing the source. Unfortunately, graph matching is typically performed in terms of subgraph isomorphism, which makes semantic data querying a hard problem. In this paper we illustrate a novel technique for querying RDF data in which the answers are built by combining paths of the underlying data graph that align with paths specified by the query. The approach is approximate and generates the combinations of the paths that best align with the query. We show that, in this way, the complexity of the overall process is significantly reduced and verify experimentally that our framework exhibits an excellent behavior with respect to other approaches in terms of both efficiency and effectiveness.",
"title": ""
},
{
"docid": "neg:1840553_17",
"text": "Given a task of predicting Y from X , a loss function L, and a set of probability distributions Γ on (X,Y ), what is the optimal decision rule minimizing the worstcase expected loss over Γ? In this paper, we address this question by introducing a generalization of the maximum entropy principle. Applying this principle to sets of distributions with marginal on X constrained to be the empirical marginal, we provide a minimax interpretation of the maximum likelihood problem over generalized linear models, which connects the minimax problem for each loss function to a generalized linear model. While in some cases such as quadratic and logarithmic loss functions we revisit well-known linear and logistic regression models, our approach reveals novel models for other loss functions. In particular, for the 0-1 loss we derive a classification approach which we call the minimax SVM. The minimax SVM minimizes the worst-case expected 0-1 loss over the proposed Γ by solving a tractable optimization problem. Moreover, applying the minimax approach to Brier loss function we derive a new classification model called the minimax Brier. The maximum likelihood problem for this model uses the Huber penalty function. We perform several numerical experiments to show the power of the minimax SVM and the minimax Brier.",
"title": ""
},
{
"docid": "neg:1840553_18",
"text": "In recent years, researchers have proposed systems for running trusted code on an untrusted operating system. Protection mechanisms deployed by such systems keep a malicious kernel from directly manipulating a trusted application's state. Under such systems, the application and kernel are, conceptually, peers, and the system call API defines an RPC interface between them.\n We introduce Iago attacks, attacks that a malicious kernel can mount in this model. We show how a carefully chosen sequence of integer return values to Linux system calls can lead a supposedly protected process to act against its interests, and even to undertake arbitrary computation at the malicious kernel's behest.\n Iago attacks are evidence that protecting applications from malicious kernels is more difficult than previously realized.",
"title": ""
},
{
"docid": "neg:1840553_19",
"text": "AutoTutor is a learning environment that tutors students by holding a conversation in natural language. AutoTutor has been developed for Newtonian qualitative physics and computer literacy. Its design was inspired by explanation-based constructivist theories of learning, intelligent tutoring systems that adaptively respond to student knowledge, and empirical research on dialogue patterns in tutorial discourse. AutoTutor presents challenging problems (formulated as questions) from a curriculum script and then engages in mixed initiative dialogue that guides the student in building an answer. It provides the student with positive, neutral, or negative feedback on the student's typed responses, pumps the student for more information, prompts the student to fill in missing words, gives hints, fills in missing information with assertions, identifies and corrects erroneous ideas, answers the student's questions, and summarizes answers. AutoTutor has produced learning gains of approximately .70 sigma for deep levels of comprehension.",
"title": ""
}
] |
1840554 | Perceived , not actual , similarity predicts initial attraction in a live romantic context : Evidence from the speed-dating paradigm | [
{
"docid": "pos:1840554_0",
"text": "Little is known about whether personality characteristics influence initial attraction. Because adult attachment differences influence a broad range of relationship processes, the authors examined their role in 3 experimental attraction studies. The authors tested four major attraction hypotheses--self similarity, ideal-self similarity, complementarity, and attachment security--and examined both actual and perceptual factors. Replicated analyses across samples, designs, and manipulations showed that actual security and self similarity predicted attraction. With regard to perceptual factors, ideal similarity, self similarity, and security all were significant predictors. Whereas perceptual ideal and self similarity had incremental predictive power, perceptual security's effects were subsumed by perceptual ideal similarity. Perceptual self similarity fully mediated actual attachment similarity effects, whereas ideal similarity was only a partial mediator.",
"title": ""
}
] | [
{
"docid": "neg:1840554_0",
"text": "DBMSs have long suffered from SQL’s lack of power and extensibility. We have implemented ATLaS [1], a powerful database language and system that enables users to develop complete data-intensive applications in SQL—by writing new aggregates and table functions in SQL, rather than in procedural languages as in current Object-Relational systems. As a result, ATLaS’ SQL is Turing-complete [7], and is very suitable for advanced data-intensive applications, such as data mining and stream queries. The ATLaS system is now available for download along with a suite of applications [1] including various data mining functions, that have been coded in ATLaS’ SQL, and execute with a modest (20–40%) performance overhead with respect to the same applications written in C/C++. Our proposed demo will illustrate the key features and applications of ATLaS. In particular, we will demonstrate:",
"title": ""
},
{
"docid": "neg:1840554_1",
"text": "The purpose of this article is to review literature that is relevant to the social scientific study of ethics and leadership, as well as outline areas for future study. We first discuss ethical leadership and then draw from emerging research on \"dark side\" organizational behavior to widen the boundaries of the review to include ««ethical leadership. Next, three emerging trends within the organizational behavior literature are proposed for a leadership and ethics research agenda: 1 ) emotions, 2) fit/congruence, and 3) identity/ identification. We believe each shows promise in extending current thinking. The review closes with discussion of important issues that are relevant to the advancement of research on leadership and ethics. T IMPORTANCE OF LEADERSHIP in promoting ethical conduct in organizations has long been understood. Within a work environment, leaders set the tone for organizational goals and behavior. Indeed, leaders are often in a position to control many outcomes that affect employees (e.g., strategies, goal-setting, promotions, appraisals, resources). What leaders incentivize communicates what they value and motivates employees to act in ways to achieve such rewards. It is not surprising, then, that employees rely on their leaders for guidance when faced with ethical questions or problems (Treviño, 1986). Research supports this contention, and shows that employees conform to the ethical values of their leaders (Schminke, Wells, Peyrefitte, & Sabora, 2002). Furthermore, leaders who are perceived as ethically positive influence productive employee work behavior (Mayer, Kuenzi, Greenbaum, Bardes, & Salvador, 2009) and negatively influence counterproductive work behavior (Brown & Treviño, 2006b; Mayer et al., 2009). Recently, there has been a surge of empirical research seeking to understand the influence of leaders on building ethical work practices and employee behaviors (see Brown & Treviño, 2006a for a review). Initial theory and research (Bass & Steidlemeier, 1999; Brown, Treviño, & Harrison, 2005; Ciulla, 2004; Treviño, Brown, & Hartman, 2003; Treviño, Hartman, & Brown, 2000) sought to define ethical leadership from both normative and social scientific (descriptive) approaches to business ethics. The normative perspective is rooted in philosophy and is concerned with prescribing how individuals \"ought\" or \"should\" behave in the workplace. For example, normative scholarship on ethical leadership (Bass & Steidlemeier, 1999; Ciulla, 2004) examines ethical decision making from particular philosophical frameworks, evaluates the ethicality of particular leaders, and considers the degree to which certain styles of leadership or influence tactics are ethical. ©2010 Business Ethics Quarterly 20:4 (October 2010); ISSN 1052-150X pp. 583-616 584 BUSINESS ETHICS QUARTERLY In contrast, our article emphasizes a social scientific approach to ethical leadership (e.g.. Brown et al., 2005; Treviño et al., 2000; Treviño et al, 2003). This approach is rooted in disciplines such as psychology, sociology, and organization science, and it attempts to understand how people perceive ethical leadership and investigates the antecedents, outcomes, and potential boundary conditions of those perceptions. This research has focused on investigating research questions such as: What is ethical leadership (Brown et al., 2005; Treviño et al., 2003)? What traits are associated with perceived ethical leadership (Walumbwa & Schaubroeck, 2009)? 
How does ethical leadership flow through various levels of management within organizations (Mayer et al., 2009)? And, does ethical leadership help or hurt a leader's promotability within organizations (Rubin, Dierdorff, & Brown, 2010)? The purpose of our article is to review literature that is relevant to the descriptive study of ethics and leadership, as well as outline areas for future empirical study. We first discuss ethical leadership and then draw from emerging research on what often is called \"dark\" (destructive) organizational behavior, so as to widen the boundaries of our review to also include unethical leadership. Next, we discuss three emerging trends within the organizational behavior literature—1) emotions, 2) fit/congruence, and 3) identity/identification—that we believe show promise in extending current thinking on the influence of leadership (both positive and negative) on organizational ethics. We conclude with a discussion of important issues that are relevant to the advancement of research in this domain. A REVIEW OF SOCIAL SCIENTIFIC ETHICAL LEADERSHIP RESEARCH The Concept of Ethical Leadership Although the topic of ethical leadership has long been considered by scholars, descriptive research on ethical leadership is relatively new. Some of the first formal investigations focused on defining ethical leadership from a descriptive perspective and were conducted by Treviño and colleagues (Treviño et al., 2000, 2003). Their qualitative research revealed that ethical leaders were best described along two related dimensions: moral person and moral manager. The moral person dimension refers to the qualities of the ethical leader as a person. Strong moral persons are honest and trustworthy. They demonstrate a concern for other people and are also seen as approachable. Employees can come to these individuals with problems and concerns, knowing that they will be heard. Moral persons have a reputation for being fair and principled. Lastly, moral persons are seen as consistently moral in both their personal and professional lives. The moral manager dimension refers to how the leader uses the tools of the position of leadership to promote ethical conduct at work. Strong moral managers see themselves as role models in the workplace. They make ethics salient by modeling ethical conduct to their employees. Moral managers set and communicate ethical standards and use rewards and punishments to ensure those standards are followed. In sum, leaders who are moral managers \"walk the talk\" and \"talk the walk,\" patterning their behavior and organizational processes to meet moral standards. Treviño and colleagues (Treviño et al., 2000, 2003) argued that individuals in power must be both strong moral persons and moral managers in order to be seen as ethical leaders by those around them. Strong moral managers who are weak moral persons are likely to be seen as hypocrites, failing to practice what they preach. Hypocritical leaders talk about the importance of ethics, but their actions show them to be dishonest and unprincipled. Conversely, a strong moral person who is a weak moral manager runs the risk of being seen as an ethically \"neutral\" leader. That is, the leader is perceived as being silent on ethical issues, suggesting to employees that the leader does not really care about ethics. 
Subsequent research by Brown, Treviño, and Harrison (2005: 120) further clarified the construct and provided a formal definition of ethical leadership as \"the demonstration of normatively appropriate conduct through personal actions and interpersonal relationships, and the promotion of such conduct to followers through two-way communication, reinforcement, and decision-making.\" They noted that \"the term normatively appropriate is 'deliberately vague'\" (Brown et al., 2005: 120) because norms vary across organizations, industries, and cultures. Brown et al. (2005) ground their conceptualization of ethical leadership in social learning theory (Bandura, 1977, 1986). This theory suggests individuals can learn standards of appropriate behavior by observing how role models (like teachers, parents, and leaders) behave. Accordingly, ethical leaders \"teach\" ethical conduct to employees through their own behavior. Ethical leaders are relevant role models because they occupy powerful and visible positions in organizational hierarchies that allow them to capture their followers' attention. They communicate ethical expectations through formal processes (e.g., rewards, policies) and personal example (e.g., interpersonal treatment of others). Effective \"ethical\" modeling, however, requires more than power and visibility. For social learning of ethical behavior to take place, role models must be credible in terms of moral behavior. By treating others fairly, honestly, and considerately, leaders become worthy of emulation by others. Otherwise, followers might ignore a leader whose behavior is inconsistent with his/her ethical pronouncements or who fails to interact with followers in a caring, nurturing style (Yussen & Levy, 1975). Outcomes of Ethical Leadership Researchers have used both social learning theory (Bandura, 1977, 1986) and social exchange theory (Blau, 1964) to explain the effects of ethical leadership on important outcomes (Brown et al., 2005; Brown & Treviño, 2006b; Mayer et al., 2009; Walumbwa & Schaubroeck, 2009). According to principles of reciprocity in social exchange theory (Blau, 1964; Gouldner, 1960), individuals feel obligated to return beneficial behaviors when they believe another has been good and fair to them. In line with this reasoning, researchers argue and find that employees feel indebted to ethical leaders because of their trustworthy and fair nature; consequently, they reciprocate with beneficial work behavior (e.g., higher levels of ethical behavior and citizenship behaviors) and refrain from engaging in destructive behavior (e.g., lower levels of workplace deviance). Emerging research has found that ethical leadership is related to important follower outcomes, such as employees' job satisfaction, organizational commitment, willingness to report problems to supervisors, willingness to put in extra effort on the job, voice behavior (i.e., expression of constructive suggestions intended to improve standard procedures), and perceptions of organizational culture and ethical climate (Brown et al., 2005; Neubert, Carlson, Kacmar, Roberts,",
"title": ""
},
{
"docid": "neg:1840554_2",
"text": "We propose a deontological approach to machine ethics that avoids some weaknesses of an intuition-based system, such as that of Anderson and Anderson. In particular, it has no need to deal with conflicting intuitions, and it yields a more satisfactory account of when autonomy should be respected. We begin with a “dual standpoint” theory of action that regards actions as grounded in reasons and therefore as having a conditional form that is suited to machine instructions. We then derive ethical principles based on formal properties that the reasons must exhibit to be coherent, and formulate the principles using quantified modal logic. We conclude that deontology not only provides a more satisfactory basis for machine ethics but endows the machine with an ability to explain its actions, thus contributing to transparency in AI.",
"title": ""
},
{
"docid": "neg:1840554_3",
"text": "Automatic signature verification is a well-established and an active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted from box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to take account of possible variations due to handwriting styles and to reflect moods. The membership functions constitute weights in the TS model. The optimization of the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models by considering a rule for each input feature in the first formulation (Multiple rules) and by considering a single rule for all input features in the second formulation. In this work, we have found that TS model with multiple rules is better than TS model with single rule for detecting three types of forgeries; random, skilled and unskilled from a large database of sample signatures in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches using the TS model with multiple rules for improved performance.",
"title": ""
},
{
"docid": "neg:1840554_4",
"text": "The objective of the article is to highlight various roles of glutamic acid like endogenic anticancer agent, conjugates to anticancer agents, and derivatives of glutamic acid as possible anticancer agents. Besides these emphases are given especially for two endogenous derivatives of glutamic acid such as glutamine and glutamate. Glutamine is a derivative of glutamic acid and is formed in the body from glutamic acid and ammonia in an energy requiring reaction catalyzed by glutamine synthase. It also possesses anticancer activity. So the transportation and metabolism of glutamine are also discussed for better understanding the role of glutamic acid. Glutamates are the carboxylate anions and salts of glutamic acid. Here the roles of various enzymes required for the metabolism of glutamates are also discussed.",
"title": ""
},
{
"docid": "neg:1840554_5",
"text": "This paper presents the study on the semiconductor-based galvanic isolation. This solution delivers the differential-mode (DM) power via semiconductor power switches during their on states, while sustaining the common-mode (CM) voltage and blocking the CM leakage current with those switches during their off states. While it is impractical to implement this solution with Si devices, the latest SiC devices and the coming vertical GaN devices, however, provide unprecedented properties and thus can potentially enable the practical implementation. An isolated dc/dc converter based on the switched-capacitor circuit is studied as an example. The CM leakage current caused by the line input and the resulted touch current (TC) are quantified and compared to the limits in the safety standard IEC60950. To reduce the TC, low switch output capacitance and low converter switching frequency are needed. Then, discussions are presented on the TC reduction approaches and the design considerations to achieve high power density and high efficiency. A 400-V, 400-W prototype based on 1.7-kV SiC MOSFETs is built to demo the DM power delivery performance and showcase the CM leakage current problem. Further study on the CM leakage current elimination is needed to validate this solution.",
"title": ""
},
{
"docid": "neg:1840554_6",
"text": "This paper presents an S-Transform based probabilistic neural network (PNN) classifier for recognition of power quality (PQ) disturbances. The proposed method requires less number of features as compared to wavelet based approach for the identification of PQ events. The features extracted through the S-Transform are trained by a PNN for automatic classification of the PQ events. Since the proposed methodology can reduce the features of the disturbance signal to a great extent without losing its original property, less memory space and learning PNN time are required for classification. Eleven types of disturbances are considered for the classification problem. The simulation results reveal that the combination of S-Transform and PNN can effectively detect and classify different PQ events. The classification performance of PNN is compared with a feedforward multilayer (FFML) neural network (NN) and learning vector quantization (LVQ) NN. It is found that the classification performance of PNN is better than both FFML and LVQ.",
"title": ""
},
{
"docid": "neg:1840554_7",
"text": "With the advance of wireless communication systems and increasing importance of other wireless applications, wideband and low profile antennas are in great demand for both commercial and military applications. Multi-band and wideband antennas are desirable in personal communication systems, small satellite communication terminals, and other wireless applications. Wideband antennas also find applications in Unmanned Aerial Vehicles (UAVs), Counter Camouflage, Concealment and Deception (CC&D), Synthetic Aperture Radar (SAR), and Ground Moving Target Indicators (GMTI). Some of these applications also require that an antenna be embedded into the airframe structure Traditionally, a wideband antenna in the low frequency wireless bands can only be achieved with heavily loaded wire antennas, which usually means different antennas are needed for different frequency bands. Recent progress in the study of fractal antennas suggests some attractive solutions for using a single small antenna operating in several frequency bands. The purpose of this article is to introduce the concept of the fractal, review the progress in fractal antenna study and implementation, compare different types of fractal antenna elements and arrays and discuss the challenge and future of this new type of antenna.",
"title": ""
},
{
"docid": "neg:1840554_8",
"text": "The Programmer's Learning Machine (PLM) is an interactive exerciser for learning programming and algorithms. Using an integrated and graphical environment that provides a short feedback loop, it allows students to learn in a (semi)-autonomous way. This generic platform also enables teachers to create specific programming microworlds that match their teaching goals. This paper discusses our design goals and motivations, introduces the existing material and the proposed microworlds, and details the typical use cases from the student and teacher point of views.",
"title": ""
},
{
"docid": "neg:1840554_9",
"text": "Maintaining the quality of roadways is a major challenge for governments around the world. In particular, poor road surfaces pose a significant safety threat to motorists, especially when motorbikes make up a significant portion of roadway traffic. According to the statistics of the Ministry of Justice in Taiwan, there were 220 claims for state compensation caused by road quality problems between 2005 to 2007, and the government paid a total of 113 million NTD in compensation. This research explores utilizing a mobile phone with a tri-axial accelerometer to collect acceleration data while riding a motorcycle. The data is analyzed to detect road anomalies and to evaluate road quality. Motorcycle-based acceleration data is collected on twelve stretches of road, with a data log spanning approximately three hours, and a total road length of about 60 kilometers. Both supervised and unsupervised machine learning methods are used to recognize road conditions. SVM learning is used to detect road anomalies and to identify their corresponding positions from labeled acceleration data. This method of road anomaly detection achieves a precision of 78.5%. Furthermore, to construct a model of smooth roads, unsupervised learning is used to learn anomaly thresholds by clustering data collected from the accelerometer. The results are used to rank the quality of the road segments in the experiment. We compare the ranked list from the learned evaluator with the ranked list from human evaluators who rode along the same roadways during the test phase. Based on the Kendall tau rank correlation coefficient, the automatically ranked result exhibited excellent performance. Keywords-mobile device; machine learning; accelerometer; road surface anomaly; pothole;",
"title": ""
},
{
"docid": "neg:1840554_10",
"text": "Inverted pendulum system is a complicated, unstable and multivariable nonlinear system. In order to control the angle and displacement of inverted pendulum system effectively, a novel double-loop digital PID control strategy is presented in this paper. Based on impulse transfer function, the model of the single linear inverted pendulum system is divided into two parts according to the controlled parameters. The inner control loop that is formed by the digital PID feedback control can control the angle of the pendulum, while in order to control the cart displacement, the digital PID series control is adopted to form the outer control loop. The simulation results show the digital control strategy is very effective to single inverted pendulum and when the sampling period is selected as 50 ms, the performance of the digital control system is similar to that of the analog control system. Copyright © 2013 IFSA.",
"title": ""
},
{
"docid": "neg:1840554_11",
"text": "CUDA is a new general-purpose C language interface to GPU developed by NVIDIA. It makes full use of parallel of GPU and has been widely used now. 3D model reconstruction is a traditional and common technique which has been widely used in engineering experiments, CAD and computer graphics. In this paper, we present an algorithm of CUDA-based Poisson surface reconstruction. Our algorithm makes full use of parallel of GPU and runs entirely on GPU and is ten times faster than previous CPU algorithm.",
"title": ""
},
{
"docid": "neg:1840554_12",
"text": "Spin-transfer torque random access memory (STT-RAM) has emerged as an attractive candidate for future nonvolatile memories. It advantages the benefits of current state-of-the-art memories including high-speed read operation (of static RAM), high density (of dynamic RAM), and nonvolatility (of flash memories). However, the write operation in the 1T-1MTJ STT-RAM bitcell is asymmetric and stochastic, which leads to high energy consumption and long latency. In this paper, a new write assist technique is proposed to terminate the write operation immediately after switching takes place in the magnetic tunneling junction (MTJ). As a result, both the write time and write energy consumption of 1T-1MTJ bitcells improves. Moreover, the proposed write assist technique leads to an error-free write operation. The simulation results using a 65-nm CMOS access transistor and a 40-nm MTJ technology confirm that the proposed write assist technique results in three orders of magnitude improvement in bit error rate compared with the best existing techniques. Moreover, the proposed write assist technique leads to 81% energy saving compared with a cell without write assist and adds only 9.6% area overhead to a 16-kbit STT-RAM array.",
"title": ""
},
{
"docid": "neg:1840554_13",
"text": "Prospective memory (PM) research typically examines the ability to remember to execute delayed intentions but often ignores the ability to forget finished intentions. We had participants perform (or not perform; control group) a PM task and then instructed them that the PM task was finished. We later (re)presented the PM cue. Approximately 25% of participants made a commission error, the erroneous repetition of a PM response following intention completion. Comparisons between the PM groups and control group suggested that commission errors occurred in the absence of preparatory monitoring. Response time analyses additionally suggested that some participants experienced fatigue across the ongoing task block, and those who did were more susceptible to making a commission error. These results supported the hypothesis that commission errors can arise from the spontaneous retrieval of finished intentions and possibly the failure to exert executive control to oppose the PM response.",
"title": ""
},
{
"docid": "neg:1840554_14",
"text": "This Contrast enhancement is frequently referred to as one of the most important issues in image processing. Histogram equalization (HE) is one of the common methods used for improving contrast in digital images. Histogram equalization (HE) has proved to be a simple and effective image contrast enhancement technique. However, the conventional histogram equalization methods usually result in excessive contrast enhancement, which causes the unnatural look and visual artifacts of the processed image. This paper presents a review of new forms of histogram for image contrast enhancement. The major difference among the methods in this family is the criteria used to divide the input histogram. Brightness preserving BiHistogram Equalization (BBHE) and Quantized Bi-Histogram Equalization (QBHE) use the average intensity value as their separating point. Dual Sub-Image Histogram Equalization (DSIHE) uses the median intensity value as the separating point. Minimum Mean Brightness Error Bi-HE (MMBEBHE) uses the separating point that produces the smallest Absolute Mean Brightness Error (AMBE). Recursive Mean-Separate Histogram Equalization (RMSHE) is another improvement of BBHE. The Brightness preserving dynamic histogram equalization (BPDHE) method is actually an extension to both MPHEBP and DHE. Weighting mean-separated sub-histogram equalization (WMSHE) method is to perform the effective contrast enhancement of the digital image. Keywords-component image processing; contrast enhancement; histogram equalization; minimum mean brightness error; brightness preserving enhancement, histogram partition.",
"title": ""
},
{
"docid": "neg:1840554_15",
"text": "Cancer cells often have characteristic changes in metabolism. Cellular proliferation, a common feature of all cancers, requires fatty acids for synthesis of membranes and signaling molecules. Here, we provide a view of cancer cell metabolism from a lipid perspective, and we summarize evidence that limiting fatty acid availability can control cancer cell proliferation.",
"title": ""
},
{
"docid": "neg:1840554_16",
"text": "We present Synereo, a next-gen decentralized and distributed social network designed for an attention economy. Our presentation is given in two chapters. Chapter 1 presents our design philosophy. Our goal is to make our users more effective agents by presenting social content that is relevant and actionable based on the user’s own estimation of value. We discuss the relationship between attention, value, and social agency in order to motivate the central mechanisms for content flow on the network. Chapter 2 defines a network model showing the mechanics of the network interactions, as well as the compensation model enabling users to promote content on the network and receive compensation for attention given to the network. We discuss the high-level technical implementation of these concepts based on the π-calculus the most well known of a family of computational formalisms known as the mobile process calculi. 0.1 Prologue: This is not a manifesto The Internet is overflowing with social network manifestos. Ello has a manifesto. Tsu has a manifesto. SocialSwarm has a manifesto. Even Disaspora had a manifesto. Each one of them is written in earnest with clear intent (see figure 1). Figure 1: Ello manifesto The proliferation of these manifestos and the social networks they advertise represents an important market shift, one that needs to be understood in context. The shift from mainstream media to social media was all about “user generated content”. In other words, people took control of the content by making it for and distributing it to each other. In some real sense it was a remarkable expansion of the shift from glamrock to punk and DIY; and like that movement, it was the sense of people having a say in what impressions they received that has been the underpinning of the success of Facebook and Twitter and YouTube and the other social media giants. In the wake of that shift, though, we’ve seen that even when the people are producing the content, if the service is in somebody else’s hands then things still go wonky: the service providers run psychology experiments via the social feeds [1]; they sell people’s personally identifiable and other critical info [2]; and they give data to spooks [3]. Most importantly, they do this without any real consent of their users. With this new wave of services people are expressing a desire to take more control of the service, itself. When the service is distributed, as is the case with Splicious and Diaspora, it is truly cooperative. And, just as with the music industry, where the technology has reached the point that just about anybody can have a professional studio in their home, the same is true with media services. People are recognizing that we don’t need big data centers with massive environmental impact, we need engagement at the level of the service, itself. If this really is the underlying requirement the market is articulating, then there is something missing from a social network that primarily serves up a manifesto with their service. While each of the networks mentioned above constitutes an important step in the right direction, they lack any clear indication",
"title": ""
},
{
"docid": "neg:1840554_17",
"text": "OBJECTIVE\nThis study examined the effects of various backpack loads on elementary schoolchildren's posture and postural compensations as demonstrated by a change in forward head position.\n\n\nSUBJECTS\nA convenience sample of 11 schoolchildren, aged 8-11 years participated.\n\n\nMETHODS\nSagittal digital photographs were taken of each subject standing without a backpack, and then with the loaded backpack before and after walking 6 minutes (6MWT) at free walking speed. This was repeated over three consecutive weeks using backpacks containing randomly assigned weights of 10%, 15%, or 20% body weight of each respective subject. The craniovertebral angle (CVA) was measured using digitizing software, recorded and analyzed.\n\n\nRESULTS\nSubjects demonstrated immediate and statistically significant changes in CVA, indicating increased forward head positions upon donning the backpacks containing 15% and 20% body weight. Following the 6MWT, the CVA demonstrated further statistically significant changes for all backpack loads indicating increased forward head postures. For the 15 & 20%BW conditions, more than 50% of the subjects reported discomfort after walking, with the neck as the primary location of reported pain.\n\n\nCONCLUSIONS\nBackpack loads carried by schoolchildren should be limited to 10% body weight due to increased forward head positions and subjective complaints at 15% and 20% body weight loads.",
"title": ""
},
{
"docid": "neg:1840554_18",
"text": "The framework of dynamic movement primitives (DMPs) contains many favorable properties for the execution of robotic trajectories, such as indirect dependence on time, response to perturbations, and the ability to easily modulate the given trajectories, but the framework in its original form remains constrained to the kinematic aspect of the movement. In this paper, we bridge the gap to dynamic behavior by extending the framework with force/torque feedback. We propose and evaluate a modulation approach that allows interaction with objects and the environment. Through the proposed coupling of originally independent robotic trajectories, the approach also enables the execution of bimanual and tightly coupled cooperative tasks. We apply an iterative learning control algorithm to learn a coupling term, which is applied to the original trajectory in a feed-forward fashion and, thus, modifies the trajectory in accordance to the desired positions or external forces. A stability analysis and results of simulated and real-world experiments using two KUKA LWR arms for bimanual tasks and interaction with the environment are presented. By expanding on the framework of DMPs, we keep all the favorable properties, which is demonstrated with temporal modulation and in a two-agent obstacle avoidance task.",
"title": ""
},
{
"docid": "neg:1840554_19",
"text": "The deep convolutional neural network(CNN) has significantly raised the performance of image classification and face recognition. Softmax is usually used as supervision, but it only penalizes the classification loss. In this paper, we propose a novel auxiliary supervision signal called contrastive-center loss, which can further enhance the discriminative power of the features, for it learns a class center for each class. The proposed contrastive-center loss simultaneously considers intra-class compactness and inter-class separability, by penalizing the contrastive values between: (1)the distances of training samples to their corresponding class centers, and (2)the sum of the distances of training samples to their non-corresponding class centers. Experiments on different datasets demonstrate the effectiveness of contrastive-center loss.",
"title": ""
}
] |
1840555 | Tensor decomposition of EEG signals: A brief review | [
{
"docid": "pos:1840555_0",
"text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of lCA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Zusammenfassung Die Analyse unabhfingiger Komponenten (ICA) eines Vektors beruht auf der Suche nach einer linearen Transformation, die die statistische Abh~ingigkeit zwischen den Komponenten minimiert. Zur Definition geeigneter Such-Kriterien wird die Entwicklung gemeinsamer Information als Funktion von Kumulanten steigender Ordnung genutzt. Es wird ein effizienter Algorithmus vorgeschlagen, der die Berechnung der ICA ffir Datenmatrizen innerhalb einer polynomischen Zeit erlaubt. Das Konzept der ICA kann eigentlich als Erweiterung der 'Principal Component Analysis' (PCA) betrachtet werden, die nur die Unabh~ingigkeit bis zur zweiten Ordnung erzwingen kann und deshalb Richtungen definiert, die orthogonal sind. Potentielle Anwendungen der ICA beinhalten Daten-Analyse und Kompression, Bayes-Detektion, Quellenlokalisierung und blinde Identifikation und Entfaltung.",
"title": ""
}
] | [
{
"docid": "neg:1840555_0",
"text": "The design simulation, fabrication, and measurement of a 2.4-GHz horizontally polarized omnidirectional planar printed antenna for WLAN applications is presented. The antenna adopts the printed Alford-loop-type structure. The three-dimensional (3-D) EM simulator HFSS is used for design simulation. The designed antenna is fabricated on an FR-4 printed-circuit-board substrate. The measured input standing-wave-ratio (SWR) is less than three from 2.40 to 2.483 GHz. As desired, the horizontal-polarization H-plane pattern is quite omnidirectional and the E-plane pattern is also very close to that of an ideal dipole antenna. Also a comparison with the popular printed inverted-F antenna (PIFA) has been conducted, the measured H-plane pattern of the Alford-loop-structure antenna is better than that of the PIFA when the omnidirectional pattern is desired. Further more, the study of the antenna printed on a simulated PCMCIA card and that inserted inside a laptop PC are also conducted. The HFSS model of a laptop PC housing, consisting of the display, the screen, and the metallic box with the keyboard, is constructed. The effect of the laptop PC housing with different angle between the display and keyboard on the antenna is also investigated. It is found that there is about 15 dB attenuation of the gain pattern (horizontal-polarization field) in the opposite direction of the PCMCIA slot on the laptop PC. Hence, the effect of the large ground plane of the PCMCIA card and the attenuation effect of the laptop PC housing should be taken into consideration for the antenna design for WLAN applications. For the proposed antenna, in addition to be used alone for a horizontally polarized antenna, it can be also a part of a diversity antenna",
"title": ""
},
{
"docid": "neg:1840555_1",
"text": "To develop a knowledge-aware recommender system, a key data problem is how we can obtain rich and structured knowledge information for recommender system (RS) items. Existing datasets or methods either use side information from original recommender systems (containing very few kinds of useful information) or utilize private knowledge base (KB). In this paper, we present the first public linked KB dataset for recommender systems, named KB4Rec v1.0, which has linked three widely used RS datasets with the popular KB Freebase. Based on our linked dataset, we first preform some interesting qualitative analysis experiments, in which we discuss the effect of two important factors (i.e., popularity and recency) on whether a RS item can be linked to a KB entity. Finally, we present the comparison of several knowledge-aware recommendation algorithms on our linked dataset.",
"title": ""
},
{
"docid": "neg:1840555_2",
"text": "A particularly insidious type of concurrency bug is atomicity violations. While there has been substantial work on automatic detection of atomicity violations, each existing technique has focused on a certain type of atomic region. To address this limitation, this paper presents Atom Tracker, a comprehensive approach to atomic region inference and violation detection. Atom Tracker is the first scheme to (1) automatically infer generic atomic regions (not limited by issues such as the number of variables accessed, the number of instructions included, or the type of code construct the region is embedded in) and (2) automatically detect violations of them at runtime with negligible execution overhead. Atom Tracker provides novel algorithms to infer generic atomic regions and to detect atomicity violations of them. Moreover, we present a hardware implementation of the violation detection algorithm that leverages cache coherence state transitions in a multiprocessor. In our evaluation, we take eight atomicity violation bugs from real-world codes like Apache, MySql, and Mozilla, and show that Atom Tracker detects them all. In addition, Atom Tracker automatically infers all of the atomic regions in a set of micro benchmarks accurately. Finally, we also show that the hardware implementation induces a negligible execution time overhead of 0.2–4.0% and, therefore, enables Atom Tracker to find atomicity violations on-the-fly in production runs.",
"title": ""
},
{
"docid": "neg:1840555_3",
"text": "Manoeuvre assistance is currently receiving increasing attention from the car industry. In this article we focus on the implementation of a reverse parking assistance and more precisely, a reverse parking manoeuvre planner. This paper is based on a manoeuvre planning technique presented in previous work and specialised in planning reverse parking manoeuvre. Since a key part of the previous method was not explicited, our goal in this paper is to present a practical and reproducible way to implement a reverse parking manoeuvre planner. Our implementation uses a database engine to search for the elementary movements that will make the complete parking manoeuvre. Our results have been successfully tested on a real platform: the CSIRO Autonomous Tractor.",
"title": ""
},
{
"docid": "neg:1840555_4",
"text": "Utilizing parametric and nonparametric techniques, we assess the role of a heretofore relatively unexplored ‘input’ in the educational process, homework, on academic achievement. Our results indicate that homework is an important determinant of student test scores. Relative to more standard spending related measures, extra homework has a larger and more significant impact on test scores. However, the effects are not uniform across different subpopulations. Specifically, we find additional homework to be most effective for high and low achievers, which is further confirmed by stochastic dominance analysis. Moreover, the parametric estimates of the educational production function overstate the impact of schooling related inputs. In all estimates, the homework coefficient from the parametric model maps to the upper deciles of the nonparametric coefficient distribution and as a by-product the parametric model understates the percentage of students with negative responses to additional homework. JEL: C14, I21, I28",
"title": ""
},
{
"docid": "neg:1840555_5",
"text": "Polythelia is a rare congenital malformation that occurs in 1-2% of the population. Intra-areolar polythelia is the presence of one or more supernumerary nipples located within the areola. This is extremely rare. This article presents 3 cases of intra-areolar polythelia treated at our Department. These cases did not present other associated malformation. Surgical correction was performed for psychological and cosmetic reasons using advancement flaps. The aesthetic and functional results were satisfactory.",
"title": ""
},
{
"docid": "neg:1840555_6",
"text": "An automatic road sign recognition system first locates road signs within images captured by an imaging sensor on-board of a vehicle, and then identifies the detected road signs. This paper presents an automatic neural-network-based road sign recognition system. First, a study of the existing road sign recognition research is presented. In this study, the issues associated with automatic road sign recognition are described, the existing methods developed to tackle the road sign recognition problem are reviewed, and a comparison of the features of these methods is given. Second, the developed road sign recognition system is described. The system is capable of analysing live colour road scene images, detecting multiple road signs within each image, and classifying the type of road signs detected. The system consists of two modules: detection and classification. The detection module segments the input image in the hue-saturation-intensity colour space, and then detects road signs using a Multi-layer Perceptron neural-network. The classification module determines the type of detected road signs using a series of one to one architectural Multi-layer Perceptron neural networks. Two sets of classifiers are trained using the Resillient-Backpropagation and Scaled-Conjugate-Gradient algorithms. The two modules of the system are evaluated individually first. Then the system is tested as a whole. The experimental results demonstrate that the system is capable of achieving an average recognition hit-rate of 95.96% using the scaled-conjugate-gradient trained classifiers.",
"title": ""
},
{
"docid": "neg:1840555_7",
"text": "Hierarchical multilabel classification (HMC) allows an instance to have multiple labels residing in a hierarchy. A popular loss function used in HMC is the H-loss, which penalizes only the first classification mistake along each prediction path. However, the H-loss metric can only be used on tree-structured label hierarchies, but not on DAG hierarchies. Moreover, it may lead to misleading predictions as not all misclassifications in the hierarchy are penalized. In this paper, we overcome these deficiencies by proposing a hierarchy-aware loss function that is more appropriate for HMC. Using Bayesian decision theory, we then develop a Bayes-optimal classifier with respect to this loss function. Instead of requiring an exhaustive summation and search for the optimal multilabel, the proposed classification problem can be efficiently solved using a greedy algorithm on both tree-and DAG-structured label hierarchies. Experimental results on a large number of real-world data sets show that the proposed algorithm outperforms existing HMC methods.",
"title": ""
},
{
"docid": "neg:1840555_8",
"text": "An increase in pulsatile release of LHRH is essential for the onset of puberty. However, the mechanism controlling the pubertal increase in LHRH release is still unclear. In primates the LHRH neurosecretory system is already active during the neonatal period but subsequently enters a dormant state in the juvenile/prepubertal period. Neither gonadal steroid hormones nor the absence of facilitatory neuronal inputs to LHRH neurons is responsible for the low levels of LHRH release before the onset of puberty in primates. Recent studies suggest that during the prepubertal period an inhibitory neuronal system suppresses LHRH release and that during the subsequent maturation of the hypothalamus this prepubertal inhibition is removed, allowing the adult pattern of pulsatile LHRH release. In fact, y-aminobutyric acid (GABA) appears to be an inhibitory neurotransmitter responsible for restricting LHRH release before the onset of puberty in female rhesus monkeys. In addition, it appears that the reduction in tonic GABA inhibition allows an increase in the release of glutamate as well as other neurotransmitters, which contributes to the increase in pubertal LHRH release. In this review, developmental changes in several neurotransmitter systems controlling pulsatile LHRH release are extensively reviewed.",
"title": ""
},
{
"docid": "neg:1840555_9",
"text": "Cloud Computing (CC) is fast becoming well known in the computing world as the latest technology. CC enables users to use resources as and when they are required. Mobile Cloud Computing (MCC) is an integration of the concept of cloud computing within a mobile environment, which removes barriers linked to the mobile devices' performance. Nevertheless, these new benefits are not problem-free entirely. Several common problems encountered by MCC are privacy, personal data management, identity authentication, and potential attacks. The security issues are a major hindrance in the mobile cloud computing's adaptability. This study begins by presenting the background of MCC including the various definitions, infrastructures, and applications. In addition, the current challenges and opportunities will be presented including the different approaches that have been adapted in studying MCC.",
"title": ""
},
{
"docid": "neg:1840555_10",
"text": "We study shock-based methods for credible causal inference in corporate finance research. We focus on corporate governance research, survey 13,461 papers published between 2001 and 2011 in 22 major accounting, economics, finance, law, and management journals; and identify 863 empirical studies in which corporate governance is associated with firm value or other characteristics. We classify the methods used in these studies and assess whether they support a causal link between corporate governance and firm value or another outcome. Only a stall minority of studies have convincing causal inference strategies. The convincing strategies largely rely on external shocks – usually from legal rules – often called “natural experiments”. We examine the 74 shock-based papers and provide a guide to shock-based research design, which stresses the common features across different designs and the value of using combined designs.",
"title": ""
},
{
"docid": "neg:1840555_11",
"text": "In today’s world most of us depend on Social Media to communicate, express our feelings and share information with our friends. Social Media is the medium where now a day’s people feel free to express their emotions. Social Media collects the data in structured and unstructured, formal and informal data as users do not care about the spellings and accurate grammatical construction of a sentence while communicating with each other using different social networking websites ( Facebook, Twitter, LinkedIn and YouTube). Gathered data contains sentiments and opinion of users which will be processed using data mining techniques and analyzed for achieving the meaningful information from it. Using Social media data we can classify the type of users by analysis of their posted data on the social web sites. Machine learning algorithms are used for text classification which will extract meaningful data from these websites. Here, in this paper we will discuss the different types of classifiers and their advantages and disadvantages.",
"title": ""
},
{
"docid": "neg:1840555_12",
"text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.",
"title": ""
},
{
"docid": "neg:1840555_13",
"text": "This paper proposes an extension to the Generative Adversarial Networks (GANs), namely as ArtGAN to synthetically generate more challenging and complex images such as artwork that have abstract characteristics. This is in contrast to most of the current solutions that focused on generating natural images such as room interiors, birds, flowers and faces. The key innovation of our work is to allow back-propagation of the loss function w.r.t. the labels (randomly assigned to each generated images) to the generator from the discriminator. With the feedback from the label information, the generator is able to learn faster and achieve better generated image quality. Empirically, we show that the proposed ArtGAN is capable to create realistic artwork, as well as generate compelling real world images that globally look natural with clear shape on CIFAR-10.",
"title": ""
},
{
"docid": "neg:1840555_14",
"text": "Altered cell metabolism is a characteristic feature of many cancers. Aside from well-described changes in nutrient consumption and waste excretion, altered cancer cell metabolism also results in changes to intracellular metabolite concentrations. Increased levels of metabolites that result directly from genetic mutations and cancer-associated modifications in protein expression can promote cancer initiation and progression. Changes in the levels of specific metabolites, such as 2-hydroxyglutarate, fumarate, succinate, aspartate and reactive oxygen species, can result in altered cell signalling, enzyme activity and/or metabolic flux. In this Review, we discuss the mechanisms that lead to changes in metabolite concentrations in cancer cells, the consequences of these changes for the cells and how they might be exploited to improve cancer therapy.",
"title": ""
},
{
"docid": "neg:1840555_15",
"text": "I2Head database has been created with the aim to become an optimal reference for low cost gaze estimation. It exhibits the following outstanding characteristics: it takes into account key aspects of low resolution eye tracking technology; it combines images of users gazing at different grids of points from alternative positions with registers of user’s head position and it provides calibration information of the camera and a simple 3D head model for each user. Hardware used to build the database includes a 6D magnetic sensor and a webcam. A careful calibration method between the sensor and the camera has been developed to guarantee the accuracy of the data. Different sessions have been recorded for each user including not only static head scenarios but also controlled displacements and even free head movements. The database is an outstanding framework to test both gaze estimation algorithms and head pose estimation methods.",
"title": ""
},
{
"docid": "neg:1840555_16",
"text": "We present the Balloon family of password hashing functions. These are the first cryptographic hash functions with proven space-hardness properties that: (i) use a password-independent access pattern, (ii) build exclusively upon standard cryptographic primitives, and (iii) are fast enough for real-world use. Space-hard functions require a large amount of working space to evaluate efficiently and, when used for password hashing, they dramatically increase the cost of offline dictionary attacks. The central technical challenge of this work was to devise the graph-theoretic and linear-algebraic techniques necessary to prove the space-hardness properties of the Balloon functions (in the random-oracle model). To motivate our interest in security proofs, we demonstrate that it is possible to compute Argon2i, a recently proposed space-hard function that lacks a formal analysis, in less than the claimed required space with no increase in the computation time.",
"title": ""
},
{
"docid": "neg:1840555_17",
"text": "Recent years have seen a deluge of behavioral data from players hitting the game industry. Reasons for this data surge are many and include the introduction of new business models, technical innovations, the popularity of online games, and the increasing persistence of games. Irrespective of the causes, the proliferation of behavioral data poses the problem of how to derive insights therefrom. Behavioral data sets can be large, time-dependent and high-dimensional. Clustering offers a way to explore such data and to discover patterns that can reduce the overall complexity of the data. Clustering and other techniques for player profiling and play style analysis have, therefore, become popular in the nascent field of game analytics. However, the proper use of clustering techniques requires expertise and an understanding of games is essential to evaluate results. With this paper, we address game data scientists and present a review and tutorial focusing on the application of clustering techniques to mine behavioral game data. Several algorithms are reviewed and examples of their application shown. Key topics such as feature normalization are discussed and open problems in the context of game analytics are pointed out.",
"title": ""
},
{
"docid": "neg:1840555_18",
"text": "Although the mechanism of action of botulinum toxin (BTX) has been intensively studied, many unanswered questions remain regarding the composition and clinical properties of the two formulations of BTX currently approved for cosmetic use. In the first half of this review, these questions are explored in detail, with emphasis on the most pertinent and revelatory studies in the literature. The second half delineates most of the common and some not so common uses of BTX in the face and neck, stressing important patient selection and safety considerations. Complications from neurotoxins at cosmetic doses are generally rare and usually technique dependent.",
"title": ""
}
] |
1840556 | Grab 'n Run: Secure and Practical Dynamic Code Loading for Android Applications | [
{
"docid": "pos:1840556_0",
"text": "Android phone manufacturers are under the perpetual pressure to move quickly on their new models, continuously customizing Android to fit their hardware. However, the security implications of this practice are less known, particularly when it comes to the changes made to Android's Linux device drivers, e.g., those for camera, GPS, NFC etc. In this paper, we report the first study aimed at a better understanding of the security risks in this customization process. Our study is based on ADDICTED, a new tool we built for automatically detecting some types of flaws in customized driver protection. Specifically, on a customized phone, ADDICTED performs dynamic analysis to correlate the operations on a security-sensitive device to its related Linux files, and then determines whether those files are under-protected on the Linux layer by comparing them with their counterparts on an official Android OS. In this way, we can detect a set of likely security flaws on the phone. Using the tool, we analyzed three popular phones from Samsung, identified their likely flaws and built end-to-end attacks that allow an unprivileged app to take pictures and screenshots, and even log the keys the user enters through touch screen. Some of those flaws are found to exist on over a hundred phone models and affect millions of users. We reported the flaws and helped the manufacturers fix those problems. We further studied the security settings of device files on 2423 factory images from major phone manufacturers, discovered over 1,000 vulnerable images and also gained insights about how they are distributed across different Android versions, carriers and countries.",
"title": ""
},
{
"docid": "pos:1840556_1",
"text": "In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.",
"title": ""
}
] | [
{
"docid": "neg:1840556_0",
"text": "Neural networks have recently had a lot of success for many tasks. However, neural network architectures that perform well are still typically designed manually by experts in a cumbersome trial-and-error process. We propose a new method to automatically search for well-performing CNN architectures based on a simple hill climbing procedure whose operators apply network morphisms, followed by short optimization runs by cosine annealing. Surprisingly, this simple method yields competitive results, despite only requiring resources in the same order of magnitude as training a single network. E.g., on CIFAR-10, our method designs and trains networks with an error rate below 6% in only 12 hours on a single GPU; training for one day reduces this error further, to almost 5%.",
"title": ""
},
{
"docid": "neg:1840556_1",
"text": "The neocortex has a high capacity for plasticity. To understand the full scope of this capacity, it is essential to know how neurons choose particular partners to form synaptic connections. By using multineuron whole-cell recordings and confocal microscopy we found that axons of layer V neocortical pyramidal neurons do not preferentially project toward the dendrites of particular neighboring pyramidal neurons; instead, axons promiscuously touch all neighboring dendrites without any bias. Functional synaptic coupling of a small fraction of these neurons is, however, correlated with the existence of synaptic boutons at existing touch sites. These data provide the first direct experimental evidence for a tabula rasa-like structural matrix between neocortical pyramidal neurons and suggests that pre- and postsynaptic interactions shape the conversion between touches and synapses to form specific functional microcircuits. These data also indicate that the local neocortical microcircuit has the potential to be differently rewired without the need for remodeling axonal or dendritic arbors.",
"title": ""
},
{
"docid": "neg:1840556_2",
"text": "This study examined perceived coping (perceived problem-solving ability and progress in coping with problems) as a mediator between adult attachment (anxiety and avoidance) and psychological distress (depression, hopelessness, anxiety, anger, and interpersonal problems). Survey data from 515 undergraduate students were analyzed using structural equation modeling. Results indicated that perceived coping fully mediated the relationship between attachment anxiety and psychological distress and partially mediated the relationship between attachment avoidance and psychological distress. These findings suggest not only that it is important to consider attachment anxiety or avoidance in understanding distress but also that perceived coping plays an important role in these relationships. Implications for these more complex relations are discussed for both counseling interventions and further research.",
"title": ""
},
{
"docid": "neg:1840556_3",
"text": "The VIENNA rectifiers have advantages of high efficiency as well as low output harmonics and are widely utilized in power conversion system when dc power sources are needed for supplying dc loads. VIENNA rectifiers based on three-phase/level can provide two voltage outputs with a neutral line at relatively low costs. However, total harmonic distortion (THD) of input current deteriorates seriously when unbalanced voltages occur. In addition, voltage outputs depend on system parameters, especially multiple loads. Therefore, unbalance output voltage controller and modified carrier-based pulse-width modulation (CBPWM) are proposed in this paper to solve the above problems. Unbalanced output voltage controller is designed based on average model considering independent output voltage and loads conditions. Meanwhile, reference voltages are modified according to different neutral point voltage conditions. The simulation and experimental results are presented to verify the proposed method.",
"title": ""
},
{
"docid": "neg:1840556_4",
"text": "While recognized as a theoretical and practical concept for over 20 years, only now ransomware has taken centerstage as one of the most prevalent cybercrimes. Various reports demonstrate the enormous burden placed on companies, which have to grapple with the ongoing attack waves. At the same time, our strategic understanding of the threat and the adversarial interaction between organizations and cybercriminals perpetrating ransomware attacks is lacking. In this paper, we develop, to the best of our knowledge, the first gametheoretic model of the ransomware ecosystem. Our model captures a multi-stage scenario involving organizations from different industry sectors facing a sophisticated ransomware attacker. We place particular emphasis on the decision of companies to invest in backup technologies as part of a contingency plan, and the economic incentives to pay a ransom if impacted by an attack. We further study to which degree comprehensive industry-wide backup investments can serve as a deterrent for ongoing attacks.",
"title": ""
},
{
"docid": "neg:1840556_5",
"text": "Automatic judgment prediction aims to predict the judicial results based on case materials. It has been studied for several decades mainly by lawyers and judges, considered as a novel and prospective application of artificial intelligence techniques in the legal field. Most existing methods follow the text classification framework, which fails to model the complex interactions among complementary case materials. To address this issue, we formalize the task as Legal Reading Comprehension according to the legal scenario. Following the working protocol of human judges, LRC predicts the final judgment results based on three types of information, including fact description, plaintiffs’ pleas, and law articles. Moreover, we propose a novel LRC model, AutoJudge, which captures the complex semantic interactions among facts, pleas, and laws. In experiments, we construct a real-world civil case dataset for LRC. Experimental results on this dataset demonstrate that our model achieves significant improvement over stateof-the-art models. We will publish all source codes and datasets of this work on github. com for further research.",
"title": ""
},
{
"docid": "neg:1840556_6",
"text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses thus increasing the performance and reliability of the electrical system, opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid is reviewed. The main objective of this paper is to give a description of state of the art for the distributed power generation systems (DPGS) based on renewable energy and explores the power converter connected in parallel to the grid which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U",
"title": ""
},
{
"docid": "neg:1840556_7",
"text": "A low-offset latched comparator using new dynamic offset cancellation technique is proposed. The new technique achieves low offset voltage without pre-amplifier and quiescent current. Furthermore the overdrive voltage of the input transistor can be optimized to reduce the offset voltage of the comparator independent of the input common mode voltage. A prototype comparator has been fabricated in 90 nm 9M1P CMOS technology with 152 µm2. Experimental results show that the comparator achieves 3.8 mV offset at 1 sigma at 500 MHz operating, while dissipating 39 μW from a 1.2 V supply.",
"title": ""
},
{
"docid": "neg:1840556_8",
"text": "It has recently been shown that Bondi-van der Burg-Metzner-Sachs supertranslation symmetries imply an infinite number of conservation laws for all gravitational theories in asymptotically Minkowskian spacetimes. These laws require black holes to carry a large amount of soft (i.e., zero-energy) supertranslation hair. The presence of a Maxwell field similarly implies soft electric hair. This Letter gives an explicit description of soft hair in terms of soft gravitons or photons on the black hole horizon, and shows that complete information about their quantum state is stored on a holographic plate at the future boundary of the horizon. Charge conservation is used to give an infinite number of exact relations between the evaporation products of black holes which have different soft hair but are otherwise identical. It is further argued that soft hair which is spatially localized to much less than a Planck length cannot be excited in a physically realizable process, giving an effective number of soft degrees of freedom proportional to the horizon area in Planck units.",
"title": ""
},
{
"docid": "neg:1840556_9",
"text": "The psychometric properties and clinical utility of the Separation Anxiety Avoidance Inventory, child and parent version (SAAI-C/P) were examined in two studies. The aim of the SAAI, a self- and parent-report measure, is to evaluate the avoidance relating to separation anxiety disorder (SAD) situations. In the first study, a school sample of 384 children and their parents (n = 279) participated. In the second study, 102 children with SAD and 35 children with other anxiety disorders (AD) were investigated. In addition, 93 parents of children with SAD, and 35 parents of children with other AD participated. A two-factor structure was confirmed by confirmatory factor analysis. The SAAI-C and SAAI-P demonstrated good internal consistency, test-retest reliability, as well as construct and discriminant validity. Furthermore, the SAAI was sensitive to treatment change. The parent-child agreement was substantial. Overall, these results provide support for the use of the SAAI-C/P version in clinical and research settings.",
"title": ""
},
{
"docid": "neg:1840556_10",
"text": "A half-bridge integrated zero-voltage-switching (ZVS) full-bridge converter with reduced conduction loss for battery on-board chargers in electric vehicles (EVs) or plug-in hybrid electric vehicles (PHEVs) is proposed in this paper. The proposed converter features a reduction in primary-conduction loss and a lower secondary-voltage stress. In addition, the proposed converter has the most favorable characteristics as battery chargers as follows: a full ZVS capability and a significantly reduced output filter size due to the improved output waveform. In this paper, the circuit configuration, operation principle, and relevant analysis results of the proposed converter are described, followed by the experimental results on a prototype converter realized with a scale-downed 2-kW battery charger for EVs or PHEVs. The experimental results validate the theoretical analysis and show the effectiveness of the proposed converter as battery on-board chargers for EVs or PHEVs.",
"title": ""
},
{
"docid": "neg:1840556_11",
"text": "In this paper we investigate the co-authorship graph obtained from all papers published at SIGMOD between 1975 and 2002. We find some interesting facts, for instance, the identity of the authors who, on average, are \"closest\" to all other authors at a given time. We also show that SIGMOD's co-authorship graph is yet another example of a small world---a graph topology which has received a lot of attention recently. A companion web site for this paper can be found at http://db.cs.ualberta.ca/coauthorship.",
"title": ""
},
{
"docid": "neg:1840556_12",
"text": "We present here SEMILAR, a SEMantic simILARity toolkit. SEMILAR implements a number of algorithms for assessing the semantic similarity between two texts. It is available as a Java library and as a Java standalone application offering GUI-based access to the implemented semantic similarity methods. Furthermore, it offers facilities for manual semantic similarity annotation by experts through its component SEMILAT (a SEMantic simILarity Annotation Tool).",
"title": ""
},
{
"docid": "neg:1840556_13",
"text": "Clinical Scenario: Patients who experience prolonged concussion symptoms can be diagnosed with postconcussion syndrome (PCS) when those symptoms persist longer than 4 weeks. Aerobic exercise protocols have been shown to be effective in improving physical and mental aspects of health. Emerging research suggests that aerobic exercise may be useful as a treatment for PCS, where exercise allows patients to feel less isolated and more active during the recovery process.\n\n\nCLINICAL QUESTION\nIs aerobic exercise more beneficial in reducing symptoms than current standard care in patients with prolonged symptoms or PCS lasting longer than 4 weeks? Summary of Key Findings: After a thorough literature search, 4 studies relevant to the clinical question were selected. Of the 4 studies, 1 study was a randomized control trial and 3 studies were case series. All 4 studies investigated aerobic exercise protocol as treatment for PCS. Three studies demonstrated a greater rate of symptom improvement from baseline assessment to follow-up after a controlled subsymptomatic aerobic exercise program. One study showed a decrease in symptoms in the aerobic exercise group compared with the full-body stretching group. Clinical Bottom Line: There is moderate evidence to support subsymptomatic aerobic exercise as a treatment of PCS; therefore, it should be considered as a clinical option for reducing PCS and prolonged concussion symptoms. A previously validated protocol, such as the Buffalo Concussion Treadmill test, Balke protocol, or rating of perceived exertion, as mentioned in this critically appraised topic, should be used to measure baseline values and treatment progression. Strength of Recommendation: Level C evidence exists that the aerobic exercise protocol is more effective than the current standard of care in treating PCS.",
"title": ""
},
{
"docid": "neg:1840556_14",
"text": "This paper presents a clock generator for a MIPI M-PHY serial link transmitter, which includes an ADPLL, a digitally controlled oscillator (DCO), a programmable multiplier, and the actual serial driver. The paper focuses on the design of a DCO and how to enhance the frequency resolution to diminish the quantization noise introduced by the frequency discretization. As a result, a 17-kHz DCO frequency tuning resolution is demonstrated. Furthermore, implementation details of a low-power programmable 1-to-2-or-4 frequency multiplier are elaborated. The design has been implemented in a 40-nm CMOS process. The measurement results verify that the circuit provides the MIPI clock data rates from 1.248 GHz to 5.83 GHz. The DCO and multiplier unit dissipates a maximum of 3.9 mW from a 1.1 V supply and covers a small die area of 0.012 mm2.",
"title": ""
},
{
"docid": "neg:1840556_15",
"text": "The application of frequent patterns in classification has demonstrated its power in recent studies. It often adopts a two-step approach: frequent pattern (or classification rule) mining followed by feature selection (or rule ranking). However, this two-step process could be computationally expensive, especially when the problem scale is large or the minimum support is low. It was observed that frequent pattern mining usually produces a huge number of \"patterns\" that could not only slow down the mining process but also make feature selection hard to complete. In this paper, we propose a direct discriminative pattern mining approach, DDPMine, to tackle the efficiency issue arising from the two-step approach. DDPMine performs a branch-and-bound search for directly mining discriminative patterns without generating the complete pattern set. Instead of selecting best patterns in a batch, we introduce a \"feature-centered\" mining approach that generates discriminative patterns sequentially on a progressively shrinking FP-tree by incrementally eliminating training instances. The instance elimination effectively reduces the problem size iteratively and expedites the mining process. Empirical results show that DDPMine achieves orders of magnitude speedup without any downgrade of classification accuracy. It outperforms the state-of-the-art associative classification methods in terms of both accuracy and efficiency.",
"title": ""
},
{
"docid": "neg:1840556_16",
"text": "Understanding how housing values evolve over time is important to policy makers, consumers and real estate professionals. Existing methods for constructing housing indices are computed at a coarse spatial granularity, such as metropolitan regions, which can mask or distort price dynamics apparent in local markets, such as neighborhoods and census tracts. A challenge in moving to estimates at, for example, the census tract level is the scarcity of spatiotemporally localized house sales observations. Our work aims to address this challenge by leveraging observations from multiple census tracts discovered to have correlated valuation dynamics. Our proposed Bayesian nonparametric approach builds on the framework of latent factor models to enable a flexible, data-driven method for inferring the clustering of correlated census tracts. We explore methods for scalability and parallelizability of computations, yielding a housing valuation index at the level of census tract rather than zip code, and on a monthly basis rather than quarterly. Our analysis is provided on a large Seattle metropolitan housing dataset.",
"title": ""
},
{
"docid": "neg:1840556_17",
"text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.",
"title": ""
},
{
"docid": "neg:1840556_18",
"text": "Recognizing plants is a vital problem especially for biologists, chemists, and environmentalists. Plant recognition can be performed by human experts manually but it is a time consuming and low-efficiency process. Automation of plant recognition is an important process for the fields working with plants. This paper presents an approach for plant recognition using leaf images. Shape and color features extracted from leaf images are used with k-Nearest Neighbor, Support Vector Machines, Naive Bayes, and Random Forest classification algorithms to recognize plant types. The presented approach is tested on 1897 leaf images and 32 kinds of leaves. The results demonstrated that success rate of plant recognition can be improved up to 96% with Random Forest method when both shape and color features are used.",
"title": ""
},
{
"docid": "neg:1840556_19",
"text": "The study examined the etiology of individual differences in early drawing and of its longitudinal association with school mathematics. Participants (N = 14,760), members of the Twins Early Development Study, were assessed on their ability to draw a human figure, including number of features, symmetry, and proportionality. Human figure drawing was moderately stable across 6 months (average r = .40). Individual differences in drawing at age 4½ were influenced by genetic (.21), shared environmental (.30), and nonshared environmental (.49) factors. Drawing was related to later (age 12) mathematical ability (average r = .24). This association was explained by genetic and shared environmental factors that also influenced general intelligence. Some genetic factors, unrelated to intelligence, also contributed to individual differences in drawing.",
"title": ""
}
] |
1840557 | Gaussian Processes for Rumour Stance Classification in Social Media | [
{
"docid": "pos:1840557_0",
"text": "The open structure of online social networks and their uncurated nature give rise to problems of user credibility and influence. In this paper, we address the task of predicting the impact of Twitter users based only on features under their direct control, such as usage statistics and the text posted in their tweets. We approach the problem as regression and apply linear as well as nonlinear learning methods to predict a user impact score, estimated by combining the numbers of the user’s followers, followees and listings. The experimental results point out that a strong prediction performance is achieved, especially for models based on the Gaussian Processes framework. Hence, we can interpret various modelling components, transforming them into indirect ‘suggestions’ for impact boosting.",
"title": ""
}
] | [
{
"docid": "neg:1840557_0",
"text": "OBJECTIVE\nTo establish the psychosexual outcome of gender-dysphoric children at 16 years or older and to examine childhood characteristics related to psychosexual outcome.\n\n\nMETHOD\nWe studied 77 children who had been referred in childhood to our clinic because of gender dysphoria (59 boys, 18 girls; mean age 8.4 years, age range 5-12 years). In childhood, we measured the children's cross-gender identification and discomfort with their own sex and gender roles. At follow-up 10.4 +/- 3.4 years later, 54 children (mean age 18.9 years, age range 16-28 years) agreed to participate. In this group, we assessed gender dysphoria and sexual orientation.\n\n\nRESULTS\nAt follow-up, 30% of the 77 participants (19 boys and 4 girls) did not respond to our recruiting letter or were not traceable; 27% (12 boys and 9 girls) were still gender dysphoric (persistence group), and 43% (desistance group: 28 boys and 5 girls) were no longer gender dysphoric. Both boys and girls in the persistence group were more extremely cross-gendered in behavior and feelings and were more likely to fulfill gender identity disorder (GID) criteria in childhood than the children in the other two groups. At follow-up, nearly all male and female participants in the persistence group reported having a homosexual or bisexual sexual orientation. In the desistance group, all of the girls and half of the boys reported having a heterosexual orientation. The other half of the boys in the desistance group had a homosexual or bisexual sexual orientation.\n\n\nCONCLUSIONS\nMost children with gender dysphoria will not remain gender dysphoric after puberty. Children with persistent GID are characterized by more extreme gender dysphoria in childhood than children with desisting gender dysphoria. With regard to sexual orientation, the most likely outcome of childhood GID is homosexuality or bisexuality.",
"title": ""
},
{
"docid": "neg:1840557_1",
"text": "This memo describes a snapshot of the reasoning behind a proposed new namespace, the Host Identity namespace, and a new protocol layer, the Host Identity Protocol (HIP), between the internetworking and transport layers. Herein are presented the basics of the current namespaces, their strengths and weaknesses, and how a new namespace will add completeness to them. The roles of this new namespace in the protocols are defined. The memo describes the thinking of the authors as of Fall 2003. The architecture may have evolved since. This document represents one stable point in that evolution of understanding.",
"title": ""
},
{
"docid": "neg:1840557_2",
"text": "BACKGROUND\nLysergic acid diethylamide (LSD) is a potent serotonergic hallucinogen or psychedelic that modulates consciousness in a marked and novel way. This study sought to examine the acute and mid-term psychological effects of LSD in a controlled study.\n\n\nMETHOD\nA total of 20 healthy volunteers participated in this within-subjects study. Participants received LSD (75 µg, intravenously) on one occasion and placebo (saline, intravenously) on another, in a balanced order, with at least 2 weeks separating sessions. Acute subjective effects were measured using the Altered States of Consciousness questionnaire and the Psychotomimetic States Inventory (PSI). A measure of optimism (the Revised Life Orientation Test), the Revised NEO Personality Inventory, and the Peter's Delusions Inventory were issued at baseline and 2 weeks after each session.\n\n\nRESULTS\nLSD produced robust psychological effects; including heightened mood but also high scores on the PSI, an index of psychosis-like symptoms. Increased optimism and trait openness were observed 2 weeks after LSD (and not placebo) and there were no changes in delusional thinking.\n\n\nCONCLUSIONS\nThe present findings reinforce the view that psychedelics elicit psychosis-like symptoms acutely yet improve psychological wellbeing in the mid to long term. It is proposed that acute alterations in mood are secondary to a more fundamental modulation in the quality of cognition, and that increased cognitive flexibility subsequent to serotonin 2A receptor (5-HT2AR) stimulation promotes emotional lability during intoxication and leaves a residue of 'loosened cognition' in the mid to long term that is conducive to improved psychological wellbeing.",
"title": ""
},
{
"docid": "neg:1840557_3",
"text": "Behavioral interventions preceded by a functional analysis have been proven efficacious in treating severe problem behavior associated with autism. There is, however, a lack of research showing socially validated outcomes when assessment and treatment procedures are conducted by ecologically relevant individuals in typical settings. In this study, interview-informed functional analyses and skill-based treatments (Hanley et al. in J Appl Behav Anal 47:16-36, 2014) were applied by a teacher and home-based provider in the classroom and home of two children with autism. The function-based treatments resulted in socially validated reductions in severe problem behavior (self-injury, aggression, property destruction). Furthermore, skills lacking in baseline-functional communication, denial and delay tolerance, and compliance with adult instructions-occurred with regularity following intervention. The generality and costs of the process are discussed.",
"title": ""
},
{
"docid": "neg:1840557_4",
"text": "J. Naidoo1*, D. B. Page2, B. T. Li3, L. C. Connell3, K. Schindler4, M. E. Lacouture5,6, M. A. Postow3,6 & J. D. Wolchok3,6 Sidney Kimmel Comprehensive Cancer Center, Johns Hopkins University, Baltimore; Providence Portland Medical Center and Earl A. Chiles Research Institute, Portland; Department of Medicine and Ludwig Center, Memorial Sloan Kettering Cancer Center, New York, USA; Department of Dermatology, Medical University of Vienna, Vienna, Austria; Dermatology Service, Memorial Sloan Kettering Cancer Center, New York; Department of Medicine, Weill Cornell Medical College, New York, USA",
"title": ""
},
{
"docid": "neg:1840557_5",
"text": "Information technology (IT) such as Electronic Data Interchange (EDI), Radio Frequency Identification Technology (RFID), wireless, the Internet and World Wide Web (WWW), and Information Systems (IS) such as Electronic Commerce (E-Commerce) systems and Enterprise Resource Planning (ERP) systems have had tremendous impact in education, healthcare, manufacturing, transportation, retailing, pure services, and even war. Many organizations turned to IT/IS to help them achieve their goals; however, many failed to achieve the full potential of IT/IS. These failures can be attributed at least in part to a weak link in the planning process. That weak link is the IT/IS justification process. The decision-making process has only grown more difficult in recent years with the increased complexity of business brought about by the rapid growth of supply chain management, the virtual enterprise and E-business. These are but three of the many changes in the business environment over the past 10–12 years. The complexities of this dynamic new business environment should be taken into account in IT/IS justification. We conducted a review of the current literature on IT/IS justification. The purpose of the literature review was to assemble meaningful information for the development of a framework for IT/IS evaluation that better reflects the new business environment. A suitable classification scheme has been proposed for organizing the literature reviewed. Directions for future research are indicated. 2005 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840557_6",
"text": "Recent advances in artificial intelligence (AI) and machine learning, combined with developments in neuromorphic hardware technologies and ubiquitous computing, promote machines to emulate human perceptual and cognitive abilities in a way that will continue the trend of automation for several upcoming decades. Despite the gloomy scenario of automation as a job eliminator, we argue humans and machines can cross-fertilise in a way that forwards a cooperative coexistence. We build our argument on three pillars: (i) the economic mechanism of automation, (ii) the dichotomy of ‘experience’ that separates the first-person perspective of humans from artificial learning algorithms, and (iii) the interdependent relationship between humans and machines. To realise this vision, policy makers have to implement alternative educational approaches that support lifelong training and flexible job transitions.",
"title": ""
},
{
"docid": "neg:1840557_7",
"text": "Therabot is a robotic therapy support system designed to supplement a therapist and to provide support to patients diagnosed with conditions associated with trauma and adverse events. The system takes on the form factor of a floppy-eared dog which fits in a person»s lap and is designed for patients to provide support and encouragement for home therapy exercises and in counseling.",
"title": ""
},
{
"docid": "neg:1840557_8",
"text": "Semantic image inpainting is a challenging task where large missing regions have to be filled based on the available visual data. Existing methods which extract information from only a single image generally produce unsatisfactory results due to the lack of high level context. In this paper, we propose a novel method for semantic image inpainting, which generates the missing content by conditioning on the available data. Given a trained generative model, we search for the closest encoding of the corrupted image in the latent image manifold using our context and prior losses. This encoding is then passed through the generative model to infer the missing content. In our method, inference is possible irrespective of how the missing content is structured, while the state-of-the-art learning based method requires specific information about the holes in the training phase. Experiments on three datasets show that our method successfully predicts information in large missing regions and achieves pixel-level photorealism, significantly outperforming the state-of-the-art methods.",
"title": ""
},
{
"docid": "neg:1840557_9",
"text": "Expedia users who prefer the same types of hotels presumably share other commonalities (i.e., non-hotel commonalities) with each other. With this in mind, Kaggle challenged developers to recommend hotels to Expedia users. Armed with a training set containing data about 37 million Expedia users, we set out to do just that. Our machine-learning algorithms ranged from direct applications of material learned in class to multi-part algorithms with novel combinations of recommender system techniques. Kaggle’s benchmark for randomly guessing a user’s hotel cluster is 0.02260, and the mean average precision K = 5 value for näıve recommender systems is 0.05949. Our best combination of machine-learning algorithms achieved a figure just over 0.30. Our results provide insight into performing multi-class classification on data sets that lack linear structure.",
"title": ""
},
{
"docid": "neg:1840557_10",
"text": "This paper proposes a real-time variable-Q non-stationary Gabor transform (VQ-NSGT) system for speech pitch shifting. The system allows for time-frequency representations of speech on variable-Q (VQ) with perfect reconstruction and computational efficiency. The proposed VQ-NSGT phase vocoder can be used for pitch shifting by simple frequency translation (transposing partials along the frequency axis) instead of spectral stretching in frequency domain by the Fourier transform. In order to retain natural sounding pitch shifted speech, a hybrid of smoothly varying Q scheme is used to retain the formant structure of the original signal at both low and high frequencies. Moreover, the preservation of transients of speech are improved due to the high time resolution of VQ-NSGT at high frequencies. A sliced VQ-NSGT is used to retain inter-partials phase coherence by synchronized overlap-add method. Therefore, the proposed system lends itself to real-time processing while retaining the formant structure of the original signal and inter-partial phase coherence. The simulation results showed that the proposed approach is suitable for pitch shifting of both speech and music signals.",
"title": ""
},
{
"docid": "neg:1840557_11",
"text": "We propose to detect abnormal events via a sparse reconstruction over the normal bases. Given a collection of normal training examples, e.g., an image sequence or a collection of local spatio-temporal patches, we propose the sparse reconstruction cost (SRC) over the normal dictionary to measure the normalness of the testing sample. By introducing the prior weight of each basis during sparse reconstruction, the proposed SRC is more robust compared to other outlier detection criteria. To condense the over-completed normal bases into a compact dictionary, a novel dictionary selection method with group sparsity constraint is designed, which can be solved by standard convex optimization. Observing that the group sparsity also implies a low rank structure, we reformulate the problem using matrix decomposition, which can handle large scale training samples by reducing the memory requirement at each iteration from O(k2) to O(k) where k is the number of samples. We use the column wise coordinate descent to solve the matrix decomposition represented formulation, which empirically leads to a similar solution to the group sparsity formulation. By designing different types of spatio-temporal basis, our method can detect both local and global abnormal events. Meanwhile, as it does not rely on object detection and tracking, it can be applied to crowded video scenes. By updating the dictionary incrementally, our 1This work was supported in part by the Nanyang Assistant Professorship (M4080134), JSPSNTU joint project (M4080882), Natural Science Foundation of China (61105013), and National Science and Technology Pillar Program (2012BAI14B03). Part of this work was done when Yang Cong was a research fellow at NTU. Preprint submitted to Pattern Recognition January 30, 2013 method can be easily extended to online event detection. Experiments on three benchmark datasets and the comparison to the state-of-the-art methods validate the advantages of our method.",
"title": ""
},
{
"docid": "neg:1840557_12",
"text": "ETHNOPHARMACOLOGICAL RELEVANCE\nSenna occidentalis, Leonotis ocymifolia, Leucas martinicensis, Rumex abyssinicus, and Albizia schimperiana are traditionally used for treatment of various ailments including helminth infection in Ethiopia.\n\n\nMATERIALS AND METHODS\nIn vitro egg hatch assay and larval development tests were conducted to determine the possible anthelmintic effects of crude aqueous and hydro-alcoholic extracts of the leaves of Senna occidentalis, aerial parts of Leonotis ocymifolia, Leucas martinicensis, Rumex abyssinicus, and stem bark of Albizia schimperiana on eggs and larvae of Haemonchus contortus.\n\n\nRESULTS\nBoth aqueous and hydro-alcoholic extracts of Leucas martinicensis, Leonotis ocymifolia and aqueous extract of Senna occidentalis and Albizia schimperiana induced complete inhibition of egg hatching at concentration less than or equal to 1mg/ml. Aqueous and hydro-alcoholic extracts of all tested medicinal plants have shown statistically significant and dose dependent egg hatching inhibition. Based on ED(50), the most potent extracts were aqueous and hydro-alcoholic extracts of Leucas martinicensis (0.09 mg/ml), aqueous extracts of Rumex abyssinicus (0.11 mg/ml) and Albizia schimperiana (0.11 mg/ml). Most of the tested plant extracts have shown remarkable larval development inhibition. Aqueous extracts of Leonotis ocymifolia, Leucas martinicensis, Albizia schimperiana and Senna occidentalis induced 100, 99.85, 99.31, and 96.36% inhibition of larval development, respectively; while hydro-alcoholic extracts of Albizia schimperiana induced 99.09 inhibition at the highest concentration tested (50mg/ml). Poor inhibition was recorded for hydro-alcoholic extracts of Senna occidentalis (9%) and Leonotis ocymifolia (37%) at 50mg/ml.\n\n\nCONCLUSIONS\nThe overall findings of the current study indicated that the evaluated medicinal plants have potential anthelmintic effect and further in vitro and in vivo evaluation is indispensable to make use of these plants.",
"title": ""
},
{
"docid": "neg:1840557_13",
"text": "With the increasing in mobile application systems and a high competition between companies, that led to increase in the number of mobile application projects. Mobile software development is a group of process for creating software for mobile devices with limited resources like small screen, low-power. The development of mobile applications is a big challenging because of rapidly changing business requirements and technical constraints for mobile systems. So, developers faced the challenge of a dynamic environment and the Changing of mobile application requirements. Moreover, Mobile applications should adapt appropriate software development methods that act in response efficiently to these challenges. However, at the moment, there is limited knowledge about the suitability of different software practices for the development of mobile applications. According to many researchers ,Agile methodologies was found to be most suitable for mobile development projects as they are short time, require flexibility, reduces waste and time to market. Finally, in this research we are looking for a suitable process model that conforms to the requirement of mobile application, we are going to investigate agile development methods to find a way, making the development of mobile application easy and compatible with mobile device features.",
"title": ""
},
{
"docid": "neg:1840557_14",
"text": "The concept of agile process models has gained great popularity in software (SW) development community in past few years. Agile models promote fast development. This property has certain drawbacks, such as poor documentation and bad quality. Fast development promotes use of agile process models in small-scale projects. This paper modifies and evaluates extreme programming (XP) process model and proposes a novel adaptive process mode based on these modifications. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840557_15",
"text": "Although there has been considerable progress in reducing cancer incidence in the United States, the number of cancer survivors continues to increase due to the aging and growth of the population and improvements in survival rates. As a result, it is increasingly important to understand the unique medical and psychosocial needs of survivors and be aware of resources that can assist patients, caregivers, and health care providers in navigating the various phases of cancer survivorship. To highlight the challenges and opportunities to serve these survivors, the American Cancer Society and the National Cancer Institute estimated the prevalence of cancer survivors on January 1, 2012 and January 1, 2022, by cancer site. Data from Surveillance, Epidemiology, and End Results (SEER) registries were used to describe median age and stage at diagnosis and survival; data from the National Cancer Data Base and the SEER-Medicare Database were used to describe patterns of cancer treatment. An estimated 13.7 million Americans with a history of cancer were alive on January 1, 2012, and by January 1, 2022, that number will increase to nearly 18 million. The 3 most prevalent cancers among males are prostate (43%), colorectal (9%), and melanoma of the skin (7%), and those among females are breast (41%), uterine corpus (8%), and colorectal (8%). This article summarizes common cancer treatments, survival rates, and posttreatment concerns and introduces the new National Cancer Survivorship Resource Center, which has engaged more than 100 volunteer survivorship experts nationwide to develop tools for cancer survivors, caregivers, health care professionals, advocates, and policy makers.",
"title": ""
},
{
"docid": "neg:1840557_16",
"text": "This paper describes two digital implementations of a new mathematical transform, namely, the second generation curvelet transform in two and three dimensions. The first digital transformation is based on unequally spaced fast Fourier transforms, while the second is based on the wrapping of specially selected Fourier samples. The two implementations essentially differ by the choice of spatial grid used to translate curvelets at each scale and angle. Both digital transformations return a table of digital curvelet coefficients indexed by a scale parameter, an orientation parameter, and a spatial location parameter. And both implementations are fast in the sense that they run in O(n2 logn) flops for n by n Cartesian arrays; in addition, they are also invertible, with rapid inversion algorithms of about the same complexity. Our digital transformations improve upon earlier implementations—based upon the first generation of curvelets—in the sense that they are conceptually simpler, faster, and far less redundant. The software CurveLab, which implements both transforms presented in this paper, is available at http://www.curvelet.org.",
"title": ""
},
{
"docid": "neg:1840557_17",
"text": "The utility industry has invested widely in smart grid (SG) over the past decade. They considered it the future electrical grid while the information and electricity are delivered in two-way flow. SG has many Artificial Intelligence (AI) applications such as Artificial Neural Network (ANN), Machine Learning (ML) and Deep Learning (DL). Recently, DL has been a hot topic for AI applications in many fields such as time series load forecasting. This paper introduces the common algorithms of DL in the literature applied to load forecasting problems in the SG and power systems. The intention of this survey is to explore the different applications of DL that are used in the power systems and smart grid load forecasting. In addition, it compares the accuracy results RMSE and MAE for the reviewed applications and shows the use of convolutional neural network CNN with k-means algorithm had a great percentage of reduction in terms of RMSE.",
"title": ""
},
{
"docid": "neg:1840557_18",
"text": "About ten years ago, soon after the Web’s birth, Web “search engines” were first by word of mouth. Soon, however, automated search engines became a world wide phenomenon, especially AltaVista at the beginning. I was pleasantly surprised by the amount and diversity of information made accessible by the Web search engines even in the mid 1990’s. The growth of the available Web pages is beyond most, if not all, people’s imagination. The search engines enabled people to find information, facts, and references among these Web pages.",
"title": ""
},
{
"docid": "neg:1840557_19",
"text": "Standard targets are typically used for structural (white-box) evaluation of fingerprint readers, e.g., for calibrating imaging components of a reader. However, there is no standard method for behavioral (black-box) evaluation of fingerprint readers in operational settings where variations in finger placement by the user are encountered. The goal of this research is to design and fabricate 3D targets for repeatable behavioral evaluation of fingerprint readers. 2D calibration patterns with known characteristics (e.g., sinusoidal gratings of pre-specified orientation and frequency, and fingerprints with known singular points and minutiae) are projected onto a generic 3D finger surface to create electronic 3D targets. A state-of-the-art 3D printer (Stratasys Objet350 Connex) is used to fabricate wearable 3D targets with materials similar in hardness and elasticity to the human finger skin. The 3D printed targets are cleaned using 2M NaOH solution to obtain evaluation-ready 3D targets. Our experimental results show that: 1) features present in the 2D calibration pattern are preserved during the creation of the electronic 3D target; 2) features engraved on the electronic 3D target are preserved during the physical 3D target fabrication; and 3) intra-class variability between multiple impressions of the physical 3D target is small. We also demonstrate that the generated 3D targets are suitable for behavioral evaluation of three different (500/1000 ppi) PIV/Appendix F certified optical fingerprint readers in the operational settings.",
"title": ""
}
] |
1840558 | Motion Blur Kernel Estimation via Deep Learning | [
{
"docid": "pos:1840558_0",
"text": "Camera shake leads to non-uniform image blurs. State-of-the-art methods for removing camera shake model the blur as a linear combination of homographically transformed versions of the true image. While this is conceptually interesting, the resulting algorithms are computationally demanding. In this paper we develop a forward model based on the efficient filter flow framework, incorporating the particularities of camera shake, and show how an efficient algorithm for blur removal can be obtained. Comprehensive comparisons on a number of real-world blurry images show that our approach is not only substantially faster, but it also leads to better deblurring results.",
"title": ""
},
{
"docid": "pos:1840558_1",
"text": "Blind image deconvolution is an ill-posed problem that requires regularization to solve. However, many common forms of image prior used in this setting have a major drawback in that the minimum of the resulting cost function does not correspond to the true sharp solution. Accordingly, a range of additional methods are needed to yield good results (Bayesian methods, adaptive cost functions, alpha-matte extraction and edge localization). In this paper we introduce a new type of image regularization which gives lowest cost for the true sharp image. This allows a very simple cost formulation to be used for the blind deconvolution model, obviating the need for additional methods. Due to its simplicity the algorithm is fast and very robust. We demonstrate our method on real images with both spatially invariant and spatially varying blur.",
"title": ""
}
] | [
{
"docid": "neg:1840558_0",
"text": "Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"title": ""
},
{
"docid": "neg:1840558_1",
"text": "The current study demonstrates the separability of spatial and verbal working memory resources among college students. In Experiment 1, we developed a spatial span task that taxes both the processing and storage components of spatial working memory. This measure correlates with spatial ability (spatial visualization) measures, but not with verbal ability measures. In contrast, the reading span test, a common test of verbal working memory, correlates with verbal ability measures, but not with spatial ability measures. Experiment 2, which uses an interference paradigm to cross the processing and storage demands of span tasks, replicates this dissociation and further demonstrates that both the processing and storage components of working memory tasks are important for predicting performance on spatial thinking and language processing tasks.",
"title": ""
},
{
"docid": "neg:1840558_2",
"text": "We report a case of a 48-year-old male patient with “krokodil” drug-related osteonecrosis of both jaws. Patient history included 1.5 years of “krokodil” use, with 8-month drug withdrawal prior to surgery. The patient was HCV positive. On the maxilla, sequestrectomy was performed. On the mandible, sequestrectomy was combined with bone resection. From ramus to ramus, segmental defect was formed, which was not reconstructed with any method. Post-operative follow-up period was 3 years and no disease recurrence was noted. On 3-year post-operative orthopantomogram, newly formed mandibular bone was found. This phenomenon shows that spontaneous bone formation is possible after mandible segmental resection in osteonecrosis patients.",
"title": ""
},
{
"docid": "neg:1840558_3",
"text": "We present CryptoML, the first practical framework for provably secure and efficient delegation of a wide range of contemporary matrix-based machine learning (ML) applications on massive datasets. In CryptoML a delegating client with memory and computational resource constraints wishes to assign the storage and ML-related computations to the cloud servers, while preserving the privacy of its data. We first suggest the dominant components of delegation performance cost, and create a matrix sketching technique that aims at minimizing the cost by data pre-processing. We then propose a novel interactive delegation protocol based on the provably secure Shamir's secret sharing. The protocol is customized for our new sketching technique to maximize the client's resource efficiency. CryptoML shows a new trade-off between the efficiency of secure delegation and the accuracy of the ML task. Proof of concept evaluations corroborate applicability of CryptoML to datasets with billions of non-zero records.",
"title": ""
},
{
"docid": "neg:1840558_4",
"text": "As antivirus and network intrusion detection systems have increasingly proven insufficient to detect advanced threats, large security operations centers have moved to deploy endpoint-based sensors that provide deeper visibility into low-level events across their enterprises. Unfortunately, for many organizations in government and industry, the installation, maintenance, and resource requirements of these newer solutions pose barriers to adoption and are perceived as risks to organizations' missions. To mitigate this problem we investigated the utility of agentless detection of malicious endpoint behavior, using only the standard built-in Windows audit logging facility as our signal. We found that Windows audit logs, while emitting manageable sized data streams on the endpoints, provide enough information to allow robust detection of malicious behavior. Audit logs provide an effective, low-cost alternative to deploying additional expensive agent-based breach detection systems in many government and industrial settings, and can be used to detect, in our tests, 83% percent of malware samples with a 0.1% false positive rate. They can also supplement already existing host signature-based antivirus solutions, like Kaspersky, Symantec, and McAfee, detecting, in our testing environment, 78% of malware missed by those antivirus systems.",
"title": ""
},
{
"docid": "neg:1840558_5",
"text": "We present a high gain linearly polarized Ku-band planar array for mobile satellite TV reception. In contrast with previously presented three dimensional designs, the approach presented here results in a low profile planar array with a similar performance. The elevation scan is performed electronically, whereas the azimuth scan is done mechanically using an electric motor. The incident angle of the arriving satellite signal is generally large, varying between 25° to 65° depending on the location of the receiver, thereby creating a considerable off-axis scan loss. In order to alleviate this problem, and yet maintaining a planar design, the antenna array is designed to be consisting of subarrays with a fixed scanned beam at 45°. Therefore, the array of fixed-beam subarrays needs to be scanned ±20° around their peak beam, which results in a higher combined gain/directivity. The proposed antenna demonstrates the minimum measured gain of 23.1 dBi throughout the scan range (for 65° scan) with the peak gain of 26.5 dBi (for 32° scan) at 12 GHz while occupying a circular aperture of 26 cm in diameter.",
"title": ""
},
{
"docid": "neg:1840558_6",
"text": "Sparse data and irregular data access patterns are hugely important to many applications, such as molecular dynamics and data analytics. Accelerating applications with these characteristics requires maximizing usable bandwidth at all levels of the memory hierarchy, reducing latency, maximizing reuse of moved data, and minimizing the amount the data is moved in the first place. Many specialized data structures have evolved to meet these requisites for specific applications, however, there are no general solutions for improving the performance of sparse applications. The structure of the memory hierarchy itself, conspires against general hardware for accelerating sparse applications, being designed for efficient bulk transport of data versus one byte at a time. This paper presents a general solution for a programmable data rearrangement/reduction engine near-memory to deliver bulk byte-addressable data access. The key technology presented in this paper is the Sparse Data Reduction Engine (SPDRE), which builds previous similar efforts to provide a practical near-memory reorganization engine. In addition to the primary contribution, this paper describes a programmer interface that enables all combinations of rearrangement, analysis of the methodology on a small series of applications, and finally a discussion of future work.",
"title": ""
},
{
"docid": "neg:1840558_7",
"text": "Recurrent neural networks (RNNs) provide state-of-the-art accuracy for performing analytics on datasets with sequence (e.g., language model). This paper studied a state-of-the-art RNN variant, Gated Recurrent Unit (GRU). We first proposed memoization optimization to avoid 3 out of the 6 dense matrix vector multiplications (SGEMVs) that are the majority of the computation in GRU. Then, we study the opportunities to accelerate the remaining SGEMVs using FPGAs, in comparison to 14-nm ASIC, GPU, and multi-core CPU. Results show that FPGA provides superior performance/Watt over CPU and GPU because FPGA's on-chip BRAMs, hard DSPs, and reconfigurable fabric allow for efficiently extracting fine-grained parallelisms from small/medium size matrices used by GRU. Moreover, newer FPGAs with more DSPs, on-chip BRAMs, and higher frequency have the potential to narrow the FPGA-ASIC efficiency gap.",
"title": ""
},
{
"docid": "neg:1840558_8",
"text": "Accurate vessel detection in retinal images is an important and difficult task. Detection is made more challenging in pathological images with the presence of exudates and other abnormalities. In this paper, we present a new unsupervised vessel segmentation approach to address this problem. A novel inpainting filter, called neighborhood estimator before filling, is proposed to inpaint exudates in a way that nearby false positives are significantly reduced during vessel enhancement. Retinal vascular enhancement is achieved with a multiple-scale Hessian approach. Experimental results show that the proposed vessel segmentation method outperforms state-of-the-art algorithms reported in the recent literature, both visually and in terms of quantitative measurements, with overall mean accuracy of 95.62% on the STARE dataset and 95.81% on the HRF dataset.",
"title": ""
},
{
"docid": "neg:1840558_9",
"text": "Visual restoration and recognition are traditionally addressed in pipeline fashion, i.e. denoising followed by classification. Instead, observing correlations between the two tasks, for example clearer image will lead to better categorization and vice visa, we propose a joint framework for visual restoration and recognition for handwritten images, inspired by advances in deep autoencoder and multi-modality learning. Our model is a 3-pathway deep architecture with a hidden-layer representation which is shared by multi-inputs and outputs, and each branch can be composed of a multi-layer deep model. Thus, visual restoration and classification can be unified using shared representation via non-linear mapping, and model parameters can be learnt via backpropagation. Using MNIST and USPS data corrupted with structured noise, the proposed framework performs at least 20% better in classification than separate pipelines, as well as clearer recovered images.",
"title": ""
},
{
"docid": "neg:1840558_10",
"text": "In this paper, we extend an attention-based neural machine translation (NMT) model by allowing it to access an entire training set of parallel sentence pairs even after training. The proposed approach consists of two stages. In the first stage– retrieval stage–, an off-the-shelf, black-box search engine is used to retrieve a small subset of sentence pairs from a training set given a source sentence. These pairs are further filtered based on a fuzzy matching score based on edit distance. In the second stage–translation stage–, a novel translation model, called search engine guided NMT (SEG-NMT), seamlessly uses both the source sentence and a set of retrieved sentence pairs to perform the translation. Empirical evaluation on three language pairs (En-Fr, En-De, and En-Es) shows that the proposed approach significantly outperforms the baseline approach and the improvement is more significant when more relevant sentence pairs were retrieved.",
"title": ""
},
{
"docid": "neg:1840558_11",
"text": "What causes adolescents to be materialistic? Prior research shows parents and peers are an important influence. Researchers have viewed parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents. We take a different approach, viewing parents and peers as important sources of emotional support and psychological well-being, which increase self-esteem in adolescents. Supportive parents and peers boost adolescents' self-esteem, which decreases their need to turn to material goods to develop positive selfperceptions. In a study with 12–18 year-olds, we find support for our view that self-esteem mediates the relationship between parent/peer influence and adolescent materialism. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved. Rising levels of materialism among adolescents have raised concerns among parents, educators, and consumer advocates.More than half of 9–14 year-olds agree that, “when you grow up, the more money you have, the happier you are,” and over 60% agree that, “the only kind of job I want when I grow up is one that getsme a lot of money” (Goldberg, Gorn, Peracchio, & Bamossy, 2003). These trends have lead social scientists to conclude that adolescents today are “...the most brand-oriented, consumer-involved, and materialistic generation in history” (Schor, 2004, p. 13). What causes adolescents to bematerialistic? Themost consistent finding to date is that adolescent materialism is related to the interpersonal influences in their lives—notably, parents and peers. The vast majority of research is based on a social influence perspective, viewing parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents through modeling, reinforcement, and social interaction. In early research, Churchill and Moschis (1979) proposed that adolescents learn rational aspects of consumption from their parents and social aspects of consumption (materialism) from their peers. Moore and ⁎ Corresponding author. Villanova School of Business, 800 Lancaster Avenue, Villanova, PA 19085, USA. Fax: +1 520 621 7483. E-mail addresses: [email protected] (L.N. Chaplin), [email protected] (D.R. John). 1057-7408/$ see front matter © 2010 Society for Consumer Psychology. Publish doi:10.1016/j.jcps.2010.02.002 Moschis (1981) examined family communication styles, suggesting that certain styles (socio-oriented) promote conformity to others' views, setting the stage for materialism. In later work, Goldberg et al. (2003) posited that parents transmit materialistic values to their offspring by modeling these values. Researchers have also reported positive correlations betweenmaterialism and socio-oriented family communication (Moore & Moschis, 1981), parents' materialism (Flouri, 2004; Goldberg et al., 2003), peer communication about consumption (Churchill & Moschis, 1979; Moschis & Churchill, 1978), and susceptibility to peer influence (Achenreiner, 1997; Banerjee & Dittmar, 2008; Roberts, Manolis, & Tanner, 2008). We take a different approach. Instead of viewing parents and peers as socialization agents that transmit consumption attitudes and values, we consider parents and peers as important sources of emotional support and psychological well-being, which lay the foundation for self-esteem in adolescents. 
We argue that supportive parents and peers boost adolescents' self-esteem, which decreases their need to embrace material goods as a way to develop positive self-perceptions. Prior research is suggestive of our perspective. In studies with young adults, researchers have found a link between (1) lower parental support (cold and controlling mothers) and a focus on financial success aspirations (Kasser, Ryan, Zax, & Sameroff, 1995: 18 year-olds) and (2) lower parental support (less affection and supervision) in ed by Elsevier Inc. All rights reserved. 1 Support refers to warmth, affection, nurturance, and acceptance (Becker, 1981; Ellis, Thomas, and Rollins, 1976). Parental nurturance involves the development of caring relationships, in which parents reason with their children about moral conflicts, involve them in family decision making, and set high moral expectations (Maccoby, 1984; Staub, 1988). 177 L.N. Chaplin, D.R. John / Journal of Consumer Psychology 20 (2010) 176–184 divorced families and materialism (Rindfleisch, Burroughs, & Denton, 1997: 20–32 year-olds). These studies do not focus on adolescents, do not examine peer factors, nor do they include measures of self-esteem or self-worth. But, they do suggest that parents and peers can influence materialism in ways other than transmitting consumption attitudes and values, which has been the focus of prior research on adolescent materialism. In this article, we seek preliminary evidence for our view by testing whether self-esteem mediates the relationship between parent/peer influence and adolescent materialism. We include parent and peer factors that inhibit or encourage adolescent materialism, which allows us to test self-esteem as a mediator under both conditions. For parental influence, we include parental support (inhibits materialism) and parents' materialism (encourages materialism). Both factors have appeared in prior materialism studies, but our interest here is whether self-esteem is a mediator of their influence on materialism. For peer influence, we include peer support (inhibits materialism) and peers' materialism (encourages materialism), with our interest being whether self-esteem is a mediator of their influence on materialism. These peer factors are new to materialism research and offer potentially new insights. Contrary to prior materialism research, which views peers as encouraging materialism among adolescents, we also consider the possibility that peers may be a positive influence by providing emotional support in the same way that parents do. Our research offers several contributions to understanding materialism in adolescents. First, we provide a broader perspective on the role of parents and peers as influences on adolescent materialism. The social influence perspective, which views parents and peers as transmitting consumption attitudes and values, has dominated materialism research with children and adolescents since its early days. We provide a broader perspective by considering parents and peers as much more than socialization agents—they contribute heavily to the sense of self-esteem that adolescents possess, which influences materialism. Second, our perspective provides a process explanation for why parents and peers influence materialism that can be empirically tested. Prior research offers a valuable set of findings about what factors correlate with adolescent materialism, but the process responsible for the correlation is left untested. 
Finally, we provide a parsimonious explanation for why different factors related to parent and peer influence affect adolescent materialism. Although the number of potential parent and peer factors is large, it is possible that there is a common thread (self-esteem) for why these factors influence adolescent materialism. Isolating mediators, such as self-esteem, could provide the basis for developing a conceptual framework to tie together findings across prior studies with different factors, providing a more unified explanation for why certain adolescents are more vulnerable to materialism.",
"title": ""
},
{
"docid": "neg:1840558_12",
"text": "We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation. Our approach requires no changes to the model architecture of a standard NMT system, but instead introduces a new component, the contextual parameter generator (CPG), that generates the parameters of the system (e.g., weights in a neural network). This parameter generator accepts source and target language embeddings as input, and generates the parameters for the encoder and the decoder, respectively. The rest of the model remains unchanged and is shared across all languages. We show how this simple modification enables the system to use monolingual data for training and also perform zero-shot translation. We further show it is able to surpass state-of-theart performance for both the IWSLT-15 and IWSLT-17 datasets and that the learned language embeddings are able to uncover interesting relationships between languages.",
"title": ""
},
{
"docid": "neg:1840558_13",
"text": "To improve data availability and resilience MapReduce frameworks use file systems that replicate data uniformly. However, analysis of job logs from a large production cluster shows wide disparity in data popularity. Machines and racks storing popular content become bottlenecks; thereby increasing the completion times of jobs accessing this data even when there are machines with spare cycles in the cluster. To address this problem, we present Scarlett, a system that replicates blocks based on their popularity. By accurately predicting file popularity and working within hard bounds on additional storage, Scarlett causes minimal interference to running jobs. Trace driven simulations and experiments in two popular MapReduce frameworks (Hadoop, Dryad) show that Scarlett effectively alleviates hotspots and can speed up jobs by 20.2%.",
"title": ""
},
{
"docid": "neg:1840558_14",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemes have been proposed but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.",
"title": ""
},
{
"docid": "neg:1840558_15",
"text": "This work integrates deep learning and symbolic programming paradigms into a unified method for deploying applications to a neuromorphic system. The approach removes the need for coordination among disjoint co-processors by embedding both types entirely on a neuromorphic processor. This integration provides a flexible approach for using each technique where it performs best. A single neuromorphic solution can seamlessly deploy neural networks for classifying sensor-driven noisy data obtained from the environment alongside programmed symbolic logic to processes the input from the networks. We present a concrete implementation of the proposed framework using the TrueNorth neuromorphic processor to play blackjack using a pre-programmed optimal strategy algorithm combined with a neural network trained to classify card images as input. Future extensions of this approach will develop a symbolic neuromorphic compiler for automatically creating networks from a symbolic programming language.",
"title": ""
},
{
"docid": "neg:1840558_16",
"text": "To detect illegal copies of copyrighted images, recent copy detection methods mostly rely on the bag-of-visual-words (BOW) model, in which local features are quantized into visual words for image matching. However, both the limited discriminability of local features and the BOW quantization errors will lead to many false local matches, which make it hard to distinguish similar images from copies. Geometric consistency verification is a popular technology for reducing the false matches, but it neglects global context information of local features and thus cannot solve this problem well. To address this problem, this paper proposes a global context verification scheme to filter false matches for copy detection. More specifically, after obtaining initial scale invariant feature transform (SIFT) matches between images based on the BOW quantization, the overlapping region-based global context descriptor (OR-GCD) is proposed for the verification of these matches to filter false matches. The OR-GCD not only encodes relatively rich global context information of SIFT features but also has good robustness and efficiency. Thus, it allows an effective and efficient verification. Furthermore, a fast image similarity measurement based on random verification is proposed to efficiently implement copy detection. In addition, we also extend the proposed method for partial-duplicate image detection. Extensive experiments demonstrate that our method achieves higher accuracy than the state-of-the-art methods, and has comparable efficiency to the baseline method based on the BOW quantization.",
"title": ""
},
{
"docid": "neg:1840558_17",
"text": "In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be. In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov Complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee’s level of intelligence in order to obtain an intelligence score within a limited time.",
"title": ""
},
{
"docid": "neg:1840558_18",
"text": "Background: The extrahepatic biliary tree with the exact anatomic features of the arterial supply observed by laparoscopic means has not been described heretofore. Iatrogenic injuries of the extrahepatic biliary tree and neighboring blood vessels are not rare. Accidents involving vessels or the common bile duct during laparoscopic cholecystectomy, with or without choledocotomy, can be avoided by careful dissection of Calot's triangle and the hepatoduodenal ligament. Methods: We performed 244 laparoscopic cholecystectomies over a 2-year period between January 1, 1995 and January 1, 1997. Results: In 187 of 244 consecutive cases (76.6%), we found a typical arterial supply anteromedial to the cystic duct, near the sentinel cystic lymph node. In the other cases, there was an atypical arterial supply, and 27 of these cases (11.1%) had no cystic artery in Calot's triangle. A typical blood supply and accessory arteries were observed in 18 cases (7.4%). Conclusion: Young surgeons who are not yet familiar with the handling of an anatomically abnormal cystic blood supply need to be more aware of the precise anatomy of the extrahepatic biliary tree.",
"title": ""
},
{
"docid": "neg:1840558_19",
"text": "A great amount of research has been developed around the early cognitive impairments that best predict the onset of Alzheimer's disease (AD). Given that mild cognitive impairment (MCI) is no longer considered to be an intermediate state between normal aging and AD, new paths have been traced to acquire further knowledge about this condition and its subtypes, and to determine which of them have a higher risk of conversion to AD. It is now known that other deficits besides episodic and semantic memory impairments may be present in the early stages of AD, such as visuospatial and executive function deficits. Furthermore, recent investigations have proven that the hippocampus and the medial temporal lobe structures are not only involved in memory functioning, but also in visual processes. These early changes in memory, visual, and executive processes may also be detected with the study of eye movement patterns in pathological conditions like MCI and AD. In the present review, we attempt to explore the existing literature concerning these patterns of oculomotor changes and how these changes are related to the early signs of AD. In particular, we argue that deficits in visual short-term memory, specifically in iconic memory, attention processes, and inhibitory control, may be found through the analysis of eye movement patterns, and we discuss how they might help to predict the progression from MCI to AD. We add that the study of eye movement patterns in these conditions, in combination with neuroimaging techniques and appropriate neuropsychological tasks based on rigorous concepts derived from cognitive psychology, may highlight the early presence of cognitive impairments in the course of the disease.",
"title": ""
}
] |
1840559 | FEDD: Feature Extraction for Explicit Concept Drift Detection in time series | [
{
"docid": "pos:1840559_0",
"text": "0167-8655/$ see front matter 2011 Published by doi:10.1016/j.patrec.2011.08.019 ⇑ Corresponding author. Tel.: +44 (0) 2075940990; E-mail addresses: [email protected], gr203@i ic.ac.uk (N.M. Adams), [email protected] (D.K. Tas Hand). Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an exponentially weighted moving average (EWMA) chart to monitor the misclassification rate of an streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time. 2011 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "pos:1840559_1",
"text": "Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the class-probability distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. A central idea is the concept of context: a set of contiguous examples where the distribution is stationary. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error wil decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example kw, and the drift level at example kd. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since kw. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and also with learning the new concept. We also observe that the method is independent of the learning algorithm.",
"title": ""
}
] | [
{
"docid": "neg:1840559_0",
"text": "Semantic Reliability is a novel correctness criterion for multicast protocols based on the concept of message obsolescence: A message becomes obsolete when its content or purpose is superseded by a subsequent message. By exploiting obsolescence, a reliable multicast protocol may drop irrelevant messages to find additional buffer space for new messages. This makes the multicast protocol more resilient to transient performance perturbations of group members, thus improving throughput stability. This paper describes our experience in developing a suite of semantically reliable protocols. It summarizes the motivation, definition, and algorithmic issues and presents performance figures obtained with a running implementation. The data obtained experimentally is compared with analytic and simulation models. This comparison allows us to confirm the validity of these models and the usefulness of the approach. Finally, the paper reports the application of our prototype to distributed multiplayer games.",
"title": ""
},
{
"docid": "neg:1840559_1",
"text": "Grid cells in the entorhinal cortex of freely moving rats provide a strikingly periodic representation of self-location which is indicative of very specific computational mechanisms. However, the existence of grid cells in humans and their distribution throughout the brain are unknown. Here we show that the preferred firing directions of directionally modulated grid cells in rat entorhinal cortex are aligned with the grids, and that the spatial organization of grid-cell firing is more strongly apparent at faster than slower running speeds. Because the grids are also aligned with each other, we predicted a macroscopic signal visible to functional magnetic resonance imaging (fMRI) in humans. We then looked for this signal as participants explored a virtual reality environment, mimicking the rats’ foraging task: fMRI activation and adaptation showing a speed-modulated six-fold rotational symmetry in running direction. The signal was found in a network of entorhinal/subicular, posterior and medial parietal, lateral temporal and medial prefrontal areas. The effect was strongest in right entorhinal cortex, and the coherence of the directional signal across entorhinal cortex correlated with spatial memory performance. Our study illustrates the potential power of combining single-unit electrophysiology with fMRI in systems neuroscience. Our results provide evidence for grid-cell-like representations in humans, and implicate a specific type of neural representation in a network of regions which supports spatial cognition and also autobiographical memory.",
"title": ""
},
{
"docid": "neg:1840559_2",
"text": "The negative capacitance (NC) of ferroelectric materials has paved the way for achieving sub-60-mV/decade switching feature in complementary metal-oxide-semiconductor (CMOS) field-effect transistors, by simply inserting a ferroelectric thin layer in the gate stack. However, in order to utilize the ferroelectric capacitor (as a breakthrough technique to overcome the Boltzmann limit of the device using thermionic emission process), the thickness of the ferroelectric layer should be scaled down to sub-10-nm for ease of integration with conventional CMOS logic devices. In this paper, we demonstrate an NC fin-shaped field-effect transistor (FinFET) with a 6-nm-thick HfZrO ferroelectric capacitor. The performance parameters of NC FinFET such as on-/off-state currents and subthreshold slope are compared with those of the conventional FinFET. Furthermore, a repetitive and reliable steep switching feature of the NC FinFET at various drain voltages is demonstrated.",
"title": ""
},
{
"docid": "neg:1840559_3",
"text": "Background A precise understanding of the anatomical structures of the heart and great vessels is essential for surgical planning in order to avoid unexpected findings. Rapid prototyping techniques are used to print three-dimensional (3D) replicas of patients’ cardiovascular anatomy based on 3D clinical images such as MRI. The purpose of this study is to explore the use of 3D patient-specific cardiovascular models using rapid prototyping techniques to improve surgical planning in patients with complex congenital heart disease.",
"title": ""
},
{
"docid": "neg:1840559_4",
"text": "This paper reports on a mixed-method research project that examined the attitudes of computer users toward accidental/naive information security (InfoSec) behaviour. The aim of this research was to investigate the extent to which attitude data elicited from repertory grid technique (RGT) interviewees support their responses collected via an online survey questionnaire. Twenty five university students participated in this two-stage project. Individual attitude scores were calculated for each of the research methods and were compared across seven behavioural focus areas using Spearman product-moment correlation coefficient. The two sets of data exhibited a small-to-medium correlation when individual attitudes were analysed for each of the focus areas. In summary, this exploratory research indicated that the two research approaches were reasonably complementary and the RGT interview results tended to triangulate the attitude scores derived from the online survey questionnaire, particularly in regard to attitudes toward Incident Reporting behaviour, Email Use behaviour and Social Networking Site Use behaviour. The results also highlighted some attitude items in the online questionnaire that need to be reviewed for clarity, relevance and non-ambiguity.",
"title": ""
},
{
"docid": "neg:1840559_5",
"text": "Traditional algorithms to design hand-crafted features for action recognition have been a hot research area in last decade. Compared to RGB video, depth sequence is more insensitive to lighting changes and more discriminative due to its capability to catch geometric information of object. Unlike many existing methods for action recognition which depend on well-designed features, this paper studies deep learning-based action recognition using depth sequences and the corresponding skeleton joint information. Firstly, we construct a 3Dbased Deep Convolutional Neural Network (3DCNN) to directly learn spatiotemporal features from raw depth sequences, then compute a joint based feature vector named JointVector for each sequence by taking into account the simple position and angle information between skeleton joints. Finally, support vector machine (SVM) classification results from 3DCNN learned features and JointVector are fused to take action recognition. Experimental results demonstrate that our method can learn feature representation which is time-invariant and viewpoint-invariant from depth sequences. The proposed method achieves comparable results to the state-of-the-art methods on the UTKinect-Action3D dataset and achieves superior performance in comparison to baseline methods on the MSR-Action3D dataset. We further investigate the generalization of the trained model by transferring the learned features from one dataset (MSREmail addresses: [email protected] (Zhi Liu), [email protected] (Chenyang Zhang), [email protected] (Yingli Tian) Preprint submitted to Image and Vision Computing April 11, 2016 Action3D) to another dataset (UTKinect-Action3D) without retraining and obtain very promising classification accuracy.",
"title": ""
},
{
"docid": "neg:1840559_6",
"text": "To handle the colorization problem, we propose a deep patch-wise colorization model for grayscale images. Distinguished with some constructive color mapping models with complicated mathematical priors, we alternately apply two loss metric functions in the deep model to suppress the training errors under the convolutional neural network. To address the potential boundary artifacts, a refinement scheme is presented inspired by guided filtering. In the experiment section, we summarize our network parameters setting in practice, including the patch size, amount of layers and the convolution kernels. Our experiments demonstrate this model can output more satisfactory visual colorizations compared with the state-of-the-art methods. Moreover, we prove our method has extensive application domains and can be applied to stylistic colorization.",
"title": ""
},
{
"docid": "neg:1840559_7",
"text": "Multicarrier phase-based ranging is fast emerging as a cost-optimized solution for a wide variety of proximitybased applications due to its low power requirement, low hardware complexity and compatibility with existing standards such as ZigBee and 6LoWPAN. Given potentially critical nature of the applications in which phasebased ranging can be deployed (e.g., access control, asset tracking), it is important to evaluate its security guarantees. Therefore, in this work, we investigate the security of multicarrier phase-based ranging systems and specifically focus on distance decreasing relay attacks that have proven detrimental to the security of proximity-based access control systems (e.g., vehicular passive keyless entry and start systems). We show that phase-based ranging, as well as its implementations, are vulnerable to a variety of distance reduction attacks. We describe different attack realizations and verify their feasibility by simulations and experiments on a commercial ranging system. Specifically, we successfully reduced the estimated range to less than 3m even though the devices were more than 50 m apart. We discuss possible countermeasures against such attacks and illustrate their limitations, therefore demonstrating that phase-based ranging cannot be fully secured against distance decreasing attacks.",
"title": ""
},
{
"docid": "neg:1840559_8",
"text": "Recent developments in deep convolutional neural networks (DCNNs) have shown impressive performance improvements on various object detection/recognition problems. This has been made possible due to the availability of large annotated data and a better understanding of the nonlinear mapping between images and class labels, as well as the affordability of powerful graphics processing units (GPUs). These developments in deep learning have also improved the capabilities of machines in understanding faces and automatically executing the tasks of face detection, pose estimation, landmark localization, and face recognition from unconstrained images and videos. In this article, we provide an overview of deep-learning methods used for face recognition. We discuss different modules involved in designing an automatic face recognition system and the role of deep learning for each of them. Some open issues regarding DCNNs for face recognition problems are then discussed. This article should prove valuable to scientists, engineers, and end users working in the fields of face recognition, security, visual surveillance, and biometrics.",
"title": ""
},
{
"docid": "neg:1840559_9",
"text": "This paper presents a symbolic-execution-based approach and its implementation by POM/JLEC for checking the logical equivalence between two programs in the system replacement context. The primary contributions lie in the development of POM/JLEC, a fully automatic equivalence checker for Java enterprise systems. POM/JLEC consists of three main components: Domain Specific Pre-Processor for extracting the target code from the original system and adjusting it to a suitable scope for verification, Symbolic Execution for generating symbolic summaries, and solver-based EQuality comparison for comparing the symbolic summaries together and returning counter examples in the case of non-equivalence. We have evaluated POM/JLEC with a large-scale benchmark created from the function layer code of an industrial enterprise system. The evaluation result with 54% test cases passed shows the feasibility for deploying its mature version into software development industry.",
"title": ""
},
{
"docid": "neg:1840559_10",
"text": "Congenitally missing teeth are frequently presented to the dentist. Interdisciplinary approach may be needed for the proper treatment plan. The available treatment modalities to replace congenitally missing teeth include prosthodontic fixed and removable prostheses, resin bonded retainers, orthodontic movement of maxillary canine to the lateral incisor site and single tooth implants. Dental implants offer a promising treatment option for placement of congenitally missing teeth. Interdisciplinary approach may be needed in these cases. This article aims to present a case report of replacement of unilaterally congenitally missing maxillary lateral incisors with dental implants.",
"title": ""
},
{
"docid": "neg:1840559_11",
"text": "With the introduction of IT to conduct business we accepted the loss of a human control step. For this reason, the introduction of new IT systems was accompanied by the development of the authorization concept. But since, in reality, there is no such thing as 100 per cent security; auditors are commissioned to examine all transactions for misconduct. Since the data exists in digital form already, it makes sense to use computer-based processes to analyse it. Such processes allow the auditor to carry out extensive checks within an acceptable timeframe and with reasonable effort. Once the algorithm has been defined, it only takes sufficient computing power to evaluate larger quantities of data. This contribution presents the state of the art for IT-based data analysis processes that can be used to identify fraudulent activities.",
"title": ""
},
{
"docid": "neg:1840559_12",
"text": "Lacking the presence of human and social elements is claimed one major weakness that is hindering the growth of e-commerce. The emergence of social commerce (SC) might help ameliorate this situation. Social commerce is a new evolution of e-commerce that combines the commercial and social activities by deploying social technologies into e-commerce sites. Social commerce reintroduces the social aspect of shopping to e-commerce, increasing the degree of social presences in online environment. Drawing upon the social presence theory, this study theorizes the nature of social aspect in online SC marketplace by proposing a set of three social presence variables. These variables are then hypothesized to have positive impacts on trusting beliefs which in turn result in online purchase behaviors. The research model is examined via data collected from a typical ecommerce site in China. Our findings suggest that social presence factors grounded in social technologies contribute significantly to the building of the trustworthy online exchanging relationships. In doing so, this paper confirms the positive role of social aspect in shaping online purchase behaviors, providing a theoretical evidence for the fusion of social and commercial activities. Finally, this paper introduces a new perspective of e-commerce and calls more attention to this new phenomenon.",
"title": ""
},
{
"docid": "neg:1840559_13",
"text": "Side-channel attacks pose a critical threat to the deployment of secure embedded systems. Differential-power analysis is a technique relying on measuring the power consumption of device while it computes a cryptographic primitive, and extracting the secret information from it exploiting the knowledge of the operations involving the key. There is no open literature describing how to properly employ Digital Signal Processing (DSP) techniques in order to improve the effectiveness of the attacks. This paper presents a pre-processing technique based on DSP, reducing the number of traces needed to perform an attack by an order of magnitude with respect to the results obtained with raw datasets, and puts it into practical use attacking a commercial 32-bit software implementation of AES running on a Cortex-M3 CPU. The main contribution of this paper is proposing a leakage model for software implemented cryptographic primitives and an effective framework to extract it.",
"title": ""
},
{
"docid": "neg:1840559_14",
"text": "There is extensive evidence indicating that new neurons are generated in the dentate gyrus of the adult mammalian hippocampus, a region of the brain that is important for learning and memory. However, it is not known whether these new neurons become functional, as the methods used to study adult neurogenesis are limited to fixed tissue. We use here a retroviral vector expressing green fluorescent protein that only labels dividing cells, and that can be visualized in live hippocampal slices. We report that newly generated cells in the adult mouse hippocampus have neuronal morphology and can display passive membrane properties, action potentials and functional synaptic inputs similar to those found in mature dentate granule cells. Our findings demonstrate that newly generated cells mature into functional neurons in the adult mammalian brain.",
"title": ""
},
{
"docid": "neg:1840559_15",
"text": "A green and reliable method using supercritical fluid extraction (SFE) and molecular distillation (MD) was optimized for the separation and purification of standardized typical volatile components fraction (STVCF) from turmeric to solve the shortage of reference compounds in quality control (QC) of volatile components. A high quality essential oil with 76.0% typical components of turmeric was extracted by SFE. A sequential distillation strategy was performed by MD. The total recovery and purity of prepared STVCF were 97.3% and 90.3%, respectively. Additionally, a strategy, i.e., STVCF-based qualification and quantitative evaluation of major bioactive analytes by multiple calibrated components, was proposed to easily and effectively control the quality of turmeric. Compared with the individual calibration curve method, the STVCF-based quantification method was demonstrated to be credible and was effectively adapted for solving the shortage of reference volatile compounds and improving the QC of typical volatile components in turmeric, especially its functional products.",
"title": ""
},
{
"docid": "neg:1840559_16",
"text": "In the last decade, the ease of online payment has opened up many new opportunities for e-commerce, lowering the geographical boundaries for retail. While e-commerce is still gaining popularity, it is also the playground of fraudsters who try to misuse the transparency of online purchases and the transfer of credit card records. This paper proposes APATE, a novel approach to detect fraudulent credit card ∗NOTICE: this is the author’s version of a work that was accepted for publication in Decision Support Systems in May 8, 2015, published online as a self-archive copy after the 24 month embargo period. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Please cite this paper as follows: Van Vlasselaer, V., Bravo, C., Caelen, O., Eliassi-Rad, T., Akoglu, L., Snoeck, M., Baesens, B. (2015). APATE: A novel approach for automated credit card transaction fraud detection using network-based extensions. Decision Support Systems, 75, 38-48. Available Online: http://www.sciencedirect.com/science/article/pii/S0167923615000846",
"title": ""
},
{
"docid": "neg:1840559_17",
"text": "Delay Tolerant Networks (DTN) are networks of self-organizing wireless nodes, where end-to-end connectivity is intermittent. In these networks, forwarding decisions are made using locally collected knowledge about node behavior (e.g., past contacts between nodes) to predict which nodes are likely to deliver a content or bring it closer to the destination. One promising way of predicting future contact opportunities is to aggregate contacts seen in the past to a social graph and use metrics from complex network analysis (e.g., centrality and similarity) to assess the utility of a node to carry a piece of content. This aggregation presents an inherent tradeoff between the amount of time-related information lost during this mapping and the predictive capability of complex network analysis in this context. In this paper, we use two recent DTN routing algorithms that rely on such complex network analysis, to show that contact aggregation significantly affects the performance of these protocols. We then propose simple contact mapping algorithms that demonstrate improved performance up to a factor of 4 in delivery ratio, and robustness to various connectivity scenarios for both protocols.",
"title": ""
},
{
"docid": "neg:1840559_18",
"text": "In this paper, an all NMOS voltage-mode four-quadrant analog multiplier, based on a basic NMOS differential amplifier that can produce the output signal in voltage form without using resistors, is presented. The proposed circuit has been simulated with SPICE and achieved -3 dB bandwidth of 120 MHz. The power consumption is about 3.6 mW from a /spl plusmn/2.5 V power supply voltage, and the total harmonic distortion is 0.85% with a 1 V input signal.",
"title": ""
},
{
"docid": "neg:1840559_19",
"text": "With the availability of vast collection of research articles on internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of input text which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words; stop words lists following a Poisson distribution and keyword adjacency stop words lists. In a case study to extract keywords from scientific abstracts of research project funded by the European Research Council in the domain of Life sciences, we show that a combination of those techniques gives better recall values than standard stop words or any of the two techniques alone. The method we propose can be implemented to obtain stop words lists in an automatic way by using author provided keywords for a set of abstracts. The stop words lists generated can be updated easily by adding new texts to the training corpus. Conference Topic Methods and techniques",
"title": ""
}
] |
1840560 | Video Imagination from a Single Image with Transformation Generation | [
{
"docid": "pos:1840560_0",
"text": "We address the problem of synthesizing new video frames in an existing video, either in-between existing frames (interpolation), or subsequent to them (extrapolation). This problem is challenging because video appearance and motion can be highly complex. Traditional optical-flow-based solutions often fail where flow estimation is challenging, while newer neural-network-based methods that hallucinate pixel values directly often produce blurry results. We combine the advantages of these two methods by training a deep network that learns to synthesize video frames by flowing pixel values from existing ones, which we call deep voxel flow. Our method requires no human supervision, and any video can be used as training data by dropping, and then learning to predict, existing frames. The technique is efficient, and can be applied at any video resolution. We demonstrate that our method produces results that both quantitatively and qualitatively improve upon the state-of-the-art.",
"title": ""
}
] | [
{
"docid": "neg:1840560_0",
"text": "This paper provides an overview of Data warehousing, Data Mining, OLAP, OLTP technologies, exploring the features, applications and the architecture of Data Warehousing. The data warehouse supports on-line analytical processing (OLAP), the functional and performance requirements of which are quite different from those of the on-line transaction processing (OLTP) applications traditionally supported by the operational databases. Data warehouses provide on-line analytical processing (OLAP) tools for the interactive analysis of multidimensional data of varied granularities, which facilitates effective data mining. Data warehousing and on-line analytical processing (OLAP) are essential elements of decision support, which has increasingly become a focus of the database industry. OLTP is customer-oriented and is used for transaction and query processing by clerks, clients and information technology professionals. An OLAP system is market-oriented and is used for data analysis by knowledge workers, including managers, executives and analysts. Data warehousing and OLAP have emerged as leading technologies that facilitate data storage, organization and then, significant retrieval. Decision support places some rather different requirements on database technology compared to traditional on-line transaction processing applications.",
"title": ""
},
{
"docid": "neg:1840560_1",
"text": "This document describes Virtual eXtensible Local Area Network (VXLAN), which is used to address the need for overlay networks within virtualized data centers accommodating multiple tenants. The scheme and the related protocols can be used in networks for cloud service providers and enterprise data centers. This memo documents the deployed VXLAN protocol for the benefit of the Internet community.",
"title": ""
},
{
"docid": "neg:1840560_2",
"text": "This paper argues that current technology-driven implementations of Smart Cities, although being an important step in the right direction, fall short in exploiting the most important human dimension of cities. The paper argues therefore in support of the concept of Human Smart Cities. In a Human Smart City, people rather than technology are the true actors of the urban \"smartness\". The creation of a participatory innovation ecosystem in which citizens and communities interact with public authorities and knowledge developers is key. Such collaborative interaction leads to co-designed user centered innovation services and calls for new governance models. The urban transformation in which citizens are the main \"drivers of change\" through their empowerment and motivation ensures that the major city challenges can be addressed, including sustainable behavior transformations. Furthermore, the authors argue that the city challenges can be more effectively addressed at the scale of neighborhood and they provide examples and experiences that demonstrate the viability, importance and impact of such approach. The paper builds on the experience of implementing Human Smart Cities projects in 27 European cities located in 17 different countries. Details of the technologies, methodologies, tools and policies are illustrated with examples extracted from the project My Neighbourhood.",
"title": ""
},
{
"docid": "neg:1840560_3",
"text": "Personality profiling is the task of detecting personality traits of authors based on writing style. Several personality typologies exist, however, the Myers-Briggs Type Indicator (MBTI) is particularly popular in the non-scientific community, and many people use it to analyse their own personality and talk about the results online. Therefore, large amounts of self-assessed data on MBTI are readily available on social-media platforms such as Twitter. We present a novel corpus of tweets annotated with the MBTI personality type and gender of their author for six Western European languages (Dutch, German, French, Italian, Portuguese and Spanish). We outline the corpus creation and annotation, show statistics of the obtained data distributions and present first baselines on Myers-Briggs personality profiling and gender prediction for all six languages.",
"title": ""
},
{
"docid": "neg:1840560_4",
"text": "Malaria remains the leading communicable disease in Ethiopia, with around one million clinical cases of malaria reported annually. The country currently has plans for elimination for specific geographic areas of the country. Human movement may lead to the maintenance of reservoirs of infection, complicating attempts to eliminate malaria. An unmatched case–control study was conducted with 560 adult patients at a Health Centre in central Ethiopia. Patients who received a malaria test were interviewed regarding their recent travel histories. Bivariate and multivariate analyses were conducted to determine if reported travel outside of the home village within the last month was related to malaria infection status. After adjusting for several known confounding factors, travel away from the home village in the last 30 days was a statistically significant risk factor for infection with Plasmodium falciparum (AOR 1.76; p=0.03) but not for infection with Plasmodium vivax (AOR 1.17; p=0.62). Male sex was strongly associated with any malaria infection (AOR 2.00; p=0.001). Given the importance of identifying reservoir infections, consideration of human movement patterns should factor into decisions regarding elimination and disease prevention, especially when targeted areas are limited to regions within a country.",
"title": ""
},
{
"docid": "neg:1840560_5",
"text": "Recent progress in both Artificial Intelligence (AI) and Robotics have enabled the development of general purpose robot platforms that are capable of executing a wide variety of complex, temporally extended service tasks in open environments. This article introduces a novel, custom-designed multi-robot platform for research on AI, robotics, and especially Human-Robot Interaction (HRI) for service robots. Called BWIBots, the robots were designed as a part of the Building-Wide Intelligence (BWI) project at the University of Texas at Austin. The article begins with a description of, and justification for, the hardware and software design decisions underlying the BWIBots, with the aim of informing the design of such platforms in the future. It then proceeds to present an overview of various research contributions that have enabled the BWIBots to better (i) execute action sequences to complete user requests, (ii) efficiently ask questions to resolve user requests, (iii) understand human commands given in natural language, and (iv) understand human intention from afar. The article concludes with a look forward towards future research opportunities and applications enabled by the BWIBot platform.",
"title": ""
},
{
"docid": "neg:1840560_6",
"text": "We design and implement a simple zero-knowledge argument protocol for NP whose communication complexity is proportional to the square-root of the verification circuit size. The protocol can be based on any collision-resistant hash function. Alternatively, it can be made non-interactive in the random oracle model, yielding concretely efficient zk-SNARKs that do not require a trusted setup or public-key cryptography.\n Our protocol is attractive not only for very large verification circuits but also for moderately large circuits that arise in applications. For instance, for verifying a SHA-256 preimage in zero-knowledge with 2-40 soundness error, the communication complexity is roughly 44KB (or less than 34KB under a plausible conjecture), the prover running time is 140 ms, and the verifier running time is 62 ms. This proof is roughly 4 times shorter than a similar proof of ZKB++ (Chase et al., CCS 2017), an optimized variant of ZKBoo (Giacomelli et al., USENIX 2016).\n The communication complexity of our protocol is independent of the circuit structure and depends only on the number of gates. For 2-40 soundness error, the communication becomes smaller than the circuit size for circuits containing roughly 3 million gates or more. Our efficiency advantages become even bigger in an amortized setting, where several instances need to be proven simultaneously.\n Our zero-knowledge protocol is obtained by applying an optimized version of the general transformation of Ishai et al. (STOC 2007) to a variant of the protocol for secure multiparty computation of Damgard and Ishai (Crypto 2006). It can be viewed as a simple zero-knowledge interactive PCP based on \"interleaved\" Reed-Solomon codes.",
"title": ""
},
{
"docid": "neg:1840560_7",
"text": "Driving a vehicle is a task affected by an increasing number and a rising complexity of Driver Assistance Systems (DAS) resulting in a raised cognitive load of the driver, and in consequence to the distraction from the main activity of driving. A number of potential solutions have been proposed so far, however, although these techniques broaden the perception horizon (e. g. the introduction of the sense of touch as additional information modality or the utilization of multimodal instead of unimodal interfaces), they demand the attention of the driver too. In order to cope with the issues of workload and/or distraction, it would be essential to find a non-distracting and noninvasive solution for the emergence of information.\n In this work we have investigated the application of heart rate variability (HRV) analysis to electrocardiography (ECG) data for identifying driving situations of possible threat by monitoring and recording the autonomic arousal states of the driver. For verification we have collected ECG and global positioning system (GPS) data in more than 20 test journeys on two regularly driven routes during a period of two weeks.\n The first results have shown that an indicated difference of the arousal state of the driver for a dedicated point on a route, compared to its usual state, can be interpreted as a warning sign and used to notify the driver about this, perhaps safety critical, change. To provide evidence for this hypothesis it would be essential in the next step to conduct a large number of journeys on different times of the day, using different drivers and various roadways.",
"title": ""
},
{
"docid": "neg:1840560_8",
"text": "Frank K.Y. Chan Hong Kong University of Science and Technology",
"title": ""
},
{
"docid": "neg:1840560_9",
"text": "Especially for microcontroller and mobile applications, embedded nonvolatile memory is an important technology offering to reduce power and provide local persistent storage. This article describes a new resistive RAM device with fast write operation to improve the speed of embedded nonvolatile memories.",
"title": ""
},
{
"docid": "neg:1840560_10",
"text": "Impedance-source converters, an emerging technology in electric energy conversion, overcome limitations of conventional solutions by the use of specific impedance-source networks. Focus of this paper is on the topologies of galvanically isolated impedance-source dc-dc converters. These converters are particularly appropriate for distributed generation systems with renewable or alternative energy sources, which require input voltage and load regulation in a wide range. We review here the basic topologies for researchers and engineers, and classify all the topologies of the impedance-source galvanically isolated dc-dc converters according to the element that transfers energy from the input to the output: a transformer, a coupled inductor, or their combination. This classification reveals advantages and disadvantages, as well as a wide space for further research. This paper also outlines the most promising research directions in this field.",
"title": ""
},
{
"docid": "neg:1840560_11",
"text": "To achieve a compact and lightweight surgical robot with force-sensing capability, in this paper, we propose a surgical robot called “S-surge,” which is developed for robot-assisted minimally invasive surgery, focusing mainly on its mechanical design and force-sensing system. The robot consists of a 4-degree-of-freedom (DOF) surgical instrument and a 3-DOF remote center-of-motion manipulator. The manipulator is designed by adopting a double-parallelogram mechanism and spherical parallel mechanism to provide advantages such as compactness, simplicity, improved accuracy, and high stiffness. Kinematic analysis was performed in order to optimize workspace. The surgical instrument enables multiaxis force sensing including a three-axis pulling force and single-axis grasping force. In this study, it will be verified that it is feasible to carry the entire robot around thanks to its light weight (4.7 kg); therefore, allowing the robot to be applicable for telesurgery in remote areas. Finally, it will be explained how we experimented with the performance of the robot and conducted tissue manipulating task using the motion and force sensing capability of the robot in a simulated surgical setting.",
"title": ""
},
{
"docid": "neg:1840560_12",
"text": "An exact algorithm to compute an optimal 3D oriented bounding box was published in 1985 by Joseph O'Rourke, but it is slow and extremely hard to implement. In this article we propose a new approach, where the computation of the minimal-volume OBB is formulated as an unconstrained optimization problem on the rotation group SO(3,ℝ). It is solved using a hybrid method combining the genetic and Nelder-Mead algorithms. This method is analyzed and then compared to the current state-of-the-art techniques. It is shown to be either faster or more reliable for any accuracy.",
"title": ""
},
{
"docid": "neg:1840560_13",
"text": "In this paper, we present the design and performance of a portable, arbitrary waveform, multichannel constant current electrotactile stimulator that costs less than $30 in components. The stimulator consists of a stimulation controller and power supply that are less than half the size of a credit card and can produce ±15 mA at ±150 V. The design is easily extensible to multiple independent channels that can receive an arbitrary waveform input from a digital-to-analog converter, drawing only 0.9 W/channel (lasting 4–5 hours upon continuous stimulation using a 9 V battery). Finally, we compare the performance of our stimulator to similar stimulators both commercially available and developed in research.",
"title": ""
},
{
"docid": "neg:1840560_14",
"text": "Certain questions about memory address a relatively global, structural level of analysis. Is there one kind of memory or many? What brain structures or systems are involved in memory and what jobs do they do? One useful approach to such questions has focused on studies of neurological patients with memory impair-merit and parallel studies with animal models. Memory impairment sometimes occurs as a circum-scribed disorder in the absence of other intellectual deficits 1-7. In such cases, the memory impairment occurs in the context of normal scores on conventional intelligence tests, normal immediate (digit span) memory, and intact memory for very remote events. The analysis of memory impairment can provide useful information about the organization of memory and about the function of the damaged neural structures. Clinically significant memory impairment, i.e. amnesia, can occur for a variety of reasons and is typically associated with bilateral damage to the medial temporal lobe or the diencephalic midline. The severity and purity of the amnesia can vary greatly depending on the extent and pattern of damage. Standard quantitative tests are available for the assessment of memory and other cognitive functions, so that the findings from different groups of study patients can be compared 8-1°. The deficit in amnesia is readily detectable in tests of paired-associate learning and delayed recall. Indeed, amnesic patients are deficient in most tests of new learning, especially when they try to acquire an amount of information that exceeds what can be kept in mind through active rehearsal or when they try to retain information across a delay. This deficit occurs regardless of the sensory modality in which information is presented and regardless whether memory is tested by recall or recognition techniques. Moreover, the memory impairment is not limited to artificial laboratory situations, where patients are instructed explicitly to learn material that occurs in a particular episode and then are later instructed explicitly to recall the material. For example, patients can be provided items of general information with no special instruction to learn (e.g. Angel Falls is located in Venezuela); and later they can simply be asked factual questions without any reference to a recent learning episode (e.g. Where is Angel Falls located?). In this case, amnesic patients are impaired both in tests of free recall as well as in tests of recognition memory, in which the correct answer is selected from among several alternatives 11. These aspects of amnesia show …",
"title": ""
},
{
"docid": "neg:1840560_15",
"text": "This letter presents a semi-automatic approach to delineating road networks from very high resolution satellite images. The proposed method consists of three main steps. First, the geodesic method is used to extract the initial road segments that link the road seed points prescribed in advance by users. Next, a road probability map is produced based on these coarse road segments and a further direct thresholding operation separates the image into two classes of surfaces: the road and nonroad classes. Using the road class image, a kernel density estimation map is generated, upon which the geodesic method is used once again to link the foregoing road seed points. Experiments demonstrate that this proposed method can extract smooth correct road centerlines.",
"title": ""
},
{
"docid": "neg:1840560_16",
"text": "The goal of this work is to replace objects in an RGB-D scene with corresponding 3D models from a library. We approach this problem by first detecting and segmenting object instances in the scene using the approach from Gupta et al. [13]. We use a convolutional neural network (CNN) to predict the pose of the object. This CNN is trained using pixel normals in images containing rendered synthetic objects. When tested on real data, it outperforms alternative algorithms trained on real data. We then use this coarse pose estimate along with the inferred pixel support to align a small number of prototypical models to the data, and place the model that fits the best into the scene. We observe a 48% relative improvement in performance at the task of 3D detection over the current state-of-the-art [33], while being an order of magnitude faster at the same time.",
"title": ""
},
{
"docid": "neg:1840560_17",
"text": "A primary design decision in HTTP/2, the successor of HTTP/1.1, is object multiplexing. While multiplexing improves web performance in many scenarios, it still has several drawbacks due to complex cross-layer interactions. In this paper, we propose a novel multiplexing architecture called TM that overcomes many of these limitations. TM strategically leverages multiple concurrent multiplexing pipes in a transparent manner, and eliminates various types of head-of-line blocking that can severely impact user experience. TM works beyond HTTP over TCP and applies to a wide range of application and transport protocols. Extensive evaluations on LTE and wired networks show that TM substantially improves performance e.g., reduces web page load time by an average of 24% compared to SPDY, which is the basis for HTTP/2. For lossy links and concurrent transfers, the improvements are more pronounced: compared to SPDY, TM achieves up to 42% of average PLT reduction under losses and up to 90% if concurrent transfers exist.",
"title": ""
},
{
"docid": "neg:1840560_18",
"text": "In this work, a passive rectifier circuit is presented, which is operating at 868 MHz. It allows energy harvesting from low power RF waves with a high efficiency. It consists of a novel multiplier circuit design and high quality components to reduce parasitic effects, losses and reaches a low startup voltage. Using lower capacitor rises up the switching speed of the whole circuit. An inductor L serves to store energy in a magnetic field during the negative cycle wave and returns it during the positive one. A low pass filter is arranged in cascade with the rectifier circuit to reduce ripple at high frequencies and to get a stable DC signal. A 50 kΩ load is added at the output to measure the output power and to visualize the behavior of the whole circuit. Simulation results show an outstanding potential of this RF-DC converter witch has a relative high sensitivity beginning with -40 dBm.",
"title": ""
},
{
"docid": "neg:1840560_19",
"text": "A hand injury can greatly affect a person's daily life. Physicians must evaluate the state of recovery of a patient's injured hand. However, current manual evaluations of hand functions are imprecise and inconvenient. In this paper, a data glove embedded with 9-axis inertial sensors and force sensitive resistors is proposed. The proposed data glove system enables hand movement to be tracked in real-time. In addition, the system can be used to obtain useful parameters for physicians, is an efficient tool for evaluating the hand function of patients, and can improve the quality of hand rehabilitation.",
"title": ""
}
] |
1840561 | Circumferential Traveling Wave Slot Array on Cylindrical Substrate Integrated Waveguide (CSIW) | [
{
"docid": "pos:1840561_0",
"text": "The design, fabrication and characterization of 79 GHz slot antennas based on substrate integrated waveguides (SIW) are presented in this paper. All the prototypes are fabricated in a polyimide flex foil using printed circuit board (PCB) fabrication processes. A novel concept is used to minimize the leakage losses of the SIWs at millimeter wave frequencies. Different losses in the SIWs are analyzed. SIW-based single slot antenna, longitudinal and four-by-four slot array antennas are numerically and experimentally studied. Measurements of the antennas show approximately 4.7%, 5.4% and 10.7% impedance bandwidth (S11=-10 dB) with 2.8 dBi, 6.0 dBi and 11.0 dBi maximum antenna gain around 79 GHz, respectively. The measured results are in good agreement with the numerical simulations.",
"title": ""
},
{
"docid": "pos:1840561_1",
"text": "Transverse slot array antennas fed by a half-mode substrate integrated waveguide (HMSIW) are proposed and developed in this paper. The design concept of these new radiating structures is based on the study of the field distribution and phase constant along the HMSIW as well as on the resonant characteristics of a single slot etched on its top conducting wall. Two types of HMSIW-fed slot array antennas, operating, respectively, in X-band and Ka-band, are designed following a procedure similar to the design of slot array antennas fed by a dielectric-filled rectangular waveguide. Compared with slot array antennas fed by a conventional rectangular waveguide, such proposed HMSIW-fed slot array antennas possess the advantages of low profile, compact size, low cost, and easy integration with other microwave and millimeter wave planar circuits. It is worth noting that the width of HMSIW slot array antennas is reduced by nearly half compared to that of slot array antennas fed by a substrate integrated waveguide.",
"title": ""
},
{
"docid": "pos:1840561_2",
"text": "A millimeter-wave shaped-beam substrate integrated conformal array antenna is demonstrated in this paper. After discussing the influence of conformal shape on the characteristics of a substrate integrated waveguide (SIW) and a radiating slot, an array mounted on a cylindrical surface with a radius of 20 mm, i.e., 2.3 λ, is synthesized at the center frequency of 35 GHz. All components, including a 1-to-8 divider, a phase compensated network and an 8 × 8 slot array are fabricated in a single dielectric substrate together. In measurement, it has a - 27.4 dB sidelobe level (SLL) beam in H-plane and a flat-topped fan beam with -38° ~ 37° 3 dB beamwidth in E-plane at the center frequency of 35 GHz. The cross polarization is lower than -41.7 dB at the beam direction. Experimental results agree well with simulations, thus validating our design. This SIW scheme is able to solve the difficulty of integration between conformal array elements and a feed network in millimeter-wave frequency band, while avoid radiation leakage and element-to-element parasitic cross-coupling from the feed network.",
"title": ""
},
{
"docid": "pos:1840561_3",
"text": "A Ka-band compact single layer substrate integrated waveguide monopulse slot array antenna for the application of monopulse tracking system is designed, fabricated and measured. The feeding network as well as the monopulse comparator and the subarrays is integrated on the same dielectric with the size of 140 mmtimes130 mm. The bandwidth ( S11 < -10 dB) of the antenna is 7.39% with an operating frequency range of 30.80 GHz-33.14 GHz. The maximum gain at 31.5 GHz is 18.74 dB and the maximum null depth is -46.3 dB. The sum- and difference patterns of three planes: H-plane, E-plane and diagonal plane are measured and presented.",
"title": ""
},
{
"docid": "pos:1840561_4",
"text": "A substrate integrated metamaterial-based leaky-wave antenna is proposed to improve its boresight radiation bandwidth. The proposed leaky-wave antenna based on a composite right/left-handed substrate integrated waveguide consists of two leaky-wave radiator elements which are with different unit cells. The dual-element antenna prototype features boresight gain of 12.0 dBi with variation of 1.0 dB over the frequency range of 8.775-9.15 GHz or 4.2%. In addition, the antenna is able to offer a beam scanning from to with frequency from 8.25 GHz to 13.0 GHz.",
"title": ""
}
] | [
{
"docid": "neg:1840561_0",
"text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations these systems fail to work. With the current focus of research to deal with these problems, much attention has been devoted in the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer the questions like what features to use, how to describe them and several feature extraction techniques have been proposed. While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1",
"title": ""
},
{
"docid": "neg:1840561_1",
"text": "Wireless LANs, especially WiFi, have been pervasively deployed and have fostered myriad wireless communication services and ubiquitous computing applications. A primary concern in designing each scenario-tailored application is to combat harsh indoor propagation environments, particularly Non-Line-Of-Sight (NLOS) propagation. The ability to distinguish Line-Of-Sight (LOS) path from NLOS paths acts as a key enabler for adaptive communication, cognitive radios, robust localization, etc. Enabling such capability on commodity WiFi infrastructure, however, is prohibitive due to the coarse multipath resolution with mere MAC layer RSSI. In this work, we dive into the PHY layer and strive to eliminate irrelevant noise and NLOS paths with long delays from the multipath channel responses. To further break away from the intrinsic bandwidth limit of WiFi, we extend to the spatial domain and harness natural mobility to magnify the randomness of NLOS paths while retaining the deterministic nature of the LOS component. We prototype LiFi, a statistical LOS identification scheme for commodity WiFi infrastructure and evaluate it in typical indoor environments covering an area of 1500 m2. Experimental results demonstrate an overall LOS identification rate of 90.4% with a false alarm rate of 9.3%.",
"title": ""
},
{
"docid": "neg:1840561_2",
"text": "In this paper, the problem of workspace analysis of spherical parallel manipulators (SPMs) is addressed with respect to a spherical robotic wrist. The wrist is designed following a modular approach and capable of a unlimited rotation of rolling. An equation dealing with singularity surfaces is derived and branches of the singularity surfaces are identified. By using the Euler parameters, the singularity surfaces are generated in a solid unit sphere, the workspace analysis and dexterity evaluation hence being able to be performed in the confined region of the sphere. Examples of workspace evaluation of the spherical wrist and general SPMs are included to demonstrate the application of the proposed method.",
"title": ""
},
{
"docid": "neg:1840561_3",
"text": "We introduce Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and high-level steam. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable features maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN’s design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN’s simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art on small parameter networks, and a 3 million parameter ECN produces results competitive to the state-of-the-art.",
"title": ""
},
{
"docid": "neg:1840561_4",
"text": "BACKGROUND\nGuava leaf tea (GLT), exhibiting a diversity of medicinal bioactivities, has become a popularly consumed daily beverage. To improve the product quality, a new process was recommended to the Ser-Tou Farmers' Association (SFA), who began field production in 2005. The new process comprised simplified steps: one bud-two leaves were plucked at 3:00-6:00 am, in the early dawn period, followed by withering at ambient temperature (25-28 °C), rolling at 50 °C for 50-70 min, with or without fermentation, then drying at 45-50 °C for 70-90 min, and finally sorted.\n\n\nRESULTS\nThe product manufactured by this new process (named herein GLTSF) exhibited higher contents (in mg g(-1), based on dry ethyl acetate fraction/methanolic extract) of polyphenolics (417.9 ± 12.3) and flavonoids (452.5 ± 32.3) containing a compositional profile much simpler than previously found: total quercetins (190.3 ± 9.1), total myricetin (3.3 ± 0.9), total catechins (36.4 ± 5.3), gallic acid (8.8 ± 0.6), ellagic acid (39.1 ± 6.4) and tannins (2.5 ± 9.1).\n\n\nCONCLUSION\nWe have successfully developed a new process for manufacturing GLTSF with a unique polyphenolic profile. Such characteristic compositional distribution can be ascribed to the right harvesting hour in the early dawn and appropriate treatment process at low temperature, avoiding direct sunlight.",
"title": ""
},
{
"docid": "neg:1840561_5",
"text": "Professor Yrjo Paatero, in 1961, first introduced the Orthopantomography (OPG) [1]. It has been extensively used in dentistry for analysing the number and type of teeth present, caries, impacted teeth, root resorption, ankylosis, shape of the condyles [2], temporomandibular joints, sinuses, fractures, cysts, tumours and alveolar bone level [3,4]. Panoramic radiography is advised to all patients seeking orthodontic treatment; including Class I malocclusions [5].",
"title": ""
},
{
"docid": "neg:1840561_6",
"text": "Internet of Things (IoT) is one of the emerging technologies of this century and its various aspects, such as the Infrastructure, Security, Architecture and Privacy, play an important role in shaping the future of the digitalised world. Internet of Things devices are connected through sensors which have significant impacts on the data and its security. In this research, we used IoT five layered architecture of the Internet of Things to address the security and private issues of IoT enabled services and applications. Furthermore, a detailed survey on Internet of Things infrastructure, architecture, security, and privacy of the heterogeneous objects were presented. The paper identifies the major challenge in the field of IoT; one of them is to secure the data while accessing the objects through sensing machines. This research advocates the importance of securing the IoT ecosystem at each layer resulting in an enhanced overall security of the connected devices as well as the data generated. Thus, this paper put forwards a security model to be utilised by the researchers, manufacturers and developers of IoT devices, applications and services.",
"title": ""
},
{
"docid": "neg:1840561_7",
"text": "We propose a novel non-rigid image registration algorithm that is built upon fully convolutional networks (FCNs) to optimize and learn spatial transformations between pairs of images to be registered. Different from most existing deep learning based image registration methods that learn spatial transformations from training data with known corresponding spatial transformations, our method directly estimates spatial transformations between pairs of images by maximizing an image-wise similarity metric between fixed and deformed moving images, similar to conventional image registration algorithms. At the same time, our method also learns FCNs for encoding the spatial transformations at the same spatial resolution of images to be registered, rather than learning coarse-grained spatial transformation information. The image registration is implemented in a multi-resolution image registration framework to jointly optimize and learn spatial transformations and FCNs at different resolutions with deep selfsupervision through typical feedforward and backpropagation computation. Since our method simultaneously optimizes and learns spatial transformations for the image registration, our method can be directly used to register a pair of images, and the registration of a set of images is also a training procedure for FCNs so that the trained FCNs can be directly adopted to register new images by feedforward computation of the learned FCNs without any optimization. The proposed method has been evaluated for registering 3D structural brain magnetic resonance (MR) images and obtained better performance than state-of-the-art image registration algorithms.",
"title": ""
},
{
"docid": "neg:1840561_8",
"text": "With the increased complexity of modern computer attacks, there is a need for defenders not only to detect malicious activity as it happens, but also to predict the specific steps that will be taken by an adversary when performing an attack. However this is still an open research problem, and previous research in predicting malicious events only looked at binary outcomes (eg. whether an attack would happen or not), but not at the specific steps that an attacker would undertake. To fill this gap we present Tiresias xspace, a system that leverages Recurrent Neural Networks (RNNs) to predict future events on a machine, based on previous observations. We test Tiresias xspace on a dataset of 3.4 billion security events collected from a commercial intrusion prevention system, and show that our approach is effective in predicting the next event that will occur on a machine with a precision of up to 0.93. We also show that the models learned by Tiresias xspace are reasonably stable over time, and provide a mechanism that can identify sudden drops in precision and trigger a retraining of the system. Finally, we show that the long-term memory typical of RNNs is key in performing event prediction, rendering simpler methods not up to the task.",
"title": ""
},
{
"docid": "neg:1840561_9",
"text": "Three experiments in naming Chinese characters are presented here to address the relationships between character frequency, consistency, and regularity effects in Chinese character naming. Significant interactions between character consistency and frequency were found across the three experiments, regardless of whether the phonetic radical of the phonogram is a legitimate character in its own right or not. These findings suggest that the phonological information embedded in Chinese characters has an influence upon the naming process of Chinese characters. Furthermore, phonetic radicals exist as computation units mainly because they are structures occurring systematically within Chinese characters, not because they can function as recognized, freestanding characters. On the other hand, the significant interaction between regularity and consistency found in the first experiment suggests that these two factors affect Chinese character naming in different ways. These findings are accounted for within interactive activation frameworks and a connectionist model.",
"title": ""
},
{
"docid": "neg:1840561_10",
"text": "In this paper we show how word embeddings can be used to increase the effectiveness of a state-of-the art Locality Sensitive Hashing (LSH) based first story detection (FSD) system over a standard tweet corpus. Vocabulary mismatch, in which related tweets use different words, is a serious hindrance to the effectiveness of a modern FSD system. In this case, a tweet could be flagged as a first story even if a related tweet, which uses different but synonymous words, was already returned as a first story. In this work, we propose a novel approach to mitigate this problem of lexical variation, based on tweet expansion. In particular, we propose to expand tweets with semantically related paraphrases identified via automatically mined word embeddings over a background tweet corpus. Through experimentation on a large data stream comprised of 50 million tweets, we show that FSD effectiveness can be improved by 9.5% over a state-of-the-art FSD system.",
"title": ""
},
{
"docid": "neg:1840561_11",
"text": "The reflection of an object can be distorted by undulations of the reflector, be it a funhouse mirror or a fluid surface. Painters and photographers have long exploited this effect, for example, in imaging scenery distorted by ripples on a lake. Here, we use this phenomenon to visualize micrometric surface waves generated as a millimetric droplet bounces on the surface of a vibrating fluid bath (Bush 2015b). This system, discovered a decade ago (Couder et al. 2005), is of current interest as a hydrodynamic quantum analog; specifically, the walking droplets exhibit several features reminiscent of quantum particles (Bush 2015a).",
"title": ""
},
{
"docid": "neg:1840561_12",
"text": "In this paper we demonstrate the potential of data analytics methods for location-based services. We develop a support system that enables user-based relocation of vehicles in free-floating carsharing models. In these businesses, customers can rent and leave cars anywhere within a predefined operational area. However, due to this flexibility, freefloating carsharing is prone to supply and demand imbalance. The support system detects imbalances by analyzing patterns in vehicle idle times. Alternative rental destinations are proposed to customers in exchange for a discount. Using data on 250,000 rentals in the city of Vancouver, we evaluate the relocation system through a simulation. The results show that our approach decreases the average vehicle idle time by up to 16 percent, suggesting a more balanced state of supply and demand. Employing the system results in a higher degree of vehicle utilization and leads to a substantial increase of profits for providers.",
"title": ""
},
{
"docid": "neg:1840561_13",
"text": "In SDN, the underlying infrastructure is usually abstracted for applications that can treat the network as a logical or virtual entity. Commonly, the ``mappings\" between virtual abstractions and their actual physical implementations are not one-to-one, e.g., a single \"big switch\" abstract object might be implemented using a distributed set of physical devices. A key question is, what abstractions could be mapped to multiple physical elements while faithfully preserving their native semantics? E.g., can an application developer always expect her abstract \"big switch\" to act exactly as a physical big switch, despite being implemented using multiple physical switches in reality?\n We show that the answer to that question is \"no\" for existing virtual-to-physical mapping techniques: behavior can differ between the virtual \"big switch\" and the physical network, providing incorrect application-level behavior. We also show that that those incorrect behaviors occur despite the fact that the most pervasive and commonly-used correctness invariants, such as per-packet consistency, are preserved throughout. These examples demonstrate that for practical notions of correctness, new systems and a new analytical framework are needed. We take the first steps by defining end-to-end correctness, a correctness condition that focuses on applications only, and outline a research vision to obtain virtualization systems with correct virtual to physical mappings.",
"title": ""
},
{
"docid": "neg:1840561_14",
"text": "Character segmentation plays an important role in the Arabic optical character recognition (OCR) system, because the letters incorrectly segmented perform to unrecognized character. Accuracy of character recognition depends mainly on the segmentation algorithm used. The domain of off-line handwriting in the Arabic script presents unique technical challenges and has been addressed more recently than other domains. Many different segmentation algorithms for off-line Arabic handwriting recognition have been proposed and applied to various types of word images. This paper provides modify segmentation algorithm based on bounding box to improve segmentation accuracy using two main stages: preprocessing stage and segmentation stage. In preprocessing stage, used a set of methods such as noise removal, binarization, skew correction, thinning and slant correction, which retains shape of the character. In segmentation stage, the modify bounding box algorithm is done. In this algorithm a distance analysis use on bounding boxes of two connected components (CCs): main (CCs), auxiliary (CCs). The modified algorithm is presented and taking place according to three cases. Cut points also determined using structural features for segmentation character. The modified bounding box algorithm has been successfully tested on 450 word images of Arabic handwritten words. The results were very promising, indicating the efficiency of the suggested",
"title": ""
},
{
"docid": "neg:1840561_15",
"text": "Frame interpolation attempts to synthesise intermediate frames given one or more consecutive video frames. In recent years, deep learning approaches, and in particular convolutional neural networks, have succeeded at tackling lowand high-level computer vision problems including frame interpolation. There are two main pursuits in this line of research, namely algorithm efficiency and reconstruction quality. In this paper, we present a multi-scale generative adversarial network for frame interpolation (FIGAN). To maximise the efficiency of our network, we propose a novel multi-scale residual estimation module where the predicted flow and synthesised frame are constructed in a coarse-tofine fashion. To improve the quality of synthesised intermediate video frames, our network is jointly supervised at different levels with a perceptual loss function that consists of an adversarial and two content losses. We evaluate the proposed approach using a collection of 60fps videos from YouTube-8m. Our results improve the state-of-the-art accuracy and efficiency, and a subjective visual quality comparable to the best performing interpolation method.",
"title": ""
},
{
"docid": "neg:1840561_16",
"text": "Mobile IP is the current standard for supporting macromobility of mobile hosts. However, in the case of micromobility support, there are several competing proposals. In this paper, we present the design, implementation, and performance evaluation of HAWAII, a domain-based approach for supporting mobility. HAWAII uses specialized path setup schemes which install host-based forwarding entries in specific routers to support intra-domain micromobility. These path setup schemes deliver excellent performance by reducing mobility related disruption to user applications. Also, mobile hosts retain their network address while moving within the domain, simplifying quality-of-service (QoS) support. Furthermore, reliability is achieved through maintaining soft-state forwarding entries for the mobile hosts and leveraging fault detection mechanisms built in existing intra-domain routing protocols. HAWAII defaults to using Mobile IP for macromobility, thus providing a comprehensive solution for mobility support in wide-area wireless networks.",
"title": ""
},
{
"docid": "neg:1840561_17",
"text": "A calculus is developed in this paper (Part I) and the sequel (Part 11) for obtaining bounds on delay and buffering requirements in a communication network operating in a packet switched mode under a fixed routing strategy. The theory we develop is different from traditional approaches to analyzing delay because the model we use to describe the entry of data into the network is nonprobabilistic: We suppose that the data stream entered intq the network by any given user satisfies “burstiness constraints.” A data stream is said to satisfy a burstiness constraint if the quantity of data from the stream contained in any interval of time is less than a value that depends on the length of the interval. Several network elements are defined that can be used as building blocks to model a wide variety of communication networks. Each type of network element is analyzed by assuming that the traffic entering it satisfies burstiness constraints. Under this assumption bounds are obtained on delay and buffering requirements for the network element, burstiness constraints satisfied by the traffic that exits the element are derived. Index Terms -Queueing networks, burstiness, flow control, packet switching, high speed networks.",
"title": ""
},
{
"docid": "neg:1840561_18",
"text": "LSTMs and other RNN variants have shown strong performance on character-level language modeling. These models are typically trained using truncated backpropagation through time, and it is common to assume that their success stems from their ability to remember long-term contexts. In this paper, we show that a deep (64-layer) transformer model (Vaswani et al. 2017) with fixed context outperforms RNN variants by a large margin, achieving state of the art on two popular benchmarks: 1.13 bits per character on text8 and 1.06 on enwik8. To get good results at this depth, we show that it is important to add auxiliary losses, both at intermediate network layers and intermediate sequence positions.",
"title": ""
},
{
"docid": "neg:1840561_19",
"text": "We present a traceability recovery method and tool based on latent semantic indexing (LSI) in the context of an artefact management system. The tool highlights the candidate links not identified yet by the software engineer and the links identified but missed by the tool, probably due to inconsistencies in the usage of domain terms in the traced software artefacts. We also present a case study of using the traceability recovery tool on software artefacts belonging to different categories of documents, including requirement, design, and testing documents, as well as code components.",
"title": ""
}
] |
1840562 | Performance metrics in supply chain management | [
{
"docid": "pos:1840562_0",
"text": "The use of System Dynamics Modeling in Supply Chain Management has only recently re-emerged after a lengthy slack period. Current research on System Dynamics Modelling in supply chain management focuses on inventory decision and policy development, time compression, demand amplification, supply chain design and integration, and international supply chain management. The paper first gives an overview of recent research work in these areas, followed by a discussion of research issues that have evolved, and presents a taxonomy of research and development in System Dynamics Modelling in supply chain management.",
"title": ""
}
] | [
{
"docid": "neg:1840562_0",
"text": "Despite major scientific, medical and technological advances over the last few decades, a cure for cancer remains elusive. The disease initiation is complex, and including initiation and avascular growth, onset of hypoxia and acidosis due to accumulation of cells beyond normal physiological conditions, inducement of angiogenesis from the surrounding vasculature, tumour vascularization and further growth, and invasion of surrounding tissue and metastasis. Although the focus historically has been to study these events through experimental and clinical observations, mathematical modelling and simulation that enable analysis at multiple time and spatial scales have also complemented these efforts. Here, we provide an overview of this multiscale modelling focusing on the growth phase of tumours and bypassing the initial stage of tumourigenesis. While we briefly review discrete modelling, our focus is on the continuum approach. We limit the scope further by considering models of tumour progression that do not distinguish tumour cells by their age. We also do not consider immune system interactions nor do we describe models of therapy. We do discuss hybrid-modelling frameworks, where the tumour tissue is modelled using both discrete (cell-scale) and continuum (tumour-scale) elements, thus connecting the micrometre to the centimetre tumour scale. We review recent examples that incorporate experimental data into model parameters. We show that recent mathematical modelling predicts that transport limitations of cell nutrients, oxygen and growth factors may result in cell death that leads to morphological instability, providing a mechanism for invasion via tumour fingering and fragmentation. These conditions induce selection pressure for cell survivability, and may lead to additional genetic mutations. Mathematical modelling further shows that parameters that control the tumour mass shape also control its ability to invade. Thus, tumour morphology may serve as a predictor of invasiveness and treatment prognosis.",
"title": ""
},
{
"docid": "neg:1840562_1",
"text": "Collaborative filtering is the most popular approach to build recommender systems and has been successfully employed in many applications. However, it cannot make recommendations for so-called cold start users that have rated only a very small number of items. In addition, these methods do not know how confident they are in their recommendations. Trust-based recommendation methods assume the additional knowledge of a trust network among users and can better deal with cold start users, since users only need to be simply connected to the trust network. On the other hand, the sparsity of the user item ratings forces the trust-based approach to consider ratings of indirect neighbors that are only weakly trusted, which may decrease its precision. In order to find a good trade-off, we propose a random walk model combining the trust-based and the collaborative filtering approach for recommendation. The random walk model allows us to define and to measure the confidence of a recommendation. We performed an evaluation on the Epinions dataset and compared our model with existing trust-based and collaborative filtering methods.",
"title": ""
},
{
"docid": "neg:1840562_2",
"text": "Sepsis is a dangerous condition that is a leading cause of patient mortality. Treating sepsis is highly challenging, because individual patients respond very differently to medical interventions and there is no universally agreed-upon treatment for sepsis. In this work, we explore the use of continuous state-space model-based reinforcement learning (RL) to discover high-quality treatment policies for sepsis patients. Our quantitative evaluation reveals that by blending the treatment strategy discovered with RL with what clinicians follow, we can obtain improved policies, potentially allowing for better medical treatment for sepsis.",
"title": ""
},
{
"docid": "neg:1840562_3",
"text": "This article offers a succinct overview of the hypothesis that the evolution of cognition could benefit from a close examination of brain changes reflected in the shape of the neurocranium. I provide both neurological and genetic evidence in support of this hypothesis, and conclude that the study of language evolution need not be regarded as a mystery.",
"title": ""
},
{
"docid": "neg:1840562_4",
"text": "We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a deep sparse autoencoder on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200×200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous SGD on a cluster with 1,000 machines (16,000 cores) for three days. Contrary to what appears to be a widely-held intuition, our experimental results reveal that it is possible to train a face detector without having to label images as containing a face or not. Control experiments show that this feature detector is robust not only to translation but also to scaling and out-of-plane rotation. We also find that the same network is sensitive to other high-level concepts such as cat faces and human bodies. Starting from these learned features, we trained our network to recognize 22,000 object categories from ImageNet and achieve a leap of 70% relative improvement over the previous state-of-the-art.",
"title": ""
},
{
"docid": "neg:1840562_5",
"text": "In recent years several measures for the gold standard based evaluation of ontology learning were proposed. They can be distinguished by the layers of an ontology (e.g. lexical term layer and concept hierarchy) they evaluate. Judging those measures with a list of criteria we show that there exist some measures sufficient for evaluating the lexical term layer. However, existing measures for the evaluation of concept hierarchies fail to meet basic criteria. This paper presents a new taxonomic measure which overcomes the problems of current approaches.",
"title": ""
},
{
"docid": "neg:1840562_6",
"text": "This work explores conditional image generation with a new image density model based on the PixelCNN architecture. The model can be conditioned on any vector, including descriptive labels or tags, or latent embeddings created by other networks. When conditioned on class labels from the ImageNet database, the model is able to generate diverse, realistic scenes representing distinct animals, objects, landscapes and structures. When conditioned on an embedding produced by a convolutional network given a single image of an unseen face, it generates a variety of new portraits of the same person with different facial expressions, poses and lighting conditions. We also show that conditional PixelCNN can serve as a powerful decoder in an image autoencoder. Additionally, the gated convolutional layers in the proposed model improve the log-likelihood of PixelCNN to match the state-ofthe-art performance of PixelRNN on ImageNet, with greatly reduced computational cost.",
"title": ""
},
{
"docid": "neg:1840562_7",
"text": "OBJECTIVE\nThe purpose of this study was to investigate the effect of antidepressant treatment on hippocampal volumes in patients with major depression.\n\n\nMETHOD\nFor 38 female outpatients, the total time each had been in a depressive episode was divided into days during which the patient was receiving antidepressant medication and days during which no antidepressant treatment was received. Hippocampal gray matter volumes were determined by high resolution magnetic resonance imaging and unbiased stereological measurement.\n\n\nRESULTS\nLonger durations during which depressive episodes went untreated with antidepressant medication were associated with reductions in hippocampal volume. There was no significant relationship between hippocampal volume loss and time depressed while taking antidepressant medication or with lifetime exposure to antidepressants.\n\n\nCONCLUSIONS\nAntidepressants may have a neuroprotective effect during depression.",
"title": ""
},
{
"docid": "neg:1840562_8",
"text": "Outlier detection is an integral component of statistical modelling and estimation. For highdimensional data, classical methods based on the Mahalanobis distance are usually not applicable. We propose an outlier detection procedure that replaces the classical minimum covariance determinant estimator with a high-breakdown minimum diagonal product estimator. The cut-off value is obtained from the asymptotic distribution of the distance, which enables us to control the Type I error and deliver robust outlier detection. Simulation studies show that the proposed method behaves well for high-dimensional data.",
"title": ""
},
{
"docid": "neg:1840562_9",
"text": "As we outsource more of our decisions and activities to machines with various degrees of autonomy, the question of clarifying the moral and legal status of their autonomous behaviour arises. There is also an ongoing discussion on whether artificial agents can ever be liable for their actions or become moral agents. Both in law and ethics, the concept of liability is tightly connected with the concept of ability. But as we work to develop moral machines, we also push the boundaries of existing categories of ethical competency and autonomy. This makes the question of responsibility particularly difficult. Although new classification schemes for ethical behaviour and autonomy have been discussed, these need to be worked out in far more detail. Here we address some issues with existing proposals, highlighting especially the link between ethical competency and autonomy, and the problem of anchoring classifications in an operational understanding of what we mean by a moral",
"title": ""
},
{
"docid": "neg:1840562_10",
"text": "Automated license plate recognition (ALPR) has been applied to identify vehicles by their license plates and is critical in several important transportation applications. In order to achieve the recognition accuracy levels typically required in the market, it is necessary to obtain properly segmented characters. A standard method, projection-based segmentation, is challenged by substantial variation across the plate in the regions surrounding the characters. In this paper a reinforcement learning (RL) method is adapted to create a segmentation agent that can find appropriate segmentation paths that avoid characters, traversing from the top to the bottom of a cropped license plate image. Then a hybrid approach is proposed, leveraging the speed and simplicity of the projection-based segmentation technique along with the power of the RL method. The results of our experiments show significant improvement over the histogram projection currently used for character segmentation.",
"title": ""
},
{
"docid": "neg:1840562_11",
"text": "Cannabis (Cannabis sativa, or hemp) and its constituents-in particular the cannabinoids-have been the focus of extensive chemical and biological research for almost half a century since the discovery of the chemical structure of its major active constituent, Δ9-tetrahydrocannabinol (Δ9-THC). The plant's behavioral and psychotropic effects are attributed to its content of this class of compounds, the cannabinoids, primarily Δ9-THC, which is produced mainly in the leaves and flower buds of the plant. Besides Δ9-THC, there are also non-psychoactive cannabinoids with several medicinal functions, such as cannabidiol (CBD), cannabichromene (CBC), and cannabigerol (CBG), along with other non-cannabinoid constituents belonging to diverse classes of natural products. Today, more than 560 constituents have been identified in cannabis. The recent discoveries of the medicinal properties of cannabis and the cannabinoids in addition to their potential applications in the treatment of a number of serious illnesses, such as glaucoma, depression, neuralgia, multiple sclerosis, Alzheimer's, and alleviation of symptoms of HIV/AIDS and cancer, have given momentum to the quest for further understanding the chemistry, biology, and medicinal properties of this plant.This contribution presents an overview of the botany, cultivation aspects, and the phytochemistry of cannabis and its chemical constituents. Particular emphasis is placed on the newly-identified/isolated compounds. In addition, techniques for isolation of cannabis constituents and analytical methods used for qualitative and quantitative analysis of cannabis and its products are also reviewed.",
"title": ""
},
{
"docid": "neg:1840562_12",
"text": "In this article the design and the construction of an ultrawideband (UWB) 3 dB hybrid coupler are presented. The coupler is realized in broadside stripline technology to cover the operating bandwidth 0.5 - 18 GHz (more than five octaves). Detailed electromagnetic design has been carried to optimize performances according to bandwidth. The comparison between simulations and measurements validated the design approach. The first prototype guaranteed an insertion loss lower than 5 dB and a phase shift equal to 90° +/- 5° in bandwidth",
"title": ""
},
{
"docid": "neg:1840562_13",
"text": "OBJECTIVES\nTo review the sonographic features of spinal anomalies in first-trimester fetuses presenting for screening for chromosomal abnormalities.\n\n\nMETHODS\nFetuses with a spinal abnormality diagnosed prenatally or postnatally that underwent first-trimester sonographic evaluation at our institution had their clinical information retrieved and their sonograms reviewed.\n\n\nRESULTS\nA total of 21 fetuses complied with the entry criteria including eight with body stalk anomaly, seven with spina bifida, two with Vertebral, Anal, Cardiac, Tracheal, Esophageal, Renal, and Limb (VACTERL) association, and one case each of isolated kyphoscoliosis, tethered cord, iniencephaly, and sacrococcygeal teratoma. One fetus with body stalk anomaly and another with VACTERL association also had a myelomeningocele, making a total of nine cases of spina bifida in our series. Five of the nine (56%) cases with spina bifida, one of the two cases with VACTERL association, and the cases with tethered cord and sacrococcygeal teratoma were undiagnosed in the first trimester. Although increased nuchal translucency was found in seven (33%) cases, chromosomal analysis revealed only one case of aneuploidy in this series.\n\n\nCONCLUSIONS\nFetal spinal abnormalities diagnosed in the first trimester are usually severe and frequently associated with other major defects. The diagnosis of small defects is difficult and a second-trimester scan is still necessary to detect most cases of spina bifida.",
"title": ""
},
{
"docid": "neg:1840562_14",
"text": "Fully automatic methods that extract lists of objects from the Web have been studied extensively. Record extraction, the first step of this object extraction process, identifies a set of Web page segments, each of which represents an individual object (e.g., a product). State-of-the-art methods suffice for simple search, but they often fail to handle more complicated or noisy Web page structures due to a key limitation -- their greedy manner of identifying a list of records through pairwise comparison (i.e., similarity match) of consecutive segments. This paper introduces a new method for record extraction that captures a list of objects in a more robust way based on a holistic analysis of a Web page. The method focuses on how a distinct tag path appears repeatedly in the DOM tree of the Web document. Instead of comparing a pair of individual segments, it compares a pair of tag path occurrence patterns (called visual signals) to estimate how likely these two tag paths represent the same list of objects. The paper introduces a similarity measure that captures how closely the visual signals appear and interleave. Clustering of tag paths is then performed based on this similarity measure, and sets of tag paths that form the structure of data records are extracted. Experiments show that this method achieves higher accuracy than previous methods.",
"title": ""
},
{
"docid": "neg:1840562_15",
"text": "The optical properties of human skin, subcutaneous adipose tissue and human mucosa were measured in the wavelength range 400–2000 nm. The measurements were carried out using a commercially available spectrophotometer with an integrating sphere. The inverse adding–doubling method was used to determine the absorption and reduced scattering coefficients from the measurements.",
"title": ""
},
{
"docid": "neg:1840562_16",
"text": "IBM Watson is a cognitive computing system capable of question answering in natural languages. It is believed that IBM Watson can understand large corpora and answer relevant questions more effectively than any other question-answering system currently available. To unleash the full power of Watson, however, we need to train its instance with a large number of wellprepared question-answer pairs. Obviously, manually generating such pairs in a large quantity is prohibitively time consuming and significantly limits the efficiency of Watson’s training. Recently, a large-scale dataset of over 30 million question-answer pairs was reported. Under the assumption that using such an automatically generated dataset could relieve the burden of manual question-answer generation, we tried to use this dataset to train an instance of Watson and checked the training efficiency and accuracy. According to our experiments, using this auto-generated dataset was effective for training Watson, complementing manually crafted question-answer pairs. To the best of the authors’ knowledge, this work is the first attempt to use a largescale dataset of automatically generated questionanswer pairs for training IBM Watson. We anticipate that the insights and lessons obtained from our experiments will be useful for researchers who want to expedite Watson training leveraged by automatically generated question-answer pairs.",
"title": ""
},
{
"docid": "neg:1840562_17",
"text": "We introduce InfraStructs, material-based tags that embed information inside digitally fabricated objects for imaging in the Terahertz region. Terahertz imaging can safely penetrate many common materials, opening up new possibilities for encoding hidden information as part of the fabrication process. We outline the design, fabrication, imaging, and data processing steps to fabricate information inside physical objects. Prototype tag designs are presented for location encoding, pose estimation, object identification, data storage, and authentication. We provide detailed analysis of the constraints and performance considerations for designing InfraStruct tags. Future application scenarios range from production line inventory, to customized game accessories, to mobile robotics.",
"title": ""
},
{
"docid": "neg:1840562_18",
"text": "The ubiquitous webcam indicator LED is an important privacy feature which provides a visual cue that the camera is turned on. We describe how to disable the LED on a class of Apple internal iSight webcams used in some versions of MacBook laptops and iMac desktops. This enables video to be captured without any visual indication to the user and can be accomplished entirely in user space by an unprivileged (non-root) application. The same technique that allows us to disable the LED, namely reprogramming the firmware that runs on the iSight, enables a virtual machine escape whereby malware running inside a virtual machine reprograms the camera to act as a USB Human Interface Device (HID) keyboard which executes code in the host operating system. We build two proofs-of-concept: (1) an OS X application, iSeeYou, which demonstrates capturing video with the LED disabled; and (2) a virtual machine escape that launches Terminal.app and runs shell commands. To defend against these and related threats, we build an OS X kernel extension, iSightDefender, which prohibits the modification of the iSight’s firmware from user space.",
"title": ""
},
{
"docid": "neg:1840562_19",
"text": "The rapid advancement of robotics technology in recent years has pushed the development of a distinctive field of robotic applications, namely robotic exoskeletons. Because of the aging population, more people are suffering from neurological disorders such as stroke, central nervous system disorder, and spinal cord injury. As manual therapy seems to be physically demanding for both the patient and therapist, robotic exoskeletons have been developed to increase the efficiency of rehabilitation therapy. Robotic exoskeletons are capable of providing more intensive patient training, better quantitative feedback, and improved functional outcomes for patients compared to manual therapy. This review emphasizes treadmill-based and over-ground exoskeletons for rehabilitation. Analyses of their mechanical designs, actuation systems, and integrated control strategies are given priority because the interactions between these components are crucial for the optimal performance of the rehabilitation robot. The review also discusses the limitations of current exoskeletons and technical challenges faced in exoskeleton development. A general perspective of the future development of more effective robot exoskeletons, specifically real-time biological synergy-based exoskeletons, could help promote brain plasticity among neurologically impaired patients and allow them to regain normal walking ability.",
"title": ""
}
] |
1840563 | Knowledge management in software engineering - describing the process | [
{
"docid": "pos:1840563_0",
"text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.",
"title": ""
}
] | [
{
"docid": "neg:1840563_0",
"text": "Unusual site deep vein thrombosis (USDVT) is an uncommon form of venous thromboembolism (VTE) with heterogeneity in pathophysiology and clinical features. While the need for anticoagulation treatment is generally accepted, there is little data on optimal USDVT treatment. The TRUST study aimed to characterize the epidemiology, treatment and outcomes of USDVT. From 2008 to 2012, 152 patients were prospectively enrolled at 4 Canadian centers. After baseline, patients were followed at 6, 12 and 24months. There were 97 (64%) cases of splanchnic, 33 (22%) cerebral, 14 (9%) jugular, 6 (4%) ovarian and 2 (1%) renal vein thrombosis. Mean age was 52.9years and 113 (74%) cases were symptomatic. Of 72 (47%) patients tested as part of clinical care, 22 (31%) were diagnosed with new thrombophilia. Of 138 patients evaluated in follow-up, 66 (48%) completed at least 6months of anticoagulation. Estrogen exposure or inflammatory conditions preceding USDVT were commonly associated with treatment discontinuation before 6months, while previous VTE was associated with continuing anticoagulation beyond 6months. During follow-up, there were 22 (16%) deaths (20 from cancer), 4 (3%) cases of recurrent VTE and no fatal bleeding events. Despite half of USDVT patients receiving <6months of anticoagulation, the rate of VTE recurrence was low and anticoagulant treatment appears safe. Thrombophilia testing was common and thrombophilia prevalence was high. Further research is needed to determine the optimal investigation and management of USDVT.",
"title": ""
},
{
"docid": "neg:1840563_1",
"text": "The growing proliferation in solar deployment, especially at distribution level, has made the case for power system operators to develop more accurate solar forecasting models. This paper proposes a solar photovoltaic (PV) generation forecasting model based on multi-level solar measurements and utilizing a nonlinear autoregressive with exogenous input (NARX) model to improve the training and achieve better forecasts. The proposed model consists of four stages of data preparation, establishment of fitting model, model training, and forecasting. The model is tested under different weather conditions. Numerical simulations exhibit the acceptable performance of the model when compared to forecasting results obtained from two-level and single-level studies.",
"title": ""
},
{
"docid": "neg:1840563_2",
"text": "An interesting research problem in our age of Big Data is that of determining provenance. Granular evaluation of provenance of physical goods--e.g. tracking ingredients of a pharmaceutical or demonstrating authenticity of luxury goods--has often not been possible with today's items that are produced and transported in complex, inter-organizational, often internationally-spanning supply chains. Recent adoption of Internet of Things and Blockchain technologies give promise at better supply chain provenance. We are particularly interested in the blockchain as many favoured use cases of blockchain are for provenance tracking. We are also interested in applying ontologies as there has been some work done on knowledge provenance, traceability, and food provenance using ontologies. In this paper, we make a case for why ontologies can contribute to blockchain design. To support this case, we analyze a traceability ontology and translate some of its representations to smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum blockchain platform.",
"title": ""
},
{
"docid": "neg:1840563_3",
"text": "To build a natural sounding speech synthesis system, it is essential that the text processing component produce an appropriate sequence of phonemic units corresponding to an arbitrary input text. In this paper we discuss our efforts in addressing the issues of Font-to-Akshara mapping, pronunciation rules for Aksharas, text normalization in the context of building text-to-speech systems in Indian languages.",
"title": ""
},
{
"docid": "neg:1840563_4",
"text": "Natural language understanding research has recently shifted towards complex Machine Learning and Deep Learning algorithms. Such models often outperform significantly their simpler counterparts. However, their performance relies on the availability of large amounts of labeled data, which are rarely available. To tackle this problem, we propose a methodology for extending training datasets to arbitrarily big sizes and training complex, data-hungry models using weak supervision. We apply this methodology on biomedical relationship extraction, a task where training datasets are excessively time-consuming and expensive to create, yet has a major impact on downstream applications such as drug discovery. We demonstrate in a small-scale controlled experiment that our method consistently enhances the performance of an LSTM network, with performance improvements comparable to hand-labeled training data. Finally, we discuss the optimal setting for applying weak supervision using this methodology.",
"title": ""
},
{
"docid": "neg:1840563_5",
"text": "Amnesic patients demonstrate by their performance on a serial reaction time task that they learned a repeating spatial sequence despite their lack of awareness of the repetition (Nissen & Bullemer, 1987). In the experiments reported here, we investigated this form of procedural learning in normal subjects. A subgroup of subjects showed substantial procedural learning of the sequence in the absence of explicit declarative knowledge of it. Their ability to generate the sequence was effectively at chance and showed no savings in learning. Additional amounts of training increased both procedural and declarative knowledge of the sequence. Development of knowledge in one system seems not to depend on knowledge in the other. Procedural learning in this situation is neither solely perceptual nor solely motor. The learning shows minimal transfer to a situation employing the same motor sequence.",
"title": ""
},
{
"docid": "neg:1840563_6",
"text": "Two new topologies of three-phase segmented rotor switched reluctance machine (SRM) that enables the use of standard voltage source inverters (VSIs) for its operation are presented. The topologies has shorter end-turn length, axial length compared to SRM topologies that use three-phase inverters; compared to the conventional SRM (CSRM), these new topologies has the advantage of shorter flux paths that results in lower core losses. FEA based optimization have been performed for a given design specification. The new concentrated winding segmented SRMs demonstrate competitive performance with three-phase standard inverters compared to CSRM.",
"title": ""
},
{
"docid": "neg:1840563_7",
"text": "This paper presents the conceptual design, detailed development and flight testing of AtlantikSolar, a 5.6m-wingspan solar-powered Low-Altitude Long-Endurance (LALE) Unmanned Aerial Vehicle (UAV) designed and built at ETH Zurich. The UAV is required to provide perpetual endurance at a geographic latitude of 45°N in a 4-month window centered around June 21st. An improved conceptual design method is presented and applied to maximize the perpetual flight robustness with respect to local meteorological disturbances such as clouds or winds. Airframe, avionics hardware, state estimation and control method development for autonomous flight operations are described. Flight test results include a 12-hour flight relying solely on batteries to replicate night-flight conditions. In addition, we present flight results from Search-And-Rescue field trials where a camera and processing pod were mounted on the aircraft to create high-fidelity 3D-maps of a simulated disaster area.",
"title": ""
},
{
"docid": "neg:1840563_8",
"text": "Thanks to the convergence of pervasive mobile communications and fast-growing online social networking, mobile social networking is penetrating into our everyday life. Aiming to develop a systematic understanding of mobile social networks, in this paper we exploit social ties in human social networks to enhance cooperative device-to-device (D2D) communications. Specifically, as handheld devices are carried by human beings, we leverage two key social phenomena, namely social trust and social reciprocity, to promote efficient cooperation among devices. With this insight, we develop a coalitional game-theoretic framework to devise social-tie-based cooperation strategies for D2D communications. We also develop a network-assisted relay selection mechanism to implement the coalitional game solution, and show that the mechanism is immune to group deviations, individually rational, truthful, and computationally efficient. We evaluate the performance of the mechanism by using real social data traces. Simulation results corroborate that the proposed mechanism can achieve significant performance gain over the case without D2D cooperation.",
"title": ""
},
{
"docid": "neg:1840563_9",
"text": "During the past two decades, the prevalence of obesity in children has risen greatly worldwide. Obesity in childhood causes a wide range of serious complications, and increases the risk of premature illness and death later in life, raising public-health concerns. Results of research have provided new insights into the physiological basis of bodyweight regulation. However, treatment for childhood obesity remains largely ineffective. In view of its rapid development in genetically stable populations, the childhood obesity epidemic can be primarily attributed to adverse environmental factors for which straightforward, if politically difficult, solutions exist.",
"title": ""
},
{
"docid": "neg:1840563_10",
"text": "Matrix factorization has found incredible success and widespread application as a collaborative filtering based approach to recommendations. Unfortunately, incorporating additional sources of evidence, especially ones that are incomplete and noisy, is quite difficult to achieve in such models, however, is often crucial for obtaining further gains in accuracy. For example, additional information about businesses from reviews, categories, and attributes should be leveraged for predicting user preferences, even though this information is often inaccurate and partially-observed. Instead of creating customized methods that are specific to each type of evidences, in this paper we present a generic approach to factorization of relational data that collectively models all the relations in the database. By learning a set of embeddings that are shared across all the relations, the model is able to incorporate observed information from all the relations, while also predicting all the relations of interest. Our evaluation on multiple Amazon and Yelp datasets demonstrates effective utilization of additional information for held-out preference prediction, but further, we present accurate models even for the cold-starting businesses and products for which we do not observe any ratings or reviews. We also illustrate the capability of the model in imputing missing information and jointly visualizing words, categories, and attribute factors.",
"title": ""
},
{
"docid": "neg:1840563_11",
"text": "In this paper, we introduce a stereo vision based CNN tracker for a person following robot. The tracker is able to track a person in real-time using an online convolutional neural network. Our approach enables the robot to follow a target under challenging situations such as occlusions, appearance changes, pose changes, crouching, illumination changes or people wearing the same clothes in different environments. The robot follows the target around corners even when it is momentarily unseen by estimating and replicating the local path of the target. We build an extensive dataset for person following robots under challenging situations. We evaluate the proposed system quantitatively by comparing our tracking approach with existing real-time tracking algorithms.",
"title": ""
},
{
"docid": "neg:1840563_12",
"text": "Max-Min Fairness is a flexible resource allocation mechanism used in most datacenter schedulers. However, an increasing number of jobs have hard placement constraints, restricting the machines they can run on due to special hardware or software requirements. It is unclear how to define, and achieve, max-min fairness in the presence of such constraints. We propose Constrained Max-Min Fairness (CMMF), an extension to max-min fairness that supports placement constraints, and show that it is the only policy satisfying an important property that incentivizes users to pool resources. Optimally computing CMMF is challenging, but we show that a remarkably simple online scheduler, called Choosy, approximates the optimal scheduler well. Through experiments, analysis, and simulations, we show that Choosy on average differs 2% from the optimal CMMF allocation, and lets jobs achieve their fair share quickly.",
"title": ""
},
{
"docid": "neg:1840563_13",
"text": "In today's world data is growing very rapidly, which we call as big data. To deal with these large data sets, currently we are using NoSQL databases, as relational database is not capable for handling such data. These schema less NoSQL database allow us to handle unstructured data. Through this paper we are comparing two NoSQL databases MongoDB and CouchBase server, in terms of image storage and retrieval. Aim behind selecting these two databases as both comes under Document store category. Major applications like social media, traffic analysis, criminal database etc. require image database. The motivation behind this paper is to compare database performance in terms of time required to store and retrieve images from database. In this paper, firstly we are going describe advantages of NoSQL databases over SQL, then brief idea about MongoDB and CouchBase and finally comparison of time required to insert various size images in databases and to retrieve various size images using front end tool Java.",
"title": ""
},
{
"docid": "neg:1840563_14",
"text": "We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. † This research is supported by DARPA contract number F04701-97-C-0010, and was presented in part at the 37 Allerton Conference on Communication, Computing and Control, September 1999. ‡ Corresponding author.",
"title": ""
},
{
"docid": "neg:1840563_15",
"text": "Semantics is seen as the key ingredient in the next phase of the Web infrastructure as well as the next generation of information systems applications. In this context, we review some of the reservations expressed about the viability of the Semantic Web. We respond to these by identifying a Semantic Technology that supports the key capabilities also needed to realize the Semantic Web vision, namely representing, acquiring and utilizing knowledge. Given that scalability is a key challenge, we briefly review our observations from developing three classes of real world applications and corresponding technology components: search/browsing, integration, and analytics. We distinguish this proven technology from some parts of the Semantic Web approach and offer subjective remarks which we hope will foster additional debate.",
"title": ""
},
{
"docid": "neg:1840563_16",
"text": "Over the last decade, the ever increasing world-wide demand for early detection of breast cancer at many screening sites and hospitals has resulted in the need of new research avenues. According to the World Health Organization (WHO), an early detection of cancer greatly increases the chances of taking the right decision on a successful treatment plan. The Computer-Aided Diagnosis (CAD) systems are applied widely in the detection and differential diagnosis of many different kinds of abnormalities. Therefore, improving the accuracy of a CAD system has become one of the major research areas. In this paper, a CAD scheme for detection of breast cancer has been developed using deep belief network unsupervised path followed by back propagation supervised path. The construction is back-propagation neural network with Liebenberg Marquardt learning function while weights are initialized from the deep belief network path (DBN-NN). Our technique was tested on the Wisconsin Breast Cancer Dataset (WBCD). The classifier complex gives an accuracy of 99.68% indicating promising results over previously-published studies. The proposed system provides an effective classification model for breast cancer. In addition, we examined the architecture at several train-test partitions. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840563_17",
"text": "As the concept of Friction stir welding is relatively new, there are many areas, which need thorough investigation to optimize and make it commercially viable. In order to obtain the desired mechanical properties, certain process parameters, like rotational and translation speeds, tool tilt angle, tool geometry etc. are to be controlled. Aluminum alloys of 5xxx series and their welded joints show good resistance to corrosion in sea water. Here, a literature survey has been carried out for the friction stir welding of 5xxx series aluminum alloys.",
"title": ""
},
{
"docid": "neg:1840563_18",
"text": "Temporal networks, i.e., networks in which the interactions among a set of elementary units change over time, can be modelled in terms of timevarying graphs, which are time-ordered sequences of graphs over a set of nodes. In such graphs, the concepts of node adjacency and reachability crucially depend on the exact temporal ordering of the links. Consequently, all the concepts and metrics proposed and used for the characterisation of static complex networks have to be redefined or appropriately extended to time-varying graphs, in order to take into account the effects of time ordering on causality. In this chapter we V. Nicosia ( ) Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK e-mail: [email protected] Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy J. Tang C. Mascolo Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK M. Musolesi ( ) School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK e-mail: [email protected] G. Russo Dipartimento di Matematica e Informatica, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy V. Latora Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy School of Mathematical Sciences, Queen Mary, University of London, E1 4NS London, UK Dipartimento di Fisica e Astronomia and INFN, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy P. Holme and J. Saramäki (eds.), Temporal Networks, Understanding Complex Systems, DOI 10.1007/978-3-642-36461-7 2, © Springer-Verlag Berlin Heidelberg 2013 15 16 V. Nicosia et al. discuss how to represent temporal networks and we review the definitions of walks, paths, connectedness and connected components valid for graphs in which the links fluctuate over time. We then focus on temporal node–node distance, and we discuss how to characterise link persistence and the temporal small-world behaviour in this class of networks. Finally, we discuss the extension of classic centrality measures, including closeness, betweenness and spectral centrality, to the case of time-varying graphs, and we review the work on temporal motifs analysis and the definition of modularity for temporal graphs.",
"title": ""
}
] |
1840564 | Interactive Instance-based Evaluation of Knowledge Base Question Answering | [
{
"docid": "pos:1840564_0",
"text": "Semantic parsing is a rich fusion of the logical and the statistical worlds.",
"title": ""
},
{
"docid": "pos:1840564_1",
"text": "We introduce ParlAI (pronounced “parlay”), an open-source software platform for dialog research implemented in Python, available at http://parl.ai. Its goal is to provide a unified framework for sharing, training and testing dialog models; integration of Amazon Mechanical Turk for data collection, human evaluation, and online/reinforcement learning; and a repository of machine learning models for comparing with others’ models, and improving upon existing architectures. Over 20 tasks are supported in the first release, including popular datasets such as SQuAD, bAbI tasks, MCTest, WikiQA, QACNN, QADailyMail, CBT, bAbI Dialog, Ubuntu, OpenSubtitles and VQA. Several models are integrated, including neural models such as memory networks, seq2seq and attentive LSTMs.",
"title": ""
}
] | [
{
"docid": "neg:1840564_0",
"text": "In this paper, a complete voiceprint recognition based on Matlab was realized, including speech processing and feature extraction at early stage, and model training and recognition at later stage. For speech processing and feature extraction at early stage, Mel Frequency Cepstrum Coefficient (MFCC) was taken as feature parameter. For speaker model method, DTW model was adopted to reflect the voiceprint characteristics of speech, converting voiceprint recognition into speaker speech data evaluation, and breaking up complex speech training and matching into model parameter training and probability calculation. Simulation experiment results show that this system is effective to recognize voiceprint.",
"title": ""
},
{
"docid": "neg:1840564_1",
"text": "UNLABELLED\nYale Image Finder (YIF) is a publicly accessible search engine featuring a new way of retrieving biomedical images and associated papers based on the text carried inside the images. Image queries can also be issued against the image caption, as well as words in the associated paper abstract and title. A typical search scenario using YIF is as follows: a user provides few search keywords and the most relevant images are returned and presented in the form of thumbnails. Users can click on the image of interest to retrieve the high resolution image. In addition, the search engine will provide two types of related images: those that appear in the same paper, and those from other papers with similar image content. Retrieved images link back to their source papers, allowing users to find related papers starting with an image of interest. Currently, YIF has indexed over 140 000 images from over 34 000 open access biomedical journal papers.\n\n\nAVAILABILITY\nhttp://krauthammerlab.med.yale.edu/imagefinder/",
"title": ""
},
{
"docid": "neg:1840564_2",
"text": "Phones with some of the capabilities of modern computers also have the same kind of drawbacks. These phones are commonly referred to as smartphones. They have both phone and personal digital assistant (PDA) functionality. Typical to these devices is to have a wide selection of different connectivity options from general packet radio service (GPRS) data transfer to multi media messages (MMS) and wireless local area network (WLAN) capabilities. They also have standardized operating systems, which makes smartphones a viable platform for malware writers. Since the design of the operating systems is recent, many common security holes and vulnerabilities have been taken into account during the design. However, these precautions have not fully protected these devices. Even now, when smartphones are not that common, there is a handful of viruses for them. In this paper we will discuss some of the most typical viruses in the mobile environment and propose guidelines and predictions for the future.",
"title": ""
},
{
"docid": "neg:1840564_3",
"text": "Making data to be more connected is one of the goals of Semantic Technology. Therefore, relational data model as one of important data resource type, is needed to be mapped and converted to graph model. In this paper we focus in mapping and converting without semantically loss, by considering semantic abstraction of the real world, which has been ignored in some previous researches. As a graph schema model, it can be implemented in graph database or linked data in RDF/OWL format. This approach studies that relationship should be paid more attention in mapping and converting because, often be found a gap semantic abstraction during those processes. In our small experiment shows that our idea can map and convert relational model to graph model without semantically loss.",
"title": ""
},
{
"docid": "neg:1840564_4",
"text": "OBJECTIVE\nThe objective of this study is to outline explicit criteria for assessing the contribution of qualitative empirical studies in health and medicine, leading to a hierarchy of evidence specific to qualitative methods.\n\n\nSTUDY DESIGN AND SETTING\nThis paper arose from a series of critical appraisal exercises based on recent qualitative research studies in the health literature. We focused on the central methodological procedures of qualitative method (defining a research framework, sampling and data collection, data analysis, and drawing research conclusions) to devise a hierarchy of qualitative research designs, reflecting the reliability of study conclusions for decisions made in health practice and policy.\n\n\nRESULTS\nWe describe four levels of a qualitative hierarchy of evidence-for-practice. The least likely studies to produce good evidence-for-practice are single case studies, followed by descriptive studies that may provide helpful lists of quotations but do not offer detailed analysis. More weight is given to conceptual studies that analyze all data according to conceptual themes but may be limited by a lack of diversity in the sample. Generalizable studies using conceptual frameworks to derive an appropriately diversified sample with analysis accounting for all data are considered to provide the best evidence-for-practice. Explicit criteria and illustrative examples are described for each level.\n\n\nCONCLUSION\nA hierarchy of evidence-for-practice specific to qualitative methods provides a useful guide for the critical appraisal of papers using these methods and for defining the strength of evidence as a basis for decision making and policy generation.",
"title": ""
},
{
"docid": "neg:1840564_5",
"text": "Sexting has received increasing scholarly and media attention. Especially, minors’ engagement in this behaviour is a source of concern. As adolescents are highly sensitive about their image among peers and prone to peer influence, the present study implemented the prototype willingness model in order to assess how perceptions of peers engaging in sexting possibly influence adolescents’ willingness to send sexting messages. A survey was conducted among 217 15to 19-year-olds. A total of 18% of respondents had engaged in sexting in the 2 months preceding the study. Analyses further revealed that the subjective norm was the strongest predictor of sexting intention, followed by behavioural willingness and attitude towards sexting. Additionally, the more favourable young people evaluated the prototype of a person engaging in sexting and the higher they assessed their similarity with this prototype, the more they were willing to send sexting messages. Differences were also found based on gender, relationship status and need for popularity. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840564_6",
"text": "Identity Crime is well known, established, and costly. Identity Crime is the term used to refer to all types of crime in which someone wrongfully obtains and uses another person’s personal data in some way that involves fraud or deception, typically for economic gain. Forgery and use of fraudulent identity documents are major enablers of Identity Fraud. It has affected the e-commerce. It is increasing significantly with the development of modern technology and the global superhighways of communication, resulting in the loss of lots of money worldwide each year. Also along with transaction the application domain such as credit application is hit by this crime. These are growing concerns for not only governmental bodies but business organizations also all over the world. This paper gives a brief summary of the identity fraud. Also it discusses various data mining techniques used to overcome it.",
"title": ""
},
{
"docid": "neg:1840564_7",
"text": "The aim of this research is to design an intelligent system that addresses the problem of real-time localization and navigation of visually impaired (VI) in an indoor environment using a monocular camera. Systems that have been developed so far for the VI use either many cameras (stereo and monocular) integrated with other sensors or use very complex algorithms that are computationally expensive. In this research work, a computationally less expensive integrated system has been proposed to combine imaging geometry, Visual Odometry (VO), Object Detection (OD) along with Distance-Depth (D-D) estimation algorithms for precise navigation and localization by utilizing a single monocular camera as the only sensor. The developed algorithm is tested for both standard Karlsruhe and indoor environment recorded datasets. Tests have been carried out in real-time using a smartphone camera that captures image data of the environment as the person moves and is sent over Wi-Fi for further processing to the MATLAB software model running on an Intel i7 processor. The algorithm provides accurate results on real-time navigation in the environment with an audio feedback about the person's location. The trajectory of the navigation is expressed in an arbitrary scale. Object detection based localization is accurate. The D-D estimation provides distance and depth measurements up to an accuracy of 94–98%.",
"title": ""
},
{
"docid": "neg:1840564_8",
"text": "Social media sites such as Flickr, YouTube, and Facebook host substantial amounts of user-contributed materials (e.g., photographs, videos, and textual content) for a wide variety of real-world events. These range from widely known events, such as the presidential inauguration, to smaller, community-specific events, such as annual conventions and local gatherings. By identifying these events and their associated user-contributed social media documents, which is the focus of this paper, we can greatly improve local event browsing and search in state-of-the-art search engines. To address our problem of focus, we exploit the rich “context” associated with social media content, including user-provided annotations (e.g., title, tags) and automatically generated information (e.g., content creation time). We form a variety of representations of social media documents using different context dimensions, and combine these dimensions in a principled way into a single clustering solution—where each document cluster ideally corresponds to one event—using a weighted ensemble approach. We evaluate our approach on a large-scale, real-world dataset of event images, and report promising performance with respect to several baseline approaches. Our preliminary experiments suggest that our ensemble approach identifies events, and their associated images, more effectively than the state-of-the-art strategies on which we build.",
"title": ""
},
{
"docid": "neg:1840564_9",
"text": "Spasticity is a prevalent and potentially disabling symptom common in individuals with multiple sclerosis. Adequate evaluation and management of spasticity requires a careful assessment of the patient's history to determine functional impact of spasticity and potential exacerbating factors, and physical examination to determine the extent of the condition and culpable muscles. A host of options for spasticity management are available: therapeutic exercise, physical modalities, complementary/alternative medicine interventions, oral medications, chemodenervation, and implantation of an intrathecal baclofen pump. Choice of treatment hinges on a combination of the extent of symptoms, patient preference, and availability of services.",
"title": ""
},
{
"docid": "neg:1840564_10",
"text": "According to World Health Organization (WHO) estimations, one out of five adults worldwide will be obese by 2025. Worldwide obesity has doubled since 1980. In fact, more than 1.9 billion adults (39%) of 18 years and older were overweight and over 600 million (13%) of these were obese in 2014. 42 million children under the age of five were overweight or obese in 2014. Obesity is a top public health problem due to its associated morbidity and mortality. This paper reviews the main techniques to measure the level of obesity and body fat percentage, and explains the complications that can carry to the individual's quality of life, longevity and the significant cost of healthcare systems. Researchers and developers are adapting the existing technology, as intelligent phones or some wearable gadgets to be used for controlling obesity. They include the promoting of healthy eating culture and adopting the physical activity lifestyle. The paper also shows a comprehensive study of the most used mobile applications and Wireless Body Area Networks focused on controlling the obesity and overweight. Finally, this paper proposes an intelligent architecture that takes into account both, physiological and cognitive aspects to reduce the degree of obesity and overweight.",
"title": ""
},
{
"docid": "neg:1840564_11",
"text": "The higher variability introduced by distributed generation leads to fast changes in the aggregate load composition, and thus in the power response during voltage variations. The smart transformer, a power electronics-based distribution transformer with advanced control functionalities, can exploit the load dependence on voltage for providing services to the distribution and transmission grids. In this paper, two possible applications are proposed: 1) the smart transformer overload control by means of voltage control action and 2) the soft load reduction method, that reduces load consumption avoiding the load disconnection. These services depend on the correct identification of load dependence on voltage, which the smart transformer evaluates in real time based on load measurements. The effect of the distributed generation on net load sensitivity has been derived and demonstrated with the control hardware in loop evaluation by means of a real time digital simulator.",
"title": ""
},
{
"docid": "neg:1840564_12",
"text": "Passwords continue to prevail on the web as the primary method for user authentication despite their well-known security and usability drawbacks. Password managers offer some improvement without requiring server-side changes. In this paper, we evaluate the security of dual-possession authentication, an authentication approach offering encrypted storage of passwords and theft-resistance without the use of a master password. We further introduce Tapas, a concrete implementation of dual-possession authentication leveraging a desktop computer and a smartphone. Tapas requires no server-side changes to websites, no master password, and protects all the stored passwords in the event either the primary or secondary device (e.g., computer or phone) is stolen. To evaluate the viability of Tapas as an alternative to traditional password managers, we perform a 30 participant user study comparing Tapas to two configurations of Firefox's built-in password manager. We found users significantly preferred Tapas. We then improve Tapas by incorporating feedback from this study, and reevaluate it with an additional 10 participants.",
"title": ""
},
{
"docid": "neg:1840564_13",
"text": "Lack of physical activity is a serious health concern for individuals who are visually impaired as they have fewer opportunities and incentives to engage in physical activities that provide the amounts and kinds of stimulation sufficient to maintain adequate fitness and to support a healthy standard of living. Exergames are video games that use physical activity as input and which have the potential to change sedentary lifestyles and associated health problems such as obesity. We identify that exergames have a number properties that could overcome the barriers to physical activity that individuals with visual impairments face. However, exergames rely upon being able to perceive visual cues that indicate to the player what input to provide. This paper presents VI Tennis, a modified version of a popular motion sensing exergame that explores the use of vibrotactile and audio cues. The effectiveness of providing multimodal (tactile/audio) versus unimodal (audio) cues was evaluated with a user study with 13 children who are blind. Children achieved moderate to vigorous levels of physical activity- the amount required to yield health benefits. No significant difference in active energy expenditure was found between both versions, though children scored significantly better with the tactile/audio version and also enjoyed playing this version more, which emphasizes the potential of tactile/audio feedback for engaging players for longer periods of time.",
"title": ""
},
{
"docid": "neg:1840564_14",
"text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depthbased 3D proposal generation.",
"title": ""
},
{
"docid": "neg:1840564_15",
"text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.",
"title": ""
},
{
"docid": "neg:1840564_16",
"text": "Visual Question Answering (VQA) is a popular research problem that involves inferring answers to natural language questions about a given visual scene. Recent neural network approaches to VQA use attention to select relevant image features based on the question. In this paper, we propose a novel Dual Attention Network (DAN) that not only attends to image features, but also to question features. The selected linguistic and visual features are combined by a recurrent model to infer the final answer. We experiment with different question representations and do several ablation studies to evaluate the model on the challenging VQA dataset.",
"title": ""
},
{
"docid": "neg:1840564_17",
"text": "The posterior cerebral artery (PCA) has been noted in literature to have anatomical variations, specifically fenestration. Cerebral arteries with fenestrations are uncommon, especially when associated with other vascular pathologies. We report a case here of fenestrations within the P1 segment of the right PCA associated with a right middle cerebral artery (MCA) aneurysm in an elder adult male who presented with a new onset of headaches. The patient was treated with vascular clipping of the MCA and has recovered well. Identifying anatomical variations with appropriate imaging is of particular importance in neuro-interventional procedures as it may have an impact on the procedure itself and consequently post-interventional outcomes. Categories: Neurology, Neurosurgery",
"title": ""
},
{
"docid": "neg:1840564_18",
"text": "The neck is not only the first anatomical area to show aging but also contributes to the persona of the individual. The understanding the aging process of the neck is essential for neck rejuvenation. Multiple neck rejuvenation techniques have been reported in the literature. In 1974, Skoog [1] described the anatomy of the superficial musculoaponeurotic system (SMAS) and its role in the aging of the neck. Recently, many patients have expressed interest in minimally invasive surgery with a low risk of complications and short recovery period. The use of thread for neck rejuvenation and the concept of the suture suspension neck lift have become widespread as a convenient and effective procedure; nevertheless, complications have also been reported such as recurrence, inadequate correction, and palpability of the sutures. In this study, we analyzed a new type of thread lift: elastic lift that uses elastic thread (Elasticum; Korpo SRL, Genova, Italy). We already use this new technique for the midface lift and can confirm its efficacy and safety in that context. The purpose of this study was to evaluate the outcomes and safety of the elastic lift technique for neck region lifting.",
"title": ""
},
{
"docid": "neg:1840564_19",
"text": "BACKGROUND\nHaving cancer may result in extensive emotional, physical and social suffering. Music interventions have been used to alleviate symptoms and treatment side effects in cancer patients.\n\n\nOBJECTIVES\nTo compare the effects of music therapy or music medicine interventions and standard care with standard care alone, or standard care and other interventions in patients with cancer.\n\n\nSEARCH STRATEGY\nWe searched the Cochrane Central Register of Controlled Trials (CENTRAL) (The Cochrane Library 2010, Issue 10), MEDLINE, EMBASE, CINAHL, PsycINFO, LILACS, Science Citation Index, CancerLit, www.musictherapyworld.net, CAIRSS, Proquest Digital Dissertations, ClinicalTrials.gov, Current Controlled Trials, and the National Research Register. All databases were searched from their start date to September 2010. We handsearched music therapy journals and reference lists and contacted experts. There was no language restriction.\n\n\nSELECTION CRITERIA\nWe included all randomized controlled trials (RCTs) and quasi-randomized trials of music interventions for improving psychological and physical outcomes in patients with cancer. Participants undergoing biopsy and aspiration for diagnostic purposes were excluded.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently extracted the data and assessed the risk of bias. Where possible, results were presented in meta analyses using mean differences and standardized mean differences. Post-test scores were used. In cases of significant baseline difference, we used change scores.\n\n\nMAIN RESULTS\nWe included 30 trials with a total of 1891 participants. We included music therapy interventions, offered by trained music therapists, as well as listening to pre-recorded music, offered by medical staff. The results suggest that music interventions may have a beneficial effect on anxiety in people with cancer, with a reported average anxiety reduction of 11.20 units (95% confidence interval (CI) -19.59 to -2.82, P = 0.009) on the STAI-S scale and -0.61 standardized units (95% CI -0.97 to -0.26, P = 0.0007) on other anxiety scales. Results also suggested a positive impact on mood (standardised mean difference (SMD) = 0.42, 95% CI 0.03 to 0.81, P = 0.03), but no support was found for depression.Music interventions may lead to small reductions in heart rate, respiratory rate, and blood pressure. A moderate pain-reducing effect was found (SMD = -0.59, 95% CI -0.92 to -0.27, P = 0.0003), but no strong evidence was found for enhancement of fatigue or physical status. The pooled estimate of two trials suggested a beneficial effect of music therapy on patients' quality of life (QoL) (SMD = 1.02, 95% CI 0.58 to 1.47, P = 0.00001).No conclusions could be drawn regarding the effect of music interventions on distress, body image, oxygen saturation level, immunologic functioning, spirituality, and communication outcomes.Seventeen trials used listening to pre-recorded music and 13 trials used music therapy interventions that actively engaged the patients. Not all studies included the same outcomes and due to the small number of studies per outcome, we could not compare the effectiveness of music medicine interventions with that of music therapy interventions.\n\n\nAUTHORS' CONCLUSIONS\nThis systematic review indicates that music interventions may have beneficial effects on anxiety, pain, mood, and QoL in people with cancer. Furthermore, music may have a small effect on heart rate, respiratory rate, and blood pressure. 
Most trials were at high risk of bias and, therefore, these results need to be interpreted with caution.",
"title": ""
}
] |
1840565 | Characteristics of knowledge, people engaged in knowledge transfer and knowledge stickiness: evidence from Chinese R&D team | [
{
"docid": "pos:1840565_0",
"text": "The dynamic capabilities framework analyzes the sources and methods of wealth creation and capture by private enterprise firms operating in environments of rapid technological change. The competitive advantage of firms is seen as resting on distinctive processes (ways of coordinating and combining), shaped by the firm's (specific) asset positions (such as the firm's portfolio of difftcult-to-trade knowledge assets and complementary assets), and the evolution path(s) it has aflopted or inherited. The importance of path dependencies is amplified where conditions of increasing retums exist. Whether and how a firm's competitive advantage is eroded depends on the stability of market demand, and the ease of replicability (expanding intemally) and imitatability (replication by competitors). If correct, the framework suggests that private wealth creation in regimes of rapid technological change depends in large measure on honing intemal technological, organizational, and managerial processes inside the firm. In short, identifying new opportunities and organizing effectively and efficiently to embrace them are generally more fundamental to private wealth creation than is strategizing, if by strategizing one means engaging in business conduct that keeps competitors off balance, raises rival's costs, and excludes new entrants. © 1997 by John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "pos:1840565_1",
"text": "Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.",
"title": ""
}
] | [
{
"docid": "neg:1840565_0",
"text": "The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and questionanswering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.",
"title": ""
},
{
"docid": "neg:1840565_1",
"text": "Now days many research is going on for text summari zation. Because of increasing information in the internet, these kind of research are gaining more a nd more attention among the researchers. Extractive text summarization generates a brief summary by extracti ng proper set of sentences from a document or multi ple documents by deep learning. The whole concept is to reduce or minimize the important information prese nt in the documents. The procedure is manipulated by Rest rict d Boltzmann Machine (RBM) algorithm for better efficiency by removing redundant sentences. The res tricted Boltzmann machine is a graphical model for binary random variables. It consist of three layers input, hidden and output layer. The input data uni formly distributed in the hidden layer for operation. The experimentation is carried out and the summary is g enerated for three different document set from different kno wledge domain. The f-measure value is the identifie r to the performance of the proposed text summarization meth od. The top responses of the three different knowle dge domain in accordance with the f-measure are 0.85, 1 .42 and 1.97 respectively for the three document se t.",
"title": ""
},
{
"docid": "neg:1840565_2",
"text": "In Part I of this paper, a novel motion simulator platform is presented, the DLR Robot Motion Simulator with 7 degrees of freedom (DOF). In this Part II, a path-planning algorithm for mentioned platform will be discussed. By replacing the widely used hexapod kinematics by an antropomorhic, industrial robot arm mounted on a standard linear axis, a comparably larger workspace at lower hardware costs can be achieved. But the serial, redundant kinematics of the industrial robot system also introduces challenges for the path-planning as singularities in the workspace, varying movability of the system and the handling of robot system's kinematical redundancy. By solving an optimization problem with constraints in every sampling step, a feasible trajectory can be generated, fulfilling the task of motion cueing, while respecting the robot's dynamic constraints.",
"title": ""
},
{
"docid": "neg:1840565_3",
"text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.",
"title": ""
},
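The study above identifies risk factors with bivariate and multivariate logistic regression. Below is a hedged Python (statsmodels) sketch of that style of analysis; the file name and column names (stunted, recent_diarrhea, sex, prelacteal_feeding) are hypothetical, since the abstract only summarizes the variables, and the model shown is not the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical file and column names; one row per child.
df = pd.read_csv("bule_hora_children.csv")

# Multivariate logistic model for stunting, mirroring the kind of analysis described.
model = smf.logit(
    "stunted ~ recent_diarrhea + C(sex) + prelacteal_feeding", data=df
).fit()

odds_ratios = np.exp(model.params)        # adjusted odds ratios
conf_int = np.exp(model.conf_int())       # 95% confidence intervals on the OR scale
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```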
{
"docid": "neg:1840565_4",
"text": "Augmented reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative workup and real-time patient images in order to visualize unapparent anatomical details. The 3D model could be used for a preoperative planning of the procedure. The potential of AR navigation as a tool to improve safety of the surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained using a thoracoabdominal CT scan with a customary software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed to the operative field. A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Two totally robotic AR segmentectomy V and one segmentectomy VI were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6–10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.",
"title": ""
},
{
"docid": "neg:1840565_5",
"text": "This paper describes the latest version of the ABC metadata model. This model has been developed within the Harmony international digital library project to provide a common conceptual model to facilitate interoperability between metadata vocabularies from different domains. This updated ABC model is the result of collaboration with the CIMI consortium whereby earlier versions of the ABC model were applied to metadata descriptions of complex objects provided by CIMI museums and libraries. The result is a metadata model with more logically grounded time and entity semantics. Based on this model we have been able to build a metadata repository of RDF descriptions and a search interface which is capable of more sophisticated queries than less-expressive, object-centric metadata models will allow.",
"title": ""
},
{
"docid": "neg:1840565_6",
"text": "Designing powerful tools that support cooking activities has rapidly gained popularity due to the massive amounts of available data, as well as recent advances in machine learning that are capable of analyzing them. In this paper, we propose a cross-modal retrieval model aligning visual and textual data (like pictures of dishes and their recipes) in a shared representation space. We describe an effective learning scheme, capable of tackling large-scale problems, and validate it on the Recipe1M dataset containing nearly 1 million picture-recipe pairs. We show the effectiveness of our approach regarding previous state-of-the-art models and present qualitative results over computational cooking use cases.",
"title": ""
},
{
"docid": "neg:1840565_7",
"text": "Mobile apps have to satisfy various privacy requirements. Notably, app publishers are often obligated to provide a privacy policy and notify users of their apps’ privacy practices. But how can a user tell whether an app behaves as its policy promises? In this study we introduce a scalable system to help analyze and predict Android apps’ compliance with privacy requirements. We discuss how we customized our system in a collaboration with the California Office of the Attorney General. Beyond its use by regulators and activists our system is also meant to assist app publishers and app store owners in their internal assessments of privacy requirement compliance. Our analysis of 17,991 free Android apps shows the viability of combining machine learning-based privacy policy analysis with static code analysis of apps. Results suggest that 71% of apps tha lack a privacy policy should have one. Also, for 9,050 apps that have a policy, we find many instances of potential inconsistencies between what the app policy seems to state and what the code of the app appears to do. In particular, as many as 41% of these apps could be collecting location information and 17% could be sharing such with third parties without disclosing so in their policies. Overall, each app exhibits a mean of 1.83 potential privacy requirement inconsistencies.",
"title": ""
},
{
"docid": "neg:1840565_8",
"text": "In this study, we examined physician acceptance behavior of the electronic medical record (EMR) exchange. Although several prior studies have focused on factors that affect the adoption or use of EMRs, empirical study that captures the success factors that encourage physicians to adopt the EMR exchange is limited. Therefore, drawing on institutional trust integrated with the decomposed theory of planned behavior (TPB) model, we propose a theoretical model to examine physician intentions of using the EMR exchange. A field survey was conducted in Taiwan to collect data from physicians. Structural equation modeling (SEM) using the partial least squares (PLS) method was employed to test the research model. The results showed that the usage intention of physicians is significantly influenced by 4 factors (i.e., attitude, subjective norm, perceived behavior control, and institutional trust). These 4 factors were assessed by their perceived usefulness and compatibility, facilitating conditions and self-efficacy, situational normality, and structural assurance, respectively. The results also indicated that institutional trust integrated with the decomposed TPB model provides an improved method for predicting physician intentions to use the EMR exchange. Finally, the implications of this study are discussed.",
"title": ""
},
{
"docid": "neg:1840565_9",
"text": "As urbanisation increases globally and the natural environment becomes increasingly fragmented, the importance of urban green spaces for biodiversity conservation grows. In many countries, private gardens are a major component of urban green space and can provide considerable biodiversity benefits. Gardens and adjacent habitats form interconnected networks and a landscape ecology framework is necessary to understand the relationship between the spatial configuration of garden patches and their constituent biodiversity. A scale-dependent tension is apparent in garden management, whereby the individual garden is much smaller than the unit of management needed to retain viable populations. To overcome this, here we suggest mechanisms for encouraging 'wildlife-friendly' management of collections of gardens across scales from the neighbourhood to the city.",
"title": ""
},
{
"docid": "neg:1840565_10",
"text": "What is a good test case? One that reveals potential defects with good cost-effectiveness. We provide a generic model of faults and failures, formalize it, and present its various methodological usages for test case generation.",
"title": ""
},
{
"docid": "neg:1840565_11",
"text": "We present WatchWriter, a finger operated keyboard that supports both touch and gesture typing with statistical decoding on a smartwatch. Just like on modern smartphones, users type one letter per tap or one word per gesture stroke on WatchWriter but in a much smaller spatial scale. WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size. In a user performance experiment entirely run on a smartwatch, 36 participants reached a speed of 22-24 WPM with near zero error rate.",
"title": ""
},
{
"docid": "neg:1840565_12",
"text": "Three classic cases and one exceptional case are reported. The unique case of decapitation took place in a traffic accident, while the others were seen after homicide, vehicle-assisted suicide, and after long-jump hanging. Thorough scene examinations were performed, and photographs from the scene were available in all cases. Through the autopsy of each case, the mechanism for the decapitation in each case was revealed. The severance lines were through the neck and the cervical vertebral column, except for in the motor vehicle accident case, where the base of skull was fractured. This case was also unusual as the mechanism was blunt force. In the homicide case, the mechanism was the use of a knife combined with a saw, while in the two last cases, a ligature made the cut through the neck. The different mechanisms in these decapitations are suggested.",
"title": ""
},
{
"docid": "neg:1840565_13",
"text": "Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.",
"title": ""
},
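Since the passage above evaluates rankers with interleaving, here is a small Python sketch of team-draft interleaving, one of the common variants in this literature, together with a crude per-impression click-credit function. It is an illustration only, not the exact interleaving variant or the learned credit-assignment functions the paper studies.

```python
import random

def team_draft_interleave(ranking_a, ranking_b, k=10, seed=0):
    """Team-draft interleaving: merge two rankings so that clicks can be
    credited to the ranker that contributed each shown result."""
    rng = random.Random(seed)
    interleaved, team_a, team_b, shown = [], set(), set(), set()
    while len(interleaved) < k:
        # The ranker with fewer picks drafts next; ties are broken by coin flip.
        a_turn = len(team_a) < len(team_b) or (len(team_a) == len(team_b) and rng.random() < 0.5)
        source, team = (ranking_a, team_a) if a_turn else (ranking_b, team_b)
        doc = next((d for d in source if d not in shown), None)
        if doc is None:                                   # that ranker is exhausted
            source, team = (ranking_b, team_b) if a_turn else (ranking_a, team_a)
            doc = next((d for d in source if d not in shown), None)
            if doc is None:
                break
        interleaved.append(doc)
        shown.add(doc)
        team.add(doc)
    return interleaved, team_a, team_b

def credit(clicked_docs, team_a, team_b):
    """Simple credit rule: the ranker whose team received more clicks wins the impression."""
    a = sum(d in team_a for d in clicked_docs)
    b = sum(d in team_b for d in clicked_docs)
    return "A" if a > b else "B" if b > a else "tie"
```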
{
"docid": "neg:1840565_14",
"text": "A core business in the fashion industry is the understanding and prediction of customer needs and trends. Search engines and social networks are at the same time a fundamental bridge and a costly middleman between the customer’s purchase intention and the retailer. To better exploit Europe’s distinctive characteristics e.g., multiple languages, fashion and cultural differences, it is pivotal to reduce retailers’ dependence to search engines. This goal can be achieved by harnessing various data channels (manufacturers and distribution networks, online shops, large retailers, social media, market observers, call centers, press/magazines etc.) that retailers can leverage in order to gain more insight about potential buyers, and on the industry trends as a whole. This can enable the creation of novel on-line shopping experiences, the detection of influencers, and the prediction of upcoming fashion trends. In this paper, we provide an overview of the main research challenges and an analysis of the most promising technological solutions that we are investigating in the FashionBrain project.",
"title": ""
},
{
"docid": "neg:1840565_15",
"text": "As organizational environments become more global, dynamic, and competitive, contradictory demands intensify. To understand and explain such tensions, academics and practitioners are increasingly adopting a paradox lens. We review the paradox literature, categorizing types and highlighting fundamental debates. We then present a dynamic equilibrium model of organizing, which depicts how cyclical responses to paradoxical tensions enable sustainability—peak performance in the present that enables success in the future. This review and the model provide the foundation of a theory of paradox.",
"title": ""
},
{
"docid": "neg:1840565_16",
"text": "Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Each module also contains a residual component that learns to solve aspects of the new task that lower modules cannot solve. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate state-ofthe-art performance in Visual Question Answering, the highest-level task in our task set. By evaluating the reasoning process using non-expert human judges, we show that our model is more interpretable than an attention-based baseline.",
"title": ""
},
{
"docid": "neg:1840565_17",
"text": "Automated nuclear detection is a critical step for a number of computer assisted pathology related image analysis algorithms such as for automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by 1) the large number of nuclei and the size of high resolution digitized pathology images, and 2) the variability in size, shape, appearance, and texture of the individual nuclei. Recently there has been interest in the application of “Deep Learning” strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for application of deep learning strategies. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. The SSAE learns high-level features from just pixel intensities alone in order to identify distinguishing features of nuclei. A sliding window operation is applied to each image in order to represent image patches via high-level features obtained via the auto-encoder, which are then subsequently fed to a classifier which categorizes each image patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the groundtruth, SSAE was shown to have an improved F-measure 84.49% and an average area under Precision-Recall curve (AveP) 78.83%. The SSAE approach also out-performed nine other state of the art nuclear detection strategies.",
"title": ""
},
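The nuclei detector above couples sliding-window patch features with a patch-level classifier. The sketch below keeps that pipeline shape but, to stay brief, swaps the stacked sparse autoencoder for PCA features plus logistic regression; the patch size, stride and training arrays are assumed inputs, and this is explicitly not the authors' SSAE.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def sliding_patches(image, size=34, stride=8):
    """Yield (row, col, flattened patch) over a 2-D grayscale image."""
    h, w = image.shape
    for r in range(0, h - size + 1, stride):
        for c in range(0, w - size + 1, stride):
            yield r, c, image[r:r + size, c:c + size].ravel()

def fit_detector(train_patches, train_labels, n_components=64):
    """train_patches: (n, size*size) array; train_labels: 1 = nuclear, 0 = non-nuclear."""
    feats = PCA(n_components=n_components).fit(train_patches)
    clf = LogisticRegression(max_iter=1000).fit(feats.transform(train_patches), train_labels)
    return feats, clf

def detect(image, feats, clf, size=34, stride=8):
    """Return centres of patches classified as nuclear."""
    hits = []
    for r, c, patch in sliding_patches(image, size, stride):
        if clf.predict(feats.transform(patch[None, :]))[0] == 1:
            hits.append((r + size // 2, c + size // 2))
    return hits
```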
{
"docid": "neg:1840565_18",
"text": "Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and location of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model in three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost the performance in all tasks.",
"title": ""
},
{
"docid": "neg:1840565_19",
"text": "This paper focuses on tracking dynamic targets using a low cost, commercially available drone. The approach presented utilizes a computationally simple potential field controller expanded to operate not only on relative positions, but also relative velocities. A brief background on potential field methods is given, and the design and implementation of the proposed controller is presented. Experimental results using an external motion capture system for localization demonstrate the ability of the drone to track a dynamic target in real time as well as avoid obstacles in its way.",
"title": ""
}
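The controller described above extends a potential field to act on relative velocities as well as relative positions. A minimal Python sketch of that idea follows, with an attractive term on the position and velocity error and a standard repulsive term inside an obstacle influence radius; the gains and radius are illustrative values, not the paper's tuning.

```python
import numpy as np

def potential_field_command(p_drone, p_target, v_drone, v_target,
                            obstacles, k_p=1.0, k_v=0.6, k_rep=2.0, d0=2.0):
    """Velocity command from attractive terms on relative position *and*
    relative velocity, plus repulsive terms from nearby obstacles.
    All positions/velocities are NumPy vectors in the same frame."""
    cmd = k_p * (p_target - p_drone) + k_v * (v_target - v_drone)
    for obs in obstacles:
        diff = p_drone - obs
        d = np.linalg.norm(diff)
        if 1e-6 < d < d0:
            # classic repulsive gradient, active only inside the influence radius d0
            cmd += k_rep * (1.0 / d - 1.0 / d0) / d**2 * (diff / d)
    return cmd
```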
] |
1840566 | Direct Ray Tracing of Displacement Mapped Triangles | [
{
"docid": "pos:1840566_0",
"text": "Recent progress in acquiring shape from range data permits the acquisition of seamless million-polygon meshes from physical models. In this paper, we present an algorithm and system for converting dense irregular polygon meshes of arbitrary topology into tensor product B-spline surface patches with accompanying displacement maps. This choice of representation yields a coarse but efficient model suitable for animation and a fine but more expensive model suitable for rendering. The first step in our process consists of interactively painting patch boundaries over a rendering of the mesh. In many applications, interactive placement of patch boundaries is considered part of the creative process and is not amenable to automation. The next step is gridded resampling of each boundedsection of the mesh. Our resampling algorithm lays a grid of springs across the polygon mesh, then iterates between relaxing this grid and subdividing it. This grid provides a parameterization for the mesh section, which is initially unparameterized. Finally, we fit a tensor product B-spline surface to the grid. We also output a displacement map for each mesh section, which represents the error between our fitted surface and the spring grid. These displacement maps are images; hence this representation facilitates the use of image processing operators for manipulating the geometric detail of an object. They are also compatible with modern photo-realistic rendering systems. Our resampling and fitting steps are fast enough to surface a million polygon mesh in under 10 minutes important for an interactive system. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling —curve, surface and object representations; I.3.7[Computer Graphics]:Three-Dimensional Graphics and Realism—texture; J.6[Computer-Aided Engineering]:ComputerAided Design (CAD); G.1.2[Approximation]:Spline Approximation Additional",
"title": ""
},
{
"docid": "pos:1840566_1",
"text": "The quality of computer generated images of three-dimensional scenes depends on the shading technique used to paint the objects on the cathode-ray tube screen. The shading algorithm itself depends in part on the method for modeling the object, which also determines the hidden surface algorithm. The various methods of object modeling, shading, and hidden surface removal are thus strongly interconnected. Several shading techniques corresponding to different methods of object modeling and the related hidden surface algorithms are presented here. Human visual perception and the fundamental laws of optics are considered in the development of a shading rule that provides better quality and increased realism in generated images.",
"title": ""
}
] | [
{
"docid": "neg:1840566_0",
"text": "Fuzzy logic methods have been used successfully in many real-world applications, but the foundations of fuzzy logic remain under attack. Taken together, these two facts constitute a paradox. A second paradox is that almost all of the successful fuzzy logic applications are embedded controllers, while most of the theoretical papers on fuzzy methods deal with knowledge representation and reasoning. I hope to resolve these paradoxes by identifying which aspects of fuzzy logic render it useful in practice, and which aspects are inessential. My conclusions are based on a mathematical result, on a survey of literature on the use of fuzzy logic in heuristic control and in expert systems, and on practical experience in developing expert systems.<<ETX>>",
"title": ""
},
{
"docid": "neg:1840566_1",
"text": "Many agricultural studies rely on infrared sensors for remote measurement of surface temperatures for crop status monitoring and estimating sensible and latent heat fluxes. Historically, applications for these non-contact thermometers employed the use of hand-held or stationary industrial infrared thermometers (IRTs) wired to data loggers. Wireless sensors in agricultural applications are a practical alternative, but the availability of low cost wireless IRTs is limited. In this study, we designed prototype narrow (10◦) field of view wireless infrared sensor modules and evaluated the performance of the IRT sensor by comparing temperature readings of an object (Tobj) against a blackbody calibrator in a controlled temperature room at ambient temperatures of 15 ◦C, 25 ◦C, 35 ◦C, and 45 ◦C. Additional comparative readings were taken over plant and soil samples alongside a hand-held IRT and over an isothermal target in the outdoors next to a wired IRT. The average root mean square error (RMSE) and mean absolute error (MAE) between the collected IRT object temperature readings and the blackbody target ranged between 0.10 and 0.79 ◦C. The wireless IRT readings also compared well with the hand-held IRT and wired industrial IRT. Additional tests performed to investigate the influence of direct radiation on IRT measurements indicated that housing the sensor in white polyvinyl chloride provided ample shielding for the self-compensating circuitry of the IR detector. The relatively low cost of the wireless IRT modules and repeatable measurements against a blackbody calibrator and commercial IR thermometers demonstrated that these wireless prototypes have the potential to provide accurate surface radiometric temperature readings in outdoor applications. Further studies are needed to thoroughly test radio frequency communication and power consumption characteristics in an outdoor setting. Published by Elsevier B.V.",
"title": ""
},
{
"docid": "neg:1840566_2",
"text": "This article assumes that brands should be managed as valuable, long-term corporate assets. It is proposed that for a true brand asset mindset to be achieved, the relationship between brand loyalty and brand value needs to be recognised within the management accounting system. It is also suggested that strategic brand management is achieved by having a multi-disciplinary focus, which is facilitated by a common vocabulary. This article seeks to establish the relationships between the constructs and concepts of branding, and to provide a framework and vocabulary that aids effective communication between the functions of accounting and marketing. Performance measures for brand management are also considered, and a model for the management of brand equity is provided. Very simply, brand description (or identity or image) is tailored to the needs and wants of a target market using the marketing mix of product, price, place, and promotion. The success or otherwise of this process determines brand strength or the degree of brand loyalty. A brand's value is determined by the degree of brand loyalty, as this implies a guarantee of future cash flows. Feldwick considered that using the term brand equity creates the illusion that an operational relationship exists between brand description, brand strength and brand value that cannot be demonstrated to operate in practice. This is not surprising, given that brand description and brand strength are, broadly speaking, within the remit of marketers and brand value has been considered largely an accounting issue. However, for brands to be managed strategically as long-term assets, the relationship outlined in Figure 1 needs to be operational within the management accounting system. The efforts of managers of brands could be reviewed and assessed by the measurement of brand strength and brand value, and brand strategy modified accordingly. Whilst not a simple process, the measurement of outcomes is useful as part of a range of diagnostic tools for management. This is further explored in the summary discussion. Whilst there remains a diversity of opinion on the definition and basis of brand equity, most approaches consider brand equity to be a strategic issue, albeit often implicitly. The following discussion explores the range of interpretations of brand equity, showing how they relate to Feldwick's (1996) classification. Ambler and Styles (1996) suggest that managers of brands choose between taking profits today or storing them for the future, with brand equity being the `̀ . . . store of profits to be realised at a later date.'' Their definition follows Srivastava and Shocker (1991) with brand equity suggested as; . . . the aggregation of all accumulated attitudes and behavior patterns in the extended minds of consumers, distribution channels and influence agents, which will enhance future profits and long term cash flow. This definition of brand equity distinguishes the brand asset from its valuation, and falls into Feldwick's (1996) brand strength category of brand equity. This approach is intrinsically strategic in nature, with the emphasis away from short-term profits. Davis (1995) also emphasises the strategic importance of brand equity when he defines brand value (one form of brand equity) as `̀ . . . the potential strategic contributions and benefits that a brand can make to a company.'' In this definition, brand value is the resultant form of brand equity in Figure 1, or the outcome of consumer-based brand equity. 
Keller (1993) also takes the consumer-based brand strength approach to brand equity, suggesting that brand equity represents a condition in which the consumer is familiar with the brand and recalls some favourable, strong and unique brand associations. Hence, there is a differential effect of brand knowledge on consumer response to the marketing of a brand. This approach is aligned to the relationship described in Figure 1, where brand strength is a function of brand description. Winters (1991) relates brand equity to added value by suggesting that brand equity involves the value added to a product by consumers' associations and perceptions of a particular brand name. It is unclear in what way added value is being used, but brand equity fits the categories of brand description and brand strength as outlined above. Leuthesser (1988) offers a broad definition of brand equity as: the set of associations and behaviour on the part of a brand's customers, channel members and parent corporation that permits the brand to earn greater volume or greater margins than it could without the brand name. This definition covers Feldwick's classifications of brand description and brand strength implying a similar relationship to that outlined in Figure 1. The key difference to Figure 1 is that the outcome of brand strength is not specified as brand value, but implies market share, and profit as outcomes. Marketers tend to describe, rather than ascribe a figure to, the outcomes of brand strength. Pitta and Katsanis (1995) suggest that brand equity increases the probability of brand choice, leads to brand loyalty and “insulates the brand from a measure of competitive threats.” Aaker (1991) suggests that strong brands will usually provide higher profit margins and better access to distribution channels, as well as providing a broad platform for product line extensions. Brand extension[1] is a commonly cited advantage of high brand equity, with Dacin and Smith (1994) and Keller and Aaker (1992) suggesting that successful brand extensions can also build brand equity. Loken and John (1993) and Aaker (1993) advise caution in that poor brand extensions can erode brand equity. Farquhar (1989) suggests a relationship between high brand equity and market power asserting that: The competitive advantage of firms that have brands with high equity includes the opportunity for successful extensions, resilience against competitors' promotional pressures, and creation of barriers to competitive entry. This relationship is summarised in Figure 2. Figure 2 indicates that there can be more than one outcome determined by brand strength apart from brand value. It should be noted that it is argued by Wood (1999) that brand value measurements could be used as an indicator of market power. Achieving a high degree of brand strength may be considered an important objective for managers of brands. If we accept that the relationships highlighted in Figures 1 and 2 are something that we should be aiming for, then it is logical to focus our attention on optimising brand description. This requires a rich understanding of the brand construct itself. Yet, despite an abundance of literature, the definitive brand construct has yet to be produced. Subsequent discussion explores the brand construct itself, and highlights the specific relationship between brands and added value.
This relationship is considered to be key to the variety of approaches to brand definition within marketing, and is currently an area of incompatibility between marketing and accounting.",
"title": ""
},
{
"docid": "neg:1840566_3",
"text": "Extracting question-answer pairs from online forums is a meaningful work due to the huge amount of valuable user generated resource contained in forums. In this paper we consider the problem of extracting Chinese question-answer pairs for the first time. We present a strategy to detect Chinese questions and their answers. We propose a sequential rule based method to find questions in a forum thread, then we adopt nontextual features based on forum structure to improve the performance of answer detecting in the same thread. Experimental results show that our techniques are very effective.",
"title": ""
},
{
"docid": "neg:1840566_4",
"text": "The notion of Cloud computing has not only reshaped the field of distributed systems but also fundamentally changed how businesses utilize computing today. While Cloud computing provides many advanced features, it still has some shortcomings such as the relatively high operating cost for both public and private Clouds. The area of Green computing is also becoming increasingly important in a world with limited energy resources and an ever-rising demand for more computational power. In this paper a new framework is presented that provides efficient green enhancements within a scalable Cloud computing architecture. Using power-aware scheduling techniques, variable resource management, live migration, and a minimal virtual machine design, overall system efficiency will be vastly improved in a data center based Cloud with minimal performance overhead.",
"title": ""
},
{
"docid": "neg:1840566_5",
"text": "OBJECTIVE\nThe present study examined the association between child sexual abuse (CSA) and sexual health outcomes in young adult women. Maladaptive coping strategies and optimism were investigated as possible mediators and moderators of this relationship.\n\n\nMETHOD\nData regarding sexual abuse, coping, optimism and various sexual health outcomes were collected using self-report and computerized questionnaires with a sample of 889 young adult women from the province of Quebec aged 20-23 years old.\n\n\nRESULTS\nA total of 31% of adult women reported a history of CSA. Women reporting a severe CSA were more likely to report more adverse sexual health outcomes including suffering from sexual problems and engaging in more high-risk sexual behaviors. CSA survivors involving touching only were at greater risk of reporting more negative sexual self-concept such as experiencing negative feelings during sex than were non-abused participants. Results indicated that emotion-oriented coping mediated outcomes related to negative sexual self-concept while optimism mediated outcomes related to both, negative sexual self-concept and high-risk sexual behaviors. No support was found for any of the proposed moderation models.\n\n\nCONCLUSIONS\nSurvivors of more severe CSA are more likely to engage in high-risk sexual behaviors that are potentially harmful to their health as well as to experience more sexual problems than women without a history of sexual victimization. Personal factors, namely emotion-oriented coping and optimism, mediated some sexual health outcomes in sexually abused women. The results suggest that maladaptive coping strategies and optimism regarding the future may be important targets for interventions optimizing sexual health and sexual well-being in CSA survivors.",
"title": ""
},
{
"docid": "neg:1840566_6",
"text": "A broadband design of the microstrip-fed modified quasi-Yagi antenna is presented. The two arms of the driving dipole are connected separately to two microstrip sections tapered from the feeding microstrip line and its truncated ground plane. The end points of the two tapered sections can be suitably adjusted to obtain a 10-dB return loss bandwidth more than 50%. Measured radiation patterns are end-fire and the in-band peak gains range from 3.9 to 7.2 dBi. Details of the antenna design and the experimental results are presented and discussed.",
"title": ""
},
{
"docid": "neg:1840566_7",
"text": "Purpose – System usage and user satisfaction are widely accepted and used as surrogate measures of IS success. Past studies attempted to explore the relationship between system usage and user satisfaction but findings are mixed, inconclusive and misleading. The main objective of this research is to better understand and explain the nature and strength of the relationship between system usage and user satisfaction by resolving the existing inconsistencies in the IS research and to validate this relationship empirically as defined in Delone and McLean’s IS success model. Design/methodology/approach – “Meta-analysis” as a research approach was adopted because of its suitability regarding the nature of the research and its capability of dealing with exploring relationships that may be obscured in other approaches to synthesize research findings. Meta-analysis findings contributed towards better explaining the relationship between system usage and user satisfaction, the main objectives of this research. Findings – This research examines critically the past findings and resolves the existing inconsistencies. The meta-analysis findings explain that there exists a significant positive relationship between “system usage” and “user satisfaction” (i.e. r 1⁄4 0:2555) although not very strong. This research empirically validates this relationship that has already been proposed by Delone and McLean in their IS success model. Provides a guide for future research to explore the mediating variables that might affect the relationship between system usage and user satisfaction. Originality/value – This research better explains the relationship between system usage and user satisfaction by resolving contradictory findings in the past research and contributes to the existing body of knowledge relating to IS success.",
"title": ""
},
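The abstract above reports a meta-analytic pooled correlation (r = 0.2555). As an illustration of how such a figure is typically obtained, the sketch below pools study-level correlations with a fixed-effect Fisher z average weighted by n - 3; the paper may well have used a different weighting or random-effects scheme, so treat this as a generic example rather than its method.

```python
import numpy as np

def pooled_correlation(correlations, sample_sizes):
    """Fixed-effect meta-analytic pooling of correlations via Fisher's z,
    weighting each study by n - 3 (the inverse variance of z)."""
    r = np.asarray(correlations, dtype=float)
    n = np.asarray(sample_sizes, dtype=float)
    z = np.arctanh(r)                      # Fisher z-transform
    w = n - 3.0
    z_bar = np.sum(w * z) / np.sum(w)
    se = 1.0 / np.sqrt(np.sum(w))
    ci = np.tanh([z_bar - 1.96 * se, z_bar + 1.96 * se])
    return np.tanh(z_bar), tuple(ci)

# e.g. pooled_correlation([0.21, 0.30, 0.18], [120, 85, 240]) -> pooled r and 95% CI
```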
{
"docid": "neg:1840566_8",
"text": "Crowd sourcing is based on a simple but powerful concept: Virtually anyone has the potential to plug in valuable information. The concept revolves around large groups of people or community handling tasks that have traditionally been associated with a specialist or small group of experts. With the advent of the smart devices, many mobile applications are already tapping into crowd sourcing to report community issues and traffic problems, but more can be done. While most of these applications work well for the average user, it neglects the information needs of particular user communities. We present CROWDSAFE, a novel convergence of Internet crowd sourcing and portable smart devices to enable real time, location based crime incident searching and reporting. It is targeted to users who are interested in crime information. The system leverages crowd sourced data to provide novel features such as a Safety Router and value added crime analytics. We demonstrate the system by using crime data in the metropolitan Washington DC area to show the effectiveness of our approach. Also highlighted is its ability to facilitate greater collaboration between citizens and civic authorities. Such collaboration shall foster greater innovation to turn crime data analysis into smarter and safe decisions for the public.",
"title": ""
},
{
"docid": "neg:1840566_9",
"text": "With the advances in new-generation information technologies, especially big data and digital twin, smart manufacturing is becoming the focus of global manufacturing transformation and upgrading. Intelligence comes from data. Integrated analysis for the manufacturing big data is beneficial to all aspects of manufacturing. Besides, the digital twin paves a way for the cyber-physical integration of manufacturing, which is an important bottleneck to achieve smart manufacturing. In this paper, the big data and digital twin in manufacturing are reviewed, including their concept as well as their applications in product design, production planning, manufacturing, and predictive maintenance. On this basis, the similarities and differences between big data and digital twin are compared from the general and data perspectives. Since the big data and digital twin can be complementary, how they can be integrated to promote smart manufacturing are discussed.",
"title": ""
},
{
"docid": "neg:1840566_10",
"text": "a Emerging Markets Research Centre (EMaRC), School of Management, Swansea University Bay Campus, Fabian Way, Swansea SA1 8EN, Wales, UK b Section of Information & Communication Technology, Faculty of Technology, Policy, and Management, Delft University of Technology, The Netherlands c Nottingham Business School, Nottingham Trent University, UK d School of Management, Swansea University Bay Campus, Fabian Way, Swansea SA1 8EN, Wales, UK e School of Management, Swansea University Bay Campus, Fabian Way, Crymlyn Burrows, Swansea, SA1 8EN, Wales, UK",
"title": ""
},
{
"docid": "neg:1840566_11",
"text": "Adversarial samples are strategically modified samples, which are crafted with the purpose of fooling a classifier at hand. An attacker introduces specially crafted adversarial samples to a deployed classifier, which are being mis-classified by the classifier. However, the samples are perceived to be drawn from entirely different classes and thus it becomes hard to detect the adversarial samples. Most of the prior works have been focused on synthesizing adversarial samples in the image domain. In this paper, we propose a new method of crafting adversarial text samples by modification of the original samples. Modifications of the original text samples are done by deleting or replacing the important or salient words in the text or by introducing new words in the text sample. Our algorithm works best for the datasets which have sub-categories within each of the classes of examples. While crafting adversarial samples, one of the key constraint is to generate meaningful sentences which can at pass off as legitimate from language (English) viewpoint. Experimental results on IMDB movie review dataset for sentiment analysis and Twitter dataset for gender detection show the efficiency of our proposed method.",
"title": ""
},
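The method above crafts adversarial text by deleting or replacing salient words. Below is a hedged sketch of that greedy strategy: saliency is taken as the confidence drop when a word is removed, and the most salient word is swapped for the candidate replacement that hurts the classifier most. The `predict_proba` interface (returning the probability of the original class for each string) and the `synonyms` dictionary are assumptions for illustration, not the paper's actual components.

```python
def craft_adversarial(text, predict_proba, synonyms, max_changes=3):
    """Greedy word-replacement attack sketch on a text classifier."""
    words = text.split()
    for _ in range(max_changes):
        base = predict_proba([" ".join(words)])[0]
        # saliency: confidence drop when each word is deleted
        saliency = [base - predict_proba([" ".join(words[:i] + words[i + 1:])])[0]
                    for i in range(len(words))]
        for i in sorted(range(len(words)), key=lambda j: -saliency[j]):
            candidates = synonyms.get(words[i].lower(), [])
            if not candidates:
                continue
            # pick the replacement that lowers the original-class probability most
            best = min(candidates,
                       key=lambda w: predict_proba([" ".join(words[:i] + [w] + words[i + 1:])])[0])
            words[i] = best
            break
        if predict_proba([" ".join(words)])[0] < 0.5:   # prediction has flipped
            break
    return " ".join(words)
```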
{
"docid": "neg:1840566_12",
"text": "The present experiment was designed to test the theory that psychological procedures achieve changes in behavior by altering the level and strength of self-efficacy. In this formulation, perceived self-efficacy. In this formulation, perceived self-efficacy influences level of performance by enhancing intensity and persistence of effort. Adult phobics were administered treatments based upon either performance mastery experiences, vicarious experiences., or they received no treatment. Their efficacy expectations and approach behavior toward threats differing on a similarity dimension were measured before and after treatment. In accord with our prediction, the mastery-based treatment produced higher, stronger, and more generalized expectations of personal efficacy than did the treatment relying solely upon vicarious experiences. Results of a microanalysis further confirm the hypothesized relationship between self-efficacy and behavioral change. Self-efficacy was a uniformly accurate predictor of performance on tasks of varying difficulty with different threats regardless of whether the changes in self-efficacy were produced through enactive mastery or by vicarious experience alone.",
"title": ""
},
{
"docid": "neg:1840566_13",
"text": "The presence of geometric details on object surfaces dramatically changes the way light interacts with these surfaces. Although synthesizing realistic pictures requires simulating this interaction as faithfully as possible, explicitly modeling all the small details tends to be impractical. To address these issues, an image-based technique called relief mapping has recently been introduced for adding per-fragment details onto arbitrary polygonal models (Policarpo et al. 2005). The technique has been further extended to render correct silhouettes (Oliveira and Policarpo 2005) and to handle non-height-field surface details (Policarpo and Oliveira 2006). In all its variations, the ray-height-field intersection is performed using a binary search, which refines the result produced by some linear search procedure. While the binary search converges very fast, the linear search (required to avoid missing large structures) is prone to aliasing, by possibly missing some thin structures, as is evident in Figure 18-1a. Several space-leaping techniques have since been proposed to accelerate the ray-height-field intersection and to minimize the occurrence of aliasing (Donnelly 2005, Dummer 2006, Baboud and Décoret 2006). Cone step mapping (CSM) (Dummer 2006) provides a clever solution to accelerate the intersection calculation for the average case and avoids skipping height-field structures by using some precomputed data (a cone map). However, because CSM uses a conservative approach, the rays tend to stop before the actual surface, which introduces different Relaxed Cone Stepping for Relief Mapping",
"title": ""
},
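The passage above describes the linear-plus-binary search used by relief mapping to intersect a ray with a height field, and why a coarse linear step can miss thin features. The CPU-side Python sketch below mirrors that two-phase search on a depth map in texture space; the sampling counts and the convention that z grows into the relief are assumptions for illustration, and real implementations run this per fragment on the GPU.

```python
import numpy as np

def intersect_height_field(height, origin, direction, n_linear=32, n_binary=8):
    """Linear march along a ray through a depth map (values in [0, 1]),
    refined by binary search once the ray first dips below the surface.
    `origin` and `direction` are (x, y, z) in texture space; z points into the relief,
    and `direction` spans the full traversal of the relief volume."""
    def h(p):
        ix = int(np.clip(p[0], 0, 1) * (height.shape[1] - 1))
        iy = int(np.clip(p[1], 0, 1) * (height.shape[0] - 1))
        return height[iy, ix]

    step = np.asarray(direction, dtype=float) / n_linear
    p = np.array(origin, dtype=float)
    prev = p.copy()
    for _ in range(n_linear):                 # coarse march: can still miss very thin features
        p = p + step
        if p[2] >= h(p):                      # ray depth has passed below the surface
            lo, hi = prev, p
            for _ in range(n_binary):         # binary refinement between the last two samples
                mid = 0.5 * (lo + hi)
                if mid[2] >= h(mid):
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev = p.copy()
    return None                               # no hit within the relief volume
```

Cone step mapping, mentioned at the end of the passage, replaces the fixed-size linear steps with per-texel safe step sizes read from a precomputed cone map, which is why it can stop short of the true surface.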
{
"docid": "neg:1840566_14",
"text": "This brief shows that a conventional semi-custom design-flow based on a positive feedback adiabatic logic (PFAL) cell library allows any VLSI designer to design and verify complex adiabatic systems (e.g., arithmetic units) in a short time and easy way, thus, enjoying the energy reduction benefits of adiabatic logic. A family of semi-custom PFAL carry lookahead adders and parallel multipliers were designed in a 0.6-/spl mu/m CMOS technology and verified. Post-layout simulations show that semi-custom adiabatic arithmetic units can save energy a factor 17 at 10 MHz and about 7 at 100 MHz, as compared to a logically equivalent static CMOS implementation. The energy saving obtained is also better if compared to other custom adiabatic circuit realizations and maintains high values (3/spl divide/6) even when the losses in power-clock generation are considered.",
"title": ""
},
{
"docid": "neg:1840566_15",
"text": "We describe a case of secondary syphilis of the tongue in which the main clinical presentation of the disease was similar to oral hairy leukoplakia. In a man who was HIV seronegative, the first symptom was a dryness of the throat followed by a feeling of foreign body in the tongue. Lesions were painful without cutaneous manifestations of secondary syphilis. IgM-fluorescent treponemal antibody test and typical serologic parameters promptly led to the diagnosis of secondary syphilis. We initiated an appropriate antibiotic therapy using benzathine penicillin, which induced healing of the tongue lesions. The differential diagnosis of this lesion may include oral squamous carcinoma, leukoplakia, candidosis, lichen planus, and, especially, hairy oral leukoplakia. This case report emphasizes the importance of considering secondary syphilis in the differential diagnosis of hairy oral leukoplakia. Depending on the clinical picture, the possibility of syphilis should not be overlooked in the differential diagnosis of many diseases of the oral mucosa.",
"title": ""
},
{
"docid": "neg:1840566_16",
"text": "Two Gram-stain-negative, non-motile, non-spore-forming, rod-shaped bacterial strains, designated 3B-2(T) and 10AO(T), were isolated from a sand sample collected from the west coast of the Korean peninsula by using low-nutrient media, and their taxonomic positions were investigated in a polyphasic study. The strains did not grow on marine agar. They grew optimally at 30 °C and pH 6.5-7.5. Strains 3B-2(T) and 10AO(T) shared 97.5 % 16S rRNA gene sequence similarity and mean level of DNA-DNA relatedness of 12 %. In phylogenetic trees based on 16S rRNA gene sequences, strains 3B-2(T) and 10AO(T), together with several uncultured bacterial clones, formed independent lineages within the evolutionary radiation encompassed by the phylum Bacteroidetes. Strains 3B-2(T) and 10AO(T) contained MK-7 as the predominant menaquinone and iso-C(15 : 0) and C(16 : 1)ω5c as the major fatty acids. The DNA G+C contents of strains 3B-2(T) and 10AO(T) were 42.8 and 44.6 mol%, respectively. Strains 3B-2(T) and 10AO(T) exhibited very low levels of 16S rRNA gene sequence similarity (<85.0 %) to the type strains of recognized bacterial species. These data were sufficient to support the proposal that the novel strains should be differentiated from previously known genera of the phylum Bacteroidetes. On the basis of the data presented, we suggest that strains 3B-2(T) and 10AO(T) represent two distinct novel species of a new genus, for which the names Ohtaekwangia koreensis gen. nov., sp. nov. (the type species; type strain 3B-2(T) = KCTC 23018(T) = CCUG 58939(T)) and Ohtaekwangia kribbensis sp. nov. (type strain 10AO(T) = KCTC 23019(T) = CCUG 58938(T)) are proposed.",
"title": ""
},
{
"docid": "neg:1840566_17",
"text": "fast align is a simple, fast, and efficient approach for word alignment based on the IBM model 2. fast align performs well for language pairs with relatively similar word orders; however, it does not perform well for language pairs with drastically different word orders. We propose a segmenting-reversing reordering process to solve this problem by alternately applying fast align and reordering source sentences during training. Experimental results with JapaneseEnglish translation demonstrate that the proposed approach improves the performance of fast align significantly without the loss of efficiency. Experiments using other languages are also reported.",
"title": ""
},
{
"docid": "neg:1840566_18",
"text": "An approach for capturing and modeling individual entertainment (“fun”) preferences is applied to users of the innovative Playware playground, an interactive physical playground inspired by computer games, in this study. The goal is to construct, using representative statistics computed from children’s physiological signals, an estimator of the degree to which games provided by the playground engage the players. For this purpose children’s heart rate (HR) signals, and their expressed preferences of how much “fun” particular game variants are, are obtained from experiments using games implemented on the Playware playground. A comprehensive statistical analysis shows that children’s reported entertainment preferences correlate well with specific features of the HR signal. Neuro-evolution techniques combined with feature set selection methods permit the construction of user models that predict reported entertainment preferences given HR features. These models are expressed as artificial neural networks and are demonstrated and evaluated on two Playware games and two control tasks requiring physical activity. The best network is able to correctly match expressed preferences in 64% of cases on previously unseen data (p−value 6 · 10−5). The generality of the methodology, its limitations, its usability as a real-time feedback mechanism for entertainment augmentation and as a validation tool are discussed.",
"title": ""
},
{
"docid": "neg:1840566_19",
"text": "Current neuroimaging software offer users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research.",
"title": ""
}
] |
1840567 | An active compliance controller for quadruped trotting | [
{
"docid": "pos:1840567_0",
"text": "We propose a reactive controller framework for robust quadrupedal locomotion, designed to cope with terrain irregularities, trajectory tracking errors and poor state estimation. The framework comprises two main modules: One related to the generation of elliptic trajectories for the feet and the other for control of the stability of the whole robot. We propose a task space CPG-based trajectory generation that can be modulated according to terrain irregularities and the posture of the robot trunk. To improve the robot's stability, we implemented a null space based attitude control for the trunk and a push recovery algorithm based on the concept of capture points. Simulations and experimental results on the hydraulically actuated quadruped robot HyQ will be presented to demonstrate the effectiveness of our framework.",
"title": ""
},
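The framework above generates elliptic task-space foot trajectories. As a rough illustration, the sketch below produces one such reference for a single leg: a straight constant-velocity stance stroke followed by a half-ellipse swing. The step length, step height and phase durations are placeholder values, and this omits the CPG coupling, posture adaptation and terrain modulation described in the paper.

```python
import numpy as np

def elliptic_foot_trajectory(t, step_length=0.12, step_height=0.06,
                             stance_duration=0.25, swing_duration=0.25):
    """Reference foot position (x, z) in the hip frame over one gait cycle:
    a constant-velocity stance stroke on the ground and a half-ellipse swing."""
    period = stance_duration + swing_duration
    phase = t % period
    if phase < stance_duration:                      # stance: foot sweeps backwards on the ground
        s = phase / stance_duration
        return np.array([step_length / 2 - s * step_length, 0.0])
    s = (phase - stance_duration) / swing_duration   # swing: half ellipse back to the front
    ang = np.pi * (1.0 - s)
    return np.array([(step_length / 2) * np.cos(ang),
                     step_height * np.sin(ang)])
```

For a trot, diagonal leg pairs would evaluate this trajectory with a half-period phase offset.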
{
"docid": "pos:1840567_1",
"text": "This paper examines the passive dynamics of quadrupedal bounding. First, an unexpected difference between local and global behavior of the forward speed versus touchdown angle in the selfstabilized Spring Loaded Inverted Pendulum (SLIP) model is exposed and discussed. Next, the stability properties of a simplified sagittal plane model of our Scout II quadrupedal robot are investigated. Despite its simplicity, this model captures the targeted steady state behavior of Scout II without dependence on the fine details of the robot structure. Two variations of the bounding gait, which are observed experimentally in Scout II, are considered. Surprisingly, numerical return map studies reveal that passive generation of a large variety of cyclic bounding motion is possible. Most strikingly, local stability analysis shows that the dynamics of the open loop passive system alone can confer stability to the motion! These results can be used in developing a general control methodology for legged robots, resulting from the synthesis of feedforward and feedback models that take advantage of the mechanical sysPortions of this paper have previously appeared in conference publications Poulakakis, Papadopoulos, and Buehler (2003) and Poulakakis, Smith, and Buehler (2005b). The first and third authors were with the Centre for Intelligent Machines at McGill University when this work was performed. Address all correspondence related to this paper to the first author. The International Journal of Robotics Research Vol. 25, No. 7, July 2006, pp. 669-687 DOI: 10.1177/0278364906066768 ©2006 SAGE Publications Figures appear in color online: http://ijr.sagepub.com tem, and might explain the success of simple, open loop bounding controllers on our experimental robot. KEY WORDS—passive dynamics, bounding gait, dynamic running, quadrupedal robot",
"title": ""
},
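The stability results above come from numerical return-map studies of spring-mass running. The sketch below computes one apex-to-apex stride of the planar SLIP model with semi-implicit Euler integration; this is the kind of map whose fixed points and eigenvalues such studies examine. The parameter values and the simple integrator are illustrative only and are not taken from the paper.

```python
import numpy as np

def slip_apex_map(y_apex, vx, alpha, m=80.0, k=20000.0, L0=1.0, g=9.81, dt=1e-4):
    """One apex-to-apex stride of the spring-loaded inverted pendulum (SLIP).
    alpha is the leg touchdown angle from the vertical. Returns the next apex
    state (y_apex, vx), or None if the model falls or fails to lift off."""
    y_td = L0 * np.cos(alpha)                 # flight down to touchdown height
    if y_apex < y_td:
        return None
    vy = -np.sqrt(2.0 * g * (y_apex - y_td))
    x, y = 0.0, y_td
    foot = np.array([x + L0 * np.sin(alpha), 0.0])
    while True:                               # stance: spring force along the leg plus gravity
        leg = np.array([x, y]) - foot
        L = np.linalg.norm(leg)
        if L >= L0 and vy > 0.0:              # leg back at rest length while moving up: liftoff
            break
        if y <= 0.0:
            return None                       # the body hit the ground
        a = (k * (L0 - L) / m) * (leg / L) + np.array([0.0, -g])
        vx, vy = vx + a[0] * dt, vy + a[1] * dt
        x, y = x + vx * dt, y + vy * dt
    return y + vy**2 / (2.0 * g), vx          # ballistic flight up to the next apex
```

Iterating this map from an initial (apex height, forward speed) and checking whether small perturbations decay is the essence of the return-map stability analysis the passage reports.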
{
"docid": "pos:1840567_2",
"text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433",
"title": ""
}
] | [
{
"docid": "neg:1840567_0",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
},
{
"docid": "neg:1840567_1",
"text": "News topics, which are constructed from news stories using the techniques of Topic Detection and Tracking (TDT), bring convenience to users who intend to see what is going on through the Internet. However, it is almost impossible to view all the generated topics, because of the large amount. So it will be helpful if all topics are ranked and the top ones, which are both timely and important, can be viewed with high priority. Generally, topic ranking is determined by two primary factors. One is how frequently and recently a topic is reported by the media; the other is how much attention users pay to it. Both media focus and user attention varies as time goes on, so the effect of time on topic ranking has already been included. However, inconsistency exists between both factors. In this paper, an automatic online news topic ranking algorithm is proposed based on inconsistency analysis between media focus and user attention. News stories are organized into topics, which are ranked in terms of both media focus and user attention. Experiments performed on practical Web datasets show that the topic ranking result reflects the influence of time, the media and users. The main contributions of this paper are as follows. First, we present the quantitative measure of the inconsistency between media focus and user attention, which provides a basis for topic ranking and an experimental evidence to show that there is a gap between what the media provide and what users view. Second, to the best of our knowledge, it is the first attempt to synthesize the two factors into one algorithm for automatic online topic ranking.",
"title": ""
},
{
"docid": "neg:1840567_2",
"text": "This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. For this purpose, the proposed technique relies on MSER features to obtain candidate regions which are further merged using density-based clustering. Finally, these regions of interest are recognized using machine learning approaches. Worth noting, the algorithm is versatile since it does not utilize any prior information about lane position or road space. The proposed method compares favorably to other existing works through a large number of experiments on an extensive road marking dataset.",
"title": ""
},
{
"docid": "neg:1840567_3",
"text": "Personality assessment and, specifically, the assessment of personality disorders have traditionally been indifferent to computational models. Computational personality is a new field that involves the automatic classification of individuals' personality traits that can be compared against gold-standard labels. In this context, we introduce a new vectorial semantics approach to personality assessment, which involves the construction of vectors representing personality dimensions and disorders, and the automatic measurements of the similarity between these vectors and texts written by human subjects. We evaluated our approach by using a corpus of 2468 essays written by students who were also assessed through the five-factor personality model. To validate our approach, we measured the similarity between the essays and the personality vectors to produce personality disorder scores. These scores and their correspondence with the subjects' classification of the five personality factors reproduce patterns well-documented in the psychological literature. In addition, we show that, based on the personality vectors, we can predict each of the five personality factors with high accuracy.",
"title": ""
},
{
"docid": "neg:1840567_4",
"text": "For the past two decades, the security community has been fighting malicious programs for Windows-based operating systems. However, the recent surge in adoption of embedded devices and the IoT revolution are rapidly changing the malware landscape. Embedded devices are profoundly different than traditional personal computers. In fact, while personal computers run predominantly on x86-flavored architectures, embedded systems rely on a variety of different architectures. In turn, this aspect causes a large number of these systems to run some variants of the Linux operating system, pushing malicious actors to give birth to \"\"Linux malware.\"\" To the best of our knowledge, there is currently no comprehensive study attempting to characterize, analyze, and understand Linux malware. The majority of resources on the topic are available as sparse reports often published as blog posts, while the few systematic studies focused on the analysis of specific families of malware (e.g., the Mirai botnet) mainly by looking at their network-level behavior, thus leaving the main challenges of analyzing Linux malware unaddressed. This work constitutes the first step towards filling this gap. After a systematic exploration of the challenges involved in the process, we present the design and implementation details of the first malware analysis pipeline specifically tailored for Linux malware. We then present the results of the first large-scale measurement study conducted on 10,548 malware samples (collected over a time frame of one year) documenting detailed statistics and insights that can help directing future work in the area.",
"title": ""
},
{
"docid": "neg:1840567_5",
"text": "This paper reports the results achieved by Carnegie Mellon University on the Topic Detection and Tracking Project’s secondyear evaluation for the segmentation, detection, and tracking tasks. Additional post-evaluation improvements are also",
"title": ""
},
{
"docid": "neg:1840567_6",
"text": "Many analytics applications generate mixed workloads, i.e., workloads comprised of analytical tasks with different processing characteristics including data pre-processing, SQL, and iterative machine learning algorithms. Examples of such mixed workloads can be found in web data analysis, social media analysis, and graph analytics, where they are executed repetitively on large input datasets (e.g., Find the average user time spent on the top 10 most popular web pages on the UK domain web graph.). Scale-out processing engines satisfy the needs of these applications by distributing the data and the processing task efficiently among multiple workers that are first reserved and then used to execute the task in parallel on a cluster of machines. Finding the resource allocation that can complete the workload execution within a given time constraint, and optimizing cluster resource allocations among multiple analytical workloads motivates the need for estimating the runtime of the workload before its actual execution. Predicting runtime of analytical workloads is a challenging problem as runtime depends on a large number of factors that are hard to model a priori execution. These factors can be summarized as workload characteristics (data statistics and processing costs) , the execution configuration (deployment, resource allocation, and software settings), and the cost model that captures the interplay among all of the above parameters. While conventional cost models proposed in the context of query optimization can assess the relative order among alternative SQL query plans, they are not aimed to estimate absolute runtime. Additionally, conventional models are ill-equipped to estimate the runtime of iterative analytics that are executed repetitively until convergence and that of user defined data pre-processing operators which are not “owned” by the underlying data management system. This thesis demonstrates that runtime for data analytics can be predicted accurately by breaking the analytical tasks into multiple processing phases, collecting key input features during a reference execution on a sample of the dataset, and then using the features to build per-phase cost models. We develop prediction models for three categories of data analytics produced by social media applications: iterative machine learning, data pre-processing, and reporting SQL. The prediction framework for iterative analytics, PREDIcT, addresses the challenging problem of estimating the number of iterations, and per-iteration runtime for a class of iterative machine learning algorithms that are run repetitively until convergence. The hybrid prediction models we develop for data pre-processing tasks and for reporting SQL combine the benefits of analytical modeling with that of machine learning-based models. Through a",
"title": ""
},
{
"docid": "neg:1840567_7",
"text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.",
"title": ""
},
{
"docid": "neg:1840567_8",
"text": "Age-associated disease and disability are placing a growing burden on society. However, ageing does not affect people uniformly. Hence, markers of the underlying biological ageing process are needed to help identify people at increased risk of age-associated physical and cognitive impairments and ultimately, death. Here, we present such a biomarker, ‘brain-predicted age’, derived using structural neuroimaging. Brain-predicted age was calculated using machine-learning analysis, trained on neuroimaging data from a large healthy reference sample (N=2001), then tested in the Lothian Birth Cohort 1936 (N=669), to determine relationships with age-associated functional measures and mortality. Having a brain-predicted age indicative of an older-appearing brain was associated with: weaker grip strength, poorer lung function, slower walking speed, lower fluid intelligence, higher allostatic load and increased mortality risk. Furthermore, while combining brain-predicted age with grey matter and cerebrospinal fluid volumes (themselves strong predictors) not did improve mortality risk prediction, the combination of brain-predicted age and DNA-methylation-predicted age did. This indicates that neuroimaging and epigenetics measures of ageing can provide complementary data regarding health outcomes. Our study introduces a clinically-relevant neuroimaging ageing biomarker and demonstrates that combining distinct measurements of biological ageing further helps to determine risk of age-related deterioration and death.",
"title": ""
},
{
"docid": "neg:1840567_9",
"text": "In situ Raman spectroscopy is an extremely valuable technique for investigating fundamental reactions that occur inside lithium rechargeable batteries. However, specialized in situ Raman spectroelectrochemical cells must be constructed to perform these experiments. These cells are often quite different from the cells used in normal electrochemical investigations. More importantly, the number of cells is usually limited by construction costs; thus, routine usage of in situ Raman spectroscopy is hampered for most laboratories. This paper describes a modification to industrially available coin cells that facilitates routine in situ Raman spectroelectrochemical measurements of lithium batteries. To test this strategy, in situ Raman spectroelectrochemical measurements are performed on Li//V2O5 cells. Various phases of Li(x)V2O5 could be identified in the modified coin cells with Raman spectroscopy, and the electrochemical cycling performance between in situ and unmodified cells is nearly identical.",
"title": ""
},
{
"docid": "neg:1840567_10",
"text": "This paper introduces flash organizations: crowds structured like organizations to achieve complex and open-ended goals. Microtask workflows, the dominant crowdsourcing structures today, only enable goals that are so simple and modular that their path can be entirely pre-defined. We present a system that organizes crowd workers into computationally-represented structures inspired by those used in organizations - roles, teams, and hierarchies - which support emergent and adaptive coordination toward open-ended goals. Our system introduces two technical contributions: 1) encoding the crowd's division of labor into de-individualized roles, much as movie crews or disaster response teams use roles to support coordination between on-demand workers who have not worked together before; and 2) reconfiguring these structures through a model inspired by version control, enabling continuous adaptation of the work and the division of labor. We report a deployment in which flash organizations successfully carried out open-ended and complex goals previously out of reach for crowdsourcing, including product design, software development, and game production. This research demonstrates digitally networked organizations that flexibly assemble and reassemble themselves from a globally distributed online workforce to accomplish complex work.",
"title": ""
},
{
"docid": "neg:1840567_11",
"text": "Deep convolutional neural networks (DCNNs) have been successfully used in many computer vision tasks. Previous works on DCNN acceleration usually use a fixed computation pattern for diverse DCNN models, leading to imbalance between power efficiency and performance. We solve this problem by designing a DCNN acceleration architecture called deep neural architecture (DNA), with reconfigurable computation patterns for different models. The computation pattern comprises a data reuse pattern and a convolution mapping method. For massive and different layer sizes, DNA reconfigures its data paths to support a hybrid data reuse pattern, which reduces total energy consumption by 5.9~8.4 times over conventional methods. For various convolution parameters, DNA reconfigures its computing resources to support a highly scalable convolution mapping method, which obtains 93% computing resource utilization on modern DCNNs. Finally, a layer-based scheduling framework is proposed to balance DNA’s power efficiency and performance for different DCNNs. DNA is implemented in the area of 16 mm2 at 65 nm. On the benchmarks, it achieves 194.4 GOPS at 200 MHz and consumes only 479 mW. The system-level power efficiency is 152.9 GOPS/W (considering DRAM access power), which outperforms the state-of-the-art designs by one to two orders.",
"title": ""
},
{
"docid": "neg:1840567_12",
"text": "A system that enables continuous slip compensation for a Mars rover has been designed, implemented, and field-tested. This system is composed of several components that allow the rover to accurately and continuously follow a designated path, compensate for slippage, and reach intended goals in high-slip environments. These components include: visual odometry, vehicle kinematics, a Kalman filter pose estimator, and a slip compensation/path follower. Visual odometry tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs. The vehicle kinematics for a rocker-bogie suspension system estimates motion by measuring wheel rates, and rocker, bogie, and steering angles. The Kalman filter merges data from an inertial measurement unit (IMU) and visual odometry. This merged estimate is then compared to the kinematic estimate to determine how much slippage has occurred, taking into account estimate uncertainties. If slippage has occurred then a slip vector is calculated by differencing the current Kalman filter estimate from the kinematic estimate. This slip vector is then used to determine the necessary wheel velocities and steering angles to compensate for slip and follow the desired path.",
"title": ""
},
{
"docid": "neg:1840567_13",
"text": "is the latest release of a versatile and very well optimized package for molecular simulation. Much effort has been devoted to achieving extremely high performance on both workstations and parallel computers. The design includes an extraction of vi-rial and periodic boundary conditions from the loops over pairwise interactions, and special software routines to enable rapid calculation of x –1/2. Inner loops are generated automatically in C or Fortran at compile time, with optimizations adapted to each architecture. Assembly loops using SSE and 3DNow! Multimedia instructions are provided for x86 processors, resulting in exceptional performance on inexpensive PC workstations. The interface is simple and easy to use (no scripting language), based on standard command line arguments with self-explanatory functionality and integrated documentation. All binary files are independent of hardware endian and can be read by versions of GROMACS compiled using different floating-point precision. A large collection of flexible tools for trajectory analysis is included, with output in the form of finished Xmgr/Grace graphs. A basic trajectory viewer is included, and several external visualization tools can read the GROMACS trajectory format. Starting with version 3.0, GROMACS is available under the GNU General Public License from",
"title": ""
},
{
"docid": "neg:1840567_14",
"text": "Automatic recognition of emotional states from human speech is a current research topic with a wide range. In this paper an attempt has been made to recognize and classify the speech emotion from three language databases, namely, Berlin, Japan and Thai emotion databases. Speech features consisting of Fundamental Frequency (F0), Energy, Zero Crossing Rate (ZCR), Linear Predictive Coding (LPC) and Mel Frequency Cepstral Coefficient (MFCC) from short-time wavelet signals are comprehensively investigated. In this regard, Support Vector Machines (SVM) is utilized as the classification model. Empirical experimentation shows that the combined features of F0, Energy and MFCC provide the highest accuracy on all databases provided using the linear kernel. It gives 89.80%, 93.57% and 98.00% classification accuracy for Berlin, Japan and Thai emotions databases, respectively.",
"title": ""
},
{
"docid": "neg:1840567_15",
"text": "Prediction or prognostication is at the core of modern evidence-based medicine. Prediction of overall mortality and cardiovascular disease can be improved by a systematic evaluation of measurements from large-scale epidemiological studies or by using nested sampling designs to discover new markers from omics technologies. In study I, we investigated if prediction measures such as calibration, discrimination and reclassification could be calculated within traditional sampling designs and which of these designs were the most efficient. We found that is possible to calculate prediction measures by using a proper weighting system and that a stratified casecohort design is a reasonable choice both in terms of efficiency and simplicity. In study II, we investigated the clinical utility of several genetic scores for incident coronary heart disease. We found that genetic information could be of clinical value in improving the allocation of patients to correct risk strata and that the assessment of a genetic risk score among intermediate risk subjects could help to prevent about one coronary heart disease event every 318 people screened. In study III, we explored the association between circulating metabolites and incident coronary heart disease. We found four new metabolites associated with coronary heart disease independently of established cardiovascular risk factors and with evidence of clinical utility. By using genetic information we determined a potential causal effect on coronary heart disease of one of these novel metabolites. In study IV, we compared a large number of demographics, health and lifestyle measurements for association with all-cause and cause-specific mortality. By ranking measurements in terms of their predictive abilities we could provide new insights about their relative importance, as well as reveal some unexpected associations. Moreover we developed and validated a prediction score for five-year mortality with good discrimination ability and calibrated it for the entire UK population. In conclusion, we applied a translational approach spanning from the discovery of novel biomarkers to their evaluation in terms of clinical utility. We combined this effort with methodological improvements aimed to expand prediction measures in settings that were not previously explored. We identified promising novel metabolomics markers for cardiovascular disease and supported the potential clinical utility of a genetic score in primary prevention. Our results might fuel future studies aimed to implement these findings in clinical practice.",
"title": ""
},
{
"docid": "neg:1840567_16",
"text": "Research in automotive safety leads to the conclusion that modern vehicle should utilize active and passive sensors for the recognition of the environment surrounding them. Thus, the development of tracking systems utilizing efficient state estimators is very important. In this case, problems such as moving platform carrying the sensor and maneuvering targets could introduce large errors in the state estimation and in some cases can lead to the divergence of the filter. In order to avoid sub-optimal performance, the unscented Kalman filter is chosen, while a new curvilinear model is applied which takes into account both the turn rate of the detected object and its tangential acceleration, leading to a more accurate modeling of its movement. The performance of the unscented filter using the proposed model in the case of automotive applications is proven to be superior compared to the performance of the extended and linear Kalman filter.",
"title": ""
},
{
"docid": "neg:1840567_17",
"text": "In this paper, we consider positioning with observed-time-difference-of-arrival (OTDOA) for a device deployed in long-term-evolution (LTE) based narrow-band Internet-of-things (NB-IoT) systems. We propose an iterative expectation- maximization based successive interference cancellation (EM-SIC) algorithm to jointly consider estimations of residual frequency- offset (FO), fading-channel taps and time-of- arrival (ToA) of the first arrival-path for each of the detected cells. In order to design a low complexity ToA detector and also due to the limits of low-cost analog circuits, we assume an NB-IoT device working at a low-sampling rate such as 1.92 MHz or lower. The proposed EM-SIC algorithm comprises two stages to detect ToA, based on which OTDOA can be calculated. In a first stage, after running the EM-SIC block a predefined number of iterations, a coarse ToA is estimated for each of the detected cells. Then in a second stage, to improve the ToA resolution, a low-pass filter is utilized to interpolate the correlations of time-domain PRS signal evaluated at a low sampling-rate to a high sampling-rate such as 30.72 MHz. To keep low-complexity, only the correlations inside a small search window centered at the coarse ToA estimates are upsampled. Then, the refined ToAs are estimated based on upsampled correlations. If at least three cells are detected, with OTDOA and the locations of detected cell sites, the position of the NB-IoT device can be estimated. We show through numerical simulations that, the proposed EM-SIC based ToA detector is robust against impairments introduced by inter-cell interference, fading-channel and residual FO. Thus significant signal-to-noise (SNR) gains are obtained over traditional ToA detectors that do not consider these impairments when positioning a device.",
"title": ""
},
{
"docid": "neg:1840567_18",
"text": "Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to vertex identity. In this work, we introduce the Role2Vec framework which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.",
"title": ""
}
] |
1840568 | Single Switched Capacitor Battery Balancing System Enhancements | [
{
"docid": "pos:1840568_0",
"text": "Battery systems are affected by many factors, the most important one is the cells unbalancing. Without the balancing system, the individual cell voltages will differ over time, battery pack capacity will decrease quickly. That will result in the fail of the total battery system. Thus cell balancing acts an important role on the battery life preserving. Different cell balancing methodologies have been proposed for battery pack. This paper presents a review and comparisons between the different proposed balancing topologies for battery string based on MATLAB/Simulink® simulation. The comparison carried out according to circuit design, balancing simulation, practical implementations, application, balancing speed, complexity, cost, size, balancing system efficiency, voltage/current stress … etc.",
"title": ""
},
{
"docid": "pos:1840568_1",
"text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Leadacid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomenon: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. 
For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack. During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.",
"title": ""
},
{
"docid": "pos:1840568_2",
"text": "The automobile industry is progressing toward hybrid, plug-in hybrid, and fully electric vehicles in their future car models. The energy storage unit is one of the most important blocks in the power train of future electric-drive vehicles. Batteries and/or ultracapacitors are the most prominent storage systems utilized so far. Hence, their reliability during the lifetime of the vehicle is of great importance. Charge equalization of series-connected batteries or ultracapacitors is essential due to the capacity imbalances stemming from manufacturing, ensuing driving environment, and operational usage. Double-tiered capacitive charge shuttling technique is introduced and applied to a battery system in order to balance the battery-cell voltages. Parameters in the system are varied, and their effects on the performance of the system are determined. Results are compared to a single-tiered approach. MATLAB simulation shows a substantial improvement in charge transport using the new topology. Experimental results verifying simulation are presented.",
"title": ""
}
] | [
{
"docid": "neg:1840568_0",
"text": "The quantity of rooftop solar photovoltaic (PV) installations has grown rapidly in the US in recent years. There is a strong interest among decision makers in obtaining high quality information about rooftop PV, such as the locations, power capacity, and energy production of existing rooftop PV installations. Solar PV installations are typically connected directly to local power distribution grids, and therefore it is important for the reliable integration of solar energy to have information at high geospatial resolutions: by county, zip code, or even by neighborhood. Unfortunately, traditional means of obtaining this information, such as surveys and utility interconnection filings, are limited in availability and geospatial resolution. In this work a new approach is investigated where a computer vision algorithm is used to detect rooftop PV installations in high resolution color satellite imagery and aerial photography. It may then be possible to use the identified PV images to estimate power capacity and energy production for each array of panels, yielding a fast, scalable, and inexpensive method to obtain rooftop PV estimates for regions of any size. The aim of this work is to investigate the feasibility of the first step of the proposed approach: detecting rooftop PV in satellite imagery. Towards this goal, a collection of satellite rooftop images is used to develop and evaluate a detection algorithm. The results show excellent detection performance on the testing dataset and that, with further development, the proposed approach may be an effective solution for fast and scalable rooftop PV information collection.",
"title": ""
},
{
"docid": "neg:1840568_1",
"text": "BACKGROUND\nDyadic suicide pacts are cases in which two individuals (and very rarely more) agree to die together. These account for fewer than 1% of all completed suicides.\n\n\nOBJECTIVE\nThe authors describe two men in a long-term domestic partnership who entered into a suicide pact and, despite utilizing a high-lethality method (simultaneous arm amputation with a power saw), survived.\n\n\nMETHOD\nThe authors investigated the psychiatric, psychological, and social causes of suicide pacts by delving into the history of these two participants, who displayed a very high degree of suicidal intent. Psychiatric interviews and a family conference call, along with the strong support of one patient's family, were elicited.\n\n\nRESULTS\nThe patients, both HIV-positive, showed high levels of depression and hopelessness, as well as social isolation and financial hardship. With the support of his family, one patient was discharged to their care, while the other partner was hospitalized pending reunion with his partner.\n\n\nDISCUSSION\nThis case illustrates many of the key, defining features of suicide pacts that are carried out and also highlights the nature of the dependency relationship.",
"title": ""
},
{
"docid": "neg:1840568_2",
"text": "AIM\nThis article reports the results of a study evaluating a preferred music listening intervention for reducing anxiety in older adults with dementia in nursing homes.\n\n\nBACKGROUND\nAnxiety can have a significant negative impact on older adults' functional status, quality of life and health care resources. However, anxiety is often under-diagnosed and inappropriately treated in those with dementia. Little is known about the use of a preferred music listening intervention for managing anxiety in those with dementia.\n\n\nDESIGN\nA quasi-experimental pretest and posttest design was used.\n\n\nMETHODS\nThis study aimed to evaluate the effectiveness of a preferred music listening intervention on anxiety in older adults with dementia in nursing home. Twenty-nine participants in the experimental group received a 30-minute music listening intervention based on personal preferences delivered by trained nursing staff in mid-afternoon, twice a week for six weeks. Meanwhile, 23 participants in the control group only received usual standard care with no music. Anxiety was measured by Rating Anxiety in Dementia at baseline and week six. Analysis of covariance (ancova) was used to determine the effectiveness of a preferred music listening intervention on anxiety at six weeks while controlling for pretest anxiety, age and marital status.\n\n\nRESULTS\nancova results indicated that older adults who received the preferred music listening had a significantly lower anxiety score at six weeks compared with those who received the usual standard care with no music (F = 12.15, p = 0.001).\n\n\nCONCLUSIONS\nPreferred music listening had a positive impact by reducing the level of anxiety in older adults with dementia.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nNursing staff can learn how to implement preferred music intervention to provide appropriate care tailored to the individual needs of older adults with dementia. Preferred music listening is an inexpensive and viable intervention to promote mental health of those with dementia.",
"title": ""
},
{
"docid": "neg:1840568_3",
"text": "In the early days, content-based image retrieval (CBIR) was studied with global features. Since 2003, image retrieval based on local descriptors (de facto SIFT) has been extensively studied for over a decade due to the advantage of SIFT in dealing with image transformations. Recently, image representations based on the convolutional neural network (CNN) have attracted increasing interest in the community and demonstrated impressive performance. Given this time of rapid evolution, this article provides a comprehensive survey of instance retrieval over the last decade. Two broad categories, SIFT-based and CNN-based methods, are presented. For the former, according to the codebook size, we organize the literature into using large/medium-sized/small codebooks. For the latter, we discuss three lines of methods, i.e., using pre-trained or fine-tuned CNN models, and hybrid methods. The first two perform a single-pass of an image to the network, while the last category employs a patch-based feature extraction scheme. This survey presents milestones in modern instance retrieval, reviews a broad selection of previous works in different categories, and provides insights on the connection between SIFT and CNN-based methods. After analyzing and comparing retrieval performance of different categories on several datasets, we discuss promising directions towards generic and specialized instance retrieval.",
"title": ""
},
{
"docid": "neg:1840568_4",
"text": "BACKGROUND\nNewborns with critical health conditions are monitored in neonatal intensive care units (NICU). In NICU, one of the most important problems that they face is the risk of brain injury. There is a need for continuous monitoring of newborn's brain function to prevent any potential brain injury. This type of monitoring should not interfere with intensive care of the newborn. Therefore, it should be non-invasive and portable.\n\n\nMETHODS\nIn this paper, a low-cost, battery operated, dual wavelength, continuous wave near infrared spectroscopy system for continuous bedside hemodynamic monitoring of neonatal brain is presented. The system has been designed to optimize SNR by optimizing the wavelength-multiplexing parameters with special emphasis on safety issues concerning burn injuries. SNR improvement by utilizing the entire dynamic range has been satisfied with modifications in analog circuitry.\n\n\nRESULTS AND CONCLUSION\nAs a result, a shot-limited SNR of 67 dB has been achieved for 10 Hz temporal resolution. The system can operate more than 30 hours without recharging when an off-the-shelf 1850 mAh-7.2 V battery is used. Laboratory tests with optical phantoms and preliminary data recorded in NICU demonstrate the potential of the system as a reliable clinical tool to be employed in the bedside regional monitoring of newborn brain metabolism under intensive care.",
"title": ""
},
{
"docid": "neg:1840568_5",
"text": "Recent advances in signal processing and the revolution by the mobile technologies have spurred several innovations in all the areas and albeit more so in home based tele-medicine. We used variational mode decomposition (VMD) based denoising on large-scale phonocardiogram (PCG) data sets and achieved better accuracy. We have also implemented a reliable, external hardware and mobile based phonocardiography system that uses VMD signal processing technique to denoise the PCG signal that visually displays the waveform and inform the end-user and send the data to cloud based analytics system.",
"title": ""
},
{
"docid": "neg:1840568_6",
"text": "This work examines the implications of uncoupled intersections with local realworld topology and sensor setup on traffic light control approaches. Control approaches are evaluated with respect to: Traffic flow, fuel consumption and noise emission at intersections. The real-world road network of Friedrichshafen is depicted, preprocessed and the present traffic light controlled intersections are modeled with respect to state space and action space. Different strategies, containing fixed-time, gap-based and time-based control approaches as well as our deep reinforcement learning based control approach, are implemented and assessed. Our novel DRL approach allows for modeling the TLC action space, with respect to phase selection as well as selection of transition timings. It was found that real-world topologies, and thus irregularly arranged intersections have an influence on the performance of traffic light control approaches. This is even to be observed within the same intersection types (n-arm, m-phases). Moreover we could show, that these influences can be efficiently dealt with by our deep reinforcement learning based control approach.",
"title": ""
},
{
"docid": "neg:1840568_7",
"text": "A case of a fatal cardiac episode resulting from an unusual autoerotic practice involving the use of a vacuum cleaner, is presented. Scene investigation and autopsy findings are discussed.",
"title": ""
},
{
"docid": "neg:1840568_8",
"text": "Physical unclonable function (PUF) leverages the immensely complex and irreproducible nature of physical structures to achieve device authentication and secret information storage. To enhance the security and robustness of conventional PUFs, reconfigurable physical unclonable functions (RPUFs) with dynamically refreshable challenge-response pairs (CRPs) have emerged recently. In this paper, we propose two novel physically reconfigurable PUF (P-RPUF) schemes that exploit the process parameter variability and programming sensitivity of phase change memory (PCM) for CRP reconfiguration and evaluation. The first proposed PCM-based P-RPUF scheme extracts its CRPs from the measurable differences of the PCM cell resistances programmed by randomly varying pulses. An imprecisely controlled regulator is used to protect the privacy of the CRP in case the configuration state of the RPUF is divulged. The second proposed PCM-based RPUF scheme produces the random response by counting the number of programming pulses required to make the cell resistance converge to a predetermined target value. The merging of CRP reconfiguration and evaluation overcomes the inherent vulnerability of P-RPUF devices to malicious prediction attacks by limiting the number of accessible CRPs between two consecutive reconfigurations to only one. Both schemes were experimentally evaluated on 180-nm PCM chips. The obtained results demonstrated their quality for refreshable key generation when appropriate fuzzy extractor algorithms are incorporated.",
"title": ""
},
{
"docid": "neg:1840568_9",
"text": "Prediction of popularity has profound impact for social media, since it offers opportunities to reveal individual preference and public attention from evolutionary social systems. Previous research, although achieves promising results, neglects one distinctive characteristic of social data, i.e., sequentiality. For example, the popularity of online content is generated over time with sequential post streams of social media. To investigate the sequential prediction of popularity, we propose a novel prediction framework called Deep Temporal Context Networks (DTCN) by incorporating both temporal context and temporal attention into account. Our DTCN contains three main components, from embedding, learning to predicting. With a joint embedding network, we obtain a unified deep representation of multi-modal user-post data in a common embedding space. Then, based on the embedded data sequence over time, temporal context learning attempts to recurrently learn two adaptive temporal contexts for sequential popularity. Finally, a novel temporal attention is designed to predict new popularity (the popularity of a new userpost pair) with temporal coherence across multiple time-scales. Experiments on our released image dataset with about 600K Flickr photos demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms, with an average of 21.51% relative performance improvement in the popularity prediction (Spearman Ranking Correlation).",
"title": ""
},
{
"docid": "neg:1840568_10",
"text": "Principles of muscle coordination in gait have been based largely on analyses of body motion, ground reaction force and EMG measurements. However, data from dynamical simulations provide a cause-effect framework for analyzing these measurements; for example, Part I (Gait Posture, in press) of this two-part review described how force generation in a muscle affects the acceleration and energy flow among the segments. This Part II reviews the mechanical and coordination concepts arising from analyses of simulations of walking. Simple models have elucidated the basic multisegmented ballistic and passive mechanics of walking. Dynamical models driven by net joint moments have provided clues about coordination in healthy and pathological gait. Simulations driven by muscle excitations have highlighted the partial stability afforded by muscles with their viscoelastic-like properties and the predictability of walking performance when minimization of metabolic energy per unit distance is assumed. When combined with neural control models for exciting motoneuronal pools, simulations have shown how the integrative properties of the neuro-musculo-skeletal systems maintain a stable gait. Other analyses of walking simulations have revealed how individual muscles contribute to trunk support and progression. Finally, we discuss how biomechanical models and simulations may enhance our understanding of the mechanics and muscle function of walking in individuals with gait impairments.",
"title": ""
},
{
"docid": "neg:1840568_11",
"text": "We study the use of randomized value functions to guide deep exploration in reinforcement learning. This offers an elegant means for synthesizing statistically and computationally efficient exploration with common practical approaches to value function learning. We present several reinforcement learning algorithms that leverage randomized value functions and demonstrate their efficacy through computational studies. We also prove a regret bound that establishes statistical efficiency with a tabular representation.",
"title": ""
},
{
"docid": "neg:1840568_12",
"text": "A geometric dissection is a set of pieces which can be assembled in different ways to form distinct shapes. Dissections are used as recreational puzzles because it is striking when a single set of pieces can construct highly different forms. Existing techniques for creating dissections find pieces that reconstruct two input shapes exactly. Unfortunately, these methods only support simple, abstract shapes because an excessive number of pieces may be needed to reconstruct more complex, naturalistic shapes. We introduce a dissection design technique that supports such shapes by requiring that the pieces reconstruct the shapes only approximately. We find that, in most cases, a small number of pieces suffices to tightly approximate the input shapes. We frame the search for a viable dissection as a combinatorial optimization problem, where the goal is to search for the best approximation to the input shapes using a given number of pieces. We find a lower bound on the tightness of the approximation for a partial dissection solution, which allows us to prune the search space and makes the problem tractable. We demonstrate our approach on several challenging examples, showing that it can create dissections between shapes of significantly greater complexity than those supported by previous techniques.",
"title": ""
},
{
"docid": "neg:1840568_13",
"text": "In unsupervised semantic role labeling, identifying the role of an argument is usually informed by its dependency relation with the predicate. In this work, we propose a neural model to learn argument embeddings from the context by explicitly incorporating dependency relations as multiplicative factors, which bias argument embeddings according to their dependency roles. Our model outperforms existing state-of-the-art embeddings in unsupervised semantic role induction on the CoNLL 2008 dataset and the SimLex999 word similarity task. Qualitative results demonstrate our model can effectively bias argument embeddings based on their dependency role.",
"title": ""
},
{
"docid": "neg:1840568_14",
"text": "Wireless sensor networks (WSNs) have recently gained a lot of attention by scientific community. Small and inexpensive devices with low energy consumption and limited computing resources are increasingly being adopted in different application scenarios including environmental monitoring, target tracking and biomedical health monitoring. In many such applications, node localization is inherently one of the system parameters. Localization process is necessary to report the origin of events, routing and to answer questions on the network coverage ,assist group querying of sensors. In general, localization schemes are classified into two broad categories: range-based and range-free. However, it is difficult to classify hybrid solutions as range-based or range-free. In this paper we make this classification easy, where range-based schemes and range-free schemes are divided into two types: fully schemes and hybrid schemes. Moreover, we compare the most relevant localization algorithms and discuss the future research directions for wireless sensor networks localization schemes.",
"title": ""
},
{
"docid": "neg:1840568_15",
"text": "BACKGROUND\nThe deltoid ligament has both superficial and deep layers and consists of up to six ligamentous bands. The prevalence of the individual bands is variable, and no consensus as to which bands are constant or variable exists. Although other studies have looked at the variance in the deltoid anatomy, none have quantified the distance to relevant osseous landmarks.\n\n\nMETHODS\nThe deltoid ligaments from fourteen non-paired, fresh-frozen cadaveric specimens were isolated and the ligamentous bands were identified. The lengths, footprint areas, orientations, and distances from relevant osseous landmarks were measured with a three-dimensional coordinate measurement device.\n\n\nRESULTS\nIn all specimens, the tibionavicular, tibiospring, and deep posterior tibiotalar ligaments were identified. Three additional bands were variable in our specimen cohort: the tibiocalcaneal, superficial posterior tibiotalar, and deep anterior tibiotalar ligaments. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament. The origins from the distal center of the intercollicular groove were 16.1 mm (95% confidence interval, 14.7 to 17.5 mm) for the tibionavicular ligament, 13.1 mm (95% confidence interval, 11.1 to 15.1 mm) for the tibiospring ligament, and 7.6 mm (95% confidence interval, 6.7 to 8.5 mm) for the deep posterior tibiotalar ligament. Relevant to other pertinent osseous landmarks, the tibionavicular ligament inserted at 9.7 mm (95% confidence interval, 8.4 to 11.0 mm) from the tuberosity of the navicular, the tibiospring inserted at 35% (95% confidence interval, 33.4% to 36.6%) of the spring ligament's posteroanterior distance, and the deep posterior tibiotalar ligament inserted at 17.8 mm (95% confidence interval, 16.3 to 19.3 mm) from the posteromedial talar tubercle.\n\n\nCONCLUSIONS\nThe tibionavicular, tibiospring, and deep posterior tibiotalar ligament bands were constant components of the deltoid ligament. The deep posterior tibiotalar ligament was the largest band of the deltoid ligament.\n\n\nCLINICAL RELEVANCE\nThe anatomical data regarding the deltoid ligament bands in this study will help to guide anatomical placement of repairs and reconstructions for deltoid ligament injury or instability.",
"title": ""
},
{
"docid": "neg:1840568_16",
"text": "The performance of a brushless motor which has a surface-mounted magnet rotor and a trapezoidal back-emf waveform when it is operated in BLDC and BLAC modes is evaluated, in both constant torque and flux-weakening regions, assuming the same torque, the same peak current, and the same rms current. It is shown that although the motor has an essentially trapezoidal back-emf waveform, the output power and torque when operated in the BLAC mode in the flux-weakening region are significantly higher than that can be achieved when operated in the BLDC mode due to the influence of the winding inductance and back-emf harmonics",
"title": ""
},
{
"docid": "neg:1840568_17",
"text": "The aims of this study were as follows: (a) to examine the possible presence of an identifiable group of stable victims of cyberbullying; (b) to analyze whether the stability of cybervictimization is associated with the perpetration of cyberbullying and bully–victim status (i.e., being only a bully, only a victim, or being both a bully and a victim); and (c) to test whether stable victims report a greater number of psychosocial problems compared to non-stable victims and uninvolved peers. A sample of 680 Spanish adolescents (410 girls) completed self-report measures on cyberbullying perpetration and victimization, depressive symptoms, and problematic alcohol use at two time points that were separated by one year. The results of cluster analyses suggested the existence of four distinct victimization profiles: ‘‘Stable-Victims,’’ who reported victimization at both Time 1 and Time 2 (5.8% of the sample), ‘‘Time 1-Victims,’’ and ‘‘Time 2-Victims,’’ who presented victimization only at one time (14.5% and 17.6%, respectively), and ‘‘Non-Victims,’’ who presented minimal victimization at both times (61.9% of the sample). Stable victims were more likely to fall into the ‘‘bully–victim’’ category and presented more cyberbullying perpetration than the rest of the groups. Overall, the Stable Victims group displayed higher scores of depressive symptoms and problematic alcohol use over time than the other groups, whereas the Non-Victims displayed the lowest of these scores. These findings have major implications for prevention and intervention efforts aimed at reducing cyberbullying and its consequences. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840568_18",
"text": "The bandit problem is revisited and considered under the PAC model. Our main contribution in this part is to show that given n arms, it suffices to pull the arms O( n 2 log 1 δ ) times to find an -optimal arm with probability of at least 1 − δ. This is in contrast to the naive bound of O( n 2 log n δ ). We derive another algorithm whose complexity depends on the specific setting of the rewards, rather than the worst case setting. We also provide a matching lower bound. We show how given an algorithm for the PAC model Multi-Armed Bandit problem, one can derive a batch learning algorithm for Markov Decision Processes. This is done essentially by simulating Value Iteration, and in each iteration invoking the multi-armed bandit algorithm. Using our PAC algorithm for the multi-armed bandit problem we improve the dependence on the number of actions.",
"title": ""
},
{
"docid": "neg:1840568_19",
"text": "This paper presents a novel technique, anatomy, for publishing sensitive data. Anatomy releases all the quasi-identifier and sensitive values directly in two separate tables. Combined with a grouping mechanism, this approach protects privacy, and captures a large amount of correlation in the microdata. We develop a linear-time algorithm for computing anatomized tables that obey the l-diversity privacy requirement, and minimize the error of reconstructing the microdata. Extensive experiments confirm that our technique allows significantly more effective data analysis than the conventional publication method based on generalization. Specifically, anatomy permits aggregate reasoning with average error below 10%, which is lower than the error obtained from a generalized table by orders of magnitude.",
"title": ""
}
] |
1840569 | A vision-guided autonomous quadrotor in an air-ground multi-robot system | [
{
"docid": "pos:1840569_0",
"text": "This paper presents a nonlinear controller for hovering flight and touchdown control for a vertical take-off and landing (VTOL) unmanned aerial vehicle (UAV) using inertial optical flow. The VTOL vehicle is assumed to be a rigid body, equipped with a minimum sensor suite (camera and IMU), manoeuvring over a textured flat target plane. Two different tasks are considered in this paper: the first concerns the stability of hovering flight and the second one concerns regulation of automatic landing using the divergent optical flow as feedback information. Experimental results on a quad-rotor UAV demonstrate the performance of the proposed control strategy.",
"title": ""
},
{
"docid": "pos:1840569_1",
"text": "This paper presents a Miniature Aerial Vehicle (MAV) capable of handsoff autonomous operation within indoor environments. Our prototype is a Quadrotor weighing approximately 600g, with a diameter of 550mm, which carries the necessary electronics for stability control, altitude control, collision avoidance and anti-drift control. This MAV is equipped with three rate gyroscopes, three accelerometers, one ultrasonic sensor, four infrared sensors, a high-speed motor controller and a flight computer. Autonomous flight tests have been carried out in a 7x6-m room.",
"title": ""
}
] | [
{
"docid": "neg:1840569_0",
"text": "We present the results of a study of definite descriptions use in written texts aimed at assessing the feasibility of annotating corpora with information about definite description interpretation. We ran two experiments, in which subjects were asked to classify the uses of definite descriptions in a corpus of 33 newspaper articles, containing a total of 1412 definite descriptions. We measured the agreement among annotators about the classes assigned to definite descriptions, as well as the agreement about the antecedent assigned to those definites that the annotators classified as being related to an antecedent in the text. Themost interesting result of this study from a corpus annotation perspective was the rather low agreement (K=0.63) that we obtained using versions of Hawkins’ and Prince’s classification schemes; better results (K=0.76) were obtained using the simplified scheme proposed by Fraurud that includes only two classes, first-mention and subsequent-mention. The agreement about antecedents was also not complete. These findings raise questions concerning the strategy of evaluating systems for definite description interpretation by comparing their results with a standardized annotation. From a linguistic point of view, the most interesting observations were the great number of discourse-newdefinites in our corpus (in one of our experiments, about 50% of the definites in the collection were classified as discourse-new, 30% as anaphoric, and 18% as associative/bridging) and the presence of definites which did not seem to require a complete disambiguation. This paper will appear in Computational Linguistics.",
"title": ""
},
{
"docid": "neg:1840569_1",
"text": "With sustained growth of software complexity, finding security vulnerabilities in operating systems has become an important necessity. Nowadays, OS are shipped with thousands of binary executables. Unfortunately, methodologies and tools for an OS scale program testing within a limited time budget are still missing.\n In this paper we present an approach that uses lightweight static and dynamic features to predict if a test case is likely to contain a software vulnerability using machine learning techniques. To show the effectiveness of our approach, we set up a large experiment to detect easily exploitable memory corruptions using 1039 Debian programs obtained from its bug tracker, collected 138,308 unique execution traces and statically explored 76,083 different subsequences of function calls. We managed to predict with reasonable accuracy which programs contained dangerous memory corruptions.\n We also developed and implemented VDiscover, a tool that uses state-of-the-art Machine Learning techniques to predict vulnerabilities in test cases. Such tool will be released as open-source to encourage the research of vulnerability discovery at a large scale, together with VDiscovery, a public dataset that collects raw analyzed data.",
"title": ""
},
{
"docid": "neg:1840569_2",
"text": "The exact nature of the relationship among species range sizes, speciation, and extinction events is not well understood. The factors that promote larger ranges, such as broad niche widths and high dispersal abilities, could increase the likelihood of encountering new habitats but also prevent local adaptation due to high gene flow. Similarly, low dispersal abilities or narrower niche widths could cause populations to be isolated, but such populations may lack advantageous mutations due to low population sizes. Here we present a large-scale, spatially explicit, individual-based model addressing the relationships between species ranges, speciation, and extinction. We followed the evolutionary dynamics of hundreds of thousands of diploid individuals for 200,000 generations. Individuals adapted to multiple resources and formed ecological species in a multidimensional trait space. These species varied in niche widths, and we observed the coexistence of generalists and specialists on a few resources. Our model shows that species ranges correlate with dispersal abilities but do not change with the strength of fitness trade-offs; however, high dispersal abilities and low resource utilization costs, which favored broad niche widths, have a strong negative effect on speciation rates. An unexpected result of our model is the strong effect of underlying resource distributions on speciation: in highly fragmented landscapes, speciation rates are reduced.",
"title": ""
},
{
"docid": "neg:1840569_3",
"text": "BACKGROUND\nLimited evidence exists to show that adding a third agent to platinum-doublet chemotherapy improves efficacy in the first-line advanced non-small-cell lung cancer (NSCLC) setting. The anti-PD-1 antibody pembrolizumab has shown efficacy as monotherapy in patients with advanced NSCLC and has a non-overlapping toxicity profile with chemotherapy. We assessed whether the addition of pembrolizumab to platinum-doublet chemotherapy improves efficacy in patients with advanced non-squamous NSCLC.\n\n\nMETHODS\nIn this randomised, open-label, phase 2 cohort of a multicohort study (KEYNOTE-021), patients were enrolled at 26 medical centres in the USA and Taiwan. Patients with chemotherapy-naive, stage IIIB or IV, non-squamous NSCLC without targetable EGFR or ALK genetic aberrations were randomly assigned (1:1) in blocks of four stratified by PD-L1 tumour proportion score (<1% vs ≥1%) using an interactive voice-response system to 4 cycles of pembrolizumab 200 mg plus carboplatin area under curve 5 mg/mL per min and pemetrexed 500 mg/m2 every 3 weeks followed by pembrolizumab for 24 months and indefinite pemetrexed maintenance therapy or to 4 cycles of carboplatin and pemetrexed alone followed by indefinite pemetrexed maintenance therapy. The primary endpoint was the proportion of patients who achieved an objective response, defined as the percentage of patients with radiologically confirmed complete or partial response according to Response Evaluation Criteria in Solid Tumors version 1.1 assessed by masked, independent central review, in the intention-to-treat population, defined as all patients who were allocated to study treatment. Significance threshold was p<0·025 (one sided). Safety was assessed in the as-treated population, defined as all patients who received at least one dose of the assigned study treatment. This trial, which is closed for enrolment but continuing for follow-up, is registered with ClinicalTrials.gov, number NCT02039674.\n\n\nFINDINGS\nBetween Nov 25, 2014, and Jan 25, 2016, 123 patients were enrolled; 60 were randomly assigned to the pembrolizumab plus chemotherapy group and 63 to the chemotherapy alone group. 33 (55%; 95% CI 42-68) of 60 patients in the pembrolizumab plus chemotherapy group achieved an objective response compared with 18 (29%; 18-41) of 63 patients in the chemotherapy alone group (estimated treatment difference 26% [95% CI 9-42%]; p=0·0016). The incidence of grade 3 or worse treatment-related adverse events was similar between groups (23 [39%] of 59 patients in the pembrolizumab plus chemotherapy group and 16 [26%] of 62 in the chemotherapy alone group). The most common grade 3 or worse treatment-related adverse events in the pembrolizumab plus chemotherapy group were anaemia (seven [12%] of 59) and decreased neutrophil count (three [5%]); an additional six events each occurred in two (3%) for acute kidney injury, decreased lymphocyte count, fatigue, neutropenia, and sepsis, and thrombocytopenia. In the chemotherapy alone group, the most common grade 3 or worse events were anaemia (nine [15%] of 62) and decreased neutrophil count, pancytopenia, and thrombocytopenia (two [3%] each). 
One (2%) of 59 patients in the pembrolizumab plus chemotherapy group experienced treatment-related death because of sepsis compared with two (3%) of 62 patients in the chemotherapy group: one because of sepsis and one because of pancytopenia.\n\n\nINTERPRETATION\nCombination of pembrolizumab, carboplatin, and pemetrexed could be an effective and tolerable first-line treatment option for patients with advanced non-squamous NSCLC. This finding is being further explored in an ongoing international, randomised, double-blind, phase 3 study.\n\n\nFUNDING\nMerck & Co.",
"title": ""
},
{
"docid": "neg:1840569_4",
"text": "The increased accessibility of digitally sourced data and advance technology to analyse it drives many industries to digital change. Many global businesses are talking about the potential of big data and they believe that analysing big data sets can help businesses derive competitive insight and shape organisations’ marketing strategy decisions. Potential impact of digital technology varies widely by industry. Sectors such as financial services, insurances and mobile telecommunications which are offering virtual rather than physical products are more likely highly susceptible to digital transformation. Howeverthe interaction between digital technology and organisations is complex and there are many barriers for to effective digital change which are presented by big data. Changes brought by technology challenges both researchers and practitioners. Various global business and digital tends have highlights the emergent need for collaboration between academia and market practitioners. There are “theories-in – use” which are academically rigorous but still there is gap between implementation of theory in practice. In this paper we identify theoretical dilemmas of the digital revolution and importance of challenges within practice. Preliminary results show that those industries that tried to narrow the gap and put necessary mechanisms in place to make use of big data for marketing are upfront on the market. INTRODUCTION Advances in digital technology has made a significant impact on marketing theory and practice. Technology expands the opportunity to capture better quality customer data, increase focus on customer relationship, rise of customer insight and Customer Relationship Management (CRM). Availability of big data made traditional marketing tools to work more powerful and innovative way. In current digital age of marketing some predictions of effects of the digital changes have come to function but still there is no definite answer to what works and what doesn’t in terms of implementing the changes in an organisation context. The choice of this specific topic is motivated by the need for a better understanding for impact of digital on marketing fild.This paper will discusses the potential positive impact of the big data on digital marketing. It also present the evidence of positive views in academia and highlight the gap between academia and practices. The main focus is on understanding the gap and providing recommendation for fillingit in. The aim of this paper is to identify theoretical dilemmas of the digital revolution and importance of challenges within practice. Preliminary results presented here show that those industries that tried to narrow the gap and put necessary mechanisms in place to make use of big data for marketing are upfront on the market. In our discussion we shall identify these industries and present evaluations of which industry sectors would need to be looking at understanding of impact that big data may have on their practices and businesses. Digital Marketing and Big data In early 90’s when views about digital changes has started Parsons at el (1998) believed that to achieve success in digital marketing consumer marketers should create a new model with five essential elements in new media environment. Figure below shows five success factors and issues that marketers should address around it. Figure 1. 
Digital marketing Framework and levers Parson et al (1998) International Conference on Communication, Media, Technology and Design 24 26 April 2014, Istanbul – Turkey 147 Today in digital age of marketing some predictions of effects of this changes have come to function but still there is no define answers on what works and what doesn’t in terms of implement it in organisation context.S. Dibb (2012). There are deferent explanations, arguments and views about impact of digital on marketing strategy in the literature. At first, it is important to define what is meant by digital marketing, what are the challenges brought by it and then understand how it is adopted. Simply, Digital Marketing (2012) can be defined as “a sub branch of traditional Marketing using modern digital channels for the placement of products such as downloadable music, and primarily for communicating with stakeholders e.g. customers and investors about brand, products and business progress”. According to (Smith, 2007) the digital marketing refers “The use of digital technologies to create an integrated, targeted and measurable communication which helps to acquire and retain customers while building deeper relationships with them”. There are a number of accepted theoretical frameworks however as Parsons et al (1998) suggested potentialities offered by digital marketing need to consider carefully where and how to build in each organisation by the senior managers. The most recent developments in this area has been triggered by growing amount of digital data now known as Big Data. Tech American Foundation (2004) defines Big Data as a “term that describes large volumes of high velocity, complex and variable data that require advanced techniques and technologies to enable the capture storage, distribution, management and analysis of information”. D. Krajicek (2013) argues that the big challenge of Big Data is the ability to focus on what is meaningful not on what is possible, with so much information at their fingerprint marketers and their research partners can and often do fall into “more is better” fallacy. Knowing something and knowing it quickly is not enough. Therefore to have valuable Big data it needs to be sorted by professional people who have skills to understand dynamics of market and can identify what is relevant and meaningful. G. Day (2011). Data should be used for achieve competitive advantage by creating effective relationship with the target segments. According to K. Kendall (2014) with de right capabilities, you can take a whole range of new data sources such as web browsing, social data and geotracking data and develop much more complete profile about your customers and then with this information you can segment better. Successful Big Data initiatives should start with a specific and clearly defined business requirement then leaders of these initiatives need to assess the technical requirement and identify gap in their capabilities and then plan the investment to close those gaps (Big Data Analytics 2014) The impact and current challenges Bileviciene (2012) suggest that well conducted market research is the basis for successful marketing and well conducted study is the basis of successful market segmentation. Generally marketing management is broken down into a series of steps, which include market research, segmentation of markets and positioning the company’s offering in such a way as to appeal to the targeted segments. 
(OU Business school, 2007) Market segmentation refers to the process of defining and subdividing a large homogenous market into clearly identifiable segments having similar needs, wants, or demand characteristics. Its objective is to design a marketing mix that precisely matches the expectations of customers in the targeted segment (Business dictation, 2013). The goal for segmentation is to break down the target market into different consumers groups. According to Kotler and Armstrong (2011) traditionally customers were classified based on four types of segmentation variables, geographic, demographic, psychographic and behavioural. There are many focuses, beliefs and arguments in the field of market segmentation. Many researchers believe that the traditional variables of demographic and geographic segments are out-dated and the theory regarding segmentation has become too narrow (Quinn and Dibb, 2010). According to Lin (2002), these variables should be a part of a new, expanded view of the market segmentation theory that focuses more on customer’s personalities and values. Dibb and Simkin (2009) argue that priorities of market segmentation research aim to exploring the applicability of new segmentation bases across different products and contexts, developing more flexible data analysis techniques, creating new research designs and data collection approaches, however practical questions about implementation and integration have received less attention. According to S. Dibb (2012) in academic perspective segmentation still has strategic and tactical role as shown on figure below. But in practice as Dibb argues “some things have not changed” and: Segmentation’s strategic role still matters Implementation is as much of a pain as always Even the smartest segments need embedding International Conference on Communication, Media, Technology and Design 24 26 April 2014, Istanbul – Turkey 148 Figure 2: role of segmentation S. Dibb (2012) Dilemmas with the Implementation of digital change arise for various reasons. Some academics believed that greater access to data would reduce the need for more traditional segmentation but research done on the field shows that traditional segmentation works equal to CRM ( W. Boulding et al 2005). Even thought the marketing literature offers insights for improving the effectiveness of digital changes in marketing filed there is limitation on how an organisation adapts its customer information processes once the technology is adjusted into the organisation. (J. Peltier et al 2012) suggest that there is an urgent need for data management studies that captures insights from other disciplines including organisational behaviour, change management and technology implementation. Reibstein et al (2009) also highlights the emergent need for collaboration between academia and market practitioners. They point out that there is a “digital skill gap” within the marketing filed. Authors argue that there are “theories-in – use” which are academically rigorous but still there is gap between implementation of theory in practice. Changes brought by technology and availability of di",
"title": ""
},
{
"docid": "neg:1840569_5",
"text": "We present in this paper a new approach that uses supervised machine learning techniques to improve the performances of optimization algorithms in the context of mixed-integer programming (MIP). We focus on the branch-and-bound (B&B) algorithm, which is the traditional algorithm used to solve MIP problems. In B&B, variable branching is the key component that most conditions the efficiency of the optimization. Good branching strategies exist but are computationally expensive and usually hinder the optimization rather than improving it. Our approach consists in imitating the decisions taken by a supposedly good branching strategy, strong branching in our case, with a fast approximation. To this end, we develop a set of features describing the state of the ongoing optimization and show how supervised machine learning can be used to approximate the desired branching strategy. The approximated function is created by a supervised machine learning algorithm from a set of observed branching decisions taken by the target strategy. The experiments performed on randomly generated and standard benchmark (MIPLIB) problems show promising results.",
"title": ""
},
{
"docid": "neg:1840569_6",
"text": "We examined salivary C-reactive protein (CRP) levels in the context of tobacco smoke exposure (TSE) in healthy youth. We hypothesized that there would be a dose-response relationship between TSE status and salivary CRP levels. This work is a pilot study (N = 45) for a larger investigation in which we aim to validate salivary CRP against serum CRP, the gold standard measurement of low-grade inflammation. Participants were healthy youth with no self-reported periodontal disease, no objectively measured obesity/adiposity, and no clinical depression, based on the Beck Depression Inventory (BDI-II). We assessed tobacco smoking and confirmed smoking status (non-smoking, passive smoking, and active smoking) with salivary cotinine measurement. We measured salivary CRP by the ELISA method. We controlled for several potential confounders. We found evidence for the existence of a dose-response relationship between the TSE status and salivary CRP levels. Our preliminary findings indicate that salivary CRP seems to have a similar relation to TSE as its widely used serum (systemic inflammatory) biomarker counterpart.",
"title": ""
},
{
"docid": "neg:1840569_7",
"text": "Head drop is a symptom commonly seen in patients with amyotrophic lateral sclerosis. These patients usually experience neck pain and have difficulty in swallowing and breathing. Static neck braces are used in current treatment. These braces, however, immobilize the head in a single configuration, which causes muscle atrophy. This letter presents the design of a dynamic neck brace for the first time in the literature, which can both measure and potentially assist in the head motion of the human user. This letter introduces the brace design method and validates its capability to perform measurements. The brace is designed based on kinematics data collected from a healthy individual via a motion capture system. A pilot study was conducted to evaluate the wearability of the brace and the accuracy of measurements with the brace. This study recruited ten participants who performed a series of head motions. The results of this human study indicate that the brace is wearable by individuals who vary in size, the brace allows nearly $70\\%$ of the overall range of head rotations, and the sensors on the brace give accurate motion of the head with an error of under $5^{\\circ }$ when compared to a motion capture system. We believe that this neck brace can be a valid and accurate measurement tool for human head motion. This brace will be a big improvement in the available technologies to measure head motion as these are currently done in the clinic using hand-held protractors in two orthogonal planes.",
"title": ""
},
{
"docid": "neg:1840569_8",
"text": "We propose a method to recover the shape of a 3D room from a full-view indoor panorama. Our algorithm can automatically infer a 3D shape from a collection of partially oriented superpixel facets and line segments. The core part of the algorithm is a constraint graph, which includes lines and superpixels as vertices, and encodes their geometric relations as edges. A novel approach is proposed to perform 3D reconstruction based on the constraint graph by solving all the geometric constraints as constrained linear least-squares. The selected constraints used for reconstruction are identified using an occlusion detection method with a Markov random field. Experiments show that our method can recover room shapes that can not be addressed by previous approaches. Our method is also efficient, that is, the inference time for each panorama is less than 1 minute.",
"title": ""
},
{
"docid": "neg:1840569_9",
"text": "We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera. Our framework is the first one to provide simultaneously comprehensive facial motion tracking and a detailed 3D model of the user's head. Anyone's head can be instantly reconstructed and his facial motion captured without requiring any training or pre-scanning. The user starts facing the camera with a neutral expression in the first frame, but is free to move, talk and change his face expression as he wills otherwise. The facial motion is tracked using a blendshape representation while the fine geometric details are captured using a Bump image mapped over the template mesh. We propose an efficient algorithm to grow and refine the 3D model of the head on-the-fly and in real-time. We demonstrate robust and high-fidelity simultaneous facial motion tracking and 3D head modeling results on a wide range of subjects with various head poses and facial expressions. Our proposed method offers interesting possibilities for animation production and 3D video telecommunications.",
"title": ""
},
{
"docid": "neg:1840569_10",
"text": "Threats from social engineering can cause organisations severe damage if they are not considered and managed. In order to understand how to manage those threats, it is important to examine reasons why organisational employees fall victim to social engineering. In this paper, the objective is to understand security behaviours in practice by investigating factors that may cause an individual to comply with a request posed by a perpetrator. In order to attain this objective, we collect data through a scenario-based survey and conduct phishing experiments in three organisations. The results from the experiment reveal that the degree of target information in an attack increases the likelihood that an organisational employee fall victim to an actual attack. Further, an individual’s trust and risk behaviour significantly affects the actual behaviour during the phishing experiment. Computer experience at work, helpfulness and gender (females tend to be less susceptible to a generic attack than men), has a significant correlation with behaviour reported by respondents in the scenario-based survey. No correlation between the performance in the scenario-based survey and experiment was found. We argue that the result does not imply that one or the other method should be ruled out as they have both advantages and disadvantages which should be considered in the context of collecting data in the critical domain of information security. Discussions of the findings, implications and recommendations for future research are further provided.",
"title": ""
},
{
"docid": "neg:1840569_11",
"text": "The subject of this talk is Morse landscapes of natural functionals on infinitedimensional moduli spaces appearing in Riemannian geometry. First, we explain how recursion theory can be used to demonstrate that for many natural functionals on spaces of Riemannian structures, spaces of submanifolds, etc., their Morse landscapes are always more complicated than what follows from purely topological reasons. These Morse landscapes exhibit non-trivial “deep” local minima, cycles in sublevel sets that become nullhomologous only in sublevel sets corresponding to a much higher value of functional, etc. Our second topic is Morse landscapes of the length functional on loop spaces. Here the main conclusion (obtained jointly with Regina Rotman) is that these Morse landscapes can be much more complicated than what follows from topological considerations only if the length functional has “many” “deep” local minima, and the values of the length at these local minima are not “very large”. Mathematics Subject Classification (2000). Primary 53C23, 58E11, 53C20; Secondary 03D80, 68Q30, 53C40, 58E05.",
"title": ""
},
{
"docid": "neg:1840569_12",
"text": "ABSTRACT Evidence of Sedona magnetic anomaly and brainwave EEG synchronization can be demonstrated with portable equipment on site in the field, during sudden magnetic events. Previously, we have demonstrated magnetic anomaly charts recorded in both known and unrecognized Sedona vortex activity locations. We have also shown a correlation or amplification of vortex phenomena with Schumann Resonance. Adding the third measurable parameter of brain wave activity, we demonstrate resonance and amplification among them. We suggest tiny magnetic crystals, biogenic magnetite, make human beings highly sensitive to ELF field fluctuations. Biological Magnetite could act as a transducer of both low frequency magnetic fields and RF fields.",
"title": ""
},
{
"docid": "neg:1840569_13",
"text": "Knowledge graph construction consists of two tasks: extracting information from external resources (knowledge population) and inferring missing information through a statistical analysis on the extracted information (knowledge completion). In many cases, insufficient external resources in the knowledge population hinder the subsequent statistical inference. The gap between these two processes can be reduced by an incremental population approach. We propose a new probabilistic knowledge graph factorisation method that benefits from the path structure of existing knowledge (e.g. syllogism) and enables a common modelling approach to be used for both incremental population and knowledge completion tasks. More specifically, the probabilistic formulation allows us to develop an incremental population algorithm that trades off exploitation-exploration. Experiments on three benchmark datasets show that the balanced exploitation-exploration helps the incremental population, and the additional path structure helps to predict missing information in knowledge completion.",
"title": ""
},
{
"docid": "neg:1840569_14",
"text": "A survey of mental health problems of university students was carried out on 1850 participants in the age range 19-26 years. An indigenous Student Problem Checklist (SPCL) developed by Mahmood & Saleem, (2011), 45 items is a rating scale, designed to determine the prevalence rate of mental health problem among university students. This scale relates to four dimensions of mental health problems as reported by university students, such as: Sense of Being Dysfunctional, Loss of Confidence, Lack of self Regulation and Anxiety Proneness. For interpretation of the overall SPCL score, the authors suggest that scores falling above one SD should be considered as indicative of severe problems, where as score about 2 SD represent very severe problems. Our finding show that 31% of the participants fall in the “severe” category, whereas 16% fall in the “very severe” category. As far as the individual dimensions are concerned, 17% respondents comprising sample of the present study fall in very severe category Sense of Being Dysfunctional, followed by Loss of Confidence (16%), Lack of Self Regulation (14%) and Anxiety Proneness (12%). These findings are in lying with similar other studies on mental health of students. The role of variables like sample characteristics, the measure used, cultural and contextual factors are discussed in determining rates as well as their implications for student counseling service in prevention and intervention.",
"title": ""
},
{
"docid": "neg:1840569_15",
"text": "Given the increase in demand for sustainable livelihoods for coastal villagers in developing countries and for the commercial eucheumoid Kappaphycus alvarezii (Doty) Doty, for the carrageenan industry, there is a trend towards introducing K. alvarezii to more countries in the tropical world for the purpose of cultivation. However, there is also increasing concern over the impact exotic species have on endemic ecosystems and biodiversity. Quarantine and introduction procedures were tested in northern Madagascar and are proposed for all future introductions of commercial eucheumoids (K. alvarezii, K. striatum and Eucheuma denticulatum). In addition, the impact and extent of introduction of K. alvarezii was measured on an isolated lagoon in the southern Lau group of Fiji. It is suggested that, in areas with high human population density, the overwhelming benefits to coastal ecosystems by commercial eucheumoid cultivation far outweigh potential negative impacts. However, quarantine and introduction procedures should be followed. In addition, introduction should only take place if a thorough survey has been conducted and indicates the site is appropriate. Subsequently, the project requires that a well designed and funded cultivation development programme, with a management plan and an assured market, is in place in order to make certain cultivation, and subsequently the introduced algae, will not be abandoned at a later date. KAPPAPHYCUS ALVAREZI",
"title": ""
},
{
"docid": "neg:1840569_16",
"text": "Social media platforms such as Twitter and Facebook enable the creation of virtual customer environments (VCEs) where online communities of interest form around specific firms, brands, or products. While these platforms can be used as another means to deliver familiar e-commerce applications, when firms fail to fully engage their customers, they also fail to fully exploit the capabilities of social media platforms. To gain business value, organizations need to incorporate community building as part of the implementation of social media.",
"title": ""
},
{
"docid": "neg:1840569_17",
"text": "Keyword search on graph structured data has attracted a lot of attention in recent years. Graphs are a natural “lowest common denominator” representation which can combine relational, XML and HTML data. Responses to keyword queries are usually modeled as trees that connect nodes matching the keywords. In this paper we address the problem of keyword search on graphs that may be significantly larger than memory. We propose a graph representation technique that combines a condensed version of the graph (the “supernode graph”) which is always memory resident, along with whatever parts of the detailed graph are in a cache, to form a multi-granular graph representation. We propose two alternative approaches which extend existing search algorithms to exploit multigranular graphs; both approaches attempt to minimize IO by directing search towards areas of the graph that are likely to give good results. We compare our algorithms with a virtual memory approach on several real data sets. Our experimental results show significant benefits in terms of reduction in IO due to our algorithms.",
"title": ""
},
{
"docid": "neg:1840569_18",
"text": "We consider the network implications of virtual reality (VR) and augmented reality (AR). While there are intrinsic challenges for AR/VR applications to deliver on their promise, their impact on the underlying infrastructure will be undeniable. We look at augmented and virtual reality and consider a few use cases where they could be deployed. These use cases define a set of requirements for the underlying network. We take a brief look at potential network architectures. We then make the case for Information-centric networks as a potential architecture to assist the deployment of AR/VR and draw a list of challenges and future research directions for next generation networks to better support AR/VR.",
"title": ""
},
{
"docid": "neg:1840569_19",
"text": "Kalman filter extensions are commonly used algorithms for nonlinear state estimation in time series. The structure of the state and measurement models in the estimation problem can be exploited to reduce the computational demand of the algorithms. We review algorithms that use different forms of structure and show how they can be combined. We show also that the exploitation of the structure of the problem can lead to improved accuracy of the estimates while reducing the computational load.",
"title": ""
}
] |
1840570 | A Muddle of Models of Motivation for Using Peer-to-Peer Economy Systems | [
{
"docid": "pos:1840570_0",
"text": "A meta-analysis of 128 studies examined the effects of extrinsic rewards on intrinsic motivation. As predicted, engagement-contingent, completion-contingent, and performance-contingent rewards significantly undermined free-choice intrinsic motivation (d = -0.40, -0.36, and -0.28, respectively), as did all rewards, all tangible rewards, and all expected rewards. Engagement-contingent and completion-contingent rewards also significantly undermined self-reported interest (d = -0.15, and -0.17), as did all tangible rewards and all expected rewards. Positive feedback enhanced both free-choice behavior (d = 0.33) and self-reported interest (d = 0.31). Tangible rewards tended to be more detrimental for children than college students, and verbal rewards tended to be less enhancing for children than college students. The authors review 4 previous meta-analyses of this literature and detail how this study's methods, analyses, and results differed from the previous ones.",
"title": ""
}
] | [
{
"docid": "neg:1840570_0",
"text": "To investigate whether a persuasive social impact game may serve as a way to increase affective learning and attitude towards the homeless, this study examined the effects of persuasive mechanics in a video game designed to put the player in the shoes of an almost-homeless person. Data were collected from 5139 students in 200 middle/high school classes across four states. Classes were assigned to treatment groups based on matching. Two treatment conditions and a control group were employed in the study. All three groups affective learning and attitude scores decreased from the immediate posttest but the game group was significantly different from the control group in a positive direction. Students who played the persuasive social impact game sustained a significantly higher score on the Affective Learning Scale (ALS) and the Attitude Towards Homelessness Inventory (ATHI) after three weeks. Overall, findings suggest that when students play a video game that is designed using persuasive mechanics an affective and attitude change can be measured empirically. 2014 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840570_1",
"text": "2018 How Does Batch Normalization Help Optimization? S. Santurkar*, D. Tsipras*, A. Ilyas*, & A. Mądry NIPS 2018 (Oral presentation) 2018 Adversarially Robust Generalization Requires More Data L. Schmidt, S. Santurkar, D. Tsipras, K. Talwar, & A. Mądry NIPS 2018 (Spotlight presentation) 2018 A Classification–Based Study of Covariate Shift in GAN Distributions S. Santurkar, L. Schmidt, & A. Mądry ICML 2018 2018 Generative Compression S. Santurkar, D. Budden, & N. Shavit PCS 2018 2017 Deep Tensor Convolution on Multicores D. Budden, A. Matveev, S. Santurkar, S. R. Chaudhuri, & N. Shavit ICML 2017",
"title": ""
},
{
"docid": "neg:1840570_2",
"text": "The goal in automatic programming is to get a computer to perform a task by telling it what needs to be done, rather than by explicitly programming it. This paper considers the task of automatically generating a computer program to enable an autonomous mobile robot to perform the task of following the wall of an irregular shaped room. A human programmer has written such a program in the style of the subsumption architecture. The solution produced by genetic programming emerges as a result of Darwinian natural selection and genetic crossover (sexual recombination) in a population of computer programs. This evolutionary process is driven by a fitness measure which communicates the nature of the task to the computer.",
"title": ""
},
{
"docid": "neg:1840570_3",
"text": "In this paper, a new omni-directional driving system with one spherical wheel is proposed. This system is able to overcome the existing driving systems with structural limitations in vertical, horizontal and diagonal movement. This driving system was composed of two stepping motors, a spherical wheel covered by a ball bearing, a weight balancer for the elimination of eccentricity, and ball plungers for balance. All parts of this structure is located at same distance on the center because the center of gravity of this system must be placed at the center of the system. An own ball bearing was designed for settled rotation and smooth direction change of a spherical wheel. The principle of an own ball bearing is the reversal of the ball mouse. Steel as the material of ball in the own ball bearing, was used for the prevention the slip with ground. One of the stepping motors is used for driving the spherical wheel. This spherical wheel is stable because of the support of ball bearing. And the other enables to move in a wanted direction while it rotates based on the central axis. The ATmega128 chip is used for the control of two stepping motors. To verify the proposed system, driving experiments was executed in variety of environments. Finally, the performance and the validity of the omni-directional driving system were confirmed.",
"title": ""
},
{
"docid": "neg:1840570_4",
"text": "This paper presents AOP++, a generic aspect-oriented programming framework in C++. It successfully incorporates AOP with object-oriented programming as well as generic programming naturally in the framework of standard C++. It innovatively makes use of C++ templates to express pointcut expressions and match join points at compile time. It innovatively creates a full-fledged aspect weaver by using template metaprogramming techniques to perform aspect weaving. It is notable that AOP++ itself is written completely in standard C++, and requires no language extensions. With the help of AOP++, C++ programmers can facilitate AOP with only a little effort.",
"title": ""
},
{
"docid": "neg:1840570_5",
"text": "We present a novel methodology for the automated detection of breast lesions from dynamic contrast-enhanced magnetic resonance volumes (DCE-MRI). Our method, based on deep reinforcement learning, significantly reduces the inference time for lesion detection compared to an exhaustive search, while retaining state-of-art accuracy. This speed-up is achieved via an attention mechanism that progressively focuses the search for a lesion (or lesions) on the appropriate region(s) of the input volume. The attention mechanism is implemented by training an artificial agent to learn a search policy, which is then exploited during inference. Specifically, we extend the deep Q-network approach, previously demonstrated on simpler problems such as anatomical landmark detection, in order to detect lesions that have a significant variation in shape, appearance, location and size. We demonstrate our results on a dataset containing 117 DCE-MRI volumes, validating run-time and accuracy of lesion detection.",
"title": ""
},
{
"docid": "neg:1840570_6",
"text": "In the recent years, new molecules have appeared in the illicit market, claimed to contain \"non-illegal\" compounds, although exhibiting important psychoactive effects; this heterogeneous and rapidly evolving class of compounds are commonly known as \"New Psychoactive Substances\" or, less properly, \"Smart Drugs\" and are easily distributed through the e-commerce or in the so-called \"Smart Shops\". They include, among other, synthetic cannabinoids, cathinones and tryptamine analogs of psylocin. Whereas cases of intoxication and death have been reported, the phenomenon appears to be largely underestimated and is a matter of concern for Public Health. One of the major points of concern depends on the substantial ineffectiveness of the current methods of toxicological screening of biological samples to identify the new compounds entering the market. These limitations emphasize an urgent need to increase the screening capabilities of the toxicology laboratories, and to develop rapid, versatile yet specific assays able to identify new molecules. The most recent advances in mass spectrometry technology, introducing instruments capable of detecting hundreds of compounds at nanomolar concentrations, are expected to give a fundamental contribution to broaden the diagnostic spectrum of the toxicological screening to include not only all these continuously changing molecules but also their metabolites. In the present paper a critical overview of the opportunities, strengths and limitations of some of the newest analytical approaches is provided, with a particular attention to liquid phase separation techniques coupled to high accuracy, high resolution mass spectrometry.",
"title": ""
},
{
"docid": "neg:1840570_7",
"text": "Skin detection from images, typically used as a preprocessing step, has a wide range of applications such as dermatology diagnostics, human computer interaction designs, and etc. It is a challenging problem due to many factors such as variation in pigment melanin, uneven illumination, and differences in ethnicity geographics. Besides, age and gender introduce additional difficulties to the detection process. It is hard to determine whether a single pixel is skin or nonskin without considering the context. An efficient traditional hand-engineered skin color detection algorithm requires extensive work by domain experts. Recently, deep learning algorithms, especially convolutional neural networks (CNNs), have achieved great success in pixel-wise labeling tasks. However, CNN-based architectures are not sufficient for modeling the relationship between pixels and their neighbors. In this letter, we integrate recurrent neural networks (RNNs) layers into the fully convolutional neural networks (FCNs), and develop an end-to-end network for human skin detection. In particular, FCN layers capture generic local features, while RNN layers model the semantic contextual dependencies in images. Experimental results on the COMPAQ and ECU skin datasets validate the effectiveness of the proposed approach, where RNN layers enhance the discriminative power of skin detection in complex background situations.",
"title": ""
},
{
"docid": "neg:1840570_8",
"text": "To explain social learning without invoking the cognitively complex concept of imitation, many learning mechanisms have been proposed. Borrowing an idea used routinely in cognitive psychology, we argue that most of these alternatives can be subsumed under a single process, priming, in which input increases the activation of stored internal representations. Imitation itself has generally been seen as a \"special faculty.\" This has diverted much research towards the all-or-none question of whether an animal can imitate, with disappointingly inconclusive results. In the great apes, however, voluntary, learned behaviour is organized hierarchically. This means that imitation can occur at various levels, of which we single out two clearly distinct ones: the \"action level,\" a rather detailed and linear specification of sequential acts, and the \"program level,\" a broader description of subroutine structure and the hierarchical layout of a behavioural \"program.\" Program level imitation is a high-level, constructive mechanism, adapted for the efficient learning of complex skills and thus not evident in the simple manipulations used to test for imitation in the laboratory. As examples, we describe the food-preparation techniques of wild mountain gorillas and the imitative behaviour of orangutans undergoing \"rehabilitation\" to the wild. Representing and manipulating relations between objects seems to be one basic building block in their hierarchical programs. There is evidence that great apes suffer from a stricter capacity limit than humans in the hierarchical depth of planning. We re-interpret some chimpanzee behaviour previously described as \"emulation\" and suggest that all great apes may be able to imitate at the program level. Action level imitation is seldom observed in great ape skill learning, and may have a largely social role, even in humans.",
"title": ""
},
{
"docid": "neg:1840570_9",
"text": "This paper proposes a simplified method to compute the systolic and diastolic blood pressures from measured oscillometric blood-pressure waveforms. Therefore, the oscillometric waveform is analyzed in the frequency domain, which reveals that the measured blood-pressure signals are heavily disturbed by nonlinear contributions. The proposed approach will linearize the measured oscillometric waveform in order to obtain a more accurate and transparent estimation of the systolic and diastolic pressure based on a robust preprocessing technique. This new approach will be compared with the Korotkoff method and a commercially available noninvasive blood-pressure meter. This allows verification if the linearized approach contains as much information as the Korotkoff method in order to calculate a correct systolic and diastolic blood pressure.",
"title": ""
},
{
"docid": "neg:1840570_10",
"text": "This article describes the results of a case study that applies Neural Networkbased Optical Character Recognition (OCR) to scanned images of books printed between 1487 and 1870 by training the OCR engine OCRopus (Breuel et al. 2013) on the RIDGES herbal text corpus (Odebrecht et al. 2017, in press). Training specific OCR models was possible because the necessary ground truth is available as error-corrected diplomatic transcriptions. The OCR results have been evaluated for accuracy against the ground truth of unseen test sets. Character and word accuracies (percentage of correctly recognized items) for the resulting machine-readable texts of individual documents range from 94% to more than 99% (character level) and from 76% to 97% (word level). This includes the earliest printed books, which were thought to be inaccessible by OCR methods until recently. Furthermore, OCR models trained on one part of the corpus consisting of books with different printing dates and different typesets (mixed models) have been tested for their predictive power on the books from the other part containing yet other fonts, mostly yielding character accuracies well above 90%. It therefore seems possible to construct generalized models trained on a range of fonts that can be applied to a wide variety of historical printings still giving good results. A moderate postcorrection effort of some pages will then enable the training of individual models with even better accuracies. Using this method, diachronic corpora including early printings can be constructed much faster and cheaper than by manual transcription. The OCR methods reported here open up the possibility of transforming our printed textual cultural 1 ar X iv :1 60 8. 02 15 3v 2 [ cs .C L ] 1 F eb 2 01 7 Springmann & Lüdeling OCR of historical printings heritage into electronic text by largely automatic means, which is a prerequisite for the mass conversion of scanned books.",
"title": ""
},
{
"docid": "neg:1840570_11",
"text": "We present a novel graph-based framework for timeline summarization, the task of creating different summaries for different timestamps but for the same topic. Our work extends timeline summarization to a multimodal setting and creates timelines that are both textual and visual. Our approach exploits the fact that news documents are often accompanied by pictures and the two share some common content. Our model optimizes local summary creation and global timeline generation jointly following an iterative approach based on mutual reinforcement and co-ranking. In our algorithm, individual summaries are generated by taking into account the mutual dependencies between sentences and images, and are iteratively refined by considering how they contribute to the global timeline and its coherence. Experiments on real-world datasets show that the timelines produced by our model outperform several competitive baselines both in terms of ROUGE and when assessed by human evaluators.",
"title": ""
},
{
"docid": "neg:1840570_12",
"text": "Bile acids are important signaling molecules that regulate cholesterol, glucose, and energy homoeostasis and have thus been implicated in the development of metabolic disorders. Their bioavailability is strongly modulated by the gut microbiota, which contributes to generation of complex individual-specific bile acid profiles. Hence, it is important to have accurate methods at hand for precise measurement of these important metabolites. Here, a rapid and sensitive liquid chromatography-tandem mass spectrometry (LC-MS/MS) method for simultaneous identification and quantitation of primary and secondary bile acids as well as their taurine and glycine conjugates was developed and validated. Applicability of the method was demonstrated for mammalian tissues, biofluids, and cell culture media. The analytical approach mainly consists of a simple and rapid liquid-liquid extraction procedure in presence of deuterium-labeled internal standards. Baseline separation of all isobaric bile acid species was achieved and a linear correlation over a broad concentration range was observed. The method showed acceptable accuracy and precision on intra-day (1.42-11.07 %) and inter-day (2.11-12.71 %) analyses and achieved good recovery rates for representative analytes (83.7-107.1 %). As a proof of concept, the analytical method was applied to mouse tissues and biofluids, but especially to samples from in vitro fermentations with gut bacteria of the family Coriobacteriaceae. The developed method revealed that the species Eggerthella lenta and Collinsella aerofaciens possess bile salt hydrolase activity, and for the first time that the species Enterorhabdus mucosicola is able to deconjugate and dehydrogenate primary bile acids in vitro.",
"title": ""
},
{
"docid": "neg:1840570_13",
"text": "Intensity inhomogeneity often occurs in real-world images, which presents a considerable challenge in image segmentation. The most widely used image segmentation algorithms are region-based and typically rely on the homogeneity of the image intensities in the regions of interest, which often fail to provide accurate segmentation results due to the intensity inhomogeneity. This paper proposes a novel region-based method for image segmentation, which is able to deal with intensity inhomogeneities in the segmentation. First, based on the model of images with intensity inhomogeneities, we derive a local intensity clustering property of the image intensities, and define a local clustering criterion function for the image intensities in a neighborhood of each point. This local clustering criterion function is then integrated with respect to the neighborhood center to give a global criterion of image segmentation. In a level set formulation, this criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, by minimizing this energy, our method is able to simultaneously segment the image and estimate the bias field, and the estimated bias field can be used for intensity inhomogeneity correction (or bias correction). Our method has been validated on synthetic images and real images of various modalities, with desirable performance in the presence of intensity inhomogeneities. Experiments show that our method is more robust to initialization, faster and more accurate than the well-known piecewise smooth model. As an application, our method has been used for segmentation and bias correction of magnetic resonance (MR) images with promising results.",
"title": ""
},
{
"docid": "neg:1840570_14",
"text": "Embedding and visualizing large-scale high-dimensional data in a two-dimensional space is an important problem since such visualization can reveal deep insights out of complex data. Most of the existing embedding approaches, however, run on an excessively high precision, ignoring the fact that at the end, embedding outputs are converted into coarsegrained discrete pixel coordinates in a screen space. Motivated by such an observation and directly considering pixel coordinates in an embedding optimization process, we accelerate Barnes-Hut tree-based t-distributed stochastic neighbor embedding (BH-SNE), known as a state-of-the-art 2D embedding method, and propose a novel method called PixelSNE, a highly-efficient, screen resolution-driven 2D embedding method with a linear computational complexity in terms of the number of data items. Our experimental results show the significantly fast running time of PixelSNE by a large margin against BH-SNE, while maintaining the minimal degradation in the embedding quality. Finally, the source code of our method is publicly available at https: //github.com/awesome-davian/sasne.",
"title": ""
},
{
"docid": "neg:1840570_15",
"text": "A switching control strategy to extend the zero-voltage-switching (ZVS) operating range of a Dual Active Bridge (DAB) AC/DC converter to the entire input-voltage interval and the full power range is proposed. The converter topology consists of a DAB DC/DC converter, receiving a rectified AC line voltage via a synchronous rectifier. The DAB comprises a primary side half bridge and secondary side full bridge, linked by a high-frequency isolation transformer and inductor. Using conventional control strategies, the soft-switching boundary conditions are exceeded at the higher voltage conversion ratios of the AC input interval. A novel pulse-width-modulation strategy to fully eliminate these boundaries and its analysis are presented in this paper, allowing increased performance (in terms of efficiency and stresses). Additionally, by using a half bridge / full bridge configuration, the number of active components is reduced. A prototype converter was constructed and experimental results are given to validate the theoretical analyses and practical feasibility of the proposed strategy.",
"title": ""
},
{
"docid": "neg:1840570_16",
"text": "Sir, An unusual abnormal fat distribution of the lower part of the body is characterized by massive and symmetric deposits in the groins, trochanters, buttocks, and hips, which contrast sharply with the normal upper part of the body. The massive lipomatosis of the lower part of the body can be classified into three types: type 1, the familial symmetrical lipomatosis that affects the groins, trochanters, hips, buttocks, and thighs; type 2, the bilateral peritrochanteric familial lipomatosis; and type 3, the unilateral peritrochanteric lipomatosis. This deformity affects only women aged between 18 and 50 in the Mediterranean region [1]. Further, isolated abnormal bilateral peritrochanteric lipomatosis has rarely been reported in literature. We report two patients, a mother and her daughter, with isolated bilateral peritrochanteric lipomatosis, who had normal fat distribution of the upper half of the body which was in contrast with the abnormal lower half. The mother, a 42-year-old patient, presented with bilateral abnormal fat distribution of the lower part of the body. Peritrochanteric fat deposits had appeared at the age of 13 and increased with time. The physical examination revealed bilateral isolated, well-demarcated peritrochanteric lipomatosis and normal fat distribution of the upper half of the body (Fig. 1a). The patient was 167 cm tall and weighed 72 kg (body mass index [BMI]=25.8 kg/m). Laboratory and endocrinologic tests included the serum concentrations of lipoprotein, lipoprotein lipase activity, cholesterol, triglycerides, uric acid, fasting glucose, serum estradiol, and testosterone levels, and thyroid function parameters were within normal limits. Histological study of lipoaspirate showed subcutaneous fatty tissue. The daugther, a 22-year-old patient, also presented with bilateral abnormal fat distribution of the lower part of the body. The patient's signs had appeared at age of 12 also increasing with time. The physical examination revealed bilateral isolated, well-demarcated peritrochanteric lipomatosis although it was more evident on the left side (Fig. 2a). The patient was 169 cm tall and weighed 67 kg (BMI=23.5 kg/m). Laboratory and endocrinological tests were within normal limits. Histological study of lipoaspirate showed subcutaneous fatty tissue. Both patients underwent general anesthesia and all procedures were initiated with infusion of tumescent solution (1 L normal saline solution, 30 mg lidocaine, and 1 mL of 1:1,000 epinephrine) [2]. A suction-assisted liposuction method was employed using 4and 6-mm cannulae. Suction started deep into the superficial fascia and ended with superficial liposuction [3]. Incisionswere closedwith6-0 polyprolene and dressings were applied. A second limited liposuction was planned to treat the irregularities in the first case. Results were satisfactory in both cases (Figs. 1b and 2b). Isolated abnormal bilateral peritrochanteric lipomatosis has rarely been reported in literature. In 2006, Goshtasby et al. presented a case of isolated bilateral peritrochanteric lipomatosis of the soft tissue overlying the trochanters [4]. The unusual distribution of fat in the lower body should be differentiated from the familial multiple nodular symmetrical lipomatosis, where the lipomas are nodular, circumscribed, subcutaneous in location, and more common on the extremities and trunk rather than around the neck, shoulder, or the upper torso [5]. 
Stavropoulos and his colleagues have suggested that the term symmetric lipomatosis referred to two separate disorders, benign multiple symmetric lipomatosis and female S. Şentürk (*) Department of Plastic and Reconstructive Surgery, Mevlana (Rumi) University Hospital, Konya, Turkey e-mail: [email protected]",
"title": ""
},
{
"docid": "neg:1840570_17",
"text": "This paper presents a comprehensive analysis and comparison of air-cored axial-flux permanent-magnet machines with different types of coil configurations. Although coil factor is particularly more sensitive to coil-band width and coil pitch in air-cored machines than conventional slotted machines, remarkably no comprehensive analytical equations exist. Here, new formulas are derived to compare the coil factor of two common concentrated-coil stator winding types. Then, respective coil factors for the winding types are used to determine the torque characteristics and, from that, the optimized coil configurations. Three-dimensional finite-element analysis (FEA) models are built to verify the analytical models. Furthermore, overlapping and wave windings are investigated and compared with the concentrated-coil types. Finally, a prototype machine is designed and built for experimental validations. The results show that the concentrated-coil type with constant coil pitch is superior to all other coil types under study.",
"title": ""
},
{
"docid": "neg:1840570_18",
"text": "We propose a high-capacity polymer-based optical and electrical LSI package integrated with multimode Si photonic transmitters and receivers. We describe the fabrication and characteristics of the polymer-based hybrid LSI package substrate with a polymer optical waveguide, a mirror, and optical card edge connectors. We fabricated optical mirrors with several angles ranging from 40° to 45° for the Si photonic grating coupler by using a dicing blade at an angle. The dicing mirror changed the emission angle for the grating coupler. We also realized a large lateral misalignment tolerance (±11.5 μm) between the polymer waveguide and MMF for 1 dB of excess loss at 24 channels. We obtained 1-dB coupling loss using an optical card edge connector at 1.3 μm because of the large tolerance. We realized 25-Gb/s error-free transmission per channel at 1.3 μm. We also describe here the error penalty and jitter due to modal noise generated by coupling mismatch.",
"title": ""
}
] |
1840571 | Belief & Evidence in Empirical Software Engineering | [
{
"docid": "pos:1840571_0",
"text": "There has been a great deal of interest in defect prediction: using prediction models trained on historical data to help focus quality-control resources in ongoing development. Since most new projects don't have historical data, there is interest in cross-project prediction: using data from one project to predict defects in another. Sadly, results in this area have largely been disheartening. Most experiments in cross-project defect prediction report poor performance, using the standard measures of precision, recall and F-score. We argue that these IR-based measures, while broadly applicable, are not as well suited for the quality-control settings in which defect prediction models are used. Specifically, these measures are taken at specific threshold settings (typically thresholds of the predicted probability of defectiveness returned by a logistic regression model). However, in practice, software quality control processes choose from a range of time-and-cost vs quality tradeoffs: how many files shall we test? how many shall we inspect? Thus, we argue that measures based on a variety of tradeoffs, viz., 5%, 10% or 20% of files tested/inspected would be more suitable. We study cross-project defect prediction from this perspective. We find that cross-project prediction performance is no worse than within-project performance, and substantially better than random prediction!",
"title": ""
},
{
"docid": "pos:1840571_1",
"text": "Popular open-source software projects receive and review contributions from a diverse array of developers, many of whom have little to no prior involvement with the project. A recent survey reported that reviewers consider conformance to the project's code style to be one of the top priorities when evaluating code contributions on Github. We propose to quantitatively evaluate the existence and effects of this phenomenon. To this aim we use language models, which were shown to accurately capture stylistic aspects of code. We find that rejected changesets do contain code significantly less similar to the project than accepted ones; furthermore, the less similar changesets are more likely to be subject to thorough review. Armed with these results we further investigate whether new contributors learn to conform to the project style and find that experience is positively correlated with conformance to the project's code style.",
"title": ""
}
] | [
{
"docid": "neg:1840571_0",
"text": "The increasing utilization of business process models both in business analysis and information systems development raises several issues regarding quality measures. In this context, this paper discusses understandability as a particular quality aspect and its connection with personal, model, and content related factors. We use an online survey to explore the ability of the model reader to draw correct conclusions from a set of process models. For the first group of the participants we used models with abstract activity labels (e.g. A, B, C) while the second group received the same models with illustrative labels such as “check credit limit”. The results suggest that all three categories indeed have an impact on the understandability.",
"title": ""
},
{
"docid": "neg:1840571_1",
"text": "Perimeter protection aims at identifying intrusions across the temporary base established by army in critical regions. Convex-hull algorithm is used to determine the boundary nodes among a set of nodes in the network. To study the effectiveness of such algorithm, we opted three variations, such as distributed approach, centralized, and mobile approach, suitable for wireless sensor networks for boundary detection. The convex-hull approaches are simulated with different node density, and the performance is measured in terms of energy consumption, boundary detection time, and accuracy. Results from the simulations highlight that the convex-hull approach is effective under densely deployed nodes in an environment. The different approaches of convex-hull algorithm are found to be suitable under different sensor network application scenarios.",
"title": ""
},
{
"docid": "neg:1840571_2",
"text": "The mechanism of death in patients struggling against restraints remains a topic of debate. This article presents a series of five patients with restraint-associated cardiac arrest and profound metabolic acidosis. The lowest recorded pH was 6.25; this patient and three others died despite aggressive resuscitation. The survivor's pH was 6.46; this patient subsequently made a good recovery. Struggling against restraints may produce a lactic acidosis. Stimulant drugs such as cocaine may promote further metabolic acidosis and impair normal behavioral regulatory responses. Restrictive positioning of combative patients may impede appropriate respiratory compensation for this acidemia. Public safety personnel and emergency providers must be aware of the life threat to combative patients and be careful with restraint techniques. Further investigation of sedative agents and buffering therapy for this select patient group is suggested.",
"title": ""
},
{
"docid": "neg:1840571_3",
"text": "Research on the \"dark side\" of organizational behavior has determined that employee sabotage is most often a reaction by disgruntled employees to perceived mistreatment. To date, however, most studies on employee retaliation have focused on intra-organizational sources of (in)justice. Results from this field study of customer service representatives (N = 358) showed that interpersonal injustice from customers relates positively to customer-directed sabotage over and above intra-organizational sources of fairness. Moreover, the association between unjust treatment and sabotage was moderated by 2 dimensions of moral identity (symbolization and internalization) in the form of a 3-way interaction. The relationship between injustice and sabotage was more pronounced for employees high (vs. low) in symbolization, but this moderation effect was weaker among employees who were high (vs. low) in internalization. Last, employee sabotage was negatively related to job performance ratings.",
"title": ""
},
{
"docid": "neg:1840571_4",
"text": "Retargeting is an innovative online marketing technique in the modern age. Although this advertising form offers great opportunities of bringing back customers who have left an online store without a complete purchase, retargeting is risky because the necessary data collection leads to strong privacy concerns which in turn, trigger consumer reactance and decreasing trust. Digital nudges – small design modifications in digital choice environments which guide peoples’ behaviour – present a promising concept to bypass these negative consequences of retargeting. In order to prove the positive effects of digital nudges, we aim to conduct an online experiment with a subsequent survey by testing the impacts of social nudges and information nudges in retargeting banners. Our expected contribution to theory includes an extension of existing research of nudging in context of retargeting by investigating the effects of different nudges in retargeting banners on consumers’ behaviour. In addition, we aim to provide practical contributions by the provision of design guidelines for practitioners to build more trustworthy IT artefacts and enhance retargeting strategy of marketing practitioners.",
"title": ""
},
{
"docid": "neg:1840571_5",
"text": "In this paper, we address several puzzles concerning speech acts,particularly indirect speech acts. We show how a formal semantictheory of discourse interpretation can be used to define speech actsand to avoid murky issues concerning the metaphysics of action. Weprovide a formally precise definition of indirect speech acts, includingthe subclass of so-called conventionalized indirect speech acts. Thisanalysis draws heavily on parallels between phenomena at the speechact level and the lexical level. First, we argue that, just as co-predicationshows that some words can behave linguistically as if they're `simultaneously'of incompatible semantic types, certain speech acts behave this way too.Secondly, as Horn and Bayer (1984) and others have suggested, both thelexicon and speech acts are subject to a principle of blocking or ``preemptionby synonymy'': Conventionalized indirect speech acts can block their`paraphrases' from being interpreted as indirect speech acts, even ifthis interpretation is calculable from Gricean-style principles. Weprovide a formal model of this blocking, and compare it withexisting accounts of lexical blocking.",
"title": ""
},
{
"docid": "neg:1840571_6",
"text": "In this paper, we study a multi-residential electricity load scheduling problem with multi-class appliances in smart grid. Compared with the previous works in which only limited types of appliances are considered or only single residence grids are considered, we model the grid system more practically with jointly considering multi-residence and multi-class appliance. We formulate an optimization problem to maximize the sum of the overall satisfaction levels of residences which is defined as the sum of utilities of the residential customers minus the total cost for energy consumption. Then, we provide an electricity load scheduling algorithm by using a PL-Generalized Benders Algorithm which operates in a distributed manner while protecting the private information of the residences. By applying the algorithm, we can obtain the near-optimal load scheduling for each residence, which is shown to be very close to the optimal scheduling, and also obtain the lower and upper bounds on the optimal sum of the overall satisfaction levels of all residences, which are shown to be very tight.",
"title": ""
},
{
"docid": "neg:1840571_7",
"text": "Current GUI builders provide a design environment for user interfaces that target either a single type or fixed set of devices, and provide little support for scenarios in which the user interface, or parts of it, are distributed over multiple devices. Distributed user interfaces have received increasing attention over the past years. There are different, often model-based, approaches that focus on technical issues. This paper presents XDStudio--a new GUI builder designed to support interactive development of cross-device web interfaces. XDStudio implements two complementary authoring modes with a focus on the design process of distributed user interfaces. First, simulated authoring allows designing for a multi-device environment on a single device by simulating other target devices. Second, on-device authoring allows the design process itself to be distributed over multiple devices, as design and development take place on the target devices themselves. To support interactive development for multi-device environments, where not all devices may be present at design and run-time, XDStudio supports switching between the two authoring modes, as well as between design and use modes, as required. This paper focuses on the design of XDStudio, and evaluates its support for two distribution scenarios.",
"title": ""
},
{
"docid": "neg:1840571_8",
"text": "We present a study on the importance of psycho-acoustic transformations for effective audio feature calculation. From the results, both crucial and problematic parts of the algorithm for Rhythm Patterns feature extraction are identified. We furthermore introduce two new feature representations in this context: Statistical Spectrum Descriptors and Rhythm Histogram features. Evaluation on both the individual and combined feature sets is accomplished through a music genre classification task, involving 3 reference audio collections. Results are compared to published measures on the same data sets. Experiments confirmed that in all settings the inclusion of psycho-acoustic transformations provides significant improvement of classification accuracy.",
"title": ""
},
{
"docid": "neg:1840571_9",
"text": "Historically, social scientists have sought out explanations of human and social phenomena that provide interpretable causal mechanisms, while often ignoring their predictive accuracy. We argue that the increasingly computational nature of social science is beginning to reverse this traditional bias against prediction; however, it has also highlighted three important issues that require resolution. First, current practices for evaluating predictions must be better standardized. Second, theoretical limits to predictive accuracy in complex social systems must be better characterized, thereby setting expectations for what can be predicted or explained. Third, predictive accuracy and interpretability must be recognized as complements, not substitutes, when evaluating explanations. Resolving these three issues will lead to better, more replicable, and more useful social science.",
"title": ""
},
{
"docid": "neg:1840571_10",
"text": "Cellulosomes are multienzyme complexes that are produced by anaerobic cellulolytic bacteria for the degradation of lignocellulosic biomass. They comprise a complex of scaffoldin, which is the structural subunit, and various enzymatic subunits. The intersubunit interactions in these multienzyme complexes are mediated by cohesin and dockerin modules. Cellulosome-producing bacteria have been isolated from a large variety of environments, which reflects their prevalence and the importance of this microbial enzymatic strategy. In a given species, cellulosomes exhibit intrinsic heterogeneity, and between species there is a broad diversity in the composition and configuration of cellulosomes. With the development of modern technologies, such as genomics and proteomics, the full protein content of cellulosomes and their expression levels can now be assessed and the regulatory mechanisms identified. Owing to their highly efficient organization and hydrolytic activity, cellulosomes hold immense potential for application in the degradation of biomass and are the focus of much effort to engineer an ideal microorganism for the conversion of lignocellulose to valuable products, such as biofuels.",
"title": ""
},
{
"docid": "neg:1840571_11",
"text": "In automated driving systems (ADS) and advanced driver-assistance systems (ADAS), an efficient road segmentation module is required to present the drivable region and to build an occupancy grid for path planning components. The existing road algorithms build gigantic convolutional neural networks (CNNs) that are computationally expensive and time consuming. In this paper, we explore the usage of recurrent neural network (RNN) in image processing and propose an efficient network layer named spatial sequence. This layer is then applied to our new road segmentation network RoadNet-v2, which combines convolutional layers and spatial sequence layers. In the end, the network is trained and tested in KITTI road benchmark and Cityscapes dataset. We claim the proposed network achieves comparable accuracy to the existing road segmentation algorithms but much faster processing speed, 10 ms per frame.",
"title": ""
},
{
"docid": "neg:1840571_12",
"text": "Background: Statistical mechanics results (Dauphin et al. (2014); Choromanska et al. (2015)) suggest that local minima with high error are exponentially rare in high dimensions. However, to prove low error guarantees for Multilayer Neural Networks (MNNs), previous works so far required either a heavily modified MNN model or training method, strong assumptions on the labels (e.g., “near” linear separability), or an unrealistically wide hidden layer with Ω (N) units. Results: We examine a MNN with one hidden layer of piecewise linear units, a single output, and a quadratic loss. We prove that, with high probability in the limit of N → ∞ datapoints, the volume of differentiable regions of the empiric loss containing sub-optimal differentiable local minima is exponentially vanishing in comparison with the same volume of global minima, given standard normal input of dimension d0 = Ω̃ (√ N ) , and a more realistic number of d1 = Ω̃ (N/d0) hidden units. We demonstrate our results numerically: for example, 0% binary classification training error on CIFAR with only N/d0 ≈ 16 hidden neurons.",
"title": ""
},
{
"docid": "neg:1840571_13",
"text": "Generating images from natural language is one of the primary applications of recent conditional generative models. Besides testing our ability to model conditional, highly dimensional distributions, text to image synthesis has many exciting and practical applications such as photo editing or computer-aided content creation. Recent progress has been made using Generative Adversarial Networks (GANs). This material starts with a gentle introduction to these topics and discusses the existent state of the art models. Moreover, I propose Wasserstein GAN-CLS, a new model for conditional image generation based on the Wasserstein distance which offers guarantees of stability. Then, I show how the novel loss function of Wasserstein GAN-CLS can be used in a Conditional Progressive Growing GAN. In combination with the proposed loss, the model boosts by 7.07% the best Inception Score (on the Caltech birds dataset) of the models which use only the sentence-level visual semantics. The only model which performs better than the Conditional Wasserstein Progressive growing GAN is the recently proposed AttnGAN which uses word-level visual semantics as well.",
"title": ""
},
{
"docid": "neg:1840571_14",
"text": "Many companies have developed strategies that include investing heavily in information technology (IT) in order to enhance their performance. Yet, this investment pays off for some companies but not others. This study proposes that organization learning plays a significant role in determining the outcomes of IT. Drawing from resource theory and IT literature, the authors develop the concept of IT competency. Using structural equations modeling with data collected from managers in 271 manufacturing firms, they show that organizational learning plays a significant role in mediating the effects of IT competency on firm performance. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "neg:1840571_15",
"text": "Students, researchers and professional analysts lack effective tools to make personal and collective sense of problems while working in distributed teams. Central to this work is the process of sharing—and contesting—interpretations via different forms of argument. How does the “Web 2.0” paradigm challenge us to deliver useful, usable tools for online argumentation? This paper reviews the current state of the art in Web Argumentation, describes key features of the Web 2.0 orientation, and identifies some of the tensions that must be negotiated in bringing these worlds together. It then describes how these design principles are interpreted in Cohere, a web tool for social bookmarking, idea-linking, and argument visualization.",
"title": ""
},
{
"docid": "neg:1840571_16",
"text": "The sparse matrix solver by LU factorization is a serious bottleneck in Simulation Program with Integrated Circuit Emphasis (SPICE)-based circuit simulators. The state-of-the-art Graphics Processing Units (GPU) have numerous cores sharing the same memory, provide attractive memory bandwidth and compute capability, and support massive thread-level parallelism, so GPUs can potentially accelerate the sparse solver in circuit simulators. In this paper, an efficient GPU-based sparse solver for circuit problems is proposed. We develop a hybrid parallel LU factorization approach combining task-level and data-level parallelism on GPUs. Work partitioning, number of active thread groups, and memory access patterns are optimized based on the GPU architecture. Experiments show that the proposed LU factorization approach on NVIDIA GTX580 attains an average speedup of 7.02× (geometric mean) compared with sequential PARDISO, and 1.55× compared with 16-threaded PARDISO. We also investigate bottlenecks of the proposed approach by a parametric performance model. The performance of the sparse LU factorization on GPUs is constrained by the global memory bandwidth, so the performance can be further improved by future GPUs with larger memory bandwidth.",
"title": ""
},
{
"docid": "neg:1840571_17",
"text": "To evaluate cone and cone-driven retinal function in patients with Smith-Lemli-Opitz syndrome (SLOS), a condition characterized by low cholesterol. Rod and rod-driven function in patients with SLOS are known to be abnormal. Electroretinographic (ERG) responses to full-field stimuli presented on a steady, rod suppressing background were recorded in 13 patients who had received long-term cholesterol supplementation. Cone photoresponse sensitivity (S CONE) and saturated amplitude (R CONE) parameters were estimated using a model of the activation of phototransduction, and post-receptor b-wave and 30 Hz flicker responses were analyzed. The responses of the patients were compared to those of control subjects (N = 13). Although average values of both S CONE and R CONE were lower than in controls, the differences were not statistically significant. Post-receptor b-wave amplitude and implicit time and flicker responses were normal. The normal cone function contrasts with the significant abnormalities in rod function that were found previously in these same patients. Possibly, cholesterol supplementation has a greater protective effect on cones than on rods as has been demonstrated in the rat model of SLOS.",
"title": ""
},
{
"docid": "neg:1840571_18",
"text": "Ambient assisted living (AAL) technologies can help the elderly maintain their independence while keeping them safer. Sensors monitor their activities to detect situations in which they might need help. Most research in this area has targeted indoor environments, but outdoor activities are just as important; many risky situations might occur outdoors. SafeNeighborhood (SN) is an AAL system that combines data from multiple sources with collective intelligence to tune sensor data. It merges mobile, ambient, and AI technologies with old-fashioned neighborhood ties to create safe outdoor spaces. The initial results indicate SN’s potential use and point toward new opportunities for care of the elderly.",
"title": ""
},
{
"docid": "neg:1840571_19",
"text": "An approach to the problem of autonomous mobile robot obstacle avoidance using reinforcement learning neural network is proposed in this paper. Q-learning is one kind of reinforcement learning method that is similar to dynamic programming and the neural network has a powerful ability to store the values. We integrate these two methods with the aim to ensure autonomous robot behavior in complicated unpredictable environment. The simulation results show that the simulated robot using the reinforcement learning neural network can enhance its learning ability obviously and can finish the given task in a complex environment.",
"title": ""
}
] |
1840572 | Answering Science Exam Questions Using Query Rewriting with Background Knowledge | [
{
"docid": "pos:1840572_0",
"text": "Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have shown to achieve the state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we enrich the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models to achieve the state-of-the-art performance on the SNLI and MultiNLI datasets.",
"title": ""
},
{
"docid": "pos:1840572_1",
"text": "This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++ is designed for quick implementation of different neural sequence labeling models with a CRF inference layer. It provides users with an inference for building the custom model structure through configuration file with flexible neural feature design and utilization. Built on PyTorch1, the core operations are calculated in batch, making the toolkit efficient with the acceleration of GPU. It also includes the implementations of most state-of-the-art neural sequence labeling models such as LSTMCRF, facilitating reproducing and refinement on those methods.",
"title": ""
},
{
"docid": "pos:1840572_2",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "pos:1840572_3",
"text": "We present a new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem. SCITAIL is the first entailment set that is created solely from natural sentences that already exist independently “in the wild” rather than sentences authored specifically for the entailment task. Different from existing entailment datasets, we create hypotheses from science questions and the corresponding answer candidates, and premises from relevant web sentences retrieved from a large corpus. These sentences are often linguistically challenging. This, combined with the high lexical similarity of premise and hypothesis for both entailed and non-entailed pairs, makes this new entailment task particularly difficult. The resulting challenge is evidenced by state-of-the-art textual entailment systems achieving mediocre performance on SCITAIL, especially in comparison to a simple majority class baseline. As a step forward, we demonstrate that one can improve accuracy on SCITAIL by 5% using a new neural model that exploits linguistic structure.",
"title": ""
},
{
"docid": "pos:1840572_4",
"text": "tion can machines think? by replacing it with another, namely can a machine pass the imitation game (the Turing test). In the years since, this test has been criticized as being a poor replacement for the original enquiry (for example, Hayes and Ford [1995]), which raises the question: what would a better replacement be? In this article, we argue that standardized tests are an effective and practical assessment of many aspects of machine intelligence, and should be part of any comprehensive measure of AI progress. While a crisp definition of machine intelligence remains elusive, we can enumerate some general properties we might expect of an intelligent machine. The list is potentially long (for example, Legg and Hutter [2007]), but should at least include the ability to (1) answer a wide variety of questions, (2) answer complex questions, (3) demonstrate commonsense and world knowledge, and (4) acquire new knowledge scalably. In addition, a suitable test should be clearly measurable, graduated (have a variety of levels of difficulty), not gameable, ambitious but realistic, and motivating. There are many other requirements we might add (for example, capabilities in robotics, vision, dialog), and thus any comprehensive measure of AI is likely to require a battery of different tests. However, standardized tests meet a surprising number of requirements, including the four listed, and thus should be a key component of a future battery of tests. As we will show, the tests require answering a wide variety of questions, including those requiring commonsense and world knowledge. In addition, they meet all the practical requirements, a huge advantage for any component of a future test of AI. Articles",
"title": ""
},
{
"docid": "pos:1840572_5",
"text": "The recent work of Clark et al. (2018) introduces the AI2 Reasoning Challenge (ARC) and the associated ARC dataset that partitions open domain, complex science questions into an Easy Set and a Challenge Set. That paper includes an analysis of 100 questions with respect to the types of knowledge and reasoning required to answer them; however, it does not include clear definitions of these types, nor does it offer information about the quality of the labels. We propose a comprehensive set of definitions of knowledge and reasoning types necessary for answering the questions in the ARC dataset. Using ten annotators and a sophisticated annotation interface, we analyze the distribution of labels across the Challenge Set and statistics related to them. Additionally, we demonstrate that although naive information retrieval methods return sentences that are irrelevant to answering the query, sufficient supporting text is often present in the (ARC) corpus. Evaluating with human-selected relevant sentences improves the performance of a neural machine comprehension model by 42 points.",
"title": ""
}
] | [
{
"docid": "neg:1840572_0",
"text": "As multicore systems continue to gain ground in the High Performance Computing world, linear algebra algorithms have to be reformulated or new algorithms have to be developed in order to take advantage of the architectural features on these new processors. Fine grain parallelism becomes a major requirement and introduces the necessity of loose synchronization in the parallel execution of an operation. This paper presents an algorithm for the Cholesky, LU and QR factorization where the operations can be represented as a sequence of small tasks that operate on square blocks of data. These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in an out of order execution of the tasks which will completely hide the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithms where parallelism can only be exploited at the level of the BLAS operations and vendor implementations.",
"title": ""
},
{
"docid": "neg:1840572_1",
"text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.",
"title": ""
},
{
"docid": "neg:1840572_2",
"text": "Text mining has becoming an emerging research area in now-a-days that helps to extract useful information from large amount of natural language text documents. The need of grouping similar documents together for different applications has gaining the attention of researchers in this area. Document clustering organizes the documents into different groups called as clusters. The documents in one cluster have higher degree of similarity than the documents in other cluster. The paper provides an overview of the document clustering reviewed from different papers and the challenges in document clustering. KeywordsText Mining, Document Clustering, Similarity Measures, Challenges in Document Clustering",
"title": ""
},
{
"docid": "neg:1840572_3",
"text": "The purpose of this paper is to present a direct digital manufacturing (DDM) process that is an order of magnitude faster than other DDM processes currently available. The developed process is based on a mask-image-projection-based Stereolithography process (MIP-SL), during which a Digital Micromirror Device (DMD) controlled projection light cures and cross-links liquid photopolymer resin. In order to achieve high-speed fabrication, we investigated the bottom-up projection system in the MIP-SL process. A set of techniques including film coating and the combination of two-way linear motions have been developed for the quick spreading of liquid resin into uniform thin layers. The process parameters and related settings to achieve the fabrication speed of a few seconds per layer are presented. Additionally, the hardware, software, and material setups developed for fabricating given three-dimensional (3D) digital models are presented. Experimental studies using the developed testbed have been performed to verify the effectiveness and efficiency of the presented fast MIP-SL process. The test results illustrate that the newly developed process can build a moderately sized part within minutes instead of hours that are typically required.",
"title": ""
},
{
"docid": "neg:1840572_4",
"text": "It is crucial for cancer diagnosis and treatment to accurately identify the site of origin of a tumor. With the emergence and rapid advancement of DNA microarray technologies, constructing gene expression profiles for different cancer types has already become a promising means for cancer classification. In addition to research on binary classification such as normal versus tumor samples, which attracts numerous efforts from a variety of disciplines, the discrimination of multiple tumor types is also important. Meanwhile, the selection of genes which are relevant to a certain cancer type not only improves the performance of the classifiers, but also provides molecular insights for treatment and drug development. Here, we use semisupervised ellipsoid ARTMAP (ssEAM) for multiclass cancer discrimination and particle swarm optimization for informative gene selection. ssEAM is a neural network architecture rooted in adaptive resonance theory and suitable for classification tasks. ssEAM features fast, stable, and finite learning and creates hyperellipsoidal clusters, inducing complex nonlinear decision boundaries. PSO is an evolutionary algorithm-based technique for global optimization. A discrete binary version of PSO is employed to indicate whether genes are chosen or not. The effectiveness of ssEAM/PSO for multiclass cancer diagnosis is demonstrated by testing it on three publicly available multiple-class cancer data sets. ssEAM/PSO achieves competitive performance on all these data sets, with results comparable to or better than those obtained by other classifiers",
"title": ""
},
{
"docid": "neg:1840572_5",
"text": "In this paper, we investigate the possibility that a Near Field Communication (NFC) enabled mobile phone, with an embedded secure element (SE), could be used as a mobile token cloning and skimming platform. We show how an attacker could use an NFC mobile phone as such an attack platform by exploiting the existing security controls of the embedded SE and the available contactless APIs. To illustrate the feasibility of these actions, we also show how to practically skim and emulate certain tokens typically used in payment and access control applications with a NFC mobile phone. We also discuss how to capture and analyse legitimate transaction information from contactless systems. Although such attacks can also be implemented on other contactless platforms, such as custom-built card emulators and modified readers, the NFC enabled mobile phone has a legitimate form factor, which would be accepted by merchants and arouse less suspicion in public. Finally, we propose several security countermeasures for NFC phones that could prevent such misuse.",
"title": ""
},
{
"docid": "neg:1840572_6",
"text": "In this paper we describe a technology for protecting privacy in video systems. The paper presents a review of privacy in video surveillance and describes how a computer vision approach to understanding the video can be used to represent “just enough” of the information contained in a video stream to allow video-based tasks (including both surveillance and other “person aware” applications) to be accomplished, while hiding superfluous details, particularly identity, that can contain privacyintrusive information. The technology has been implemented in the form of a privacy console that manages operator access to different versions of the video-derived data according to access control lists. We have also built PrivacyCam—a smart camera that produces a video stream with the privacy-intrusive information already removed.",
"title": ""
},
{
"docid": "neg:1840572_7",
"text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.",
"title": ""
},
{
"docid": "neg:1840572_8",
"text": "High reliability and large rangeability are required of pumps in existing and new plants which must be capable of reliable on-off cycling operations and specially low load duties. The reliability and rangeability target is a new task for the pump designer/researcher and is made very challenging by the cavitation and/or suction recirculation effects, first of all the pump damage. The present knowledge about the: a) design critical parameters and their optimization, b) field problems diagnosis and troubleshooting has much advanced, in the very latest years. The objective of the pump manufacturer is to develop design solutions and troubleshooting approaches which improve the impeller life as related to cavitation erosion and enlarge the reliable operating range by minimizing the effects of the suction recirculation. This paper gives a short description of several field cases characterized by different damage patterns and other symptoms related with cavitation and/or suction recirculation. The troubleshooting methodology is described in detail, also focusing on the role of both the pump designer and the pump user.",
"title": ""
},
{
"docid": "neg:1840572_9",
"text": "Business Process Reengineering is a discipline in which extensive research has been carried out and numerous methodologies churned out. But what seems to be lacking is a structured approach. In this paper we provide a review of BPR and present ‘best of breed ‘ methodologies from contemporary literature and introduce a consolidated, systematic approach to the redesign of a business enterprise. The methodology includes the five activities: Prepare for reengineering, Map and Analyze As-Is process, Design To-be process, Implement reengineered process and Improve continuously.",
"title": ""
},
{
"docid": "neg:1840572_10",
"text": "The demand for coal has been on the rise in modern society. With the number of opencast coal mines decreasing, it has become increasingly difficult to find coal. Low efficiencies and high casualty rates have always been problems in the process of coal exploration due to complicated geological structures in coal mining areas. Therefore, we propose a new exploration technology for coal that uses satellite images to explore and monitor opencast coal mining areas. First, we collected bituminous coal and lignite from the Shenhua opencast coal mine in China in addition to non-coal objects, including sandstones, soils, shales, marls, vegetation, coal gangues, water, and buildings. Second, we measured the spectral data of these objects through a spectrometer. Third, we proposed a multilayer extreme learning machine algorithm and constructed a coal classification model based on that algorithm and the spectral data. The model can assist in the classification of bituminous coal, lignite, and non-coal objects. Fourth, we collected Landsat 8 satellite images for the coal mining areas. We divided the image of the coal mine using the constructed model and correctly described the distributions of bituminous coal and lignite. Compared with the traditional coal exploration method, our method manifested an unparalleled advantage and application value in terms of its economy, speed, and accuracy.",
"title": ""
},
{
"docid": "neg:1840572_11",
"text": "Received: 25 June 2013 Revised: 11 October 2013 Accepted: 25 November 2013 Abstract This paper distinguishes and contrasts two design science research strategies in information systems. In the first strategy, a researcher constructs or builds an IT meta-artefact as a general solution concept to address a class of problem. In the second strategy, a researcher attempts to solve a client’s specific problem by building a concrete IT artefact in that specific context and distils from that experience prescriptive knowledge to be packaged into a general solution concept to address a class of problem. The two strategies are contrasted along 16 dimensions representing the context, outcomes, process and resource requirements. European Journal of Information Systems (2015) 24(1), 107–115. doi:10.1057/ejis.2013.35; published online 7 January 2014",
"title": ""
},
{
"docid": "neg:1840572_12",
"text": "The present research examined how mode of play in an educational mathematics video game impacts learning, performance, and motivation. The game was designed for the practice and automation of arithmetic skills to increase fluency and was adapted to allow for individual, competitive, or collaborative game play. Participants (N 58) from urban middle schools were randomly assigned to each experimental condition. Results suggested that, in comparison to individual play, competition increased in-game learning, whereas collaboration decreased performance during the experimental play session. Although out-of-game math fluency improved overall, it did not vary by condition. Furthermore, competition and collaboration elicited greater situational interest and enjoyment and invoked a stronger mastery goal orientation. Additionally, collaboration resulted in stronger intentions to play the game again and to recommend it to others. Results are discussed in terms of the potential for mathematics learning games and technology to increase student learning and motivation and to demonstrate how different modes of engagement can inform the instructional design of such games.",
"title": ""
},
{
"docid": "neg:1840572_13",
"text": "This research addresses management control in the front end of innovation projects. We conceptualize and analyze PMOs more broadly than just as a specialized project-focused organizational unit. Building on theories of management control, organization design, and innovation front end literature, we assess the role of PMO as an integrative arrangement. The empirical material is derived from four companies. The results show a variety of management control mechanisms that can be considered as integrative organizational arrangements. Such organizational arrangements can be considered as an alternative to a non-existent PMO, or to complement a (non-existent) PMO's tasks. The paper also contrasts prior literature by emphasizing the desirability of a highly organic or embedded matrix structure in the organization. Finally, we propose that the development path of the management approach proceeds by first emphasizing diagnostic and boundary systems (with mechanistic management approaches) followed by intensive use of interactive and belief systems (with value-based management approaches). The major contribution of this paper is in the organizational and managerial mechanisms of a firm that is managing multiple innovation projects. This research also expands upon the existing PMO research to include a broader management control approach for managing projects in companies. © 2011 Elsevier Ltd. and IPMA. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840572_14",
"text": "We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.",
"title": ""
},
{
"docid": "neg:1840572_15",
"text": "Context: Static analysis approaches have been proposed to assess the security of Android apps, by searching for known vulnerabilities or actual malicious code. The literature thus has proposed a large body of works, each of which attempts to tackle one or more of the several challenges that program analyzers face when dealing with Android apps. Objective: We aim to provide a clear view of the state-of-the-art works that statically analyze Android apps, from which we highlight the trends of static analysis approaches, pinpoint where the focus has been put and enumerate the key aspects where future researches are still needed. Method: We have performed a systematic literature review which involves studying around 90 research papers published in software engineering, programming languages and security venues. This review is performed mainly in five dimensions: problems targeted by the approach, fundamental techniques used by authors, static analysis sensitivities considered, android characteristics taken into account and the scale of evaluation performed. Results: Our in-depth examination have led to several key findings: 1) Static analysis is largely performed to uncover security and privacy issues; 2) The Soot framework and the Jimple intermediate representation are the most adopted basic support tool and format, respectively; 3) Taint analysis remains the most applied technique in research approaches; 4) Most approaches support several analysis sensitivities, but very few approaches consider path-sensitivity; 5) There is no single work that has been proposed to tackle all challenges of static analysis that are related to Android programming; and 6) Only a small portion of state-of-the-art works have made their artifacts publicly available. Conclusion: The research community is still facing a number of challenges for building approaches that are aware altogether of implicit-Flows, dynamic code loading features, reflective calls, native code and multi-threading, in order to implement sound and highly precise static analyzers.",
"title": ""
},
{
"docid": "neg:1840572_16",
"text": "Current theories of aspect acknowledge the pervasiveness of verbs of variable telicity, and are designed to account both for why these verbs show such variability and for the complex conditions that give rise to telic and atelic interpretations. Previous work has identified several sets of such verbs, including incremental theme verbs, such as eat and destroy; degree achievements, such as cool and widen; and (a)telic directed motion verbs, such as ascend and descend (see e.g., Dowty 1979; Declerck 1979; Dowty 1991; Krifka 1989, 1992; Tenny 1994; Bertinetto and Squartini 1995; Levin and Rappaport Hovav 1995; Jackendoff 1996; Ramchand 1997; Filip 1999; Hay, Kennedy, and Levin 1999; Rothstein 2003; Borer 2005). As the diversity in descriptive labels suggests, most previous work has taken these classes to embody distinct phenomena and to have distinct lexical semantic analyses. We believe that it is possible to provide a unified analysis in which the behavior of all of these verbs stems from a single shared element of their meanings: a function that measures the degree to which an object changes relative to some scalar dimension over the course of an event. We claim that such ‘measures of change’ are based on the more general kinds of measure functions that are lexicalized in many languages by gradable adjectives, and that map an object to a scalar value that represents the degree to which it manifests some gradable property at a time (see Bartsch and Vennemann 1972,",
"title": ""
},
{
"docid": "neg:1840572_17",
"text": "Sparse methods and the use of Winograd convolutions are two orthogonal approaches, each of which significantly accelerates convolution computations in modern CNNs. Sparse Winograd merges these two and thus has the potential to offer a combined performance benefit. Nevertheless, training convolution layers so that the resulting Winograd kernels are sparse has not hitherto been very successful. By introducing a Winograd layer in place of a standard convolution layer, we can learn and prune Winograd coefficients “natively” and obtain sparsity level beyond 90% with only 0.1% accuracy loss with AlexNet on ImageNet dataset. Furthermore, we present a sparse Winograd convolution algorithm and implementation that exploits the sparsity, achieving up to 31.7 effective TFLOP/s in 32-bit precision on a latest Intel Xeon CPU, which corresponds to a 5.4× speedup over a state-of-the-art dense convolution implementation.",
"title": ""
},
{
"docid": "neg:1840572_18",
"text": "Since vulnerabilities in Linux kernel are on the increase, attackers have turned their interests into related exploitation techniques. However, compared with numerous researches on exploiting use-after-free vulnerabilities in the user applications, few efforts studied how to exploit use-after-free vulnerabilities in Linux kernel due to the difficulties that mainly come from the uncertainty of the kernel memory layout. Without specific information leakage, attackers could only conduct a blind memory overwriting strategy trying to corrupt the critical part of the kernel, for which the success rate is negligible.\n In this work, we present a novel memory collision strategy to exploit the use-after-free vulnerabilities in Linux kernel reliably. The insight of our exploit strategy is that a probabilistic memory collision can be constructed according to the widely deployed kernel memory reuse mechanisms, which significantly increases the success rate of the attack. Based on this insight, we present two practical memory collision attacks: An object-based attack that leverages the memory recycling mechanism of the kernel allocator to achieve freed vulnerable object covering, and a physmap-based attack that takes advantage of the overlap between the physmap and the SLAB caches to achieve a more flexible memory manipulation. Our proposed attacks are universal for various Linux kernels of different architectures and could successfully exploit systems with use-after-free vulnerabilities in kernel. Particularly, we achieve privilege escalation on various popular Android devices (kernel version>=4.3) including those with 64-bit processors by exploiting the CVE-2015-3636 use-after-free vulnerability in Linux kernel. To our knowledge, this is the first generic kernel exploit for the latest version of Android. Finally, to defend this kind of memory collision, we propose two corresponding mitigation schemes.",
"title": ""
},
{
"docid": "neg:1840572_19",
"text": "Organizations spend a significant amount of resources securing their servers and network perimeters. However, these mechanisms are not sufficient for protecting databases. In this paper, we present a new technique for identifying malicious database transactions. Compared to many existing approaches which profile SQL query structures and database user activities to detect intrusions, the novelty of this approach is the automatic discovery and use of essential data dependencies, namely, multi-dimensional and multi-level data dependencies, for identifying anomalous database transactions. Since essential data dependencies reflect semantic relationships among data items and are less likely to change than SQL query structures or database user behaviors, they are ideal for profiling data correlations for identifying malicious database activities.1",
"title": ""
}
] |
1840573 | Multi- and Cross-Modal Semantics Beyond Vision: Grounding in Auditory Perception | [
{
"docid": "pos:1840573_0",
"text": "Models that acquire semantic representations from both linguistic and perceptual input are of interest to researchers in NLP because of the obvious parallels with human language learning. Performance advantages of the multi-modal approach over language-only models have been clearly established when models are required to learn concrete noun concepts. However, such concepts are comparatively rare in everyday language. In this work, we present a new means of extending the scope of multi-modal models to more commonly-occurring abstract lexical concepts via an approach that learns multimodal embeddings. Our architecture outperforms previous approaches in combining input from distinct modalities, and propagates perceptual information on concrete concepts to abstract concepts more effectively than alternatives. We discuss the implications of our results both for optimizing the performance of multi-modal models and for theories of abstract conceptual representation.",
"title": ""
}
] | [
{
"docid": "neg:1840573_0",
"text": "Referring to existing illustrations helps novice drawers to realize their ideas. To find such helpful references from a large image collection, we first build a semantic vector representation of illustrations by training convolutional neural networks. As the proposed vector space correctly reflects the semantic meanings of illustrations, users can efficiently search for references with similar attributes. Besides the search with a single query, a semantic morphing algorithm that searches the intermediate illustrations that gradually connect two queries is proposed. Several experiments were conducted to demonstrate the effectiveness of our methods.",
"title": ""
},
{
"docid": "neg:1840573_1",
"text": "In this paper we present the first public, online demonstration of MaxTract; a tool that converts PDF files containing mathematics into multiple formats including LTEX, HTML with embedded MathML, and plain text. Using a bespoke PDF parser and image analyser, we directly extract character and font information to use as input for a linear grammar which, in conjunction with specialised drivers, can accurately recognise and reproduce both the two dimensional relationships between symbols in mathematical formulae and the one dimensional relationships present in standard text. The main goals of MaxTract are to provide translation services into standard mathematical markup languages and to add accessibility to mathematical documents on multiple levels. This includes both accessibility in the narrow sense of providing access to content for print impaired users, such as those with visual impairments, dyslexia or dyspraxia, as well as more generally to enable any user access to the mathematical content at more re-usable levels than merely visual. MaxTract produces output compatible with web browsers, screen readers, and tools such as copy and paste, which is achieved by enriching the regular text with mathematical markup. The output can also be used directly, within the limits of the presentation MathML produced, as machine readable mathematical input to software systems such as Mathematica or Maple.",
"title": ""
},
{
"docid": "neg:1840573_2",
"text": "A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.",
"title": ""
},
{
"docid": "neg:1840573_3",
"text": "A core problem in learning semantic parsers from denotations is picking out consistent logical forms—those that yield the correct denotation—from a combinatorially large space. To control the search space, previous work relied on restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WIKITABLEQUESTIONS dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.",
"title": ""
},
{
"docid": "neg:1840573_4",
"text": "Articles in the financial press suggest that institutional investors are overly focused on short-term profitability leading mangers to manipulate earnings fearing that a short-term profit disappointment will lead institutions to liquidate their holdings. This paper shows, however, that the absolute value of discretionary accruals declines with institutional ownership. The result is consistent with managers recognizing that institutional owners are better informed than individual investors, which reduces the perceived benefit of managing accruals. We also find that as institutional ownership increases, stock prices tend to reflect a greater proportion of the information in future earnings relative to current earnings. This result is consistent with institutional investors looking beyond current earnings compared to individual investors. Collectively, the results offer strong evidence that managers do not manipulate earnings due to pressure from institutional investors who are overly focused on short-term profitability.",
"title": ""
},
{
"docid": "neg:1840573_5",
"text": "There is risk involved in any construction project. A contractor’s quality assurance system is essential in preventing problems and the reoccurrence of problems. This system ensures consistent quality for the contractor’s clients. An evaluation of the quality systems of 15 construction contractors in Saudi Arabia is discussed here. The evaluation was performed against the ISO 9000 standard. The contractors’ quality systems vary in complexity, ranging from an informal inspection and test system to a comprehensive system. The ISO 9000 clauses most often complied with are those dealing with (1) inspection and test status; (2) inspection and testing; (3) control of nonconformance product; and (4) handling, storage, and preservation. The clauses least complied with concern (1) design control; (2) internal auditing; (3) training; and (4) statistical techniques. Documentation of a quality system is scarce for the majority of the contractors.",
"title": ""
},
{
"docid": "neg:1840573_6",
"text": "This article presents a survey on crowd analysis using computer vision techniques, covering different aspects such as people tracking, crowd density estimation, event detection, validation, and simulation. It also reports how related the areas of computer vision and computer graphics should be to deal with current challenges in crowd analysis.",
"title": ""
},
{
"docid": "neg:1840573_7",
"text": "BACKGROUND\nBeneficial effects of probiotics have never been analyzed in an animal shelter.\n\n\nHYPOTHESIS\nDogs and cats housed in an animal shelter and administered a probiotic are less likely to have diarrhea of ≥2 days duration than untreated controls.\n\n\nANIMALS\nTwo hundred and seventeen cats and 182 dogs.\n\n\nMETHODS\nDouble blinded and placebo controlled. Shelter dogs and cats were housed in 2 separate rooms for each species. For 4 weeks, animals in 1 room for each species was fed Enterococcus faecium SF68 while animals in the other room were fed a placebo. After a 1-week washout period, the treatments by room were switched and the study continued an additional 4 weeks. A standardized fecal score system was applied to feces from each animal every day by a blinded individual. Feces of animals with and without diarrhea were evaluated for enteric parasites. Data were analyzed by a generalized linear mixed model using a binomial distribution with treatment being a fixed effect and the room being a random effect.\n\n\nRESULTS\nThe percentage of cats with diarrhea ≥2 days was significantly lower (P = .0297) in the probiotic group (7.4%) when compared with the placebo group (20.7%). Statistical differences between groups of dogs were not detected but diarrhea was uncommon in both groups of dogs during the study.\n\n\nCONCLUSION AND CLINICAL IMPORTANCE\nCats fed SF68 had fewer episodes of diarrhea of ≥2 days when compared with controls suggests the probiotic may have beneficial effects on the gastrointestinal tract.",
"title": ""
},
{
"docid": "neg:1840573_8",
"text": "In this paper, we study the problem of learning from weakly labeled data, where labels of the training examples are incomplete. This includes, for example, (i) semi-supervised learning where labels are partially known; (ii) multi-instance learning where labels are implicitly known; and (iii) clustering where labels are completely unknown. Unlike supervised learning, learning with weak labels involves a difficult Mixed-Integer Programming (MIP) problem. Therefore, it can suffer from poor scalability and may also get stuck in local minimum. In this paper, we focus on SVMs and propose the WellSVM via a novel label generation strategy. This leads to a convex relaxation of the original MIP, which is at least as tight as existing convex Semi-Definite Programming (SDP) relaxations. Moreover, the WellSVM can be solved via a sequence of SVM subproblems that are much more scalable than previous convex SDP relaxations. Experiments on three weakly labeled learning tasks, namely, (i) semi-supervised learning; (ii) multi-instance learning for locating regions of interest in content-based information retrieval; and (iii) clustering, clearly demonstrate improved performance, and WellSVM is also readily applicable on large data sets.",
"title": ""
},
{
"docid": "neg:1840573_9",
"text": "Network embedding aims to represent each node in a network as a low-dimensional feature vector that summarizes the given node’s (extended) network neighborhood. The nodes’ feature vectors can then be used in various downstream machine learning tasks. Recently, many embedding methods that automatically learn the features of nodes have emerged, such as node2vec and struc2vec, which have been used in tasks such as node classification, link prediction, and node clustering, mainly in the social network domain. There are also other embedding methods that explicitly look at the connections between nodes, i.e., the nodes’ network neighborhoods, such as graphlets. Graphlets have been used in many tasks such as network comparison, link prediction, and network clustering, mainly in the computational biology domain. Even though the two types of embedding methods (node2vec/struct2vec versus graphlets) have a similar goal – to represent nodes as features vectors, no comparisons have been made between them, possibly because they have originated in the different domains. Therefore, in this study, we compare graphlets to node2vec and struc2vec, and we do so in the task of network alignment. In evaluations on synthetic and real-world biological networks, we find that graphlets are both more accurate and faster than node2vec and struc2vec.",
"title": ""
},
{
"docid": "neg:1840573_10",
"text": "In the last decade, supervised deep learning approaches have been extensively employed in visual odometry (VO) applications, which is not feasible in environments where labelled data is not abundant. On the other hand, unsupervised deep learning approaches for localization and mapping in unknown environments from unlabelled data have received comparatively less attention in VO research. In this study, we propose a generative unsupervised learning framework that predicts 6-DoF pose camera motion and monocular depth map of the scene from unlabelled RGB image sequences, using deep convolutional Generative Adversarial Networks (GANs). We create a supervisory signal by warping view sequences and assigning the re-projection minimization to the objective loss function that is adopted in multi-view pose estimation and single-view depth generation network. Detailed quantitative and qualitative evaluations of the proposed framework on the KITTI [1] and Cityscapes [2] datasets show that the proposed method outperforms both existing traditional and unsupervised deep VO methods providing better results for both pose estimation and depth recovery.",
"title": ""
},
{
"docid": "neg:1840573_11",
"text": "Enterprise Resource Planning (ERP) has come to mean many things over the last several decades. Divergent applications by practitioners and academics, as well as by researchers in alternative fields of study, has allowed for both considerable proliferation of information on the topic but also for a considerable amount of confusion regarding the meaning of the term. In reviewing ERP research two distinct research streams emerge. The first focuses on the fundamental corporate capabilities driving ERP as a strategic concept. A second stream focuses on the details associated with implementing information systems and their relative success and cost. This paper briefly discusses these research streams and suggests some ideas for related future research. Published in the European Journal of Operational Research 146(2), 2003",
"title": ""
},
{
"docid": "neg:1840573_12",
"text": "We introduce a similarity-based machine learning approach for detecting non-market, adversarial, malicious Android apps. By adversarial, we mean those apps designed to avoid detection. Our approach relies on identifying the Android applications that are similar to an adversarial known Android malware. In our approach, similarity is detected statically by computing the similarity score between two apps based on their methods similarity. The similarity between methods is computed using the normalized compression distance (NCD) in dependence of either zlib or bz2 compressors. The NCD calculates the semantic similarity between pair of methods in two compared apps. The first app is one of the sample apps in the input dataset, while the second app is one of malicious apps stored in a malware database. Later all the computed similarity scores are used as features for training a supervised learning classifier to detect suspicious apps with high similarity score to the malicious ones in the database.",
"title": ""
},
{
"docid": "neg:1840573_13",
"text": "Face perception relies on computations carried out in face-selective cortical areas. These areas have been intensively investigated for two decades, and this work has been guided by an influential neural model suggested by Haxby and colleagues in 2000. Here, we review new findings about face-selective areas that suggest the need for modifications and additions to the Haxby model. We suggest a revised framework based on (a) evidence for multiple routes from early visual areas into the face-processing system, (b) information about the temporal characteristics of these areas, (c) indications that the fusiform face area contributes to the perception of changeable aspects of faces, (d) the greatly elevated responses to dynamic compared with static faces in dorsal face-selective brain areas, and (e) the identification of three new anterior face-selective areas. Together, these findings lead us to suggest that face perception depends on two separate pathways: a ventral stream that represents form information and a dorsal stream driven by motion and form information.",
"title": ""
},
{
"docid": "neg:1840573_14",
"text": "Sleep is a complex phenomenon that could be understood and assessed at many levels. Sleep could be described at the behavioral level (relative lack of movements and awareness and responsiveness) and at the brain level (based on EEG activity). Sleep could be characterized by its duration, by its distribution during the 24-hr day period, and by its quality (e.g., consolidated versus fragmented). Different methods have been developed to assess various aspects of sleep. This chapter covers the most established and common methods used to assess sleep in infants and children. These methods include polysomnography, videosomnography, actigraphy, direct observations, sleep diaries, and questionnaires. The advantages and disadvantages of each method are highlighted.",
"title": ""
},
{
"docid": "neg:1840573_15",
"text": "The structure of foot-and-mouth disease virus has been determined at close to atomic resolution by X-ray diffraction without experimental phase information. The virus shows similarities with other picornaviruses but also several unique features. The canyon or pit found in other picornaviruses is absent; this has important implications for cell attachment. The most immunogenic portion of the capsid, which acts as a potent peptide vaccine, forms a disordered protrusion on the virus surface.",
"title": ""
},
{
"docid": "neg:1840573_16",
"text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to replicate some of these abilities with a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which the agent can move and interact with objects it sees, the agent learns a world model predicting the dynamic consequences of its actions. Simultaneously, the agent learns to take actions that adversarially challenge the developing world model, pushing the agent to explore novel and informative interactions with its environment. We demonstrate that this policy leads to the self-supervised emergence of a spectrum of complex behaviors, including ego motion prediction, object attention, and object gathering. Moreover, the world model that the agent learns supports improved performance on object dynamics prediction and localization tasks. Our results are a proof-of-principle that computational models of intrinsic motivation might account for key features of developmental visuomotor learning in infants.",
"title": ""
},
{
"docid": "neg:1840573_17",
"text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a study of 11 widely used internal clustering validation measures for crisp clustering. The results of this study indicate that these existing measures have certain limitations in different application scenarios. As an alternative choice, we propose a new internal clustering validation measure, named clustering validation index based on nearest neighbors (CVNN), which is based on the notion of nearest neighbors. This measure can dynamically select multiple objects as representatives for different clusters in different situations. Experimental results show that CVNN outperforms the existing measures on both synthetic data and real-world data in different application scenarios.",
"title": ""
},
{
"docid": "neg:1840573_18",
"text": "Field experiment was conducted on fodder maize to explore the potential of integrated use of chemical, organic and biofertilizers for improving maize growth, beneficial microflora in the rhizosphere and the economic returns. The treatments were designed to make comparison of NPK fertilizer with different combinations of half dose of NP with organic and biofertilizers viz. biological potassium fertilizer (BPF), Biopower, effective microorganisms (EM) and green force compost (GFC). Data reflected maximum crop growth in terms of plant height, leaf area and fresh biomass with the treatment of full NPK; and it was followed by BPF+full NP. The highest uptake of NPK nutrients by crop was recorded as: N under half NP+Biopower; P in BPF+full NP; and K from full NPK. The rhizosphere microflora enumeration revealed that Biopower+EM applied along with half dose of GFC soil conditioner (SC) or NP fertilizer gave the highest count of N-fixing bacteria (Azotobacter, Azospirillum, Azoarcus andZoogloea). Regarding the P-solubilizing bacteria,Bacillus was having maximum population with Biopower+BPF+half NP, andPseudomonas under Biopower+EM+half NP treatment. It was concluded that integration of half dose of NP fertilizer with Biopower+BPF / EM can give similar crop yield as with full rate of NP fertilizer; and through reduced use of fertilizers the production cost is minimized and the net return maximized. However, the integration of half dose of NP fertilizer with biofertilizers and compost did not give maize fodder growth and yield comparable to that from full dose of NPK fertilizers.",
"title": ""
}
] |
1840574 | Artificial Neural Networks ’ Applications in Management | [
{
"docid": "pos:1840574_0",
"text": "The long-running debate between the ‘rational design’ and ‘emergent process’ schools of strategy formation has involved caricatures of firms’ strategic planning processes, but little empirical evidence of whether and how companies plan. Despite the presumption that environmental turbulence renders conventional strategic planning all but impossible, the evidence from the corporate sector suggests that reports of the demise of strategic planning are greatly exaggerated. The goal of this paper is to fill this empirical gap by describing the characteristics of the strategic planning systems of multinational, multibusiness companies faced with volatile, unpredictable business environments. In-depth case studies of the planning systems of eight of the world’s largest oil companies identified fundamental changes in the nature and role of strategic planning since the end of the 1970s. The findings point to a possible reconciliation of ‘design’ and ‘process’ approaches to strategy formulation. The study pointed to a process of planned emergence in which strategic planning systems provided a mechanism for coordinating decentralized strategy formulation within a structure of demanding performance targets and clear corporate guidelines. The study shows that these planning systems fostered adaptation and responsiveness, but showed limited innovation and analytical sophistication. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
}
] | [
{
"docid": "neg:1840574_0",
"text": "Connectionist networks that have learned one task can be reused on related tasks in a process that is called \"transfer\". This paper surveys recent work on transfer. A number of distinctions between kinds of transfer are identified, and future directions for research are explored. The study of transfer has a long history in cognitive science. Discoveries about transfer in human cognition can inform applied efforts. Advances in applications can also inform cognitive studies.",
"title": ""
},
{
"docid": "neg:1840574_1",
"text": "In modern daily life people need to move, whether in business or leisure, sightseeing or addressing a meeting. Often this is done in familiar environments, but in some cases we need to find our way in unfamiliar scenarios. Visual impairment is a factor that greatly reduces mobility. Currently, the most widespread and used means by the visually impaired people are the white stick and the guide dog; however both present some limitations. With the recent advances in inclusive technology it is possible to extend the support given to people with visual impairment during their mobility. In this context we propose a system, named SmartVision, whose global objective is to give blind users the ability to move around in unfamiliar environments, whether indoor or outdoor, through a user friendly interface that is fed by a geographic information system (GIS). In this paper we propose the development of an electronic white cane that helps moving around, in both indoor and outdoor environments, providing contextualized geographical information using RFID technology.",
"title": ""
},
{
"docid": "neg:1840574_2",
"text": "Neural network approaches to Named-Entity Recognition reduce the need for carefully handcrafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a lowdimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute — offline — a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.",
"title": ""
},
{
"docid": "neg:1840574_3",
"text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.",
"title": ""
},
{
"docid": "neg:1840574_4",
"text": "XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for i) discovering the structural commonalities between sub-trees, ii) identifying sub-tree semantic resemblances, iii) computing tree-based edit operations costs, and iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance. © 2002 Elsevier Science. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840574_5",
"text": "Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke-Lichtheim-Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.",
"title": ""
},
{
"docid": "neg:1840574_6",
"text": "Evolutionary adaptation can be rapid and potentially help species counter stressful conditions or realize ecological opportunities arising from climate change. The challenges are to understand when evolution will occur and to identify potential evolutionary winners as well as losers, such as species lacking adaptive capacity living near physiological limits. Evolutionary processes also need to be incorporated into management programmes designed to minimize biodiversity loss under rapid climate change. These challenges can be met through realistic models of evolutionary change linked to experimental data across a range of taxa.",
"title": ""
},
{
"docid": "neg:1840574_7",
"text": "A novel inductor switching technique is used to design and implement a wideband LC voltage controlled oscillator (VCO) in 0.13µm CMOS. The VCO has a tuning range of 87.2% between 3.3 and 8.4 GHz with phase noise ranging from −122 to −117.2 dBc/Hz at 1MHz offset. The power varies between 6.5 and 15.4 mW over the tuning range. This results in a Power-Frequency-Tuning Normalized figure of merit (PFTN) between 6.6 and 10.2 dB which is one of the best reported to date.",
"title": ""
},
{
"docid": "neg:1840574_8",
"text": "Cluster ensembles have recently emerged as a powerful alternative to standard cluster analysis, aggregating several input data clusterings to generate a single output clustering, with improved robustness and stability. From the early work, these techniques held great promise; however, most of them generate the final solution based on incomplete information of a cluster ensemble. The underlying ensemble-information matrix reflects only cluster-data point relations, while those among clusters are generally overlooked. This paper presents a new link-based approach to improve the conventional matrix. It achieves this using the similarity between clusters that are estimated from a link network model of the ensemble. In particular, three new link-based algorithms are proposed for the underlying similarity assessment. The final clustering result is generated from the refined matrix using two different consensus functions of feature-based and graph-based partitioning. This approach is the first to address and explicitly employ the relationship between input partitions, which has not been emphasized by recent studies of matrix refinement. The effectiveness of the link-based approach is empirically demonstrated over 10 data sets (synthetic and real) and three benchmark evaluation measures. The results suggest the new approach is able to efficiently extract information embedded in the input clusterings, and regularly illustrate higher clustering quality in comparison to several state-of-the-art techniques.",
"title": ""
},
{
"docid": "neg:1840574_9",
"text": "In recent years, we have witnessed a significant growth of “social computing” services, or online communities where users contribute content in various forms, including images, text or video. Content contribution from members is critical to the viability of these online communities. It is therefore important to understand what drives users to share content with others in such settings. We extend previous literature on user contribution by studying the factors that are associated with users’ photo sharing in an online community, drawing on motivation theories as well as on analysis of basic structural properties. Our results indicate that photo sharing declines in respect to the users’ tenure in the community. We also show that users with higher commitment to the community and greater “structural embeddedness” tend to share more content. We demonstrate that the motivation of self-development is negatively related to photo sharing, and that tenure in the community moderates the effect of self-development on photo sharing. Directions for future research, as well as implications for theory and practice are discussed.",
"title": ""
},
{
"docid": "neg:1840574_10",
"text": "Prior research has linked mindfulness to improvements in attention, and suggested that the effects of mindfulness are particularly pronounced when individuals are cognitively depleted or stressed. Yet, no studies have tested whether mindfulness improves declarative awareness of unexpected stimuli in goal-directed tasks. Participants (N=794) were either depleted (or not) and subsequently underwent a brief mindfulness induction (or not). They then completed an inattentional blindness task during which an unexpected distractor appeared on the computer monitor. This task was used to assess declarative conscious awareness of the unexpected distractor's presence and the extent to which its perceptual properties were encoded. Mindfulness increased awareness of the unexpected distractor (i.e., reduced rates of inattentional blindness). Contrary to predictions, no mindfulness×depletion interaction emerged. Depletion however, increased perceptual encoding of the distractor. These results suggest that mindfulness may foster awareness of unexpected stimuli (i.e., reduce inattentional blindness).",
"title": ""
},
{
"docid": "neg:1840574_11",
"text": "The noble aim behind this project is to study and capture the Natural Eye movement detection and trying to apply it as assisting application for paralyzed patients those who cannot speak or use hands such disease as amyotrophic lateral sclerosis (ALS), Guillain-Barre Syndrome, quadriplegia & heniiparesis. Using electrophySiological genereted by the voluntary contradictions of the muscles around the eye. The proposed system which is based on the design and application of an electrooculogram (EOG) based an efficient human–computer interface (HCI). Establishing an alternative channel without speaking and hand movements is important in increasing the quality of life for the handicapped. EOG-based systems are more efficient than electroencephalogram (EEG)-based systems as easy acquisition, higher amplitude, and also easily classified. By using a realized virtual keyboard like graphical user interface, it is possible to notify in writing the needs of the patient in a relatively short time. Considering the bio potential measurement pitfalls, the novel EOG-based HCI system allows people to successfully communicate with their environment by using only eye movements. [1] Classifying horizontal and vertical EOG channel signals in an efficient interface is realized in this study. The nearest neighbourhood algorithm will be use to classify the signals. The novel EOG-based HCI system allows people to successfully and economically communicate with their environment by using only eye movements. [2] An Electrooculography is a method of tracking the ocular movement, based on the voltage changes that occur due to the medications on the special orientation of the eye dipole. The resulting signal has a myriad of possible applications. [2] In this dissertation phase one, the goal was to study the Eye movements and respective signal generation, EOG signal acquisition and also study of a Man-Machine Interface that made use of this signal. As per our goal we studied eye movements and design simple EOG acquisition circuit. We got efficient signal output in oscilloscope. I sure that result up to present stage will definitely leads us towards designing of novel assisting device for paralyzed patients. Thus, we set out to create an interface will be use by mobility impaired patients, allowing them to use their eyes to call nurse or attended person and some other requests. Keywords— Electro Oculogram, Natural Eye movement Detection, EOG acquisition & signal conditioning, Eye based Computer interface GUI, Paralysed assisting device, Eye movement recognization",
"title": ""
},
{
"docid": "neg:1840574_12",
"text": "Mastocytosis is a rare, heterogeneous disease of complex etiology, characterized by a marked increase in mast cell density in the skin, bone marrow, liver, spleen, gastrointestinal mucosa and lymph nodes. The most frequent site of organ involvement is the skin. Cutaneous lesions include urticaria pigmentosa, mastocytoma, diffuse and erythematous cutaneous mastocytosis, and telangiectasia macularis eruptiva perstans. Human mast cells originate from CD34 progenitors, under the influence of stem cell factor (SCF); a substantial number of patients exhibit activating mutations in c-kit, the receptor for SCF. Mast cells can synthesize a variety of cytokines that could affect the skeletal system, increasing perforating bone resorption and leading to osteoporosis. The coexistence of hematologic disorders, such as myeloproliferative or myelodysplastic syndromes, or of lymphoreticular malignancies, is common. Compared with radiographs, Tc-99m methylenediphosphonate (MDP) scintigraphy is better able to show the widespread skeletal involvement in patients with diffuse disease. T1-weighted MR imaging is a sensitive technique for detecting marrow abnormalities in patients with systemic mastocytosis, showing several different patterns of marrow involvement. We report the imaging findings a 36-year old male with well-documented urticaria pigmentosa. In order to evaluate mastocytic bone marrow involvement, 99mTc-MDP scintigraphy, T1-weighted spin echo and short tau inversion recovery MRI at 1.0 T, were performed. Both scan findings were consistent with marrow hyperactivity. Thus, the combined use of bone scan and MRI may be useful in order to recognize marrow involvement in suspected systemic mastocytosis, perhaps avoiding bone biopsy.",
"title": ""
},
{
"docid": "neg:1840574_13",
"text": "Image caption generation is the problem of generating a descriptive sentence of an image. Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. This paper presents a brief survey of some technical aspects and methods for description-generation of images. As there has been great interest in research community, to come up with automatic ways to retrieve images based on content. There are numbers of techniques, that, have been used to solve this problem, and purpose of this paper is to have an overview of many of these approaches and databases used for description generation purpose. Finally, we discuss open challenges and future directions for upcoming researchers.",
"title": ""
},
{
"docid": "neg:1840574_14",
"text": "Intuitively, for a training sample xi with its associated label yi, a deep model is getting closer to the correct answer in the higher layers. It starts with the difficult job of classifying xi, which becomes easier as the higher layers distill xi into a representation that is easier to classify. One might be tempted to say that this means that the higher layers have more information about the ground truth, but this would be incorrect.",
"title": ""
},
{
"docid": "neg:1840574_15",
"text": "Social media shatters the barrier to communicate anytime anywhere for people of all walks of life. The publicly available, virtually free information in social media poses a new challenge to consumers who have to discern whether a piece of information published in social media is reliable. For example, it can be difficult to understand the motivations behind a statement passed from one user to another, without knowing the person who originated the message. Additionally, false information can be propagated through social media, resulting in embarrassment or irreversible damages. Provenance data associated with a social media statement can help dispel rumors, clarify opinions, and confirm facts. However, provenance data about social media statements is not readily available to users today. Currently, providing this data to users requires changing the social media infrastructure or offering subscription services. Taking advantage of social media features, research in this nascent field spearheads the search for a way to provide provenance data to social media users, thus leveraging social media itself by mining it for the provenance data. Searching for provenance data reveals an interesting problem space requiring the development and application of new metrics in order to provide meaningful provenance data to social media users. This lecture reviews the current research on information provenance, explores exciting research opportunities to address pressing needs, and shows how data mining can enable a social media user to make informed judgements about statements published in social media.",
"title": ""
},
{
"docid": "neg:1840574_16",
"text": "Key-Value Stores (KVS) are becoming increasingly popular because they scale up and down elastically, sustain high throughputs for get/put workloads and have low latencies. KVS owe these advantages to their simplicity. This simplicity, however, comes at a cost: It is expensive to process complex, analytical queries on top of a KVS because today’s generation of KVS does not support an efficient way to scan the data. The problem is that there are conflicting goals when designing a KVS for analytical queries and for simple get/put workloads: Analytical queries require high locality and a compact representation of data whereas elastic get/put workloads require sparse indexes. This paper shows that it is possible to have it all, with reasonable compromises. We studied the KVS design space and built TellStore, a distributed KVS, that performs almost as well as state-of-the-art KVS for get/put workloads and orders of magnitude better for analytical and mixed workloads. This paper presents the results of comprehensive experiments with an extended version of the YCSB benchmark and a workload from the telecommunication industry.",
"title": ""
},
{
"docid": "neg:1840574_17",
"text": "Human face feature extraction using digital images is a vital element for several applications such as: identification and facial recognition, medical application, video games, cosmetology, etc. The skin pores are very important element of the structure of the skin. A novelty method is proposed allowing decomposing an photography of human face from digital image (RGB) in two layers, melanin and hemoglobin. From melanin layer, the main pores from the face can be obtained, as well as the centroids of each of them. It has been found that the pore configuration of the skin is invariant and unique for each individual. Therefore, from the localization of the pores of a human face, it is a possibility to use them for diverse application in the fields of pattern",
"title": ""
},
{
"docid": "neg:1840574_18",
"text": "Early-stage romantic love can induce euphoria, is a cross-cultural phenomenon, and is possibly a developed form of a mammalian drive to pursue preferred mates. It has an important influence on social behaviors that have reproductive and genetic consequences. To determine which reward and motivation systems may be involved, we used functional magnetic resonance imaging and studied 10 women and 7 men who were intensely \"in love\" from 1 to 17 mo. Participants alternately viewed a photograph of their beloved and a photograph of a familiar individual, interspersed with a distraction-attention task. Group activation specific to the beloved under the two control conditions occurred in dopamine-rich areas associated with mammalian reward and motivation, namely the right ventral tegmental area and the right postero-dorsal body and medial caudate nucleus. Activation in the left ventral tegmental area was correlated with facial attractiveness scores. Activation in the right anteromedial caudate was correlated with questionnaire scores that quantified intensity of romantic passion. In the left insula-putamen-globus pallidus, activation correlated with trait affect intensity. The results suggest that romantic love uses subcortical reward and motivation systems to focus on a specific individual, that limbic cortical regions process individual emotion factors, and that there is localization heterogeneity for reward functions in the human brain.",
"title": ""
},
{
"docid": "neg:1840574_19",
"text": "The widespread use of email has raised serious privacy concerns. A critical issue is how to prevent email information leaks, i.e., when a message is accidentally addressed to non-desired recipients. This is an increasingly common problem that can severely harm individuals and corporations — for instance, a single email leak can potentially cause expensive law suits, brand reputation damage, negotiation setbacks and severe financial losses. In this paper we present the first attempt to solve this problem. We begin by redefining it as an outlier detection task, where the unintended recipients are the outliers. Then we combine real email examples (from the Enron Corpus) with carefully simulated leak-recipients to learn textual and network patterns associated with email leaks. This method was able to detect email leaks in almost 82% of the test cases, significantly outperforming all other baselines. More importantly, in a separate set of experiments we applied the proposed method to the task of finding real cases of email leaks. The result was encouraging: a variation of the proposed technique was consistently successful in finding two real cases of email leaks. Not only does this paper introduce the important problem of email leak detection, but also presents an effective solution that can be easily implemented in any email client — with no changes in the email server side.",
"title": ""
}
] |
1840575 | Effect of Iyengar yoga therapy for chronic low back pain | [
{
"docid": "pos:1840575_0",
"text": "On the basis of medical officers diagnosis, thirty three (N = 33) hypertensives, aged 35-65 years, from Govt. General Hospital, Pondicherry, were examined with four variables viz, systolic and diastolic blood pressure, pulse rate and body weight. The subjects were randomly assigned into three groups. The exp. group-I underwent selected yoga practices, exp. group-II received medical treatment by the physician of the said hospital and the control group did not participate in any of the treatment stimuli. Yoga imparted in the morning and in the evening with 1 hr/session. day-1 for a total period of 11-weeks. Medical treatment comprised drug intake every day for the whole experimental period. The result of pre-post test with ANCOVA revealed that both the treatment stimuli (i.e., yoga and drug) were effective in controlling the variables of hypertension.",
"title": ""
}
] | [
{
"docid": "neg:1840575_0",
"text": "In this paper, we present the concept of transmitting power without using wires i.e., transmitting power as microwaves from one place to another is in order to reduce the cost, transmission and distribution losses. This concept is known as Microwave Power transmission (MPT). We also discussed the technological developments in Wireless Power Transmission (WPT) which are required for the improment .The components which are requiredfor the development of Microwave Power transmission(MPT)are also mentioned along with the performance when they are connected to various devices at different frequency levels . The advantages, disadvantages, biological impacts and applications of WPT are also presented.",
"title": ""
},
{
"docid": "neg:1840575_1",
"text": "There has been much research investigating team cognition, naturalistic decision making, and collaborative technology as it relates to real world, complex domains of practice. However, there has been limited work in incorporating naturalistic decision making models for supporting distributed team decision making. The aim of this research is to support human decision making teams using cognitive agents empowered by a collaborative Recognition-Primed Decision model. In this paper, we first describe an RPD-enabled agent architecture (R-CAST), in which we have implemented an internal mechanism of decision-making adaptation based on collaborative expectancy monitoring, and an information exchange mechanism driven by relevant cue analysis. We have evaluated R-CAST agents in a real-time simulation environment, feeding teams with frequent decision-making tasks under different tempo situations. While the result conforms to psychological findings that human team members are extremely sensitive to their workload in high-tempo situations, it clearly indicates that human teams, when supported by R-CAST agents, can perform better in the sense that they can maintain team performance at acceptable levels in high time pressure situations.",
"title": ""
},
{
"docid": "neg:1840575_2",
"text": "Given a network with node attributes, how can we identify communities and spot anomalies? How can we characterize, describe, or summarize the network in a succinct way? Community extraction requires a measure of quality for connected subgraphs (e.g., social circles). Existing subgraph measures, however, either consider only the connectedness of nodes inside the community and ignore the cross-edges at the boundary (e.g., density) or only quantify the structure of the community and ignore the node attributes (e.g., conductance). In this work, we focus on node-attributed networks and introduce: (1) a new measure of subgraph quality for attributed communities called normality, (2) a community extraction algorithm that uses normality to extract communities and a few characterizing attributes per community, and (3) a summarization and interactive visualization approach for attributed graph exploration. More specifically, (1) we first introduce a new measure to quantify the normality of an attributed subgraph. Our normality measure carefully utilizes structure and attributes together to quantify both the internal consistency and external separability. We then formulate an objective function to automatically infer a few attributes (called the “focus”) and respective attribute weights, so as to maximize the normality score of a given subgraph. Most notably, unlike many other approaches, our measure allows for many cross-edges as long as they can be “exonerated;” i.e., either (i) are expected under a null graph model, and/or (ii) their boundary nodes do not exhibit the focus attributes. Next, (2) we propose AMEN (for Attributed Mining of Entity Networks), an algorithm that simultaneously discovers the communities and their respective focus in a given graph, with a goal to maximize the total normality. Communities for which a focus that yields high normality cannot be found are considered low quality or anomalous. Last, (3) we formulate a summarization task with a multi-criteria objective, which selects a subset of the communities that (i) cover the entire graph well, are (ii) high quality and (iii) diverse in their focus attributes. We further design an interactive visualization interface that presents the communities to a user in an interpretable, user-friendly fashion. The user can explore all the communities, analyze various algorithm-generated summaries, as well as devise their own summaries interactively to characterize the network in a succinct way. As the experiments on real-world attributed graphs show, our proposed approaches effectively find anomalous communities and outperform several existing measures and methods, such as conductance, density, OddBall, and SODA. We also conduct extensive user studies to measure the capability and efficiency that our approach provides to the users toward network summarization, exploration, and sensemaking.",
"title": ""
},
{
"docid": "neg:1840575_3",
"text": "Social media platforms are popular venues for fashion brand marketing and advertising. With the introduction of native advertising, users don’t have to endure banner ads that hold very little saliency and are unattractive. Using images and subtle text overlays, even in a world of ever-depreciating attention span, brands can retain their audience and have a capacious creative potential. While an assortment of marketing strategies are conjectured, the subtle distinctions between various types of marketing strategies remain under-explored. This paper presents a qualitative analysis on the influence of social media platforms on different behaviors of fashion brand marketing. We employ both linguistic and computer vision techniques while comparing and contrasting strategic idiosyncrasies. We also analyze brand audience retention and social engagement hence providing suggestions in adapting advertising and marketing strategies over Twitter and Instagram.",
"title": ""
},
{
"docid": "neg:1840575_4",
"text": "ÐSoftware engineers use a number of different types of software development technical review (SDTR) for the purpose of detecting defects in software products. This paper applies the behavioral theory of group performance to explain the outcomes of software reviews. A program of empirical research is developed, including propositions to both explain review performance and identify ways of improving review performance based on the specific strengths of individuals and groups. Its contributions are to clarify our understanding of what drives defect detection performance in SDTRs and to set an agenda for future research. In identifying individuals' task expertise as the primary driver of review performance, the research program suggests specific points of leverage for substantially improving review performance. It points to the importance of understanding software reading expertise and implies the need for a reconsideration of existing approaches to managing reviews. Index TermsÐInspections, walkthroughs, technical reviews, defects, defect detection, groups, group process, group size, expertise, reading, training, behavioral research, theory, research program.",
"title": ""
},
{
"docid": "neg:1840575_5",
"text": "The method of finding high-quality answers has significant impact on user satisfaction in community question answering systems. However, due to the lexical gap between questions and answers as well as spam typically existing in user-generated content, filtering and ranking answers is very challenging. Previous solutions mainly focus on generating redundant features, or finding textual clues using machine learning techniques; none of them ever consider questions and their answers as relational data but instead model them as independent information. Moreover, they only consider the answers of the current question, and ignore any previous knowledge that would be helpful to bridge the lexical and semantic gap. We assume that answers are connected to their questions with various types of latent links, i.e. positive indicating high-quality answers, negative links indicating incorrect answers or user-generated spam, and propose an analogical reasoning-based approach which measures the analogy between the new question-answer linkages and those of relevant knowledge which contains only positive links; the candidate answer which has the most analogous link is assumed to be the best answer. We conducted experiments based on 29.8 million Yahoo!Answer question-answer threads and showed the effectiveness of our approach.",
"title": ""
},
{
"docid": "neg:1840575_6",
"text": "This paper presents a new conversion method to automatically transform a constituent-based Vietnamese Treebank into dependency trees. On a dependency Treebank created according to our new approach, we examine two stateof-the-art dependency parsers: the MSTParser and the MaltParser. Experiments show that the MSTParser outperforms the MaltParser. To the best of our knowledge, we report the highest performances published to date in the task of dependency parsing for Vietnamese. Particularly, on gold standard POS tags, we get an unlabeled attachment score of 79.08% and a labeled attachment score of 71.66%.",
"title": ""
},
{
"docid": "neg:1840575_7",
"text": "The linear programming (LP) is one of the most popular necessary optimization tool used for data analytics as well as in various scientific fields. However, the current state-of-art algorithms suffer from scalability issues when processing Big Data. For example, the commercial optimization software IBM CPLEX cannot handle an LP with more than hundreds of thousands variables or constraints. Existing algorithms are fundamentally hard to scale because they are inevitably too complex to parallelize. To address the issue, we study the possibility of using the Belief Propagation (BP) algorithm as an LP solver. BP has shown remarkable performances on various machine learning tasks and it naturally lends itself to fast parallel implementations. Despite this, very little work has been done in this area. In particular, while it is generally believed that BP implicitly solves an optimization problem, it is not well understood under what conditions the solution to a BP converges to that of a corresponding LP formulation. Our efforts consist of two main parts. First, we perform a theoretic study and establish the conditions in which BP can solve LP [1,2]. Although there has been several works studying the relation between BP and LP for certain instances, our work provides a generic condition unifying all prior works for generic LP. Second, utilizing our theoretical results, we develop a practical BP-based parallel algorithms for solving generic LPs, and it shows 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm [3, 4]. As a result of the study, the PIs have published two conference papers [1,3] and two follow-up journal papers [3,4] are under submission. We refer the readers to our published work [1,3] for details. Introduction: The main goal of our research is to develop a distributed and parallel algorithm for large-scale linear optimization (or programming). Considering the popularity and importance of linear optimizations in various fields, the proposed method has great potentials applicable to various big data analytics. Our approach is based on the Belief Propagation (BP) algorithm, which has shown remarkable performances on various machine learning tasks and naturally lends itself to fast parallel implementations. Our key contributions are summarized below: 1) We establish key theoretic foundations in the area of Belief Propagation. In particular, we show that BP converges to the solution of LP if some sufficient conditions are satisfied. Our DISTRIBUTION A. Approved for public release: distribution unlimited. conditions not only cover various prior studies including maximum weight matching, mincost network flow, shortest path, etc., but also discover new applications such as vertex cover and traveling salesman. 2) While the theoretic study provides understanding of the nature of BP, it falls short in slow convergence speed, oscillation and wrong convergence. To make BP-based algorithms more practical, we design a BP-based framework which uses BP as a ‘weight transformer’ to resolve the convergence issue of BP. We refer the readers to our published work [1, 3] for details. The rest of the report contains a summary of our work appeared in UAI (Uncertainty in Artificial Intelligence) and IEEE Conference in Big Data [1,3] and follow up work [2,4] under submission to major journals. 
Experiment: We first establish theoretical conditions when Belief Propagation (BP) can solve Linear Programming (LP), and second provide a practical distributed/parallel BP-based framework solving generic optimizations. We demonstrate the wide-applicability of our approach via popular combinatorial optimizations including maximum weight matching, shortest path, traveling salesman, cycle packing and vertex cover. Results and Discussion: Our contribution consists of two parts: Study 1 [1,2] looks at the theoretical conditions that BP converges to the solution of LP. Our theoretical result unify almost all prior result about BP for combinatorial optimization. Furthermore, our conditions provide a guideline for designing distributed algorithm for combinatorial optimization problems. Study 2 [3,4] focuses on building an optimal framework based on the theory of Study 1 for boosting the practical performance of BP. Our framework is generic, thus, it can be easily extended to various optimization problems. We also compare the empirical performance of our framework to other heuristics and state of the art algorithms for several combinatorial optimization problems. -------------------------------------------------------Study 1 -------------------------------------------------------We first introduce the background for our contributions. A joint distribution of � (binary) variables � = [��] ∈ {0,1}� is called graphical model (GM) if it factorizes as follows: for � = [��] ∈ {0,1}�, where ψψ� ,�� are some non-negative functions so called factors; � is a collection of subsets (each αα� is a subset of {1,⋯ ,�} with |��| ≥ 2; �� is the projection of � onto dimensions included in αα. Assignment �∗ is called maximum-a-posteriori (MAP) assignment if �∗maximizes the probability. The following figure depicts the graphical relation between factors � and variables �. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 1: Factor graph for the graphical model with factors αα1 = {1,3},�2 = {1,2,4},�3 = {2,3,4} Now we introduce the algorithm, (max-product) BP, for approximating MAP assignment in a graphical model. BP is an iterative procedure; at each iteration �, there are four messages between each variable �� and every associated αα ∈ ��, where ��: = {� ∈ �:� ∈ �}. Then, messages are updated as follows: Finally, given messages, BP marginal beliefs are computed as follows: Then, BP outputs the approximated MAP assignment ��� = [��] as Now, we are ready to introduce the main result of Study 1. Consider the following GM: for � = [��] ∈ {0,1}� and � = [��] ∈ ��, where the factor function ψψαα for αα ∈ � is defined as for some matrices ��,�� and vectors ��,��. Consider the Linear Programming (LP) corresponding the above GM: One can easily observe that the MAP assignments for GM corresponds to the (optimal) solution of the above LP if the LP has an integral solution �∗ ∈ {0,1}�. The following theorem is our main result of Study 1 which provide sufficient conditions so that BP can indeed find the LP solution DISTRIBUTION A. Approved for public release: distribution unlimited. Theorem 1 can be applied to several combinatorial optimization problems including matching, network flow, shortest path, vertex cover, etc. See [1,2] for the detailed proof of Theorem 1 and its applications to various combinatorial optimizations including maximum weight matching, min-cost network flow, shortest path, vertex cover and traveling salesman. 
-------------------------------------------------------Study 2 -------------------------------------------------------Study 2 mainly focuses on providing a distributed generic BP-based combinatorial optimization solver which has high accuracy and low computational complexity. In summary, the key contributions of Study 2 are as follows: 1) Practical BP-based algorithm design: To the best of our knowledge, this paper is the first to propose a generic concept for designing BP-based algorithms that solve large-scale combinatorial optimization problems. 2) Parallel implementation: We also demonstrate that the algorithm is easily parallelizable. For the maximum weighted matching problem, this translates to 71x speed up while sacrificing only 0.1% accuracy compared to the state-of-art exact algorithm. 3) Extensive empirical evaluation: We evaluate our algorithms on three different combinatorial optimization problems on diverse synthetic and real-world data-sets. Our evaluation shows that the framework shows higher accuracy compared to other known heuristics. Designing a BP-based algorithm for some problem is easy in general. However (a) it might diverge or converge very slowly, (b) even if it converges quickly, the BP decision might be not correct, and (c) even worse, BP might produce an infeasible solution, i.e., it does not satisfy the constraints of the problem. DISTRIBUTION A. Approved for public release: distribution unlimited. Figure 2: Overview of our generic BP-based framework To address these issues, we propose a generic BP-based framework that provides highly accurate approximate solutions for combinatorial optimization problems. The framework has two steps, as shown in Figure 2. In the first phase, it runs a BP algorithm for a fixed number of iterations without waiting for convergence. Then, the second phase runs a known heuristic using BP beliefs instead of the original weights to output a feasible solution. Namely, the first and second phases are respectively designed for ‘BP weight transforming’ and ‘post-processing’. Note that our evaluation mainly uses the maximum weight matching problem. The formal description of the maximum weight matching (MWM) problem is as follows: Given a graph � = (�,�) and edge weights � = [��] ∈ �|�|, it finds a set of edges such that each vertex is connected to at most one edge in the set and the sum of edge weights in the set is maximized. The problem is formulated as the following IP (Integer Programming): where δδ(�) is the set of edges incident to vertex � ∈ �. In the following paragraphs, we describe the two phases in more detail in reverse order. We first describe the post-processing phase. As we mentioned, one of the main issue of a BP-based algorithm is that the decision on BP beliefs might give an infeasible solution. To resolve the issue, we use post-processing by utilizing existing heuristics to the given problem that find a feasible solution. Applying post-processing ensures that the solution is at least feasible. In addition, our key idea is to replace the original weights by the logarithm of BP beliefs, i.e. function of (3). After th",
"title": ""
},
{
"docid": "neg:1840575_8",
"text": "Protein–protein interactions constitute the regulatory network that coordinates diverse cellular functions. Co-immunoprecipitation (co-IP) is a widely used and effective technique to study protein–protein interactions in living cells. However, the time and cost for the preparation of a highly specific antibody is the major disadvantage associated with this technique. In the present study, a co-IP system was developed to detect protein–protein interactions based on an improved protoplast transient expression system by using commercially available antibodies. This co-IP system eliminates the need for specific antibody preparation and transgenic plant production. Leaf sheaths of rice green seedlings were used for the protoplast transient expression system which demonstrated high transformation and co-transformation efficiencies of plasmids. The transient expression system developed by this study is suitable for subcellular localization and protein detection. This work provides a rapid, reliable, and cost-effective system to study transient gene expression, protein subcellular localization, and characterization of protein–protein interactions in vivo.",
"title": ""
},
{
"docid": "neg:1840575_9",
"text": "This paper addresses the topic of real-time decision making for autonomous city vehicles, i.e., the autonomous vehicles' ability to make appropriate driving decisions in city road traffic situations. The paper explains the overall controls system architecture, the decision making task decomposition, and focuses on how Multiple Criteria Decision Making (MCDM) is used in the process of selecting the most appropriate driving maneuver from the set of feasible ones. Experimental tests show that MCDM is suitable for this new application area.",
"title": ""
},
{
"docid": "neg:1840575_10",
"text": "The development of the Semantic Web, with machine-readable content, has the potential to revolutionize the World Wide Web and its use. In A Semantic Web Primer Grigoris Antoniou and Frank van Harmelen provide an introduction and guide to this emerging field, describing its key ideas, languages, and technologies. Suitable for use as a textbook or for self-study by professionals, the book concentrates on undergraduate-level fundamental concepts and techniques that will enable readers to proceed with building applications on their own and includes exercises, project descriptions, and annotated references to relevant online materials. A Semantic Web Primer is the only available book on the Semantic Web to include a systematic treatment of the different languages (XML, RDF, OWL, and rules) and technologies (explicit metadata, ontologies, and logic and inference) that are central to Semantic Web development. The book also examines such crucial related topics as ontology engineering and application scenarios. After an introductory chapter, topics covered in succeeding chapters include XML and related technologies that support semantic interoperability; RDF and RDF Schema, the standard data model for machine-processible semantics; and OWL, the W3C-approved standard for a Web ontology language that is more extensive than RDF Schema; rules, both monotonic and nonmonotonic, in the framework of the Semantic Web; selected application domains and how the Semantic Web would benefit them; the development of ontology-based systems; and current debates on key issues and predictions for the future.",
"title": ""
},
{
"docid": "neg:1840575_11",
"text": "While recommendation approaches exploiting different input sources have started to proliferate in the literature, an explicit study of the effect of the combination of heterogeneous inputs is still missing. On the other hand, in this context there are sides to recommendation quality requiring further characterisation and methodological research –a gap that is acknowledged in the field. We present a comparative study on the influence that different types of information available in social systems have on item recommendation. Aiming to identify which sources of user interest evidence –tags, social contacts, and user-item interaction data– are more effective to achieve useful recommendations, and in what aspect, we evaluate a number of content-based, collaborative filtering, and social recommenders on three datasets obtained from Delicious, Last.fm, and MovieLens. Aiming to determine whether and how combining such information sources may enhance over individual recommendation approaches, we extend the common accuracy-oriented evaluation practice with various metrics to measure further recommendation quality dimensions, namely coverage, diversity, novelty, overlap, and relative diversity between ranked item recommendations. We report empiric observations showing that exploiting tagging information by content-based recommenders provides high coverage and novelty, and combining social networking and collaborative filtering information by hybrid recommenders results in high accuracy and diversity. This, along with the fact that recommendation lists from the evaluated approaches had low overlap and relative diversity values between them, gives insights that meta-hybrid recommenders combining the above strategies may provide valuable, balanced item suggestions in terms of performance and non-performance metrics.",
"title": ""
},
{
"docid": "neg:1840575_12",
"text": "This paper describes the vision-based control of a small autonomous aircraft following a road. The computer vision system detects natural features of the scene and tracks the roadway in order to determine relative yaw and lateral displacement between the aircraft and the road. Using only the vision measurements and onboard inertial sensors, a control strategy stabilizes the aircraft and follows the road. The road detection and aircraft control strategies have been verified by hardware in the loop (HIL) simulations over long stretches (several kilometers) of straight roads and in conditions of up to 5 m/s of prevailing wind. Hardware experiments have also been conducted using a modified radio-controlled aircraft. Successful road following was demonstrated over an airfield runway under variable lighting and wind conditions. The development of vision-based control strategies for unmanned aerial vehicles (UAVs), such as the ones presented here, enables complex autonomous missions in environments where typical navigation sensor like GPS are unavailable.",
"title": ""
},
{
"docid": "neg:1840575_13",
"text": "Alistair S. Jump* and Josep Peñuelas Unitat d’Ecofisiologia CSICCEAB-CREAF, Centre de Recerca Ecològica i Aplicacions Forestals, Universitat Autònoma de Barcelona, E-08193, Bellaterra, Barcelona, Spain *Correspondence: E-mail: [email protected] Abstract Climate is a potent selective force in natural populations, yet the importance of adaptation in the response of plant species to past climate change has been questioned. As many species are unlikely to migrate fast enough to track the rapidly changing climate of the future, adaptation must play an increasingly important role in their response. In this paper we review recent work that has documented climate-related genetic diversity within populations or on the microgeographical scale. We then describe studies that have looked at the potential evolutionary responses of plant populations to future climate change. We argue that in fragmented landscapes, rapid climate change has the potential to overwhelm the capacity for adaptation in many plant populations and dramatically alter their genetic composition. The consequences are likely to include unpredictable changes in the presence and abundance of species within communities and a reduction in their ability to resist and recover from further environmental perturbations, such as pest and disease outbreaks and extreme climatic events. Overall, a range-wide increase in extinction risk is likely to result. We call for further research into understanding the causes and consequences of the maintenance and loss of climate-related genetic diversity within populations.",
"title": ""
},
{
"docid": "neg:1840575_14",
"text": "BACKGROUND\nHandling of upper lateral cartilages (ULCs) is of prime importance in rhinoplasty. This study presents the experiences among 2500 cases of rhinoplasty in the past 10 years for managing of ULCs to minimize unwilling results of the shape and functional problems of the nose.\n\n\nMETHODS\nAll cases of rhinoplasties were done by the same surgeon from 2002 to 2013. Management of ULCs changed from resection to preserving the ULCs and to enhance their structural and functional roles. The techniques were spreader grafts, suturing of ULC together at the level or above the septum, using ULCs as auto-spreader flaps and very rarely trimming of ULCs unilaterally or bilaterally for making symmetric dorsal aesthetic lines. Fifty cases were operated based on this classification. Most cases were in type II and III. There were 7 cases in type I and 8 cases in type IV.\n\n\nRESULTS\nAmong most cases, the results were satisfactory although there were 8 cases for revision and among them, 2 cases had some fullness on dorsum and supra-tip because of inappropriate judgment on keeping the relationship between dorsum and tip. The problems in the shape and airways role of the nose reduced dramatically and a useful algorithm was presented.\n\n\nCONCLUSION\nULCs have great important roles in shape and function of nose. Preserving methods to keep these structures are of importance in surgical treatments of primary rhinoplasties. The presented algorithm helps to manage the ULCs in different anatomic types of the noses especially for surgeons who are in learning curve period.",
"title": ""
},
{
"docid": "neg:1840575_15",
"text": "The problem of scheduling is concerned with searching for optimal (or near-optimal) schedules subject to a number of constraints. A variety of approaches have been developed to solve the problem of scheduling. However, many of these approaches are often impractical in dynamic real-world environments where there are complex constraints and a variety of unexpected disruptions. In most real-world environments, scheduling is an ongoing reactive process where the presence of real-time information continually forces reconsideration and revision of pre-established schedules. Scheduling research has largely ignored this problem, focusing instead on optimisation of static schedules. This paper outlines the limitations of static approaches to scheduling in the presence of real-time information and presents a number of issues that have come up in recent years on dynamic scheduling. The paper defines the problem of dynamic scheduling and provides a review of the state of the art of currently developing research on dynamic scheduling. The principles of several dynamic scheduling techniques, namely, dispatching rules, heuristics, meta-heuristics, artificial intelligence techniques, and multi-agent systems are described in detail, followed by a discussion and comparison of their potential.",
"title": ""
},
{
"docid": "neg:1840575_16",
"text": "BACKGROUND\nThe incidence of microcephaly in Brazil in 2015 was 20 times higher than in previous years. Congenital microcephaly is associated with genetic factors and several causative agents. Epidemiological data suggest that microcephaly cases in Brazil might be associated with the introduction of Zika virus. We aimed to detect and sequence the Zika virus genome in amniotic fluid samples of two pregnant women in Brazil whose fetuses were diagnosed with microcephaly.\n\n\nMETHODS\nIn this case study, amniotic fluid samples from two pregnant women from the state of Paraíba in Brazil whose fetuses had been diagnosed with microcephaly were obtained, on the recommendation of the Brazilian health authorities, by ultrasound-guided transabdominal amniocentesis at 28 weeks' gestation. The women had presented at 18 weeks' and 10 weeks' gestation, respectively, with clinical manifestations that could have been symptoms of Zika virus infection, including fever, myalgia, and rash. After the amniotic fluid samples were centrifuged, DNA and RNA were extracted from the purified virus particles before the viral genome was identified by quantitative reverse transcription PCR and viral metagenomic next-generation sequencing. Phylogenetic reconstruction and investigation of recombination events were done by comparing the Brazilian Zika virus genome with sequences from other Zika strains and from flaviviruses that occur in similar regions in Brazil.\n\n\nFINDINGS\nWe detected the Zika virus genome in the amniotic fluid of both pregnant women. The virus was not detected in their urine or serum. Tests for dengue virus, chikungunya virus, Toxoplasma gondii, rubella virus, cytomegalovirus, herpes simplex virus, HIV, Treponema pallidum, and parvovirus B19 were all negative. After sequencing of the complete genome of the Brazilian Zika virus isolated from patient 1, phylogenetic analyses showed that the virus shares 97-100% of its genomic identity with lineages isolated during an outbreak in French Polynesia in 2013, and that in both envelope and NS5 genomic regions, it clustered with sequences from North and South America, southeast Asia, and the Pacific. After assessing the possibility of recombination events between the Zika virus and other flaviviruses, we ruled out the hypothesis that the Brazilian Zika virus genome is a recombinant strain with other mosquito-borne flaviviruses.\n\n\nINTERPRETATION\nThese findings strengthen the putative association between Zika virus and cases of microcephaly in neonates in Brazil. Moreover, our results suggest that the virus can cross the placental barrier. As a result, Zika virus should be considered as a potential infectious agent for human fetuses. Pathogenesis studies that confirm the tropism of Zika virus for neuronal cells are warranted.\n\n\nFUNDING\nConsellho Nacional de Desenvolvimento e Pesquisa (CNPq), Fundação de Amparo a Pesquisa do Estado do Rio de Janeiro (FAPERJ).",
"title": ""
},
{
"docid": "neg:1840575_17",
"text": "In areas approaching malaria elimination, human mobility patterns are important in determining the proportion of malaria cases that are imported or the result of low-level, endemic transmission. A convenience sample of participants enrolled in a longitudinal cohort study in the catchment area of Macha Hospital in Choma District, Southern Province, Zambia, was selected to carry a GPS data logger for one month from October 2013 to August 2014. Density maps and activity space plots were created to evaluate seasonal movement patterns. Time spent outside the household compound during anopheline biting times, and time spent in malaria high- and low-risk areas, were calculated. There was evidence of seasonal movement patterns, with increased long-distance movement during the dry season. A median of 10.6% (interquartile range (IQR): 5.8-23.8) of time was spent away from the household, which decreased during anopheline biting times to 5.6% (IQR: 1.7-14.9). The per cent of time spent in malaria high-risk areas for participants residing in high-risk areas ranged from 83.2% to 100%, but ranged from only 0.0% to 36.7% for participants residing in low-risk areas. Interventions targeted at the household may be more effective because of restricted movement during the rainy season, with limited movement between high- and low-risk areas.",
"title": ""
},
{
"docid": "neg:1840575_18",
"text": "-Vehicular ad hoc networks (VANETs) are wireless networks that do not require any fixed infrastructure. Regarding traffic safety applications for VANETs, warning messages have to be quickly and smartly disseminated in order to reduce the required dissemination time and to increase the number of vehicles receiving the traffic warning information. Adaptive techniques for VANETs usually consider features related to the vehicles in the scenario, such as their density, speed, and position, to adapt the performance of the dissemination process. These approaches are not useful when trying to warn the highest number of vehicles about dangerous situations in realistic vehicular environments. The Profile-driven Adaptive Warning Dissemination Scheme (PAWDS) designed to improve the warning message dissemination process. PAWDS system that dynamically modifies some of the key parameters of the propagation process and it cannot detect the vehicles which are in the dangerous position. Proposed system identifies the vehicles which are in the dangerous position and to send warning messages immediately. The vehicles must make use of all the available information efficiently to predict the position of nearby vehicles. Keywords— PAWDS, VANET, Ad hoc network , OBU , RSU, GPS.",
"title": ""
},
{
"docid": "neg:1840575_19",
"text": "We propose a hierarchical model for sequential data that learns a tree on-thefly, i.e. while reading the sequence. In the model, a recurrent network adapts its structure and reuses recurrent weights in a recursive manner. This creates adaptive skip-connections that ease the learning of long-term dependencies. The tree structure can either be inferred without supervision through reinforcement learning, or learned in a supervised manner. We provide preliminary experiments in a novel Math Expression Evaluation (MEE) task, which is explicitly crafted to have a hierarchical tree structure that can be used to study the effectiveness of our model. Additionally, we test our model in a wellknown propositional logic and language modelling tasks. Experimental results show the potential of our approach.",
"title": ""
}
] |
1840576 | How to Fit when No One Size Fits | [
{
"docid": "pos:1840576_0",
"text": "Systems for processing continuous monitoring queries over data streams must be adaptive because data streams are often bursty and data characteristics may vary over time. We focus on one particular type of adaptivity: the ability to gracefully degrade performance via \"load shedding\" (dropping unprocessed tuples to reduce system load) when the demands placed on the system cannot be met in full given available resources. Focusing on aggregation queries, we present algorithms that determine at what points in a query plan should load shedding be performed and what amount of load should be shed at each point in order to minimize the degree of inaccuracy introduced into query answers. We report the results of experiments that validate our analytical conclusions.",
"title": ""
}
] | [
{
"docid": "neg:1840576_0",
"text": "Scrotal calcinosis is a rarely seen benign disease in urological practice. It was first described by Lewinsky in 1883. The etiology is considered to be idiopathic and it is not known exactly. Scrotal calcinosis is usually asymptomatic. Patients live with their disease for a long time until they start to mind their appearances. Scrotal skin lesions can be solitary or multiple and usually are not associated with hormonal or metabolic abnormalities. Histologically, scrotal calcinosis is characterized by the presence of calcium deposits in the dermis, often surrounded by a granulomatous reaction. In this case report, we present a rare scrotal calcinosis case of a 28-year-old man who presented with cosmetic symptoms causing scrotal nodules with no history of metabolic, systemic, neoplastic, or autoimmune diseases.",
"title": ""
},
{
"docid": "neg:1840576_1",
"text": "This letter presents a novel technique for synthesis of coupled-resonator filters with inter-resonator couplings varying linearly with frequency. The values of non-zero elements of the coupling matrix are found by solving a nonlinear least squares problem involving eigenvalues of matrix pencils derived from the coupling matrix and reference zeros and poles of scattering parameters. The proposed method was verified by numerical tests carried out for various coupling schemes including triplets and quadruplets for which the frequency-dependent coupling was found to produce an extra zero.",
"title": ""
},
{
"docid": "neg:1840576_2",
"text": "Due to the demand for depth maps of higher quality than possible with a single depth imaging technique today, there has been an increasing interest in the combination of different depth sensors to produce a “super-camera” that is more than the sum of the individual parts. In this survey paper, we give an overview over methods for the fusion of Time-ofFlight (ToF) and passive stereo data as well as applications of the resulting high quality depth maps. Additionally, we provide a tutorial-based introduction to the principles behind ToF stereo fusion and the evaluation criteria used to benchmark these methods.",
"title": ""
},
{
"docid": "neg:1840576_3",
"text": "Automatic segmentation of an organ and its cystic region is a prerequisite of computer-aided diagnosis. In this paper, we focus on pancreatic cyst segmentation in abdominal CT scan. This task is important and very useful in clinical practice yet challenging due to the low contrast in boundary, the variability in location, shape and the different stages of the pancreatic cancer. Inspired by the high relevance between the location of a pancreas and its cystic region, we introduce extra deep supervision into the segmentation network, so that cyst segmentation can be improved with the help of relatively easier pancreas segmentation. Under a reasonable transformation function, our approach can be factorized into two stages, and each stage can be efficiently optimized via gradient back-propagation throughout the deep networks. We collect a new dataset with 131 pathological samples, which, to the best of our knowledge, is the largest set for pancreatic cyst segmentation. Without human assistance, our approach reports a 63.44% average accuracy, measured by the Dice-Sørensen coefficient (DSC), which is higher than the number (60.46%) without deep supervision.",
"title": ""
},
{
"docid": "neg:1840576_4",
"text": "Issues concerning agriculture, countryside and farmers have been always hindering China’s development. The only solution to these three problems is agricultural modernization. However, China's agriculture is far from modernized. The introduction of cloud computing and internet of things into agricultural modernization will probably solve the problem. Based on major features of cloud computing and key techniques of internet of things, cloud computing, visualization and SOA technologies can build massive data involved in agricultural production. Internet of things and RFID technologies can help build plant factory and realize automatic control production of agriculture. Cloud computing is closely related to internet of things. A perfect combination of them can promote fast development of agricultural modernization, realize smart agriculture and effectively solve the issues concerning agriculture, countryside and farmers.",
"title": ""
},
{
"docid": "neg:1840576_5",
"text": "We present a novel way for designing complex joint inference and learning models using Saul (Kordjamshidi et al., 2015), a recently-introduced declarative learning-based programming language (DeLBP). We enrich Saul with components that are necessary for a broad range of learning based Natural Language Processing tasks at various levels of granularity. We illustrate these advances using three different, well-known NLP problems, and show how these generic learning and inference modules can directly exploit Saul’s graph-based data representation. These properties allow the programmer to easily switch between different model formulations and configurations, and consider various kinds of dependencies and correlations among variables of interest with minimal programming effort. We argue that Saul provides an extremely useful paradigm both for the design of advanced NLP systems and for supporting advanced research in NLP.",
"title": ""
},
{
"docid": "neg:1840576_6",
"text": "New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only nice to have but is in fact a necessary tool for designing embodied agents.",
"title": ""
},
{
"docid": "neg:1840576_7",
"text": "OBJECTIVE\nTo define a map of interradicular spaces where miniscrew can be likely placed at a level covered by attached gingiva, and to assess if a correlation between crowding and availability of space exists.\n\n\nMETHODS\nPanoramic radiographs and digital models of 40 patients were selected according to the inclusion criteria. Interradicular spaces were measured on panoramic radiographs, while tooth size-arch length discrepancy was assessed on digital models. Statistical analysis was performed to evaluate if interradicular spaces are influenced by the presence of crowding.\n\n\nRESULTS\nIn the mandible, the most convenient sites for miniscrew insertion were in the spaces comprised between second molars and first premolars; in the maxilla, between first molars and second premolars as well as between canines and lateral incisors and between the two central incisors. The interradicular spaces between the maxillary canines and lateral incisors, and between mandibular first and second premolars revealed to be influenced by the presence of dental crowding.\n\n\nCONCLUSIONS\nThe average interradicular sites map hereby proposed can be used as a general guide for miniscrew insertion at the very beginning of orthodontic treatment planning. Then, the clinician should consider the amount of crowding: if this is large, the actual interradicular space in some areas might be significantly different from what reported on average. Individualized radiographs for every patient are still recommended.",
"title": ""
},
{
"docid": "neg:1840576_8",
"text": "Since adoption of the 2011 National Electrical Code®, many photovoltaic (PV) direct current (DC) arc-fault circuit interrupters (AFCIs) and arc-fault detectors (AFDs) have been introduced into the PV market. To meet the Code requirements, these products must be listed to Underwriters Laboratories (UL) 1699B Outline of Investigation. The UL 1699B test sequence was designed to ensure basic arc-fault detection capabilities with resistance to unwanted tripping; however, field experiences with AFCI/AFD devices have shown mixed results. In this investigation, independent laboratory tests were performed with UL-listed, UL-recognized, and prototype AFCI/AFDs to reveal any limitations with state-of-the-art arc-fault detection products. By running AFCIs and stand-alone AFDs through realistic tests beyond the UL 1699B requirements, many products were found to be sensitive to unwanted tripping or were ineffective at detecting harmful arc-fault events. Based on these findings, additional experiments are encouraged for inclusion in the AFCI/AFD design process and the certification standard to improve products entering the market.",
"title": ""
},
{
"docid": "neg:1840576_9",
"text": "s on human factors in computing systems (pp. 815–828). ACM New York, NY, USA. Hudlicka, E. (1997). Summary of knowledge elicitation techniques for requirements analysis (Course material for human computer interaction). Worcester Polytechnic Institute. Kaptelinin, V., & Nardi, B. (2012). Affordances in HCI: Toward a mediated action perspective. In Proceedings of CHI '12 (pp. 967–976).",
"title": ""
},
{
"docid": "neg:1840576_10",
"text": "Tumour-associated viruses produce antigens that, on the face of it, are ideal targets for immunotherapy. Unfortunately, these viruses are experts at avoiding or subverting the host immune response. Cervical-cancer-associated human papillomavirus (HPV) has a battery of immune-evasion mechanisms at its disposal that could confound attempts at HPV-directed immunotherapy. Other virally associated human cancers might prove similarly refractive to immuno-intervention unless we learn how to circumvent their strategies for immune evasion.",
"title": ""
},
{
"docid": "neg:1840576_11",
"text": "Infection, as a common postoperative complication of orthopedic surgery, is the main reason leading to implant failure. Silver nanoparticles (AgNPs) are considered as a promising antibacterial agent and always used to modify orthopedic implants to prevent infection. To optimize the implants in a reasonable manner, it is critical for us to know the specific antibacterial mechanism, which is still unclear. In this review, we analyzed the potential antibacterial mechanisms of AgNPs, and the influences of AgNPs on osteogenic-related cells, including cellular adhesion, proliferation, and differentiation, were also discussed. In addition, methods to enhance biocompatibility of AgNPs as well as advanced implants modifications technologies were also summarized.",
"title": ""
},
{
"docid": "neg:1840576_12",
"text": "Boosting takes on various forms with different programs using different loss functions, different base models, and different optimization schemes. The gbm package takes the approach described in [3] and [4]. Some of the terminology differs, mostly due to an effort to cast boosting terms into more standard statistical terminology (e.g. deviance). In addition, the gbm package implements boosting for models commonly used in statistics but not commonly associated with boosting. The Cox proportional hazard model, for example, is an incredibly useful model and the boosting framework applies quite readily with only slight modification [7]. Also some algorithms implemented in the gbm package differ from the standard implementation. The AdaBoost algorithm [2] has a particular loss function and a particular optimization algorithm associated with it. The gbm implementation of AdaBoost adopts AdaBoost’s exponential loss function (its bound on misclassification rate) but uses Friedman’s gradient descent algorithm rather than the original one proposed. So the main purposes of this document is to spell out in detail what the gbm package implements.",
"title": ""
},
{
"docid": "neg:1840576_13",
"text": "Spiking neural network (SNN) models describe key aspects of neural function in a computationally efficient manner and have been used to construct large-scale brain models. Large-scale SNNs are challenging to implement, as they demand high-bandwidth communication, a large amount of memory, and are computationally intensive. Additionally, tuning parameters of these models becomes more difficult and time-consuming with the addition of biologically accurate descriptions. To meet these challenges, we have developed CARLsim 3, a user-friendly, GPU-accelerated SNN library written in C/C++ that is capable of simulating biologically detailed neural models. The present release of CARLsim provides a number of improvements over our prior SNN library to allow the user to easily analyze simulation data, explore synaptic plasticity rules, and automate parameter tuning. In the present paper, we provide examples and performance benchmarks highlighting the library's features.",
"title": ""
},
{
"docid": "neg:1840576_14",
"text": "We in this paper solve the problem of high-quality automatic real-time background cut for 720p portrait videos. We first handle the background ambiguity issue in semantic segmentation by proposing a global background attenuation model. A spatial-temporal refinement network is developed to further refine the segmentation errors in each frame and ensure temporal coherence in the segmentation map. We form an end-to-end network for training and testing. Each module is designed considering efficiency and accuracy. We build a portrait dataset, which includes 8,000 images with high-quality labeled map for training and testing. To further improve the performance, we build a portrait video dataset with 50 sequences to fine-tune video segmentation. Our framework benefits many video processing applications.",
"title": ""
},
{
"docid": "neg:1840576_15",
"text": "Information security policy compliance (ISP) is one of the key concerns that face organizations today. Although technical and procedural measures help improve information security, there is an increased need to accommodate human, social and organizational factors. Despite the plethora of studies that attempt to identify the factors that motivate compliance behavior or discourage abuse and misuse behaviors, there is a lack of studies that investigate the role of ethical ideology per se in explaining compliance behavior. The purpose of this research is to investigate the role of ethics in explaining Information Security Policy (ISP) compliance. In that regard, a model that integrates behavioral and ethical theoretical perspectives is developed and tested. Overall, analyses indicate strong support for the validation of the proposed theoretical model.",
"title": ""
},
{
"docid": "neg:1840576_16",
"text": "Attention is typically used to select informative sub-phrases that are used for prediction. This paper investigates the novel use of attention as a form of feature augmentation, i.e, casted attention. We propose Multi-Cast Attention Networks (MCAN), a new attention mechanism and general model architecture for a potpourri of ranking tasks in the conversational modeling and question answering domains. Our approach performs a series of soft attention operations, each time casting a scalar feature upon the inner word embeddings. The key idea is to provide a real-valued hint (feature) to a subsequent encoder layer and is targeted at improving the representation learning process. There are several advantages to this design, e.g., it allows an arbitrary number of attention mechanisms to be casted, allowing for multiple attention types (e.g., co-attention, intra-attention) and attention variants (e.g., alignment-pooling, max-pooling, mean-pooling) to be executed simultaneously. This not only eliminates the costly need to tune the nature of the co-attention layer, but also provides greater extents of explainability to practitioners. Via extensive experiments on four well-known benchmark datasets, we show that MCAN achieves state-of-the-art performance. On the Ubuntu Dialogue Corpus, MCAN outperforms existing state-of-the-art models by 9%. MCAN also achieves the best performing score to date on the well-studied TrecQA dataset.",
"title": ""
},
{
"docid": "neg:1840576_17",
"text": "Distributed Denial of Service (DDoS) is defined as an attack in which mutiple compromised systems are made to attack a single target to make the services unavailable foe legitimate users.It is an attack designed to render a computer or network incapable of providing normal services. DDoS attack uses many compromised intermediate systems, known as botnets which are remotely controlled by an attacker to launch these attacks. DDOS attack basically results in the situation where an entity cannot perform an action for which it is authenticated. This usually means that a legitimate node on the network is unable to reach another node or their performance is degraded. The high interruption and severance caused by DDoS is really posing an immense threat to entire internet world today. Any compromiseto computing, communication and server resources such as sockets, CPU, memory, disk/database bandwidth, I/O bandwidth, router processing etc. for collaborative environment would surely endanger the entire application. It becomes necessary for researchers and developers to understand behaviour of DDoSattack because it affects the target network with little or no advance warning. Hence developing advanced intrusion detection and prevention systems for preventing, detecting, and responding to DDOS attack is a critical need for cyber space. Our rigorous survey study presented in this paper describes a platform for the study of evolution of DDoS attacks and their defense mechanisms.",
"title": ""
},
{
"docid": "neg:1840576_18",
"text": "Recent studies have explored a promising method to measure driver workload—the Peripheral Detection Task (PDT). The PDT has been suggested as a standard method to assess safety-relevant workload from the use of in-vehicle information systems (IVIS) while driving. This paper reports the German part of a Swedish-German cooperative study in which the PDT was investigated focusing on its specific sensitivity compared with alternative workload measures. Forty-nine professional drivers performed the PDT while following route guidance system instructions on an inner-city route. The route consisted of both highly demanding and less demanding sections. Two route guidance systems that differed mainly in display size and display organization were compared. Subjective workload ratings (NASA-TLX) as well as physiological measures (heart rate and heart rate variability) were collected as reference data. The PDT showed sensitivity to route demands. Despite their differing displays, both route guidance systems affected PDT performance similarly in intervals of several minutes. However, the PDT proved sensitive to peaks in workload from IVIS use and from the driving task. Peaks in workload were studied by video analyses of four selected subsections on the route. Subjective workload ratings reflected overall route demands and also did not indicate differing effects of the two displays. The physiological measures were less sensitive to workload and indicated emotional strain as well. An assessment of the PDT as a method for the measurement of safety-related workload is given. 2005 Elsevier Ltd. All rights reserved. 1369-8478/$ see front matter 2005 Elsevier Ltd. All rights reserved. doi:10.1016/j.trf.2005.04.009 * Corresponding author. Address: University of Freiburg, Center for Cognitive Science, Institute of Computer Science and Social Research, Friedrichstrasse 50, D-79098 Freiburg, Germany. Tel.: +49 761 203 4966; fax: +49 761 203 4938. E-mail address: [email protected] (G. Jahn). 256 G. Jahn et al. / Transportation Research Part F 8 (2005) 255–275",
"title": ""
},
{
"docid": "neg:1840576_19",
"text": "Accurate prediction of the postmortem interval requires an understanding of the decomposition process and the factors acting upon it. A controlled experiment, over 60 days at an outdoor site in the northwest of England, used 20 freshly killed pigs (Sus scrofa) as human analogues to study decomposition rate and pattern. Ten pigs were hung off the ground and ten placed on the surface. Observed differences in the decomposition pattern required a new decomposition scoring scale to be produced for the hanging pigs to enable comparisons with the surface pigs. The difference in the rate of decomposition between hanging and surface pigs was statistically significant (p=0.001). Hanging pigs reached advanced decomposition stages sooner, but lagged behind during the early stages. This delay is believed to result from lower variety and quantity of insects, due to restricted beetle access to the aerial carcass, and/or writhing maggots falling from the carcass.",
"title": ""
}
] |
1840577 | A Collapsed Variational Bayesian Inference Algorithm for Latent Dirichlet Allocation | [
{
"docid": "pos:1840577_0",
"text": "We introduce a new class of maximization-expectation (ME) algorithms where we maximize over hidden variables but marginalize over random parameters. This reverses the roles of expectation and maximization in the classical expectation-maximization algorithm. In the context of clustering, we argue that these hard assignments open the door to very fast implementations based on data structures such as kd-trees and conga lines. The marginalization over parameters ensures that we retain the ability to infer model structure (i.e., number of clusters). As an important example, we discuss a top-down Bayesian k-means algorithm and a bottom-up agglomerative clustering algorithm. In experiments, we compare these algorithms against a number of alternative algorithms that have recently appeared in the literature.",
"title": ""
}
] | [
{
"docid": "neg:1840577_0",
"text": "Northrop Grumman is developing an atom-based magnetometer technology that has the potential for providing a global position reference independent of GPS. The NAV-CAM sensor is a direct outgrowth of the Nuclear Magnetic Resonance Gyro under development by the same technical team. It is capable of providing simultaneous measurements of all 3 orthogonal axes of magnetic vector field components using a single compact vapor cell. The vector sum determination of the whole-field scalar measurement achieves similar precision to the individual vector components. By using a single sensitive element (vapor cell) this approach eliminates many of the problems encountered when using physically separate sensors or sensing elements.",
"title": ""
},
{
"docid": "neg:1840577_1",
"text": "Skills learned through (deep) reinforcement learning often generalizes poorly across domains and re-training is necessary when presented with a new task. We present a framework that combines techniques in formal methods with reinforcement learning (RL). The methods we provide allows for convenient specification of tasks with logical expressions, learns hierarchical policies (meta-controller and low-level controllers) with well-defined intrinsic rewards, and construct new skills from existing ones with little to no additional exploration. We evaluate the proposed methods in a simple grid world simulation as well as a more complicated kitchen environment in AI2Thor (Kolve et al. [2017]).",
"title": ""
},
{
"docid": "neg:1840577_2",
"text": "The objective of the study was to examine the correlations between intracranial aneurysm morphology and wall shear stress (WSS) to identify reliable predictors of rupture risk. Seventy-two intracranial aneurysms (41 ruptured and 31 unruptured) from 63 patients were studied retrospectively. All aneurysms were divided into two categories: narrow (aspect ratio ≥1.4) and wide-necked (aspect ratio <1.4 or neck width ≥4 mm). Computational fluid dynamics was used to determine the distribution of WSS, which was analyzed between different morphological groups and between ruptured and unruptured aneurysms. Sections of the walls of clipped aneurysms were stained with hematoxylin–eosin, observed under a microscope, and photographed. Ruptured aneurysms were statistically more likely to have a greater low WSS area ratio (LSAR) (P = 0.001) and higher aneurysms parent WSS ratio (P = 0.026) than unruptured aneurysms. Narrow-necked aneurysms were statistically more likely to have a larger LSAR (P < 0.001) and lower values of MWSS (P < 0.001), mean aneurysm-parent WSS ratio (P < 0.001), HWSS (P = 0.012), and the highest aneurysm-parent WSS ratio (P < 0.001) than wide-necked aneurysms. The aneurysm wall showed two different pathological changes associated with high or low WSS in wide-necked aneurysms. Aneurysm morphology could affect the distribution and magnitude of WSS on the basis of differences in blood flow. Both high and low WSS could contribute to focal wall damage and rupture through different mechanisms associated with each morphological type.",
"title": ""
},
{
"docid": "neg:1840577_3",
"text": "An important challenge for human-like AI is compositional semantics. Recent research has attempted to address this by using deep neural networks to learn vector space embeddings of sentences, which then serve as input to other tasks. We present a new dataset for one such task, “natural language inference” (NLI), that cannot be solved using only word-level knowledge and requires some compositionality. We find that the performance of state of the art sentence embeddings (InferSent; Conneau et al., 2017) on our new dataset is poor. We analyze the decision rules learned by InferSent and find that they are consistent with simple heuristics that are ecologically valid in its training dataset. Further, we find that augmenting training with our dataset improves test performance on our dataset without loss of performance on the original training dataset. This highlights the importance of structured datasets in better understanding and improving AI systems.",
"title": ""
},
{
"docid": "neg:1840577_4",
"text": "Teachers are increasingly required to incorporate information and communications technologies (ICT) into the modern classroom. The implementation of ICT into the classroom should not be seen as merely an add-on, however, but should be included with purpose; meaningfully implemented based on pedagogy. The aim of this study is to explore potential factors that might predict purposeful implementation of ICT into the classroom. Using an online survey, skills in and beliefs about ICT were assessed, as well as the teaching and learning beliefs of forty-five K-12 teachers. Hierarchical multiple regression revealed that competence using ICT and a belief in the importance of ICT for student outcomes positively predicted purposeful implementation of ICT into the classroom, while endorsing more traditional content-based learning was a negative predictor. These three predictors explained 47% of the variance in purposeful implementation of ICT into the classroom. ICT competence was unpacked further with correlations. This revealed that there is a relationship between teachers having ICT skills that can personalize, engage, and create an interactive atmosphere for students and purposeful implementation of ICT into the classroom. Based on these findings, suggestions are made of important focal areas for encouraging teachers to purposefully implement ICT into their classrooms.",
"title": ""
},
{
"docid": "neg:1840577_5",
"text": "For many networked games, such as the Defense of the Ancients and StarCraft series, the unofficial leagues created by players themselves greatly enhance user-experience, and extend the success of each game. Understanding the social structure that players of these game s implicitly form helps to create innovative gaming services to the benefit of both players and game operators. But how to extract and analyse the implicit social structure? We address this question by first proposing a formalism consisting of various ways to map interaction to social structure, and apply this to real-world data collected from three different game genres. We analyse the implications of these mappings for in-game and gaming-related services, ranging from network and socially-aware matchmaking of players, to an investigation of social network robustnes against player departure.",
"title": ""
},
{
"docid": "neg:1840577_6",
"text": "Cryptocurrency wallets store the wallets private key(s), and hence, are a lucrative target for attackers. With possession of the private key, an attacker virtually owns all of the currency in the compromised wallet. Managing cryptocurrency wallets offline, in isolated (’air-gapped’) computers, has been suggested in order to secure the private keys from theft. Such air-gapped wallets are often referred to as ’cold wallets.’ In this paper we show how private keys can be exfiltrated from air-gapped wallets. In the adversarial attack model, the attacker infiltrates the offline wallet, infecting it with malicious code. The malware can be preinstalled or pushed in during the initial installation of the wallet, or it can infect the system when removable media (e.g., USB flash drive) is inserted into the wallet’s computer in order to sign a transaction. These attack vectors have repeatedly been proven feasible in the last decade (e.g., [1],[2],[3],[4],[5],[6],[7],[8],[9],[10]). Having obtained a foothold in the wallet, an attacker can utilize various air-gap covert channel techniques (bridgeware [11]) to jump the airgap and exfiltrate the wallets private keys. We evaluate various exfiltration techniques, including physical, electromagnetic, electric, magnetic, acoustic, optical, and thermal techniques. This research shows that although cold wallets provide a high degree of isolation, its not beyond the capability of motivated attackers to compromise such wallets and steal private keys from them. We demonstrate how a 256-bit private key (e.g., bitcoin’s private keys) can be exfiltrated from an offline, air-gapped wallet of a fictional character named Satoshi within a matter of seconds.",
"title": ""
},
{
"docid": "neg:1840577_7",
"text": "Previous research to investigate the interaction between malaria infection and tumor progression has revealed that malaria infection can potentiate host immune response against tumor in tumor-bearing mice. Exosomes may play key roles in disseminating pathogenic host-derived molecules during infection because several studies have shown the involvement and roles of extracellular vesicles in cell–cell communication. However, the role of exosomes generated during Plasmodium infection in tumor growth, progression and angiogenesis has not been studied either in animals or in the clinics. To test this hypothesis, we designed an animal model to generate and isolate exosomes from mice which were subsequently used to treat the tumor. Intra-tumor injection of exosomes derived from the plasma of Plasmodium-infected mice provided significantly reduced Lewis lung cancer growth in mice. We further co-cultured the isolated exosomes with endothelial cells and observed significantly reduced expression of VEGFR2 and migration in the endothelial cells. Interestingly, high level of micro-RNA (miRNA) 16/322/497/17 was detected in the exosomes derived from the plasma of mice infected with Plasmodium compared with those from control mice. We observed that overexpression of the miRNA 16/322/497/17 in endothelial cell corresponded with decreased expression of VEGFR2, inhibition of angiogenesis and inhibition of the miRNA 16/322/497/17 significantly alleviated these effects. These data provide novel scientific evidence of the interaction between Plasmodium infection and lung cancer growth and angiogenesis.",
"title": ""
},
{
"docid": "neg:1840577_8",
"text": "Edge detection plays a significant role in image processing and performance of high-level tasks such as image segmentation and object recognition depends on its efficiency. It is clear that accurate edge map generation is more difficult when images are corrupted with noise. Moreover, most of edge detection methods have parameters which must be set manually. Here we propose a new color edge detector based on a statistical test, which is robust to noise. Also, the parameters of this method will be set automatically based on image content. To show the effectiveness of the proposed method, four state-of-the-art edge detectors are implemented and the results are compared. Experimental results on five of the most well-known edge detection benchmarks show that the proposed method is robust to noise. The performance of our method for lower levels of noise is very comparable to the existing approaches, whose performances highly depend on their parameter tuning stage. However, for higher levels of noise, the observed results significantly highlight the superiority of the proposed method over the existing edge detection methods, both quantitatively and qualitatively.",
"title": ""
},
{
"docid": "neg:1840577_9",
"text": "In response to the increasing volume of trajectory data obtained, e.g., from tracking athletes, animals, or meteorological phenomena, we present a new space-efficient algorithm for the analysis of trajectory data. The algorithm combines techniques from computational geometry, data mining, and string processing and offers a modular design that allows for a user-guided exploration of trajectory data incorporating domain-specific constraints and objectives.",
"title": ""
},
{
"docid": "neg:1840577_10",
"text": "In this paper, we revisit the classical Bayesian face recognition method by Baback Moghaddam et al. and propose a new joint formulation. The classical Bayesian method models the appearance difference between two faces. We observe that this “difference” formulation may reduce the separability between classes. Instead, we model two faces jointly with an appropriate prior on the face representation. Our joint formulation leads to an EM-like model learning at the training time and an efficient, closed-formed computation at the test time. On extensive experimental evaluations, our method is superior to the classical Bayesian face and many other supervised approaches. Our method achieved 92.4% test accuracy on the challenging Labeled Face in Wild (LFW) dataset. Comparing with current best commercial system, we reduced the error rate by 10%.",
"title": ""
},
{
"docid": "neg:1840577_11",
"text": "The rapid growth of e-commerce has provided both an opportunity to create new values in the online marketplace and dramatic competition to survive. To survive in a competitive environment, Internet shopping malls attempt to adopt and use Customer Relationship Management. However, previous researches focused on navigation patterns of customers with membership. Therefore, they failed to apply real time web marketing to anonymous customers who navigate web pages without personal login. To overcome the problems noted above, we propose a methodology for predicting the purchase probability of anonymous customers to support real time web marketing. The proposed methodology is composed of two phases: (1) extracting purchase patterns and (2) predicting purchase probability. Purchase pattern provides marketing implications to web marketers while the purchase probability provides an opportunity for real time web marketing by predicting the purchase probability of an anonymous customer. The proposed methodology can be applied to the real time web marketing such as navigation shortcuts, product recommendations and better customer inducement since anonymous customers are included in marketing target and significant navigation pattern for purchase is identified. q 2004 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840577_12",
"text": "The establishment of policy is key to the implementation of actions for health. We review the nature of policy and the definition and directions of health policy. In doing so, we explicitly cast a health political science gaze on setting parameters for researching policy change for health. A brief overview of core theories of the policy process for health promotion is presented, and illustrated with empirical evidence. The key arguments are that (a) policy is not an intervention, but drives intervention development and implementation; (b) understanding policy processes and their pertinent theories is pivotal for the potential to influence policy change; (c) those theories and associated empirical work need to recognise the wicked, multi-level, and incremental nature of elements in the process; and, therefore, (d) the public health, health promotion, and education research toolbox should more explicitly embrace health political science insights. The rigorous application of insights from and theories of the policy process will enhance our understanding of not just how, but also why health policy is structured and implemented the way it is.",
"title": ""
},
{
"docid": "neg:1840577_13",
"text": "In this paper, we review some advances made recently in the study of mobile phone datasets. This area of research has emerged a decade ago, with the increasing availability of large-scale anonymized datasets, and has grown into a stand-alone topic. We survey the contributions made so far on the social networks that can be constructed with such data, the study of personal mobility, geographical partitioning, urban planning, and help towards development as well as security and privacy issues.",
"title": ""
},
{
"docid": "neg:1840577_14",
"text": "Verifiability is one of the core editing principles in Wikipedia, editors being encouraged to provide citations for the added content. For a Wikipedia article, determining the citation span of a citation, i.e. what content is covered by a citation, is important as it helps decide for which content citations are still missing. We are the first to address the problem of determining the citation span in Wikipedia articles. We approach this problem by classifying which textual fragments in an article are covered by a citation. We propose a sequence classification approach where for a paragraph and a citation, we determine the citation span at a finegrained level. We provide a thorough experimental evaluation and compare our approach against baselines adopted from the scientific domain, where we show improvement for all evaluation metrics.",
"title": ""
},
{
"docid": "neg:1840577_15",
"text": "We have previously proposed a statistical method for estimating the pronunciation proficiency and intelligibility of presentations made in English by non-native speakers. To investigate the relationship between various acoustic measures and the pronunciation score and intelligibility, we statistically analyzed the speaker’s actual utterances to find combinations of acoustic features with a high correlation between the score estimated by a linear regression model and the score perceived by native English teachers. In this paper, we examined the quality of new acoustic features that are useful when used in combination with the system’s estimates of pronunciation score and intelligibility. Results showed that the best combination of acoustic features produced correlation coefficients of 0.929 and 0.753 for pronunciation and intelligibility, respectively, using open data for speakers at the 10-sentence level.",
"title": ""
},
{
"docid": "neg:1840577_16",
"text": "While advances in computing resources have made processing enormous amounts of data possible, human ability to identify patterns in such data has not scaled accordingly. Efficient computational methods for condensing and simplifying data are thus becoming vital for extracting actionable insights. In particular, while data summarization techniques have been studied extensively, only recently has summarizing interconnected data, or graphs, become popular. This survey is a structured, comprehensive overview of the state-of-the-art methods for summarizing graph data. We first broach the motivation behind and the challenges of graph summarization. We then categorize summarization approaches by the type of graphs taken as input and further organize each category by core methodology. Finally, we discuss applications of summarization on real-world graphs and conclude by describing some open problems in the field.",
"title": ""
},
{
"docid": "neg:1840577_17",
"text": "The calibration parameters of a mobile robot play a substantial role in navigation tasks. Often these parameters are subject to variations that depend either on environmental changes or on the wear of the devices. In this paper, we propose an approach to simultaneously estimate a map of the environment, the position of the on-board sensors of the robot, and its kinematic parameters. Our method requires no prior knowledge about the environment and relies only on a rough initial guess of the platform parameters. The proposed approach performs on-line estimation of the parameters and it is able to adapt to non-stationary changes of the configuration. We tested our approach in simulated environments and on a wide range of real world data using different types of robotic platforms.",
"title": ""
},
{
"docid": "neg:1840577_18",
"text": "In order to establish low-cost and strongly-immersive desktop virtual experiment system, a solution based on Kinect and Unity3D engine technology was herein proposed, with a view to applying Kinect gesture recognition and triggering more spontaneous human-computer interactions in three-dimensional virtual environment. A kind of algorithm tailored to the detection of concave-convex points of fingers is put forward to identify various gestures and interaction semantics. In the context of Unity3D, Finite-State Machine (FSM) programming was applied in intelligent management for experimental logic tasks. A “Virtual Experiment System for Electrician Training” was designed and put into practice by these methods. The applications of “Lighting Circuit” module prove that these methods can be satisfyingly helpful to complete virtual experimental tasks and improve user experience. Compared with traditional WIMP interaction, Kinect somatosensory interaction is combined with Unity3D so that three-dimensional virtual system with strong immersion can be established.",
"title": ""
},
{
"docid": "neg:1840577_19",
"text": "The electric motor is the main component in an electrical vehicle. Its power density is directly influenced by the winding. For this reason, it is relevant to investigate the influences of coil production on the quality of the stator. The examined stator in this article is wound with the multi-wire needle winding technique. With this method, the placing of the wires can be precisely guided leading to small winding heads. To gain a high winding quality with small winding resistances, the control of the tensile force during the winding process is essential. The influence of the tensile force on the winding resistance during the winding process with the multiple needle winding technique will be presented here. To control the tensile force during the winding process, the stress on the wire during the winding process needs to be examined first. Thus a model will be presented to investigate the tensile force which realizes a coupling between the multibody dynamics simulation and the finite element methods with the software COMSOL Multiphysics®. With the results of the simulation, a new winding-trajectory based wire tension control can be implemented. Therefore, new strategies to control the tensile force during the process using a CAD/CAM approach will be presented in this paper.",
"title": ""
}
] |
1840578 | A Generalized Wiener Attack on RSA | [
{
"docid": "pos:1840578_0",
"text": "We show how to find sufficiently small integer solutions to a polynomial in a single variable modulo N, and to a polynomial in two variables over the integers. The methods sometimes extend to more variables. As applications: RSA encryption with exponent 3 is vulnerable if the opponent knows two-thirds of the message, or if two messages agree over eight-ninths of their length; and we can find the factors of N=PQ if we are given the high order $\\frac{1}{4} \\log_2 N$ bits of P.",
"title": ""
}
] | [
{
"docid": "neg:1840578_0",
"text": "The acquisition of high-fidelity, long-term neural recordings in vivo is critically important to advance neuroscience and brain⁻machine interfaces. For decades, rigid materials such as metal microwires and micromachined silicon shanks were used as invasive electrophysiological interfaces to neurons, providing either single or multiple electrode recording sites. Extensive research has revealed that such rigid interfaces suffer from gradual recording quality degradation, in part stemming from tissue damage and the ensuing immune response arising from mechanical mismatch between the probe and brain. The development of \"soft\" neural probes constructed from polymer shanks has been enabled by advancements in microfabrication; this alternative has the potential to mitigate mismatch-related side effects and thus improve the quality of recordings. This review examines soft neural probe materials and their associated microfabrication techniques, the resulting soft neural probes, and their implementation including custom implantation and electrical packaging strategies. The use of soft materials necessitates careful consideration of surgical placement, often requiring the use of additional surgical shuttles or biodegradable coatings that impart temporary stiffness. Investigation of surgical implantation mechanics and histological evidence to support the use of soft probes will be presented. The review concludes with a critical discussion of the remaining technical challenges and future outlook.",
"title": ""
},
{
"docid": "neg:1840578_1",
"text": "We conducted a longitudinal study with 32 nonmusician children over 9 months to determine 1) whether functional differences between musician and nonmusician children reflect specific predispositions for music or result from musical training and 2) whether musical training improves nonmusical brain functions such as reading and linguistic pitch processing. Event-related brain potentials were recorded while 8-year-old children performed tasks designed to test the hypothesis that musical training improves pitch processing not only in music but also in speech. Following the first testing sessions nonmusician children were pseudorandomly assigned to music or to painting training for 6 months and were tested again after training using the same tests. After musical (but not painting) training, children showed enhanced reading and pitch discrimination abilities in speech. Remarkably, 6 months of musical training thus suffices to significantly improve behavior and to influence the development of neural processes as reflected in specific pattern of brain waves. These results reveal positive transfer from music to speech and highlight the influence of musical training. Finally, they demonstrate brain plasticity in showing that relatively short periods of training have strong consequences on the functional organization of the children's brain.",
"title": ""
},
{
"docid": "neg:1840578_2",
"text": "Spectral clustering enjoys its success in both data clustering and semisupervised learning. But, most spectral clustering algorithms cannot handle multi-class clustering problems directly. Additional strategies are needed to extend spectral clustering algorithms to multi-class clustering problems. Furthermore, most spectral clustering algorithms employ hard cluster membership, which is likely to be trapped by the local optimum. In this paper, we present a new spectral clustering algorithm, named “Soft Cut”. It improves the normalized cut algorithm by introducing soft membership, and can be efficiently computed using a bound optimization algorithm. Our experiments with a variety of datasets have shown the promising performance of the proposed clustering algorithm.",
"title": ""
},
{
"docid": "neg:1840578_3",
"text": "Different types of electric vehicles (EVs) have been recently designed with the aim of solving pollution problems caused by the emission of gasoline-powered engines. Environmental problems promote the adoption of new-generation electric vehicles for urban transportation. As it is well known, one of the weakest points of electric vehicles is the battery system. Vehicle autonomy and, therefore, accurate detection of battery state of charge (SoC) together with battery expected life, i.e., battery state of health, are among the major drawbacks that prevent the introduction of electric vehicles in the consumer market. The electric scooter may provide the most feasible opportunity among EVs. They may be a replacement product for the primary-use vehicle, especially in Europe and Asia, provided that drive performance, safety, and cost issues are similar to actual engine scooters. The battery system choice is a crucial item, and thanks to an increasing emphasis on vehicle range and performance, the Li-ion battery could become a viable candidate. This paper deals with the design of a battery pack based on Li-ion technology for a prototype electric scooter with high performance and autonomy. The adopted battery system is composed of a suitable number of cells series connected, featuring a high voltage level. Therefore, cell equalization and monitoring need to be provided. Due to manufacturing asymmetries, charge and discharge cycles lead to cell unbalancing, reducing battery capacity and, depending on cell type, causing safety troubles or strongly limiting the storage capacity of the full pack. No solution is available on the market at a cheap price, because of the required voltage level and performance, therefore, a dedicated battery management system was designed, that also includes a battery SoC monitoring. The proposed solution features a high capability of energy storing in braking conditions, charge equalization, overvoltage and undervoltage protection and, obviously, SoC information in order to optimize autonomy instead of performance or vice-versa.",
"title": ""
},
{
"docid": "neg:1840578_4",
"text": "Sign recognition is an integral part of autonomous cars. Any misclassification of traffic signs can potentially lead to a multitude of disastrous consequences, ranging from a life-threatening accident to even a large-scale interruption of transportation services relying on autonomous cars. In this paper, we propose and examine security attacks against sign recognition systems for Deceiving Autonomous caRs with Toxic Signs (we call the proposed attacks DARTS). In particular, we introduce two novel methods to create these toxic signs. First, we propose Out-of-Distribution attacks, which expand the scope of adversarial examples by enabling the adversary to generate these starting from an arbitrary point in the image space compared to prior attacks which are restricted to existing training/test data (In-Distribution). Second, we present the Lenticular Printing attack, which relies on an optical phenomenon to deceive the traffic sign recognition system. We extensively evaluate the effectiveness of the proposed attacks in both virtual and real-world settings and consider both white-box and black-box threat models. Our results demonstrate that the proposed attacks are successful under both settings and threat models. We further show that Out-of-Distribution attacks can outperform In-Distribution attacks on classifiers defended using the adversarial training defense, exposing a new attack vector for these defenses.",
"title": ""
},
{
"docid": "neg:1840578_5",
"text": "Complete scene understanding has been an aspiration of computer vision since its very early days. It has applications in autonomous navigation, aerial imaging, surveillance, human-computer interaction among several other active areas of research. While many methods since the advent of deep learning have taken performance in several scene understanding tasks to respectable levels, the tasks are far from being solved. One problem that plagues scene understanding is low-resolution. Convolutional Neural Networks that achieve impressive results on high resolution struggle when confronted with low resolution because of the inability to learn hierarchical features and weakening of signal with depth. In this thesis, we study the low resolution and suggest approaches that can overcome its consequences on three popular tasks object detection, in-the-wild face recognition, and semantic segmentation. The popular object detectors were designed for, trained, and benchmarked on datasets that have a strong bias towards medium and large sized objects. When these methods are finetuned and tested on a dataset of small objects, they perform miserably. The most successful detection algorithms follow a two-stage pipeline: the first which quickly generates regions of interest that are likely to contain the object and the second, which classifies these proposal regions. We aim to adapt both these stages for the case of small objects; the first by modifying anchor box generation based on theoretical considerations, and the second using a simple-yet-effective super-resolution step. Motivated by the success of being able to detect small objects, we study the problem of detecting and recognising objects with huge variations in resolution, in the problem of face recognition in semistructured scenes. Semi-structured scenes like social settings are more challenging than regular ones: there are several more faces of vastly different scales, there are large variations in illumination, pose and expression, and the existing datasets do not capture these variations. We address the unique challenges in this setting by (i) benchmarking popular methods for the problem of face detection, and (ii) proposing a method based on resolution-specific networks to handle different scales. Semantic segmentation is a more challenging localisation task where the goal is to assign a semantic class label to every pixel in the image. Solving such a problem is crucial for self-driving cars where we need sharper boundaries for roads, obstacles and paraphernalia. For want of a higher receptive field and a more global view of the image, CNN networks forgo resolution. This results in poor segmentation of complex boundaries, small and thin objects. We propose prefixing a super-resolution step before semantic segmentation. Through experiments, we show that a performance boost can be obtained on the popular streetview segmentation dataset, CityScapes.",
"title": ""
},
{
"docid": "neg:1840578_6",
"text": "Cloud computing is a distributed computing model that still faces problems. New ideas emerge to take advantage of its features and among the research challenges found in the cloud, we can highlight Identity and Access Management. The main problems of the application of access control in the cloud are the necessary flexibility and scalability to support a large number of users and resources in a dynamic and heterogeneous environment, with collaboration and information sharing needs. This paper proposes the use of risk-based dynamic access control for cloud computing. The proposal is presented as an access control model based on an extension of the XACML standard with three new components: the Risk Engine, the Risk Quantification Web Services and the Risk Policies. The risk policies present a method to describe risk metrics and their quantification, using local or remote functions. The risk policies allow users and cloud service providers to define how to handle risk-based access control for their resources, using different quantification and aggregation methods. The model reaches the access decision based on a combination of XACML decisions and risk analysis. A prototype of the model is implemented, showing it has enough expressivity to describe the models of related work. In the experimental results, the prototype takes between 2 and 6 milliseconds to reach access decisions using a risk policy. A discussion on the security aspects of the model is also presented.",
"title": ""
},
{
"docid": "neg:1840578_7",
"text": "Automatic Turret Gun (ATG) is a weapon system used in numerous combat platforms and vehicles such as in tanks, aircrafts, or stationary ground platforms. ATG plays a big role in both defensive and offensive scenario. It allows combat engagement while the operator of ATG (soldier) covers himself inside a protected control station. On the other hand, ATGs have significant mass and dimension, therefore susceptible to inertial disturbances that need to be compensated to enable the ATG to reach the targeted position quickly and accurately while undergoing disturbances from weapon fire or platform movement. The paper discusses various conventional control method applied in ATG, namely PID controller, RAC, and RACAFC. A number of experiments have been carried out for various range of angle both in azimuth and elevation axis of turret gun. The results show that for an ATG system working under disturbance, RACAFC exhibits greater performance than both RAC and PID, but in experiments without load, equally satisfactory results are obtained from RAC. The exception is for the PID controller, which cannot reach the entire angle given.",
"title": ""
},
{
"docid": "neg:1840578_8",
"text": "Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.",
"title": ""
},
{
"docid": "neg:1840578_9",
"text": "Social media platforms such as Twitter and Facebook enable the creation of virtual customer environments (VCEs) where online communities of interest form around specific firms, brands, or products. While these platforms can be used as another means to deliver familiar e-commerce applications, when firms fail to fully engage their customers, they also fail to fully exploit the capabilities of social media platforms. To gain business value, organizations need to incorporate community building as part of the implementation of social media.",
"title": ""
},
{
"docid": "neg:1840578_10",
"text": "Lymph nodes are assessed routinely in clinical practice and their size is followed throughout radiation or chemotherapy to monitor the effectiveness of cancer treatment. This paper presents a robust learning-based method for automatic detection and segmentation of solid lymph nodes from CT data, with the following contributions. First, it presents a learning based approach to solid lymph node detection that relies on marginal space learning to achieve great speedup with virtually no loss in accuracy. Second, it presents a computationally efficient segmentation method for solid lymph nodes (LN). Third, it introduces two new sets of features that are effective for LN detection, one that self-aligns to high gradients and another set obtained from the segmentation result. The method is evaluated for axillary LN detection on 131 volumes containing 371 LN, yielding a 83.0% detection rate with 1.0 false positive per volume. It is further evaluated for pelvic and abdominal LN detection on 54 volumes containing 569 LN, yielding a 80.0% detection rate with 3.2 false positives per volume. The running time is 5-20 s per volume for axillary areas and 15-40 s for pelvic. An added benefit of the method is the capability to detect and segment conglomerated lymph nodes.",
"title": ""
},
{
"docid": "neg:1840578_11",
"text": "The modified Brostrom procedure is commonly recommended for reconstruction of the anterior talofibular ligament (ATF) and calcaneofibular ligament (CF) with an advancement of the inferior retinaculum. However, some surgeons perform the modified Bostrom procedure with an semi-single ATF ligament reconstruction and advancement of the inferior retinaculum for simplicity. This study evaluated the initial stability of the modified Brostrom procedure and compared a two ligaments (ATF + CF) reconstruction group with a semi-single ligament (ATF) reconstruction group. Sixteen paired fresh frozen cadaveric ankle joints were used in this study. The ankle joint laxity was measured on the plane radiographs with 150 N anterior drawer force and 150 N varus stress force. The anterior displacement distances and varus tilt angles were measured before and after cutting the ATF and CF ligaments. A two ligaments (ATF + CF) reconstruction with an advancement of the inferior retinaculum was performed on eight left cadaveric ankles, and an semi-single ligament (ATF) reconstruction with an advancement of the inferior retinaculum was performed on eight right cadaveric ankles. The ankle instability was rechecked after surgery. The decreases in instability of the ankle after surgery were measured and the difference in the decrease was compared using a Mann–Whitney U test. The mean decreases in anterior displacement were 3.4 and 4.0 mm in the two ligaments reconstruction and semi-single ligament reconstruction groups, respectively. There was no significant difference between the two groups (P = 0.489). The mean decreases in the varus tilt angle in the two ligaments reconstruction and semi-single ligament reconstruction groups were 12.6° and 12.2°, respectively. There was no significant difference between the two groups (P = 0.399). In this cadaveric study, a substantial level of initial stability can be obtained using an anatomical reconstruction of the anterior talofibular ligament only and reinforcement with the inferior retinaculum. The modified Brostrom procedure with a semi-single ligament (Anterior talofibular ligament) reconstruction with an advancement of the inferior retinaculum can provide as much initial stability as the two ligaments (Anterior talofibular ligament and calcaneofibular ligament) reconstruction procedure.",
"title": ""
},
{
"docid": "neg:1840578_12",
"text": "Cloud computing providers such as Amazon and Google have recently begun offering container-instances, which provide an efficient route to application deployment within a lightweight, isolated and well-defined execution environment. Cloud providers currently offer Container Service Platforms (CSPs), which orchestrate containerised applications. Existing CSP frameworks do not offer any form of intelligent resource scheduling: applications are usually scheduled individually, rather than taking a holistic view of all registered applications and available resources in the cloud. This can result in increased execution times for applications, resource wastage through underutilised container-instances, and a reduction in the number of applications that can be deployed, given the available resources. The research presented in this paper aims to extend existing systems by adding a cloud-based Container Management Service (CMS) framework that offers increased deployment density, scalability and resource efficiency. CMS provides additional functionalities for orchestrating containerised applications by joint optimisation of sets of containerised applications, and resource pool in multiple (geographical distributed) cloud regions. We evaluated CMS on a cloud-based CSP i.e., Amazon EC2 Container Management Service (ECS) and conducted extensive experiments using sets of CPU and Memory intensive containerised applications against the direct deployment strategy of Amazon ECS. The results show that CMS achieves up to 25% higher cluster utilisation, and up to 70% reduction in execution times.",
"title": ""
},
{
"docid": "neg:1840578_13",
"text": "This paper introduces the task of questionanswer driven semantic role labeling (QA-SRL), where question-answer pairs are used to represent predicate-argument structure. For example, the verb “introduce” in the previous sentence would be labeled with the questions “What is introduced?”, and “What introduces something?”, each paired with the phrase from the sentence that gives the correct answer. Posing the problem this way allows the questions themselves to define the set of possible roles, without the need for predefined frame or thematic role ontologies. It also allows for scalable data collection by annotators with very little training and no linguistic expertise. We gather data in two domains, newswire text and Wikipedia articles, and introduce simple classifierbased models for predicting which questions to ask and what their answers should be. Our results show that non-expert annotators can produce high quality QA-SRL data, and also establish baseline performance levels for future work on this task.",
"title": ""
},
{
"docid": "neg:1840578_14",
"text": "Math word problems form a natural abstraction to a range of quantitative reasoning problems, such as understanding financial news, sports results, and casualties of war. Solving such problems requires the understanding of several mathematical concepts such as dimensional analysis, subset relationships, etc. In this paper, we develop declarative rules which govern the translation of natural language description of these concepts to math expressions. We then present a framework for incorporating such declarative knowledge into word problem solving. Our method learns to map arithmetic word problem text to math expressions, by learning to select the relevant declarative knowledge for each operation of the solution expression. This provides a way to handle multiple concepts in the same problem while, at the same time, supporting interpretability of the answer expression. Our method models the mapping to declarative knowledge as a latent variable, thus removing the need for expensive annotations. Experimental evaluation suggests that our domain knowledge based solver outperforms all other systems, and that it generalizes better in the realistic case where the training data it is exposed to is biased in a different way than the test data.",
"title": ""
},
{
"docid": "neg:1840578_15",
"text": "CDM ESD event has become the main ESD reliability concern for integrated-circuits products using nanoscale CMOS technology. A novel CDM ESD protection design, using self-biased current trigger (SBCT) and source pumping, has been proposed and successfully verified in 0.13-lm CMOS technology to achieve 1-kV CDM ESD robustness. 2007 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840578_16",
"text": "Metaphors are common elements of language that allow us to creatively stretch the limits of word meaning. However, metaphors vary in their degree of novelty, which determines whether people must create new meanings on-line or retrieve previously known metaphorical meanings from memory. Such variations affect the degree to which general cognitive capacities such as executive control are required for successful comprehension. We investigated whether individual differences in executive control relate to metaphor processing using eye movement measures of reading. Thirty-nine participants read sentences including metaphors or idioms, another form of figurative language that is more likely to rely on meaning retrieval. They also completed the AX-CPT, a domain-general executive control task. In Experiment 1, we examined sentences containing metaphorical or literal uses of verbs, presented with or without prior context. In Experiment 2, we examined sentences containing idioms or literal phrases for the same participants to determine whether the link to executive control was qualitatively similar or different to Experiment 1. When metaphors were low familiar, all people read verbs used as metaphors more slowly than verbs used literally (this difference was smaller for high familiar metaphors). Executive control capacity modulated this pattern in that high executive control readers spent more time reading verbs when a prior context forced a particular interpretation (metaphorical or literal), and they had faster total metaphor reading times when there was a prior context. Interestingly, executive control did not relate to idiom processing for the same readers. Here, all readers had faster total reading times for high familiar idioms than literal phrases. Thus, executive control relates to metaphor but not idiom processing for these readers, and for the particular metaphor and idiom reading manipulations presented.",
"title": ""
},
{
"docid": "neg:1840578_17",
"text": "Two experiments comparing user performance on ClearType and Regular displays are reported. In the first, 26 participants scanned a series of spreadsheets for target information. Speed of performance was significantly faster with ClearType. In the second experiment, 25 users read two articles for meaning. Reading speed was significantly faster for ClearType. In both experiments no differences in accuracy of performance or visual fatigue scores were observed. The data also reveal substantial individual differences in performance suggesting ClearType may not be universally beneficial to information workers.",
"title": ""
},
{
"docid": "neg:1840578_18",
"text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.",
"title": ""
}
] |
1840579 | What you see is what you set: sustained inattentional blindness and the capture of awareness. | [
{
"docid": "pos:1840579_0",
"text": "Advances in neuroscience implicate reentrant signaling as the predominant form of communication between brain areas. This principle was used in a series of masking experiments that defy explanation by feed-forward theories. The masking occurs when a brief display of target plus mask is continued with the mask alone. Two masking processes were found: an early process affected by physical factors such as adapting luminance and a later process affected by attentional factors such as set size. This later process is called masking by object substitution, because it occurs whenever there is a mismatch between the reentrant visual representation and the ongoing lower level activity. Iterative reentrant processing was formalized in a computational model that provides an excellent fit to the data. The model provides a more comprehensive account of all forms of visual masking than do the long-held feed-forward views based on inhibitory contour interactions.",
"title": ""
}
] | [
{
"docid": "neg:1840579_0",
"text": "Re-authenticating users may be necessary for smartphone authentication schemes that leverage user behaviour, device context, or task sensitivity. However, due to the unpredictable nature of re-authentication, users may get annoyed when they have to use the default, non-transparent authentication prompt for re-authentication. We address this concern by proposing several re-authentication configurations with varying levels of screen transparency and an optional time delay before displaying the authentication prompt. We conduct user studies with 30 participants to evaluate the usability and security perceptions of these configurations. We find that participants respond positively to our proposed changes and utilize the time delay while they are anticipating to get an authentication prompt to complete their current task. Though our findings indicate no differences in terms of task performance against these configurations, we find that the participants’ preferences for the configurations are context-based. They generally prefer the reauthentication configuration with a non-transparent background for sensitive applications, such as banking and photo apps, while their preferences are inclined towards convenient, usable configurations for medium and low sensitive apps or while they are using their devices at home. We conclude with suggestions to improve the design of our proposed configurations as well as a discussion of guidelines for future implementations of re-authentication schemes.",
"title": ""
},
{
"docid": "neg:1840579_1",
"text": "Fuzz testing is an active testing technique which consists in automatically generating and sending malicious inputs to an application in order to hopefully trigger a vulnerability. Fuzzing entails such questions as: Where to fuzz? Which parameter to fuzz? What kind of anomaly to introduce? Where to observe its effects? etc. Different test contexts depending on the degree of knowledge assumed about the target: recompiling the application (white-box), interacting only at the target interface (blackbox), dynamically instrumenting a binary (grey-box). In this paper, we focus on black-box test contest, and specifically address the questions: How to obtain a notion of coverage on unstructured inputs? How to capture human testers intuitions and use it for the fuzzing? How to drive the search in various directions? We specifically address the problems of detecting Memory Corruption in PDF interpreters and Cross Site Scripting (XSS) in web applications. We detail our approaches which use genetic algorithm, inference and anti-random testing. We empirically evaluate our implementations of XSS fuzzer KameleonFuzz and of PDF fuzzer ShiftMonkey.",
"title": ""
},
{
"docid": "neg:1840579_2",
"text": "This paper describes and assesses underwater channel models for optical wireless communication. Models considered are: inherent optical properties; vector radiative transfer theory with the small-angle analytical solution and numerical solutions of the vector radiative transfer equation (Monte Carlo, discrete ordinates and invariant imbedding). Variable composition and refractive index, in addition to background light, are highlighted as aspects of the channel which advanced models must represent effectively. Models are assessed against these aspects in terms of their ability to predict transmitted power and spatial and temporal distributions of light a specified distance from a transmitter. Monte Carlo numerical methods are found to be the most versatile but are compromised by long computational time and greater errors than other methods.",
"title": ""
},
{
"docid": "neg:1840579_3",
"text": "Domestic induction heating (IH) is currently the technology of choice in modern domestic applications due to its advantages regarding fast heating time, efficiency, and improved control. New design trends pursue the implementation of new cost-effective topologies with higher efficiency levels. In order to achieve this aim, a direct ac-ac boost resonant converter is proposed in this paper. The main features of this proposal are the improved efficiency, reduced component count, and proper output power control. A detailed analytical model leading to closed-form expressions of the main magnitudes is presented, and a converter design procedure is proposed. In addition, an experimental prototype has been designed and built to prove the expected converter performance and the accurateness of the analytical model. The experimental results are in good agreement with the analytical ones and prove the feasibility of the proposed converter for the IH application.",
"title": ""
},
{
"docid": "neg:1840579_4",
"text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricide tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reordering and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.",
"title": ""
},
{
"docid": "neg:1840579_5",
"text": "What is CRM Customer relationship Management (CRM) appears to be a simple and straightforward concept, but there are many different definitions and implementations of CRM. At present, a number of different conceptual understandings are associated with the term \"Customer Relationship Management (CRM). There understanding range from IT driven programs designed to optimize customer contact to comprehensive approaches for the establishment and design of long-term relationships. The effort to establish a meaningful relationship with the customer is characteristic of this last understanding (Barnes 2003).",
"title": ""
},
{
"docid": "neg:1840579_6",
"text": "Abstract—In this paper, a novel dual-band RF-harvesting RF-DC converter with a frequency limited impedance matching network (M/N) is proposed. The proposed RF-DC converter consists of a dual-band impedance matching network, a rectifier circuit with villard structure, a wideband harmonic suppression low-pass filter (LPF), and a termination load. The proposed dual-band M/N can match two receiving band signals and suppress the out-of-band signals effectively, so the back-scattered nonlinear frequency components from the nonlinear rectifying diodes to the antenna can be blocked. The fabricated circuit provides the maximum RF-DC conversion efficiency of 73.76% and output voltage 7.09 V at 881MHz and 69.05% with 6.86V at 2.4GHz with an individual input signal power of 22 dBm. Moreover, the conversion efficiency of 77.13% and output voltage of 7.25V are obtained when two RF waves with input dual-band signal power of 22 dBm are fed simultaneously.",
"title": ""
},
{
"docid": "neg:1840579_7",
"text": "In this article, we quantitatively analyze how the term “fake news” is being shaped in news media in recent years. We study the perception and the conceptualization of this term in the traditional media using eight years of data collected from news outlets based in 20 countries. Our results not only corroborate previous indications of a high increase in the usage of the expression “fake news”, but also show contextual changes around this expression after the United States presidential election of 2016. Among other results, we found changes in the related vocabulary, in the mentioned entities, in the surrounding topics and in the contextual polarity around the term “fake news”, suggesting that this expression underwent a change in perception and conceptualization after 2016. These outcomes expand the understandings on the usage of the term “fake news”, helping to comprehend and more accurately characterize this relevant social phenomenon linked to misinformation and manipulation.",
"title": ""
},
{
"docid": "neg:1840579_8",
"text": "Social media platforms provide an inexpensive communication medium that allows anyone to quickly reach millions of users. Consequently, in these platforms anyone can publish content and anyone interested in the content can obtain it, representing a transformative revolution in our society. However, this same potential of social media systems brings together an important challenge---these systems provide space for discourses that are harmful to certain groups of people. This challenge manifests itself with a number of variations, including bullying, offensive content, and hate speech. Specifically, authorities of many countries today are rapidly recognizing hate speech as a serious problem, specially because it is hard to create barriers on the Internet to prevent the dissemination of hate across countries or minorities. In this paper, we provide the first of a kind systematic large scale measurement and analysis study of hate speech in online social media. We aim to understand the abundance of hate speech in online social media, the most common hate expressions, the effect of anonymity on hate speech and the most hated groups across regions. In order to achieve our objectives, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both of these systems. Our results identify hate speech forms and unveil a set of important patterns, providing not only a broader understanding of online hate speech, but also offering directions for detection and prevention approaches.",
"title": ""
},
{
"docid": "neg:1840579_9",
"text": "In Rspondin-based 3D cultures, Lgr5 stem cells from multiple organs form ever-expanding epithelial organoids that retain their tissue identity. We report the establishment of tumor organoid cultures from 20 consecutive colorectal carcinoma (CRC) patients. For most, organoids were also generated from adjacent normal tissue. Organoids closely recapitulate several properties of the original tumor. The spectrum of genetic changes within the \"living biobank\" agrees well with previous large-scale mutational analyses of CRC. Gene expression analysis indicates that the major CRC molecular subtypes are represented. Tumor organoids are amenable to high-throughput drug screens allowing detection of gene-drug associations. As an example, a single organoid culture was exquisitely sensitive to Wnt secretion (porcupine) inhibitors and carried a mutation in the negative Wnt feedback regulator RNF43, rather than in APC. Organoid technology may fill the gap between cancer genetics and patient trials, complement cell-line- and xenograft-based drug studies, and allow personalized therapy design. PAPERCLIP.",
"title": ""
},
{
"docid": "neg:1840579_10",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
},
{
"docid": "neg:1840579_11",
"text": "With the rapid development of Location-based Social Network (LBSN) services, a large number of Point-of-Interests (POIs) have been available, which consequently raises a great demand of building personalized POI recommender systems. A personalized POI recommender system can significantly help users to find their preferred POIs and assist POI owners to attract more customers. However, due to the complexity of users’ checkin decision making process that is influenced by many different factors such as POI distance and region’s prosperity, and the dynamics of user’s preference, POI recommender systems usually suffer from many challenges. Although different latent factor based methods (e.g., probabilistic matrix factorization) have been proposed, most of them do not successfully incorporate both geographical influence and temporal effect together into latent factor models. To this end, in this paper, we propose a new Spatial-Temporal Probabilistic Matrix Factorization (STPMF) model that models a user’s preference for POI as the combination of his geographical preference and other general interest in POI. Furthermore, in addition to static general interest of user, we capture the temporal dynamics of user’s interest as well by modeling checkin data in a unique way. To evaluate the proposed STPMF model, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on two real-world data sets. The experimental results clearly demonstrate the effectiveness of our proposed STPMF model.",
"title": ""
},
{
"docid": "neg:1840579_12",
"text": "In this article, a novel compact reconfigurable antenna based on substrate integrated waveguide (SIW) technology is introduced. The geometry of the proposed antennas is symmetric with respect to the horizontal center line. The electrical shape of the antenna is composed of double H-plane SIW based horn antennas and radio frequency micro electro mechanical system (RF-MEMS) actuators. The RF-MEMS actuators are integrated in the planar structure of the antenna for reconfiguring the radiation pattern by adding nulls to the pattern. The proper activation/deactivation of the switches alters the modes distributed in the structure and changes the radiation pattern. When different combinations of switches are on or off, the radiation patterns have 2, 4, 6, 8, . . . nulls with nearly similar operating frequencies. The attained peak gain of the proposed antenna is higher than 5 dB at any point on the far field radiation pattern except at the null positions. The design procedure and closed form formulation are provided for analytical determination of the antenna parameters. Moreover, the designed antenna with an overall dimensions of only 63.6 × 50 mm2 is fabricated and excited through standard SMA connector and compared with the simulated results. The measured results show that the antenna can clearly alters its beams using the switching components. The proposed antenna retains advantages of low cost, low cross-polarized radiation, and easy integration of configuration.",
"title": ""
},
{
"docid": "neg:1840579_13",
"text": "Hidden Markov Model (HMM) based applications are common in various areas, but the incorporation of HMM's for anomaly detection is still in its infancy. This paper aims at classifying the TCP network traffic as an attack or normal using HMM. The paper's main objective is to build an anomaly detection system, a predictive model capable of discriminating between normal and abnormal behavior of network traffic. In the training phase, special attention is given to the initialization and model selection issues, which makes the training phase particularly effective. For training HMM, 12.195% features out of the total features (5 features out of 41 features) present in the KDD Cup 1999 data set are used. Result of tests on the KDD Cup 1999 data set shows that the proposed system is able to classify network traffic in proportion to the number of features used for training HMM. We are extending our work on a larger data set for building an anomaly detection system.",
"title": ""
},
{
"docid": "neg:1840579_14",
"text": "The Stock Market is known for its volatile and unstable nature. A particular stock could be thriving in one period and declining in the next. Stock traders make money from buying equity when they are at their lowest and selling when they are at their highest. The logical question would be: \"What Causes Stock Prices To Change?\". At the most fundamental level, the answer to this would be the demand and supply. In reality, there are many theories as to why stock prices fluctuate, but there is no generic theory that explains all, simply because not all stocks are identical, and one theory that may apply for today, may not necessarily apply for tomorrow. This paper covers various approaches taken to attempt to predict the stock market without extensive prior knowledge or experience in the subject area, highlighting the advantages and limitations of the different techniques such as regression and classification. We formulate both short term and long term predictions. Through experimentation we achieve 81% accuracy for future trend direction using classification, 0.0117 RMSE for next day price and 0.0613 RMSE for next day change in price using regression techniques. The results obtained in this paper are achieved using only historic prices and technical indicators. Various methods, tools and evaluation techniques will be assessed throughout the course of this paper, the result of this contributes as to which techniques will be selected and enhanced in the final artefact of a stock prediction model. Further work will be conducted utilising deep learning techniques to approach the problem. This paper will serve as a preliminary guide to researchers wishing to expose themselves to this area.",
"title": ""
},
{
"docid": "neg:1840579_15",
"text": "The chemokine receptor CCR7 drives leukocyte migration into and within lymph nodes (LNs). It is activated by chemokines CCL19 and CCL21, which are scavenged by the atypical chemokine receptor ACKR4. CCR7-dependent navigation is determined by the distribution of extracellular CCL19 and CCL21, which form concentration gradients at specific microanatomical locations. The mechanisms underpinning the establishment and regulation of these gradients are poorly understood. In this article, we have incorporated multiple biochemical processes describing the CCL19-CCL21-CCR7-ACKR4 network into our model of LN fluid flow to establish a computational model to investigate intranodal chemokine gradients. Importantly, the model recapitulates CCL21 gradients observed experimentally in B cell follicles and interfollicular regions, building confidence in its ability to accurately predict intranodal chemokine distribution. Parameter variation analysis indicates that the directionality of these gradients is robust, but their magnitude is sensitive to these key parameters: chemokine production, diffusivity, matrix binding site availability, and CCR7 abundance. The model indicates that lymph flow shapes intranodal CCL21 gradients, and that CCL19 is functionally important at the boundary between B cell follicles and the T cell area. It also predicts that ACKR4 in LNs prevents CCL19/CCL21 accumulation in efferent lymph, but does not control intranodal gradients. Instead, it attributes the disrupted interfollicular CCL21 gradients observed in Ackr4-deficient LNs to ACKR4 loss upstream. Our novel approach has therefore generated new testable hypotheses and alternative interpretations of experimental data. Moreover, it acts as a framework to investigate gradients at other locations, including those that cannot be visualized experimentally or involve other chemokines.",
"title": ""
},
{
"docid": "neg:1840579_16",
"text": "A brain-computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI) is presented which allows human subjects to observe and control changes of their own blood oxygen level-dependent (BOLD) response. This BCI performs data preprocessing (including linear trend removal, 3D motion correction) and statistical analysis on-line. Local BOLD signals are continuously fed back to the subject in the magnetic resonance scanner with a delay of less than 2 s from image acquisition. The mean signal of a region of interest is plotted as a time-series superimposed on color-coded stripes which indicate the task, i.e., to increase or decrease the BOLD signal. We exemplify the presented BCI with one volunteer intending to control the signal of the rostral-ventral and dorsal part of the anterior cingulate cortex (ACC). The subject achieved significant changes of local BOLD responses as revealed by region of interest analysis and statistical parametric maps. The percent signal change increased across fMRI-feedback sessions suggesting a learning effect with training. This methodology of fMRI-feedback can assess voluntary control of circumscribed brain areas. As a further extension, behavioral effects of local self-regulation become accessible as a new field of research.",
"title": ""
},
{
"docid": "neg:1840579_17",
"text": "Microorganisms present in our oral cavity which are called the human micro flora attach to our tooth surfaces and develop biofilms. In maximum organic habitats microorganisms generally prevail as multispecies biolfilms with the help of intercellular interactions and communications among them which are the main keys for their endurance. These biofilms are formed by initial attachment of bacteria to a surface, development of a multi –dimensional complex structure and detachment to progress other site. The best example of biofilm formation is dental plaque. Plaque formation can lead to dental caries and other associated diseases causing tooth loss. Many different bacteria are involved in these processes and one among them is Streptococcus mutans which is the principle and the most important agent. When these infections become severe, during the treatment the bacterium can enter the bloodstream from the oral cavity and cause endocarditis. The oral bacterium S. mutans is greatly skilled in its mechanical modes of carbohydrate absorption. It also synthesizes polysaccharides that are present in dental plaque causing caries. As dental caries is a preventable disease major distinct approaches for its prevention are: carbohydrate diet, sugar substitutes, mechanical cleaning techniques, use of fluorides, antimicrobial agents, fissure sealants, vaccines, probiotics, replacement theory and dairy products and at the same time for tooth remineralization fluorides and casein phosphopeptides are extensively employed. The aim of this review article is to put forth the general features of the bacterium S.mutans and how it is involved in certain diseases like: dental plaque, dental caries and endocarditis.",
"title": ""
},
{
"docid": "neg:1840579_18",
"text": "Sex based differences in immune responses, affecting both the innate and adaptive immune responses, contribute to differences in the pathogenesis of infectious diseases in males and females, the response to viral vaccines and the prevalence of autoimmune diseases. Indeed, females have a lower burden of bacterial, viral and parasitic infections, most evident during their reproductive years. Conversely, females have a higher prevalence of a number of autoimmune diseases, including Sjogren's syndrome, systemic lupus erythematosus (SLE), scleroderma, rheumatoid arthritis (RA) and multiple sclerosis (MS). These observations suggest that gonadal hormones may have a role in this sex differential. The fundamental differences in the immune systems of males and females are attributed not only to differences in sex hormones, but are related to X chromosome gene contributions and the effects of environmental factors. A comprehensive understanding of the role that sex plays in the immune response is required for therapeutic intervention strategies against infections and the development of appropriate and effective therapies for autoimmune diseases for both males and females. This review will focus on the differences between male and female immune responses in terms of innate and adaptive immunity, and the effects of sex hormones in SLE, MS and RA.",
"title": ""
},
{
"docid": "neg:1840579_19",
"text": "Web services that thrive on mining user interaction data such as search engines can currently track clicks and mouse cursor activity on their Web pages. Cursor interaction mining has been shown to assist in user modeling and search result relevance, and is becoming another source of rich information that data scientists and search engineers can tap into. Due to the growing popularity of touch-enabled mobile devices, search systems may turn to tracking touch interactions in place of cursor interactions. However, unlike cursor interactions, touch interactions are difficult to record reliably and their coordinates have not been shown to relate to regions of user interest. A better approach may be to track the viewport coordinates instead, which the user must manipulate to view the content on a mobile device. These recorded viewport coordinates can potentially reveal what regions of the page interest users and to what degree. Using this information, search system can then improve the design of their pages or use this information in click models or learning to rank systems. In this position paper, we discuss some of the challenges faced in mining interaction data for new modes of interaction, and future research directions in this field.",
"title": ""
}
] |
1840580 | Computational personality traits assessment: A review | [
{
"docid": "pos:1840580_0",
"text": "Whenever we listen to or meet a new person we try to predict personality attributes of the person. Our behavior towards the person is hugely influenced by the predictions we make. Personality is made up of the characteristic patterns of thoughts, feelings and behaviors that make a person unique. Your personality affects your success in the role. Recognizing about yourself and reflecting on your personality can help you to understand how you might shape your future. Various approaches like personality prediction through speech, facial expression, video, and text are proposed in literature to recognize personality. Personality predictions can be made out of one’s handwriting as well. The objective of this paper is to discuss methodology used to identify personality through handwriting analysis and present current state-of-art related to it.",
"title": ""
},
{
"docid": "pos:1840580_1",
"text": "We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. When combined with a standard classifier, features extracted from these models outperform SIFT, as well as representations from other feature learning methods.",
"title": ""
}
] | [
{
"docid": "neg:1840580_0",
"text": "With the proliferation of mobile demands and increasingly multifarious services and applications, mobile Internet has been an irreversible trend. Unfortunately, the current mobile and wireless network (MWN) faces a series of pressing challenges caused by the inherent design. In this paper, we extend two latest and promising innovations of Internet, software-defined networking and network virtualization, to mobile and wireless scenarios. We first describe the challenges and expectations of MWN, and analyze the opportunities provided by the software-defined wireless network (SDWN) and wireless network virtualization (WNV). Then, this paper focuses on SDWN and WNV by presenting the main ideas, advantages, ongoing researches and key technologies, and open issues respectively. Moreover, we interpret that these two technologies highly complement each other, and further investigate efficient joint design between them. This paper confirms that SDWN and WNV may efficiently address the crucial challenges of This work is supported by National Basic Research Program of China (973 Program Grant No. 2013CB329105), National Natural Science Foundation of China (Grants No. 61301080 and No. 61171065), Chinese National Major Scientific and Technological Specialized Project (No. 2013ZX03002001), Chinas Next Generation Internet (No. CNGI-12-03-007), and ZTE Corporation. M. Yang School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710072, P. R. China E-mail: [email protected] Y. Li · D. Jin · L. Zeng Department of Electronic Engineering, Tsinghua University, Beijing 100084, P. R. China Y. Li E-mail: [email protected] D. Jin, L. Zeng E-mail: {jindp, zenglg}@mail.tsinghua.edu.cn Xin Wu Big Switch, USA E-mail: [email protected] A. V. Vasilakos Department of Computer and Telecommunications Engineering,University of Western Macedonia, Greece Electrical and Computer Engineering, National Technical University of Athens (NTUA), Greece E-mail: [email protected] MWN and significantly benefit the future mobile and wireless network.",
"title": ""
},
{
"docid": "neg:1840580_1",
"text": "We develop analytical models for predicting the magnetic field distribution in Halbach magnetized machines. They are formulated in polar coordinates and account for the relative recoil permeability of the magnets. They are applicable to both internal and external rotor permanent-magnet machines with either an iron-cored or air-cored stator and/or rotor. We compare predicted results with those obtained by finite-element analyses and measurements. We show that the air-gap flux density varies significantly with the pole number and that an optimal combination of the magnet thickness and the pole number exists for maximum air-gap flux density, while the back iron can enhance the air-gap field and electromagnetic torque when the radial thickness of the magnet is small.",
"title": ""
},
{
"docid": "neg:1840580_2",
"text": "This paper describes various systems from the University of Minnesota, Duluth that participated in the CLPsych 2015 shared task. These systems learned decision lists based on lexical features found in training data. These systems typically had average precision in the range of .70 – .76, whereas a random baseline attained .47 – .49.",
"title": ""
},
{
"docid": "neg:1840580_3",
"text": "Results from 12 switchback field trials involving 1216 cows were combined to assess the effects of a protected B vitamin blend (BVB) upon milk yield (kg), fat percentage (%), protein %, fat yield (kg) and protein yield (kg) in primiparous and multiparous cows. Trials consisted of 3 test periods executed in the order control-test-control. No diet changes other than the inclusion of 3 grams/cow/ day of the BVB during the test period occurred. Means from the two control periods were compared to results obtained during the test period using a paired T test. Cows include in the analysis were between 45 and 300 days in milk (DIM) at the start of the experiment and were continuously available for all periods. The provision of the BVB resulted in increased (P < 0.05) milk, fat %, protein %, fat yield and protein yield. Regression models showed that the amount of milk produced had no effect upon the magnitude of the increase in milk components. The increase in milk was greatest in early lactation and declined with DIM. Protein and fat % increased with DIM in mature cows, but not in first lactation cows. Differences in fat yields between test and control feeding periods did not change with DIM, but the improvement in protein yield in mature cows declined with DIM. These results indicate that the BVB provided economically important advantages throughout lactation, but expected results would vary with cow age and stage of lactation.",
"title": ""
},
{
"docid": "neg:1840580_4",
"text": "The aim of Active and Assisted Living is to develop tools to promote the ageing in place of elderly people, and human activity recognition algorithms can help to monitor aged people in home environments. Different types of sensors can be used to address this task and the RGBD sensors, especially the ones used for gaming, are cost-effective and provide much information about the environment. This work aims to propose an activity recognition algorithm exploiting skeleton data extracted by RGBD sensors. The system is based on the extraction of key poses to compose a feature vector, and a multiclass Support Vector Machine to perform classification. Computation and association of key poses are carried out using a clustering algorithm, without the need of a learning algorithm. The proposed approach is evaluated on five publicly available datasets for activity recognition, showing promising results especially when applied for the recognition of AAL related actions. Finally, the current applicability of this solution in AAL scenarios and the future improvements needed are discussed.",
"title": ""
},
{
"docid": "neg:1840580_5",
"text": "In this paper, we present the design process of a smart bracelet that aims at enhancing the life of elderly people. The bracelet acts as a personal assistant during the user's everyday life, monitoring the health status and alerting him or her about abnormal conditions, reminding medications and facilitating the everyday life in many outdoor and indoor activities.",
"title": ""
},
{
"docid": "neg:1840580_6",
"text": "According to the trend towards high-resolution CMOS image sensors, pixel sizes are continuously shrinking, towards and below 1.0μm, and sizes are now reaching a technological limit to meet required SNR performance [1-2]. SNR at low-light conditions, which is a key performance metric, is determined by the sensitivity and crosstalk in pixels. To improve sensitivity, pixel technology has migrated from frontside illumination (FSI) to backside illumiation (BSI) as pixel size shrinks down. In BSI technology, it is very difficult to further increase the sensitivity in a pixel of near-1.0μm size because there are no structural obstacles for incident light from micro-lens to photodiode. Therefore the only way to improve low-light SNR is to reduce crosstalk, which makes the non-diagonal elements of the color-correction matrix (CCM) close to zero and thus reduces color noise [3]. The best way to improve crosstalk is to introduce a complete physical isolation between neighboring pixels, e.g., using deep-trench isolation (DTI). So far, a few attempts using DTI have been made to suppress silicon crosstalk. A backside DTI in as small as 1.12μm-pixel, which is formed in the BSI process, is reported in [4], but it is just an intermediate step in the DTI-related technology because it cannot completely prevent silicon crosstalk, especially for long wavelengths of light. On the other hand, front-side DTIs for FSI pixels [5] and BSI pixels [6] are reported. In [5], however, DTI is present not only along the periphery of each pixel, but also invades into the pixel so that it is inefficient in terms of gathering incident light and providing sufficient amount of photodiode area. In [6], the pixel size is as large as 2.0μm and it is hard to scale down with this technology for near 1.0μm pitch because DTI width imposes a critical limit on the sufficient amount of photodiode area for full-well capacity. Thus, a new technological advance is necessary to realize the ideal front DTI in a small size pixel near 1.0μm.",
"title": ""
},
{
"docid": "neg:1840580_7",
"text": "This paper presents two chatbot systems, ALICE and Elizabeth, illustrating the dialogue knowledge representation and pattern matching techniques of each. We discuss the problems which arise when using the Dialogue Diversity Corpus to retrain a chatbot system with human dialogue examples. A Java program to convert from dialog transcript to AIML format provides a basic implementation of corpusbased chatbot training.. We conclude that dialogue researchers should adopt clearer standards for transcription and markup format in dialogue corpora to be used in training a chatbot system more effectively.",
"title": ""
},
{
"docid": "neg:1840580_8",
"text": "Reasoning about entities and their relationships from multimodal data is a key goal of Artificial General Intelligence. The visual question answering (VQA) problem is an excellent way to test such reasoning capabilities of an AI model and its multimodal representation learning. However, the current VQA models are oversimplified deep neural networks, comprised of a long short-term memory (LSTM) unit for question comprehension and a convolutional neural network (CNN) for learning single image representation. We argue that the single visual representation contains a limited and general information about the image contents and thus limits the model reasoning capabilities. In this work we introduce a modular neural network model that learns a multimodal and multifaceted representation of the image and the question. The proposed model learns to use the multimodal representation to reason about the image entities and achieves a new state-of-the-art performance on both VQA benchmark datasets, VQA v1.0 and v2.0, by a wide margin.",
"title": ""
},
{
"docid": "neg:1840580_9",
"text": "The Bulgarian electricity market rules require from the transmission system operator, to procure electricity for covering transmission grid losses on hourly base before day-ahead gate closure. In this paper is presented a software solution for day-ahead forecasting of hourly transmission losses that is based on statistical approach of the impacting factors correlations and uses as inputs numerical weather predictions.",
"title": ""
},
{
"docid": "neg:1840580_10",
"text": "In this paper, we survey the techniques for image-based rendering. Unlike traditional 3D computer graphics in which 3D geometry of the scene is known, image-based rendering techniques render novel views directly from input images. Previous image-based rendering techniques can be classified into three categories according to how much geometric information is used: rendering without geometry, rendering with implicit geometry (i.e., correspondence), and rendering with explicit geometry (either with approximate or accurate geometry). We discuss the characteristics of these categories and their representative methods. The continuum between images and geometry used in image-based rendering techniques suggests that image-based rendering with traditional 3D graphics can be united in a joint image and geometry space.",
"title": ""
},
{
"docid": "neg:1840580_11",
"text": "With the adoption of power electronic converters in shipboard power systems and associated novel fault management concepts, the ability to isolate electric faults quickly from the power system is becoming more important than breaking high magnitude fault currents and the corresponding arcing between opening contacts within a switch. This allows for the design of substantially faster, as well as potentially lighter and more compact, mechanical disconnect switches. Herein, we are proposing a new class of mechanical disconnect switches that utilize piezoelectric actuators to isolate within less than one millisecond. This technology may become a key enabler for future all-electric ships.",
"title": ""
},
{
"docid": "neg:1840580_12",
"text": "This paper introduces a positioning system for walking persons, called \"Personal Dead-reckoning\" (PDR) system. The PDR system does not require GPS, beacons, or landmarks. The system is therefore useful in GPS-denied environments, such as inside buildings, tunnels, or dense forests. Potential users of the system are military and security personnel as well as emergency responders. The PDR system uses a 6-DOF inertial measurement unit (IMU) attached to the user's boot. The IMU provides rate-of-rotation and acceleration measurements that are used in real-time to estimate the location of the user relative to a known starting point. In order to reduce the most significant errors of this IMU-based system-caused by the bias drift of the accelerometers-we implemented a technique known as \"Zero Velocity Update\" (ZUPT). With the ZUPT technique and related signal processing algorithms, typical errors of our system are about 2% of distance traveled for short walks. This typical PDR system error is largely independent of the gait or speed of the user. When walking continuously for several minutes, the error increases gradually beyond 2%. The PDR system works in both 2-dimensional (2-D) and 3-D environments, although errors in Z-direction are usually larger than 2% of distance traveled. Earlier versions of our system used an unpractically large IMU. In the most recent version we implemented a much smaller IMU. This paper discussed specific problems of this small IMU, our measures for eliminating these problems, and our first experimental results with the small IMU under different conditions.",
"title": ""
},
{
"docid": "neg:1840580_13",
"text": "In this paper, we analyze several neural network designs (and their variations) for sentence pair modeling and compare their performance extensively across eight datasets, including paraphrase identification, semantic textual similarity, natural language inference, and question answering tasks. Although most of these models have claimed state-of-the-art performance, the original papers often reported on only one or two selected datasets. We provide a systematic study and show that (i) encoding contextual information by LSTM and inter-sentence interactions are critical, (ii) Tree-LSTM does not help as much as previously claimed but surprisingly improves performance on Twitter datasets, (iii) the Enhanced Sequential Inference Model (Chen et al., 2017) is the best so far for larger datasets, while the Pairwise Word Interaction Model (He and Lin, 2016) achieves the best performance when less data is available. We release our implementations as an open-source toolkit.",
"title": ""
},
{
"docid": "neg:1840580_14",
"text": "For three-dimensional (3D) ultrasound imaging, connecting elements of a two-dimensional (2D) transducer array to the imaging system's front-end electronics is a challenge because of the large number of array elements and the small element size. To compactly connect the transducer array with electronics, we flip-chip bond a 2D 16 times 16-element capacitive micromachined ultrasonic transducer (CMUT) array to a custom-designed integrated circuit (IC). Through-wafer interconnects are used to connect the CMUT elements on the top side of the array with flip-chip bond pads on the back side. The IC provides a 25-V pulser and a transimpedance preamplifier to each element of the array. For each of three characterized devices, the element yield is excellent (99 to 100% of the elements are functional). Center frequencies range from 2.6 MHz to 5.1 MHz. For pulse-echo operation, the average -6-dB fractional bandwidth is as high as 125%. Transmit pressures normalized to the face of the transducer are as high as 339 kPa and input-referred receiver noise is typically 1.2 to 2.1 rnPa/ radicHz. The flip-chip bonded devices were used to acquire 3D synthetic aperture images of a wire-target phantom. Combining the transducer array and IC, as shown in this paper, allows for better utilization of large arrays, improves receive sensitivity, and may lead to new imaging techniques that depend on transducer arrays that are closely coupled to IC electronics.",
"title": ""
},
{
"docid": "neg:1840580_15",
"text": "Machine learning based system are increasingly being used for sensitive tasks such as security surveillance, guiding autonomous vehicle, taking investment decisions, detecting and blocking network intrusion and malware etc. However, recent research has shown that machine learning models are venerable to attacks by adversaries at all phases of machine learning (e.g., training data collection, training, operation). All model classes of machine learning systems can be misled by providing carefully crafted inputs making them wrongly classify inputs. Maliciously created input samples can affect the learning process of a ML system by either slowing the learning process, or affecting the performance of the learned model or causing the system make error only in attacker’s planned scenario. Because of these developments, understanding security of machine learning algorithms and systems is emerging as an important research area among computer security and machine learning researchers and practitioners. We present a survey of this emerging area.",
"title": ""
},
{
"docid": "neg:1840580_16",
"text": "Objective In symptom-dependent diseases such as functional dyspepsia (FD), matching the pattern of epigastric symptoms, including severity, kind, and perception site, between patients and physicians is critical. Additionally, a comprehensive examination of the stomach, duodenum, and pancreas is important for evaluating the origin of such symptoms. Methods FD-specific symptoms (epigastric pain, epigastric burning, early satiety, and postprandial fullness) and other symptoms (regurgitation, nausea, belching, and abdominal bloating) as well as the perception site of the above symptoms were investigated in healthy subjects using a new questionnaire with an illustration of the human body. A total of 114 patients with treatment-resistant dyspeptic symptoms were evaluated for their pancreatic exocrine function using N-benzoyl-L-tyrosyl-p-aminobenzoic acid. Results A total of 323 subjects (men:women, 216:107; mean age, 52.1 years old) were initially enrolled. Most of the subjects felt the FD-specific symptoms at the epigastrium, while about 20% felt them at other abdominal sites. About 30% of expressed as epigastric symptoms were FD-nonspecific symptoms. At the epigastrium, epigastric pain and epigastric burning were mainly felt at the upper part, and postprandial fullness and early satiety were felt at the lower part. The prevalence of patients with pancreatic exocrine dysfunction was 71% in the postprandial fullness group, 68% in the epigastric pain group, and 82% in the diarrhea group. Conclusion We observed mismatch in the perception site and expression between the epigastric symptoms of healthy subjects and FD-specific symptoms. Postprandial symptoms were often felt at the lower part of the epigastrium, and pancreatic exocrine dysfunction may be involved in the FD symptoms, especially for treatment-resistant dyspepsia patients.",
"title": ""
},
{
"docid": "neg:1840580_17",
"text": "We present a TTS neural network that is able to produce speech in multiple languages. The proposed network is able to transfer a voice, which was presented as a sample in a source language, into one of several target languages. Training is done without using matching or parallel data, i.e., without samples of the same speaker in multiple languages, making the method much more applicable. The conversion is based on learning a polyglot network that has multiple perlanguage sub-networks and adding loss terms that preserve the speaker’s identity in multiple languages. We evaluate the proposed polyglot neural network for three languages with a total of more than 400 speakers and demonstrate convincing conversion capabilities.",
"title": ""
},
{
"docid": "neg:1840580_18",
"text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster than negligible performance loss.",
"title": ""
},
{
"docid": "neg:1840580_19",
"text": "This paper addresses the large-scale visual font recognition (VFR) problem, which aims at automatic identification of the typeface, weight, and slope of the text in an image or photo without any knowledge of content. Although visual font recognition has many practical applications, it has largely been neglected by the vision community. To address the VFR problem, we construct a large-scale dataset containing 2,420 font classes, which easily exceeds the scale of most image categorization datasets in computer vision. As font recognition is inherently dynamic and open-ended, i.e., new classes and data for existing categories are constantly added to the database over time, we propose a scalable solution based on the nearest class mean classifier (NCM). The core algorithm is built on local feature embedding, local feature metric learning and max-margin template selection, which is naturally amenable to NCM and thus to such open-ended classification problems. The new algorithm can generalize to new classes and new data at little added cost. Extensive experiments demonstrate that our approach is very effective on our synthetic test images, and achieves promising results on real world test images.",
"title": ""
}
] |
1840581 | Large scale tissue histopathology image classification, segmentation, and visualization via deep convolutional activation features | [
{
"docid": "pos:1840581_0",
"text": "Many modern visual recognition algorithms incorporate a step of spatial ‘pooling’, where the outputs of several nearby feature detectors are combined into a local or global ‘bag of features’, in a way that preserves task-related information while removing irrelevant details. Pooling is used to achieve invariance to image transformations, more compact representations, and better robustness to noise and clutter. Several papers have shown that the details of the pooling operation can greatly influence the performance, but studies have so far been purely empirical. In this paper, we show that the reasons underlying the performance of various pooling methods are obscured by several confounding factors, such as the link between the sample cardinality in a spatial pool and the resolution at which low-level features have been extracted. We provide a detailed theoretical analysis of max pooling and average pooling, and give extensive empirical comparisons for object recognition tasks.",
"title": ""
}
] | [
{
"docid": "neg:1840581_0",
"text": "Mul-T is a parallel Lisp system, based on Multilisp's future construct, that has been developed to run on an Encore Multimax multiprocessor. Mul-T is an extended version of the Yale T system and uses the T system's ORBIT compiler to achieve “production quality” performance on stock hardware — about 100 times faster than Multilisp. Mul-T shows that futures can be implemented cheaply enough to be useful in a production-quality system. Mul-T is fully operational, including a user interface that supports managing groups of parallel tasks.",
"title": ""
},
{
"docid": "neg:1840581_1",
"text": "Given a graph <i>G</i> = (<i>V,E</i>) and positive integral vertex weights <i>w</i> : <i>V</i> → N, the <i>max-coloring problem</i> seeks to find a proper vertex coloring of <i>G</i> whose color classes <i>C</i><inf>1,</inf> <i>C</i><inf>2,</inf>...,<i>C</i><inf><i>k</i></inf>, minimize Σ<sup><i>k</i></sup><inf><i>i</i> = 1</inf> <i>max</i><inf>ν∈<i>C</i><inf>i</inf></inf><i>w</i>(ν). This problem, restricted to interval graphs, arises whenever there is a need to design dedicated memory managers that provide better performance than the general purpose memory management of the operating system. Specifically, companies have tried to solve this problem in the design of memory managers for wireless protocol stacks such as GPRS or 3G.Though this problem seems similar to the wellknown dynamic storage allocation problem, we point out fundamental differences. We make a connection between max-coloring and on-line graph coloring and use this to devise a simple 2-approximation algorithm for max-coloring on interval graphs. We also show that a simple first-fit strategy, that is a natural choice for this problem, yields a 10-approximation algorithm. We show this result by proving that the first-fit algorithm for on-line coloring an interval graph <i>G</i> uses no more than 10.<i>x</i>(<i>G</i>) colors, significantly improving the bound of 26.<i>x</i>(<i>G</i>) by Kierstead and Qin (<i>Discrete Math.</i>, 144, 1995). We also show that the max-coloring problem is NP-hard.",
"title": ""
},
{
"docid": "neg:1840581_2",
"text": "Partial discharge (PD) detection is an effective method for finding insulation defects in HV and EHV power cables. PD apparent charge is typically expressed in picocoulombs (pC) when the calibration procedure defined in IEC 60270 is applied during off-line tests. During on-line PD detection, measured signals are usually denoted in mV or dB without transforming the measured signal into a charge quantity. For AC XLPE power cable systems, on-line PD detection is conducted primarily with the use of high frequency current transformer (HFCT). The HFCT is clamped around the cross-bonding link of the joint or the grounding wire of termination. In on-line occasion, PD calibration is impossible from the termination. A novel on-line calibration method using HFCT is introduced in this paper. To eliminate the influence of cross-bonding links, the interrupted cable sheath at the joint was reconnected via the high-pass C-arm connector. The calibration signal was injected into the cable system via inductive coupling through the cable sheath. The distributed transmission line equivalent circuit of the cable was used in consideration of the signal attenuation. Both the conventional terminal calibration method and the proposed on-line calibration method were performed on the coaxial cable model loop for experimental verification. The amplitude and polarity of signals that propagate in the cable sheath and the conductor were evaluated. The results indicate that the proposed method can calibrate the measured signal during power cable on-line PD detection.",
"title": ""
},
{
"docid": "neg:1840581_3",
"text": "Although speech dysfluencies have been hypothesized to be associated with abnormal function of dopaminergic system, the effects of dopaminergic medication on speech fluency in Parkinson’s disease (PD) have not been systematically studied. The aim of the present study was, therefore, to investigate the long-term effect of dopaminergic medication on speech fluency in PD. Fourteen de novo PD patients with no history of developmental stuttering and 14 age- and sex-matched healthy controls (HC) were recruited. PD subjects were examined three times; before the initiation of dopaminergic treatment and twice in following 6 years. The percentage of dysfluent words was calculated from reading passage and monolog. The amount of medication was expressed by cumulative doses of l-dopa equivalent. After 3–6 years of dopaminergic therapy, PD patients exhibited significantly more dysfluent events compared to healthy subjects as well as to their own speech performance before the introduction of dopaminergic therapy (p < 0.05). In addition, we found a strong positive correlation between the increased occurrence of dysfluent words and the total cumulative dose of l-dopa equivalent (r = 0.75, p = 0.002). Our findings indicate an adverse effect of prolonged dopaminergic therapy contributing to the development of stuttering-like dysfluencies in PD. These findings may have important implication in clinical practice, where speech fluency should be taken into account to optimize dopaminergic therapy.",
"title": ""
},
{
"docid": "neg:1840581_4",
"text": "Generative Adversarial Networks (GANs) and their extensions have carved open many exciting ways to tackle well known and challenging medical image analysis problems such as medical image denoising, reconstruction, segmentation, data simulation, detection or classification. Furthermore, their ability to synthesize images at unprecedented levels of realism also gives hope that the chronic scarcity of labeled data in the medical field can be resolved with the help of these generative models. In this review paper, a broad overview of recent literature on GANs for medical applications is given, the shortcomings and opportunities of the proposed methods are thoroughly discussed and potential future work is elaborated. A total of 63 papers published until end of July 2018 are reviewed. For quick access, the papers and important details such as the underlying method, datasets and performance are summarized in tables.",
"title": ""
},
{
"docid": "neg:1840581_5",
"text": "Emotions involve physiological responses that are regulated by the brain. The present paper reviews the empirical literature on central nervous system (CNS) and autonomic nervous system (ANS) concomitants of emotional states, with a focus on studies that simultaneously assessed CNS and ANS activity. The reviewed data support two primary conclusions: (1) numerous cortical and subcortical regions show co-occurring activity with ANS responses in emotion, and (2) there may be reversed asymmetries on cortical and subcortical levels with respect to CNS/ANS interrelations. These observations are interpreted in terms of a model of neurovisceral integration in emotion, and directions for future research are presented.",
"title": ""
},
{
"docid": "neg:1840581_6",
"text": "In this paper the Model Predictive Control (MPC) strategy is used to solve the mobile robot trajectory tracking problem, where controller must ensure that robot follows pre-calculated trajectory. The so-called explicit optimal controller design and implementation are described. The MPC solution is calculated off-line and expressed as a piecewise affine function of the current state of a mobile robot. A linearized kinematic model of a differential drive mobile robot is used for the controller design purpose. The optimal controller, which has a form of a look-up table, is tested in simulation and experimentally.",
"title": ""
},
{
"docid": "neg:1840581_7",
"text": "Drones equipped with cameras are emerging as a powerful tool for large-scale aerial 3D scanning, but existing automatic flight planners do not exploit all available information about the scene, and can therefore produce inaccurate and incomplete 3D models. We present an automatic method to generate drone trajectories, such that the imagery acquired during the flight will later produce a high-fidelity 3D model. Our method uses a coarse estimate of the scene geometry to plan camera trajectories that: (1) cover the scene as thoroughly as possible; (2) encourage observations of scene geometry from a diverse set of viewing angles; (3) avoid obstacles; and (4) respect a user-specified flight time budget. Our method relies on a mathematical model of scene coverage that exhibits an intuitive diminishing returns property known as submodularity. We leverage this property extensively to design a trajectory planning algorithm that reasons globally about the non-additive coverage reward obtained across a trajectory, jointly with the cost of traveling between views. We evaluate our method by using it to scan three large outdoor scenes, and we perform a quantitative evaluation using a photorealistic video game simulator.",
"title": ""
},
{
"docid": "neg:1840581_8",
"text": "Data mining system contain large amount of private and sensitive data such as healthcare, financial and criminal records. These private and sensitive data can not be share to every one, so privacy protection of data is required in data mining system for avoiding privacy leakage of data. Data perturbation is one of the best methods for privacy preserving. We used data perturbation method for preserving privacy as well as accuracy. In this method individual data value are distorted before data mining application. In this paper we present min max normalization transformation based data perturbation. The privacy parameters are used for measurement of privacy protection and the utility measure shows the performance of data mining technique after data distortion. We performed experiment on real life dataset and the result show that min max normalization transformation based data perturbation method is effective to protect confidential information and also maintain the performance of data mining technique after data distortion.",
"title": ""
},
{
"docid": "neg:1840581_9",
"text": "Median filtering technique is often used to remove additive white, salt and pepper noise from a signal or a source image. This filtering method is essential for the processing of digital data representing analog signals in real time. The median filter considers each pixel in the image in turn and looks at its nearby neighbors to determine whether or not it is representative of its surroundings. It replaces the pixel value with the median of neighboring pixel values. The median is calculated by first sorting all the pixel values from the surrounding neighborhood into numerical order and then replacing the pixel being considered with the middle pixel value. We have used graphics processing units (GPUs) to implement the post-processing, performed by NVIDIA Compute Unified Device Architecture (CUDA). Such a system is faster than the CPU version, or other traditional computing, for processing medical applications such as echography or Doppler. This paper shows the effect of the Median Filtering and a comparison of the performance of the CPU and GPU in terms of response time.",
"title": ""
},
{
"docid": "neg:1840581_10",
"text": "This article presents a Hoare-style calculus for a substantial subset of Java Card, which we call Java . In particular, the language includes side-effecting expressions, mutual recursion, dynamic method binding, full exception handling, and static class initialization. The Hoare logic of partial correctness is proved not only sound (w.r.t. our operational semantics of Java, described in detail elsewhere) but even complete. It is the first logic for an object-oriented language that is provably complete. The completeness proof uses a refinement of the Most General Formula approach. The proof of soundness gives new insights into the role of type safety. Further by-products of this work are a new general methodology for handling side-effecting expressions and their results, the discovery of the strongest possible rule of consequence, and a flexible Call rule for mutual recursion. We also give a small but non-trivial application example. All definitions and proofs have been done formally with the interactive theorem prover Isabelle/HOL. This guarantees not only rigorous definitions, but also gives maximal confidence in the results obtained.",
"title": ""
},
{
"docid": "neg:1840581_11",
"text": "Microcycle conidiation is a survival mechanism of fungi encountering unfavorable conditions. In this phenomenon, asexual spores germinate secondary spores directly without formation of mycelium. As Penicillium camemberti conidia have the ability to produce conidiophores after germination in liquid culture induced by a thermal stress (18 and 30 °C), our work has aimed at producing conidia through this mean. Incubation at 18 and 30 °C increased the swelling of conidia and their proportion thereby producing conidiophores. Our results showed that the microcycle of conidiation can produce 5 × 108 conidia ml−1 after 7 days at 18 °C of culture. The activity of these conidia was checked through culture on a solid medium. Conidia produced by microcycle conidiation formed a normal mycelium on the surface of solid media and 25 % could still germinate after 5 months of storage.",
"title": ""
},
{
"docid": "neg:1840581_12",
"text": "In the last few years, obfuscation has been used more and more by spammers to make spam emails bypass filters. The standard method is to use images that look like text, since typical spam filters are unable to parse such messages; this is what is used in so-called \"rock phishing\". To fight image-based spam, many spam filters use heuristic rules in which emails containing images are flagged, and since not many legit emails are composed mainly of a big image, this aids in detecting image-based spam. The spammers are thus interested in circumventing these methods. Unicode transliteration is a convenient tool for spammers, since it allows a spammer to create a large number of homomorphic clones of the same looking message; since Unicode contains many characters that are unique but appear very similar, spammers can translate a message's characters at random to hide black-listed words in an effort to bypass filters. In order to defend against these unicode-obfuscated spam emails, we developed a prototype tool that can be used with Spam Assassin to block spam obfuscated in this way by mapping polymorphic messages to a common, more homogeneous representation. This representation can then be filtered using traditional methods. We demonstrate the ease with which Unicode polymorphism can be used to circumvent spam filters such as SpamAssassin, and then describe a de-obfuscation technique that can be used to catch messages that have been obfuscated in this fashion.",
"title": ""
},
{
"docid": "neg:1840581_13",
"text": "This report describes the algorithms implemented in a Matlab toolbox for change detection and data segmentation. Functions are provided for simulating changes, choosing design parameters and detecting abrupt changes in signals.",
"title": ""
},
{
"docid": "neg:1840581_14",
"text": "This paper proposes efficient algorithms for group sparse optimization with mixed `2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity will lead to better signal recovery/feature selection. The `2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional `1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the `2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.",
"title": ""
},
{
"docid": "neg:1840581_15",
"text": "In 2004, the US Center for Disease Control (CDC) published a paper showing that there is no link between the age at which a child is vaccinated with MMR and the vaccinated children's risk of a subsequent diagnosis of autism. One of the authors, William Thompson, has now revealed that statistically significant information was deliberately omitted from the paper. Thompson first told Dr S Hooker, a researcher on autism, about the manipulation of the data. Hooker analysed the raw data from the CDC study afresh. He confirmed that the risk of autism among African American children vaccinated before the age of 2 years was 340% that of those vaccinated later.",
"title": ""
},
{
"docid": "neg:1840581_16",
"text": "Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence.",
"title": ""
},
{
"docid": "neg:1840581_17",
"text": "Living with unrelenting pain (chronic pain) is maladaptive and is thought to be associated with physiological and psychological modifications, yet there is a lack of knowledge regarding brain elements involved in such conditions. Here, we identify brain regions involved in spontaneous pain of chronic back pain (CBP) in two separate groups of patients (n = 13 and n = 11), and contrast brain activity between spontaneous pain and thermal pain (CBP and healthy subjects, n = 11 each). Continuous ratings of fluctuations of spontaneous pain during functional magnetic resonance imaging were separated into two components: high sustained pain and increasing pain. Sustained high pain of CBP resulted in increased activity in the medial prefrontal cortex (mPFC; including rostral anterior cingulate). This mPFC activity was strongly related to intensity of CBP, and the region is known to be involved in negative emotions, response conflict, and detection of unfavorable outcomes, especially in relation to the self. In contrast, the increasing phase of CBP transiently activated brain regions commonly observed for acute pain, best exemplified by the insula, which tightly reflected duration of CBP. When spontaneous pain of CBP was contrasted to thermal stimulation, we observe a double-dissociation between mPFC and insula with the former correlating only to intensity of spontaneous pain and the latter correlating only to pain intensity for thermal stimulation. These findings suggest that subjective spontaneous pain of CBP involves specific spatiotemporal neuronal mechanisms, distinct from those observed for acute experimental pain, implicating a salient role for emotional brain concerning the self.",
"title": ""
},
{
"docid": "neg:1840581_18",
"text": "Given a text description, most existing semantic parsers synthesize a program in one shot. However, it is quite challenging to produce a correct program solely based on the description, which in reality is often ambiguous or incomplete. In this paper, we investigate interactive semantic parsing, where the agent can ask the user clarification questions to resolve ambiguities via a multi-turn dialogue, on an important type of programs called “If-Then recipes.” We develop a hierarchical reinforcement learning (HRL) based agent that significantly improves the parsing performance with minimal questions to the user. Results under both simulation and human evaluation show that our agent substantially outperforms non-interactive semantic parsers and rule-based agents.",
"title": ""
},
{
"docid": "neg:1840581_19",
"text": "Do informational deficits on the part of voters sustain poor quality of governance in low income countries? We provide experimental evidence on the role of public disclosures on candidate quality and incumbent performance in enhancing electoral accountability. Slum dwellers who were randomly exposed to newspaper report cards on politician performance responded by increasing turnout and rewarding incumbents who spent more in slums and attended fair price shop oversight committee meetings. We also find evidence of yardstick competition – incumbent’s vote share is sensitive to the wealth and education qualifications of his challengers. ∗The authors are from MIT (Banerjee), Yale (Kumar) and Harvard University (Pande and Su). We thank our partners Satark Nagrik Sangathan, Delhi NGO Network and Hindustan times and especially Anjali Bharadwaj, Amrita Johri and Mrinal Pande for enabling this study and Shobhini Mukherji for providing field oversight.We also thank Hewlett Foundation for financial support, and Tim Besley and Esther Duflo for helpful comments.",
"title": ""
}
] |
1840582 | An Analysis Matrix for the Assessment of Smart City Technologies: Main Results of Its Application | [
{
"docid": "pos:1840582_0",
"text": "The current digital revolution has ignited the evolution of communications grids and the development of new schemes for productive systems. Traditional technologic scenarios have been challenged, and Smart Cities have become the basis for urban competitiveness. The citizen is the one who has the power to set new scenarios, and that is why a definition of the way people interact with their cities is needed, as is commented in the first part of the article. At the same time, a lack of clarity has been detected in the way of describing what Smart Cities are, and the second part will try to set the basis for that. For all before, the information and communication technologies that manage and transform 21st century cities must be reviewed, analyzing their impact on new social behaviors that shape the spaces and means of communication, as is posed in the experimental section, setting the basis for an analysis matrix to score the different elements that affect a Smart City environment. So, as the better way to evaluate what a Smart City is, there is a need for a tool to score the different technologies on the basis of their usefulness and consequences, considering the impact of each application. For all of that, the final section describes the main objective of this article in practical scenarios, considering how the technologies are used by citizens, who must be the main concern of all urban development.",
"title": ""
}
] | [
{
"docid": "neg:1840582_0",
"text": "Nostalgia fulfills pivotal functions for individuals, but lacks an empirically derived and comprehensive definition. We examined lay conceptions of nostalgia using a prototype approach. In Study 1, participants generated open-ended features of nostalgia, which were coded into categories. In Study 2, participants rated the centrality of these categories, which were subsequently classified as central (e.g., memories, relationships, happiness) or peripheral (e.g., daydreaming, regret, loneliness). Central (as compared with peripheral) features were more often recalled and falsely recognized (Study 3), were classified more quickly (Study 4), were judged to reflect more nostalgia in a vignette (Study 5), better characterized participants' own nostalgic (vs. ordinary) experiences (Study 6), and prompted higher levels of actual nostalgia and its intrapersonal benefits when used to trigger a personal memory, regardless of age (Study 7). These findings highlight that lay people view nostalgia as a self-relevant and social blended emotional and cognitive state, featuring a mixture of happiness and loss. The findings also aid understanding of nostalgia's functions and identify new methods for future research.",
"title": ""
},
{
"docid": "neg:1840582_1",
"text": "Despite significant progress in image-based 3D scene flow estimation, the performance of such approaches has not yet reached the fidelity required by many applications. Simultaneously, these applications are often not restricted to image-based estimation: laser scanners provide a popular alternative to traditional cameras, for example in the context of self-driving cars, as they directly yield a 3D point cloud. In this paper, we propose to estimate 3D motion from such unstructured point clouds using a deep neural network. In a single forward pass, our model jointly predicts 3D scene flow as well as the 3D bounding box and rigid body motion of objects in the scene. While the prospect of estimating 3D scene flow from unstructured point clouds is promising, it is also a challenging task. We show that the traditional global representation of rigid body motion prohibits inference by CNNs, and propose a translation equivariant representation to circumvent this problem. For training our deep network, a large dataset is required. Because of this, we augment real scans from KITTI with virtual objects, realistically modeling occlusions and simulating sensor noise. A thorough comparison with classic and learning-based techniques highlights the robustness of the proposed approach.",
"title": ""
},
{
"docid": "neg:1840582_2",
"text": "INTRODUCTION People make judgments about the world around them. They harbor positive and negative attitudes about people, organizations, places, events, and ideas. We regard these types of attitudes as sentiments. Sentiments are private states,1 cognitive phenomena that are not directly observable by others. However, expressions of sentiment can be manifested in actions, including written and spoken language. Sentiment analysis is the study of automated techniques for extracting sentiment from written language. This has been a very active area entiment analysis—the automated extraction of expressions of positive or negative attitudes from text—has received considerable attention from researchers during the past 10 years. During the same period, the widespread growth of social media has resulted in an explosion of publicly available, user-generated text on the World Wide Web. These data can potentially be utilized to provide real-time insights into the aggregated sentiments of people. The tools provided by statistical natural language processing and machine learning, along with exciting new scalable approaches to working with large volumes of text, make it possible to begin extracting sentiments from the web. We discuss some of the challenges of sentiment extraction and some of the approaches employed to address these challenges. In particular, we describe work we have done to annotate sentiment in blogs at the levels of sentences and subsentences (clauses); to classify subjectivity at the level of sentences; and to identify the targets, or topics, of sentiment at the level of clauses.",
"title": ""
},
{
"docid": "neg:1840582_3",
"text": "Because of the complexity of the hospital environment, there exist a lot of medical information systems from different vendors with incompatible structures. In order to establish an enterprise hospital information system, the integration among these heterogeneous systems must be considered. Complete integration should cover three aspects: data integration, function integration and workflow integration. However most of the previous design of architecture did not accomplish such a complete integration. This article offers an architecture design of the enterprise hospital information system based on the concept of digital neural network system in hospital. It covers all three aspects of integration, and eventually achieves the target of one virtual data center with Enterprise Viewer for users of different roles. The initial implementation of the architecture in the 5-year Digital Hospital Project in Huzhou Central hospital of Zhejiang Province is also described",
"title": ""
},
{
"docid": "neg:1840582_4",
"text": "Pancreatic cancer has one of the worst survival rates amongst all forms of cancer because its symptoms manifest later into the progression of the disease. One of those symptoms is jaundice, the yellow discoloration of the skin and sclera due to the buildup of bilirubin in the blood. Jaundice is only recognizable to the naked eye in severe stages, but a ubiquitous test using computer vision and machine learning can detect milder forms of jaundice. We propose BiliScreen, a smartphone app that captures pictures of the eye and produces an estimate of a person's bilirubin level, even at levels normally undetectable by the human eye. We test two low-cost accessories that reduce the effects of external lighting: (1) a 3D-printed box that controls the eyes' exposure to light and (2) paper glasses with colored squares for calibration. In a 70-person clinical study, we found that BiliScreen with the box achieves a Pearson correlation coefficient of 0.89 and a mean error of -0.09 ± 2.76 mg/dl in predicting a person's bilirubin level. As a screening tool, BiliScreen identifies cases of concern with a sensitivity of 89.7% and a specificity of 96.8% with the box accessory.",
"title": ""
},
{
"docid": "neg:1840582_5",
"text": "Randomization in randomized controlled trials involves more than generation of a random sequence by which to assign subjects. For randomization to be successfully implemented, the randomization sequence must be adequately protected (concealed) so that investigators, involved health care providers, and subjects are not aware of the upcoming assignment. The absence of adequate allocation concealment can lead to selection bias, one of the very problems that randomization was supposed to eliminate. Authors of reports of randomized trials should provide enough details on how allocation concealment was achieved so the reader can determine the likelihood of success. Fortunately, a plan of allocation concealment can always be incorporated into the design of a randomized trial. Certain methods minimize the risk of concealment failing more than others. Keeping knowledge of subjects' assignment after allocation from subjects, investigators/health care providers, or those assessing outcomes is referred to as masking (also known as blinding). The goal of masking is to prevent ascertainment bias. In contrast to allocation concealment, masking cannot always be incorporated into a randomized controlled trial. Both allocation concealment and masking add to the elimination of bias in randomized controlled trials.",
"title": ""
},
{
"docid": "neg:1840582_6",
"text": "Recent years have seen a proliferation of complex Advanced Driver Assistance Systems (ADAS), in particular, for use in autonomous cars. These systems consist of sensors and cameras as well as image processing and decision support software components. They are meant to help drivers by providing proper warnings or by preventing dangerous situations. In this paper, we focus on the problem of design time testing of ADAS in a simulated environment. We provide a testing approach for ADAS by combining multi-objective search with surrogate models developed based on neural networks. We use multi-objective search to guide testing towards the most critical behaviors of ADAS. Surrogate modeling enables our testing approach to explore a larger part of the input search space within limited computational resources. We characterize the condition under which the multi-objective search algorithm behaves the same with and without surrogate modeling, thus showing the accuracy of our approach. We evaluate our approach by applying it to an industrial ADAS system. Our experiment shows that our approach automatically identifies test cases indicating critical ADAS behaviors. Further, we show that combining our search algorithm with surrogate modeling improves the quality of the generated test cases, especially under tight and realistic computational resources.",
"title": ""
},
{
"docid": "neg:1840582_7",
"text": "BACKGROUD\nWith the advent of modern computing methods, modeling trial-to-trial variability in biophysical recordings including electroencephalography (EEG) has become of increasingly interest. Yet no widely used method exists for comparing variability in ordered collections of single-trial data epochs across conditions and subjects.\n\n\nNEW METHOD\nWe have developed a method based on an ERP-image visualization tool in which potential, spectral power, or some other measure at each time point in a set of event-related single-trial data epochs are represented as color coded horizontal lines that are then stacked to form a 2-D colored image. Moving-window smoothing across trial epochs can make otherwise hidden event-related features in the data more perceptible. Stacking trials in different orders, for example ordered by subject reaction time, by context-related information such as inter-stimulus interval, or some other characteristic of the data (e.g., latency-window mean power or phase of some EEG source) can reveal aspects of the multifold complexities of trial-to-trial EEG data variability.\n\n\nRESULTS\nThis study demonstrates new methods for computing and visualizing 'grand' ERP-image plots across subjects and for performing robust statistical testing on the resulting images. These methods have been implemented and made freely available in the EEGLAB signal-processing environment that we maintain and distribute.",
"title": ""
},
{
"docid": "neg:1840582_8",
"text": "Research on brain–machine interfaces has been ongoing for at least a decade. During this period, simultaneous recordings of the extracellular electrical activity of hundreds of individual neurons have been used for direct, real-time control of various artificial devices. Brain–machine interfaces have also added greatly to our knowledge of the fundamental physiological principles governing the operation of large neural ensembles. Further understanding of these principles is likely to have a key role in the future development of neuroprosthetics for restoring mobility in severely paralysed patients.",
"title": ""
},
{
"docid": "neg:1840582_9",
"text": "Cyber-physical systems tightly integrate physical processes and information and communication technologies. As today’s critical infrastructures, e.g., the power grid or water distribution networks, are complex cyber-physical systems, ensuring their safety and security becomes of paramount importance. Traditional safety analysis methods, such as HAZOP, are ill-suited to assess these systems. Furthermore, cybersecurity vulnerabilities are often not considered critical, because their effects on the physical processes are not fully understood. In this work, we present STPA-SafeSec, a novel analysis methodology for both safety and security. Its results show the dependencies between cybersecurity vulnerabilities and system safety. Using this information, the most effective mitigation strategies to ensure safety and security of the system can be readily identified. We apply STPA-SafeSec to a use case in the power grid domain, and highlight",
"title": ""
},
{
"docid": "neg:1840582_10",
"text": "The deep learning community has proposed optimizations spanning hardware, software, and learning theory to improve the computational performance of deep learning workloads. While some of these optimizations perform the same operations faster (e.g., switching from a NVIDIA K80 to P100), many modify the semantics of the training procedure (e.g., large minibatch training, reduced precision), which can impact a model’s generalization ability. Due to a lack of standard evaluation criteria that considers these trade-offs, it has become increasingly difficult to compare these different advances. To address this shortcoming, DAWNBENCH and the upcoming MLPERF benchmarks use time-to-accuracy as the primary metric for evaluation, with the accuracy threshold set close to state-of-the-art and measured on a held-out dataset not used in training; the goal is to train to this accuracy threshold as fast as possible. In DAWNBENCH, the winning entries improved time-to-accuracy on ImageNet by two orders of magnitude over the seed entries. Despite this progress, it is unclear how sensitive time-to-accuracy is to the chosen threshold as well as the variance between independent training runs, and how well models optimized for time-to-accuracy generalize. In this paper, we provide evidence to suggest that time-to-accuracy has a low coefficient of variance and that the models tuned for it generalize nearly as well as pre-trained models. We additionally analyze the winning entries to understand the source of these speedups, and give recommendations for future benchmarking efforts.",
"title": ""
},
{
"docid": "neg:1840582_11",
"text": "The present work describes a website designed for remote teaching of optical measurements using lasers. It enables senior undergraduate and postgraduate students to learn theoretical aspects of the subject and also have a means to perform experiments for better understanding of the application at hand. At this stage of web development, optical methods considered are those based on refractive index changes in the material medium. The website is specially designed in order to provide remote access of expensive lasers, cameras, and other laboratory instruments by employing a commercially available web browser. The web suite integrates remote experiments, hands-on experiments and life-like optical images generated by using numerical simulation techniques based on Open Foam software package. The remote experiments are real time experiments running in the physical laboratory but can be accessed remotely from anywhere in the world and at any time. Numerical simulation of problems enhances learning, visualization of problems and interpretation of results. In the present work hand-on experimental results are discussed with respect to simulated results. A reasonable amount of resource material, specifically theoretical background of interferometry is available on the website along with computer programs image processing and analysis of results obtained in an experiment.",
"title": ""
},
{
"docid": "neg:1840582_12",
"text": "We present a mechanism that puts users in the center of control and empowers them to dictate the access to their collections of data. Revisiting the fundamental mechanisms in security for providing protection, our solution uses capabilities, access lists, and access rights following well-understood formal notions for reasoning about access. This contribution presents a practical, correct, auditable, transparent, distributed, and decentralized mechanism that is well-matched to the current emerging environments including Internet of Things, smart city, precision medicine, and autonomous cars. It is based on well-tested principles and practices used in distributed authorization, cryptocurrencies, and scalable computing.",
"title": ""
},
{
"docid": "neg:1840582_13",
"text": "Grounded cognition rejects traditional views that cognition is computation on amodal symbols in a modular system, independent of the brain's modal systems for perception, action, and introspection. Instead, grounded cognition proposes that modal simulations, bodily states, and situated action underlie cognition. Accumulating behavioral and neural evidence supporting this view is reviewed from research on perception, memory, knowledge, language, thought, social cognition, and development. Theories of grounded cognition are also reviewed, as are origins of the area and common misperceptions of it. Theoretical, empirical, and methodological issues are raised whose future treatment is likely to affect the growth and impact of grounded cognition.",
"title": ""
},
{
"docid": "neg:1840582_14",
"text": "Objective: To study the efficacy and safety of an indigenously designed low cost nasal bubble continuous positive airway pressure (NB-CPAP) in neonates admitted with respiratory distress. Study Design: A descriptive study. Place and Duration of Study: Combined Military Hospital (CMH), Peshawar from Jan 2014 to May 2014. Material and Methods: Fifty neonates who developed respiratory distress within 6 hours of life were placed on an indigenous NB-CPAP device (costing 220 PKR) and evaluated for gestational age, weight, indications, duration on NB-CPAP, pre-defined outcomes and complications. Results: A total of 50 consecutive patients with respiratory distress were placed on NB-CPAP. Male to Female ratio was 2.3:1. Mean weight was 2365.85 ± 704 grams and mean gestational age was 35.41 ± 2.9 weeks. Indications for applying NB-CPAP were transient tachypnea of the newborn (TTN, 52%) and respiratory distress syndrome (RDS, 44%). Most common complications were abdominal distension (15.6%) and pulmonary hemorrhage (6%). Out of 50 infants placed on NB-CPAP, 35 (70%) were managed on NB-CPAP alone while 15 (30%) needed mechanical ventilation following a trial of NB-CPAP. Conclusion: In 70% of babies invasive mechanical ventilation was avoided using NB-CPAP.",
"title": ""
},
{
"docid": "neg:1840582_15",
"text": "This work addresses the problem of estimating the full body 3D human pose and shape from a single color image. This is a task where iterative optimization-based solutions have typically prevailed, while Convolutional Networks (ConvNets) have suffered because of the lack of training data and their low resolution 3D predictions. Our work aims to bridge this gap and proposes an efficient and effective direct prediction method based on ConvNets. Central part to our approach is the incorporation of a parametric statistical body shape model (SMPL) within our end-to-end framework. This allows us to get very detailed 3D mesh results, while requiring estimation only of a small number of parameters, making it friendly for direct network prediction. Interestingly, we demonstrate that these parameters can be predicted reliably only from 2D keypoints and masks. These are typical outputs of generic 2D human analysis ConvNets, allowing us to relax the massive requirement that images with 3D shape ground truth are available for training. Simultaneously, by maintaining differentiability, at training time we generate the 3D mesh from the estimated parameters and optimize explicitly for the surface using a 3D per-vertex loss. Finally, a differentiable renderer is employed to project the 3D mesh to the image, which enables further refinement of the network, by optimizing for the consistency of the projection with 2D annotations (i.e., 2D keypoints or masks). The proposed approach outperforms previous baselines on this task and offers an attractive solution for direct prediction of3D shape from a single color image.",
"title": ""
},
{
"docid": "neg:1840582_16",
"text": "This paper presents a method of electric tunability using varactor diodes installed on SIR coaxial resonators and associated filters. Using varactor diodes connected in parallel, in combination with the SIR coaxial resonator, makes it possible, by increasing the number of varactor diodes, to expand the tuning range and maintain the unloaded quality factor of the resonator. A second order filter, tunable in center frequency, was built with these resonators, providing a very large tuning range.",
"title": ""
},
{
"docid": "neg:1840582_17",
"text": "Due to standardization and connectivity to the Internet, Supervisory Control and Data Acquisition (SCADA) systems now face the threat of cyber attacks. SCADA systems were designed without cyber security in mind and hence the problem of how to modify conventional Information Technology (IT) intrusion detection techniques to suit the needs of SCADA is a big challenge. We explain the nuance associated with the task of SCADA-specific intrusion detection and frame it in the domain interest of control engineers and researchers to illuminate the problem space. We present a taxonomy and a set of metrics for SCADA-specific intrusion detection techniques by heightening their possible use in SCADA systems. In particular, we enumerate Intrusion Detection Systems (IDS) that have been proposed to undertake this endeavor. We draw upon the discussion to identify the deficits and voids in current research. Finally, we offer recommendations and future research venues based upon our taxonomy and analysis on which SCADAspecific IDS strategies are most likely to succeed, in part through presenting a prototype of our efforts towards this goal.",
"title": ""
},
{
"docid": "neg:1840582_18",
"text": "This paper presents the design and mathematical model of a lower extremity exoskeleton device used to make paralyzed people walk again. The design takes into account the anatomy of standard human leg with a total of 11 Degrees of freedom (DoF). A CAD model in SolidWorks is presented along with its fabrication and a mathematical model in MATLAB.",
"title": ""
},
{
"docid": "neg:1840582_19",
"text": "Nature plays a very important role to solve problems in a very effective and well-organized way. Few researchers are trying to create computational methods that can assist human to solve difficult problems. Nature inspired techniques like swarm intelligence, bio-inspired, physics/chemistry and many more have helped in solving difficult problems and also provide most favourable solution. Nature inspired techniques are wellmatched for soft computing application because parallel, dynamic and self organising behaviour. These algorithms motivated from the working group of social agents like ants, bees and insect. This paper is a complete survey of nature inspired techniques.",
"title": ""
}
] |
1840583 | Nethammer: Inducing Rowhammer Faults through Network Requests | [
{
"docid": "pos:1840583_0",
"text": "As memory scales down to smaller technology nodes, new failure mechanisms emerge that threaten its correct operation. If such failure mechanisms are not anticipated and corrected, they can not only degrade system reliability and availability but also, perhaps even more importantly, open up security vulnerabilities: a malicious attacker can exploit the exposed failure mechanism to take over the entire system. As such, new failure mechanisms in memory can become practical and significant threats to system security. In this work, we discuss the RowHammer problem in DRAM, which is a prime (and perhaps the first) example of how a circuit-level failure mechanism in DRAM can cause a practical and widespread system security vulnerability. RowHammer, as it is popularly referred to, is the phenomenon that repeatedly accessing a row in a modern DRAM chip causes bit flips in physically-adjacent rows at consistently predictable bit locations. It is caused by a hardware failure mechanism called DRAM disturbance errors, which is a manifestation of circuit-level cell-to-cell interference in a scaled memory technology. Researchers from Google Project Zero recently demonstrated that this hardware failure mechanism can be effectively exploited by user-level programs to gain kernel privileges on real systems. Several other recent works demonstrated other practical attacks exploiting RowHammer. These include remote takeover of a server vulnerable to RowHammer, takeover of a victim virtual machine by another virtual machine running on the same system, and takeover of a mobile device by a malicious user-level application that requires no permissions. We analyze the root causes of the RowHammer problem and examine various solutions. We also discuss what other vulnerabilities may be lurking in DRAM and other types of memories, e.g., NAND flash memory or Phase Change Memory, that can potentially threaten the foundations of secure systems, as the memory technologies scale to higher densities. We conclude by describing and advocating a principled approach to memory reliability and security research that can enable us to better anticipate and prevent such vulnerabilities.",
"title": ""
}
] | [
{
"docid": "neg:1840583_0",
"text": "In this paper we integrate a humanoid robot with a powered wheelchair with the aim of lowering the cognitive requirements needed for powered mobility. We propose two roles for this companion: pointing out obstacles and giving directions. We show that children enjoyed driving with the humanoid companion by their side during a field-trial in an uncontrolled environment. Moreover, we present the results of a driving experiment for adults where the companion acted as a driving aid and conclude that participants preferred the humanoid companion to a simulated companion. Our results suggest that people will welcome a humanoid companion for their wheelchairs.",
"title": ""
},
{
"docid": "neg:1840583_1",
"text": "An experiment was conducted in a Cave-like environment to explore the relationship between physiological responses and breaks in presence and utterances by virtual characters towards the participants. Twenty people explored a virtual environment (VE) that depicted a virtual bar scenario. The experiment was divided into a training and an experimental phase. During the experimental phase breaks in presence (BIPs) in the form of whiteouts of the VE scenario were induced for 2 s at four equally spaced times during the approximately 5 min in the bar scenario. Additionally, five virtual characters addressed remarks to the subjects. Physiological measures including electrocardiagram (ECG) and galvanic skin response (GSR) were recorded throughout the whole experiment. The heart rate, the heart rate variability, and the event-related heart rate changes were calculated from the acquired ECG data. The frequency response of the GSR signal was calculated with a wavelet analysis. The study shows that the heart rate and heart rate variability parameters vary significantly between the training and experimental phase. GSR parameters and event-related heart rate changes show the occurrence of breaks in presence. Event-related heart rate changes also signified the virtual character utterances. There were also differences in response between participants who report more or less socially anxious.",
"title": ""
},
{
"docid": "neg:1840583_2",
"text": "This paper focuses on detecting anomalies in a digital video broadcasting (DVB) system from providers’ perspective. We learn a probabilistic deterministic real timed automaton profiling benign behavior of encryption control in the DVB control access system. This profile is used as a one-class classifier. Anomalous items in a testing sequence are detected when the sequence is not accepted by the learned model.",
"title": ""
},
{
"docid": "neg:1840583_3",
"text": "A coplanar waveguide (CPW)-fed planar monopole antenna with triple-band operation for WiMAX and WLAN applications is presented. The antenna, which occupies a small size of 25(L) × 25(W) × 0.8(H) mm3, is simply composed of a pentagonal radiating patch with two bent slots. By carefully selecting the positions and lengths of these slots, good dual stopband rejection characteristic of the antenna can be obtained so that three operating bands covering 2.14-2.85, 3.29-4.08, and 5.02-6.09 GHz can be achieved. The measured results also demonstrate that the proposed antenna has good omnidirectional radiation patterns with appreciable gain across the operating bands and is thus suitable to be integrated within the portable devices for WiMAX/WLAN applications.",
"title": ""
},
{
"docid": "neg:1840583_4",
"text": "Finding informative genes from microarray data is an important research problem in bioinformatics research and applications. Most of the existing methods rank features according to their discriminative capability and then find a subset of discriminative genes (usually top k genes). In particular, t-statistic criterion and its variants have been adopted extensively. This kind of methods rely on the statistics principle of t-test, which requires that the data follows a normal distribution. However, according to our investigation, the normality condition often cannot be met in real data sets.To avoid the assumption of the normality condition, in this paper, we propose a rank sum test method for informative gene discovery. The method uses a rank-sum statistic as the ranking criterion. Moreover, we propose using the significance level threshold, instead of the number of informative genes, as the parameter. The significance level threshold as a parameter carries the quality specification in statistics. We follow the Pitman efficiency theory to show that the rank sum method is more accurate and more robust than the t-statistic method in theory.To verify the effectiveness of the rank sum method, we use support vector machine (SVM) to construct classifiers based on the identified informative genes on two well known data sets, namely colon data and leukemia data. The prediction accuracy reaches 96.2% on the colon data and 100% on the leukemia data. The results are clearly better than those from the previous feature ranking methods. By experiments, we also verify that using significance level threshold is more effective than directly specifying an arbitrary k.",
"title": ""
},
{
"docid": "neg:1840583_5",
"text": "Probably the most promising breakthroughs in vehicular safety will emerge from intelligent, Advanced Driving Assistance Systems (i-ADAS). Influential research institutions and large vehicle manufacturers work in lockstep to create advanced, on-board safety systems by means of integrating the functionality of existing systems and developing innovative sensing technologies. In this contribution, we describe a portable and scalable vehicular instrumentation designed for on-road experimentation and hypothesis verification in the context of designing i-ADAS prototypes.",
"title": ""
},
{
"docid": "neg:1840583_6",
"text": "We present IndoNet, a multilingual lexical knowledge base for Indian languages. It is a linked structure of wordnets of 18 different Indian languages, Universal Word dictionary and the Suggested Upper Merged Ontology (SUMO). We discuss various benefits of the network and challenges involved in the development. The system is encoded in Lexical Markup Framework (LMF) and we propose modifications in LMF to accommodate Universal Word Dictionary and SUMO. This standardized version of lexical knowledge base of Indian Languages can now easily be linked to similar global resources.",
"title": ""
},
{
"docid": "neg:1840583_7",
"text": "This paper describes our initial work in developing a real-time audio-visual Chinese speech synthesizer with a 3D expressive avatar. The avatar model is parameterized according to the MPEG-4 facial animation standard [1]. This standard offers a compact set of facial animation parameters (FAPs) and feature points (FPs) to enable realization of 20 Chinese visemes and 7 facial expressions (i.e. 27 target facial configurations). The Xface [2] open source toolkit enables us to define the influence zone for each FP and the deformation function that relates them. Hence we can easily animate a large number of coordinates in the 3D model by specifying values for a small set of FAPs and their FPs. FAP values for 27 target facial configurations were estimated from available corpora. We extended the dominance blending approach to effect animations for coarticulated visemes superposed with expression changes. We selected six sentiment-carrying text messages and synthesized expressive visual speech (for all expressions, in randomized order) with neutral audio speech. A perceptual experiment involving 11 subjects shows that they can identify the facial expression that matches the text message’s sentiment 85% of the time.",
"title": ""
},
{
"docid": "neg:1840583_8",
"text": "Online social networks (OSNs) are becoming extremely popular among Internet users as they spend significant amount of time on popular social networking sites like Facebook, Twitter and Google+. These sites are turning out to be fundamentally pervasive and are developing a communication channel for billions of users. Online community use them to find new friends, update their existing friends list with their latest thoughts and activities. Huge information available on these sites attracts the interest of cyber criminals who misuse these sites to exploit vulnerabilities for their illicit benefits such as advertising some product or to attract victims to click on malicious links or infecting users system just for the purpose of making money. Spam detection is one of the major problems these days in social networking sites such as twitter. Most previous techniques use different set of features to classify spam and non-spam users. In this paper, we proposed a hybrid technique which uses content-based as well as graph-based features for identification of spammers on twitter platform. We have analysed the proposed technique on real Twitter dataset with 11k uses and more than 400k tweets approximately. Our results show that the detection rate of our proposed technique is much higher than any of the existing techniques.",
"title": ""
},
{
"docid": "neg:1840583_9",
"text": "Objective: In this paper, we present a systematic literature review of motivation in Software Engineering. The objective of this review is to plot the landscape of current reported knowledge in terms of what motivates developers, what de-motivates them and how existing models address motivation. Methods: We perform a systematic literature review of peer reviewed published studies that focus on motivation in Software Engineering. Systematic reviews are well established in medical research and are used to systematically analyse the literature addressing specific research questions. Results: We found 92 papers related to motivation in Software Engineering. Fifty-six percent of the studies reported that Software Engineers are distinguishable from other occupational groups. Our findings suggest that Software Engineers are likely to be motivated according to three related factors: their ‘characteristics’ (for example, their need for variety); internal ‘controls’ (for example, their personality) and external ‘moderators’ (for example, their career stage). The literature indicates that de-motivated engineers may leave the organisation or take more sick-leave, while motivated engineers will increase their productivity and remain longer in the organisation. Aspects of the job that motivate Software Engineers include problem solving, working to benefit others and technical challenge. Our key finding is that the published models of motivation in Software Engineering are disparate and do not reflect the complex needs of Software Engineers in their career stages, cultural and environmental settings. Conclusions: The literature on motivation in Software Engineering presents a conflicting and partial picture of the area. It is clear that motivation is context dependent and varies from one engineer to another. The most commonly cited motivator is the job itself, yet we found very little work on what it is about that job that Software Engineers find motivating. Furthermore, surveys are often aimed at how Software Engineers feel about ‘the organisation’, rather than ‘the profession’. Although models of motivation in Software Engineering are reported in the literature, they do not account for the changing roles and environment in which Software Engineers operate. Overall, our findings indicate that there is no clear understanding of the Software Engineers’ job, what motivates Software Engineers, how they are motivated, or the outcome and benefits of motivating Software Engineers. 2007 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840583_10",
"text": "This research is a partial test of Park et al.’s (2008) model to assess the impact of flow and brand equity in 3D virtual worlds. It draws on flow theory as its main theoretical foundation to understand and empirically assess the impact of flow on brand equity and behavioral intention in 3D virtual worlds. The findings suggest that the balance of skills and challenges in 3D virtual worlds influences users’ flow experience, which in turn influences brand equity. Brand equity then increases behavioral intention. The authors also found that the impact of flow on behavioral intention in 3D virtual worlds is indirect because the relationship between them is mediated by brand equity. This research highlights the importance of balancing the challenges posed by 3D virtual world branding sites with the users’ skills to maximize their flow experience and brand equity to increase the behavioral intention associated with the brand.",
"title": ""
},
{
"docid": "neg:1840583_11",
"text": "Bottleneck autoencoders have been actively researched as a solution to image compression tasks. However, we observed that bottleneck autoencoders produce subjectively low quality reconstructed images. In this work, we explore the ability of sparse coding to improve reconstructed image quality for the same degree of compression. We observe that sparse image compression produces visually superior reconstructed images and yields higher values of pixel-wise measures of reconstruction quality (PSNR and SSIM) compared to bottleneck autoencoders. In addition, we find that using alternative metrics that correlate better with human perception, such as feature perceptual loss and the classification accuracy, sparse image compression scores up to 18.06% and 2.7% higher, respectively, compared to bottleneck autoencoders. Although computationally much more intensive, we find that sparse coding is otherwise superior to bottleneck autoencoders for the same degree of compression.",
"title": ""
},
{
"docid": "neg:1840583_12",
"text": "A new night-time lane detection system and its accompanying framework are presented in this paper. The accompanying framework consists of an automated ground truth process and systematic storage of captured videos that will be used for training and testing. The proposed Advanced Lane Detector 2.0 (ALD 2.0) is an improvement over the ALD 1.0 or Layered Approach with integration of pixel remapping, outlier removal, and prediction with tracking. Additionally, a novel procedure to generate the ground truth data for lane marker locations is also proposed. The procedure consists of an original process called time slicing, which provides the user with unique visualization of the captured video and enables quick generation of ground truth information. Finally, the setup and implementation of a database hosting lane detection videos and standardized data sets for testing are also described. The ALD 2.0 is evaluated by means of the user-created annotations accompanying the videos. Finally, the planned improvements and remaining work are addressed.",
"title": ""
},
{
"docid": "neg:1840583_13",
"text": "The paper proposes a comprehensive information security maturity model (ISMM) that addresses both technical and socio/non-technical security aspects. The model is intended for securing e-government services (implementation and service delivery) in an emerging and increasing security risk environment. The paper utilizes extensive literature review and survey study approaches. A total of eight existing ISMMs were selected and critically analyzed. Models were then categorized into security awareness, evaluation and management orientations. Based on the model’s strengths – three models were selected to undergo further analyses and then synthesized. Each of the three selected models was either from the security awareness, evaluation or management orientations category. To affirm the findings – a survey study was conducted into six government organizations located in Tanzania. The study was structured to a large extent by the security controls adopted from the Security By Consensus (SBC) model. Finally, an ISMM with five critical maturity levels was proposed. The maturity levels were: undefined, defined, managed, controlled and optimized. The papers main contribution is the proposed model that addresses both technical and non-technical security services within the critical maturity levels. Additionally, the paper enhances awareness and understanding on the needs for security in e-government services to stakeholders.",
"title": ""
},
{
"docid": "neg:1840583_14",
"text": "We describe the annotation of a new dataset for German Named Entity Recognition (NER). The need for this dataset is motivated by licensing issues and consistency issues of existing datasets. We describe our approach to creating annotation guidelines based on linguistic and semantic considerations, and how we iteratively refined and tested them in the early stages of annotation in order to arrive at the largest publicly available dataset for German NER, consisting of over 31,000 manually annotated sentences (over 591,000 tokens) from German Wikipedia and German online news. We provide a number of statistics on the dataset, which indicate its high quality, and discuss legal aspects of distributing the data as a compilation of citations. The data is released under the permissive CC-BY license, and will be fully available for download in September 2014 after it has been used for the GermEval 2014 shared task on NER. We further provide the full annotation guidelines and links to the annotation tool used for the creation of this resource.",
"title": ""
},
{
"docid": "neg:1840583_15",
"text": "We interpret meta-reinforcement learning as the problem of learning how to quickly find a good sampling distribution in a new environment. This interpretation leads to the development of two new meta-reinforcement learning algorithms: E-MAML and E-RL. Results are presented on a new environment we call ‘Krazy World’: a difficult high-dimensional gridworld which is designed to highlight the importance of correctly differentiating through sampling distributions in meta-reinforcement learning. Further results are presented on a set of maze environments. We show E-MAML and E-RL deliver better performance than baseline algorithms on both tasks.",
"title": ""
},
{
"docid": "neg:1840583_16",
"text": "We present four new reinforcement learning algorithms based on actor-critic and natural-gradient ideas, and provide their convergence proofs. Actor-critic reinforcement learning methods are online approximations to policy iteration in which the value-function parameters are estimated using temporal difference learning and the policy parameters are updated by stochastic gradient descent. Methods based on policy gradients in this way are of special interest because of their compatibility with function approximation methods, which are needed to handle large or infinite state spaces. The use of temporal difference learning in this way is of interest because in many applications it dramatically reduces the variance of the gradient estimates. The use of the natural gradient is of interest because it can produce better conditioned parameterizations and has been shown to further reduce variance in some cases. Our results extend prior two-timescale convergence results for actor-critic methods by Konda and Tsitsiklis by using temporal difference learning in the actor and by incorporating natural gradients, and they extend prior empirical studies of natural actor-critic methods by Peters, Vijayakumar and Schaal by providing the first convergence proofs and the first fully incremental algorithms.",
"title": ""
},
{
"docid": "neg:1840583_17",
"text": "There are many clustering tasks which are closely related in the real world, e.g. clustering the web pages of different universities. However, existing clustering approaches neglect the underlying relation and treat these clustering tasks either individually or simply together. In this paper, we will study a novel clustering paradigm, namely multi-task clustering, which performs multiple related clustering tasks together and utilizes the relation of these tasks to enhance the clustering performance. We aim to learn a subspace shared by all the tasks, through which the knowledge of the tasks can be transferred to each other. The objective of our approach consists of two parts: (1) Within-task clustering: clustering the data of each task in its input space individually; and (2) Cross-task clustering: simultaneous learning the shared subspace and clustering the data of all the tasks together. We will show that it can be solved by alternating minimization, and its convergence is theoretically guaranteed. Furthermore, we will show that given the labels of one task, our multi-task clustering method can be extended to transductive transfer classification (a.k.a. cross-domain classification, domain adaption). Experiments on several cross-domain text data sets demonstrate that the proposed multi-task clustering outperforms traditional single-task clustering methods greatly. And the transductive transfer classification method is comparable to or even better than several existing transductive transfer classification approaches.",
"title": ""
},
{
"docid": "neg:1840583_18",
"text": "Transactional Memory (TM) is on its way to becoming the programming API of choice for writing correct, concurrent, and scalable programs. Hardware TM (HTM) implementations are expected to be significantly faster than pure software TM (STM); however, full hardware support for true closed and open nested transactions is unlikely to be practical.\n This paper presents a novel mechanism, the split hardware transaction (SpHT), that uses minimal software support to combine multiple segments of an atomic block, each executed using a separate hardware transaction, into one atomic operation. The idea of segmenting transactions can be used for many purposes, including nesting, local retry, orElse, and user-level thread scheduling; in this paper we focus on how it allows linear closed and open nesting of transactions. SpHT overcomes the limited expressive power of best-effort HTM while imposing overheads dramatically lower than STM and preserving useful guarantees such as strong atomicity provided by the underlying HTM.",
"title": ""
},
{
"docid": "neg:1840583_19",
"text": "Given an incorrect value produced during a failed program run (e.g., a wrong output value or a value that causes the program to crash), the backward dynamic slice of the value very frequently captures the faulty code responsible for producing the incorrect value. Although the dynamic slice often contains only a small percentage of the statements executed during the failed program run, the dynamic slice can still be large and thus considerable effort may be required by the programmer to locate the faulty code.In this paper we develop a strategy for pruning the dynamic slice to identify a subset of statements in the dynamic slice that are likely responsible for producing the incorrect value. We observe that some of the statements used in computing the incorrect value may also have been involved in computing correct values (e.g., a value produced by a statement in the dynamic slice of the incorrect value may also have been used in computing a correct output value prior to the incorrect value). For each such executed statement in the dynamic slice, using the value profiles of the executed statements, we compute a confidence value ranging from 0 to 1 - a higher confidence value corresponds to greater likelihood that the execution of the statement produced a correct value. Given a failed run involving execution of a single error, we demonstrate that the pruning of a dynamic slice by excluding only the statements with the confidence value of 1 is highly effective in reducing the size of the dynamic slice while retaining the faulty code in the slice. Our experiments show that the number of distinct statements in a pruned dynamic slice are 1.79 to 190.57 times less than the full dynamic slice. Confidence values also prioritize the statements in the dynamic slice according to the likelihood of them being faulty. We show that examining the statements in the order of increasing confidence values is an effective strategy for reducing the effort of fault location.",
"title": ""
}
] |
1840584 | Learning, memory, and synesthesia. | [
{
"docid": "pos:1840584_0",
"text": "It is generally accepted that there is something special about reasoning by using mental images. The question of how it is special, however, has never been satisfactorily spelled out, despite more than thirty years of research in the post-behaviorist tradition. This article considers some of the general motivation for the assumption that entertaining mental images involves inspecting a picture-like object. It sets out a distinction between phenomena attributable to the nature of mind to what is called the cognitive architecture, and ones that are attributable to tacit knowledge used to simulate what would happen in a visual situation. With this distinction in mind, the paper then considers in detail the widely held assumption that in some important sense images are spatially displayed or are depictive, and that examining images uses the same mechanisms that are deployed in visual perception. I argue that the assumption of the spatial or depictive nature of images is only explanatory if taken literally, as a claim about how images are physically instantiated in the brain, and that the literal view fails for a number of empirical reasons--for example, because of the cognitive penetrability of the phenomena cited in its favor. Similarly, while it is arguably the case that imagery and vision involve some of the same mechanisms, this tells us very little about the nature of mental imagery and does not support claims about the pictorial nature of mental images. Finally, I consider whether recent neuroscience evidence clarifies the debate over the nature of mental images. I claim that when such questions as whether images are depictive or spatial are formulated more clearly, the evidence does not provide support for the picture-theory over a symbol-structure theory of mental imagery. Even if all the empirical claims were true, they do not warrant the conclusion that many people have drawn from them: that mental images are depictive or are displayed in some (possibly cortical) space. Such a conclusion is incompatible with what is known about how images function in thought. We are then left with the provisional counterintuitive conclusion that the available evidence does not support rejection of what I call the \"null hypothesis\"; namely, that reasoning with mental images involves the same form of representation and the same processes as that of reasoning in general, except that the content or subject matter of thoughts experienced as images includes information about how things would look.",
"title": ""
},
{
"docid": "pos:1840584_1",
"text": "Synesthesia is an unusual condition in which stimulation of one modality evokes sensation or experience in another modality. Although discussed in the literature well over a century ago, synesthesia slipped out of the scientific spotlight for decades because of the difficulty in verifying and quantifying private perceptual experiences. In recent years, the study of synesthesia has enjoyed a renaissance due to the introduction of tests that demonstrate the reality of the condition, its automatic and involuntary nature, and its measurable perceptual consequences. However, while several research groups now study synesthesia, there is no single protocol for comparing, contrasting and pooling synesthetic subjects across these groups. There is no standard battery of tests, no quantifiable scoring system, and no standard phrasing of questions. Additionally, the tests that exist offer no means for data comparison. To remedy this deficit we have devised the Synesthesia Battery. This unified collection of tests is freely accessible online (http://www.synesthete.org). It consists of a questionnaire and several online software programs, and test results are immediately available for use by synesthetes and invited researchers. Performance on the tests is quantified with a standard scoring system. We introduce several novel tests here, and offer the software for running the tests. By presenting standardized procedures for testing and comparing subjects, this endeavor hopes to speed scientific progress in synesthesia research.",
"title": ""
}
] | [
{
"docid": "neg:1840584_0",
"text": "─ A novel broadband 3-dB directional coupler design method utilizing HFSS and realization are given in this paper. It is realized in stripline, showing great agreement with the simulation and design format. The unique property of this design method is that it unnecessitates both a feedback from the realization for broadbanding and an additional smoothing of transition between coupled sections of the whole directional coupler. There is also no need for either a specialised CAD tool or a computer program. Key-words:digital frequency discriminator, HFSS, APLAC, broadside coupling",
"title": ""
},
{
"docid": "neg:1840584_1",
"text": "The ability to give precise and fast prediction for the price movement of stocks is the key to profitability in High Frequency Trading. The main objective of this paper is to propose a novel way of modeling the high frequency trading problem using Deep Neural Networks at its heart and to argue why Deep Learning methods can have a lot of potential in the field of High Frequency Trading. The paper goes on to analyze the model’s performance based on it’s prediction accuracy as well as prediction speed across full-day trading simulations.",
"title": ""
},
{
"docid": "neg:1840584_2",
"text": "System logs are widely used in various tasks of software system management. It is crucial to avoid logging too little or too much. To achieve so, developers need to make informed decisions on where to log and what to log in their logging practices during development. However, there exists no work on studying such logging practices in industry or helping developers make informed decisions. To fill this significant gap, in this paper, we systematically study the logging practices of developers in industry, with focus on where developers log. We obtain six valuable findings by conducting source code analysis on two large industrial systems (2.5M and 10.4M LOC, respectively) at Microsoft. We further validate these findings via a questionnaire survey with 54 experienced developers in Microsoft. In addition, our study demonstrates the high accuracy of up to 90% F-Score in predicting where to log.",
"title": ""
},
{
"docid": "neg:1840584_3",
"text": "In this paper, we propose the use of a novel fixed-wing vertical take-off and landing (VTOL) aerobot. A mission profile to investigate the Isidis Planitia region of Mars is proposed based on the knowledge of the planet's geophysical characteristics, its atmosphere and terrain. The aerobot design is described from the aspects of vehicle selection, its propulsion system, power system, payload, thermal management, structure, mass budget, and control strategy and sensor suite. The aerobot proposed in this paper is believed to be a practical and realistic solution to the problem of investigating the Martian surface. A six-degree-of-freedom flight simulator has been created to support the aerobot design process by providing performance evaluations. The nonlinear dynamics is then linearized to a state-space formulation at a certain trimmed equilibrium point. Basic autopilot modes are developed for the aerobot based on the linearized state-space model. The results of the simulation show the aerobot is stable and controllable.",
"title": ""
},
{
"docid": "neg:1840584_4",
"text": "With the increasing complexity of modern Systems-on-Chip, the possibility of functional errors escaping design verification is growing. Post-silicon validation targets the discovery of these errors in early hardware prototypes. Due to limited visibility and observability, dedicated design-for-debug (DFD) hardware such as trace buffers are inserted to aid post-silicon validation. In spite of its benefit, such hardware incurs area overheads, which impose size limitations. However, the overhead could be overcome if the area dedicated to DFD could be reused in-field. In this work, we present a novel method for reusing an existing trace buffer as a victim cache of a processor to enhance performance. The trace buffer storage space is reused for the victim cache, with a small additional controller logic. Experimental results on several benchmarks and trace buffer sizes show that the proposed approach can enhance the average performance by up to 8.3% over a baseline architecture. We also propose a strategy for dynamic power management of the structure, to enable saving energy with negligible impact on performance.",
"title": ""
},
{
"docid": "neg:1840584_5",
"text": "This paper presents an automatic annotation tool AATOS for providing documents with semantic annotations. The tool links entities found from the texts to ontologies defined by the user. The application is highly configurable and can be used with different natural language Finnish texts. The application was developed as a part of the WarSampo and Semantic Finlex projects and tested using Kansa Taisteli magazine articles and consolidated Finnish legislation of Semantic Finlex. The quality of the automatic annotation was evaluated by measuring precision and recall against existing manual annotations. The results showed that the quality of the input text, as well as the selection and configuration of the ontologies impacted the results.",
"title": ""
},
{
"docid": "neg:1840584_6",
"text": "In this paper we consider the mobile robot parking problem, i.e., the stabilization of a wheeled vehicle to a given position and orientation, using only visual feedback from low-cost cameras. We take into account the practically most relevant problem of keeping the tracked features in sight of the camera while maneuvering to park the vehicle. This constraint, often neglected in the literature, combines with the non-holonomic nature of the vehicle kinematics in a challenging controller design problem. We provide an effective solution to such a problem by using a combination of previous results on non-smooth control synthesis and recently developed hybrid control techniques. Simulations and experimental results on a laboratory vehicle are reported, showing the practicality of the proposed approach. KEY WORDS—parking of wheeled robots, visual servoing, hybrid control, non-holonomic systems",
"title": ""
},
{
"docid": "neg:1840584_7",
"text": "Examination of motivational dynamics in academic contexts within self-determination theory has centered primarily around both the motives (initially intrinsic vs. extrinsic, later autonomous vs. controlled) that regulate learners’study behavior and the contexts that promote or hinder these regulations. Less attention has been paid to the goal contents (intrinsic vs. extrinsic) that learners hold and to the different goal contents that are communicated in schools to increase the perceived relevance of the learning. Recent field experiments are reviewed showing that intrinsic goal framing (relative to extrinsic goal framing and no-goal framing) produces deeper engagement in learning activities, better conceptual learning, and higher persistence at learning activities. These effects occur for both intrinsically and extrinsically oriented individuals. Results are discussed in terms of self-determination theory’s concept of basic psychological needs for autonomy, competence, and relatedness.",
"title": ""
},
{
"docid": "neg:1840584_8",
"text": "Mobile devices have become a significant part of people’s lives, leading to an increasing number of users involved with such technology. The rising number of users invites hackers to generate malicious applications. Besides, the security of sensitive data available on mobile devices is taken lightly. Relying on currently developed approaches is not sufficient, given that intelligent malware keeps modifying rapidly and as a result becomes more difficult to detect. In this paper, we propose an alternative solution to evaluating malware detection using the anomaly-based approach with machine learning classifiers. Among the various network traffic features, the four categories selected are basic information, content based, time based and connection based. The evaluation utilizes two datasets: public (i.e. MalGenome) and private (i.e. self-collected). Based on the evaluation results, both the Bayes network and random forest classifiers produced more accurate readings, with a 99.97 % true-positive rate (TPR) as opposed to the multi-layer perceptron with only 93.03 % on the MalGenome dataset. However, this experiment revealed that the k-nearest neighbor classifier efficiently detected the latest Android malware with an 84.57 % truepositive rate higher than other classifiers. Communicated by V. Loia. F. A. Narudin · A. Gani Mobile Cloud Computing (MCC), University of Malaya, 50603 Kuala Lumpur, Malaysia A. Feizollah (B) · N. B. Anuar Security Research Group (SECReg), Faculty of Computer Science and Information Technology, University of Malaya, 50603 Kuala Lumpur, Malaysia e-mail: [email protected]",
"title": ""
},
{
"docid": "neg:1840584_9",
"text": "We developed computational models to predict the emergence of depression and Post-Traumatic Stress Disorder in Twitter users. Twitter data and details of depression history were collected from 204 individuals (105 depressed, 99 healthy). We extracted predictive features measuring affect, linguistic style, and context from participant tweets (N = 279,951) and built models using these features with supervised learning algorithms. Resulting models successfully discriminated between depressed and healthy content, and compared favorably to general practitioners’ average success rates in diagnosing depression, albeit in a separate population. Results held even when the analysis was restricted to content posted before first depression diagnosis. State-space temporal analysis suggests that onset of depression may be detectable from Twitter data several months prior to diagnosis. Predictive results were replicated with a separate sample of individuals diagnosed with PTSD (Nusers = 174, Ntweets = 243,775). A state-space time series model revealed indicators of PTSD almost immediately post-trauma, often many months prior to clinical diagnosis. These methods suggest a data-driven, predictive approach for early screening and detection of mental illness.",
"title": ""
},
{
"docid": "neg:1840584_10",
"text": "475 Abstract— In this paper a dc-dc buck-boost converter is modeled and controlled using sliding mode technique. First the buck-boost converter is modeled and dynamic equations describing the converter are derived and sliding mode controller is designed. The robustness of the converter system is tested against step load changes and input voltage variations. Matlab/Simulink is used for the simulations. The simulation results are presented..",
"title": ""
},
{
"docid": "neg:1840584_11",
"text": "Vehicle detection is important for advanced driver assistance systems (ADAS). Both LiDAR and cameras are often used. LiDAR provides excellent range information but with limits to object identification; on the other hand, the camera allows for better recognition but with limits to the high resolution range information. This paper presents a sensor fusion based vehicle detection approach by fusing information from both LiDAR and cameras. The proposed approach is based on two components: a hypothesis generation phase to generate positions that potential represent vehicles and a hypothesis verification phase to classify the corresponding objects. Hypothesis generation is achieved using the stereo camera while verification is achieved using the LiDAR. The main contribution is that the complementary advantages of two sensors are utilized, with the goal of vehicle detection. The proposed approach leads to an enhanced detection performance; in addition, maintains tolerable false alarm rates compared to vision based classifiers. Experimental results suggest a performance which is broadly comparable to the current state of the art, albeit with reduced false alarm rate.",
"title": ""
},
{
"docid": "neg:1840584_12",
"text": "Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.",
"title": ""
},
{
"docid": "neg:1840584_13",
"text": "This paper proposes a method for hand pose estimation from RGB images that uses both external large-scale depth image datasets and paired depth and RGB images as privileged information at training time. We show that providing depth information during training significantly improves performance of pose estimation from RGB images during testing. We explore different ways of using this privileged information: (1) using depth data to initially train a depth-based network, (2) using the features from the depthbased network of the paired depth images to constrain midlevel RGB network weights, and (3) using the foreground mask, obtained from the depth data, to suppress the responses from the background area. By using paired RGB and depth images, we are able to supervise the RGB-based network to learn middle layer features that mimic that of the corresponding depth-based network, which is trained on large-scale, accurately annotated depth data. During testing, when only an RGB image is available, our method produces accurate 3D hand pose predictions. Our method is also tested on 2D hand pose estimation. Experiments on three public datasets show that the method outperforms the state-of-the-art methods for hand pose estimation using RGB image input.",
"title": ""
},
{
"docid": "neg:1840584_14",
"text": "Biomaterial development is currently the most active research area in the field of biomedical engineering. The bioglasses possess immense potential for being the ideal biomaterials due to their high adaptiveness to the biological environment as well as tunable properties. Bioglasses like 45S5 has shown great clinical success over the past 10 years. The bioglasses like 45S5 were prepared using melt-quenching techniques but recently porous bioactive glasses have been derived through sol-gel process. The synthesis route exhibits marked effect on the specific surface area, as well as degradability of the material. This article is an attempt to provide state of the art of the sol-gel and melt quenched bioactive bioglasses for tissue regeneration. Fabrication routes for bioglasses suitable for bone tissue engineering are highlighted and the effect of these fabrication techniques on the porosity, pore-volume, mechanical properties, cytocompatibilty and especially apatite layer formation on the surface of bioglasses is analyzed in detail. Drug delivery capability of bioglasses is addressed shortly along with the bioactivity of mesoporous glasses. © 2015 Wiley Periodicals, Inc. J Biomed Mater Res Part B: Appl Biomater, 104B: 1248-1275, 2016.",
"title": ""
},
{
"docid": "neg:1840584_15",
"text": "Primary progressive aphasia (PPA) may be the onset of several neurodegenerative diseases. This study evaluates a cohort of patients with PPA to assess their progression to different clinical syndromes, associated factors that modulate this progression, and patterns of cerebral metabolism linked to different clinical evolutionary forms. Thirty-five patients meeting PPA criteria underwent a clinical and neuroimaging 18F-Fluorodeoxyglucose PET evaluation. Survival analysis was performed using time from clinical onset to the development of a non-language symptom or deficit (PPA-plus). Cerebral metabolism was analyzed using Statistical Parametric Mapping. Patients classified into three PPA variants evolved to atypical parkinsonism, behavioral disorder and motor neuron disease in the agrammatic variant; to behavioral disorder in the semantic; and to memory impairment in the logopenic. Median time from the onset of symptoms to PPA-plus was 36 months (31–40, 95 % confidence interval). Right laterality, and years of education were associated to a lower risk of progression, while logopenic variant to a higher risk. Different regions of hypometabolism were identified in agrammatic PPA with parkinsonism, motor neuron disease and logopenic PPA-plus. Clinical course of PPA differs according to each variant. Left anterior temporal and frontal medial hypometabolism in agrammatic variant is linked to motor neuron disease and atypical parkinsonism, respectively. PPA variant, laterality and education may be associated to the risk of progression. These results suggest the possibility that clinical and imaging data could help to predict the clinical course of PPA.",
"title": ""
},
{
"docid": "neg:1840584_16",
"text": "Patients with pathological laughter and crying (PLC) are subject to relatively uncontrollable episodes of laughter, crying or both. The episodes occur either without an apparent triggering stimulus or following a stimulus that would not have led the subject to laugh or cry prior to the onset of the condition. PLC is a disorder of emotional expression rather than a primary disturbance of feelings, and is thus distinct from mood disorders in which laughter and crying are associated with feelings of happiness or sadness. The traditional and currently accepted view is that PLC is due to the damage of pathways that arise in the motor areas of the cerebral cortex and descend to the brainstem to inhibit a putative centre for laughter and crying. In that view, the lesions 'disinhibit' or 'release' the laughter and crying centre. The neuroanatomical findings in a recently studied patient with PLC, along with new knowledge on the neurobiology of emotion and feeling, gave us an opportunity to revisit the traditional view and propose an alternative. Here we suggest that the critical PLC lesions occur in the cerebro-ponto-cerebellar pathways and that, as a consequence, the cerebellar structures that automatically adjust the execution of laughter or crying to the cognitive and situational context of a potential stimulus, operate on the basis of incomplete information about that context, resulting in inadequate and even chaotic behaviour.",
"title": ""
},
{
"docid": "neg:1840584_17",
"text": "With the increasing utilization and popularity of the cloud infrastructure, more and more data are moved to the cloud storage systems. This makes the availability of cloud storage services critically important, particularly given the fact that outages of cloud storage services have indeed happened from time to time. Thus, solely depending on a single cloud storage provider for storage services can risk violating the service-level agreement (SLA) due to the weakening of service availability. This has led to the notion of Cloud-of-Clouds, where data redundancy is introduced to distribute data among multiple independent cloud storage providers, to address the problem. The key in the effectiveness of the Cloud-of-Clouds approaches lies in how the data redundancy is incorporated and distributed among the clouds. However, the existing Cloud-of-Clouds approaches utilize either replication or erasure codes to redundantly distribute data across multiple clouds, thus incurring either high space or high performance overheads. In this paper, we propose a hybrid redundant data distribution approach, called HyRD, to improve the cloud storage availability in Cloud-of-Clouds by exploiting the workload characteristics and the diversity of cloud providers. In HyRD, large files are distributed in multiple cost-efficient cloud storage providers with erasure-coded data redundancy while small files and file system metadata are replicated on multiple high-performance cloud storage providers. The experiments conducted on our lightweight prototype implementation of HyRD show that HyRD improves the cost efficiency by 33.4 and 20.4 percent, and reduces the access latency by 58.7 and 34.8 percent than the DuraCloud and RACS schemes, respectively.",
"title": ""
},
{
"docid": "neg:1840584_18",
"text": "BACKGROUND\nThere is increasing awareness that meta-analyses require a sufficiently large information size to detect or reject an anticipated intervention effect. The required information size in a meta-analysis may be calculated from an anticipated a priori intervention effect or from an intervention effect suggested by trials with low-risk of bias.\n\n\nMETHODS\nInformation size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis.\n\n\nRESULTS\nWe devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate considering the required information size. D2 is different from the intuitively obvious adjusting factor based on the common quantification of heterogeneity, the inconsistency (I2), which may underestimate the required information size. Thus, D2 and I2 are compared and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 >or= I2, for all meta-analyses.\n\n\nCONCLUSION\nWe conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects meta-analysis despite the choice of the between trial variance estimator that constitutes the model. Furthermore, D2 can readily adjust the required information size in any random-effects model meta-analysis.",
"title": ""
}
] |
1840585 | The pharmacology of psilocybin. | [
{
"docid": "pos:1840585_0",
"text": "Psilocybin, an indoleamine hallucinogen, produces a psychosis-like syndrome in humans that resembles first episodes of schizophrenia. In healthy human volunteers, the psychotomimetic effects of psilocybin were blocked dose-dependently by the serotonin-2A antagonist ketanserin or the atypical antipsychotic risperidone, but were increased by the dopamine antagonist and typical antipsychotic haloperidol. These data are consistent with animal studies and provide the first evidence in humans that psilocybin-induced psychosis is due to serotonin-2A receptor activation, independently of dopamine stimulation. Thus, serotonin-2A overactivity may be involved in the pathophysiology of schizophrenia and serotonin-2A antagonism may contribute to therapeutic effects of antipsychotics.",
"title": ""
},
{
"docid": "pos:1840585_1",
"text": "1. Reactions induced by LSD, mescaline, psilocin, and psilocybin are qualitatively similar. 2. The time course of the psilocin and psilocybin reactions are shorter than those of LSD or mescaline reactions. li 4. Psilocin is approximately 1.4 times as potent as psilocybin. This ratio is the same as that of the molecular weights of the two drugs. Reactions induced by LSD, mescaline, psilocin, and psilocybin are qualitatively similar. The time course of the psilocin and psilocybin reactions are shorter than those of LSD or mescaline reactions. li Psilocin is approximately 1.4 times as potent as psilocybin. This ratio is the same as that of the molecular weights of the two drugs.",
"title": ""
}
] | [
{
"docid": "neg:1840585_0",
"text": "IMPORTANCE\nAlthough several longitudinal studies have demonstrated an effect of violent video game play on later aggressive behavior, little is known about the psychological mediators and moderators of the effect.\n\n\nOBJECTIVE\nTo determine whether cognitive and/or emotional variables mediate the effect of violent video game play on aggression and whether the effect is moderated by age, sex, prior aggressiveness, or parental monitoring.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nThree-year longitudinal panel study. A total of 3034 children and adolescents from 6 primary and 6 secondary schools in Singapore (73% male) were surveyed annually. Children were eligible for inclusion if they attended one of the 12 selected schools, 3 of which were boys' schools. At the beginning of the study, participants were in third, fourth, seventh, and eighth grades, with a mean (SD) age of 11.2 (2.1) years (range, 8-17 years). Study participation was 99% in year 1.\n\n\nMAIN OUTCOMES AND MEASURES\nThe final outcome measure was aggressive behavior, with aggressive cognitions (normative beliefs about aggression, hostile attribution bias, aggressive fantasizing) and empathy as potential mediators.\n\n\nRESULTS\nLongitudinal latent growth curve modeling demonstrated that the effects of violent video game play are mediated primarily by aggressive cognitions. This effect is not moderated by sex, prior aggressiveness, or parental monitoring and is only slightly moderated by age, as younger children had a larger increase in initial aggressive cognition related to initial violent game play at the beginning of the study than older children. Model fit was excellent for all models.\n\n\nCONCLUSIONS AND RELEVANCE\nGiven that more than 90% of youths play video games, understanding the psychological mechanisms by which they can influence behaviors is important for parents and pediatricians and for designing interventions to enhance or mitigate the effects.",
"title": ""
},
{
"docid": "neg:1840585_1",
"text": "The vertex cover problem Find a set of vertices that cover the graph LP rounding is a 4 step scheme to approximate combinatorial problems with theoretical guarantees on solution quality. Several problems in machine learning, computer vision and data analysis can be formulated using NP-‐hard combinatorial optimization problems. In many of these applications, approximate solutions for these NP-‐hard problems are 'good enough'.",
"title": ""
},
{
"docid": "neg:1840585_2",
"text": "Hashing-based semantic similarity search is becoming increasingly important for building large-scale content-based retrieval system. The state-of-the-art supervised hashing techniques use flexible two-step strategy to learn hash functions. The first step learns binary codes for training data by solving binary optimization problems with millions of variables, thus usually requiring intensive computations. Despite simplicity and efficiency, locality-sensitive hashing (LSH) has never been recognized as a good way to generate such codes due to its poor performance in traditional approximate neighbor search. We claim in this paper that the true merit of LSH lies in transforming the semantic labels to obtain the binary codes, resulting in an effective and efficient two-step hashing framework. Specifically, we developed the locality-sensitive two-step hashing (LS-TSH) that generates the binary codes through LSH rather than any complex optimization technique. Theoretically, with proper assumption, LS-TSH is actually a useful LSH scheme, so that it preserves the label-based semantic similarity and possesses sublinear query complexity for hash lookup. Experimentally, LS-TSH could obtain comparable retrieval accuracy with state of the arts with two to three orders of magnitudes faster training speed.",
"title": ""
},
{
"docid": "neg:1840585_3",
"text": "This review discusses the theory and practical application of independent component analysis (ICA) to multi-channel EEG data. We use examples from an audiovisual attention-shifting task performed by young and old subjects to illustrate the power of ICA to resolve subtle differences between evoked responses in the two age groups. Preliminary analysis of these data using ICA suggests a loss of task specificity in independent component (IC) processes in frontal and somatomotor cortex during post-response periods in older as compared to younger subjects, trends not detected during examination of scalp-channel event-related potential (ERP) averages. We discuss possible approaches to component clustering across subjects and new ways to visualize mean and trial-by-trial variations in the data, including ERP-image plots of dynamics within and across trials as well as plots of event-related spectral perturbations in component power, phase locking, and coherence. We believe that widespread application of these and related analysis methods should bring EEG once again to the forefront of brain imaging, merging its high time and frequency resolution with enhanced cm-scale spatial resolution of its cortical sources.",
"title": ""
},
{
"docid": "neg:1840585_4",
"text": "Internet of vehicles is a promising area related to D2D communication and the Internet of Things. We present a novel perspective on vehicular communications and social vehicle swarms, to study and analyze a socially aware Internet of vehicles with the assistance of an agent-based model intended to reveal hidden patterns behind superficial data. After discussing its components (its agents, environments, and rules), we introduce supportive technology and methods, deep reinforcement learning, privacy preserving data mining, and sub-cloud computing in order to detect the most significant and interesting information for each individual effectively, which is the key desire. Finally, several relevant research topics and challenges are discussed.",
"title": ""
},
{
"docid": "neg:1840585_5",
"text": "Google Scholar has been well received by the research community. Its promises of free, universal and easy access to scientific literature as well as the perception that it covers better than other traditional multidisciplinary databases the areas of the Social Sciences and the Humanities have contributed to the quick expansion of Google Scholar Citations and Google Scholar Metrics: two new bibliometric products that offer citation data at the individual level and at journal level. In this paper we show the results of a experiment undertaken to analyze Google Scholar's capacity to detect citation counting manipulation. For this, six documents were uploaded to an institutional web domain authored by a false researcher and referencing all the publications of the members of the EC3 research group at the University of Granada. The detection of Google Scholar of these papers outburst the citations included in the Google Scholar Citations profiles of the authors. We discuss the effects of such outburst and how it could affect the future development of such products not only at individual level but also at journal level, especially if Google Scholar persists with its lack of transparency.",
"title": ""
},
{
"docid": "neg:1840585_6",
"text": "Naive-Bayes and k-NN classifiers are two machine learning approaches for text classification. Rocchio is the classic method for text classification in information retrieval. Based on these three approaches and using classifier fusion methods, we propose a novel approach in text classification. Our approach is a supervised method, meaning that the list of categories should be defined and a set of training data should be provided for training the system. In this approach, documents are represented as vectors where each component is associated with a particular word. We proposed voting methods and OWA operator and decision template method for combining classifiers. Experimental results show that these methods decrese the classification error 15 percent as measured on 2000 training data from 20 newsgroups dataset.",
"title": ""
},
{
"docid": "neg:1840585_7",
"text": "Korean dramas have played an influential role in Taiwanese society since they were first introduced into Taiwan. One of the most dominant themes in most Korean dramas is the theme of love. As a story topic, love accounts for about ninety percent of the themes dealt with by these dramas. By applying the theoretical idea of cultural proximity, and by using content analysis to analyze the underlying values contained in the dramas, this study examines the theme of love in these dramas. The data pool includes 10 popular Korean dramas aired between the years of 2008 and 2012. Using these 10 dramas as a sample, I examine whether contemporary feminist attitudes about women \" s autonomy play a role in how Taiwanese audiences identify with stories about love in Korean dramas. Through interviews with four television station managers from companies including LTV, ETTV, ii Videoland Drama and ELTA, I also gathered information about the process of localization within Korean dramas. In addition to the above strategies, my study incorporates secondary data to analyze related reports and statistical data about Korean dramas.",
"title": ""
},
{
"docid": "neg:1840585_8",
"text": "Increasing volumes of trajectory data require analysis methods which go beyond the visual. Methods for computing trajectory analysis typically assume linear interpolation between quasi-regular sampling points. This assumption, however, is often not realistic, and can lead to a meaningless analysis for sparsely and/or irregularly sampled data. We propose to use the space-time prism model instead, allowing to represent the influence of speed on possible trajectories within a volume. We give definitions for the similarity of trajectories in this model and describe algorithms for its computation using the Fréchet and the equal time distance.",
"title": ""
},
{
"docid": "neg:1840585_9",
"text": "Over the past decade, the advent of new technology has brought about the emergence of smart cities aiming to provide their stakeholders with technology-based solutions that are effective and efficient. Insofar as the objective of smart cities is to improve outcomes that are connected to people, systems and processes of businesses, government and other publicand private-sector entities, its main goal is to improve the quality of life of all residents. Accordingly, smart tourism has emerged over the past few years as a subset of the smart city concept, aiming to provide tourists with solutions that address specific travel related needs. Dubai is an emerging tourism destination that has implemented smart city and smart tourism platforms to engage various stakeholders. The objective of this study is to identify best practices related to Dubai’s smart city and smart tourism. In so doing, Dubai’s mission and vision along with key dimensions and pillars are identified in relation to the advancements in the literature while highlighting key resources and challenges. A Smart Tourism Dynamic Responsive System (STDRS) framework is proposed while suggesting how Dubai may able to enhance users’ involvement and their overall experience.",
"title": ""
},
{
"docid": "neg:1840585_10",
"text": "The authors report a meta-analysis of individual differences in detecting deception, confining attention to occasions when people judge strangers' veracity in real-time with no special aids. The authors have developed a statistical technique to correct nominal individual differences for differences introduced by random measurement error. Although researchers have suggested that people differ in the ability to detect lies, psychometric analyses of 247 samples reveal that these ability differences are minute. In terms of the percentage of lies detected, measurement-corrected standard deviations in judge ability are less than 1%. In accuracy, judges range no more widely than would be expected by chance, and the best judges are no more accurate than a stochastic mechanism would produce. When judging deception, people differ less in ability than in the inclination to regard others' statements as truthful. People also differ from one another as lie- and truth-tellers. They vary in the detectability of their lies. Moreover, some people are more credible than others whether lying or truth-telling. Results reveal that the outcome of a deception judgment depends more on the liar's credibility than any other individual difference.",
"title": ""
},
{
"docid": "neg:1840585_11",
"text": "We present a novel approach to real-time structured light range scanning. After an analysis of the underlying assumptions of existing structured light techniques, we derive a new set of illumination patterns based on coding the boundaries between projected stripes. These stripe boundary codes allow range scanning of moving objects, with only modest assumptions about scene continuity and reflectance. We describe an implementation that integrates these new codes with real-time algorithms for tracking stripe boundaries and determining depths. Our system uses a standard video camera and DLP projector, and produces dense range images at 60 Hz with 100 m accuracy over a 10 cm working volume. As an application, we demonstrate the creation of complete models of rigid objects: the objects are rotated in front of the scanner by hand, and successive range images are automatically aligned.",
"title": ""
},
{
"docid": "neg:1840585_12",
"text": "We propose an extension of Convolutional Neural Networks (CNNs) to graph-structured data, including strided convolutions and data augmentation defined from inferred graph translations. Our method matches the accuracy of state-of-the-art CNNs when applied on images, without any prior about their 2D regular structure. On fMRI data, we obtain a significant gain in accuracy compared with existing graph-based alternatives.",
"title": ""
},
{
"docid": "neg:1840585_13",
"text": "We develop some versions of quantum devices simulators such as NEMO-VN, NEMO-VN1 and NEMO-VN2. The quantum device simulator – NEMO-VN2 focuses on carbon nanotube FET (CNTFET). CNTFETs have been studied in recent years as potential alternatives to CMOS devices because of their compelling properties. Studies of phonon scattering in CNTs and its influence in CNTFET have focused on metallic tubes or on long semiconducting tubes. Phonon scattering in short channel CNTFETs, which is important for nanoelectronic applications, remains unexplored. In this work the non-equilibrium Green function (NEGF) is used to perform a comprehensive study of CNT transistors. The program has been written by using graphic user interface (GUI) of Matlab. We find that the effect of scattering on current-voltage characteristics of CNTFET is significant. The degradation of drain current due to scattering has been observed. Some typical simulation results have been presented for illustration.",
"title": ""
},
{
"docid": "neg:1840585_14",
"text": "Large-scale transactional systems still suffer from not viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears as interesting from this perspective. A semantic layer built upon a basic blockchain infrastructure would join the benefits of flexible resource/service discovery and validation by consensus. This paper proposes a novel Service-oriented Architecture (SOA) based on a semantic blockchain. Registration, discovery, selection and payment operations are implemented as smart contracts, allowing decentralized execution and trust. Potential applications include material and immaterial resource marketplaces and trustless collaboration among autonomous entities, spanning many areas of interest for smart cities and communities.",
"title": ""
},
{
"docid": "neg:1840585_15",
"text": "This paper examines the contemporary relationship between fashion brands and celebrities. Noting the historic role of celebrities in fashion and their current prevalence in the industry, the paper moves beyond discussion of the motives and effectiveness of celebrity endorsement, and instead explores its nature and practice in the fashion sector. The paper proposes a new definition of celebrity endorsement in fashion, offers a classification of celebrities involved in fashion brand endorsement, and presents a typology examining the contemporary means by which a fashion brand may collaborate with celebrities. The typology is defined in context of the nature, length and cost to the brand of the relationship between it and the celebrity. The methodology uses secondary sources and qualitative primary research in an exploratory agenda in order to propose conclusions and suggest ideas for further research.",
"title": ""
},
{
"docid": "neg:1840585_16",
"text": "Skin cancer exists in different forms like Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most dangerous and unpredictable. In this paper, we implement an image processing technique for the detection of Melanoma Skin Cancer using the software MATLAB which is easy for implementation as well as detection of Melanoma skin cancer. The input to the system is the skin lesion image. This image proceeds with the image pre-processing methods such as conversion of RGB image to Grayscale image, noise removal and so on. Further Otsu thresholding is used to segment the images followed by feature extraction that includes parameters like Asymmetry, Border Irregularity, Color and Diameter (ABCD) and then Total Dermatoscopy Score (TDS) is calculated. The calculation of TDS determines the presence of Melanoma skin cancer by classifying it as benign, suspicious or highly suspicious skin lesion.",
"title": ""
},
{
"docid": "neg:1840585_17",
"text": "The two key challenges in hierarchical classification are to leverage the hierarchical dependencies between the class-labels for improving performance, and, at the same time maintaining scalability across large hierarchies. In this paper we propose a regularization framework for large-scale hierarchical classification that addresses both the problems. Specifically, we incorporate the hierarchical dependencies between the class-labels into the regularization structure of the parameters thereby encouraging classes nearby in the hierarchy to share similar model parameters. Furthermore, we extend our approach to scenarios where the dependencies between the class-labels are encoded in the form of a graph rather than a hierarchy. To enable large-scale training, we develop a parallel-iterative optimization scheme that can handle datasets with hundreds of thousands of classes and millions of instances and learning terabytes of parameters. Our experiments showed a consistent improvement over other competing approaches and achieved state-of-the-art results on benchmark datasets.",
"title": ""
},
{
"docid": "neg:1840585_18",
"text": "This paper proposes an EMI filter design software which can serve as an aid to the designer to quickly arrive at optimal filter sizes based on off-line measurement data or simulation results. The software covers different operating conditions-such as: different switching devices, different types of switching techniques, different load conditions and layout of the test setup. The proposed software design works for both silicon based and WBG based power converters.",
"title": ""
},
{
"docid": "neg:1840585_19",
"text": "The reconstitution of lost bone is a subject that is germane to many orthopedic conditions including fractures and non-unions, infection, inflammatory arthritis, osteoporosis, osteonecrosis, metabolic bone disease, tumors, and periprosthetic particle-associated osteolysis. In this regard, the processes of acute and chronic inflammation play an integral role. Acute inflammation is initiated by endogenous or exogenous adverse stimuli, and can become chronic in nature if not resolved by normal homeostatic mechanisms. Dysregulated inflammation leads to increased bone resorption and suppressed bone formation. Crosstalk among inflammatory cells (polymorphonuclear leukocytes and cells of the monocyte-macrophage-osteoclast lineage) and cells related to bone healing (cells of the mesenchymal stem cell-osteoblast lineage and vascular lineage) is essential to the formation, repair and remodeling of bone. In this review, the authors provide a comprehensive summary of the literature related to inflammation and bone repair. Special emphasis is placed on the underlying cellular and molecular mechanisms, and potential interventions that can favorably modulate the outcome of clinical conditions that involve bone repair.",
"title": ""
}
] |
1840586 | Using hidden Markov models for topic segmentation of meeting transcripts | [
{
"docid": "pos:1840586_0",
"text": "This paper introduces a new statistical approach to automatically partitioning text into coherent segments. The approach is based on a technique that incrementally builds an exponential model to extract features that are correlated with the presence of boundaries in labeled training text. The models use two classes of features: topicality features that use adaptive language models in a novel way to detect broad changes of topic, and cue-word features that detect occurrences of specific words, which may be domain-specific, that tend to be used near segment boundaries. Assessment of our approach on quantitative and qualitative grounds demonstrates its effectiveness in two very different domains, Wall Street Journal news articles and television broadcast news story transcripts. Quantitative results on these domains are presented using a new probabilistically motivated error metric, which combines precision and recall in a natural and flexible way. This metric is used to make a quantitative assessment of the relative contributions of the different feature types, as well as a comparison with decision trees and previously proposed text segmentation algorithms.",
"title": ""
},
{
"docid": "pos:1840586_1",
"text": "Several approaches to automatic speech summarization are discussed below, using the ICSI Meetings corpus. We contrast feature-based approaches using prosodic and lexical features with maximal marginal relevance and latent semantic analysis approaches to summarization. While the latter two techniques are borrowed directly from the field of text summarization, feature-based approaches using prosodic information are able to utilize characteristics unique to speech data. We also investigate how the summarization results might deteriorate when carried out on ASR output as opposed to manual transcripts. All of the summaries are of an extractive variety, and are compared using the software ROUGE.",
"title": ""
}
] | [
{
"docid": "neg:1840586_0",
"text": "We present a learning to rank approach to classify folktales, such as fairy tales and urban legends, according to their story type, a concept that is widely used by folktale researchers to organize and classify folktales. A story type represents a collection of similar stories often with recurring plot and themes. Our work is guided by two frequently used story type classification schemes. Contrary to most information retrieval problems, the text similarity in this problem goes beyond topical similarity. We experiment with approaches inspired by distributed information retrieval and features that compare subject-verb-object triplets. Our system was found to be highly effective compared with a baseline system.",
"title": ""
},
{
"docid": "neg:1840586_1",
"text": "Cellular systems are becoming more heterogeneous with the introduction of low power nodes including femtocells, relays, and distributed antennas. Unfortunately, the resulting interference environment is also becoming more complicated, making evaluation of different communication strategies challenging in both analysis and simulation. Leveraging recent applications of stochastic geometry to analyze cellular systems, this paper proposes to analyze downlink performance in a fixed-size cell, which is inscribed within a weighted Voronoi cell in a Poisson field of interferers. A nearest out-of-cell interferer, out-of-cell interferers outside a guard region, and cross-tier interferers are included in the interference calculations. Bounding the interference power as a function of distance from the cell center, the total interference is characterized through its Laplace transform. An equivalent marked process is proposed for the out-of-cell interference under additional assumptions. To facilitate simplified calculations, the interference distribution is approximated using the Gamma distribution with second order moment matching. The Gamma approximation simplifies calculation of the success probability and average rate, incorporates small-scale and large-scale fading, and works with co-tier and cross-tier interference. Simulations show that the proposed model provides a flexible way to characterize outage probability and rate as a function of the distance to the cell edge.",
"title": ""
},
{
"docid": "neg:1840586_2",
"text": "TH1 and TH17 cells mediate neuroinflammation in experimental autoimmune encephalomyelitis (EAE), a mouse model of multiple sclerosis. Pathogenic TH cells in EAE must produce the pro-inflammatory cytokine granulocyte-macrophage colony stimulating factor (GM-CSF). TH cell pathogenicity in EAE is also regulated by cell-intrinsic production of the immunosuppressive cytokine interleukin 10 (IL-10). Here we demonstrate that mice deficient for the basic helix-loop-helix (bHLH) transcription factor Bhlhe40 (Bhlhe40(-/-)) are resistant to the induction of EAE. Bhlhe40 is required in vivo in a T cell-intrinsic manner, where it positively regulates the production of GM-CSF and negatively regulates the production of IL-10. In vitro, GM-CSF secretion is selectively abrogated in polarized Bhlhe40(-/-) TH1 and TH17 cells, and these cells show increased production of IL-10. Blockade of IL-10 receptor in Bhlhe40(-/-) mice renders them susceptible to EAE. These findings identify Bhlhe40 as a critical regulator of autoreactive T-cell pathogenicity.",
"title": ""
},
{
"docid": "neg:1840586_3",
"text": "In this paper we propose a methodology to control a novel class of actuators that we called passive noise rejection variable stiffness actuators (pnrVSA). Differently from nowadays classical VSA designs, this novel class of actuators mimics the human musculoskeletal ability to increase noise rejection without relying on feedback. To fully highlight the potentialities behind these actuators we consider movement planning under two constraints: (1) absence of feedback, i.e. purely open-loop planning1; (2) uncertain dynamic model. Under these constraints, movement planning can be formalized as an open-loop stochastic optimal control. Due to the lack of classical methods forcing the open-loop nature of the computed solution, we used here a slight modification of available methodologies based on importance sampling of trajectories using forward diffusion processes. Simulations show that the proposed algorithm can be effectively used to plan open-loop movements with pnrVSA. In particular, two different scenarios are considered: the control of a single joint pnrVSA and the control of a two degrees of freedom planar arm equipped with antagonist pnrVSAs at each joint. In both cases, movement has to be planned in presence of uncertain dynamics for unstable tasks. It is shown that open-loop stochastic optimal control can modulate the intrinsic stiffness of the system to cope with both instability and noise.",
"title": ""
},
{
"docid": "neg:1840586_4",
"text": "The successful integration of Information and Communications Technology (ICT) into the teaching and learning of English Language is largely dependent on the level of teacher’s ICT competence, the actual utilization of ICT in the language classroom and factors that challenge teachers to use it in language teaching. The study therefore assessed the Secondary School English language teachers’ ICT literacy, the extent of ICT utilization in English language teaching and the challenges that prevent language teachers to integrate ICT in teaching. To answer the problems, three sets of survey questionnaires were distributed to 30 English teachers in the 11 schools of Cluster 1 (CarCanMadCarLan). Data gathered were analyzed using descriptive statistics and frequency count. The results revealed that the teachers’ ICT literacy was moderate. The findings provided evidence that there was only a limited use of ICT in language teaching. Feedback gathered from questionnaires show that teachers faced many challenges that demotivate them from using ICT in language activities. Based on these findings, it is recommended the teachers must be provided with intensive ICT-based trainings to equip them with knowledge of ICT and its utilization in language teaching. School administrators as well as stakeholders may look for interventions to upgrade school’s ICTbased resources for its optimum use in teaching and learning. Most importantly, a larger school-wide ICT development plan may be implemented to ensure coherence of ICT implementation in the teaching-learning activities. ‘ICT & Innovations in Education’ International Journal International Electronic Journal | ISSN 2321 – 7189 | www.ictejournal.com Volume 2, Issue 1 | February 2014",
"title": ""
},
{
"docid": "neg:1840586_5",
"text": "This paper provides a survey of modern LIght Detection And Ranging (LIDAR) sensors from a perspective of how they can be used for spacecraft relative navigation. In addition to LIDAR technology commonly used in space applications today (e.g. scanning, flash), this paper reviews emerging LIDAR technologies gaining traction in other non-aerospace fields. The discussion will include an overview of sensor operating principles and specific pros/cons for each type of LIDAR. This paper provides a comprehensive review of LIDAR technology as applied specifically to spacecraft relative navigation.",
"title": ""
},
{
"docid": "neg:1840586_6",
"text": "Data mining is the process of discovering meaningful new correlation, patterns and trends by sifting through large amounts of data, using pattern recognition technologies as well as statistical and mathematical techniques. Cluster analysis is often used as one of the major data analysis technique widely applied for many practical applications in emerging areas of data mining. Two of the most delegated, partition based clustering algorithms namely k-Means and Fuzzy C-Means are analyzed in this research work. These algorithms are implemented by means of practical approach to analyze its performance, based on their computational time. The telecommunication data is the source data for this analysis. The connection oriented broad band data is used to find the performance of the chosen algorithms. The distance (Euclidian distance) between the server locations and their connections are rearranged after processing the data. The computational complexity (execution time) of each algorithm is analyzed and the results are compared with one another. By comparing the result of this practical approach, it was found that the results obtained are more accurate, easy to understand and above all the time taken to process the data was substantially high in Fuzzy C-Means algorithm than the k-Means. © 2014 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840586_7",
"text": "Compressive sensing is a revolutionary idea proposed recently to achieve much lower sampling rate for sparse signals. For large wireless sensor networks, the events are relatively sparse compared with the number of sources. Because of deployment cost, the number of sensors is limited, and due to energy constraint, not all the sensors are turned on all the time. In this paper, the first contribution is to formulate the problem for sparse event detection in wireless sensor networks as a compressive sensing problem. The number of (wake-up) sensors can be greatly reduced to the similar level of the number of sparse events, which is much smaller than the total number of sources. Second, we suppose the event has the binary nature, and employ the Bayesian detection using this prior information. Finally, we analyze the performance of the compressive sensing algorithms under the Gaussian noise. From the simulation results, we show that the sampling rate can reduce to 25% without sacrificing performance. With further decreasing the sampling rate, the performance is gradually reduced until 10% of sampling rate. Our proposed detection algorithm has much better performance than the l1-magic algorithm proposed in the literature.",
"title": ""
},
{
"docid": "neg:1840586_8",
"text": "Massively multiplayer game holds a huge market in the digital entertainment industry. Companies invest heavily in the game and graphics development since a successful online game can attract million of users, and this translates to a huge investment payoff. However, multiplayer online game is also subjected to various forms of hacks and cheats. Hackers can alter the graphic rendering to reveal information otherwise be hidden in a normal game, or cheaters can use software robot to play the game automatically and gain an unfair advantage. Currently, some popular online games release software patches or incorporate anti-cheating software to detect known cheats. This not only creates deployment difficulty but new cheats will still be able to breach the normal game logic until software patches are available. Moreover, the anti-cheating software themselves are also vulnerable to hacks. In this paper, we propose a scalable and efficient method to detect whether a player is cheating or not. The methodology is based on the dynamic Bayesian network approach. The detection framework relies solely on the game states and runs in the game server only. Therefore it is invulnerable to hacks and it is a much more deployable solution. To demonstrate the effectiveness of the propose method, we implement a prototype multiplayer game system and to detect whether a player is using the “aiming robot” for cheating or not. Experiments show that not only we can effectively detect cheaters, but the false positive rate is extremely low. We believe the proposed methodology and the prototype system provide a first step toward a systematic study of cheating detection and security research in the area of online multiplayer games.",
"title": ""
},
{
"docid": "neg:1840586_9",
"text": "Established in 1987, the EuroQol Group initially comprised a network of international, multilingual and multidisciplinary researchers from seven centres in Finland, the Netherlands, Norway, Sweden and the UK. Nowadays, the Group comprises researchers from Canada, Denmark, Germany, Greece, Japan, New Zealand, Slovenia, Spain, the USA and Zimbabwe. The process of shared development and local experimentation resulted in EQ-5D, a generic measure of health status that provides a simple descriptive profile and a single index value that can be used in the clinical and economic evaluation of health care and in population health surveys. Currently, EQ-5D is being widely used in different countries by clinical researchers in a variety of clinical areas. EQ-5D is also being used by eight out of the first 10 of the top 50 pharmaceutical companies listed in the annual report of Pharma Business (November/December 1999). Furthermore, EQ-5D is one of the handful of measures recommended for use in cost-effectiveness analyses by the Washington Panel on Cost Effectiveness in Health and Medicine. EQ-5D has now been translated into most major languages with the EuroQol Group closely monitoring the process.",
"title": ""
},
{
"docid": "neg:1840586_10",
"text": "Massive digital acquisition and preservation of deteriorating historical and artistic documents is of particular importance due to their value and fragile condition. The study and browsing of such digital libraries is invaluable for scholars in the Cultural Heritage field but requires automatic tools for analyzing and indexing these datasets. We present two completely automatic methods requiring no human intervention: text height estimation and text line extraction. Our proposed methods have been evaluated on a huge heterogeneous corpus of illuminated medieval manuscripts of different writing styles and with various problematic attributes, such as holes, spots, ink bleed-through, ornamentation, background noise, and overlapping text lines. Our experimental results demonstrate that these two new methods are efficient and reliable, even when applied to very noisy and damaged old handwritten manuscripts.",
"title": ""
},
{
"docid": "neg:1840586_11",
"text": "In educational contexts, understanding the student’s learning must take account of the student’s construction of reality. Reality as experienced by the student has an important additional value. This assumption also applies to a student’s perception of evaluation and assessment. Students’ study behaviour is not only determined by the examination or assessment modes that are used. Students’ perceptions about evaluation methods also play a significant role. This review aims to examine evaluation and assessment from the student’s point of view. Research findings reveal that students’ perceptions about assessment significantly influence their approaches to learning and studying. Conversely, students’ approaches to study influence the ways in which they perceive evaluation and assessment. Findings suggest that students hold strong views about different assessment and evaluation formats. In this respect students favour multiple-choice format exams to essay type questions. However, when compared with more innovative assessment methods, students call the ‘fairness’ of these well-known evaluation modes into question.",
"title": ""
},
{
"docid": "neg:1840586_12",
"text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.",
"title": ""
},
{
"docid": "neg:1840586_13",
"text": "The key limiting factor in graphical model inference and learning is the complexity of the partition function. We thus ask the question: what are the most general conditions under which the partition function is tractable? The answer leads to a new kind of deep architecture, which we call sum-product networks (SPNs) and will present in this abstract.",
"title": ""
},
{
"docid": "neg:1840586_14",
"text": "Subband adaptive filtering (SAF) techniques play a prominent role in designing active noise control (ANC) systems. They reduce the computational complexity of ANC algorithms, particularly, when the acoustic noise is a broadband signal and the system models have long impulse responses. In the commonly used uniform-discrete Fourier transform (DFT)-modulated (UDFTM) filter banks, increasing the number of subbands decreases the computational burden but can introduce excessive distortion, degrading performance of the ANC system. In this paper, we propose a new UDFTM-based adaptive subband filtering method that alleviates the degrading effects of the delay and side-lobe distortion introduced by the prototype filter on the system performance. The delay in filter bank is reduced by prototype filter design and the side-lobe distortion is compensated for by oversampling and appropriate stacking of subband weights. Experimental results show the improvement of performance and computational complexity of the proposed method in comparison to two commonly used subband and block adaptive filtering algorithms.",
"title": ""
},
{
"docid": "neg:1840586_15",
"text": "Many common events in our daily life affect us in positive and negative ways. For example, going on vacation is typically an enjoyable event, while being rushed to the hospital is an undesirable event. In narrative stories and personal conversations, recognizing that some events have a strong affective polarity is essential to understand the discourse and the emotional states of the affected people. However, current NLP systems mainly depend on sentiment analysis tools, which fail to recognize many events that are implicitly affective based on human knowledge about the event itself and cultural norms. Our goal is to automatically acquire knowledge of stereotypically positive and negative events from personal blogs. Our research creates an event context graph from a large collection of blog posts and uses a sentiment classifier and semi-supervised label propagation algorithm to discover affective events. We explore several graph configurations that propagate affective polarity across edges using local context, discourse proximity, and event-event co-occurrence. We then harvest highly affective events from the graph and evaluate the agreement of the polarities with human judgements.",
"title": ""
},
{
"docid": "neg:1840586_16",
"text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic",
"title": ""
},
{
"docid": "neg:1840586_17",
"text": "Memetic computation is a paradigm that uses the notion of meme(s) as units of information encoded in computational representations for the purpose of problem-solving. It covers a plethora of potentially rich meme-inspired computing methodologies, frameworks and operational algorithms including simple hybrids, adaptive hybrids and memetic automaton. In this paper, a comprehensive multi-facet survey of recent research in memetic computation is presented.",
"title": ""
},
{
"docid": "neg:1840586_18",
"text": "Matching user accounts can help us build better users’ profiles and benefit many applications. It has attracted much attention from both industry and academia. Most of existing works are mainly based on rich user profile attributes. However, in many cases, user profile attributes are unavailable, incomplete or unreliable, either due to the privacy settings or just because users decline to share their information. This makes the existing schemes quite fragile. Users often share their activities on different social networks. This provides an opportunity to overcome the above problem. We aim to address the problem of user identification based on User Generated Content (UGC). We first formulate the problem of user identification based on UGCs and then propose a UGC-based user identification model. A supervised machine learning based solution is presented. It has three steps: firstly, we propose several algorithms to measure the spatial similarity, temporal similarity and content similarity of two UGCs; secondly, we extract the spatial, temporal and content features to exploit these similarities; afterwards, we employ the machine learning method to match user accounts, and conduct the experiments on three ground truth datasets. The results show that the proposed method has given excellent performance with F1 values reaching 89.79%, 86.78% and 86.24% on three ground truth datasets, respectively. This work presents the possibility of matching user accounts with high accessible online data. © 2018 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840586_19",
"text": "We report the implementation of a text input application (speller) based on the P300 event related potential. We obtain high accuracies by using an SVM classifier and a novel feature. These techniques enable us to maintain fast performance without sacrificing the accuracy, thus making the speller usable in an online mode. In order to further improve the usability, we perform various studies on the data with a view to minimizing the training time required. We present data collected from nine healthy subjects, along with the high accuracies (of the order of 95% or more) measured online. We show that the training time can be further reduced by a factor of two from its current value of about 20 min. High accuracy, fast learning, and online performance make this P300 speller a potential communication tool for severely disabled individuals, who have lost all other means of communication and are otherwise cut off from the world, provided their disability does not interfere with the performance of the speller.",
"title": ""
}
] |
1840587 | Big data technologies and Management: What conceptual modeling can do | [
{
"docid": "pos:1840587_0",
"text": "The Internet of Things (IoT) shall be able to incorporate transparently and seamlessly a large number of different and heterogeneous end systems, while providing open access to selected subsets of data for the development of a plethora of digital services. Building a general architecture for the IoT is hence a very complex task, mainly because of the extremely large variety of devices, link layer technologies, and services that may be involved in such a system. In this paper, we focus specifically to an urban IoT system that, while still being quite a broad category, are characterized by their specific application domain. Urban IoTs, in fact, are designed to support the Smart City vision, which aims at exploiting the most advanced communication technologies to support added-value services for the administration of the city and for the citizens. This paper hence provides a comprehensive survey of the enabling technologies, protocols, and architecture for an urban IoT. Furthermore, the paper will present and discuss the technical solutions and best-practice guidelines adopted in the Padova Smart City project, a proof-of-concept deployment of an IoT island in the city of Padova, Italy, performed in collaboration with the city municipality.",
"title": ""
},
{
"docid": "pos:1840587_1",
"text": "Purpose – The paper aims to focus on so-called NoSQL databases in the context of cloud computing. Design/methodology/approach – Architectures and basic features of these databases are studied, particularly their horizontal scalability and concurrency model, that is mostly weaker than ACID transactions in relational SQL-like database systems. Findings – Some characteristics like a data model and querying capabilities of NoSQL databases are discussed in more detail. Originality/value – The paper shows vary different data models and query possibilities in a common terminology enabling comparison and categorization of NoSQL databases.",
"title": ""
}
] | [
{
"docid": "neg:1840587_0",
"text": "Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction. Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.",
"title": ""
},
{
"docid": "neg:1840587_1",
"text": "In this paper we present a case study of frequent surges of unusually high rail-to-earth potential values at Taipei Rapid Transit System. The rail potential values observed and the resulting stray current flow associated with the diode-ground DC traction system during operation are contradictory to the moderate values on which the grounding of the DC traction system design was based. Thus we conducted both theoretical study and field measurements to obtain better understanding of the phenomenon, and to develop a more accurate algorithm for computing the rail-to-earth potential of the diode-ground DC traction systems.",
"title": ""
},
{
"docid": "neg:1840587_2",
"text": "There is a concerted understanding of the ability of root exudates to influence the structure of rhizosphere microbial communities. However, our knowledge of the connection between plant development, root exudation and microbiome assemblage is limited. Here, we analyzed the structure of the rhizospheric bacterial community associated with Arabidopsis at four time points corresponding to distinct stages of plant development: seedling, vegetative, bolting and flowering. Overall, there were no significant differences in bacterial community structure, but we observed that the microbial community at the seedling stage was distinct from the other developmental time points. At a closer level, phylum such as Acidobacteria, Actinobacteria, Bacteroidetes, Cyanobacteria and specific genera within those phyla followed distinct patterns associated with plant development and root exudation. These results suggested that the plant can select a subset of microbes at different stages of development, presumably for specific functions. Accordingly, metatranscriptomics analysis of the rhizosphere microbiome revealed that 81 unique transcripts were significantly (P<0.05) expressed at different stages of plant development. For instance, genes involved in streptomycin synthesis were significantly induced at bolting and flowering stages, presumably for disease suppression. We surmise that plants secrete blends of compounds and specific phytochemicals in the root exudates that are differentially produced at distinct stages of development to help orchestrate rhizosphere microbiome assemblage.",
"title": ""
},
{
"docid": "neg:1840587_3",
"text": "Direct instruction approaches, as well as the design processes that support them, have been criticized for failing to reflect contemporary research and theory in teaching, learning, and technology. Learning systems are needed that encourage divergent reasoning, problem solving, and critical thinking. Student-centered learning environments have been touted as a means to support such processes. With the emergence of technology, many barriers to implementing innovative alternatives may be overcome. The purposes of this paper are to review and critically analyze research and theory related to technology-enhanced studentcentered learning environments and to identify their foundations and assumptions.",
"title": ""
},
{
"docid": "neg:1840587_4",
"text": "This paper describes a model of cooperative behavior and describes how such a model can be applied in a natural language understanding system. We assume that agents attempt to recognize the plans of other agents and, then, use this plan when deciding what response to make. In particular, we show that, given a setting in which purposeful dialogues occur, this model can account for responses that provide more information that explicitly requested and for appropriate responses to both short sentence fragments and indirect speech acts.",
"title": ""
},
{
"docid": "neg:1840587_5",
"text": "Investigating the nature of system intrusions in large distributed systems remains a notoriously difficult challenge. While monitoring tools (e.g., Firewalls, IDS) provide preliminary alerts through easy-to-use administrative interfaces, attack reconstruction still requires that administrators sift through gigabytes of system audit logs stored locally on hundreds of machines. At present, two fundamental obstacles prevent synergy between system-layer auditing and modern cluster monitoring tools: 1) the sheer volume of audit data generated in a data center is prohibitively costly to transmit to a central node, and 2) systemlayer auditing poses a “needle-in-a-haystack” problem, such that hundreds of employee hours may be required to diagnose a single intrusion. This paper presents Winnower, a scalable system for auditbased cluster monitoring that addresses these challenges. Our key insight is that, for tasks that are replicated across nodes in a distributed application, a model can be defined over audit logs to succinctly summarize the behavior of many nodes, thus eliminating the need to transmit redundant audit records to a central monitoring node. Specifically, Winnower parses audit records into provenance graphs that describe the actions of individual nodes, then performs grammatical inference over individual graphs using a novel adaptation of Deterministic Finite Automata (DFA) Learning to produce a behavioral model of many nodes at once. This provenance model can be efficiently transmitted to a central node and used to identify anomalous events in the cluster. We have implemented Winnower for Docker Swarm container clusters and evaluate our system against real-world applications and attacks. We show that Winnower dramatically reduces storage and network overhead associated with aggregating system audit logs, by as much as 98%, without sacrificing the important information needed for attack investigation. Winnower thus represents a significant step forward for security monitoring in distributed systems.",
"title": ""
},
{
"docid": "neg:1840587_6",
"text": "Trichostasis spinulosa is a common disorder of follicular hyperkeratosis that is often confused clinically with similar disorders, such as keratosis pilaris and eruptive vellus hair cysts. Six patients from the UTMB dermatology clinic who had trichostasis spinulosa are presented. Two of the six also had keratosis pilaris and one had eruptive vellus hair cysts. The present study was undertaken to compare and contrast the clinical presentation and histopathologic appearance of these three disorders. The results of the study and review of the literature revealed differences in distribution of lesions and microscopic appearance of follicular and histopathologic material.",
"title": ""
},
{
"docid": "neg:1840587_7",
"text": "We propose an automated breast cancer triage CAD system using machine vision on low-cost, portable ultrasound imaging devices. We demonstrate that the triage CAD software can effectively analyze images captured by minimally-trained operators and output one of three assessments - benign, probably benign (6-month follow-up recommended) and suspicious (biopsy recommended). This system opens up the possibility of offering practical, cost-effective breast cancer diagnosis for symptomatic women in economically developing countries.",
"title": ""
},
{
"docid": "neg:1840587_8",
"text": "This paper is concerned with graphical criteria that can be used to solve the problem of identifying casual effects from nonexperimental data in a causal Bayesian network structure, i.e., a directed acyclic graph that represents causal relationships. We first review Pearl’s work on this topic [Pearl, 1995], in which several useful graphical criteria are presented. Then we present a complete algorithm [Huang and Valtorta, 2006b] for the identifiability problem. By exploiting the completeness of this algorithm, we prove that the three basicdo-calculus rulesthat Pearl presents are complete, in the sense that, if a causal effect is identifiable, there exists a sequence of applications of the rules of the do-calculus that transforms the causal effect formula into a formula that only includes observational quantities.",
"title": ""
},
{
"docid": "neg:1840587_9",
"text": "This article focuses on the traffic coordination problem at traffic intersections. We present a decentralized coordination approach, combining optimal control with model-based heuristics. We show how model-based heuristics can lead to low-complexity solutions that are suitable for a fast online implementation, and analyze its properties in terms of efficiency, feasibility and optimality. Finally, simulation results for different scenarios are also presented.",
"title": ""
},
{
"docid": "neg:1840587_10",
"text": "This paper presents the method that underlies our submission to the untrimmed video classification task of ActivityNet Challenge 2016. We follow the basic pipeline of temporal segment networks [ 16] and further raise the performance via a number of other techniques. Specifically, we use the latest deep model architecture, e.g., ResNet and Inception V3, and introduce new aggregation schemes (top-k and attention-weighted pooling). Additionally, we incorp rate the audio as a complementary channel, extracting relevant information via a CNN applied to the spectrograms. With these techniques, we derive an ensemble of deep models, which, together, attains a high classification accurac y (mAP93.23%) on the testing set and secured the first place in the challenge.",
"title": ""
},
{
"docid": "neg:1840587_11",
"text": "The accuracy of object classifiers can significantly drop when the training data (source domain) and the application scenario (target domain) have inherent differences. Therefore, adapting the classifiers to the scenario in which they must operate is of paramount importance. We present novel domain adaptation (DA) methods for object detection. As proof of concept, we focus on adapting the state-of-the-art deformable part-based model (DPM) for pedestrian detection. We introduce an adaptive structural SVM (A-SSVM) that adapts a pre-learned classifier between different domains. By taking into account the inherent structure in feature space (e.g., the parts in a DPM), we propose a structure-aware A-SSVM (SA-SSVM). Neither A-SSVM nor SA-SSVM needs to revisit the source-domain training data to perform the adaptation. Rather, a low number of target-domain training examples (e.g., pedestrians) are used. To address the scenario where there are no target-domain annotated samples, we propose a self-adaptive DPM based on a self-paced learning (SPL) strategy and a Gaussian Process Regression (GPR). Two types of adaptation tasks are assessed: from both synthetic pedestrians and general persons (PASCAL VOC) to pedestrians imaged from an on-board camera. Results show that our proposals avoid accuracy drops as high as 15 points when comparing adapted and non-adapted detectors.",
"title": ""
},
{
"docid": "neg:1840587_12",
"text": "Active learning strategies respond to the costly labelling task in a supervised classification by selecting the most useful unlabelled examples in training a predictive model. Many conventional active learning algorithms focus on refining the decision boundary, rather than exploring new regions that can be more informative. In this setting, we propose a sequential algorithm named EG − Active that can improve any Active learning algorithm by an optimal random exploration. Experimental results show a statistically significant and appreciable improvement in the performance of our new approach over the existing active feedback methods.",
"title": ""
},
{
"docid": "neg:1840587_13",
"text": "Most studies on TCP over multi-hop wireless ad hoc networks have only addressed the issue of performance degradation due to temporarily broken routes, which results in TCP inability to distinguish between losses due to link failures or congestion. This problem tends to become more serious as network mobility increases. In this work, we tackle the equally important capture problem to which there has been little or no solution, and is present mostly in static and low mobility multihop wireless networks. This is a result of the interplay between the MAC layer and TCP backoff policies, which causes nodes to unfairly capture the wireless shared medium, hence preventing neighboring nodes to access the channel. This has been shown to have major negative effects on TCP performance comparable to the impact of mobility. We propose a novel algorithm, called COPAS (COntention-based PAth Selection), which incorporates two mechanisms to enhance TCP performance by avoiding capture conditions. First, it uses disjoint forward (sender to receiver for TCP data) and reverse (receiver to sender for TCP ACKs) paths in order to minimize the conflicts of TCP data and ACK packets. Second, COPAS employs a dynamic contentionbalancing scheme where it continuously monitors and changes forward and reverse paths according to the level of MAC layer contention, hence minimizing the likelihood of capture. Through extensive simulation, COPAS is shown to improve TCP throughput by up to 90% while keeping routing overhead low.",
"title": ""
},
{
"docid": "neg:1840587_14",
"text": "It is often a difficult task to accurately segment images with intensity inhomogeneity, because most of representative algorithms are region-based that depend on intensity homogeneity of the interested object. In this paper, we present a novel level set method for image segmentation in the presence of intensity inhomogeneity. The inhomogeneous objects are modeled as Gaussian distributions of different means and variances in which a sliding window is used to map the original image into another domain, where the intensity distribution of each object is still Gaussian but better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying a bias field with the original signal within the window. A maximum likelihood energy functional is then defined on the whole image region, which combines the bias field, the level set function, and the piecewise constant function approximating the true image signal. The proposed level set method can be directly applied to simultaneous segmentation and bias correction for 3 and 7T magnetic resonance images. Extensive evaluation on synthetic and real-images demonstrate the superiority of the proposed method over other representative algorithms.",
"title": ""
},
{
"docid": "neg:1840587_15",
"text": "Background: Dental rehabilitation of partially or totally edentulous patients with oral implants has become a routine treatment modality in the last decades, with reliable long-term results. However, unfavorable local conditions of the alveolar ridge, due to atrophy, periodontal disease, and trauma sequelae may provide insufficient bone volume or unfavorable vertical, horizontal, and sagittal intermaxillary relationships, which may render implant placement impossible or incorrect from a functional and esthetic viewpoint. The aim of the current review is to discuss the different strategies for reconstruction of the alveolar ridge defect for implant placement. Study design: The study design includes a literature review of the articles that address the association between Reconstruction of Mandibular Alveolar Ridge Defects and Implant Placement. Results: Yet, despite an increasing number of publications related to the correction of deficient alveolar ridges, much controversy still exists concerning which is the more suitable and reliable technique. This is often because the publications are of insufficient methodological quality (inadequate sample size, lack of well-defined exclusion and inclusion criteria, insufficient follow-up, lack of well-defined success criteria, etc.). Conclusion: On the basis of available data it is difficult to conclude that a particular surgical procedure offered better outcome as compared to another. Hence the practical use of the available bone augmentation procedures for dental implants depends on the clinician’s preference in general and the clinical findings in the patient in particular. Surgical techniques that reduce trauma, preserve and augment the alveolar ridge represent key areas in the goal to optimize implant results.",
"title": ""
},
{
"docid": "neg:1840587_16",
"text": "Code-switching (CS) refers to a linguistic phenomenon where a speaker uses different languages in an utterance or between alternating utterances. In this work, we study end-to-end (E2E) approaches to the Mandarin-English code-switching speech recognition (CSSR) task. We first examine the effectiveness of using data augmentation and byte-pair encoding (BPE) subword units. More importantly, we propose a multitask learning recipe, where a language identification task is explicitly learned in addition to the E2E speech recognition task. Furthermore, we introduce an efficient word vocabulary expansion method for language modeling to alleviate data sparsity issues under the code-switching scenario. Experimental results on the SEAME data, a Mandarin-English CS corpus, demonstrate the effectiveness of the proposed methods.",
"title": ""
},
{
"docid": "neg:1840587_17",
"text": "References 1. Guan, P., Weiss, A., Balan, A., Black, M.J.: Estimating human shape and pose from a single image. ICCV 2009 2. Pishchulin, L., Insafutdinov, E., Tang, S., Andres, B., Andriluka, M., Gehler, P., Schiele, B.: DeepCut: Joint subset partition and labeling for multi person pose estimation. CVPR 2016 3. Loper, M., Mahmood, N., Romero, J., Pons-Moll, G., Black, M.J.: SMPL: A skinned multi-person linear model. SIGGRAPH Asia 2015 4. Akhter, I., Black, M.J.: Pose-conditioned joint angle limits for 3D human pose reconstruction. CVPR 2015 5. Ramakrishna, V., Kanade, T., Sheikh, Y.: Reconstructing 3D Human Pose from 2D Image Landmarks. ECCV 2012 6. Zhou, X., Zhu, M., Leonardos, S., Derpanis, K., Daniilidis, K.: Sparse representation for 3D shape estimation: A convex relaxation approach. CVPR. 2015 Data: Projected joints from 1000 synthetic 3D models + noise.",
"title": ""
},
{
"docid": "neg:1840587_18",
"text": "This paper focuses on modeling resolving and simulations of the inverse kinematics of an anthropomorphic redundant robotic structure with seven degrees of freedom and a workspace similar to human arm. Also the kinematical model and the kinematics equations of the robotic arm are presented. A method of resolving the redundancy of seven degrees of freedom robotic arm is presented using Fuzzy Logic toolbox from MATLAB®.",
"title": ""
},
{
"docid": "neg:1840587_19",
"text": "Mumps Update [October 2017]: The Healthcare Infection Control Practices Advisory Committee (HICPAC) voted to change the recommendation of isolation for persons with mumps from 9 days to 5 days based on a 2008 MMWR report. (https://www.cdc.gov/mmwr/preview/mmwrhtml/mm5740a3.htm accessed September 2018). Ebola Virus Disease Update [August 2014]: The recommendations in this guideline for Ebola has been superseded by these CDC documents: • Infection Prevention and Control Recommendations for Hospitalized Patients with Known or Suspected Ebola Virus Disease in U.S. Hospitals (https://www.cdc.gov/vhf/ebola/clinicians/evd/infection-control.html accessed September 2018) • Interim Guidance for Environmental Infection Control in Hospitals for Ebola Virus (https://www.cdc.gov/vhf/ebola/clinicians/cleaning/hospitals.html accessed September 2018) See CDC’s Ebola Virus Disease website (https://www.cdc.gov/vhf/ebola/ accessed September 2018) for current information on how Ebola virus is transmitted.",
"title": ""
}
] |
1840588 | Land Use Classification in Remote Sensing Images by Convolutional Neural Networks | [
{
"docid": "pos:1840588_0",
"text": "We investigate bag-of-visual-words (BOVW) approaches to land-use classification in high-resolution overhead imagery. We consider a standard non-spatial representation in which the frequencies but not the locations of quantized image features are used to discriminate between classes analogous to how words are used for text document classification without regard to their order of occurrence. We also consider two spatial extensions, the established spatial pyramid match kernel which considers the absolute spatial arrangement of the image features, as well as a novel method which we term the spatial co-occurrence kernel that considers the relative arrangement. These extensions are motivated by the importance of spatial structure in geographic data.\n The methods are evaluated using a large ground truth image dataset of 21 land-use classes. In addition to comparisons with standard approaches, we perform extensive evaluation of different configurations such as the size of the visual dictionaries used to derive the BOVW representations and the scale at which the spatial relationships are considered.\n We show that even though BOVW approaches do not necessarily perform better than the best standard approaches overall, they represent a robust alternative that is more effective for certain land-use classes. We also show that extending the BOVW approach with our proposed spatial co-occurrence kernel consistently improves performance.",
"title": ""
},
{
"docid": "pos:1840588_1",
"text": "In this paper, we evaluate the generalization power of deep features (ConvNets) in two new scenarios: aerial and remote sensing image classification. We evaluate experimentally ConvNets trained for recognizing everyday objects for the classification of aerial and remote sensing images. ConvNets obtained the best results for aerial images, while for remote sensing, they performed well but were outperformed by low-level color descriptors, such as BIC. We also present a correlation analysis, showing the potential for combining/fusing different ConvNets with other descriptors or even for combining multiple ConvNets. A preliminary set of experiments fusing ConvNets obtains state-of-the-art results for the well-known UCMerced dataset.",
"title": ""
}
] | [
{
"docid": "neg:1840588_0",
"text": "Future advanced driver assistance systems will contain multiple sensors that are used for several applications, such as highly automated driving on freeways. The problem is that the sensors are usually asynchronous and their data possibly out-of-sequence, making fusion of the sensor data non-trivial. This paper presents a novel approach to track-to-track fusion for automotive applications with asynchronous and out-of-sequence sensors using information matrix fusion. This approach solves the problem of correlation between sensor data due to the common process noise and common track history, which eliminates the need to replace the global track estimate with the fused local estimate at each fusion cycle. The information matrix fusion approach is evaluated in simulation and its performance demonstrated using real sensor data on a test vehicle designed for highly automated driving on freeways.",
"title": ""
},
{
"docid": "neg:1840588_1",
"text": "The use of deep learning to solve the problems in literary arts has been a recent trend that gained a lot of attention and automated generation of music has been an active area. This project deals with the generation of music using raw audio files in the frequency domain relying on various LSTM architectures. Fully connected and convolutional layers are used along with LSTM’s to capture rich features in the frequency domain and increase the quality of music generated. The work is focused on unconstrained music generation and uses no information about musical structure(notes or chords) to aid learning.The music generated from various architectures are compared using blind fold tests. Using the raw audio to train models is the direction to tapping the enormous amount of mp3 files that exist over the internet without requiring the manual effort to make structured MIDI files. Moreover, not all audio files can be represented with MIDI files making the study of these models an interesting prospect to the future of such models.",
"title": ""
},
{
"docid": "neg:1840588_2",
"text": "BACKGROUND\nThe medical subdomain of a clinical note, such as cardiology or neurology, is useful content-derived metadata for developing machine learning downstream applications. To classify the medical subdomain of a note accurately, we have constructed a machine learning-based natural language processing (NLP) pipeline and developed medical subdomain classifiers based on the content of the note.\n\n\nMETHODS\nWe constructed the pipeline using the clinical NLP system, clinical Text Analysis and Knowledge Extraction System (cTAKES), the Unified Medical Language System (UMLS) Metathesaurus, Semantic Network, and learning algorithms to extract features from two datasets - clinical notes from Integrating Data for Analysis, Anonymization, and Sharing (iDASH) data repository (n = 431) and Massachusetts General Hospital (MGH) (n = 91,237), and built medical subdomain classifiers with different combinations of data representation methods and supervised learning algorithms. We evaluated the performance of classifiers and their portability across the two datasets.\n\n\nRESULTS\nThe convolutional recurrent neural network with neural word embeddings trained-medical subdomain classifier yielded the best performance measurement on iDASH and MGH datasets with area under receiver operating characteristic curve (AUC) of 0.975 and 0.991, and F1 scores of 0.845 and 0.870, respectively. Considering better clinical interpretability, linear support vector machine-trained medical subdomain classifier using hybrid bag-of-words and clinically relevant UMLS concepts as the feature representation, with term frequency-inverse document frequency (tf-idf)-weighting, outperformed other shallow learning classifiers on iDASH and MGH datasets with AUC of 0.957 and 0.964, and F1 scores of 0.932 and 0.934 respectively. We trained classifiers on one dataset, applied to the other dataset and yielded the threshold of F1 score of 0.7 in classifiers for half of the medical subdomains we studied.\n\n\nCONCLUSION\nOur study shows that a supervised learning-based NLP approach is useful to develop medical subdomain classifiers. The deep learning algorithm with distributed word representation yields better performance yet shallow learning algorithms with the word and concept representation achieves comparable performance with better clinical interpretability. Portable classifiers may also be used across datasets from different institutions.",
"title": ""
},
{
"docid": "neg:1840588_3",
"text": "GF (Grammatical Framework) is a grammar formalism based on the distinction between abstract and concrete syntax. An abstract syntax is a free algebra of trees, and a concrete syntax is a mapping from trees to nested records of strings and features. These mappings are naturally defined as functions in a functional programming language; the GF language provides the customary functional programming constructs such as algebraic data types, pattern matching, and higher-order functions, which enable productive grammar writing and linguistic generalizations. Given the seemingly transformational power of the GF language, its computational properties are not obvious. However, all grammars written in GF can be compiled into a simple and austere core language, Canonical GF (CGF). CGF is well suited for implementing parsing and generation with grammars, as well as for proving properties of GF. This paper gives a concise description of both the core and the source language, the algorithm used in compiling GF to CGF, and some back-end optimizations on CGF.",
"title": ""
},
{
"docid": "neg:1840588_4",
"text": "For a microgrid (MG) to participate in a real-time and demand-side bidding market, high-level control strategies aiming at optimizing the operation of the MG are necessary. One of the difficulties for research of a competitive MG power market is the absence of efficient computational tools. Although many commercial power system simulators are available, these power system simulators are usually not directly applicable to solve the optimal power dispatch problem for an MG power market and to perform MG power-flow study. This paper analyzes the typical MG market policies and investigates how these policies can be converted in such a way that one can use commercial power system software for MG power market study. The paper also develops a mechanism suitable for the power-flow study of an MG containing inverter-interfaced distributed energy sources. The extensive simulation analyses are conducted for grid-tied and islanded operations of a benchmark MG network.",
"title": ""
},
{
"docid": "neg:1840588_5",
"text": "The aim of the study was to investigate prospectively the direction of the relationship between adolescent girls' body dissatisfaction and self-esteem. Participants were 242 female high school students who completed questionnaires at two points in time, separated by 2 years. The questionnaire contained measures of weight (BMI), body dissatisfaction (perceived overweight, figure dissatisfaction, weight satisfaction) and self-esteem. Initial body dissatisfaction predicted self-esteem at Time 1 and Time 2, and initial self-esteem predicted body dissatisfaction at Time 1 and Time 2. However, linear panel analysis (regression analyses controlling for Time 1 variables) found that aspects of Time 1 weight and body dissatisfaction predicted change in self-esteem, but not vice versa. It was concluded that young girls with heavier actual weight and perceptions of being overweight were particularly vulnerable to developing low self-esteem.",
"title": ""
},
{
"docid": "neg:1840588_6",
"text": "Analyzing customer feedback is the best way to channelize the data into new marketing strategies that benefit entrepreneurs as well as customers. Therefore an automated system which can analyze the customer behavior is in great demand. Users may write feedbacks in any language, and hence mining appropriate information often becomes intractable. Especially in a traditional feature-based supervised model, it is difficult to build a generic system as one has to understand the concerned language for finding the relevant features. In order to overcome this, we propose deep Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) based approaches that do not require handcrafting of features. We evaluate these techniques for analyzing customer feedback sentences on four languages, namely English, French, Japanese and Spanish. Our empirical analysis shows that our models perform well in all the four languages on the setups of IJCNLP Shared Task on Customer Feedback Analysis. Our model achieved the second rank in French, with an accuracy of 71.75% and third ranks for all the other languages.",
"title": ""
},
{
"docid": "neg:1840588_7",
"text": "The authors would like to thank the Marketing Science Institute for their generous assistance in funding this research. We would also like to thank Claritas for providing us with data. We are indebted to Vincent Bastien, former CEO of Louis Vuitton, for the time he has spent with us critiquing our framework.",
"title": ""
},
{
"docid": "neg:1840588_8",
"text": "Photoluminescent graphene quantum dots (GQDs) have received enormous attention because of their unique chemical, electronic and optical properties. Here a series of GQDs were synthesized under hydrothermal processes in order to investigate the formation process and optical properties of N-doped GQDs. Citric acid (CA) was used as a carbon precursor and self-assembled into sheet structure in a basic condition and formed N-free GQD graphite framework through intermolecular dehydrolysis reaction. N-doped GQDs were prepared using a series of N-containing bases such as urea. Detailed structural and property studies demonstrated the formation mechanism of N-doped GQDs for tunable optical emissions. Hydrothermal conditions promote formation of amide between -NH₂ and -COOH with the presence of amine in the reaction. The intramoleculur dehydrolysis between neighbour amide and COOH groups led to formation of pyrrolic N in the graphene framework. Further, the pyrrolic N transformed to graphite N under hydrothermal conditions. N-doping results in a great improvement of PL quantum yield (QY) of GQDs. By optimized reaction conditions, the highest PL QY (94%) of N-doped GQDs was obtained using CA as a carbon source and ethylene diamine as a N source. The obtained N-doped GQDs exhibit an excitation-independent blue emission with single exponential lifetime decay.",
"title": ""
},
{
"docid": "neg:1840588_9",
"text": "A solar energy semiconductor cooling box is presented in the paper. The cooling box is compact and easy to carry, can be made a special refrigeration unit which is smaller according to user needs. The characteristics of the cooling box are its simple use and maintenance, safe performance, decentralized power supply, convenient energy storage, no environmental pollution, and so on. In addition, compared with the normal mechanical refrigeration, the semiconductor refrigeration system which makes use of Peltier effect does not require pumps, compressors and other moving parts, and so there is no wear and noise. It does not require refrigerant so it will not produce environmental pollution, and it also eliminates the complex transmission pipeline. The concrete realization form of power are “heat - electric - cold”, “light - electric - cold”, “light - heat - electric - cold”. In order to achieve the purpose of cooling, solar cells generate electricity to drive the semiconductor cooling devices. The working principle is mainly photovoltaic effect and the Peltier effect.",
"title": ""
},
{
"docid": "neg:1840588_10",
"text": "Margaret-Anne Storey University of Victoria Victoria, BC, Canada [email protected] Abstract Modern software developers rely on an extensive set of social media tools and communication channels. The adoption of team communication platforms has led to the emergence of conversation-based tools and integrations, many of which are chatbots. Understanding how software developers manage their complex constellation of collaborators in conjunction with the practices and tools they use can bring valuable insights into socio-technical collaborative work in software development and other knowledge work domains.",
"title": ""
},
{
"docid": "neg:1840588_11",
"text": "In most cases authors are permitted to post their version of the article (e.g. in Word or Tex form) to their personal website or institutional repository. Authors requiring further information regarding Elsevier's archiving and manuscript policies are encouraged to visit: Summary. — A longitudinal anthropological study of cotton farming in Warangal District of Andhra Pradesh, India, compares a group of villages before and after adoption of Bt cotton. It distinguishes \" field-level \" and \" farm-level \" impacts. During this five-year period yields rose by 18% overall, with greater increases among poor farmers with the least access to information. Insecticide sprayings dropped by 55%, although predation by non-target pests was rising. However shifting from the field to the historically-situated context of the farm recasts insect attacks as a symptom of larger problems in agricultural decision-making. Bt cotton's opponents have failed to recognize real benefits at the field level, while its backers have failed to recognize systemic problems that Bt cotton may exacerbate.",
"title": ""
},
{
"docid": "neg:1840588_12",
"text": "BACKGROUND\nPossible associations between television viewing and video game playing and children's aggression have become public health concerns. We did a systematic review of studies that examined such associations, focussing on children and young people with behavioural and emotional difficulties, who are thought to be more susceptible.\n\n\nMETHODS\nWe did computer-assisted searches of health and social science databases, gateways, publications from relevant organizations and for grey literature; scanned bibliographies; hand-searched key journals; and corresponded with authors. We critically appraised all studies.\n\n\nRESULTS\nA total of 12 studies: three experiments with children with behavioural and emotional difficulties found increased aggression after watching aggressive as opposed to low-aggressive content television programmes, one found the opposite and two no clear effect, one found such children no more likely than controls to imitate aggressive television characters. One case-control study and one survey found that children and young people with behavioural and emotional difficulties watched more television than controls; another did not. Two studies found that children and young people with behavioural and emotional difficulties viewed more hours of aggressive television programmes than controls. One study on video game use found that young people with behavioural and emotional difficulties viewed more minutes of violence and played longer than controls. In a qualitative study children with behavioural and emotional difficulties, but not their parents, did not associate watching television with aggression. All studies had significant methodological flaws. None was based on power calculations.\n\n\nCONCLUSION\nThis systematic review found insufficient, contradictory and methodologically flawed evidence on the association between television viewing and video game playing and aggression in children and young people with behavioural and emotional difficulties. If public health advice is to be evidence-based, good quality research is needed.",
"title": ""
},
{
"docid": "neg:1840588_13",
"text": "The findings of 54 research studies were integrated through meta-analysis to determine the effects of calculators on student achievement and attitude levels. Effect sizes were generated through Glassian techniques of meta-analysis, and Hedges and Olkin’s (1985) inferential statistical methods were used to test the significance of effect size data. Results revealed that students’ operational skills and problem-solving skills improved when calculators were an integral part of testing and instruction. The results for both skill types were mixed when calculators were not part of assessment, but in all cases, calculator use did not hinder the development of mathematical skills. Students using calculators had better attitudes toward mathematics than their noncalculator counterparts. Further research is needed in the retention of mathematics skills after instruction and transfer of skills to other mathematics-related subjects.",
"title": ""
},
{
"docid": "neg:1840588_14",
"text": "C. Midgley et al. (2001) raised important questions about the effects of performance-approach goals. The present authors disagree with their characterization of the research findings and implications for theory. They discuss 3 reasons to revise goal theory: (a) the importance of separating approach from avoidance strivings, (b) the positive potential of performance-approach goals, and (c) identification of the ways performance-approach goals can combine with mastery goals to promote optimal motivation. The authors review theory and research to substantiate their claim that goal theory is in need of revision, and they endorse a multiple goal perspective. The revision of goal theory is underway and offers a more complex, but necessary, perspective on important issues of motivation, learning, and achievement.",
"title": ""
},
{
"docid": "neg:1840588_15",
"text": "The Facial Action Coding System (FACS) (Ekman & Friesen, 1978) is a comprehensive and widely used method of objectively describing facial activity. Little is known, however, about inter-observer reliability in coding the occurrence, intensity, and timing of individual FACS action units. The present study evaluated the reliability of these measures. Observational data came from three independent laboratory studies designed to elicit a wide range of spontaneous expressions of emotion. Emotion challenges included olfactory stimulation, social stress, and cues related to nicotine craving. Facial behavior was video-recorded and independently scored by two FACS-certified coders. Overall, we found good to excellent reliability for the occurrence, intensity, and timing of individual action units and for corresponding measures of more global emotion-specified combinations.",
"title": ""
},
{
"docid": "neg:1840588_16",
"text": "Homocysteine (HCY) is a degradation product of the methionine pathway. The B vitamins, in particular vitamin B12 and folate, are the primary nutritional determinant of HCY levels and therefore their deficiencies result in hyperhomocysteinaemia (HHCY). Prevalence of hyperhomocysteinemia (HHCY) and related dietary deficiencies in B vitamins and folate increase with age and have been related to osteoporosis and abnormal development of epiphyseal cartilage and bone in rodents. Here we provide a review of experimental and population studies. The negative effects of HHCY and/or B vitamins and folate deficiencies on bone formation and remodeling are documented by cell models, including primary osteoblasts, osteoclast and bone progenitor cells as well as by animal and human studies. However, underlying pathophysiological mechanisms are complex and remain poorly understood. Whether these associations are the direct consequences of impaired one carbon metabolism is not clarified and more studies are still needed to translate these findings to human population. To date, the evidence is limited and somewhat conflicting, however further trials in groups most vulnerable to impaired one carbon metabolism are required.",
"title": ""
},
{
"docid": "neg:1840588_17",
"text": "This research deals with a vital and important issue in computer world. It is concerned with the software management processes that examine the area of software development through the development models, which are known as software development life cycle. It represents five of the development models namely, waterfall, Iteration, V-shaped, spiral and Extreme programming. These models have advantages and disadvantages as well. Therefore, the main objective of this research is to represent different models of software development and make a comparison between them to show the features and defects of each model.",
"title": ""
},
{
"docid": "neg:1840588_18",
"text": "Host-based security tools such as anti-virus and intrusion detection systems are not adequately protected on today's computers. Malware is often designed to immediately disable any security tools upon installation, rendering them useless. While current research has focused on moving these vulnerable security tools into an isolated virtual machine, this approach cripples security tools by preventing them from doing active monitoring. This paper describes an architecture that takes a hybrid approach, giving security tools the ability to do active monitoring while still benefiting from the increased security of an isolated virtual machine. We discuss the architecture and a prototype implementation that can process hooks from a virtual machine running Windows XP on Xen. We conclude with a security analysis and show the performance of a single hook to be 28 musecs in the best case.",
"title": ""
}
] |
1840589 | Multi-label hypothesis reuse | [
{
"docid": "pos:1840589_0",
"text": "Multi-label learning originated from the investigation of text categorization problem, where each document may belong to several predefined topics simultaneously. In multi-label learning, the training set is composed of instances each associated with a set of labels, and the task is to predict the label sets of unseen instances through analyzing training instances with known label sets. In this paper, a multi-label lazy learning approach named Mlknn is presented, which is derived from the traditional k-Nearest Neighbor (kNN) algorithm. In detail, for each unseen instance, its k nearest neighbors in the training set are firstly identified. After that, based on statistical information gained from the label sets of these neighboring instances, i.e. the number of neighboring instances belonging to each possible class, maximum a posteriori (MAP) principle is utilized to determine the label set for the unseen instance. Experiments on three different real-world multi-label learning problems, i.e. Yeast gene functional analysis, natural scene classification and automatic web page categorization, show that Ml-knn achieves superior performance to some well-established multi-label learning algorithms.",
"title": ""
},
{
"docid": "pos:1840589_1",
"text": "We study hierarchical classification in the general case when an instance could belong to more than one class node in the underlying taxonomy. Experiments done in previous work showed that a simple hierarchy of Support Vectors Machines (SVM) with a top-down evaluation scheme has a surprisingly good performance on this kind of task. In this paper, we introduce a refined evaluation scheme which turns the hierarchical SVM classifier into an approximator of the Bayes optimal classifier with respect to a simple stochastic model for the labels. Experiments on synthetic datasets, generated according to this stochastic model, show that our refined algorithm outperforms the simple hierarchical SVM. On real-world data, however, the advantage brought by our approach is a bit less clear. We conjecture this is due to a higher noise rate for the training labels in the low levels of the taxonomy.",
"title": ""
},
{
"docid": "pos:1840589_2",
"text": "Common approaches to multi-label classification learn independent classifiers for each category, and employ ranking or thresholding schemes for classification. Because they do not exploit dependencies between labels, such techniques are only well-suited to problems in which categories are independent. However, in many domains labels are highly interdependent. This paper explores multi-label conditional random field (CRF)classification models that directly parameterize label co-occurrences in multi-label classification. Experiments show that the models outperform their single-label counterparts on standard text corpora. Even when multi-labels are sparse, the models improve subset classification error by as much as 40%.",
"title": ""
}
] | [
{
"docid": "neg:1840589_0",
"text": "While the volume of scholarly publications has increased at a frenetic pace, accessing and consuming the useful candidate papers, in very large digital libraries, is becoming an essential and challenging task for scholars. Unfortunately, because of language barrier, some scientists (especially the junior ones or graduate students who do not master other languages) cannot efficiently locate the publications hosted in a foreign language repository. In this study, we propose a novel solution, cross-language citation recommendation via Hierarchical Representation Learning on Heterogeneous Graph (HRLHG), to address this new problem. HRLHG can learn a representation function by mapping the publications, from multilingual repositories, to a low-dimensional joint embedding space from various kinds of vertexes and relations on a heterogeneous graph. By leveraging both global (task specific) plus local (task independent) information as well as a novel supervised hierarchical random walk algorithm, the proposed method can optimize the publication representations by maximizing the likelihood of locating the important cross-language neighborhoods on the graph. Experiment results show that the proposed method can not only outperform state-of-the-art baseline models, but also improve the interpretability of the representation model for cross-language citation recommendation task.",
"title": ""
},
{
"docid": "neg:1840589_1",
"text": "A mobile robot is designed to pick and place the objects through voice commands. This work would be practically useful to wheelchair bound persons. The pick and place robot is designed in a way that it is able to help the user to pick up an item that is placed at two different levels using an extendable arm. The robot would move around to pick up an item and then pass it back to the user or to a desired location as told by the user. The robot control is achieved through voice commands such as left, right, straight, etc. in order to help the robot to navigate around. Raspberry Pi 2 controls the overall design with 5 DOF servo motor arm. The webcam is used to navigate around which provides live streaming using a mobile application for the user to look into. Results show the ability of the robot to pick and place the objects up to a height of 23.5cm through proper voice commands.",
"title": ""
},
{
"docid": "neg:1840589_2",
"text": "This paper presents a 14MHz Class-E power amplifier to be used for wireless power transmission. The Class-E power amplifier was built to consider the VSWR and the frequency bandwidth. Tw o kinds of circuits were designed: the high and low quality factor amplifiers. The low quality factor amplifier is confirmed to have larger bandwidth than the high quality factor amplifier. It has also possessed less sensitive characteristics. Therefore, the low quality factor amplifier circuit was adopted and tested. The effect of gate driving input source is studied. The efficiency of the Class-E amplifier reaches 85.5% at 63W.",
"title": ""
},
{
"docid": "neg:1840589_3",
"text": "During a project examining the use of machine learning techniques for oil spill detection, we encountered several essential questions that we believe deserve the attention of the research community. We use our particular case study to illustrate such issues as problem formulation, selection of evaluation measures, and data preparation. We relate these issues to properties of the oil spill application, such as its imbalanced class distribution, that are shown to be common to many applications. Our solutions to these issues are implemented in the Canadian Environmental Hazards Detection System (CEHDS), which is about to undergo field testing.",
"title": ""
},
{
"docid": "neg:1840589_4",
"text": "There is an emerging trend in higher education for the adoption of massive open online courses (MOOCs). However, despite this interest in learning at scale, there has been limited work investigating the impact MOOCs can play on student learning. In this study, we adopt a novel approach, using language and discourse as a tool to explore its association with two established measures related to learning: traditional academic performance and social centrality. We demonstrate how characteristics of language diagnostically reveal the performance and social position of learners as they interact in a MOOC. We use CohMetrix, a theoretically grounded, computational linguistic modeling tool, to explore students’ forum postings across five potent discourse dimensions. Using a Social Network Analysis (SNA) methodology, we determine learners’ social centrality. Linear mixed-effect modeling is used for all other analyses to control for individual learner and text characteristics. The results indicate that learners performed significantly better when they engaged in more expository style discourse, with surface and deep level cohesive integration, abstract language, and simple syntactic structures. However, measures of social centrality revealed a different picture. Learners garnered a more significant and central position in their social network when they engaged with more narrative style discourse with less overlap between words and ideas, simpler syntactic structures and abstract words. Implications for further research and practice are discussed regarding the misalignment between these two learning-related outcomes.",
"title": ""
},
{
"docid": "neg:1840589_5",
"text": "During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to well established finite-di↵erence time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time-domain. The method is now actively studied for various application contexts including those requiring to model light/matter interactions on the nanoscale. In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions by considering more particularly the possibility of combining the use of a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.",
"title": ""
},
{
"docid": "neg:1840589_6",
"text": "In recent years, a number of network forensics techniques have been proposed to investigate the increasing number of cybercrimes. Network forensics techniques assist in tracking internal and external network attacks by focusing on inherent network vulnerabilities and communication mechanisms. However, investigation of cybercrime becomes more challenging when cyber criminals erase the traces in order to avoid detection. Therefore, network forensics techniques employ mechanisms to facilitate investigation by recording every single packet and event that is disseminated into the network. As a result, it allows identification of the origin of the attack through reconstruction of the recorded data. In the current literature, network forensics techniques are studied on the basis of forensic tools, process models and framework implementations. However, a comprehensive study of cybercrime investigation using network forensics frameworks along with a critical review of present network forensics techniques is lacking. In other words, our study is motivated by the diversity of digital evidence and the difficulty of addressing numerous attacks in the network using network forensics techniques. Therefore, this paper reviews the fundamental mechanism of network forensics techniques to determine how network attacks are identified in the network. Through an extensive review of related literature, a thematic taxonomy is proposed for the classification of current network forensics techniques based on its implementation as well as target data sets involved in the conducting of forensic investigations. The critical aspects and significant features of the current network forensics techniques are investigated using qualitative analysis technique. We derive significant parameters from the literature for discussing the similarities and differences in existing network forensics techniques. The parameters include framework nature, mechanism, target dataset, target instance, forensic processing, time of investigation, execution definition, and objective function. Finally, open research challenges are discussed in network forensics to assist researchers in selecting the appropriate domains for further research and obtain ideas for exploring optimal techniques for investigating cyber-crimes. & 2016 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840589_7",
"text": "Recently popularized graph neural networks achieve the state-of-the-art accuracy on a number of standard benchmark datasets for graph-based semi-supervised learning, improving significantly over existing approaches. These architectures alternate between a propagation layer that aggregates the hidden states of the local neighborhood and a fully-connected layer. Perhaps surprisingly, we show that a linear model, that removes all the intermediate fullyconnected layers, is still able to achieve a performance comparable to the state-of-the-art models. This significantly reduces the number of parameters, which is critical for semi-supervised learning where number of labeled examples are small. This in turn allows a room for designing more innovative propagation layers. Based on this insight, we propose a novel graph neural network that removes all the intermediate fully-connected layers, and replaces the propagation layers with attention mechanisms that respect the structure of the graph. The attention mechanism allows us to learn a dynamic and adaptive local summary of the neighborhood to achieve more accurate predictions. In a number of experiments on benchmark citation networks datasets, we demonstrate that our approach outperforms competing methods. By examining the attention weights among neighbors, we show that our model provides some interesting insights on how neighbors influence each other.",
"title": ""
},
{
"docid": "neg:1840589_8",
"text": "Surgical anaesthesia with haemodynamic stability and opioid-free analgesia in fragile patients can theoretically be provided with lumbosacral plexus blockade. We compared a novel ultrasound-guided suprasacral technique for blockade of the lumbar plexus and the lumbosacral trunk with ultrasound-guided blockade of the lumbar plexus. The objective was to investigate whether the suprasacral technique is equally effective for anaesthesia of the terminal lumbar plexus nerves compared with a lumbar plexus block, and more effective for anaesthesia of the lumbosacral trunk. Twenty volunteers were included in a randomised crossover trial comparing the new suprasacral with a lumbar plexus block. The primary outcome was sensory dermatome anaesthesia of L2-S1. Secondary outcomes were peri-neural analgesic spread estimated with magnetic resonance imaging, sensory blockade of dermatomes L2-S3, motor blockade, volunteer discomfort, arterial blood pressure change, block performance time, lidocaine pharmacokinetics and complications. Only one volunteer in the suprasacral group had sensory blockade of all dermatomes L2-S1. Epidural spread was verified by magnetic resonance imaging in seven of the 34 trials (two suprasacral and five lumbar plexus blocks). Success rates of the sensory and motor blockade were 88-100% for the major lumbar plexus nerves with the suprasacral technique, and 59-88% with the lumbar plexus block (p > 0.05). Success rate of motor blockade was 50% for the lumbosacral trunk with the suprasacral technique and zero with the lumbar plexus block (p < 0.05). Both techniques are effective for blockade of the terminal nerves of the lumbar plexus. The suprasacral parallel shift technique is 50% effective for blockade of the lumbosacral trunk.",
"title": ""
},
{
"docid": "neg:1840589_9",
"text": "Robust and accurate visual tracking is one of the most challenging computer vision problems. Due to the inherent lack of training data, a robust approach for constructing a target appearance model is crucial. Recently, discriminatively learned correlation filters (DCF) have been successfully applied to address this problem for tracking. These methods utilize a periodic assumption of the training samples to efficiently learn a classifier on all patches in the target neighborhood. However, the periodic assumption also introduces unwanted boundary effects, which severely degrade the quality of the tracking model. We propose Spatially Regularized Discriminative Correlation Filters (SRDCF) for tracking. A spatial regularization component is introduced in the learning to penalize correlation filter coefficients depending on their spatial location. Our SRDCF formulation allows the correlation filters to be learned on a significantly larger set of negative training samples, without corrupting the positive samples. We further propose an optimization strategy, based on the iterative Gauss-Seidel method, for efficient online learning of our SRDCF. Experiments are performed on four benchmark datasets: OTB-2013, ALOV++, OTB-2015, and VOT2014. Our approach achieves state-of-the-art results on all four datasets. On OTB-2013 and OTB-2015, we obtain an absolute gain of 8.0% and 8.2% respectively, in mean overlap precision, compared to the best existing trackers.",
"title": ""
},
{
"docid": "neg:1840589_10",
"text": "Cette thèse aborde de façon générale les algorithmes d'apprentissage, avec un intérêt tout particulier pour les grandes bases de données. Après avoir for-mulé leprobì eme de l'apprentissage demanì ere mathématique, nous présentons plusieurs algorithmes d'apprentissage importants, en particulier les Multi Layer Perceptrons, les Mixture d'Experts ainsi que les Support Vector Machines. Nous considérons ensuite une méthode d'entraˆınement pour les Support Vector Machines , adaptée aux ensembles de données de tailles raisonnables. Cepen-dant, l'entraˆınement d'un tel modèle reste irréalisable sur de très grande bases de données. Inspirés par la stratégie \" diviser pour régner \" , nous proposons alors un modèle de la famille des Mixture d'Experts, permettant de séparer le probì eme d' apprentissage en sous-probì emes plus simples , tout en gardant de bonnes performances en généralisation. Malgré de très bonnes performances en pratique , cet algorithme n ' en reste pas moins difficilè a utiliser , ` a cause de son nombre important d ' hyper-paramètres. Pour cette raison , nous préférons nous intéresser ensuitè a l ' amélioration de l ' entraˆınement des Multi Layer Percep-trons , bien plus facilesà utiliser , et plus adaptés aux grandes bases de données que les Support Vector Machines. Enfin , nous montrons que l ' idée de la marge qui fait la force des Support Vector Machines peutêtre appliquéè a une cer-taine classe de Multi Layer Perceptrons , ce qui nous m ` enè a un algorithme très rapide et ayant de très bonnes performances en généralisation. Summary This thesis aims to address machine learning in general , with a particular focus on large models and large databases. After introducing the learning problem in a formal way , we first review several important machine learning algorithms , particularly Multi Layer Perceptrons , Mixture of Experts and Support Vector Machines. We then present a training method for Support Vector Machines , adapted to reasonably large datasets. However the training of such a model is still intractable on very large databases. We thus propose a divide and conquer approach based on a kind of Mixture of Experts in order to break up the training problem into small pieces , while keeping good generalization performance. This mixture model can be applied to any kind of existing machine learning algorithm. Even though it performs well in practice the major drawback of this algorithm is the number of hyper-parameters to tune , which makes it …",
"title": ""
},
{
"docid": "neg:1840589_11",
"text": "In this paper, we propose a semi-supervised learning method where we train two neural networks in a multi-task fashion: a target network and a confidence network. The target network is optimized to perform a given task and is trained using a large set of unlabeled data that are weakly annotated. We propose to weight the gradient updates to the target network using the scores provided by the second confidence network, which is trained on a small amount of supervised data. Thus we avoid that the weight updates computed from noisy labels harm the quality of the target network model. We evaluate our learning strategy on two different tasks: document ranking and sentiment classification. The results demonstrate that our approach not only enhances the performance compared to the baselines but also speeds up the learning process from weak labels.",
"title": ""
},
{
"docid": "neg:1840589_12",
"text": "Harlequin ichthyosis (HI) is an extremely rare genetic skin disorder and the most severe form of a group of disorders, which includes lamellar ichthyosis and congenital ichthyosiform erythroderma. It consists in an autosomal recessive disorder with the majority of affected individuals being homozygous for mutation in the ABCA12 gene. This condition presents a wide range of severity and symptoms. Affected neonates often do not survive beyond the first few days of life and it was usually considered as being fatal in the past, but, with the improvement of neonatal intensive care, the survival of these patients also improved. Our report is about a harlequin baby with new variants, which have not been previously described. He presents two variants in heterozygosity in the ABCA12 gene: c.3067del (p.Tyr1023Ilefs * 22) and c.318-2A>G p(.?), inherited from the father and mother. Several aspects concerning genetics, physiopathology, diagnosis, treatment and prognosis are discussed. An intensive neonatal care and early introduction of oral retinoids improve survival rates in this kind of disorder.",
"title": ""
},
{
"docid": "neg:1840589_13",
"text": "This paper describes an approach for the problem of face pose discrimination using Support Vector Machines (SVM). Face pose discrimination means that one can label the face image as one of several known poses. Face images are drawn from the standard FERET data base. The training set consists of 150 images equally distributed among frontal, approximately 33.75 rotated left and right poses, respectively, and the test set consists of 450 images again equally distributed among the three different types of poses. SVM achieved perfect accuracy 100% discriminating between the three possible face poses on unseen test data, using either polynomials of degree 3 or Radial Basis Functions (RBFs) as kernel approximation functions.",
"title": ""
},
{
"docid": "neg:1840589_14",
"text": "Many believe the electric power system is undergoing a profound change driven by a number of needs. There's the need for environmental compliance and energy conservation. We need better grid reliability while dealing with an aging infrastructure. And we need improved operational effi ciencies and customer service. The changes that are happening are particularly signifi cant for the electricity distribution grid, where \"blind\" and manual operations, along with the electromechanical components, will need to be transformed into a \"smart grid.\" This transformation will be necessary to meet environmental targets, to accommodate a greater emphasis on demand response (DR), and to support plug-in hybrid electric vehicles (PHEVs) as well as distributed generation and storage capabilities. It is safe to say that these needs and changes present the power industry with the biggest challenge it has ever faced. On one hand, the transition to a smart grid has to be evolutionary to keep the lights on; on the other hand, the issues surrounding the smart grid are signifi cant enough to demand major changes in power systems operating philosophy.",
"title": ""
},
{
"docid": "neg:1840589_15",
"text": "Several recent works have shown that image descriptors produced by deep convolutional neural networks provide state-of-the-art performance for image classification and retrieval problems. It also has been shown that the activations from the convolutional layers can be interpreted as local features describing particular image regions. These local features can be aggregated using aggregating methods developed for local features (e.g. Fisher vectors), thus providing new powerful global descriptor. In this paper we investigate possible ways to aggregate local deep features to produce compact descriptors for image retrieval. First, we show that deep features and traditional hand-engineered features have quite different distributions of pairwise similarities, hence existing aggregation methods have to be carefully re-evaluated. Such re-evaluation reveals that in contrast to shallow features, the simple aggregation method based on sum pooling provides the best performance for deep convolutional features. This method is efficient, has few parameters, and bears little risk of overfitting when e.g. learning the PCA matrix. In addition, we suggest a simple yet efficient query expansion scheme suitable for the proposed aggregation method. Overall, the new compact global descriptor improves the state-of-the-art on four common benchmarks considerably.",
"title": ""
},
{
"docid": "neg:1840589_16",
"text": "Property-based Features Given a sentencerepresentation pair, for each property listed in Table 2, we compute if it holds for the representation. For each property that holds and for each n-gram in the sentence we trigger a feature. Consider the first example in Table 1. The features triggered for this example include touches-wall#two-boxes-have and touches-wall#touching-the-side computed from the property touches-wall and the tri-grams two boxes have and touching the side. We observe that the MaxEnt model learns a higher weight for features which combine similar properties of the world and the sentence, such as touches-wall#touching-the-side.",
"title": ""
},
{
"docid": "neg:1840589_17",
"text": "We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p and imposes no constraints on light, camera or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires.\n Prior irregular z-buffer work relies heavily on GPU compute. Instead we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.",
"title": ""
}
] |
1840590 | Friends FTW! friendship and competition in halo: reach | [
{
"docid": "pos:1840590_0",
"text": "This article explores the ways social interaction plays an integral role in the game EverQuest. Through our research we argue that social networks form a powerful component of the gameplay and the gaming experience, one that must be seriously considered to understand the nature of massively multiplayer online games. We discuss the discrepancy between how the game is portrayed and how it is actually played. By examining the role of social networks and interactions we seek to explore how the friendships between the players could be considered the ultimate exploit of the game.",
"title": ""
}
] | [
{
"docid": "neg:1840590_0",
"text": "Video-surveillance and traffic analysis systems can be heavily improved using vision-based techniques to extract, manage and track objects in the scene. However, problems arise due to shadows. In particular, moving shadows can affect the correct localization, measurements and detection of moving objects. This work aims to present a technique for shadow detection and suppression used in a system for moving visual object detection and tracking. The major novelty of the shadow detection technique is the analysis carried out in the HSV color space to improve the accuracy in detecting shadows. This paper exploits comparison of shadow suppression using RGB and HSV color space in moving object detection and results in this paper are more encouraging using HSV colour space over RGB colour space. Keywords— Shadow detection; HSV color space; RGB color space.",
"title": ""
},
{
"docid": "neg:1840590_1",
"text": "Software developers’ activities are in general recorded in software repositories such as version control systems, bug trackers and mail archives. While abundant information is usually present in such repositories, successful information extraction is often challenged by the necessity to simultaneously analyze different repositories and to combine the information obtained. We propose to apply process mining techniques, originally developed for business process analysis, to address this challenge. However, in order for process mining to become applicable, different software repositories should be combined, and “related” software development events should be matched: e.g., mails sent about a file, modifications of the file and bug reports that can be traced back to it. The combination and matching of events has been implemented in FRASR (Framework for Analyzing Software Repositories), augmenting the process mining framework ProM. FRASR has been successfully applied in a series of case studies addressing such aspects of the development process as roles of different developers and the way bug reports are handled.",
"title": ""
},
{
"docid": "neg:1840590_2",
"text": "In this paper, we consider a typical image blind denoising problem, which is to remove unknown noise from noisy images. As we all know, discriminative learning based methods, such as DnCNN, can achieve state-of-the-art denoising results, but they are not applicable to this problem due to the lack of paired training data. To tackle the barrier, we propose a novel two-step framework. First, a Generative Adversarial Network (GAN) is trained to estimate the noise distribution over the input noisy images and to generate noise samples. Second, the noise patches sampled from the first step are utilized to construct a paired training dataset, which is used, in turn, to train a deep Convolutional Neural Network (CNN) for denoising. Extensive experiments have been done to demonstrate the superiority of our approach in image blind denoising.",
"title": ""
},
{
"docid": "neg:1840590_3",
"text": "Although deep learning has produced dazzling successes for applications of image, speech, and video processing in the past few years, most trainings are with suboptimal hyper-parameters, requiring unnecessarily long training times. Setting the hyper-parameters remains a black art that requires years of experience to acquire. This report proposes several efficient ways to set the hyper-parameters that significantly reduce training time and improves performance. Specifically, this report shows how to examine the training validation/test loss function for subtle clues of underfitting and overfitting and suggests guidelines for moving toward the optimal balance point. Then it discusses how to increase/decrease the learning rate/momentum to speed up training. Our experiments show that it is crucial to balance every manner of regularization for each dataset and architecture. Weight decay is used as a sample regularizer to show how its optimal value is tightly coupled with the learning rates and momentums.",
"title": ""
},
{
"docid": "neg:1840590_4",
"text": "PURPOSE\nTo report a case of central retinal artery occlusion (CRAO) in a patient with biopsy-verified Wegener's granulomatosis (WG) with positive C-ANCA.\n\n\nMETHODS\nA 55-year-old woman presented with a 3-day history of acute painless bilateral loss of vision; she also complained of fever and weight loss. Examination showed a CRAO in the left eye and angiographically documented choroidal ischemia in both eyes.\n\n\nRESULTS\nThe possibility of systemic vasculitis was not kept in mind until further studies were carried out; methylprednisolone pulse therapy was then started. Renal biopsy disclosed focal and segmental necrotizing vasculitis of the medium-sized arteries, supporting the diagnosis of WG, and cyclophosphamide pulse therapy was administered with gradual improvement, but there was no visual recovery.\n\n\nCONCLUSION\nCRAO as presenting manifestation of WG, in the context of retinal vasculitis, is very uncommon, but we should be aware of WG in the etiology of CRAO. This report shows the difficulty of diagnosing Wegener's granulomatosis; it requires a high index of suspicion, and we should obtain an accurate medical history and repeat serological and histopathological examinations. It emphasizes that inflammation of arteries leads to irreversible retinal infarction, and visual loss may occur.",
"title": ""
},
{
"docid": "neg:1840590_5",
"text": "The tragedy of the digital commons does not prevent the copious voluntary production of content that one witnesses in the web. We show through an analysis of a massive data set from YouTube that the productivity exhibited in crowdsourcing exhibits a strong positive dependence on attention, measured by the number of downloads. Conversely, a lack of attention leads to a decrease in the number of videos uploaded and the consequent drop in productivity, which in many cases asymptotes to no uploads whatsoever. Moreover, uploaders compare themselves to others when having low productivity and to themselves when exceeding a threshold. 1 ar X iv :0 80 9. 30 30 v1 [ cs .C Y ] 1 7 Se p 20 08 We are witnessing an inversion of the traditional way by which content has been generated and consumed over the centuries. From photography to news and encyclopedic knowledge, the centuries-old pattern has been one in which a relatively few people and organizations produce content and most people consume it. With the advent of the web and the ease with which one can migrate content to it, that pattern has reversed, leading to a situation whereby millions create content in the form of blogs, news, videos, music, etc. and relatively few can attend to it all. This phenomenon, which goes under the name of crowdsourcing, is exemplified by websites such as Digg, Flicker, YouTube, and Wikipedia, where content creation without the traditional quality filters manages to produce sought out movies, news and even knowledge that rivals the best encyclopedias. That such content is valued is confirmed by the fact that access to these sites accounts for a sizable percentage of internet traffic. For example, as of June, 2007 YouTube alone comprised approximately 20% of all HTTP traffic, or nearly 10% of all traffic on the Internet [2]. What makes crowdsourcing both interesting and puzzling is the underlying dilemma facing every contributor, which is best exemplified by the well-known tragedy of the commons. In such dilemmas, a group of people attempts to provide a common good in the absence of a central authority. In the case of crowdsourcing, the common good is in the form or videos, music, or encyclopedic knowledge that can be freely accessed by anyone. Furthermore, the good has jointness of supply, which means that its consumption by others does not affect the amounts that other users can use. And since it is nearly impossible to exclude non contributors from using the common good, it is rational for individuals not to upload content and free ride on the production of others. The dilemma ensues when every individual can reason this way and free ride on the efforts of others, making everyone worse off—thus the tragedy of the digital commons [1, 3, 7, 5, 10]. And yet paradoxically, there is ample evidence that while the ratio of contributions to downloads is indeed small, the growth in content provision persists at levels that are hard to understand if analyzed from a public goods point of view. One possible explanation for this puzzling behavior, which we explore in this paper, is that those contributing to the digital commons",
"title": ""
},
{
"docid": "neg:1840590_6",
"text": "Imperforate hymen, a condition in which the hymen has no aperture, usually occurs congenitally, secondary to failure of development of a lumen. A case of a documented simulated \"acquired\" imperforate hymen is presented in this article. The patient, a 5-year-old girl, was the victim of sexual abuse. Initial examination showed tears, scars, and distortion of the hymen, laceration of the perineal body, and loss of normal anal tone. Follow-up evaluations over the next year showed progressive healing. By 7 months after the injury, the hymen was replaced by a thick, opaque scar with no orifice. Patients with an apparent imperforate hymen require a sensitive interview and careful visual inspection of the genital and anal areas to delineate signs of injury. The finding of an apparent imperforate hymen on physical examination does not eliminate the possibility of antecedent vaginal penetration and sexual abuse.",
"title": ""
},
{
"docid": "neg:1840590_7",
"text": "The Web is rapidly transforming from a pure document collection to the largest connected public data space. Semantic annotations of web pages make it notably easier to extract and reuse data and are increasingly used by both search engines and social media sites to provide better search experiences through rich snippets, faceted search, task completion, etc. In our work, we study the novel problem of crawling structured data embedded inside HTML pages. We describe Anthelion, the first focused crawler addressing this task. We propose new methods of focused crawling specifically designed for collecting data-rich pages with greater efficiency. In particular, we propose a novel combination of online learning and bandit-based explore/exploit approaches to predict data-rich web pages based on the context of the page as well as using feedback from the extraction of metadata from previously seen pages. We show that these techniques significantly outperform state-of-the-art approaches for focused crawling, measured as the ratio of relevant pages and non-relevant pages collected within a given budget.",
"title": ""
},
{
"docid": "neg:1840590_8",
"text": "Coagulation-flocculation is a relatively simple physical-chemical technique in treatment of old and stabilized leachate which has been practiced using a variety of conventional coagulants. Polymeric forms of metal coagulants which are increasingly applied in water treatment are not well documented in leachate treatment. In this research, capability of poly-aluminum chloride (PAC) in the treatment of stabilized leachate from Pulau Burung Landfill Site (PBLS), Penang, Malaysia was studied. The removal efficiencies for chemical oxygen demand (COD), turbidity, color and total suspended solid (TSS) obtained using PAC were compared with those obtained using alum as a conventional coagulant. Central composite design (CCD) and response surface method (RSM) were applied to optimize the operating variables viz. coagulant dosage and pH. Quadratic models developed for the four responses (COD, turbidity, color and TSS) studied indicated the optimum conditions to be PAC dosage of 2g/L at pH 7.5 and alum dosage of 9.5 g/L at pH 7. The experimental data and model predictions agreed well. COD, turbidity, color and TSS removal efficiencies of 43.1, 94.0, 90.7, and 92.2% for PAC, and 62.8, 88.4, 86.4, and 90.1% for alum were demonstrated.",
"title": ""
},
{
"docid": "neg:1840590_9",
"text": "We introduce an ultrasonic sensor system that measures artificial potential fields (APF’s) directly. The APF is derived from the traveling-times of the transmitted pulses. Advantages of the sensor are that it needs only three transducers, that its design is simple, and that it measures a quantity that can be used directly for simple navigation, such as collision avoidance.",
"title": ""
},
{
"docid": "neg:1840590_10",
"text": "Sleep is a growing area of research interest in medicine and neuroscience. Actually, one major concern is to find a correlation between several physiologic variables and sleep stages. There is a scientific agreement on the characteristics of the five stages of human sleep, based on EEG analysis. Nevertheless, manual stage classification is still the most widely used approach. This work proposes a new automatic sleep classification method based on unsupervised feature classification algorithms recently developed, and on EEG entropy measures. This scheme extracts entropy metrics from EEG records to obtain a feature vector. Then, these features are optimized in terms of relevance using the Q-α algorithm. Finally, the resulting set of features is entered into a clustering procedure to obtain a final segmentation of the sleep stages. The proposed method reached up to an average of 80% correctly classified stages for each patient separately while keeping the computational cost low. Entropy 2014, 16 6574",
"title": ""
},
{
"docid": "neg:1840590_11",
"text": "Visual odometry and mapping methods can provide accurate navigation and comprehensive environment (obstacle) information for autonomous flights of Unmanned Aerial Vehicle (UAV) in GPS-denied cluttered environments. This work presents a new light small-scale low-cost ARM-based stereo vision pre-processing system, which not only is used as onboard sensor to continuously estimate 6-DOF UAV pose, but also as onboard assistant computer to pre-process visual information, thereby saving more computational capability for the onboard host computer of the UAV to conduct other tasks. The visual odometry is done by one plugin specifically developed for this new system with a fixed baseline (12cm). In addition, the pre-processed infromation from this new system are sent via a Gigabit Ethernet cable to the onboard host computer of UAV for real-time environment reconstruction and obstacle detection with a octree-based 3D occupancy grid mapping approach, i.e. OctoMap. The visual algorithm is evaluated with the stereo video datasets from EuRoC Challenge III in terms of efficiency, accuracy and robustness. Finally, the new system is mounted and tested on a real quadrotor UAV to carry out the visual odometry and mapping task.",
"title": ""
},
{
"docid": "neg:1840590_12",
"text": "Novel scientific knowledge is constantly produced by the scientific community. Understanding the level of novelty characterized by scientific literature is key for modeling scientific dynamics and analyzing the growth mechanisms of scientific knowledge. Metrics derived from bibliometrics and citation analysis were effectively used to characterize the novelty in scientific development. However, time is required before we can observe links between documents such as citation links or patterns derived from the links, which makes these techniques more effective for retrospective analysis than predictive analysis. In this study, we present a new approach to measuring the novelty of a research topic in a scientific community over a specific period by tracking semantic changes of the terms and characterizing the research topic in their usage context. The semantic changes are derived from the text data of scientific literature by temporal embedding learning techniques. We validated the effects of the proposed novelty metric on predicting the future growth of scientific publications and investigated the relations between novelty and growth by panel data analysis applied in a largescale publication dataset (MEDLINE/PubMed). Key findings based on the statistical investigation indicate that the novelty metric has significant predictive effects on the growth of scientific literature and the predictive effects may last for more than ten years. We demonstrated the effectiveness and practical implications of the novelty metric in three case studies. ∗[email protected], [email protected]. Department of Information Science, Drexel University. 1 ar X iv :1 80 1. 09 12 1v 1 [ cs .D L ] 2 7 Ja n 20 18",
"title": ""
},
{
"docid": "neg:1840590_13",
"text": "In this project, we attempt to apply machine-learning algorithms to predict Bitcoin price. For the first phase of our investigation, we aimed to understand and better identify daily trends in the Bitcoin market while gaining insight into optimal features surrounding Bitcoin price. Our data set consists of over 25 features relating to the Bitcoin price and payment network over the course of five years, recorded daily. Using this information we were able to predict the sign of the daily price change with an accuracy of 98.7%. For the second phase of our investigation, we focused on the Bitcoin price data alone and leveraged data at 10-minute and 10-second interval timepoints, as we saw an opportunity to evaluate price predictions at varying levels of granularity and noisiness. By predicting the sign of the future change in price, we are modeling the price prediction problem as a binomial classification task, experimenting with a custom algorithm that leverages both random forests and generalized linear models. These results had 50-55% accuracy in predicting the sign of future price change using 10 minute time intervals.",
"title": ""
},
{
"docid": "neg:1840590_14",
"text": "In this paper a morphological tagging approach for document image invoice analysis is described. Tokens close by their morphology and confirmed in their location within different similar contexts make apparent some parts of speech representative of the structure elements. This bottom up approach avoids the use of an priori knowledge provided that there are redundant and frequent contexts in the text. The approach is applied on the invoice body text roughly recognized by OCR and automatically segmented. The method makes possible the detection of the invoice articles and their different fields. The regularity of the article composition and its redundancy in the invoice is a good help for its structure. The recognition rate of 276 invoices and 1704 articles, is over than 91.02% for articles and 92.56% for fields.",
"title": ""
},
{
"docid": "neg:1840590_15",
"text": "Gripping and holding of objects are key tasks for robotic manipulators. The development of universal grippers able to pick up unfamiliar objects of widely varying shape and surface properties remains, however, challenging. Most current designs are based on the multifingered hand, but this approach introduces hardware and software complexities. These include large numbers of controllable joints, the need for force sensing if objects are to be handled securely without crushing them, and the computational overhead to decide how much stress each finger should apply and where. Here we demonstrate a completely different approach to a universal gripper. Individual fingers are replaced by a single mass of granular material that, when pressed onto a target object, flows around it and conforms to its shape. Upon application of a vacuum the granular material contracts and hardens quickly to pinch and hold the object without requiring sensory feedback. We find that volume changes of less than 0.5% suffice to grip objects reliably and hold them with forces exceeding many times their weight. We show that the operating principle is the ability of granular materials to transition between an unjammed, deformable state and a jammed state with solid-like rigidity. We delineate three separate mechanisms, friction, suction, and interlocking, that contribute to the gripping force. Using a simple model we relate each of them to the mechanical strength of the jammed state. This advance opens up newpossibilities for the designof simple, yet highly adaptive systems that excel at fast gripping of complex objects.",
"title": ""
},
{
"docid": "neg:1840590_16",
"text": "This paper is a short summary of the first real world detection of a backdoor in a military grade FPGA. Using an innovative patented technique we were able to detect and analyse in the first documented case of its kind, a backdoor inserted into the Actel/Microsemi ProASIC3 chips for accessing FPGA configuration. The backdoor was found amongst additional JTAG functionality and exists on the silicon itself, it was not present in any firmware loaded onto the chip. Using Pipeline Emission Analysis (PEA), our pioneered technique, we were able to extract the secret key to activate the backdoor, as well as other security keys such as the AES and the Passkey. This way an attacker can extract all the configuration data from the chip, reprogram crypto and access keys, modify low-level silicon features, access unencrypted configuration bitstream or permanently damage the device. Clearly this means the device is wide open to intellectual property (IP) theft, fraud, re-programming as well as reverse engineering of the design which allows the introduction of a new backdoor or Trojan. Most concerning, it is not possible to patch the backdoor in chips already deployed, meaning those using this family of chips have to accept the fact they can be easily compromised or will have to be physically replaced after a redesign of the silicon itself.",
"title": ""
},
{
"docid": "neg:1840590_17",
"text": "Magnesium and magnesium based alloys are lightweight metallic materials that are extremely biocompatib le and have similar mechanical properties to natural bone. These materials have the potential to function as an osteoconductive and biodegradable substitute in load bearing applicat ions in the field of hard t issue engineering. However, the effects of corrosion and degradation in the physiological environment of the body has prevented their wide spread applicat ion to date. The aim of this review is to examine the properties, chemical stability, degradation in situ and methods of improving the corrosion resistance of magnesium and its alloys for potential application in the orthopaedic field. To be an effective implant, the surface and sub-surface properties of the material needs to be carefully selected so that the degradation kinetics of the implant can be efficiently controlled. Several surface modification techniques are presented and their effectiveness in reducing the corrosion rate and methods of controlling the degradation period are discussed. Ideally, balancing the gradual loss of material and mechanical strength during degradation, with the increasing strength and stability of the newly forming bone tissue is the ultimate goal. If this goal can be achieved, then orthopaedic implants manufactured from magnesium based alloys have the potential to deliver successful clinical outcomes without the need for revision surgery.",
"title": ""
},
{
"docid": "neg:1840590_18",
"text": "Influence Maximization is the problem of finding a certain amount of people in a social network such that their aggregation influence through the network is maximized. In the past this problem has been widely studied under a number of different models. In 2003, Kempe \\emph{et al.} gave a $(1-{1 \\over e})$-approximation algorithm for the \\emph{linear threshold model} and the \\emph{independent cascade model}, which are the two main models in the social network analysis. In addition, Chen \\emph{et al.} proved that the problem of exactly computing the influence given a seed set in the two models is $\\#$P-hard. Both the \\emph{linear threshold model} and the \\emph{independent cascade model} are based on randomized propagation. However such information might be obtained by surveys or data mining techniques, which makes great difference on the properties of the problem. In this paper, we study the Influence Maximization problem in the \\emph{deterministic linear threshold model}. As a contrast, we show that in the \\emph{deterministic linear threshold model}, there is no polynomial time $n^{1-\\epsilon}$-approximation unless P=NP even at the simple case that one person needs at most two active neighbors to become active. This inapproximability result is derived with self-contained proofs without using PCP theorem. In the case that a person can be activated when one of its neighbors become active, there is a polynomial time ${e\\over e-1}$-approximation, and we prove it is the best possible approximation under a reasonable assumption in the complexity theory, $NP \\not\\subset DTIME(n^{\\log\\log n})$. We also show that the exact computation of the final influence given a seed set can be solved in linear time in the \\emph{deterministic linear threshold model}. The Least Seed Set problem, which aims to find a seed set with least number of people to activate all the required people in a given social network, is discussed. Using an analysis framework based on Set Cover, we show a $O($log$n)$-approximation in the case that a people become active when one of its neighbors is activated.",
"title": ""
},
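The abstract above states that the exact influence of a seed set can be computed in linear time under the deterministic linear threshold model. The sketch below illustrates that claim with a queue-based propagation; it is not taken from the cited paper, and the graph and threshold representation, parameter names, and function name are assumptions made for illustration.

```python
from collections import deque

def influence(adj, threshold, seeds):
    """Exact influence of a seed set in the deterministic linear threshold
    model: node v activates once `threshold[v]` of its neighbors are active.
    Runs in O(|V| + |E|) using a queue.

    adj       -- dict: node -> list of neighbor nodes (all nodes are keys)
    threshold -- dict: node -> number of active neighbors needed
    seeds     -- iterable of initially active nodes
    """
    active = set(seeds)
    hits = {v: 0 for v in adj}           # active-neighbor counters
    queue = deque(active)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in active:
                continue
            hits[v] += 1
            if hits[v] >= threshold[v]:  # deterministic activation rule
                active.add(v)
                queue.append(v)
    return len(active)

# Tiny example: a path a-b-c where each node needs one active neighbor.
adj = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
thr = {"a": 1, "b": 1, "c": 1}
print(influence(adj, thr, {"a"}))        # -> 3 (the whole path activates)
```

Each node is enqueued at most once and each edge is examined a constant number of times, which is where the linear running time comes from.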
{
"docid": "neg:1840590_19",
"text": "Engineering employers say publicly at national level that they need more engineering graduates, with surveys by, for example, the Engineering Employers Federation, proving there is demand. This project investigated the apparent contradiction between this high demand for engineering graduates and an unemployment rate of about 13% amongst UK engineering graduates (HESA data, July 2010). Employability has received huge attention but there remains a distinct issue of why some engineers do not get graduate level work within a short time of graduation. This National HE STEM Programme project interviewed a selection of unemployed graduates, identified from the Destinations of Leavers from HE (DLHE) survey six months after graduation, in order to investigate their experiences and gain an understanding of factors impeding their entry into graduate engineering employment. Questions ranged from whether the graduate decided to put off looking for a graduate level job until after graduation (and therefore ‘missed the boat’), through to academic and personal skills attributes, motivation and regional location. The data was analysed in the context both of previous research, and of data from interviews with engineering employers and employed graduates. Emerging from this study is that there is no single reason for unemployment amongst engineering graduates, with key findings centring on the importance of: students’ early engagement with career planning and the final year application process; relevant work experience; the distinction between the MEng and the BEng in employers’ recruitment criteria; and the ability of graduates to articulate their skills and competences effectively.",
"title": ""
}
] |
1840591 | Accelerating Scientific Data Exploration via Visual Query Systems | [
{
"docid": "pos:1840591_0",
"text": "Though data analysis tools continue to improve, analysts still expend an inordinate amount of time and effort manipulating data and assessing data quality issues. Such \"data wrangling\" regularly involves reformatting data values or layout, correcting erroneous or missing values, and integrating multiple data sources. These transforms are often difficult to specify and difficult to reuse across analysis tasks, teams, and tools. In response, we introduce Wrangler, an interactive system for creating data transformations. Wrangler combines direct manipulation of visualized data with automatic inference of relevant transforms, enabling analysts to iteratively explore the space of applicable operations and preview their effects. Wrangler leverages semantic data types (e.g., geographic locations, dates, classification codes) to aid validation and type conversion. Interactive histories support review, refinement, and annotation of transformation scripts. User study results show that Wrangler significantly reduces specification time and promotes the use of robust, auditable transforms instead of manual editing.",
"title": ""
}
] | [
{
"docid": "neg:1840591_0",
"text": "Although active islanding detection techniques have smaller non-detection zones than passive techniques, active methods could degrade the system power quality and are not as simple and easy to implement as passive methods. The islanding detection strategy, proposed in this paper, combines the advantages of both active and passive islanding detection methods. The distributed generation (DG) interface was designed so that the DG maintains stable operation while being grid connected and loses its stability once islanded. Thus, the over/under voltage and variation in the reactive power method be sufficient to detect islanding. The main advantage of the proposed technique is that it relies on a simple approach for islanding detection and has negligible non-detection zone. The proposed system was simulated on MATLAB/SIMULINK and simulation results are presented to highlight the effectiveness of the proposed technique.",
"title": ""
},
{
"docid": "neg:1840591_1",
"text": "Example classifications (test set) [And09] Andriluka et al. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009 [Eic09] Eichner et al. Articulated Human Pose Estimation and Search in (Almost) Unconstrained Still Images. In IJCV, 2012 [Sap10] Sapp et al. Cascaded models for articulated pose estimation. In ECCV, 2010 [Yan11] Yang and Ramanan. Articulated pose estimation with flexible mixturesof-parts. In CVPR, 2011. References Human Pose Estimation (HPE) Algorithm Input",
"title": ""
},
{
"docid": "neg:1840591_2",
"text": "The age of big data opens new opportunities in various fields. While the availability of a big dataset can be helpful in some scenarios, it introduces new challenges in digital forensics investigations. The existing tools and infrastructures cannot meet the expected response time when we investigate on a big dataset. Forensics investigators will face challenges while identifying necessary pieces of evidence from a big dataset, and collecting and analyzing those evidence. In this article, we propose the first working definition of big data forensics and systematically analyze the big data forensics domain to explore the challenges and issues in this forensics paradigm. We propose a conceptual model for supporting big data forensics investigations and present several use cases, where big data forensics can provide new insights to determine facts about criminal incidents.",
"title": ""
},
{
"docid": "neg:1840591_3",
"text": "Video-based fire detection is currently a fairly common application with the growth in the number of installed surveillance video systems. Moreover, the related processing units are becoming more powerful. Smoke is an early sign of most fires; therefore, selecting an appropriate smoke-detection method is essential. However, detecting smoke without creating a false alarm remains a challenging problem for open or large spaces with the disturbances of common moving objects, such as pedestrians and vehicles. This study proposes a novel video-based smoke-detection method that can be incorporated into a surveillance system to provide early alerts. In this study, the process of extracting smoke features from candidate regions was accomplished by analyzing the spatial and temporal characteristics of video sequences for three important features: edge blurring, gradual energy changes, and gradual chromatic configuration changes. The proposed spatialtemporal analysis technique improves the feature extraction of gradual energy changes. In order to make the video smoke-detection results more reliable, these three features were combined using a support vector machine (SVM) technique and a temporal-based alarm decision unit (ADU) was also introduced. The effectiveness of the proposed algorithm was evaluated on a PC with an Intel R © Core2 Duo CPU (2.2 GHz) and 2 GB RAM. The average processing time was 32.27 ms per frame; i.e., the proposed algorithm can process 30.98 frames per second. Experimental results showed that the proposed system can detect smoke effectively with a low false-alarm rate and a short reaction time in many real-world scenarios.",
"title": ""
},
{
"docid": "neg:1840591_4",
"text": "This study compares the EPID dosimetry algorithms of two commercial systems for pretreatment QA, and analyzes dosimetric measurements made with each system alongside the results obtained with a standard diode array. 126 IMRT fields are examined with both EPID dosimetry systems (EPIDose by Sun Nuclear Corporation, Melbourne FL, and Portal Dosimetry by Varian Medical Systems, Palo Alto CA) and the diode array, MapCHECK (also by Sun Nuclear Corporation). Twenty-six VMAT arcs of varying modulation complexity are examined with the EPIDose and MapCHECK systems. Optimization and commissioning testing of the EPIDose physics model is detailed. Each EPID IMRT QA system is tested for sensitivity to critical TPS beam model errors. Absolute dose gamma evaluation (3%, 3 mm, 10% threshold, global normalization to the maximum measured dose) yields similar results (within 1%-2%) for all three dosimetry modalities, except in the case of off-axis breast tangents. For these off-axis fields, the Portal Dosimetry system does not adequately model EPID response, though a previously-published correction algorithm improves performance. Both MapCHECK and EPIDose are found to yield good results for VMAT QA, though limitations are discussed. Both the Portal Dosimetry and EPIDose algorithms, though distinctly different, yield similar results for the majority of clinical IMRT cases, in close agreement with a standard diode array. Portal dose image prediction may overlook errors in beam modeling beyond the calculation of the actual fluence, while MapCHECK and EPIDose include verification of the dose calculation algorithm, albeit in simplified phantom conditions (and with limited data density in the case of the MapCHECK detector). Unlike the commercial Portal Dosimetry package, the EPIDose algorithm (when sufficiently optimized) allows accurate analysis of EPID response for off-axis, asymmetric fields, and for orthogonal VMAT QA. Other forms of QA are necessary to supplement the limitations of the Portal Vision Dosimetry system.",
"title": ""
},
{
"docid": "neg:1840591_5",
"text": "We consider a basic cache network, in which a single server is connected to multiple users via a shared bottleneck link. The server has a database of files (content). Each user has an isolated memory that can be used to cache content in a prefetching phase. In a following delivery phase, each user requests a file from the database, and the server needs to deliver users’ demands as efficiently as possible by taking into account their cache contents. We focus on an important and commonly used class of prefetching schemes, where the caches are filled with uncoded data. We provide the exact characterization of the rate-memory tradeoff for this problem, by deriving both the minimum average rate (for a uniform file popularity) and the minimum peak rate required on the bottleneck link for a given cache size available at each user. In particular, we propose a novel caching scheme, which strictly improves the state of the art by exploiting commonality among user demands. We then demonstrate the exact optimality of our proposed scheme through a matching converse, by dividing the set of all demands into types, and showing that the placement phase in the proposed caching scheme is universally optimal for all types. Using these techniques, we also fully characterize the rate-memory tradeoff for a decentralized setting, in which users fill out their cache content without any coordination.",
"title": ""
},
{
"docid": "neg:1840591_6",
"text": "Computer manufacturers spend a huge amount of time, resources, and money in designing new systems and newer configurations, and their ability to reduce costs, charge competitive prices and gain market share depends on how good these systems perform. In this work, we develop predictive models for estimating the performance of systems by using performance numbers from only a small fraction of the overall design space. Specifically, we first develop three models, two based on artificial neural networks and another based on linear regression. Using these models, we analyze the published Standard Performance Evaluation Corporation (SPEC) benchmark results and show that by using the performance numbers of only 2% and 5% of the machines in the design space, we can estimate the performance of all the systems within 9.1% and 4.6% on average, respectively. Then, we show that the performance of future systems can be estimated with less than 2.2% error rate on average by using the data of systems from a previous year. We believe that these tools can accelerate the design space exploration significantly and aid in reducing the corresponding research/development cost and time-to-market.",
"title": ""
},
{
"docid": "neg:1840591_7",
"text": "Analysis of histopathology slides is a critical step for many diagnoses, and in particular in oncology where it defines the gold standard. In the case of digital histopathological analysis, highly trained pathologists must review vast wholeslide-images of extreme digital resolution (100, 000 pixels) across multiple zoom levels in order to locate abnormal regions of cells, or in some cases single cells, out of millions. The application of deep learning to this problem is hampered not only by small sample sizes, as typical datasets contain only a few hundred samples, but also by the generation of ground-truth localized annotations for training interpretable classification and segmentation models. We propose a method for disease localization in the context of weakly supervised learning, where only image-level labels are available during training. Even without pixel-level annotations, we are able to demonstrate performance comparable with models trained with strong annotations on the Camelyon-16 lymph node metastases detection challenge. We accomplish this through the use of pre-trained deep convolutional networks, feature embedding, as well as learning via top instances and negative evidence, a multiple instance learning technique from the field of semantic segmentation and object detection.",
"title": ""
},
{
"docid": "neg:1840591_8",
"text": "Position and orientation profiles are two principal descriptions of shape in space. We describe how a structured light system, coupled with the illumination of a pseudorandom pattern and a suitable choice of feature points, can allow not only the position but also the orientation of individual surface elements to be determined independently. Unlike traditional designs which use the centroids of the illuminated pattern elements as the feature points, the proposed design uses the grid points between the pattern elements instead. The grid points have the essences that their positions in the image data are inert to the effect of perspective distortion, their individual extractions are not directly dependent on one another, and the grid points possess strong symmetry that can be exploited for their precise localization in the image data. Most importantly, the grid lines of the illuminated pattern that form the grid points can aid in determining surface normals. In this paper, we describe how each of the grid points can be labeled with a unique color code, what symmetry they possess and how the symmetry can be exploited for their precise localization at subpixel accuracy in the image data, and how 3D orientation in addition to 3D position can be determined at each of them. Both the position and orientation profiles can be determined with only a single pattern illumination and a single image capture.",
"title": ""
},
{
"docid": "neg:1840591_9",
"text": "Protection against high voltage-standing-wave-ratios (VSWR) is of great importance in many power amplifier applications. Despite excellent thermal and voltage breakdown properties even gallium nitride devices may need such measures. This work focuses on the timing aspect when using barium-strontium-titanate (BST) varactors to limit power dissipation and gate current. A power amplifier was designed and fabricated, implementing a varactor and a GaN-based voltage switch as varactor modulator for VSWR protection. The response time until the protection is effective was measured by switching the voltages at varactor, gate and drain of the transistor, respectively. It was found that it takes a minimum of 50 μs for the power amplifier to reach a safe condition. Pure gate pinch-off or drain voltage reduction solutions were slower and bias-network dependent. For a thick-film BST MIM varactor, optimized for speed and power, a switching time of 160 ns was achieved.",
"title": ""
},
{
"docid": "neg:1840591_10",
"text": "Lichen sclerosus et atrophicus (LSA) is a chronic inflammatory scarring disease with a predilection for the anogenital area; however, 15%-20% of LSA cases are extragenital. The folliculocentric variant is rarely reported and less well understood. The authors report a rare case of extragenital, folliculocentric LSA in a 10-year-old girl. The patient presented to the dermatology clinic for evaluation of an asymptomatic eruption of the arms and legs, with no vaginal or vulvar involvement. Physical examination revealed the presence of numerous 2-4 mm, mostly perifollicular, hypopigmented, slightly atrophic papules and plaques. Many of the lesions had a central keratotic plug. Cutaneous histopathological examination showed features of LSA. Based on clinical and histological findings, folliculocentric extragenital LSA was diagnosed.",
"title": ""
},
{
"docid": "neg:1840591_11",
"text": "We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation. Our approach requires no changes to the model architecture of a standard NMT system, but instead introduces a new component, the contextual parameter generator (CPG), that generates the parameters of the system (e.g., weights in a neural network). This parameter generator accepts source and target language embeddings as input, and generates the parameters for the encoder and the decoder, respectively. The rest of the model remains unchanged and is shared across all languages. We show how this simple modification enables the system to use monolingual data for training and also perform zero-shot translation. We further show it is able to surpass state-of-theart performance for both the IWSLT-15 and IWSLT-17 datasets and that the learned language embeddings are able to uncover interesting relationships between languages.",
"title": ""
},
{
"docid": "neg:1840591_12",
"text": "Image segmentation plays a crucial role in many medical imaging applications by automating or facilitating the delineation of anatomical structures and other regions of interest. We present herein a critical appraisal of the current status of semi-automated and automated methods for the segmentation of anatomical medical images. Current segmentation approaches are reviewed with an emphasis placed on revealing the advantages and disadvantages of these methods for medical imaging applications. The use of image segmentation in different imaging modalities is also described along with the difficulties encountered in each modality. We conclude with a discussion on the future of image segmentation methods in biomedical research.",
"title": ""
},
{
"docid": "neg:1840591_13",
"text": "While the focus of electronic commerce has often been on “dot coms” or pure Internet based companies, a major transformation is under way in many traditional “bricks-and-mortar” organizations. The latter are investing heavily in Internet based technologies and applications in order to attain new heights of efficiency, productivity and business value. While anecdotes in the business press suggest that some firms have achieved unprecedented performance gains by leveraging the Internet, there is no systematic evidence in the Information Technology (IT) productivity or business value literature regarding the payoffs from Internet enabled business initiatives. We propose an exploratory model of electronic business value involving IT applications, processes, business partner readiness, and operational and financial performance measures. This model is rooted in IT business value and productivity research, and is empirically tested with data from over 1000 firms in manufacturing, retail, distribution and wholesale sectors. We find that electronic business initiatives involving customer-facing technologies lead to operational excellence in customer interactions and improved financial performance. Further, supplier related operational excellence is a key determinant of customer excellence, suggesting the related nature of customer and supplier related performance. Customer and supplier readiness to engage in online business have strong positive impacts on customer and supplier related operational excellence respectively, indicating the need for all entities in a value chain to simultaneously adopt Internet applications and business practices. To the best of our knowledge, this is the first study to address the business value of Internet initiatives.",
"title": ""
},
{
"docid": "neg:1840591_14",
"text": "Dynamic network analysis (DNA) varies from traditional social network analysis in that it can handle large dynamic multi-mode, multi-link networks with varying levels of uncertainty. DNA, like quantum mechanics, would be a theory in which relations are probabilistic, the measurement of a node changes its properties, movement in one part of the system propagates through the system, and so on. However, unlike quantum mechanics, the nodes in the DNA, the atoms, can learn. An approach to DNA is described that builds DNA theory through the combined use of multi-agent modeling, machine learning, and meta-matrix approach to network representation. A set of candidate metric for describing the DNA are defined. Then, a model built using this approach is presented. Results concerning the evolution and destabilization of networks are described.",
"title": ""
},
{
"docid": "neg:1840591_15",
"text": "This paper addresses the Volume dimension of Big Data. It presents a preliminary work on finding segments of retailers from a large amount of Electronic Funds Transfer at Point Of Sale (EFTPOS) transaction data. To the best of our knowledge, this is the first time a work on Big EFTPOS Data problem has been reported. A data reduction technique using the RFM (Recency, Frequency, Monetary) analysis as applied to a large data set is presented. Ways to optimise clustering techniques used to segment the big data set through data partitioning and parallelization are explained. Preliminary analysis on the segments of the retailers output from the clustering experiments demonstrates that further drilling down into the retailer segments to find more insights into their business behaviours is warranted.",
"title": ""
},
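As a rough illustration of the RFM-based data reduction described above, the sketch below condenses transaction-level records into one Recency/Frequency/Monetary row per retailer and clusters the result with k-means. It is not the paper's actual pipeline; the column names, scaler, and cluster count are assumptions, and the paper's partitioning and parallelization optimisations are omitted.

```python
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def rfm_segments(tx, n_clusters=4, snapshot=None):
    """Reduce raw EFTPOS-style transactions to one RFM row per retailer,
    then cluster the retailers with k-means.

    tx -- DataFrame with (assumed) columns: retailer_id, date, amount
    """
    if snapshot is None:
        snapshot = tx["date"].max()
    rfm = tx.groupby("retailer_id").agg(
        recency=("date", lambda d: (snapshot - d.max()).days),
        frequency=("date", "count"),
        monetary=("amount", "sum"),
    )
    X = StandardScaler().fit_transform(rfm)       # put R, F, M on one scale
    rfm["segment"] = KMeans(n_clusters=n_clusters, n_init=10,
                            random_state=0).fit_predict(X)
    return rfm
```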
{
"docid": "neg:1840591_16",
"text": "Knowledge graphs contain knowledge about the world and provide a structured representation of this knowledge. Current knowledge graphs contain only a small subset of what is true in the world. Link prediction approaches aim at predicting new links for a knowledge graph given the existing links among the entities. Tensor factorization approaches have proved promising for such link prediction problems. Proposed in 1927, Canonical Polyadic (CP) decomposition is among the first tensor factorization approaches. CP generally performs poorly for link prediction as it learns two independent embedding vectors for each entity, whereas they are really tied. We present a simple enhancement of CP (which we call SimplE) to allow the two embeddings of each entity to be learned dependently. The complexity of SimplE grows linearly with the size of embeddings. The embeddings learned through SimplE are interpretable, and certain types of background knowledge can be incorporated into these embeddings through weight tying. We prove SimplE is fully expressive and derive a bound on the size of its embeddings for full expressivity. We show empirically that, despite its simplicity, SimplE outperforms several state-of-the-art tensor factorization techniques. SimplE’s code is available on GitHub at https://github.com/Mehran-k/SimplE.",
"title": ""
},
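For concreteness, the sketch below shows the SimplE scoring function as commonly described: each entity keeps a head and a tail embedding, each relation keeps a forward and an inverse embedding, and the score averages the two triple products so that the head and tail embeddings of an entity are learned dependently. Variable names and the toy example are illustrative; consult the paper or its released code for the authoritative formulation.

```python
import numpy as np

def simple_score(h_ei, t_ei, h_ej, t_ej, v_r, v_r_inv):
    """SimplE score for a triple (e_i, r, e_j).

    forward  uses e_i's head embedding, r's forward embedding, e_j's tail embedding;
    backward uses e_j's head embedding, r's inverse embedding, e_i's tail embedding.
    """
    forward = np.sum(h_ei * v_r * t_ej)        # <h_ei, v_r, t_ej>
    backward = np.sum(h_ej * v_r_inv * t_ei)   # <h_ej, v_r^-1, t_ei>
    return 0.5 * (forward + backward)

# Toy usage with 4-dimensional embeddings.
rng = np.random.default_rng(0)
h_a, t_a, h_b, t_b, v_r, v_ri = rng.normal(size=(6, 4))
print(simple_score(h_a, t_a, h_b, t_b, v_r, v_ri))
```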
{
"docid": "neg:1840591_17",
"text": "Automatic classification of cancer lesions in tissues observed using gastroenterology imaging is a non-trivial pattern recognition task involving filtering, segmentation, feature extraction and classification. In this paper we measure the impact of a variety of segmentation algorithms (mean shift, normalized cuts, level-sets) on the automatic classification performance of gastric tissue into three classes: cancerous, pre-cancerous and normal. Classification uses a combination of color (hue-saturation histograms) and texture (local binary patterns) features, applied to two distinct imaging modalities: chromoendoscopy and narrow-band imaging. Results show that mean-shift obtains an interesting performance for both scenarios producing low classification degradations (6%), full image classification is highly inaccurate reinforcing the importance of segmentation research for Gastroenterology, and confirm that Patch Index is an interesting measure of the classification potential of small to medium segmented regions.",
"title": ""
},
{
"docid": "neg:1840591_18",
"text": "One characteristic attribute of mobile platforms equipped with a set of independent steering wheels is their omnidirectionality and the ability to realize complex translational and rotational trajectories. An accurate coordination of steering angle and spinning rate of each wheel is necessary for a consistent motion. Since the orientations of the wheels must align to the Instantaneous Center of Rotation (ICR), the current location and velocity of this specific point is essential for describing the state of the platform. However, singular configurations of the controlled system exist depending on the ICR, leading to unfeasible control inputs, i.e., infinite steering rates. Within this work we address and analyze this problem in general. Furthermore, we propose a solution for mobile platforms with variable footprint. An existing controller based on dynamic feedback linearization is augmented by a new potential field-based algorithm for singularity avoidance which uses the tunable leg lengths as an additional control input to minimize deviations from the nominal motion trajectory. Simulations and experimental results on the mobile platform of DLR's humanoid manipulator Justin support our approach.",
"title": ""
},
{
"docid": "neg:1840591_19",
"text": "This report describes simple mechanisms that allow autonomous software agents to en gage in bargaining behaviors in market based environments Groups of agents with such mechanisms could be used in applications including market based control internet com merce and economic modelling After an introductory discussion of the rationale for this work and a brief overview of key concepts from economics work in market based control is reviewed to highlight the need for bargaining agents Following this the early experimental economics work of Smith and the recent results of Gode and Sunder are de scribed Gode and Sunder s work using zero intelligence zi traders that act randomly within a structured market appears to imply that convergence to the theoretical equilib rium price is determined more by market structure than by the intelligence of the traders in that market if this is true developing mechanisms for bargaining agents is of very limited relevance However it is demonstrated here that the average transaction prices of zi traders can vary signi cantly from the theoretical equilibrium level when supply and demand are asymmetric and that the degree of di erence from equilibrium is predictable from a pri ori statistical analysis In this sense it is shown here that Gode and Sunder s results are artefacts of their experimental regime Following this zero intelligence plus zip traders are introduced like zi traders these simple agents make stochastic bids Unlike zi traders they employ an elementary form of machine learning Groups of zip traders interacting in experimental markets similar to those used by Smith and Gode and Sunder are demonstrated and it is shown that the performance of zip traders is signi cantly closer to the human data than is the performance of Gode and Sunder s zi traders This document reports on work done during February to September while the author held a Visiting Academic post at Hewlett Packard Laboratories Bristol Filton Road Bristol BS QZ U K",
"title": ""
}
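The abstract above describes ZIP traders as ZI traders augmented with an elementary form of machine learning. The sketch below is a simplified, seller-only reading of that idea, a Widrow-Hoff style margin update with momentum; the parameter values and update details are assumptions for illustration, not a faithful reimplementation of Cliff's original ZIP rules.

```python
class ZipTrader:
    """Minimal ZIP-style seller: adapts a profit margin over its limit
    price toward observed market prices."""

    def __init__(self, limit, beta=0.3, gamma=0.05, margin=0.1):
        self.limit = limit      # private cost / limit price
        self.beta = beta        # learning rate
        self.gamma = gamma      # momentum coefficient
        self.margin = margin    # current profit margin (>= 0 for a seller)
        self.change = 0.0       # momentum term

    def quote(self):
        return self.limit * (1.0 + self.margin)

    def update(self, target_price):
        """Move the quoted price toward a target price observed in the
        market (e.g. a recent transaction price), then back out the margin."""
        delta = self.beta * (target_price - self.quote())
        self.change = self.gamma * self.change + (1.0 - self.gamma) * delta
        new_price = self.quote() + self.change
        self.margin = max(0.0, new_price / self.limit - 1.0)

seller = ZipTrader(limit=100.0)
for price in (130.0, 125.0, 120.0):   # observed market prices
    seller.update(price)
print(round(seller.quote(), 2))       # quote drifts toward the market level
```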
] |
1840592 | Illumination Invariant Imaging : Applications in Robust Vision-based Localisation , Mapping and Classification for Autonomous Vehicles | [
{
"docid": "pos:1840592_0",
"text": "Today, visual recognition systems are still rarely employed in robotics applications. Perhaps one of the main reasons for this is the lack of demanding benchmarks that mimic such scenarios. In this paper, we take advantage of our autonomous driving platform to develop novel challenging benchmarks for the tasks of stereo, optical flow, visual odometry/SLAM and 3D object detection. Our recording platform is equipped with four high resolution video cameras, a Velodyne laser scanner and a state-of-the-art localization system. Our benchmarks comprise 389 stereo and optical flow image pairs, stereo visual odometry sequences of 39.2 km length, and more than 200k 3D object annotations captured in cluttered scenarios (up to 15 cars and 30 pedestrians are visible per image). Results from state-of-the-art algorithms reveal that methods ranking high on established datasets such as Middlebury perform below average when being moved outside the laboratory to the real world. Our goal is to reduce this bias by providing challenging benchmarks with novel difficulties to the computer vision community. Our benchmarks are available online at: www.cvlibs.net/datasets/kitti.",
"title": ""
},
{
"docid": "pos:1840592_1",
"text": "When operating over extended periods of time, an autonomous system will inevitably be faced with severe changes in the appearance of its environment. Coping with such changes is more and more in the focus of current robotics research. In this paper, we foster the development of robust place recognition algorithms in changing environments by describing a new dataset that was recorded during a 728 km long journey in spring, summer, fall, and winter. Approximately 40 hours of full-HD video cover extreme seasonal changes over almost 3000 km in both natural and man-made environments. Furthermore, accurate ground truth information are provided. To our knowledge, this is by far the largest SLAM dataset available at the moment. In addition, we introduce an open source Matlab implementation of the recently published SeqSLAM algorithm and make it available to the community. We benchmark SeqSLAM using the novel dataset and analyse the influence of important parameters and algorithmic steps.",
"title": ""
},
{
"docid": "pos:1840592_2",
"text": "This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.",
"title": ""
}
] | [
{
"docid": "neg:1840592_0",
"text": "Wearable health tech provides doctors with the ability to remotely supervise their patients' wellness. It also makes it much easier to authorize someone else to take appropriate actions to ensure the person's wellness than ever before. Information Technology may soon change the way medicine is practiced, improving the performance, while reducing the price of healthcare. We analyzed the secrecy demands of wearable devices, including Smartphone, smart watch and their computing techniques, that can soon change the way healthcare is provided. However, before this is adopted in practice, all devices must be equipped with sufficient privacy capabilities related to healthcare service. In this paper, we formulated a new improved conceptual framework for wearable healthcare systems. This framework consists of ten principles and nine checklists, capable of providing complete privacy protection package to wearable device owners. We constructed this framework based on the analysis of existing mobile technology, the results of which are combined with the existing security standards. The approach also incorporates the market share percentage level of every app and its respective OS. This framework is evaluated based on the stringent CIA and HIPAA principles for information security. This evaluation is followed by testing the capability to revoke rights of subjects to access objects and ability to determine the set of available permissions for a particular subject for all models Finally, as the last step, we examine the complexity of the required initial setup.",
"title": ""
},
{
"docid": "neg:1840592_1",
"text": "Rolling upgrade consists of upgrading progressively the servers of a distributed system to reduce service downtime.Upgrading a subset of servers requires a well-engineered cluster membership protocol to maintain, in the meantime, the availability of the system state. Existing cluster membership reconfigurations, like CoreOS etcd, rely on a primary not only for reconfiguration but also for storing information. At any moment, there can be at most one primary, whose replacement induces disruption. We propose Rollup, a non-disruptive rolling upgrade protocol with a fast consensus-based reconfiguration. Rollup relies on a candidate leader only for the reconfiguration and scalable biquorums for service requests. While Rollup implements a non-disruptive cluster membership protocol, it does not offer a full-fledged coordination service. We analyzed Rollup theoretically and experimentally on an isolated network of 26 physical machines and an Amazon EC2 cluster of 59 virtual machines. Our results show an 8-fold speedup compared to a rolling upgrade based on a primary for reconfiguration.",
"title": ""
},
{
"docid": "neg:1840592_2",
"text": "Software game is a kind of application that is used not only for entertainment, but also for serious purposes that can be applicable to different domains such as education, business, and health care. Multidisciplinary nature of the game development processes that combine sound, art, control systems, artificial intelligence (AI), and human factors, makes the software game development practice different from traditional software development. However, the underline software engineering techniques help game development to achieve maintainability, flexibility, lower effort and cost, and better design. The purpose of this study is to assesses the state of the art research on the game development software engineering process and highlight areas that need further consideration by researchers. In the study, we used a systematic literature review methodology based on well-known digital libraries. The largest number of studies have been reported in the production phase of the game development software engineering process life cycle, followed by the pre-production phase. By contrast, the post-production phase has received much less research activity than the pre-production and production phases. The results of this study suggest that the game development software engineering process has many aspects that need further attention from researchers; that especially includes the postproduction phase.",
"title": ""
},
{
"docid": "neg:1840592_3",
"text": "The web has become the world's largest repository of knowledge. Web usage mining is the process of discovering knowledge from the interactions generated by the user in the form of access logs, cookies, and user sessions data. Web Mining consists of three different categories, namely Web Content Mining, Web Structure Mining, and Web Usage Mining (is the process of discovering knowledge from the interaction generated by the users in the form of access logs, browser logs, proxy-server logs, user session data, cookies). Accurate web log mining results and efficient online navigational pattern prediction are undeniably crucial for tuning up websites and consequently helping in visitors’ retention. Like any other data mining task, web log mining starts with data cleaning and preparation and it ends up discovering some hidden knowledge which cannot be extracted using conventional methods. After applying web mining on web sessions we will get navigation patterns which are important for web users such that appropriate actions can be adopted. Due to huge data in web, discovery of patterns and there analysis for further improvement in website becomes a real time necessity. The main focus of this paper is using of hybrid prediction engine to classify users on the basis of discovered patterns from web logs. Our proposed framework is to overcome the problem arise due to using of any single algorithm, we will give results based on comparison of two different algorithms like Longest Common Sequence (LCS) algorithm and Frequent Pattern (Growth) algorithm. Keywords— Web Usage Mining, Navigation Pattern, Frequent Pattern (Growth) Algorithm. ________________________________________________________________________________________________________",
"title": ""
},
{
"docid": "neg:1840592_4",
"text": "This paper provides a review of the literature addressing sensorless operation methods of PM brushless machines. The methods explained are state-of-the-art of open and closed loop control strategies. The closed loop review includes those methods based on voltage and current measurements, those methods based on back emf measurements, and those methods based on novel techniques not included in the previous categories. The paper concludes with a comparison table including all main features for all control strategies",
"title": ""
},
{
"docid": "neg:1840592_5",
"text": "To understand how implicit and explicit biofeedback work in games, we developed a first-person shooter (FPS) game to experiment with different biofeedback techniques. While this area has seen plenty of discussion, there is little rigorous experimentation addressing how biofeedback can enhance human-computer interaction. In our two-part study, (N=36) subjects first played eight different game stages with two implicit biofeedback conditions, with two simulation-based comparison and repetition rounds, then repeated the two biofeedback stages when given explicit information on the biofeedback. The biofeedback conditions were respiration and skin-conductance (EDA) adaptations. Adaptation targets were four balanced player avatar attributes. We collected data with psycho¬physiological measures (electromyography, respiration, and EDA), a game experience questionnaire, and game-play measures.\n According to our experiment, implicit biofeedback does not produce significant effects in player experience in an FPS game. In the explicit biofeedback conditions, players were more immersed and positively affected, and they were able to manipulate the game play with the biosignal interface. We recommend exploring the possibilities of using explicit biofeedback interaction in commercial games.",
"title": ""
},
{
"docid": "neg:1840592_6",
"text": "It has been suggested that Brain-Computer Interfaces (BCI) may one day be suitable for controlling a neuroprosthesis. For closed-loop operation of BCI, a tactile feedback channel that is compatible with neuroprosthetic applications is desired. Operation of an EEG-based BCI using only vibrotactile feedback, a commonly used method to convey haptic senses of contact and pressure, is demonstrated with a high level of accuracy. A Mu-rhythm based BCI using a motor imagery paradigm was used to control the position of a virtual cursor. The cursor position was shown visually as well as transmitted haptically by modulating the intensity of a vibrotactile stimulus to the upper limb. A total of six subjects operated the BCI in a two-stage targeting task, receiving only vibrotactile biofeedback of performance. The location of the vibration was also systematically varied between the left and right arms to investigate location-dependent effects on performance. Subjects are able to control the BCI using only vibrotactile feedback with an average accuracy of 56% and as high as 72%. These accuracies are significantly higher than the 15% predicted by random chance if the subject had no voluntary control of their Mu-rhythm. The results of this study demonstrate that vibrotactile feedback is an effective biofeedback modality to operate a BCI using motor imagery. In addition, the study shows that placement of the vibrotactile stimulation on the biceps ipsilateral or contralateral to the motor imagery introduces a significant bias in the BCI accuracy. This bias is consistent with a drop in performance generated by stimulation of the contralateral limb. Users demonstrated the capability to overcome this bias with training.",
"title": ""
},
{
"docid": "neg:1840592_7",
"text": "Rapid progress has been made towards question answering (QA) systems that can extract answers from text. Existing neural approaches make use of expensive bidirectional attention mechanisms or score all possible answer spans, limiting scalability. We propose instead to cast extractive QA as an iterative search problem: select the answer’s sentence, start word, and end word. This representation reduces the space of each search step and allows computation to be conditionally allocated to promising search paths. We show that globally normalizing the decision process and back-propagating through beam search makes this representation viable and learning efficient. We empirically demonstrate the benefits of this approach using our model, Globally Normalized Reader (GNR), which achieves the second highest single model performance on the Stanford Question Answering Dataset (68.4 EM, 76.21 F1 dev) and is 24.7x faster than bi-attention-flow. We also introduce a data-augmentation method to produce semantically valid examples by aligning named entities to a knowledge base and swapping them with new entities of the same type. This method improves the performance of all models considered in this work and is of independent interest for a variety of NLP tasks.",
"title": ""
},
{
"docid": "neg:1840592_8",
"text": "Affective computing is a newly trend the main goal is exploring the human emotion things. The human emotion is leaded into a key position of behavior clue, and hence it should be included within the sensible model when an intelligent system aims to simulate or forecast human responses. This research utilizes decision tree one of data mining model to classify the emotion. This research integrates and manipulates the Thayer's emotion mode and color theory into the decision tree model, C4.5 for an innovative emotion detecting system. This paper uses 320 data in four emotion groups to train and build the decision tree for verifying the accuracy in this system. The result reveals that C4.5 decision tree model can be effective classified the emotion by feedback color from human. For the further research, colors will not the only human behavior clues, even more than all the factors from human interaction.",
"title": ""
},
{
"docid": "neg:1840592_9",
"text": "Commonsense knowledge is vital to many natural language processing tasks. In this paper, we present a novel open-domain conversation generation model to demonstrate how large-scale commonsense knowledge can facilitate language understanding and generation. Given a user post, the model retrieves relevant knowledge graphs from a knowledge base and then encodes the graphs with a static graph attention mechanism, which augments the semantic information of the post and thus supports better understanding of the post. Then, during word generation, the model attentively reads the retrieved knowledge graphs and the knowledge triples within each graph to facilitate better generation through a dynamic graph attention mechanism. This is the first attempt that uses large-scale commonsense knowledge in conversation generation. Furthermore, unlike existing models that use knowledge triples (entities) separately and independently, our model treats each knowledge graph as a whole, which encodes more structured, connected semantic information in the graphs. Experiments show that the proposed model can generate more appropriate and informative responses than stateof-the-art baselines.",
"title": ""
},
{
"docid": "neg:1840592_10",
"text": "Many animals, on air, water, or land, navigate in three-dimensional (3D) environments, yet it remains unclear how brain circuits encode the animal's 3D position. We recorded single neurons in freely flying bats, using a wireless neural-telemetry system, and studied how hippocampal place cells encode 3D volumetric space during flight. Individual place cells were active in confined 3D volumes, and in >90% of the neurons, all three axes were encoded with similar resolution. The 3D place fields from different neurons spanned different locations and collectively represented uniformly the available space in the room. Theta rhythmicity was absent in the firing patterns of 3D place cells. These results suggest that the bat hippocampus represents 3D volumetric space by a uniform and nearly isotropic rate code.",
"title": ""
},
{
"docid": "neg:1840592_11",
"text": "Space-frequency (SF) codes that exploit both spatial and frequency diversity can be designed using orthogonal frequency division multiplexing (OFDM). However, OFDM is sensitive to frequency offset (FO), which generates intercarrier interference (ICI) among subcarriers. We investigate the pair-wise error probability (PEP) performance of SF codes over quasistatic, frequency selective Rayleigh fading channels with FO. We prove that the conventional SF code design criteria remain valid. The negligible performance loss for small FOs (less than 1%), however, increases with FO and with signal to noise ratio (SNR). While diversity can be used to mitigate ICI, as FO increases, the PEP does not rapidly decay with SNR. Therefore, we propose a new class of SF codes called ICI self-cancellation SF (ISC-SF) codes to combat ICI effectively even with high FO (10%). ISC-SF codes are constructed from existing full diversity space-time codes. Importantly, our code design provide a satisfactory tradeoff among error correction ability, ICI reduction and spectral efficiency. Furthermore, we demonstrate that ISC-SF codes can also mitigate the ICI caused by phase noise and time varying channels. Simulation results affirm the theoretical analysis.",
"title": ""
},
{
"docid": "neg:1840592_12",
"text": "Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We find that applying orthogonal regularization to the generator renders it amenable to a simple “truncation trick,” allowing fine control over the trade-off between sample fidelity and variety by truncating the latent space. Our modifications lead to models which set the new state of the art in class-conditional image synthesis. When trained on ImageNet at 128×128 resolution, our models (BigGANs) achieve an Inception Score (IS) of 166.3 and Fréchet Inception Distance (FID) of 9.6, improving over the previous best IS of 52.52 and FID of 18.65.",
"title": ""
},
{
"docid": "neg:1840592_13",
"text": "In addition to training our policy on the goals that were generated in the current iteration, we also save a list (“regularized replay buffer”) of goals that were generated during previous iterations (update replay). These goals are also used to train our policy, so that our policy does not forget how to achieve goals that it has previously learned. When we generate goals for our policy to train on, we sample two thirds of the goals from the Goal GAN and we sample the one third of the goals uniformly from the replay buffer. To prevent the replay buffer from concentrating in a small portion of goal space, we only insert new goals that are further away than from the goals already in the buffer, where we chose the goal-space metric and to be the same as the ones introduced in Section 3.1.",
"title": ""
},
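A minimal sketch of the goal-mixing and replay-insertion logic described above: two thirds of the training goals come from the Goal GAN, one third from the regularized replay buffer, and a goal enters the buffer only if it is sufficiently far from the goals already stored. The distance metric, threshold value, and function signatures are assumptions for illustration.

```python
import random
import numpy as np

def sample_training_goals(gan_sampler, replay, n_goals):
    """Mix goals for the next policy update: roughly two thirds freshly
    generated by the Goal GAN and one third drawn uniformly from the
    regularized replay buffer. `gan_sampler(k)` is assumed to return a
    list of k goal vectors."""
    n_new = (2 * n_goals) // 3
    new_goals = gan_sampler(n_new)
    old_goals = random.choices(replay, k=n_goals - n_new) if replay else []
    return list(new_goals) + list(old_goals)

def update_replay(replay, goals, eps=0.1):
    """Insert a goal only if it is farther than `eps` (Euclidean distance
    here, as an assumption) from every goal already in the buffer, so the
    buffer does not concentrate in a small region of goal space."""
    for g in goals:
        g = np.asarray(g, dtype=float)
        if all(np.linalg.norm(g - b) > eps for b in replay):
            replay.append(g)
    return replay
```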
{
"docid": "neg:1840592_14",
"text": "Awareness of other vehicle's intention may help human drivers or autonomous vehicles judge the risk and avoid traffic accidents. This paper proposed an approach to predicting driver's intentions using Hidden Markov Model (HMM) which is able to access the control and the state of the vehicle. The driver performs maneuvers including stop/non-stop, change lane left/right and turn left/right in a simulator in both highway and urban environments. Moreover, the structure of the road (curved road) is also taken into account for classification. Experiments were conducted with different input sets (steering wheel data with and without vehicle state data) to compare the system performance.",
"title": ""
},
{
"docid": "neg:1840592_15",
"text": "Identification and extraction of singing voice from within musical mixtures is a key challenge in sourc e separation and machine audition. Recently, deep neural network s (DNN) have been used to estimate 'ideal' binary masks for carefully controlled cocktail party speech separation problems. However, it is not yet known whether these methods are capab le of generalizing to the discrimination of voice and non -voice in the context of musical mixtures. Here, we trained a con volutional DNN (of around a billion parameters) to provide probabilistic estimates of the ideal binary mask for separation o f vocal sounds from real-world musical mixtures. We contrast our DNN results with more traditional linear methods. Our approach may be useful for automatic removal of vocal sounds from musical mixtures for 'karaoke' type applications.",
"title": ""
},
{
"docid": "neg:1840592_16",
"text": "Direct methods for restoration of images blurred by motion are analyzed and compared. The term direct means that the considered methods are performed in a one-step fashion without any iterative technique. The blurring point-spread function is assumed to be unknown, and therefore the image restoration process is called blind deconvolution. What is believed to be a new direct method, here called the whitening method, was recently developed. This method and other existing direct methods such as the homomorphic and the cepstral techniques are studied and compared for a variety of motion types. Various criteria such as quality of restoration, sensitivity to noise, and computation requirements are considered. It appears that the recently developed method shows some improvements over other older methods. The research presented here clarifies the differences among the direct methods and offers an experimental basis for choosing which blind deconvolution method to use. In addition, some improvements on the methods are suggested.",
"title": ""
},
{
"docid": "neg:1840592_17",
"text": "Domain adaptation plays an important role for speech recognition models, in particular, for domains that have low resources. We propose a novel generative model based on cyclicconsistent generative adversarial network (CycleGAN) for unsupervised non-parallel speech domain adaptation. The proposed model employs multiple independent discriminators on the power spectrogram, each in charge of different frequency bands. As a result we have 1) better discriminators that focus on fine-grained details of the frequency features, and 2) a generator that is capable of generating more realistic domainadapted spectrogram. We demonstrate the effectiveness of our method on speech recognition with gender adaptation, where the model only has access to supervised data from one gender during training, but is evaluated on the other at test time. Our model is able to achieve an average of 7.41% on phoneme error rate, and 11.10% word error rate relative performance improvement as compared to the baseline, on TIMIT and WSJ dataset, respectively. Qualitatively, our model also generates more natural sounding speech, when conditioned on data from the other domain.",
"title": ""
},
{
"docid": "neg:1840592_18",
"text": "While it is known that academic searchers differ from typical web searchers, little is known about the search behavior of academic searchers over longer periods of time. In this study we take a look at academic searchers through a large-scale log analysis on a major academic search engine. We focus on two aspects: query reformulation patterns and topic shifts in queries. We first analyze how each of these aspects evolve over time. We identify important query reformulation patterns: revisiting and issuing new queries tend to happen more often over time. We also find that there are two distinct types of users: one type of users becomes increasingly focused on the topics they search for as time goes by, and the other becomes increasingly diversifying. After analyzing these two aspects separately, we investigate whether, and to which degree, there is a correlation between topic shifts and query reformulations. Surprisingly, users’ preferences of query reformulations correlate little with their topic shift tendency. However, certain reformulations may help predict the magnitude of the topic shift that happens in the immediate next timespan. Our results shed light on academic searchers’ information seeking behavior and may benefit search personalization.",
"title": ""
}
] |
1840593 | CLUSTERGEN: a statistical parametric synthesizer using trajectory modeling | [
{
"docid": "pos:1840593_0",
"text": "This paper describes an HMM-based speech synthesis system (HTS), in which speech waveform is generated from HMMs themselves, and applies it to English speech synthesis using the general speech synthesis architecture of Festival. Similarly to other datadriven speech synthesis approaches, HTS has a compact language dependent module: a list of contextual factors. Thus, it could easily be extended to other languages, though the first version of HTS was implemented for Japanese. The resulting run-time engine of HTS has the advantage of being small: less than 1 M bytes, excluding text analysis part. Furthermore, HTS can easily change voice characteristics of synthesized speech by using a speaker adaptation technique developed for speech recognition. The relation between the HMM-based approach and other unit selection approaches is also discussed.",
"title": ""
}
] | [
{
"docid": "neg:1840593_0",
"text": "This paper summarize our approach to author profiling task – a part of evaluation lab PAN’13. We have used ensemble-based classification on large features set. All the features are roughly described and experimental section provides evaluation of different methods and classification approaches.",
"title": ""
},
{
"docid": "neg:1840593_1",
"text": "We study a generalization of the setting of regenerating codes, motivated by applications to storage systems consisting of clusters of storage nodes. There are n clusters in total, with m nodes per cluster. A data file is coded and stored across the mn nodes, with each node storing α symbols. For availability of data, we demand that the file is retrievable by downloading the entire content from any subset of k clusters. Nodes represent entities that can fail, and here we distinguish between intra-cluster and inter-cluster bandwidth-costs during node repair. Node-repair is accomplished by downloading β symbols each from any set of d other clusters. The replacement-node also downloads content from any set of ` surviving nodes in the same cluster during the repair process. We identity the optimal trade-off between storage-overhead and inter-cluster (IC) repair-bandwidth under functional repair, and also present optimal exact-repair code constructions for a class of parameters. Our results imply that it is possible to simultaneously achieve both optimal storage overhead and optimal minimum IC bandwidth, for sufficiently large values of nodes per cluster. The simultaneous optimality comes at the expense of intra-cluster bandwidth, and we obtain lower bounds on the necessary intra-cluster repair-bandwidth. Simulation results based on random linear network codes suggest optimality of the bounds on intra-cluster repair-bandwidth.",
"title": ""
},
{
"docid": "neg:1840593_2",
"text": "Neurodegeneration is a phenomenon that occurs in the central nervous system through the hallmarks associating the loss of neuronal structure and function. Neurodegeneration is observed after viral insult and mostly in various so-called 'neurodegenerative diseases', generally observed in the elderly, such as Alzheimer's disease, multiple sclerosis, Parkinson's disease and amyotrophic lateral sclerosis that negatively affect mental and physical functioning. Causative agents of neurodegeneration have yet to be identified. However, recent data have identified the inflammatory process as being closely linked with multiple neurodegenerative pathways, which are associated with depression, a consequence of neurodegenerative disease. Accordingly, pro‑inflammatory cytokines are important in the pathophysiology of depression and dementia. These data suggest that the role of neuroinflammation in neurodegeneration must be fully elucidated, since pro‑inflammatory agents, which are the causative effects of neuroinflammation, occur widely, particularly in the elderly in whom inflammatory mechanisms are linked to the pathogenesis of functional and mental impairments. In this review, we investigated the role played by the inflammatory process in neurodegenerative diseases.",
"title": ""
},
{
"docid": "neg:1840593_3",
"text": "Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. Towards this goal, we present the Composition, Retrieval and Fusion Network (Craft), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. Craft explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of Craft while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate Craft on semantic fidelity to caption, composition consistency, and visual quality. Craft outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate Craft on Flintstones, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by Craft, see https://youtu.be/688Vv86n0z8. Fred wearing a red hat is walking in the living room Retrieve Compose Retrieve Compose Retrieve Pebbles is sitting at a table in a room watching the television Retrieve Compose Retrieve Compose Compose Retrieve Retrieve Fuse",
"title": ""
},
{
"docid": "neg:1840593_4",
"text": "Glucose control serves as the primary method of diabetes management. Current digital therapeutic approaches for subjects with Type 1 diabetes mellitus (T1DM) such as the artificial pancreas and bolus calculators leverage machine learning techniques for predicting subcutaneous glucose for improved control. Deep learning has recently been applied in healthcare and medical research to achieve state-of-the-art results in a range of tasks including disease diagnosis, and patient state prediction among others. In this work, we present a deep learning model that is capable of predicting glucose levels over a 30-minute horizon with leading accuracy for simulated patient cases (RMSE = 10.02±1.28 [mg/dl] and MARD = 5.95±0.64%) and real patient cases (RMSE = 21.23±1.15 [mg/dl] and MARD = 10.53±1.28%). In addition, the model also provides competitive performance in forecasting adverse glycaemic events with minimal time lag both in a simulated patient dataset (MCChyperglycaemia = 0.82±0.06 and MCChypoglycaemia = 0.76±0.13) and in a real patient dataset (MCChyperglycaemia = 0.79±0.04 and MCChypoglycaemia = 0.28±0.11). This approach is evaluated on a dataset of 10 simulated cases generated from the UVa/Padova simulator and a clinical dataset of 5 real cases each containing glucose readings, insulin bolus, and meal (carbohydrate) data. Performance of the recurrent convolutional neural network is benchmarked against four state-of-the-art algorithms: support vector regression (SVR), latent variable (LVX) model, autoregressive model (ARX), and neural network for predicting glucose algorithm (NNPG).",
"title": ""
},
{
"docid": "neg:1840593_5",
"text": "Almost all cellular mobile communications including first generation analog systems, second generation digital systems, third generation WCDMA, and fourth generation OFDMA systems use Ultra High Frequency (UHF) band of radio spectrum with frequencies in the range of 300MHz-3GHz. This band of spectrum is becoming increasingly crowded due to spectacular growth in mobile data and other related services. More recently, there have been proposals to explore mmWave spectrum (3-300GHz) for commercial mobile applications due to its unique advantages such as spectrum availability and small component sizes. In this paper, we discuss system design aspects such as antenna array design, base station and mobile station requirements. We also provide system performance and SINR geometry results to demonstrate the feasibility of an outdoor mmWave mobile broadband communication system. We note that with adaptive antenna array beamforming, multi-Gbps data rates can be supported for mobile cellular deployments.",
"title": ""
},
{
"docid": "neg:1840593_6",
"text": "A laparoscopic Heller myotomy with partial fundoplication is considered today in most centers in the United States and abroad the treatment of choice for patients with esophageal achalasia. Even though the operation has initially a very high success rate, dysphagia eventually recurs in some patients. In these cases, it is important to perform a careful work-up to identify the cause of the failure and to design a tailored treatment plan by either endoscopic means or revisional surgery. The best results are obtained by a team approach, in Centers where radiologists, gastroenterologists, and surgeons have experience in the diagnosis and treatment of this disease.",
"title": ""
},
{
"docid": "neg:1840593_7",
"text": "In this work, we apply word embeddings and neural networks with Long Short-Term Memory (LSTM) to text classification problems, where the classification criteria are decided by the context of the application. We examine two applications in particular. The first is that of Actionability, where we build models to classify social media messages from customers of service providers as Actionable or Non-Actionable. We build models for over 30 different languages for actionability, and most of the models achieve accuracy around 85%, with some reaching over 90% accuracy. We also show that using LSTM neural networks with word embeddings vastly outperform traditional techniques. Second, we explore classification of messages with respect to political leaning, where social media messages are classified as Democratic or Republican. The model is able to classify messages with a high accuracy of 87.57%. As part of our experiments, we vary different hyperparameters of the neural networks, and report the effect of such variation on the accuracy. These actionability models have been deployed to production and help company agents provide customer support by prioritizing which messages to respond to. The model for political leaning has been opened and made available for wider use.",
"title": ""
},
{
"docid": "neg:1840593_8",
"text": "A large literature has considered predictability of the mean or volatility of stock returns but little is known about whether the distribution of stock returns more generally is predictable. We explore this issue in a quantile regression framework and consider whether a range of economic state variables are helpful in predicting different quantiles of stock returns representing left tails, right tails or shoulders of the return distribution. Many variables are found to have an asymmetric effect on the return distribution, affecting lower, central and upper quantiles very differently. Out-of-sample forecasts suggest that upper quantiles of the return distribution can be predicted by means of economic state variables although the center of the return distribution is more difficult to predict. Economic gains from utilizing information in time-varying quantile forecasts are demonstrated through portfolio selection and option trading experiments. ∗We thank Torben Andersen, Tim Bollerslev, Peter Christoffersen as well as seminar participants at HEC, University of Montreal, University of Toronto, Goldman Sachs and CREATES, University of Aarhus, for helpful comments.",
"title": ""
},
{
"docid": "neg:1840593_9",
"text": "In this paper, we propose new algorithms for learning segmentation strategies for simultaneous speech translation. In contrast to previously proposed heuristic methods, our method finds a segmentation that directly maximizes the performance of the machine translation system. We describe two methods based on greedy search and dynamic programming that search for the optimal segmentation strategy. An experimental evaluation finds that our algorithm is able to segment the input two to three times more frequently than conventional methods in terms of number of words, while maintaining the same score of automatic evaluation.1",
"title": ""
},
{
"docid": "neg:1840593_10",
"text": "In machine learning, one of the main requirements is to build computational models with a high ability to generalize well the extracted knowledge. When training e.g. artificial neural networks, poor generalization is often characterized by over-training. A common method to avoid over-training is the hold-out crossvalidation. The basic problem of this method represents, however, appropriate data splitting. In most of the applications, simple random sampling is used. Nevertheless, there are several sophisticated statistical sampling methods suitable for various types of datasets. This paper provides a survey of existing sampling methods applicable to the data splitting problem. Supporting experiments evaluating the benefits of the selected data splitting techniques involve artificial neural networks of the back-propagation type.",
"title": ""
},
{
"docid": "neg:1840593_11",
"text": "A vision system was designed to detect multiple lanes on structured highway using an “estimate and detect” scheme. It detected the lane in which the vehicle was driving (the central lane) and estimated the possible position of two adjacent lanes. Then the detection was made based on these estimations. The vehicle was first recognized if it was driving on a straight road or in a curve using its GPS position and the OpenStreetMap digital map. The two cases were processed differently. For straight road, the central lane was detected in the original image using Hough transformation and a simplified perspective transformation was designed to make estimations. In the case of curve path, a complete perspective transformation was performed and the central lane was detected by scanning at each row in the top view image. The system was able to detected lane marks that were not distinct or even obstructed by other vehicles.",
"title": ""
},
{
"docid": "neg:1840593_12",
"text": "Targeting Interleukin-1 in Heart Disease Print ISSN: 0009-7322. Online ISSN: 1524-4539 Copyright © 2013 American Heart Association, Inc. All rights reserved. is published by the American Heart Association, 7272 Greenville Avenue, Dallas, TX 75231 Circulation doi: 10.1161/CIRCULATIONAHA.113.003199 2013;128:1910-1923 Circulation. http://circ.ahajournals.org/content/128/17/1910 World Wide Web at: The online version of this article, along with updated information and services, is located on the",
"title": ""
},
{
"docid": "neg:1840593_13",
"text": "This paper presents the novel design of a wideband circularly polarized (CP) Radio Frequency Identification (RFID) reader microstrip patch antenna for worldwide Ultra High Frequency (UHF) band which covers 840–960 MHz. The proposed antenna, which consists of a microstrip patch with truncated corners and a cross slot, is placed on a foam substrate (εr = 1.06) above a ground plane and is fed through vias through ground plane holes that extend from the quadrature 3 dB branch line hybrid coupler placed below the ground plane. This helps to separate feed network radiation, from the patch antenna and keeping the CP purity. The prototype antenna was fabricated with a total size of 225 × 250 × 12.8 mm3 which shows a measured impedance matching band of 840–1150MHz (31.2%) as well as measured rotating linear based circularly polarized radiation patterns. The simulated and measured 3 dB Axial Ratio (AR) bandwidth is better than 23% from 840–1050 MHz meeting and exceeding the target worldwide RFID UHF band.",
"title": ""
},
{
"docid": "neg:1840593_14",
"text": "Plant-based psychedelics, such as psilocybin, have an ancient history of medicinal use. After the first English language report on LSD in 1950, psychedelics enjoyed a short-lived relationship with psychology and psychiatry. Used most notably as aids to psychotherapy for the treatment of mood disorders and alcohol dependence, drugs such as LSD showed initial therapeutic promise before prohibitive legislature in the mid-1960s effectively ended all major psychedelic research programs. Since the early 1990s, there has been a steady revival of human psychedelic research: last year saw reports on the first modern brain imaging study with LSD and three separate clinical trials of psilocybin for depressive symptoms. In this circumspective piece, RLC-H and GMG share their opinions on the promises and pitfalls of renewed psychedelic research, with a focus on the development of psilocybin as a treatment for depression.",
"title": ""
},
{
"docid": "neg:1840593_15",
"text": "In this paper, for the first time, 600 ∼ 6500 V IGBTs utilizing a new vertical structure of “Light Punch-Through (LPT) (II)” with Thin Wafer Process Technology demonstrate high total performance with low overall loss and high safety operating area (SOA) capability. This collector structure enables a wide position in the trade-off characteristics between on-state voltage (VCE(sat)) and turn-off loss (EOFF) without utilizing any conventional carrier lifetime technique. In addition, this device concept achieves a wide operating junction temperature (@218 ∼ 423 K) of IGBT without the snap-back phenomena (≤298 K) and thermal destruction (≥398 K). From the viewpoint of the high performance of IGBT, the breaking limitation of any Si wafer size, the proposed LPT(II) concept that utilizes an FZ silicon wafer and Thin Wafer Technology is the most promising candidate as a vertical structure of IGBT for the any voltage class.",
"title": ""
},
{
"docid": "neg:1840593_16",
"text": "Local classifiers are sometimes called lazy learners because they do not train a classifier until presented with a test sample. However, such methods are generally not completely lazy because the neighborhood size k (or other locality parameter) is usually chosen by cross validation on the training set, which can require significant preprocessing and risks overfitting. We propose a simple alternative to cross validation of the neighborhood size that requires no preprocessing: instead of committing to one neighborhood size, average the discriminants for multiple neighborhoods. We show that this forms an expected estimated posterior that minimizes the expected Bregman loss with respect to the uncertainty about the neighborhood choice. We analyze this approach for six standard and state-of-the-art local classifiers, including discriminative adaptive metric kNN (DANN), a local support vector machine (SVM-KNN), hyperplane distance nearest neighbor (HKNN), and a new local Bayesian quadratic discriminant analysis (local BDA). The empirical effectiveness of this technique versus cross validation is confirmed with experiments on seven benchmark data sets, showing that similar classification performance can be attained without any training.",
"title": ""
},
{
"docid": "neg:1840593_17",
"text": "The principle of control signal amplification is found in all actuation systems, from engineered devices through to the operation of biological muscles. However, current engineering approaches require the use of hard and bulky external switches or valves, incompatible with both the properties of emerging soft artificial muscle technology and those of the bioinspired robotic systems they enable. To address this deficiency a biomimetic molecular-level approach is developed that employs light, with its excellent spatial and temporal control properties, to actuate soft, pH-responsive hydrogel artificial muscles. Although this actuation is triggered by light, it is largely powered by the resulting excitation and runaway chemical reaction of a light-sensitive acid autocatalytic solution in which the actuator is immersed. This process produces actuation strains of up to 45% and a three-fold chemical amplification of the controlling light-trigger, realising a new strategy for the creation of highly functional soft actuating systems.",
"title": ""
},
{
"docid": "neg:1840593_18",
"text": "We study the approach to jamming in hard-sphere packings and, in particular, the pair correlation function g(2) (r) around contact, both theoretically and computationally. Our computational data unambiguously separate the narrowing delta -function contribution to g(2) due to emerging interparticle contacts from the background contribution due to near contacts. The data also show with unprecedented accuracy that disordered hard-sphere packings are strictly isostatic: i.e., the number of exact contacts in the jamming limit is exactly equal to the number of degrees of freedom, once rattlers are removed. For such isostatic packings, we derive a theoretical connection between the probability distribution of interparticle forces P(f) (f) , which we measure computationally, and the contact contribution to g(2) . We verify this relation for computationally generated isostatic packings that are representative of the maximally random jammed state. We clearly observe a maximum in P(f) and a nonzero probability of zero force, shedding light on long-standing questions in the granular-media literature. We computationally observe an unusual power-law divergence in the near-contact contribution to g(2) , persistent even in the jamming limit, with exponent -0.4 clearly distinguishable from previously proposed inverse-square-root divergence. Additionally, we present high-quality numerical data on the two discontinuities in the split-second peak of g(2) and use a shared-neighbor analysis of the graph representing the contact network to study the local particle clusters responsible for the peculiar features. Finally, we present the computational data on the contact contribution to g(2) for vacancy-diluted fcc crystal packings and also investigate partially crystallized packings along the transition from maximally disordered to fully ordered packings. We find that the contact network remains isostatic even when ordering is present. Unlike previous studies, we find that ordering has a significant impact on the shape of P(f) for small forces.",
"title": ""
},
{
"docid": "neg:1840593_19",
"text": "The modern image search system requires semantic understanding of image, and a key yet under-addressed problem is to learn a good metric for measuring the similarity between images. While deep metric learning has yielded impressive performance gains by extracting high level abstractions from image data, a proper objective loss function becomes the central issue to boost the performance. In this paper, we propose a novel angular loss, which takes angle relationship into account, for learning better similarity metric. Whereas previous metric learning methods focus on optimizing the similarity (contrastive loss) or relative similarity (triplet loss) of image pairs, our proposed method aims at constraining the angle at the negative point of triplet triangles. Several favorable properties are observed when compared with conventional methods. First, scale invariance is introduced, improving the robustness of objective against feature variance. Second, a third-order geometric constraint is inherently imposed, capturing additional local structure of triplet triangles than contrastive loss or triplet loss. Third, better convergence has been demonstrated by experiments on three publicly available datasets.",
"title": ""
}
] |
1840594 | Bandwidth Enhancement of Small-Size Planar Tablet Computer Antenna Using a Parallel-Resonant Spiral Slit | [
{
"docid": "pos:1840594_0",
"text": "A coupled-fed shorted monopole with its feed structure as an effective radiator for eight-band LTE/WWAN (LTE700/GSM850/900/1800/ 1900/UMTS/LTE2300/2500) operation in the laptop computer is presented. The radiating feed structure capacitively excites the shorted monopole. The feed structure mainly comprises a long feeding strip and a loop feed therein. The loop feed is formed at the front section of the feeding strip and connected to a 50-Ω mini-cable to feed the antenna. Both the feeding strip and loop feed contribute wideband resonant modes to combine with those generated by the shorted monopole for the desired eight-band operation. The antenna size above the top shielding metal wall of the laptop display is 4 × 10 × 80 mm3 and is suitable to be embedded inside the casing of the laptop computer. The proposed antenna is fabricated and tested, and good radiation performances of the fabricated antenna are obtained.",
"title": ""
},
{
"docid": "pos:1840594_1",
"text": "A small-size coupled-fed loop antenna suitable to be printed on the system circuit board of the mobile phone for penta-band WWAN operation (824-960/1710-2170 MHz) is presented. The loop antenna requires only a small footprint of 15 x 25 mm2 on the circuit board, and it can also be in close proximity to the surrounding ground plane printed on the circuit board. That is, very small or no isolation distance is required between the antenna's radiating portion and the nearby ground plane. This can lead to compact integration of the internal on-board printed antenna on the circuit board of the mobile phone, especially the slim mobile phone. The loop antenna also shows a simple structure; it is formed by a loop strip of about 87 mm with its end terminal short-circuited to the ground plane and its front section capacitively coupled to a feeding strip which is also an efficient radiator to contribute a resonant mode for the antenna's upper band to cover the GSM1800/1900/UMTS bands (1710-2170 MHz). Through the coupling excitation, the antenna can also generate a 0.25-wavelength loop resonant mode to form the antenna's lower band to cover the GSM850/900 bands (824-960 MHz). Details of the proposed antenna are presented. The SAR results for the antenna with the presence of the head and hand phantoms are also studied.",
"title": ""
},
{
"docid": "pos:1840594_2",
"text": "A multiband printed monopole slot antenna promising for operating as an internal antenna in the thin-profile laptop computer for wireless wide area network (WWAN) operation is presented. The proposed antenna is formed by three monopole slots operated at their quarter-wavelength modes and arranged in a compact planar configuration. A step-shaped microstrip feedline is applied to excite the three monopole slots at their respective optimal feeding position, and two wide operating bands at about 900 and 1900 MHz are obtained for the antenna to cover all the five operating bands of GSM850/900/1800/1900/UMTS for WWAN operation. The antenna is easily printed on a small-size FR4 substrate and shows a length of 60 mm only and a height of 12 mm when mounted at the top edge of the system ground plane or supporting metal frame of the laptop display. Details of the proposed antenna are presented and studied.",
"title": ""
}
] | [
{
"docid": "neg:1840594_0",
"text": "A sketching system for spline-based free-form surfaces on the Responsive Workbench is presented. We propose 3D tools for curve drawing and deformation techniques for curves and surfaces, adapted to the needs of designers. The user directly draws curves in the virtual environment, using a tracked stylus as an input device. A curve network can be formed, describing the skeleton of a virtual model. The non-dominant hand positions and orients the model while the dominant hand uses the editing tools. The curves and the resulting skinning surfaces can interactively be deformed.",
"title": ""
},
{
"docid": "neg:1840594_1",
"text": "Correctly classifying a skin lesion is one of the first steps towards treatment. We propose a novel convolutional neural network (CNN) architecture for skin lesion classification designed to learn based on information from multiple image resolutions while leveraging pretrained CNNs. While traditional CNNs are generally trained on a single resolution image, our CNN is composed of multiple tracts, where each tract analyzes the image at a different resolution simultaneously and learns interactions across multiple image resolutions using the same field-of-view. We convert a CNN, pretrained on a single resolution, to work for multi-resolution input. The entire network is fine-tuned in a fully learned end-to-end optimization with auxiliary loss functions. We show how our proposed novel multi-tract network yields higher classification accuracy, outperforming state-of-the-art multi-scale approaches when compared over a public skin lesion dataset.",
"title": ""
},
{
"docid": "neg:1840594_2",
"text": "Color images provide large information for human visual perception compared to grayscale images. Color image enhancement methods enhance the visual data to increase the clarity of the color image. It increases human perception of information. Different color image contrast enhancement methods are used to increase the contrast of the color images. The Retinex algorithms enhance the color images similar to the scene perceived by the human eye. Multiscale retinex with color restoration (MSRCR) is a type of retinex algorithm. The MSRCR algorithm results in graying out and halo artifacts at the edges of the images. So here the focus is on improving the MSRCR algorithm by combining it with contrast limited adaptive histogram equalization (CLAHE) using image.",
"title": ""
},
{
"docid": "neg:1840594_3",
"text": "Linda Marion is a doctoral student at Drexel University. E-mail: [email protected]. Abstract This exploratory study examined 250 online academic librarian employment ads posted during 2000 to determine current requirements for technologically oriented jobs. A content analysis software program was used to categorize the specific skills and characteristics listed in the ads. The results were analyzed using multivariate analysis (cluster analysis and multidimensional scaling). The results, displayed in a three-dimensional concept map, indicate 19 categories comprised of both computer related skills and behavioral characteristics that can be interpreted along three continua: (1) technical skills to people skills; (2) long-established technologies and behaviors to emerging trends; (3) technical service competencies to public service competencies. There was no identifiable “digital librarian” category.",
"title": ""
},
{
"docid": "neg:1840594_4",
"text": "There's a large push toward offering solutions and services in the cloud due to its numerous advantages. However, there are no clear guidelines for designing and deploying cloud solutions that can seamlessly operate to handle Web-scale traffic. The authors review industry best practices and identify principles for operating Web-scale cloud solutions by deriving design patterns that enable each principle in cloud solutions. In addition, using a seemingly straightforward cloud service as an example, they explain the application of the identified patterns.",
"title": ""
},
{
"docid": "neg:1840594_5",
"text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: [email protected] (Oke). © 2006 AAEP.",
"title": ""
},
{
"docid": "neg:1840594_6",
"text": "Human variation in content selection in summarization has given rise to some fundamental research questions: How can one incorporate the observed variation in suitable evaluation measures? How can such measures reflect the fact that summaries conveying different content can be equally good and informative? In this article, we address these very questions by proposing a method for analysis of multiple human abstracts into semantic content units. Such analysis allows us not only to quantify human variation in content selection, but also to assign empirical importance weight to different content units. It serves as the basis for an evaluation method, the Pyramid Method, that incorporates the observed variation and is predictive of different equally informative summaries. We discuss the reliability of content unit annotation, the properties of Pyramid scores, and their correlation with other evaluation methods.",
"title": ""
},
{
"docid": "neg:1840594_7",
"text": "Question Answering (QA) is one of the most challenging and crucial tasks in Natural Language Processing (NLP) that has a wide range of applications in various domains, such as information retrieval and entity extraction. Traditional methods involve linguistically based NLP techniques, and recent researchers apply Deep Learning on this task and have achieved promising result. In this paper, we combined Dynamic Coattention Network (DCN) [1] and bilateral multiperspective matching (BiMPM) model [2], achieved an F1 score of 63.8% and exact match (EM) of 52.3% on test set.",
"title": ""
},
{
"docid": "neg:1840594_8",
"text": "This paper considers the scenario that multiple data owners wish to apply a machine learning method over the combined dataset of all owners to obtain the best possible learning output but do not want to share the local datasets owing to privacy concerns. We design systems for the scenario that the stochastic gradient descent (SGD) algorithm is used as the machine learning method because SGD (or its variants) is at the heart of recent deep learning techniques over neural networks. Our systems differ from existing systems in the following features: (1) any activation function can be used, meaning that no privacy-preserving-friendly approximation is required; (2) gradients computed by SGD are not shared but the weight parameters are shared instead; and (3) robustness against colluding parties even in the extreme case that only one honest party exists. We prove that our systems, while privacy-preserving, achieve the same learning accuracy as SGD and hence retain the merit of deep learning with respect to accuracy. Finally, we conduct several experiments using benchmark datasets, and show that our systems outperform previous system in terms of learning accuracies. keywords: privacy preservation, stochastic gradient descent, distributed trainers, neural networks.",
"title": ""
},
{
"docid": "neg:1840594_9",
"text": "Lipid nanoparticles (LNPs) have attracted special interest during last few decades. Solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs) are two major types of Lipid-based nanoparticles. SLNs were developed to overcome the limitations of other colloidal carriers, such as emulsions, liposomes and polymeric nanoparticles because they have advantages like good release profile and targeted drug delivery with excellent physical stability. In the next generation of the lipid nanoparticle, NLCs are modified SLNs which improve the stability and capacity loading. Three structural models of NLCs have been proposed. These LNPs have potential applications in drug delivery field, research, cosmetics, clinical medicine, etc. This article focuses on features, structure and innovation of LNPs and presents a wide discussion about preparation methods, advantages, disadvantages and applications of LNPs by focusing on SLNs and NLCs.",
"title": ""
},
{
"docid": "neg:1840594_10",
"text": "Ontology mapping is seen as a solution provider in today’s landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in semantically sound manners. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.",
"title": ""
},
{
"docid": "neg:1840594_11",
"text": "Detection of skin cancer in the earlier stage is very Important and critical. In recent days, skin cancer is seen as one of the most Hazardous form of the Cancers found in Humans. Skin cancer is found in various types such as Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most unpredictable. The detection of Melanoma cancer in early stage can be helpful to cure it. Computer vision can play important role in Medical Image Diagnosis and it has been proved by many existing systems. In this paper, we present a computer aided method for the detection of Melanoma Skin Cancer using Image processing tools. The input to the system is the skin lesion image and then by applying novel image processing techniques, it analyses it to conclude about the presence of skin cancer. The Lesion Image analysis tools checks for the various Melanoma parameters Like Asymmetry, Border, Colour, Diameter, (ABCD) etc. by texture, size and shape analysis for image segmentation and feature stages. The extracted feature parameters are used to classify the image as Normal skin and Melanoma cancer lesion.",
"title": ""
},
{
"docid": "neg:1840594_12",
"text": "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.",
"title": ""
},
{
"docid": "neg:1840594_13",
"text": "The R-tree, one of the most popular access methods for rectangles, is based on the heuristic optimization of the area of the enclosing rectangle in each inner node. By running numerous experiments in a standardized testbed under highly varying data, queries and operations, we were able to design the R*-tree which incorporates a combined optimization of area, margin and overlap of each enclosing rectangle in the directory. Using our standardized testbed in an exhaustive performance comparison, it turned out that the R*-tree clearly outperforms the existing R-tree variants. Guttman's linear and quadratic R-tree and Greene's variant of the R-tree. This superiority of the R*-tree holds for different types of queries and operations, such as map overlay, for both rectangles and multidimensional points in all experiments. From a practical point of view the R*-tree is very attractive because of the following two reasons 1 it efficiently supports point and spatial data at the same time and 2 its implementation cost is only slightly higher than that of other R-trees.",
"title": ""
},
{
"docid": "neg:1840594_14",
"text": "The concept of “task” is at the core of artificial intelligence (AI): Tasks are used for training and evaluating AI systems, which are built in order to perform and automatize tasks we deem useful. In other fields of engineering theoretical foundations allow thorough evaluation of designs by methodical manipulation of well understood parameters with a known role and importance; this allows an aeronautics engineer, for instance, to systematically assess the effects of wind speed on an airplane’s performance and stability. No framework exists in AI that allows this kind of methodical manipulation: Performance results on the few tasks in current use (cf. board games, question-answering) cannot be easily compared, however similar or different. The issue is even more acute with respect to artificial general intelligence systems, which must handle unanticipated tasks whose specifics cannot be known beforehand. A task theory would enable addressing tasks at the class level, bypassing their specifics, providing the appropriate formalization and classification of tasks, environments, and their parameters, resulting in more rigorous ways of measuring, comparing, and evaluating intelligent behavior. Even modest improvements in this direction would surpass the current ad-hoc nature of machine learning and AI evaluation. Here we discuss the main elements of the argument for a task theory and present an outline of what it might look like for physical tasks.",
"title": ""
},
{
"docid": "neg:1840594_15",
"text": "We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works.",
"title": ""
},
{
"docid": "neg:1840594_16",
"text": "The power MOSFET on 4H-SiC is an attractive high-speed and low-dissipation power switching device. The problem to be solved before realizing the 4H-SiC power MOSFET with low on-resistance is low channel mobility at the SiO2/SiC interface. This work has succeeded in increasing the channel mobility in the buried channel IEMOSFET on carbon-face substrate, and has achieved an extremely low on-resistance of 1.8 mΩcm2 with a blocking voltage of 660 V",
"title": ""
},
{
"docid": "neg:1840594_17",
"text": "BACKGROUND\nImmunohistochemical markers are often used to classify breast cancer into subtypes that are biologically distinct and behave differently. The aim of this study was to estimate mortality for patients with the major subtypes of breast cancer as classified using five immunohistochemical markers, to investigate patterns of mortality over time, and to test for heterogeneity by subtype.\n\n\nMETHODS AND FINDINGS\nWe pooled data from more than 10,000 cases of invasive breast cancer from 12 studies that had collected information on hormone receptor status, human epidermal growth factor receptor-2 (HER2) status, and at least one basal marker (cytokeratin [CK]5/6 or epidermal growth factor receptor [EGFR]) together with survival time data. Tumours were classified as luminal and nonluminal tumours according to hormone receptor expression. These two groups were further subdivided according to expression of HER2, and finally, the luminal and nonluminal HER2-negative tumours were categorised according to expression of basal markers. Changes in mortality rates over time differed by subtype. In women with luminal HER2-negative subtypes, mortality rates were constant over time, whereas mortality rates associated with the luminal HER2-positive and nonluminal subtypes tended to peak within 5 y of diagnosis and then decline over time. In the first 5 y after diagnosis the nonluminal tumours were associated with a poorer prognosis, but over longer follow-up times the prognosis was poorer in the luminal subtypes, with the worst prognosis at 15 y being in the luminal HER2-positive tumours. Basal marker expression distinguished the HER2-negative luminal and nonluminal tumours into different subtypes. These patterns were independent of any systemic adjuvant therapy.\n\n\nCONCLUSIONS\nThe six subtypes of breast cancer defined by expression of five markers show distinct behaviours with important differences in short term and long term prognosis. Application of these markers in the clinical setting could have the potential to improve the targeting of adjuvant chemotherapy to those most likely to benefit. The different patterns of mortality over time also suggest important biological differences between the subtypes that may result in differences in response to specific therapies, and that stratification of breast cancers by clinically relevant subtypes in clinical trials is urgently required.",
"title": ""
},
{
"docid": "neg:1840594_18",
"text": "The paper presents a complete solution for recognition of textual and graphic structures in various types of documents acquired from the Internet. In the proposed approach, the document structure recognition problem is divided into sub-problems. The first one is localizing logical structure elements within the document. The second one is recognizing segmented logical structure elements. The input to the method is an image of document page, the output is the XML file containing all graphic and textual elements included in the document, preserving the reading order of document blocks. This file contains information about the identity and position of all logical elements in the document image. The paper describes all details of the proposed method and shows the results of the experiments validating its effectiveness. The results of the proposed method for paragraph structure recognition are comparable to the referenced methods which offer segmentation only.",
"title": ""
},
{
"docid": "neg:1840594_19",
"text": "BACKGROUND\nCleft-lip nasal deformity (CLND) affects the overall facial appearance and attractiveness. The CLND nose shares some features in part with the aging nose.\n\n\nOBJECTIVES\nThis questionnaire survey examined: 1) the panel perceptions of the role of secondary cleft rhinoplasty in nasal rejuvenation; and 2) the influence of a medical background in cleft care, age and gender of the panel members on the estimated age of the CLND nose.\n\n\nSTUDY DESIGN\nUsing a cross-sectional study design, we enrolled a random sample of adult laypersons and health care providers. The predictor variables were secondary cleft rhinoplasty (before/after) and a medical background in cleft care (yes/no). The outcome variable was the estimated age of nose in photographs derived from 8 German nonsyndromic CLND patients. Other study variables included age, gender, and career of the assessors. Appropriate descriptive and univariate statistics were computed, and a P value of <.05 was considered to be statistically significant.\n\n\nRESULTS\nThe sample consisted of 507 lay volunteers and 51 medical experts (407 [72.9%] were female; mean age ± SD = 24.9 ± 8.2 y). The estimated age of the CLND noses was higher than their real age. The rhinoplasty decreased the estimated age to a statistically significant degree (P < .0001). A medical background, age, and gender of the participants were not individually associated with their votes (P > .05).\n\n\nCONCLUSIONS\nThe results of this study suggest that CLND noses lack youthful appearance. Secondary cleft rhinoplasty rejuvenates the nose and makes it come close to the actual age of the patients.",
"title": ""
}
] |
1840595 | Track k: medical information systems. | [
{
"docid": "pos:1840595_0",
"text": "OBJECTIVE\nThe aim of the study was to present a systematic review of studies that investigate the effects of robot-assisted therapy on motor and functional recovery in patients with stroke.\n\n\nMETHODS\nA database of articles published up to October 2006 was compiled using the following Medline key words: cerebral vascular accident, cerebral vascular disorders, stroke, paresis, hemiplegia, upper extremity, arm, and robot. References listed in relevant publications were also screened. Studies that satisfied the following selection criteria were included: (1) patients were diagnosed with cerebral vascular accident; (2) effects of robot-assisted therapy for the upper limb were investigated; (3) the outcome was measured in terms of motor and/or functional recovery of the upper paretic limb; and (4) the study was a randomized clinical trial (RCT). For each outcome measure, the estimated effect size (ES) and the summary effect size (SES) expressed in standard deviation units (SDU) were calculated for motor recovery and functional ability (activities of daily living [ADLs]) using fixed and random effect models. Ten studies, involving 218 patients, were included in the synthesis. Their methodological quality ranged from 4 to 8 on a (maximum) 10-point scale.\n\n\nRESULTS\nMeta-analysis showed a nonsignificant heterogeneous SES in terms of upper limb motor recovery. Sensitivity analysis of studies involving only shoulder-elbow robotics subsequently demonstrated a significant homogeneous SES for motor recovery of the upper paretic limb. No significant SES was observed for functional ability (ADL).\n\n\nCONCLUSION\nAs a result of marked heterogeneity in studies between distal and proximal arm robotics, no overall significant effect in favor of robot-assisted therapy was found in the present meta-analysis. However, subsequent sensitivity analysis showed a significant improvement in upper limb motor function after stroke for upper arm robotics. No significant improvement was found in ADL function. However, the administered ADL scales in the reviewed studies fail to adequately reflect recovery of the paretic upper limb, whereas valid instruments that measure outcome of dexterity of the paretic arm and hand are mostly absent in selected studies. Future research into the effects of robot-assisted therapy should therefore distinguish between upper and lower robotics arm training and concentrate on kinematical analysis to differentiate between genuine upper limb motor recovery and functional recovery due to compensation strategies by proximal control of the trunk and upper limb.",
"title": ""
}
] | [
{
"docid": "neg:1840595_0",
"text": "Recent studies have led to the recognition of the epidermal growth factor receptor HER3 as a key player in cancer, and consequently this receptor has gained increased interest as a target for cancer therapy. We have previously generated several Affibody molecules with subnanomolar affinity for the HER3 receptor. Here, we investigate the effects of two of these HER3-specific Affibody molecules, Z05416 and Z05417, on different HER3-overexpressing cancer cell lines. Using flow cytometry and confocal microscopy, the Affibody molecules were shown to bind to HER3 on three different cell lines. Furthermore, the receptor binding of the natural ligand heregulin (HRG) was blocked by addition of Affibody molecules. In addition, both molecules suppressed HRG-induced HER3 and HER2 phosphorylation in MCF-7 cells, as well as HER3 phosphorylation in constantly HER2-activated SKBR-3 cells. Importantly, Western blot analysis also revealed that HRG-induced downstream signalling through the Ras-MAPK pathway as well as the PI3K-Akt pathway was blocked by the Affibody molecules. Finally, in an in vitro proliferation assay, the two Affibody molecules demonstrated complete inhibition of HRG-induced cancer cell growth. Taken together, our findings demonstrate that Z05416 and Z05417 exert an anti-proliferative effect on two breast cancer cell lines by inhibiting HRG-induced phosphorylation of HER3, suggesting that the Affibody molecules are promising candidates for future HER3-targeted cancer therapy.",
"title": ""
},
{
"docid": "neg:1840595_1",
"text": "Drug-induced cardiotoxicity is emerging as an important issue among cancer survivors. For several decades, this topic was almost exclusively associated with anthracyclines, for which cumulative dose-related cardiac damage was the limiting step in their use. Although a number of efforts have been directed towards prediction of risk, so far no consensus exists on the strategies to prevent and monitor chemotherapy-related cardiotoxicity. Recently, a new dimension of the problem has emerged when drugs targeting the activity of certain tyrosine kinases or tumor receptors were recognized to carry an unwanted effect on the cardiovascular system. Moreover, the higher than expected incidence of cardiac dysfunction occurring in patients treated with a combination of old and new chemotherapeutics (e.g. anthracyclines and trastuzumab) prompted clinicians and researchers to find an effective approach to the problem. From the pharmacological standpoint, putative molecular mechanisms involved in chemotherapy-induced cardiotoxicity will be reviewed. From the clinical standpoint, current strategies to reduce cardiotoxicity will be critically addressed. In this perspective, the precise identification of the antitarget (i.e. the unwanted target causing heart damage) and the development of guidelines to monitor patients undergoing treatment with cardiotoxic agents appear to constitute the basis for the management of drug-induced cardiotoxicity.",
"title": ""
},
{
"docid": "neg:1840595_2",
"text": "A 3 dB power divider/combiner in substrate integrated waveguide (SIW) technology is presented. The divider consists of an E-plane SIW bifurcation with an embedded thick film resistor. The transition divides a full-height SIW into two SIWs of half the height. The resistor provides isolation between these two. The divider is fabricated in a multilayer process using high frequency substrates. For the resistor carbon paste is printed on the middle layer of the stack-up. Simulation and measurement results are presented. The measured divider exhibits an isolation of better than 22 dB within a bandwidth of more than 3GHz at 20 GHz.",
"title": ""
},
{
"docid": "neg:1840595_3",
"text": "Very High Spatial Resolution (VHSR) large-scale SAR image databases are still an unresolved issue in the Remote Sensing field. In this work, we propose such a dataset and use it to explore patch-based classification in urban and periurban areas, considering 7 distinct semantic classes. In this context, we investigate the accuracy of large CNN classification models and pre-trained networks for SAR imaging systems. Furthermore, we propose a Generative Adversarial Network (GAN) for SAR image generation and test, whether the synthetic data can actually improve classification accuracy.",
"title": ""
},
{
"docid": "neg:1840595_4",
"text": "Rapidly progressive renal failure (RPRF) is an initial clinical diagnosis in patients who present with progressive renal impairment of short duration. The underlying etiology may be a primary renal disease or a systemic disorder. Important differential diagnoses include vasculitis (systemic or renal-limited), systemic lupus erythematosus, multiple myeloma, thrombotic microangiopathy and acute interstitial nephritis. Good history taking, clinical examination and relevant investigations including serology and ultimately kidney biopsy are helpful in clinching the diagnosis. Early definitive diagnosis of RPRF is essential to reverse the otherwise relentless progression to end-stage kidney disease.",
"title": ""
},
{
"docid": "neg:1840595_5",
"text": "Vegetable quality is frequently referred to size, shape, mass, firmness, color and bruises from which fruits can be classified and sorted. However, technological by small and middle producers implementation to assess this quality is unfeasible, due to high costs of software, equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.",
"title": ""
},
{
"docid": "neg:1840595_6",
"text": "We compare the robustness of humans and current convolutional deep neural networks (DNNs) on object recognition under twelve different types of image degradations. First, using three well known DNNs (ResNet-152, VGG-19, GoogLeNet) we find the human visual system to be more robust to nearly all of the tested image manipulations, and we observe progressively diverging classification error-patterns between humans and DNNs when the signal gets weaker. Secondly, we show that DNNs trained directly on distorted images consistently surpass human performance on the exact distortion types they were trained on, yet they display extremely poor generalisation abilities when tested on other distortion types. For example, training on salt-and-pepper noise does not imply robustness on uniform white noise and vice versa. Thus, changes in the noise distribution between training and testing constitutes a crucial challenge to deep learning vision systems that can be systematically addressed in a lifelong machine learning approach. Our new dataset consisting of 83K carefully measured human psychophysical trials provide a useful reference for lifelong robustness against image degradations set by the human visual system.",
"title": ""
},
{
"docid": "neg:1840595_7",
"text": "1. SLICED PROGRAMMABLE NETWORKS OpenFlow [4] has been demonstrated as a way for researchers to run networking experiments in their production network. Last year, we demonstrated how an OpenFlow controller running on NOX [3] could move VMs seamlessly around an OpenFlow network [1]. While OpenFlow has potential [2] to open control of the network, only one researcher can innovate on the network at a time. What is required is a way to divide, or slice, network resources so that researchers and network administrators can use them in parallel. Network slicing implies that actions in one slice do not negatively affect other slices, even if they share the same underlying physical hardware. A common network slicing technique is VLANs. With VLANs, the administrator partitions the network by switch port and all traffic is mapped to a VLAN by input port or explicit tag. This coarse-grained type of network slicing complicates more interesting experiments such as IP mobility or wireless handover. Here, we demonstrate FlowVisor, a special purpose OpenFlow controller that allows multiple researchers to run experiments safely and independently on the same production OpenFlow network. To motivate FlowVisor’s flexibility, we demonstrate four network slices running in parallel: one slice for the production network and three slices running experimental code (Figure 1). Our demonstration runs on real network hardware deployed on our production network at Stanford and a wide-area test-bed with a mix of wired and wireless technologies.",
"title": ""
},
{
"docid": "neg:1840595_8",
"text": "uted through the environments’ material and cultural artifacts and through other people in collaborative efforts to complete complex tasks (Latour, 1987; Pea, 1993). For example, Hutchins (1995a) documents how the task of landing a plane can be best understood through investigating a unit of analysis that includes the pilot, the manufactured tools, and the social context. In this case, the tools and social context are not merely “aides” to the pilot’s cognition but rather essential features of a composite. Similarly, tools such as calculators enable students to complete computational tasks in ways that would be distinctly different if the calculators were absent (Pea, 1993). In these cases, cognitive activity is “stretched over” actors and artifacts. Hence, human activity is best understood by considering both artifacts and actors together through cycles of task completion because the artifacts and actors are essentially intertwined in action contexts (Lave, 1988). In addition to material tools, action is distributed across language, theories of action, and interpretive schema, providing the “mediational means” that enable and transform intelligent social activity (Brown & Duguid, 1991; Leont’ev, 1975, 1981; Vygotsky, 1978; Wertsch, 1991). These material and cultural artifacts form identifiable aspects of the “sociocultural” context as products of particular social and cultural situations (Vygotsky, 1978; Wertsch, 1991). Actors develop common understandings and draw on cultural, social, and historical norms in order to think and act. Thus, even when a particular cognitive task is undertaken by an individual apparently in solo, the individual relies on a variety of sociocultural artifacts such as computational methods and language that are social in origin (Wertsch, 1991). HowWhile there is an expansive literature about what school structures, programs, and processes are necessary for instructional change, we know less about how these changes are undertaken or enacted by school leaders in their daily work. To study school leadership we must attend to leadership practice rather than chiefly or exclusively to school structures, programs, and designs. An in-depth analysis of the practice of school leaders is necessary to render an account of how school leadership works. Knowing what leaders do is one thing, but without a rich understanding of how and why they do it, our understanding of leadership is incomplete. To do that, it is insufficient to simply observe school leadership in action and generate thick descriptions of the observed practice. We need to observe from within a conceptual framework. In our opinion, the prevailing framework of individual agency, focused on positional leaders such as principals, is inadequate because leadership is not just a function of what these leaders know and do. Hence, our intent in this paper is to frame an exploration of how leaders think and act by developing a distributed perspective on leadership practice. The Distributed Leadership Study, a study we are currently conducting in Chicago, uses the distributed framework outlined in this paper to frame a program of research that examines the practice of leadership in urban elementary schools working to change mathematics, science, and literacy instruction (see http://www.letus.org/ dls/index.htm). 
This 4-year longitudinal study, funded by the National Science Foundation and the Spencer Foundation, is designed to make the “black box” of leadership practice more transparent through an in-depth analysis of leadership practice. This research identifies the tasks, actors, actions, and interactions of school leadership as they unfold together in the daily life of schools. The research program involves in-depth observations and interviews with formal and informal leaders and classroom teachers as well as a social network analysis in schools in the Chicago metropolitan area. We outline the distributed framework below, beginning with a brief review of the theoretical underpinnings for this work—distributed cognition and activity theory—which we then use to re-approach the subject of leadership practice. Next we develop our distributed theory of leadership around four ideas: leadership tasks and functions, task enactment, social distribution of task enactment, and situational distribution of task enactment. Our central argument is that school leadership is best understood as a distributed practice, stretched over the school’s social and situational contexts.",
"title": ""
},
{
"docid": "neg:1840595_9",
"text": "A series of GeoVoCamps, run at least twice a year in locations in the U.S., have focused on ontology design patterns as an approach to inform metadata and data models, and on applications in the GeoSciences. In this note, we will redraw the brief history of the series as well as rationales for the particular approach which was chosen, and report on the ongoing uptake of the approach.",
"title": ""
},
{
"docid": "neg:1840595_10",
"text": "A paradigm shift is taking place in medicine from using synthetic implants and tissue grafts to a tissue engineering approach that uses degradable porous material scaffolds integrated with biological cells or molecules to regenerate tissues. This new paradigm requires scaffolds that balance temporary mechanical function with mass transport to aid biological delivery and tissue regeneration. Little is known quantitatively about this balance as early scaffolds were not fabricated with precise porous architecture. Recent advances in both computational topology design (CTD) and solid free-form fabrication (SFF) have made it possible to create scaffolds with controlled architecture. This paper reviews the integration of CTD with SFF to build designer tissue-engineering scaffolds. It also details the mechanical properties and tissue regeneration achieved using designer scaffolds. Finally, future directions are suggested for using designer scaffolds with in vivo experimentation to optimize tissue-engineering treatments, and coupling designer scaffolds with cell printing to create designer material/biofactor hybrids.",
"title": ""
},
{
"docid": "neg:1840595_11",
"text": "Bacterial samples had been isolated from clinically detected diseased juvenile Pangasius, collected from Mymensingh, Bangladesh. Primarily, the isolates were found as Gram-negative, motile, oxidase-positive, fermentative, and O/129 resistant Aeromonas bacteria. The species was exposed as Aeromonas hydrophila from esculin hydrolysis test. Ten isolates of A. hydrophila were identified from eye lesions, kidney, and liver of the infected fishes. Further characterization of A. hydrophila was accomplished using API-20E and antibiotic sensitivity test. Isolates were highly resistant to amoxyclav among ten different antibiotics. All isolates were found as immensely pathogenic to healthy fishes while intraperitoneal injection. Histopathologically, necrotic hematopoietic tissues with pyknotic nuclei, mild hemorrhage, and wide vacuolation in kidney, liver, and muscle were principally noticed due to Aeromonad infection. So far, this is the first full note on characterizing A. hydrophila from diseased farmed Pangasius in Bangladesh. The present findings will provide further direction to develop theranostic strategies of A. hydrophila infection.",
"title": ""
},
{
"docid": "neg:1840595_12",
"text": "Considering the variations of inertia in real applications, an adaptive control scheme for the permanent-magnet synchronous motor speed-regulation system is proposed in this paper. First, a composite control method, i.e., the extended-state-observer (ESO)-based control method, is employed to ensure the performance of the closed-loop system. The ESO can estimate both the states and the disturbances simultaneously so that the composite speed controller can have a corresponding part to compensate for the disturbances. Then, considering the case of variations of load inertia, an adaptive control scheme is developed by analyzing the control performance relationship between the feedforward compensation gain and the system inertia. By using inertia identification techniques, a fuzzy-inferencer-based supervisor is designed to automatically tune the feedforward compensation gain according to the identified inertia. Simulation and experimental results both show that the proposed method achieves a better speed response in the presence of inertia variations.",
"title": ""
},
{
"docid": "neg:1840595_13",
"text": "OBJECTIVE\nDigital mental wellbeing interventions are increasingly being used by the general public as well as within clinical treatment. Among these, mindfulness and meditation programs delivered through mobile device applications are gaining popularity. However, little is known about how people use and experience such applications and what are the enabling factors and barriers to effective use. To address this gap, the study reported here sought to understand how users adopt and experience a popular mobile-based mindfulness intervention.\n\n\nMETHODS\nA qualitative semi-structured interview study was carried out with 16 participants aged 25-38 (M=32.5) using the commercially popular mindfulness application Headspace for 30-40days. All participants were employed and living in a large UK city. The study design and interview schedule were informed by an autoethnography carried out by the first author for thirty days before the main study began. Results were interpreted in terms of the Reasoned Action Approach to understand behaviour change.\n\n\nRESULTS\nThe core concern of users was fitting the application into their busy lives. Use was also influenced by patterns in daily routines, on-going reflections about the consequences of using the app, perceived self-efficacy, emotion and mood states, personal relationships and social norms. Enabling factors for use included positive attitudes towards mindfulness and use of the app, realistic expectations and positive social influences. Barriers to use were found to be busy lifestyles, lack of routine, strong negative emotions and negative perceptions of mindfulness.\n\n\nCONCLUSIONS\nMobile wellbeing interventions should be designed with consideration of people's beliefs, affective states and lifestyles, and should be flexible to meet the needs of different users. Designers should incorporate features in the design of applications that manage expectations about use and that support users to fit app use into a busy lifestyle. The Reasoned Action Approach was found to be a useful theory to inform future research and design of persuasive mental wellbeing technologies.",
"title": ""
},
{
"docid": "neg:1840595_14",
"text": "The work presented here involves the design of a Multi Layer Perceptron (MLP) based pattern classifier for recognition of handwritten Bangla digits using a 76 element feature vector. Bangla is the second most popular script and language in the Indian subcontinent and the fifth most popular language in the world. The feature set developed for representing handwritten Bangla numerals here includes 24 shadow features, 16 centroid features and 36 longest-run features. On experimentation with a database of 6000 samples, the technique yields an average recognition rate of 96.67% evaluated after three-fold cross validation of results. It is useful for applications related to OCR of handwritten Bangla Digit and can also be extended to include OCR of handwritten characters of Bangla alphabet.",
"title": ""
},
{
"docid": "neg:1840595_15",
"text": "A high-sensitivity fully passive 868-MHz wake-up radio (WUR) front-end for wireless sensor network nodes is presented. The front-end does not have an external power source and extracts the entire energy from the radio-frequency (RF) signal received at the antenna. A high-efficiency differential RF-to-DC converter rectifies the incident RF signal and drives the circuit blocks including a low-power comparator and reference generators; and at the same time detects the envelope of the on-off keying (OOK) wake-up signal. The front-end is designed and simulated 0.13μm CMOS and achieves a sensitivity of -33 dBm for a 100 kbps wake-up signal.",
"title": ""
},
{
"docid": "neg:1840595_16",
"text": "One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design philosophy, which allows us to design the ARMA coefficients independently from the underlying graph, renders the ARMA graph filters suitable in static and, particularly, time-varying settings. The latter occur when the graph signal and/or graph are changing over time. We show that in case of a time-varying graph signal our approach extends naturally to a two-dimensional filter, operating concurrently in the graph and regular time domains. We also derive sufficient conditions for filter stability when the graph and signal are time-varying. The analytical and numerical results presented in this paper illustrate that ARMA graph filters are practically appealing for static and time-varying settings, accompanied by strong theoretical guarantees. Keywords— distributed graph filtering, signal processing on graphs, time-varying graph signals, time-varying graphs",
"title": ""
},
{
"docid": "neg:1840595_17",
"text": "The capability of avoid obstacles is the one of the key issues in autonomous search-and-rescue robots research area. In this study, the avoiding obstacles capability has been provided to the virtula robots in USARSim environment. The aim is finding the minimum movement when robot faces an obstacle in path. For obstacle avoidance we used an real time path planning method which is called Vector Field Histogram (VFH). After experiments we observed that VFH method is successful method for obstacle avoidance. Moreover, the usage of VFH method is highly incresing the amount of the visited places per unit time.",
"title": ""
},
{
"docid": "neg:1840595_18",
"text": "This paper proposes a high efficiency LLC resonant inverter for induction heating applications by using asymmetrical voltage cancellation control. The proposed control method is implemented in a full-bridge topology for induction heating application. The operating frequency is automatically adjusted to maintain a small constant lagging phase angle under load parameter variation. The output power is controlled using the asymmetrical voltage cancellation technique. The LLC resonant tank is designed without the use of output transformer. This results in an increase of the net efficiency of the induction heating system. The validity of the proposed method is verified through computer simulation and hardware experiment at the operating frequency of 93 to 96 kHz.",
"title": ""
},
{
"docid": "neg:1840595_19",
"text": "Obesity and hypertension, major risk factors for the metabolic syndrome, render individuals susceptible to an increased risk of cardiovascular complications, such as adverse cardiac remodeling and heart failure. There has been much investigation into the role that an increase in the renin-angiotensin-aldosterone system (RAAS) plays in the pathogenesis of metabolic syndrome and in particular, how aldosterone mediates left ventricular hypertrophy and increased cardiac fibrosis via its interaction with the mineralocorticoid receptor (MR). Here, we review the pertinent findings that link obesity with elevated aldosterone and the development of cardiac hypertrophy and fibrosis associated with the metabolic syndrome. These studies illustrate a complex cross-talk between adipose tissue, the heart, and the adrenal cortex. Furthermore, we discuss findings from our laboratory that suggest that cardiac hypertrophy and fibrosis in the metabolic syndrome may involve cross-talk between aldosterone and adipokines (such as adiponectin).",
"title": ""
}
] |
1840596 | Parallelizing MCMC via Weierstrass Sampler | [
{
"docid": "pos:1840596_0",
"text": "Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious “momentum” variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor — a property that can be exactly maintained even when the dynamics is approximated by discretizing time. In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.",
"title": ""
}
] | [
{
"docid": "neg:1840596_0",
"text": "Cloud computing is emerging as a major trend in the ICT industry. While most of the attention of the research community is focused on considering the perspective of the Cloud providers, offering mechanisms to support scaling of resources and interoperability and federation between Clouds, the perspective of developers and operators willing to choose the Cloud without being strictly bound to a specific solution is mostly neglected.\n We argue that Model-Driven Development can be helpful in this context as it would allow developers to design software systems in a cloud-agnostic way and to be supported by model transformation techniques into the process of instantiating the system into specific, possibly, multiple Clouds. The MODAClouds (MOdel-Driven Approach for the design and execution of applications on multiple Clouds) approach we present here is based on these principles and aims at supporting system developers and operators in exploiting multiple Clouds for the same system and in migrating (part of) their systems from Cloud to Cloud as needed. MODAClouds offers a quality-driven design, development and operation method and features a Decision Support System to enable risk analysis for the selection of Cloud providers and for the evaluation of the Cloud adoption impact on internal business processes. Furthermore, MODAClouds offers a run-time environment for observing the system under execution and for enabling a feedback loop with the design environment. This allows system developers to react to performance fluctuations and to re-deploy applications on different Clouds on the long term.",
"title": ""
},
{
"docid": "neg:1840596_1",
"text": "Safety stories specify safety requirements, using the EARS (Easy Requirements Specification) format. Software practitioners can use them in agile projects at lower levels of safety criticality to deal effectively with safety concerns.",
"title": ""
},
{
"docid": "neg:1840596_2",
"text": "Purpose – The objectives of this article are to develop a multiple-item scale for measuring e-service quality and to study the influence of perceived quality on consumer satisfaction levels and the level of web site loyalty. Design/methodology/approach – First, there is an explanation of the main attributes of the concepts examined, with special attention being paid to the multi-dimensional nature of the variables and the relationships between them. This is followed by an examination of the validation processes of the measuring instruments. Findings – The validation process of scales suggested that perceived quality is a multidimensional construct: web design, customer service, assurance and order management; that perceived quality influences on satisfaction; and that satisfaction influences on consumer loyalty. Moreover, no differences in these conclusions were observed if the total sample is divided between buyers and information searchers. Practical implications – First, the need to develop user-friendly web sites which ease consumer purchasing and searching, thus creating a suitable framework for the generation of higher satisfaction and loyalty levels. Second, the web site manager should enhance service loyalty, customer sensitivity, personalised service and a quick response to complaints. Third, the web site should uphold sufficient security levels in communications and meet data protection requirements regarding the privacy. Lastly, the need for correct product delivery and product manipulation or service is recommended. Originality/value – Most relevant studies about perceived quality in the internet have focused on web design aspects. Moreover, the existing literature regarding internet consumer behaviour has not fully analysed profits generated by higher perceived quality in terms of user satisfaction and loyalty.",
"title": ""
},
{
"docid": "neg:1840596_3",
"text": "We examined allelic polymorphisms of the serotonin transporter (5-HTT) gene and antidepressant response to 6 weeks' treatment with the selective serotonin reuptake inhibitor (SSRI) drugs fluoxetine or paroxetine. We genotyped 120 patients and 252 normal controls, using polymerase chain reaction of genomic DNA with primers flanking the second intron and promoter regions of the 5-HTT gene. Diagnosis of depression was not associated with 5-HTT polymorphisms. Patients homozygous l/l in intron 2 or homozygous s/s in the promoter region showed better responses than all others (p < 0.0001, p = 0.0074, respectively). Lack of the l/l allele form in intron 2 most powerfully predicted non-response (83.3%). Response to SSRI drugs is related to allelic variation in the 5-HTT gene in depressed Korean patients.",
"title": ""
},
{
"docid": "neg:1840596_4",
"text": "Endocytic mechanisms control the lipid and protein composition of the plasma membrane, thereby regulating how cells interact with their environments. Here, we review what is known about mammalian endocytic mechanisms, with focus on the cellular proteins that control these events. We discuss the well-studied clathrin-mediated endocytic mechanisms and dissect endocytic pathways that proceed independently of clathrin. These clathrin-independent pathways include the CLIC/GEEC endocytic pathway, arf6-dependent endocytosis, flotillin-dependent endocytosis, macropinocytosis, circular doral ruffles, phagocytosis, and trans-endocytosis. We also critically review the role of caveolae and caveolin1 in endocytosis. We highlight the roles of lipids, membrane curvature-modulating proteins, small G proteins, actin, and dynamin in endocytic pathways. We discuss the functional relevance of distinct endocytic pathways and emphasize the importance of studying these pathways to understand human disease processes.",
"title": ""
},
{
"docid": "neg:1840596_5",
"text": "This study provided a comparative analysis of three social network sites, the open-to-all Facebook, the professionally oriented LinkedIn and the exclusive, members-only ASmallWorld.The analysis focused on the underlying structure or architecture of these sites, on the premise that it may set the tone for particular types of interaction.Through this comparative examination, four themes emerged, highlighting the private/public balance present in each social networking site, styles of self-presentation in spaces privately public and publicly private, cultivation of taste performances as a mode of sociocultural identification and organization and the formation of tight or loose social settings. Facebook emerged as the architectural equivalent of a glasshouse, with a publicly open structure, looser behavioral norms and an abundance of tools that members use to leave cues for each other. LinkedIn and ASmallWorld produced tighter spaces, which were consistent with the taste ethos of each network and offered less room for spontaneous interaction and network generation.",
"title": ""
},
{
"docid": "neg:1840596_6",
"text": "Traditional endpoint protection will not address the looming cybersecurity crisis because it ignores the source of the problem--the vast online black market buried deep within the Internet.",
"title": ""
},
{
"docid": "neg:1840596_7",
"text": "recommendation systems support users and developers of various computer and software systems to overcome information overload, perform information discovery tasks and approximate computation, among others. They have recently become popular and have attracted a wide variety of application scenarios from business process modelling to source code manipulation. Due to this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen different dimensions, e.g., correctness, novelty, coverage. We review these metrics according to the dimensions to which they correspond. A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key future research and practice directions. Iman Avazpour Faculty of ICT, Centre for Computing and Engineering Software and Systems (SUCCESS), Swinburne University of Technology, Hawthorn, Victoria 3122, Australia e-mail: iavazpour@swin.",
"title": ""
},
{
"docid": "neg:1840596_8",
"text": "The Internet of Things (IoT) has been growing in recent years with the improvements in several different applications in the military, marine, intelligent transportation, smart health, smart grid, smart home and smart city domains. Although IoT brings significant advantages over traditional information and communication (ICT) technologies for Intelligent Transportation Systems (ITS), these applications are still very rare. Although there is a continuous improvement in road and vehicle safety, as well as improvements in IoT, the road traffic accidents have been increasing over the last decades. Therefore, it is necessary to find an effective way to reduce the frequency and severity of traffic accidents. Hence, this paper presents an intelligent traffic accident detection system in which vehicles exchange their microscopic vehicle variables with each other. The proposed system uses simulated data collected from vehicular ad-hoc networks (VANETs) based on the speeds and coordinates of the vehicles and then, it sends traffic alerts to the drivers. Furthermore, it shows how machine learning methods can be exploited to detect accidents on freeways in ITS. It is shown that if position and velocity values of every vehicle are given, vehicles' behavior could be analyzed and accidents can be detected easily. Supervised machine learning algorithms such as Artificial Neural Networks (ANN), Support Vector Machine (SVM), and Random Forests (RF) are implemented on traffic data to develop a model to distinguish accident cases from normal cases. The performance of RF algorithm, in terms of its accuracy, was found superior to ANN and SVM algorithms. RF algorithm has showed better performance with 91.56% accuracy than SVM with 88.71% and ANN with 90.02% accuracy.",
"title": ""
},
{
"docid": "neg:1840596_9",
"text": "This paper describes a map-matching algorithm designed to support the navigational functions of a real-time vehicle performance and emissions monitoring system currently under development, and other transport telematics applications. The algorithm is used together with the outputs of an extended Kalman filter formulation for the integration of GPS and dead reckoning data, and a spatial digital database of the road network, to provide continuous, accurate and reliable vehicle location on a given road segment. This is irrespective of the constraints of the operational environment, thus alleviating outage and accuracy problems associated with the use of stand-alone location sensors. The map-matching algorithm has been tested using real field data and has been found to be superior to existing algorithms, particularly in how it performs at road intersections.",
"title": ""
},
{
"docid": "neg:1840596_10",
"text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.",
"title": ""
},
{
"docid": "neg:1840596_11",
"text": "The particle swarm optimization algorithm is analyzed using standard results from the dynamic system theory. Graphical parameter selection guidelines are derived. The exploration–exploitation tradeoff is discussed and illustrated. Examples of performance on benchmark functions superior to previously published results are given. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840596_12",
"text": "Wi-Fi Tracking: Fingerprinting Attacks and CounterMeasures The recent spread of everyday-carried Wi-Fi-enabled devices (smartphones, tablets and wearable devices) comes with a privacy threat to their owner, and to society as a whole. These devices continuously emit signals which can be captured by a passive attacker using cheap hardware and basic knowledge. These signals contain a unique identi er, called the MAC address. To mitigate the threat, device vendors are currently deploying a countermeasure on new devices: MAC address randomization. Unfortunately, we show that this mitigation, in its current state, is insu cient to prevent tracking. To do so, we introduce several attacks, based on the content and the timing of emitted signals. In complement, we study implementations of MAC address randomization in some recent devices, and nd a number of shortcomings limiting the e ciency of these implementations at preventing device tracking. At the same time, we perform two real-world studies. The rst one considers the development of actors exploiting this issue to install Wi-Fi tracking systems. We list some real-world installations and discuss their various aspects, including regulation, privacy implications, consent and public acceptance. The second one deals with the spread of MAC address randomization in the devices population. Finally, we present two tools: an experimental Wi-Fi tracking system for testing and public awareness raising purpose, and a tool estimating the uniqueness of a device based on the content of its emitted signals even if the identi er is randomized.",
"title": ""
},
{
"docid": "neg:1840596_13",
"text": "A compelling body of evidence indicates that observing a task-irrelevant action makes the execution of that action more likely. However, it remains unclear whether this 'automatic imitation' effect is indeed automatic or whether the imitative action is voluntary. The present study tested the automaticity of automatic imitation by asking whether it occurs in a strategic context where it reduces payoffs. Participants were required to play rock-paper-scissors, with the aim of achieving as many wins as possible, while either one or both players were blindfolded. While the frequency of draws in the blind-blind condition was precisely that expected at chance, the frequency of draws in the blind-sighted condition was significantly elevated. Specifically, the execution of either a rock or scissors gesture by the blind player was predictive of an imitative response by the sighted player. That automatic imitation emerges in a context where imitation reduces payoffs accords with its 'automatic' description, and implies that these effects are more akin to involuntary than to voluntary actions. These data represent the first evidence of automatic imitation in a strategic context, and challenge the abstraction from physical aspects of social interaction typical in economic and game theory.",
"title": ""
},
{
"docid": "neg:1840596_14",
"text": "Cell imbalance in large battery packs degrades their capacity delivery, especially for cells connected in series where the weakest cell dominates their overall capacity. In this article, we present a case study of exploiting system reconfigurations to mitigate the cell imbalance in battery packs. Specifically, instead of using all the cells in a battery pack to support the load, selectively skipping cells to be discharged may actually enhance the pack’s capacity delivery. Based on this observation, we propose CSR, a Cell Skipping-assisted Reconfiguration algorithm that identifies the system configuration with (near)-optimal capacity delivery. We evaluate CSR using large-scale emulation based on empirically collected discharge traces of 40 lithium-ion cells. CSR achieves close-to-optimal capacity delivery when the cell imbalance in the battery pack is low and improves the capacity delivery by about 20% and up to 1x in the case of a high imbalance.",
"title": ""
},
{
"docid": "neg:1840596_15",
"text": "There has been a great deal of recent interest in statistical models of 2D landmark data for generating compact deformable models of a given object. This paper extends this work to a class of parametrised shapes where there are no landmarks available. A rigorous statistical framework for the eigenshape model is introduced, which is an extension to the conventional Linear Point Distribution Model. One of the problems associated with landmark free methods is that a large degree of variability in any shape descriptor may be due to the choice of parametrisation. An automated training method is described which utilises an iterative feedback method to overcome this problem. The result is an automatically generated compact linear shape model. The model has been successfully applied to a problem of tracking the outline of a walking pedestrian in real time.",
"title": ""
},
{
"docid": "neg:1840596_16",
"text": "A gender gap in mathematics achievement persists in some nations but not in others. In light of the underrepresentation of women in careers in science, technology, mathematics, and engineering, increasing research attention is being devoted to understanding gender differences in mathematics achievement, attitudes, and affect. The gender stratification hypothesis maintains that such gender differences are closely related to cultural variations in opportunity structures for girls and women. We meta-analyzed 2 major international data sets, the 2003 Trends in International Mathematics and Science Study and the Programme for International Student Assessment, representing 493,495 students 14-16 years of age, to estimate the magnitude of gender differences in mathematics achievement, attitudes, and affect across 69 nations throughout the world. Consistent with the gender similarities hypothesis, all of the mean effect sizes in mathematics achievement were very small (d < 0.15); however, national effect sizes showed considerable variability (ds = -0.42 to 0.40). Despite gender similarities in achievement, boys reported more positive math attitudes and affect (ds = 0.10 to 0.33); national effect sizes ranged from d = -0.61 to 0.89. In contrast to those of previous tests of the gender stratification hypothesis, our results point to specific domains of gender equity responsible for gender gaps in math. Gender equity in school enrollment, women's share of research jobs, and women's parliamentary representation were the most powerful predictors of cross-national variability in gender gaps in math. Results are situated within the context of existing research demonstrating apparently paradoxical effects of societal gender equity and highlight the significance of increasing girls' and women's agency cross-nationally.",
"title": ""
},
{
"docid": "neg:1840596_17",
"text": "In this paper, two main contributions are presented to manage the power flow between a 11 wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an 12 objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed 13 atmospheric conditions. The second one is to response to real-time control system constraints and 14 to improve the generating system performance. For this, a hardware implementation of the 15 proposed algorithm is performed using the Xilinx system generator. The experimental results show 16 that the suggested system presents high accuracy and acceptable execution time performances. The 17 proposed model and its control strategy offer a proper tool for optimizing the hybrid power system 18 performance which we can use in smart house applications. 19",
"title": ""
},
{
"docid": "neg:1840596_18",
"text": "Herbal drug authentication is an important task in traditional medicine; however, it is challenged by the limitations of traditional authentication methods and the lack of trained experts. DNA barcoding is conspicuous in almost all areas of the biological sciences and has already been added to the British pharmacopeia and Chinese pharmacopeia for routine herbal drug authentication. However, DNA barcoding for the Korean pharmacopeia still requires significant improvements. Here, we present a DNA barcode reference library for herbal drugs in the Korean pharmacopeia and developed a species identification engine named KP-IDE to facilitate the adoption of this DNA reference library for the herbal drug authentication. Using taxonomy records, specimen records, sequence records, and reference records, KP-IDE can identify an unknown specimen. Currently, there are 6,777 taxonomy records, 1,054 specimen records, 30,744 sequence records (ITS2 and psbA-trnH) and 285 reference records. Moreover, 27 herbal drug materials were collected from the Seoul Yangnyeongsi herbal medicine market to give an example for real herbal drugs authentications. Our study demonstrates the prospects of the DNA barcode reference library for the Korean pharmacopeia and provides future directions for the use of DNA barcoding for authenticating herbal drugs listed in other modern pharmacopeias.",
"title": ""
},
{
"docid": "neg:1840596_19",
"text": "LETOR is a package of benchmark data sets for research on LEarning TO Rank, which contains standard features, relevance judgments, data partitioning, evaluation tools, and several baselines. Version 1.0 was released in April 2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec. 2008. This version, 4.0, was released in July 2009. Very different from previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page collection (~25M pages) and two query sets from Million Query track of TREC 2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short. There are about 1700 queries in MQ2007 with labeled documents and about 800 queries in MQ2008 with labeled documents. If you have any questions or suggestions about the datasets, please kindly email us ([email protected]). Our goal is to make the dataset reliable and useful for the community.",
"title": ""
}
] |
1840597 | Memory Engram Cells Have Come of Age | [
{
"docid": "pos:1840597_0",
"text": "Do learning and retrieval of a memory activate the same neurons? Does the number of reactivated neurons correlate with memory strength? We developed a transgenic mouse that enables the long-lasting genetic tagging of c-fos-active neurons. We found neurons in the basolateral amygdala that are activated during Pavlovian fear conditioning and are reactivated during memory retrieval. The number of reactivated neurons correlated positively with the behavioral expression of the fear memory, indicating a stable neural correlate of associative memory. The ability to manipulate these neurons genetically should allow a more precise dissection of the molecular mechanisms of memory encoding within a distributed neuronal network.",
"title": ""
},
{
"docid": "pos:1840597_1",
"text": "Two experiments (modeled after J. Deese's 1959 study) revealed remarkable levels of false recall and false recognition in a list learning paradigm. In Experiment 1, subjects studied lists of 12 words (e.g., bed, rest, awake); each list was composed of associates of 1 nonpresented word (e.g., sleep). On immediate free recall tests, the nonpresented associates were recalled 40% of the time and were later recognized with high confidence. In Experiment 2, a false recall rate of 55% was obtained with an expanded set of lists, and on a later recognition test, subjects produced false alarms to these items at a rate comparable to the hit rate. The act of recall enhanced later remembering of both studied and nonstudied material. The results reveal a powerful illusion of memory: People remember events that never happened.",
"title": ""
}
] | [
{
"docid": "neg:1840597_0",
"text": "In this study, the aim is to examine the most popular eSport applications at a global scale. In this context, the App Store and Google Play Store application platforms which have the highest number of users at a global scale were focused on. For this reason, the eSport applications included in these two platforms constituted the sampling of the present study. A data collection form was developed by the researcher of the study in order to collect the data in the study. This form included the number of the countries, the popularity ratings of the application, the name of the application, the type of it, the age limit, the rating of the likes, the company that developed it, the version and the first appearance date. The study was conducted with the Qualitative Research Method, and the Case Study design was made use of in this process; and the Descriptive Analysis Method was used to analyze the data. As a result of the study, it was determined that the most popular eSport applications at a global scale were football, which ranked the first, basketball, billiards, badminton, skateboarding, golf and dart. It was also determined that the popularity of the mobile eSport applications changed according to countries and according to being free or paid. It was determined that the popularity of these applications differed according to the individuals using the App Store and Google Play Store application markets. As a result, it is possible to claim that mobile eSport applications have a wide usage area at a global scale and are accepted widely. In addition, it was observed that the interest in eSport applications was similar to that in traditional sports. However, in the present study, a certain date was set, and the interest in mobile eSport applications was analyzed according to this specific date. In future studies, different dates and different fields like educational sciences may be set to analyze the interest in mobile eSport applications. In this way, findings may be obtained on the change of the interest in mobile eSport applications according to time. The findings of the present study and similar studies may have the quality of guiding researchers and system/software developers in terms of showing the present status of the topic and revealing the relevant needs.",
"title": ""
},
{
"docid": "neg:1840597_1",
"text": "Pyro is a probabilistic programming language built on Python as a platform for developing advanced probabilistic models in AI research. To scale to large datasets and high-dimensional models, Pyro uses stochastic variational inference algorithms and probability distributions built on top of PyTorch, a modern GPU-accelerated deep learning framework. To accommodate complex or model-specific algorithmic behavior, Pyro leverages Poutine, a library of composable building blocks for modifying the behavior of probabilistic programs.",
"title": ""
},
{
"docid": "neg:1840597_2",
"text": "Instead of a standard support vector machine (SVM) that classifies points by assigning them to one of two disjoint half-spaces, points are classified by assigning them to the closest of two parallel planes (in input or feature space) that are pushed apart as far as possible. This formulation, which can also be interpreted as regularized least squares and considered in the much more general context of regularized networks [8, 9], leads to an extremely fast and simple algorithm for generating a linear or nonlinear classifier that merely requires the solution of a single system of linear equations. In contrast, standard SVMs solve a quadratic or a linear program that require considerably longer computational time. Computational results on publicly available datasets indicate that the proposed proximal SVM classifier has comparable test set correctness to that of standard SVM classifiers, but with considerably faster computational time that can be an order of magnitude faster. The linear proximal SVM can easily handle large datasets as indicated by the classification of a 2 million point 10-attribute set in 20.8 seconds. All computational results are based on 6 lines of MATLAB code.",
"title": ""
},
{
"docid": "neg:1840597_3",
"text": "This paper summarizes the method of polyp detection in colonoscopy images and provides preliminary results to participate in ISBI 2015 Grand Challenge on Automatic Polyp Detection in Colonoscopy videos. The key aspect of the proposed method is to learn hierarchical features using convolutional neural network. The features are learned in different scales to provide scale-invariant features through the convolutional neural network, and then each pixel in the colonoscopy image is classified as polyp pixel or non-polyp pixel through fully connected network. The result is refined via smooth filtering and thresholding step. Experimental result shows that the proposed neural network can classify patches of polyp and non-polyp region with an accuracy of about 90%.",
"title": ""
},
{
"docid": "neg:1840597_4",
"text": "Statistical learning theory was introduced in the late 1960's. Until the 1990's it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the middle of the 1990's new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for the theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory including both theoretical and algorithmic aspects of the theory. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization which are more general than those discussed in classical statistical paradigms and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). In Vapnik (1998) one can find detailed description of the theory (including proofs).",
"title": ""
},
{
"docid": "neg:1840597_5",
"text": "Automatic citation recommendation can be very useful for authoring a paper and is an AI-complete problem due to the challenge of bridging the semantic gap between citation context and the cited paper. It is not always easy for knowledgeable researchers to give an accurate citation context for a cited paper or to find the right paper to cite given context. To help with this problem, we propose a novel neural probabilistic model that jointly learns the semantic representations of citation contexts and cited papers. The probability of citing a paper given a citation context is estimated by training a multi-layer neural network. We implement and evaluate our model on the entire CiteSeer dataset, which at the time of this work consists of 10,760,318 citation contexts from 1,017,457 papers. We show that the proposed model significantly outperforms other stateof-the-art models in recall, MAP, MRR, and nDCG.",
"title": ""
},
{
"docid": "neg:1840597_6",
"text": "The first workshop on Interactive Data Mining is held in Melbourne, Australia, on February 15, 2019 and is co-located with 12th ACM International Conference on Web Search and Data Mining (WSDM 2019). The goal of this workshop is to share and discuss research and projects that focus on interaction with and interactivity of data mining systems. The program includes invited speaker, presentation of research papers, and a discussion session.",
"title": ""
},
{
"docid": "neg:1840597_7",
"text": "The interpretation of the resource-conflict link that has become most publicized—the rebel greed hypothesis—depends on just one of many plausible mechanisms that could underlie a relationship between resource dependence and violence. The author catalogues a large range of rival possible mechanisms, highlights a set of techniques that may be used to identify these mechanisms, and begins to employ these techniques to distinguish between rival accounts of the resource-conflict linkages. The author uses finer natural resource data than has been used in the past, gathering and presenting new data on oil and diamonds production and on oil stocks. The author finds evidence that (1) conflict onset is more responsive to the impacts of past natural resource production than to the potential for future production, supporting a weak states mechanism rather than a rebel greed mechanism; (2) the impact of natural resources on conflict cannot easily be attributed entirely to the weak states mechanism, and in particular, the impact of natural resources is independent of state strength; (3) the link between primary commodities and conflict is driven in part by agricultural dependence rather than by natural resources more narrowly defined, a finding consistent with a “sparse networks” mechanism; (4) natural resources are associated with shorter wars, and natural resource wars are more likely to end with military victory for one side than other wars. This is consistent with evidence that external actors have incentives to work to bring wars to a close when natural resource supplies are threatened. The author finds no evidence that resources are associated with particular difficulties in negotiating ends to conflicts, contrary to arguments that loot-seeking rebels aim to prolong wars.",
"title": ""
},
{
"docid": "neg:1840597_8",
"text": "This paper investigates the application of linear learning techniques to the place recognition problem. We present two learning methods, a supervised change prediction technique based on linear regression and an unsupervised change removal technique based on principal component analysis, and investigate how the performance of each is affected by the choice of training data. We show that the change prediction technique presented here succeeds only if it is provided with appropriate and adequate training data, which can be challenging for a mobile robotic system operating in an uncontrolled environment. In contrast, change removal can improve place recognition performance even when trained with as few as 100 samples. This paper shows that change removal can be combined with a number of different image descriptors and can improve performance across a range of different appearance conditions.",
"title": ""
},
{
"docid": "neg:1840597_9",
"text": "We present in this paper a statistical model for languageindependent bi-directional conversion between spelling and pronunciation, based on joint grapheme/phoneme units extracted from automatically aligned data. The model is evaluated on spelling-to-pronunciation and pronunciation-tospelling conversion on the NetTalk database and the CMU dictionary. We also study the effect of including lexical stress in the pronunciation. Although a direct comparison is difficult to make, our model’s performance appears to be as good or better than that of other data-driven approaches that have been applied to the same tasks.",
"title": ""
},
{
"docid": "neg:1840597_10",
"text": "Mobile learning highly prioritizes the successful acquisition of context-aware contents from a learning server. A variant of 2D barcodes, the quick response (QR) code, which can be rapidly read using a PDA equipped with a camera and QR code reading software, is considered promising for context-aware applications. This work presents a novel QR code and handheld augmented reality (AR) supported mobile learning (m-learning) system: the handheld English language learning organization (HELLO). In the proposed English learning system, the linked information between context-aware materials and learning zones is defined in the QR codes. Each student follows the guide map displayed on the phone screen to visit learning zones and decrypt QR codes. The detected information is then sent to the learning server to request and receive context-aware learning material wirelessly. Additionally, a 3D animated virtual learning partner is embedded in the learning device based on AR technology, enabling students to complete their context-aware immersive learning. A case study and a survey conducted in a university demonstrate the effectiveness of the proposed m-learning system.",
"title": ""
},
{
"docid": "neg:1840597_11",
"text": "Populated IP addresses (PIP) -- IP addresses that are associated with a large number of user requests are important for online service providers to efficiently allocate resources and to detect attacks. While some PIPs serve legitimate users, many others are heavily abused by attackers to conduct malicious activities such as scams, phishing, and malware distribution. Unfortunately, commercial proxy lists like Quova have a low coverage of PIP addresses and offer little support for distinguishing good PIPs from abused ones. In this study, we propose PIPMiner, a fully automated method to extract and classify PIPs through analyzing service logs. Our methods combine machine learning and time series analysis to distinguish good PIPs from abused ones with over 99.6% accuracy. When applying the derived PIP list to several applications, we can identify millions of malicious Windows Live accounts right on the day of their sign-ups, and detect millions of malicious Hotmail accounts well before the current detection system captures them.",
"title": ""
},
{
"docid": "neg:1840597_12",
"text": "As financial time series are inherently noisy and non-stationary, it is regarded as one of the most challenging applications of time series forecasting. Due to the advantages of generalization capability in obtaining a unique solution, support vector regression (SVR) has also been successfully applied in financial time series forecasting. In the modeling of financial time series using SVR, one of the key problems is the inherent high noise. Thus, detecting and removing the noise are important but difficult tasks when building an SVR forecasting model. To alleviate the influence of noise, a two-stage modeling approach using independent component analysis (ICA) and support vector regression is proposed in financial time series forecasting. ICA is a novel statistical signal processing technique that was originally proposed to find the latent source signals from observed mixture signals without having any prior knowledge of the mixing mechanism. The proposed approach first uses ICA to the forecasting variables for generating the independent components (ICs). After identifying and removing the ICs containing the noise, the rest of the ICs are then used to reconstruct the forecasting variables which contain less noise and served as the input variables of the SVR forecasting model. In order to evaluate the performance of the proposed approach, the Nikkei 225 opening index and TAIEX closing index are used as illustrative examples. Experimental results show that the proposed model outperforms the SVR model with non-filtered forecasting variables and a random walk model.",
"title": ""
},
{
"docid": "neg:1840597_13",
"text": "Machine comprehension of text is the overarching goal of a great deal of research in natural language processing. The Machine Comprehension Test (Richardson et al., 2013) was recently proposed to assess methods on an open-domain, extensible, and easy-to-evaluate task consisting of two datasets. In this paper we develop a lexical matching method that takes into account multiple context windows, question types and coreference resolution. We show that the proposed method outperforms the baseline of Richardson et al. (2013), and despite its relative simplicity, is comparable to recent work using machine learning. We hope that our approach will inform future work on this task. Furthermore, we argue that MC500 is harder than MC160 due to the way question answer pairs were created.",
"title": ""
},
{
"docid": "neg:1840597_14",
"text": "Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions performed on an initial program input, the fuzzing agent learns a policy that can next generate new higher-reward inputs. We have implemented this new approach, and preliminary empirical evidence shows that reinforcement fuzzing can outperform baseline random fuzzing.",
"title": ""
},
{
"docid": "neg:1840597_15",
"text": "BACKGROUND\nThe main purpose of this study was to identify factors that influence healthcare quality in the Iranian context.\n\n\nMETHODS\nExploratory in-depth individual and focus group interviews were conducted with 222 healthcare stakeholders including healthcare providers, managers, policy-makers, and payers to identify factors affecting the quality of healthcare services provided in Iranian healthcare organisations.\n\n\nRESULTS\nQuality in healthcare is a production of cooperation between the patient and the healthcare provider in a supportive environment. Personal factors of the provider and the patient, and factors pertaining to the healthcare organisation, healthcare system, and the broader environment affect healthcare service quality. Healthcare quality can be improved by supportive visionary leadership, proper planning, education and training, availability of resources, effective management of resources, employees and processes, and collaboration and cooperation among providers.\n\n\nCONCLUSION\nThis article contributes to healthcare theory and practice by developing a conceptual framework that provides policy-makers and managers a practical understanding of factors that affect healthcare service quality.",
"title": ""
},
{
"docid": "neg:1840597_16",
"text": "When pedestrians encounter vehicles, they typically stop and wait for a signal from the driver to either cross or wait. What happens when the car is autonomous and there isn’t a human driver to signal them? This paper seeks to address this issue with an intent communication system (ICS) that acts in place of a human driver. This intent system has been developed to take into account the psychology behind what pedestrians are familiar with and what they expect from machines. The system integrates those expectations into the design of physical systems and mathematical algorithms. The goal of the system is to ensure that communication is simple, yet effective without leaving pedestrians with a sense of distrust in autonomous vehicles. To validate the ICS, two types of experiments have been run: field tests with an autonomous vehicle to determine how humans actually interact with the ICS and simulations to account for multiple potential behaviors.The results from both experiments show that humans react positively and more predictably when the intent of the vehicle is communicated compared to when the intent of the vehicle is unknown. In particular, the results from the simulation specifically showed a 142 percent difference between the pedestrian’s trust in the vehicle’s actions when the ICS is enabled and the pedestrian has prior knowledge of the vehicle than when the ICS is not enabled and the pedestrian having no prior knowledge of the vehicle.",
"title": ""
},
{
"docid": "neg:1840597_17",
"text": "English. The SENTIment POLarity Classification Task 2016 (SENTIPOLC), is a rerun of the shared task on sentiment classification at the message level on Italian tweets proposed for the first time in 2014 for the Evalita evaluation campaign. It includes three subtasks: subjectivity classification, polarity classification, and irony detection. In 2016 SENTIPOLC has been again the most participated EVALITA task with a total of 57 submitted runs from 13 different teams. We present the datasets – which includes an enriched annotation scheme for dealing with the impact on polarity of a figurative use of language – the evaluation methodology, and discuss results and participating systems. Italiano. Descriviamo modalità e risultati della seconda edizione della campagna di valutazione di sistemi di sentiment analysis (SENTIment POLarity Classification Task), proposta nel contesto di “EVALITA 2016: Evaluation of NLP and Speech Tools for Italian”. In SENTIPOLC è stata valutata la capacità dei sistemi di riconoscere diversi aspetti del sentiment espresso nei messaggi Twitter in lingua italiana, con un’articolazione in tre sottotask: subjectivity classification, polarity classification e irony detection. La campagna ha suscitato nuovamente grande interesse, con un totale di 57 run inviati da 13 gruppi di partecipanti.",
"title": ""
},
{
"docid": "neg:1840597_18",
"text": "Eruptive syringomas: unresponsiveness to oral isotretinoin A 22-year-old man of Egyptian origin was referred to our department due to exacerbation of pruritic pre-existing papular dermatoses. The skin lesions had been present since childhood. The family history was negative for a similar condition. The patient complained of exacerbation of the pruritus during physical activity under a hot climate and had moderate to severe pruritus during his work. Physical examination revealed multiple reddish-brownish smooth-surfaced, symmetrically distributed papules 2–4 mm in diameter on the patient’s trunk, neck, axillae, and limbs (Fig. 1). The rest of the physical examination was unremarkable. The Darier sign was negative. A skin biopsy was obtained from a representative lesion on the trunk. Histopathologic examination revealed a small, wellcircumscribed neoplasm confined to the upper dermis, composed of small solid and ductal structures relatively evenly distributed in a sclerotic collagenous stroma. The solid elements were of various shapes (round, oval, curvilinear, “comma-like,” or “tadpole-like”) (Fig. 2). These microscopic features and the clinical presentation were consistent with the diagnosis of eruptive syringomas. Our patient was treated with a short course of oral antihistamines without any effect and subsequently with low-dose isotretinoin (10 mg/day) for 5 months. No improvement of the skin eruption was observed while cessation of the pruritus was accomplished. Syringoma is a common adnexal tumor with differentiation towards eccrine acrosyringium composed of small solid and ductal elements embedded in a sclerotic stroma and restricted as a rule to the upper to mid dermis, usually presenting clinically as multiple lesions on the lower eyelids and cheeks of adolescent females. A much less common variant is the eruptive or disseminated syringomas, which involve primarily young women. Eruptive syringomas are characterized by rapid development during a short period of time of hundreds of small (1–5 mm), ill-defined, smooth surfaced, skin-colored, pink, yellowish, or brownish papules typically involving the face, trunk, genitalia, pubic area, and extremities but can occur principally in any site where eccrine glands are found. The pathogenesis of eruptive syringoma remains unclear. Some authors have recently challenged the traditional notion that eruptive syringomas are neoplastic lesions. Chandler and Bosenberg presented evidence that eruptive syringomas result from autoimmune destruction of the acrosyringium and proposed the term autoimmune acrosyringitis with ductal cysts. Garrido-Ruiz et al. support the theory that eruptive syringomas may represent a hyperplastic response of the eccrine duct to an inflammatory reaction. In a recent systematic review by Williams and Shinkai the strongest association of syringomas was with Down’s syndrome (183 reported cases, 22.2%). Syringomas are also associated with diabetes mellitus (17 reported cases, 2.1%), Ehlers–Danlos",
"title": ""
},
{
"docid": "neg:1840597_19",
"text": "Conditional belief networks introduce stochastic binary variables in neural networks. Contrary to a classical neural network, a belief network can predict more than the expected value of the output Y given the input X . It can predict a distribution of outputs Y which is useful when an input can admit multiple outputs whose average is not necessarily a valid answer. Such networks are particularly relevant to inverse problems such as image prediction for denoising, or text to speech. However, traditional sigmoid belief networks are hard to train and are not suited to continuous problems. This work introduces a new family of networks called linearizing belief nets or LBNs. A LBN decomposes into a deep linear network where each linear unit can be turned on or off by non-deterministic binary latent units. It is a universal approximator of real-valued conditional distributions and can be trained using gradient descent. Moreover, the linear pathways efficiently propagate continuous information and they act as multiplicative skip-connections that help optimization by removing gradient diffusion. This yields a model which trains efficiently and improves the state-of-the-art on image denoising and facial expression generation with the Toronto faces dataset.",
"title": ""
}
] |
1840598 | Hidden Roles of CSR : Perceived Corporate Social Responsibility as a Preventive against Counterproductive Work Behaviors | [
{
"docid": "pos:1840598_0",
"text": "In spite of the increasing importance of corporate social responsibility (CSR) and employee job performance, little is still known about the links between the socially responsible actions of organizations and the job performance of their members. In order to explain how employees’ perceptions of CSR influence their job performance, this study first examines the relationships between perceived CSR, organizational identification, job satisfaction, and job performance, and then develops a sequential mediation model by fully integrating these links. The results of structural equation modeling analyses conducted for 250 employees at hotels in South Korea offered strong support for the proposed model. We found that perceived CSR was indirectly and positively associated with job performance sequentially mediated first through organizational identification and then job satisfaction. This study theoretically contributes to the CSR literature by revealing the sequential mechanism through which employees’ perceptions of CSR affect their job performance, and offers practical implications by stressing the importance of employees’ perceptions of CSR. Limitations of this study and future research directions are discussed.",
"title": ""
},
{
"docid": "pos:1840598_1",
"text": "The purpose of this research was to develop broad, theoretically derived measure(s) of deviant behavior in the workplace. Two scales were developed: a 12-item scale of organizational deviance (deviant behaviors directly harmful to the organization) and a 7-item scale of interpersonal deviance (deviant behaviors directly harmful to other individuals within the organization). These scales were found to have internal reliabilities of .81 and .78, respectively. Confirmatory factor analysis verified that a 2-factor structure had acceptable fit. Preliminary evidence of construct validity is also provided. The implications of this instrument for future empirical research on workplace deviance are discussed.",
"title": ""
}
] | [
{
"docid": "neg:1840598_0",
"text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.",
"title": ""
},
{
"docid": "neg:1840598_1",
"text": "In this paper, we examine the generalization error of regularized distance metric learning. We show that with appropriate constraints, the generalization error of regularized distance metric learning could be independent from the dimensionality, making it suitable for handling high dimensional data. In addition, we present an efficient online learning algorithm for regularized distance metric learning. Our empirical studies with data classification and face recognition show that the proposed algorithm is (i) effective for distance metric learning when compared to the state-of-the-art methods, and (ii) efficient and robust for high dimensional data.",
"title": ""
},
{
"docid": "neg:1840598_2",
"text": "Recently, there has been considerable interest in providing \"trusted computing platforms\" using hardware~---~TCPA and Palladium being the most publicly visible examples. In this paper we discuss our experience with building such a platform using a traditional time-sharing operating system executing on XOM~---~a processor architecture that provides copy protection and tamper-resistance functions. In XOM, only the processor is trusted; main memory and the operating system are not trusted.Our operating system (XOMOS) manages hardware resources for applications that don't trust it. This requires a division of responsibilities between the operating system and hardware that is unlike previous systems. We describe techniques for providing traditional operating systems services in this context.Since an implementation of a XOM processor does not exist, we use SimOS to simulate the hardware. We modify IRIX 6.5, a commercially available operating system to create xomos. We are then able to analyze the performance and implementation overheads of running an untrusted operating system on trusted hardware.",
"title": ""
},
{
"docid": "neg:1840598_3",
"text": "Machine learning is an established method of selecting algorithms to solve hard search problems. Despite this, to date no systematic comparison and evaluation of the different techniques has been performed and the performance of existing systems has not been critically compared to other approaches. We compare machine learning techniques for algorithm selection on real-world data sets of hard search problems. In addition to well-established approaches, for the first time we also apply statistical relational learning to this problem. We demonstrate that most machine learning techniques and existing systems perform less well than one might expect. To guide practitioners, we close by giving clear recommendations as to which machine learning techniques are likely to perform well based on our experiments.",
"title": ""
},
{
"docid": "neg:1840598_4",
"text": "Performing High Voltage (HV) tasks with a multi craft work force create a special set of safety circumstances. This paper aims to present vital information relating to when it is acceptable to use a single or a two-layer soil structure. Also it discusses the implication of the high voltage infrastructure on the earth grid and the safety of this implication under a single or a two-layer soil structure. A multiple case study is investigated to show the importance of using the right soil resistivity structure during the earthing system design. Keywords—Earth Grid, EPR, High Voltage, Soil Resistivity Structure, Step Voltage, Touch Voltage.",
"title": ""
},
{
"docid": "neg:1840598_5",
"text": "This project is one of the research topics in Professor William Dally’s group. In this project, we developed a pruning based method to learn both weights and connections for Long Short Term Memory (LSTM). In this method, we discard the unimportant connections in a pretrained LSTM, and make the weight matrix sparse. Then, we retrain the remaining model. After we remaining model is converge, we prune this model again and retrain the remaining model iteratively, until we achieve the desired size of model and performance. This method will save the size of the LSTM as well as prevent overfitting. Our results retrained on NeuralTalk shows that we can discard nearly 90% of the weights without hurting the performance too much. Part of the results in this project will be posted in NIPS 2015.",
"title": ""
},
{
"docid": "neg:1840598_6",
"text": "This paper presents a novel method for hierarchically organizing large face databases, with application to efficient identity-based face retrieval. The method relies on metric learning with local binary pattern (LBP) features. On one hand, LBP features have proved to be highly resilient to various appearance changes due to illumination and contrast variations while being extremely efficient to calculate. On the other hand, metric learning (ML) approaches have been proved very successful for face verification ‘in the wild’, i.e. in uncontrolled face images with large amounts of variations in pose, expression, appearances, lighting, etc. While such ML based approaches compress high dimensional features into low dimensional spaces using discriminatively learned projections, the complexity of retrieval is still significant for large scale databases (with millions of faces). The present paper shows that learning such discriminative projections locally while organizing the database hierarchically leads to a more accurate and efficient system. The proposed method is validated on the standard Labeled Faces in the Wild (LFW) benchmark dataset with millions of additional distracting face images collected from photos on the internet.",
"title": ""
},
{
"docid": "neg:1840598_7",
"text": "Motor sequence learning is a process whereby a series of elementary movements is re-coded into an efficient representation for the entire sequence. Here we show that human subjects learn a visuomotor sequence by spontaneously chunking the elementary movements, while each chunk acts as a single memory unit. The subjects learned to press a sequence of 10 sets of two buttons through trial and error. By examining the temporal patterns with which subjects performed a visuomotor sequence, we found that the subjects performed the 10 sets as several clusters of sets, which were separated by long time gaps. While the overall performance time decreased by repeating the same sequence, the clusters became clearer and more consistent. The cluster pattern was uncorrelated with the distance of hand movements and was different across subjects who learned the same sequence. We then split a learned sequence into three segments, while preserving or destroying the clusters in the learned sequence, and shuffled the segments. The performance on the shuffled sequence was more accurate and quicker when the clusters in the original sequence were preserved than when they were destroyed. The results suggest that each cluster is processed as a single memory unit, a chunk, and is necessary for efficient sequence processing. A learned visuomotor sequence is hierarchically represented as chunks that contain several elementary movements. We also found that the temporal patterns of sequence performance transferred from the nondominant to dominant hand, but not vice versa. This may suggest a role of the dominant hemisphere in storage of learned chunks. Together with our previous unit-recording and imaging studies that used the same learning paradigm, we predict specific roles of the dominant parietal area, basal ganglia, and presupplementary motor area in the chunking.",
"title": ""
},
{
"docid": "neg:1840598_8",
"text": "We investigate the task of modeling opendomain, multi-turn, unstructured, multiparticipant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant’s history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.",
"title": ""
},
{
"docid": "neg:1840598_9",
"text": "© 2 0 0 1 m a s s a c h u s e t t s i n s t i t u t e o f t e c h n o l o g y, c a m b r i d g e , m a 0 2 1 3 9 u s a — w w w. a i. m i t. e d u",
"title": ""
},
{
"docid": "neg:1840598_10",
"text": "Digital asset management (DAM) has increasing benefits in booming global Internet economy, but it is still a great challenge for providing an effective way to manage, store, ingest, organize and retrieve digital asset. To do it, we present a new digital asset management platform, called DAM-Chain, with Transaction-based Access Control (TBAC) which integrates the distribution ABAC model and the blockchain technology. In this platform, the ABAC provides flexible and diverse authorization mechanisms for digital asset escrowed into blockchain while the blockchain's transactions serve as verifiable and traceable medium of access request procedure. We also present four types of transactions to describe the TBAC access control procedure, and provide the algorithms of these transactions corresponding to subject registration, object escrowing and publication, access request and grant. By maximizing the strengths of both ABAC and blockchain, this platform can support flexible and diverse permission management, as well as verifiable and transparent access authorization process in an open decentralized environment.",
"title": ""
},
{
"docid": "neg:1840598_11",
"text": "This paper proposes a new power-factor-correction (PFC) topology, and explains its operation principle, its control mechanism, related application problems followed by experimental results. In this proposed topology, critical-conduction-mode (CRM) interleaved technique is applied to a bridgeless PFC in order to achieve high efficiency by combining benefits of each topology. This application is targeted toward low to middle power applications that normally employs continuous-conductionmode boost converter. key words: PFC, Interleaved, critical-conduction-mode, totem-pole",
"title": ""
},
{
"docid": "neg:1840598_12",
"text": "Demand response (DR) is very important in the future smart grid, aiming to encourage consumers to reduce their demand during peak load hours. However, if binary decision variables are needed to specify start-up time of a particular appliance, the resulting mixed integer combinatorial problem is in general difficult to solve. In this paper, we study a versatile convex programming (CP) DR optimization framework for the automatic load management of various household appliances in a smart home. In particular, an L1 regularization technique is proposed to deal with schedule-based appliances (SAs), for which their on/off statuses are governed by binary decision variables. By relaxing these variables from integer to continuous values, the problem is reformulated as a new CP problem with an additional L1 regularization term in the objective. This allows us to transform the original mixed integer problem into a standard CP problem. Its major advantage is that the overall DR optimization problem remains to be convex and therefore the solution can be found efficiently. Moreover, a wide variety of appliances with different characteristics can be flexibly incorporated. Simulation result shows that the energy scheduling of SAs and other appliances can be determined simultaneously using the proposed CP formulation.",
"title": ""
},
{
"docid": "neg:1840598_13",
"text": "Introduction Digital technologies play an increasingly important role in shaping the profile of human thought and action. In the few short decades since its invention, for example, the World Wide Web has transformed the way we shop, date, socialize and undertake scientific endeavours. We are also witnessing an unprecedented rate of technological innovation and change, driven, at least in part, by exponential rates of growth in computing power and performance. The technological landscape is thus a highly dynamic one – new technologies are being introduced all the time, and the rate of change looks set to continue unabated. In view of all this, it is natural to wonder about the effects of new technology on both ourselves and the societies in which we live.",
"title": ""
},
{
"docid": "neg:1840598_14",
"text": "BACKGROUND\nLycopodium clavatum (Lyc) is a widely used homeopathic medicine for the liver, urinary and digestive disorders. Recently, acetyl cholinesterase (AchE) inhibitory activity has been found in Lyc alkaloid extract, which could be beneficial in dementia disorder. However, the effect of Lyc has not yet been explored in animal model of memory impairment and on cerebral blood flow.\n\n\nAIM\nThe present study was planned to explore the effect of Lyc on learning and memory function and cerebral blood flow (CBF) in intracerebroventricularly (ICV) administered streptozotocin (STZ) induced memory impairment in rats.\n\n\nMATERIALS AND METHODS\nMemory deficit was induced by ICV administration of STZ (3 mg/kg) in rats on 1st and 3rd day. Male SD rats were treated with Lyc Mother Tincture (MT) 30, 200 and 1000 for 17 days. Learning and memory was evaluated by Morris water maze test on 14th, 15th and 16th day. CBF was measured by Laser Doppler flow meter on 17th day.\n\n\nRESULTS\nSTZ (ICV) treated rats showed impairment in learning and memory along with reduced CBF. Lyc MT and 200 showed improvement in learning and memory. There was increased CBF in STZ (ICV) treated rats at all the potencies of Lyc studied.\n\n\nCONCLUSION\nThe above study suggests that Lyc may be used as a drug of choice in condition of memory impairment due to its beneficial effect on CBF.",
"title": ""
},
{
"docid": "neg:1840598_15",
"text": "Intensive use of e-business can provide number of opportunities and actual benefits to companies of all activities and sizes. In general, through the use of web sites companies can create global presence and widen business boundaries. Many organizations now have websites to complement their other activities, but it is likely that a smaller proportion really know how successful their sites are and in what extent they comply with business objectives. A key enabler of web sites measurement is web site analytics and metrics. Web sites analytics especially refers to the use of data collected from a web site to determine which aspects of the web site work towards the business objectives. Advanced web analytics must play an important role in overall company strategy and should converge to web intelligence – a specific part of business intelligence which collect and analyze information collected from web sites and apply them in relevant ‘business’ context. This paper examines the importance of measuring the web site quality of the Croatian hotels. Wide range of web site metrics are discussed and finally a set of 8 dimensions and 44 attributes chosen for the evaluation of Croatian hotel’s web site quality. The objective of the survey conducted on the 30 hotels was to identify different groups of hotel web sites in relation to their quality measured with specific web metrics. Key research question was: can hotel web sites be placed into meaningful groups by consideration of variation in web metrics and number of hotel stars? To confirm the reliability of chosen metrics a Cronbach's alpha test was conducted. Apart from descriptive statistics tools, to answer the posed research question, clustering analysis was conducted and the characteristics of the three clusters were considered. Experiences and best practices of the hotel web sites clusters are taken as the prime source of recommendation for improving web sites quality level. Key-Words: web metrics, hotel web sites, web analytics, web site audit, web site quality, cluster analysis",
"title": ""
},
{
"docid": "neg:1840598_16",
"text": "Starch is a major storage product of many economically important crops such as wheat, rice, maize, tapioca, and potato. A large-scale starch processing industry has emerged in the last century. In the past decades, we have seen a shift from the acid hydrolysis of starch to the use of starch-converting enzymes in the production of maltodextrin, modified starches, or glucose and fructose syrups. Currently, these enzymes comprise about 30% of the world’s enzyme production. Besides the use in starch hydrolysis, starch-converting enzymes are also used in a number of other industrial applications, such as laundry and porcelain detergents or as anti-staling agents in baking. A number of these starch-converting enzymes belong to a single family: the -amylase family or family13 glycosyl hydrolases. This group of enzymes share a number of common characteristics such as a ( / )8 barrel structure, the hydrolysis or formation of glycosidic bonds in the conformation, and a number of conserved amino acid residues in the active site. As many as 21 different reaction and product specificities are found in this family. Currently, 25 three-dimensional (3D) structures of a few members of the -amylase family have been determined using protein crystallization and X-ray crystallography. These data in combination with site-directed mutagenesis studies have helped to better understand the interactions between the substrate or product molecule and the different amino acids found in and around the active site. This review illustrates the reaction and product diversity found within the -amylase family, the mechanistic principles deduced from structure–function relationship structures, and the use of the enzymes of this family in industrial applications. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840598_17",
"text": "AFFILIATIONS: asHouri, Hsu, soroosHian, and braitHwaite— Center for Hydrometeorology and Remote Sensing, Henry Samueli School of Engineering, Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California; Knapp and neLson—NOAA/National Climatic Data Center, Asheville, North Carolina; CeCiL—Global Science & Technology, Inc., Asheville, North Carolina; prat—Cooperative Institute for Climate and Satellites, North Carolina State University, and NOAA/National Climatic Data Center, Asheville, North Carolina CORRESPONDING AUTHOR: Hamed Ashouri, Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 E-mail: [email protected]",
"title": ""
},
{
"docid": "neg:1840598_18",
"text": "Images of faces manipulated to make their shapes closer to the average are perceived as more attractive. The influences of symmetry and averageness are often confounded in studies based on full-face views of faces. Two experiments are reported that compared the effect of manipulating the averageness of female faces in profile and full-face views. Use of a profile view allows a face to be \"morphed\" toward an average shape without creating an image that becomes more symmetrical. Faces morphed toward the average were perceived as more attractive in both views, but the effect was significantly stronger for full-face views. Both full-face and profile views morphed away from the average shape were perceived as less attractive. It is concluded that the effect of averageness is independent of any effect of symmetry on the perceived attractiveness of female faces.",
"title": ""
},
{
"docid": "neg:1840598_19",
"text": "Maximum Power Point Tracking (MPPT) is widely used control technique to extract maximum power available from the solar cell of photovoltaic (PV) module. Since the solar cells have non-linear i–v characteristics. The efficiency of PV module is very low and power output depends on solar insolation level and ambient temperature, so maximization of power output with greater efficiency is of special interest. Moreover there is great loss of power due to mismatch of source and load. So, to extract maximum power from solar panel a MPPT needs to be designed. The objective of the paper is to present a novel cost effective and efficient microcontroller based MPPT system for solar photovoltaic system to ensure fast maximum power point operation at all fast changing environmental conditions. The proposed controller scheme utilizes PWM techniques to regulate the output power of boost DC/DC converter at its maximum possible value and simultaneously controls the charging process of battery. Incremental Conductance algorithm is implemented to track maximum power point. For the feasibility study, parameter extraction, model evaluation and analysis of converter system design a MATLAB/Simulink model is demonstrated and simulated for a typical 40W solar panel from Kyocera KC-40 for hardware implementation and verification. Finally, a hardware model is designed and tested in lab at different operating conditions. Further, MPPT system has been tested with Solar Panel at different solar insolation level and temperature. The resulting system has high-efficiency, lower-cost, very fast tracking speed and can be easily modified for additional control function for future use.",
"title": ""
}
] |
1840599 | A model for improved association of radar and camera objects in an indoor environment | [
{
"docid": "pos:1840599_0",
"text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.",
"title": ""
}
] | [
{
"docid": "neg:1840599_0",
"text": "I n the growing fields of wearable robotics, rehabilitation robotics, prosthetics, and walking robots, variable stiffness actuators (VSAs) or adjustable compliant actuators are being designed and implemented because of their ability to minimize large forces due to shocks, to safely interact with the user, and their ability to store and release energy in passive elastic elements. This review article describes the state of the art in the design of actuators with adaptable passive compliance. This new type of actuator is not preferred for classical position-controlled applications such as pick and place operations but is preferred in novel robots where safe human– robot interaction is required or in applications where energy efficiency must be increased by adapting the actuator’s resonance frequency. The working principles of the different existing designs are explained and compared. The designs are divided into four groups: equilibrium-controlled stiffness, antagonistic-controlled stiffness, structure-controlled stiffness (SCS), and mechanically controlled stiffness. In classical robotic applications, actuators are preferred to be as stiff as possible to make precise position movements or trajectory tracking control easier (faster systems with high bandwidth). The biological counterpart is the muscle that has superior functional performance and a neuromechanical control system that is much more advanced at adapting and tuning its parameters. The superior power-to-weight ratio, force-toweight ratio, compliance, and control of muscle, when compared with traditional robotic actuators, are the main barriers for the development of machines that can match the motion, safety, and energy efficiency of human or other animals. One of the key differences of these systems is the compliance or springlike behavior found in biological systems [1]. Although such compliant",
"title": ""
},
{
"docid": "neg:1840599_1",
"text": "Raman microscopy is a non-destructive technique requiring minimal sample preparation that can be used to measure the chemical properties of the mineral and collagen parts of bone simultaneously. Modern Raman instruments contain the necessary components and software to acquire the standard information required in most bone studies. The spatial resolution of the technique is about a micron. As it is non-destructive and small samples can be used, it forms a useful part of a bone characterisation toolbox.",
"title": ""
},
{
"docid": "neg:1840599_2",
"text": "Abeta peptide accumulation is thought to be the primary event in the pathogenesis of Alzheimer's disease (AD), with downstream neurotoxic effects including the hyperphosphorylation of tau protein. Glycogen synthase kinase-3 (GSK-3) is increasingly implicated as playing a pivotal role in this amyloid cascade. We have developed an adult-onset Drosophila model of AD, using an inducible gene expression system to express Arctic mutant Abeta42 specifically in adult neurons, to avoid developmental effects. Abeta42 accumulated with age in these flies and they displayed increased mortality together with progressive neuronal dysfunction, but in the apparent absence of neuronal loss. This fly model can thus be used to examine the role of events during adulthood and early AD aetiology. Expression of Abeta42 in adult neurons increased GSK-3 activity, and inhibition of GSK-3 (either genetically or pharmacologically by lithium treatment) rescued Abeta42 toxicity. Abeta42 pathogenesis was also reduced by removal of endogenous fly tau; but, within the limits of detection of available methods, tau phosphorylation did not appear to be altered in flies expressing Abeta42. The GSK-3-mediated effects on Abeta42 toxicity appear to be at least in part mediated by tau-independent mechanisms, because the protective effect of lithium alone was greater than that of the removal of tau alone. Finally, Abeta42 levels were reduced upon GSK-3 inhibition, pointing to a direct role of GSK-3 in the regulation of Abeta42 peptide level, in the absence of APP processing. Our study points to the need both to identify the mechanisms by which GSK-3 modulates Abeta42 levels in the fly and to determine if similar mechanisms are present in mammals, and it supports the potential therapeutic use of GSK-3 inhibitors in AD.",
"title": ""
},
{
"docid": "neg:1840599_3",
"text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].",
"title": ""
},
{
"docid": "neg:1840599_4",
"text": "Deep neural networks (DNNs) are powerful machine learning models and have succeeded in various artificial intelligence tasks. Although various architectures and modules for the DNNs have been proposed, selecting and designing the appropriate network structure for a target problem is a challenging task. In this paper, we propose a method to simultaneously optimize the network structure and weight parameters during neural network training. We consider a probability distribution that generates network structures, and optimize the parameters of the distribution instead of directly optimizing the network structure. The proposed method can apply to the various network structure optimization problems under the same framework. We apply the proposed method to several structure optimization problems such as selection of layers, selection of unit types, and selection of connections using the MNIST, CIFAR-10, and CIFAR-100 datasets. The experimental results show that the proposed method can find the appropriate and competitive network structures.",
"title": ""
},
{
"docid": "neg:1840599_5",
"text": "Modern cyber-physical systems are complex networked computing systems that electronically control physical systems. Autonomous road vehicles are an important and increasingly ubiquitous instance. Unfortunately, their increasing complexity often leads to security vulnerabilities. Network connectivity exposes these vulnerable systems to remote software attacks that can result in real-world physical damage, including vehicle crashes and loss of control authority. We introduce an integrated architecture to provide provable security and safety assurance for cyber-physical systems by ensuring that safety-critical operations and control cannot be unintentionally affected by potentially malicious parts of the system. Finegrained information flow control is used to design both hardware and software, determining how low-integrity information can affect high-integrity control decisions. This security assurance is used to improve end-to-end security across the entire cyber-physical system. We demonstrate this integrated approach by developing a mobile robotic testbed modeling a self-driving system and testing it with a malicious attack. ACM Reference Format: Jed Liu, Joe Corbett-Davies, Andrew Ferraiuolo, Alexander Ivanov, Mulong Luo, G. Edward Suh, Andrew C. Myers, and Mark Campbell. 2018. Secure Autonomous Cyber-Physical Systems Through Verifiable Information Flow Control. InWorkshop on Cyber-Physical Systems Security & Privacy (CPS-SPC ’18), October 19, 2018, Toronto, ON, Canada. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3264888.3264889",
"title": ""
},
{
"docid": "neg:1840599_6",
"text": "Hair is typically modeled and rendered using either explicitly defined hair strand geometry or a volume texture of hair densities. Taken each on their own, these two hair representations have difficulties in the case of animal fur as it consists of very dense and thin undercoat hairs in combination with coarse guard hairs. Explicit hair strand geometry is not well-suited for the undercoat hairs, while volume textures are not well-suited for the guard hairs. To efficiently model and render both guard hairs and undercoat hairs, we present a hybrid technique that combines rasterization of explicitly defined guard hairs with ray marching of a prismatic shell volume with dynamic resolution. The latter is the key to practical combination of the two techniques, and it also enables a high degree of detail in the undercoat. We demonstrate that our hybrid technique creates a more detailed and soft fur appearance as compared with renderings that only use explicitly defined hair strands. Finally, our rasterization approach is based on order-independent transparency and renders high-quality fur images in seconds.",
"title": ""
},
{
"docid": "neg:1840599_7",
"text": "Domain-oriented dialogue systems are often faced with users that try to cross the limits of their knowledge, by unawareness of its domain limitations or simply to test their capacity. These interactions are considered to be Out-Of-Domain and several strategies can be found in the literature to deal with some specific situations. Since if a certain input appears once, it has a non-zero probability of being entered later, the idea of taking advantage of real human interactions to feed these dialogue systems emerges, thus, naturally. In this paper, we introduce the SubTle Corpus, a corpus of Interaction-Response pairs extracted from subtitles files, created to help dialogue systems to deal with Out-of-Domain interactions.",
"title": ""
},
{
"docid": "neg:1840599_8",
"text": "Most current term frequency normalization approaches for information retrieval involve the use of parameters. The tuning of these parameters has an important impact on the overall performance of the information retrieval system. Indeed, a small variation in the involved parameter(s) could lead to an important variation in the precision/recall values. Most current tuning approaches are dependent on the document collections. As a consequence, the effective parameter value cannot be obtained for a given new collection without extensive training data. In this paper, we propose a novel and robust method for the tuning of term frequency normalization parameter(s), by measuring the normalization effect on the within document frequency of the query terms. As an illustration, we apply our method on Amati \\& Van Rijsbergen's so-called normalization 2. The experiments for the ad-hoc TREC-6,7,8 tasks and TREC-8,9,10 Web tracks show that the new method is independent of the collections and able to provide reliable and good performance.",
"title": ""
},
{
"docid": "neg:1840599_9",
"text": "AIMS\nExcessive internet use is becoming a concern, and some have proposed that it may involve addiction. We evaluated the dimensions assessed by, and psychometric properties of, a range of questionnaires purporting to assess internet addiction.\n\n\nMETHODS\nFourteen questionnaires were identified purporting to assess internet addiction among adolescents and adults published between January 1993 and October 2011. Their reported dimensional structure, construct, discriminant and convergent validity and reliability were assessed, as well as the methods used to derive these.\n\n\nRESULTS\nMethods used to evaluate internet addiction questionnaires varied considerably. Three dimensions of addiction predominated: compulsive use (79%), negative outcomes (86%) and salience (71%). Less common were escapism (21%), withdrawal symptoms (36%) and other dimensions. Measures of validity and reliability were found to be within normally acceptable limits.\n\n\nCONCLUSIONS\nThere is a broad convergence of questionnaires purporting to assess internet addiction suggesting that compulsive use, negative outcome and salience should be covered and the questionnaires show adequate psychometric properties. However, the methods used to evaluate the questionnaires vary widely and possible factors contributing to excessive use such as social motivation do not appear to be covered.",
"title": ""
},
{
"docid": "neg:1840599_10",
"text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.",
"title": ""
},
{
"docid": "neg:1840599_11",
"text": "Isogeometric analysis has been proposed as a methodology for bridging the gap between computer aided design (CAD) and finite element analysis (FEA). Although both the traditional and isogeometric pipelines rely upon the same conceptualization to solid model steps, they drastically differ in how they bring the solid model both to and through the analysis process. The isogeometric analysis process circumvents many of the meshing pitfalls experienced by the traditional pipeline by working directly within the approximation spaces used by the model representation. In this paper, we demonstrate that in a similar way as how mesh quality is used in traditional FEA to help characterize the impact of the mesh on analysis, an analogous concept of model quality exists within isogeometric analysis. The consequence of these observations is the need for a new area within modeling – analysis-aware modeling – in which model properties and parameters are selected to facilitate isogeometric analysis. ! 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840599_12",
"text": "Demand response (DR) is very important in the future smart grid, aiming to encourage consumers to reduce their demand during peak load hours. However, if binary decision variables are needed to specify start-up time of a particular appliance, the resulting mixed integer combinatorial problem is in general difficult to solve. In this paper, we study a versatile convex programming (CP) DR optimization framework for the automatic load management of various household appliances in a smart home. In particular, an L1 regularization technique is proposed to deal with schedule-based appliances (SAs), for which their on/off statuses are governed by binary decision variables. By relaxing these variables from integer to continuous values, the problem is reformulated as a new CP problem with an additional L1 regularization term in the objective. This allows us to transform the original mixed integer problem into a standard CP problem. Its major advantage is that the overall DR optimization problem remains to be convex and therefore the solution can be found efficiently. Moreover, a wide variety of appliances with different characteristics can be flexibly incorporated. Simulation result shows that the energy scheduling of SAs and other appliances can be determined simultaneously using the proposed CP formulation.",
"title": ""
},
{
"docid": "neg:1840599_13",
"text": "Chaos and its drive-response synchronization for a fractional-order cellular neural networks (CNN) are studied. It is found that chaos exists in the fractional-order system with six-cell. The phase synchronisation of drive and response chaotic trajectories is investigated after that. These works based on Lyapunov exponents (LE), Lyapunov stability theory and numerical solving fractional-order system in Matlab environment.",
"title": ""
},
{
"docid": "neg:1840599_14",
"text": "BACKGROUND\nΔ(9)-Tetrahydrocannabinol (THC), 11-nor-9-carboxy-THC (THCCOOH), and cannabinol (CBN) were measured in breath following controlled cannabis smoking to characterize the time course and windows of detection of breath cannabinoids.\n\n\nMETHODS\nExhaled breath was collected from chronic (≥4 times per week) and occasional (<twice per week) smokers before and after smoking a 6.8% THC cigarette. Sample analysis included methanol extraction from breath pads, solid-phase extraction, and liquid chromatography-tandem mass spectrometry quantification.\n\n\nRESULTS\nTHC was the major cannabinoid in breath; no sample contained THCCOOH and only 1 contained CBN. Among chronic smokers (n = 13), all breath samples were positive for THC at 0.89 h, 76.9% at 1.38 h, and 53.8% at 2.38 h, and only 1 sample was positive at 4.2 h after smoking. Among occasional smokers (n = 11), 90.9% of breath samples were THC-positive at 0.95 h and 63.6% at 1.49 h. One occasional smoker had no detectable THC. Analyte recovery from breath pads by methanolic extraction was 84.2%-97.4%. Limits of quantification were 50 pg/pad for THC and CBN and 100 pg/pad for THCCOOH. Solid-phase extraction efficiency was 46.6%-52.1% (THC) and 76.3%-83.8% (THCCOOH, CBN). Matrix effects were -34.6% to 12.3%. Cannabinoids fortified onto breath pads were stable (≤18.2% concentration change) for 8 h at room temperature and -20°C storage for 6 months.\n\n\nCONCLUSIONS\nBreath may offer an alternative matrix for identifying recent driving under the influence of cannabis, but currently sensitivity is limited to a short detection window (0.5-2 h).",
"title": ""
},
{
"docid": "neg:1840599_15",
"text": "With increasing audio/video service consumption through unmanaged IP networks, HTTP adaptive streaming techniques have emerged to handle bandwidth limitations and variations. But while it is becoming common to serve multiple clients in one home network, these solutions do not adequately address fine tuned quality arbitration between the multiple streams. While clients compete for bandwidth, the video suffers unstable conditions and/or inappropriate bit-rate levels.\n We hereby experiment a mechanism based on traffic chapping that allow bandwidth arbitration to be implemented in the home gateway, first determining desirable target bit-rates to be reached by each stream and then constraining the clients to stay within their limits. This enables the delivery of optimal quality of experience to the maximum number of users. This approach is validated through experimentation, and results are shown through a set of objective measurement criteria.",
"title": ""
},
{
"docid": "neg:1840599_16",
"text": "Although parallel and convergent evolution are discussed extensively in technical articles and textbooks, their meaning can be overlapping, imprecise, and contradictory. The meaning of parallel evolution in much of the evolutionary literature grapples with two separate hypotheses in relation to phenotype and genotype, but often these two hypotheses have been inferred from only one hypothesis, and a number of subsidiary but problematic criteria, in relation to the phenotype. However, examples of parallel evolution of genetic traits that underpin or are at least associated with convergent phenotypes are now emerging. Four criteria for distinguishing parallelism from convergence are reviewed. All are found to be incompatible with any single proposition of homoplasy. Therefore, all homoplasy is equivalent to a broad view of convergence. Based on this concept, all phenotypic homoplasy can be described as convergence and all genotypic homoplasy as parallelism, which can be viewed as the equivalent concept of convergence for molecular data. Parallel changes of molecular traits may or may not be associated with convergent phenotypes but if so describe homoplasy at two biological levels-genotype and phenotype. Parallelism is not an alternative to convergence, but rather it entails homoplastic genetics that can be associated with and potentially explain, at the molecular level, how convergent phenotypes evolve.",
"title": ""
},
{
"docid": "neg:1840599_17",
"text": "s Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. GJCST Classification : C.1.4, C.2.1 Research Issues in Cloud Computing Strictly as per the compliance and regulations of: Research Issues in Cloud Computing V. Krishna Reddy , B. Thirumala Rao , Dr. L.S.S. Reddy , P. Sai Kiran ABSTRACT : Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges.",
"title": ""
},
{
"docid": "neg:1840599_18",
"text": "We present impacto, a device designed to render the haptic sensation of hitting or being hit in virtual reality. The key idea that allows the small and light impacto device to simulate a strong hit is that it decomposes the stimulus: it renders the tactile aspect of being hit by tapping the skin using a solenoid; it adds impact to the hit by thrusting the user's arm backwards using electrical muscle stimulation. The device is self-contained, wireless, and small enough for wearable use, thus leaves the user unencumbered and able to walk around freely in a virtual environment. The device is of generic shape, allowing it to also be worn on legs, so as to enhance the experience of kicking, or merged into props, such as a baseball bat. We demonstrate how to assemble multiple impacto units into a simple haptic suit. Participants of our study rated impact simulated using impacto's combination of solenoid hit and electrical muscle stimulation as more realistic than either technique in isolation.",
"title": ""
},
{
"docid": "neg:1840599_19",
"text": "I n our last article 1 we described the external features which characterize the cranial and facial structures of the cranial strains known as hyperflexion and hyperextension. To understand how these strains develop we have to examine the anatomical relations underlying all cranial patterns. Each strain represent a variation on a theme. By studying the features in common, it is possible to account for the facial and dental consequences of these variations. The key is the spheno-basilar symphysis and the displacements which can take place between the occiput and the sphenoid at that suture. In hyperflexion there is shortening of the cranium in an antero-posterior direction with a subsequent upward buckling of the spheno-basilar symphysis (Figure 1). In children, where the cartilage of the joint has not ossified, a v-shaped wedge can be seen occasionally on the lateral skull radiograph (Figure 2). Figure (3a) is of the cranial base seen from a vertex viewpoint. By leaving out the temporal bones the connection between the centrally placed spheno-basilar symphysis and the peripheral structures of the cranium can be seen more easily. Sutherland realized that the cranium could be divided into quadrants (Figure 3b) centered on the spheno-basilar symphysis and that what happens in each quadrant is directly influenced by the spheno-basilar symphysis. He noted that accompanying the vertical changes at the symphysis there are various lateral displacements. As the peripheral structures move laterally, this is known as external rotation. If they move closer to the midline, this is called internal rotation. It is not unusual to have one side of the face externally rotated and the other side internally rotated (Figure 4a). This can have a significant effect in the mouth, giving rise to asymmetries (Figure 4b). This shows a palatal view of the maxilla with the left posterior dentition externally rotated and the right buccal posterior segment internally rotated, reflecting the internal rotation of the whole right side of the face. This can be seen in hyperflexion but also other strains. With this background, it is now appropriate to examine in detail the cranial strain known as hyperflexion. As its name implies, it is brought about by an exaggeration of the flexion/ extension movement of the cranium into flexion. Rhythmic movement of the cranium continues despite the displacement into flexion, but it does so more readily into flexion than extension. As the skull is shortened in an antero-posterior plane, it is widened laterally. Figures 3a and 3b. 3a: cranial base from a vertex view (temporal bones left out). 3b: Sutherland’s quadrants imposed on cranial base. Figure 2. Lateral Skull Radiograph of Hyperflexion patient. Note V-shaped wedge at superior border of the spheno-basillar symphysis. Figure 1. Movement of Occiput and Sphenold in Hyperflexion. Reprinted from Orthopedic Gnathology, Hockel, J., Ed. 1983. With permission from Quintessence Publishing Co.",
"title": ""
}
] |
1840600 | Design of LTCC Wideband Patch Antenna for LMDS Band Applications | [
{
"docid": "pos:1840600_0",
"text": "A simple procedure for the design of compact stacked-patch antennas is presented based on LTCC multilayer packaging technology. The advantage of this topology is that only one parameter, i.e., the substrate thickness (or equivalently the number of LTCC layers), needs to be adjusted in order to achieve an optimized bandwidth performance. The validity of the new design strategy is verified through applying it to practical compact antenna design for several wireless communication bands, including ISM 2.4-GHz band, IEEE 802.11a 5.8-GHz, and LMDS 28-GHz band. It is shown that a 10-dB return-loss bandwidth of 7% can be achieved for the LTCC (/spl epsiv//sub r/=5.6) multilayer structure with a thickness of less than 0.03 wavelengths, which can be realized using a different number of laminated layers for different frequencies (e.g., three layers for the 28-GHz band).",
"title": ""
}
] | [
{
"docid": "neg:1840600_0",
"text": "ABSTRACT. The purpose of this study is to construct doctors’ acceptance model of Electronic Medical Records (EMR) in private hospitals. The model extends the Technology Acceptance Model (TAM) with two factors of Individual Capabilities; Self-Efficacy (SE) and Perceived Behavioral Control (PBC). The initial findings proposes additional factors over the original factors in TAM making Perceived Usefulness (PU), Perceived Ease Of Use (PEOU), Behavioral Intention to use (BI), SE, and PBC working in incorporation. A cross-sectional survey was used in which data were gathered by a personal administered questionnaire as the instrument for data collection. Doctors of public hospitals were involved in this study which proves that all factors are reliable.",
"title": ""
},
{
"docid": "neg:1840600_1",
"text": "Understanding the anatomy of the ankle ligaments is important for correct diagnosis and treatment. Ankle ligament injury is the most frequent cause of acute ankle pain. Chronic ankle pain often finds its cause in laxity of one of the ankle ligaments. In this pictorial essay, the ligaments around the ankle are grouped, depending on their anatomic orientation, and each of the ankle ligaments is discussed in detail.",
"title": ""
},
{
"docid": "neg:1840600_2",
"text": "We consider a generalization of the lcm-sum function, and we give two kinds of asymptotic formulas for the sum of that function. Our results include a generalization ofBordelì es's results and a refinement of the error estimate of Alladi's result. We prove these results by the method similar to those ofBordelì es.",
"title": ""
},
{
"docid": "neg:1840600_3",
"text": "We present the characterization of dry spiked biopotential electrodes and test their suitability to be used in anesthesia monitoring systems based on the measurement of electroencephalographic signals. The spiked electrode consists of an array of microneedles penetrating the outer skin layers. We found a significant dependency of the electrode-skin-electrode impedance (ESEI) on the electrode size (i.e., the number of spikes) and the coating material of the spikes. Electrodes larger than 3/spl times/3 mm/sup 2/ coated with Ag-AgCl have sufficiently low ESEI to be well suited for electroencephalograph (EEG) recordings. The maximum measured ESEI was 4.24 k/spl Omega/ and 87 k/spl Omega/, at 1 kHz and 0.6 Hz, respectively. The minimum ESEI was 0.65 k/spl Omega/ an 16 k/spl Omega/, at the same frequencies. The ESEI of spiked electrodes is stable over an extended period of time. The arithmetic mean of the generated DC offset voltage is 11.8 mV immediately after application on the skin and 9.8 mV after 20-30 min. A spectral study of the generated potential difference revealed that the AC part was unstable at frequencies below approximately 0.8 Hz. Thus, the signal does not interfere with a number of clinical applications using real-time EEG. Comparing raw EEG recordings of the spiked electrode with commercial Zipprep electrodes showed that both signals were similar. Due to the mechanical strength of the silicon microneedles and the fact that neither skin preparation nor electrolytic gel is required, use of the spiked electrode is convenient. The spiked electrode is very comfortable for the patient.",
"title": ""
},
{
"docid": "neg:1840600_4",
"text": "Automatic differentiation—the mechanical transformation of numeric computer programs to calculate derivatives efficiently and accurately—dates to the origin of the computer age. Reverse mode automatic differentiation both antedates and generalizes the method of backwards propagation of errors used in machine learning. Despite this, practitioners in a variety of fields, including machine learning, have been little influenced by automatic differentiation, and make scant use of available tools. Here we review the technique of automatic differentiation, describe its two main modes, and explain how it can benefit machine learning practitioners. To reach the widest possible audience our treatment assumes only elementary differential calculus, and does not assume any knowledge of linear algebra.",
"title": ""
},
{
"docid": "neg:1840600_5",
"text": "Moore’s law has allowed the microprocessor market to innovate at an astonishing rate. We believe microchip implants are the next frontier for the integrated circuit industry. Current health monitoring technologies are large, expensive, and consume significant power. By miniaturizing and reducing power, monitoring equipment can be implanted into the body and allow 24/7 health monitoring. We plan to implement a new transmitter topology, compressed sensing, which can be used for wireless communications with microchip implants. This paper focuses on the ADC used in the compressed sensing signal chain. Using the Cadence suite of tools and a 32/28nm process, we produced simulations of our compressed sensing Analog to Digital Converter to feed into a Digital Compression circuit. Our results indicate that a 12-bit, 20Ksample, 9.8nW Successive Approximation ADC is possible for diagnostic resolution (10 bits). By incorporating a hybrid-C2C DAC with differential floating voltage shields, it is possible to obtain 9.7 ENOB. Thus, we recommend this ADC for use in compressed sensing for biomedical purposes. Not only will it be useful in digital compressed sensing, but this can also be repurposed for use in analog compressed sensing.",
"title": ""
},
{
"docid": "neg:1840600_6",
"text": "Digital investigation in the cloud is challenging, but there's also opportunities for innovations in digital forensic solutions (such as remote forensic collection of evidential data from cloud servers client devices and the underlying supporting infrastructure such as distributed file systems). This column describes the challenges and opportunities in cloud forensics.",
"title": ""
},
{
"docid": "neg:1840600_7",
"text": "In several previous papers and particularly in [3] we presented the use of logic equations and their solution using ternary vectors and set-theoretic considerations as well as binary codings and bit-parallel vector operations. In this paper we introduce a new and elegant model for the game of Sudoku that uses the same approach and solves this problem without any search always finding all solutions (including no solutions or several solutions). It can also be extended to larger Sudokus and to a whole class of similar discrete problems, such as Queens’ problems on the chessboard, graph-coloring problems etc. Disadvantages of known SAT approaches for such problems were overcome by our new method.",
"title": ""
},
{
"docid": "neg:1840600_8",
"text": "Adjectives like warm, hot, and scalding all describe temperature but differ in intensity. Understanding these differences between adjectives is a necessary part of reasoning about natural language. We propose a new paraphrasebased method to automatically learn the relative intensity relation that holds between a pair of scalar adjectives. Our approach analyzes over 36k adjectival pairs from the Paraphrase Database under the assumption that, for example, paraphrase pair really hot↔ scalding suggests that hot < scalding. We show that combining this paraphrase evidence with existing, complementary patternand lexicon-based approaches improves the quality of systems for automatically ordering sets of scalar adjectives and inferring the polarity of indirect answers to yes/no questions.",
"title": ""
},
{
"docid": "neg:1840600_9",
"text": "The popularity of mobile devices and location-based services (LBSs) has raised significant concerns regarding the location privacy of their users. A popular approach to protect location privacy is anonymizing the users of LBS systems. In this paper, we introduce an information-theoretic notion for location privacy, which we call perfect location privacy. We then demonstrate how anonymization should be used by LBS systems to achieve the defined perfect location privacy. We study perfect location privacy under two models for user movements. First, we assume that a user’s current location is independent from her past locations. Using this independent identically distributed (i.i.d.) model, we show that if the pseudonym of the user is changed before <inline-formula> <tex-math notation=\"LaTeX\">$O\\left({n^{\\frac {2}{r-1}}}\\right)$ </tex-math></inline-formula> observations are made by the adversary for that user, then the user has perfect location privacy. Here, <inline-formula> <tex-math notation=\"LaTeX\">$n$ </tex-math></inline-formula> is the number of the users in the network and <inline-formula> <tex-math notation=\"LaTeX\">$r$ </tex-math></inline-formula> is the number of all possible locations. Next, we model users’ movements using Markov chains to better model real-world movement patterns. We show that perfect location privacy is achievable for a user if the user’s pseudonym is changed before <inline-formula> <tex-math notation=\"LaTeX\">$O\\left({n^{\\frac {2}{|E|-r}}}\\right)$ </tex-math></inline-formula> observations are collected by the adversary for that user, where <inline-formula> <tex-math notation=\"LaTeX\">$|E|$ </tex-math></inline-formula> is the number of edges in the user’s Markov chain model.",
"title": ""
},
{
"docid": "neg:1840600_10",
"text": "This paper demonstrates the sketch drawing capability of NAO humanoid robot. Two redundant degrees of freedom elbow yaw (RElbowYaw) and wrist yaw (RWristYaw) of the right hand have been sacrificed because of their less contribution in drawing. The Denavit-Hartenberg (DH) parameters of the system has been defined in order to measure the working envelop of the right hand as well as to achieve the inverse kinematic solution. A linear transformation has been used to transform the image points with respect to real world coordinate system and novel 4 point calibration technique has been proposed to calibrate the real world coordinate system with respect to NAO end effector.",
"title": ""
},
{
"docid": "neg:1840600_11",
"text": "A simultaneous analytical method for etizolam and its main metabolites (alpha-hydroxyetizolam and 8-hydroxyetizolam) in whole blood was developed using solid-phase extraction, TMS derivatization and ion trap gas chromatography tandem mass spectrometry (GC-MS/MS). Separation of etizolam, TMS derivatives of alpha-hydroxyetizolam and 8-hydroxyetizolam and fludiazepam as internal standard was performed within about 17 min. The inter-day precision evaluated at the concentration of 50 ng/mL etizolam, alpha-hydroxyetizolam and 8-hydroxyetizolam was evaluated 8.6, 6.4 and 8.0% respectively. Linearity occurred over the range in 5-50 ng/mL. This method is satisfactory for clinical and forensic purposes. This method was applied to two unnatural death cases suspected to involve etizolam. Etizolam and its two metabolites were detected in these cases.",
"title": ""
},
{
"docid": "neg:1840600_12",
"text": "Telecare medical information systems (TMISs) are increasingly popular technologies for healthcare applications. Using TMISs, physicians and caregivers can monitor the vital signs of patients remotely. Since the database of TMISs stores patients’ electronic medical records (EMRs), only authorized users should be granted the access to this information for the privacy concern. To keep the user anonymity, recently, Chen et al. proposed a dynamic ID-based authentication scheme for telecare medical information system. They claimed that their scheme is more secure and robust for use in a TMIS. However, we will demonstrate that their scheme fails to satisfy the user anonymity due to the dictionary attacks. It is also possible to derive a user password in case of smart card loss attacks. Additionally, an improved scheme eliminating these weaknesses is also presented.",
"title": ""
},
{
"docid": "neg:1840600_13",
"text": "Classification of environmental sounds is a fundamental procedure for a wide range of real-world applications. In this paper, we propose a novel acoustic feature extraction method for classifying the environmental sounds. The proposed method is motivated from the image processing technique, local binary pattern (LBP), and works on a spectrogram which forms two-dimensional (time-frequency) data like an image. Since the spectrogram contains noisy pixel values, for improving classification performance, it is crucial to extract the features which are robust to the fluctuations in pixel values. We effectively incorporate the local statistics, mean and standard deviation on local pixels, to establish robust LBP. In addition, we provide the technique of L2-Hellinger normalization which is efficiently applied to the proposed features so as to further enhance the discriminative power while increasing the robustness. In the experiments on environmental sound classification using RWCP dataset that contains 105 sound categories, the proposed method produces the superior performance (98.62%) compared to the other methods, exhibiting significant improvements over the standard LBP method as well as robustness to noise and low computation time.",
"title": ""
},
{
"docid": "neg:1840600_14",
"text": "Global terrestrial ecosystems absorbed carbon at a rate of 1–4 Pg yr-1 during the 1980s and 1990s, offsetting 10–60 per cent of the fossil-fuel emissions. The regional patterns and causes of terrestrial carbon sources and sinks, however, remain uncertain. With increasing scientific and political interest in regional aspects of the global carbon cycle, there is a strong impetus to better understand the carbon balance of China. This is not only because China is the world’s most populous country and the largest emitter of fossil-fuel CO2 into the atmosphere, but also because it has experienced regionally distinct land-use histories and climate trends, which together control the carbon budget of its ecosystems. Here we analyse the current terrestrial carbon balance of China and its driving mechanisms during the 1980s and 1990s using three different methods: biomass and soil carbon inventories extrapolated by satellite greenness measurements, ecosystem models and atmospheric inversions. The three methods produce similar estimates of a net carbon sink in the range of 0.19–0.26 Pg carbon (PgC) per year, which is smaller than that in the conterminous United States but comparable to that in geographic Europe. We find that northeast China is a net source of CO2 to the atmosphere owing to overharvesting and degradation of forests. By contrast, southern China accounts for more than 65 per cent of the carbon sink, which can be attributed to regional climate change, large-scale plantation programmes active since the 1980s and shrub recovery. Shrub recovery is identified as the most uncertain factor contributing to the carbon sink. Our data and model results together indicate that China’s terrestrial ecosystems absorbed 28–37 per cent of its cumulated fossil carbon emissions during the 1980s and 1990s.",
"title": ""
},
{
"docid": "neg:1840600_15",
"text": "The transition from an informative to a service oriented interactive governmental portals has become a necessity due to the time and cost saving benefits for both governments and users. User experience is a key factor in maintaining these benefits. In this study we propose an E-government Portal Assessment Method (EGPAM), which is a direct method for measuring user experience in e-government portals. We present a case study assessing the portal of the Ministry of Public Works (MOW) in Kuwait. Results showed that having a direct measurement to user experience enabled easier identification of the current level of user satisfaction and provided a guidance on ways to improve user experience and addressing identified issues.",
"title": ""
},
{
"docid": "neg:1840600_16",
"text": "Recent work has made significant progress in improving spatial resolution for pixelwise labeling with Fully Convolutional Network (FCN) framework by employing Dilated/Atrous convolution, utilizing multi-scale features and refining boundaries. In this paper, we explore the impact of global contextual information in semantic segmentation by introducing the Context Encoding Module, which captures the semantic context of scenes and selectively highlights class-dependent featuremaps. The proposed Context Encoding Module significantly improves semantic segmentation results with only marginal extra computation cost over FCN. Our approach has achieved new state-of-the-art results 51.7% mIoU on PASCAL-Context, 85.9% mIoU on PASCAL VOC 2012. Our single model achieves a final score of 0.5567 on ADE20K test set, which surpasses the winning entry of COCO-Place Challenge 2017. In addition, we also explore how the Context Encoding Module can improve the feature representation of relatively shallow networks for the image classification on CIFAR-10 dataset. Our 14 layer network has achieved an error rate of 3.45%, which is comparable with state-of-the-art approaches with over 10× more layers. The source code for the complete system are publicly available1.",
"title": ""
},
{
"docid": "neg:1840600_17",
"text": "Chaos scales graph processing from secondary storage to multiple machines in a cluster. Earlier systems that process graphs from secondary storage are restricted to a single machine, and therefore limited by the bandwidth and capacity of the storage system on a single machine. Chaos is limited only by the aggregate bandwidth and capacity of all storage devices in the entire cluster.\n Chaos builds on the streaming partitions introduced by X-Stream in order to achieve sequential access to storage, but parallelizes the execution of streaming partitions. Chaos is novel in three ways. First, Chaos partitions for sequential storage access, rather than for locality and load balance, resulting in much lower pre-processing times. Second, Chaos distributes graph data uniformly randomly across the cluster and does not attempt to achieve locality, based on the observation that in a small cluster network bandwidth far outstrips storage bandwidth. Third, Chaos uses work stealing to allow multiple machines to work on a single partition, thereby achieving load balance at runtime.\n In terms of performance scaling, on 32 machines Chaos takes on average only 1.61 times longer to process a graph 32 times larger than on a single machine. In terms of capacity scaling, Chaos is capable of handling a graph with 1 trillion edges representing 16 TB of input data, a new milestone for graph processing capacity on a small commodity cluster.",
"title": ""
},
{
"docid": "neg:1840600_18",
"text": "While it is known that academic searchers differ from typical web searchers, little is known about the search behavior of academic searchers over longer periods of time. In this study we take a look at academic searchers through a large-scale log analysis on a major academic search engine. We focus on two aspects: query reformulation patterns and topic shifts in queries. We first analyze how each of these aspects evolve over time. We identify important query reformulation patterns: revisiting and issuing new queries tend to happen more often over time. We also find that there are two distinct types of users: one type of users becomes increasingly focused on the topics they search for as time goes by, and the other becomes increasingly diversifying. After analyzing these two aspects separately, we investigate whether, and to which degree, there is a correlation between topic shifts and query reformulations. Surprisingly, users’ preferences of query reformulations correlate little with their topic shift tendency. However, certain reformulations may help predict the magnitude of the topic shift that happens in the immediate next timespan. Our results shed light on academic searchers’ information seeking behavior and may benefit search personalization.",
"title": ""
}
] |
1840601 | Automated Diagnosis of Glaucoma Using Texture and Higher Order Spectra Features | [
{
"docid": "pos:1840601_0",
"text": "This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.",
"title": ""
},
{
"docid": "pos:1840601_1",
"text": "Diabetic retinopathy (DR) is a condition where the retina is damaged due to fluid leaking from the blood vessels into the retina. In extreme cases, the patient will become blind. Therefore, early detection of diabetic retinopathy is crucial to prevent blindness. Various image processing techniques have been used to identify the different stages of diabetes retinopathy. The application of non-linear features of the higher-order spectra (HOS) was found to be efficient as it is more suitable for the detection of shapes. The aim of this work is to automatically identify the normal, mild DR, moderate DR, severe DR and prolific DR. The parameters are extracted from the raw images using the HOS techniques and fed to the support vector machine (SVM) classifier. This paper presents classification of five kinds of eye classes using SVM classifier. Our protocol uses, 300 subjects consisting of five different kinds of eye disease conditions. We demonstrate a sensitivity of 82% for the classifier with the specificity of 88%.",
"title": ""
}
] | [
{
"docid": "neg:1840601_0",
"text": "BACKGROUND\nThis study examined longitudinal patterns of heroin use, other substance use, health, mental health, employment, criminal involvement, and mortality among heroin addicts.\n\n\nMETHODS\nThe sample was composed of 581 male heroin addicts admitted to the California Civil Addict Program (CAP) during the years 1962 through 1964; CAP was a compulsory drug treatment program for heroin-dependent criminal offenders. This 33-year follow-up study updates information previously obtained from admission records and 2 face-to-face interviews conducted in 1974-1975 and 1985-1986; in 1996-1997, at the latest follow-up, 284 were dead and 242 were interviewed.\n\n\nRESULTS\nIn 1996-1997, the mean age of the 242 interviewed subjects was 57.4 years. Age, disability, years since first heroin use, and heavy alcohol use were significant correlates of mortality. Of the 242 interviewed subjects, 20.7% tested positive for heroin (with additional 9.5% urine refusal and 14.0% incarceration, for whom urinalyses were unavailable), 66.9% reported tobacco use, 22.1% were daily alcohol drinkers, and many reported illicit drug use (eg, past-year heroin use was 40.5%; marijuana, 35.5%; cocaine, 19.4%; crack, 10.3%; amphetamine, 11.6%). The group also reported high rates of health problems, mental health problems, and criminal justice system involvement. Long-term heroin abstinence was associated with less criminality, morbidity, psychological distress, and higher employment.\n\n\nCONCLUSIONS\nWhile the number of deaths increased steadily over time, heroin use patterns were remarkably stable for the group as a whole. For some, heroin addiction has been a lifelong condition associated with severe health and social consequences.",
"title": ""
},
{
"docid": "neg:1840601_1",
"text": "Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a \"variable,\" the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.",
"title": ""
},
{
"docid": "neg:1840601_2",
"text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.",
"title": ""
},
{
"docid": "neg:1840601_3",
"text": "BACKGROUND\nAcute hospital discharge delays are a pressing concern for many health care administrators. In Canada, a delayed discharge is defined by the alternate level of care (ALC) construct and has been the target of many provincial health care strategies. Little is known on the patient characteristics that influence acute ALC length of stay. This study examines which characteristics drive acute ALC length of stay for those awaiting nursing home admission.\n\n\nMETHODS\nPopulation-level administrative and assessment data were used to examine 17,111 acute hospital admissions designated as alternate level of care (ALC) from a large Canadian health region. Case level hospital records were linked to home care administrative and assessment records to identify and characterize those ALC patients that account for the greatest proportion of acute hospital ALC days.\n\n\nRESULTS\nALC patients waiting for nursing home admission accounted for 41.5% of acute hospital ALC bed days while only accounting for 8.8% of acute hospital ALC patients. Characteristics that were significantly associated with greater ALC lengths of stay were morbid obesity (27 day mean deviation, 99% CI = ±14.6), psychiatric diagnosis (13 day mean deviation, 99% CI = ±6.2), abusive behaviours (12 day mean deviation, 99% CI = ±10.7), and stroke (7 day mean deviation, 99% CI = ±5.0). Overall, persons with morbid obesity, a psychiatric diagnosis, abusive behaviours, or stroke accounted for 4.3% of all ALC patients and 23% of all acute hospital ALC days between April 1st 2009 and April 1st, 2011. ALC patients with the identified characteristics had unique clinical profiles.\n\n\nCONCLUSIONS\nA small number of patients with non-medical days waiting for nursing home admission contribute to a substantial proportion of total non-medical days in acute hospitals. Increases in nursing home capacity or changes to existing funding arrangements should target the sub-populations identified in this investigation to maximize effectiveness. Specifically, incentives should be introduced to encourage nursing homes to accept acute patients with the least prospect for community-based living, while acute patients with the greatest prospect for community-based living are discharged to transitional care or directly to community-based care.",
"title": ""
},
{
"docid": "neg:1840601_4",
"text": "Bullying is a serious public health concern that is associated with significant negative mental, social, and physical outcomes. Technological advances have increased adolescents' use of social media, and online communication platforms have exposed adolescents to another mode of bullying- cyberbullying. Prevention and intervention materials, from websites and tip sheets to classroom curriculum, have been developed to help youth, parents, and teachers address cyberbullying. While youth and parents are willing to disclose their experiences with bullying to their health care providers, these disclosures need to be taken seriously and handled in a caring manner. Health care providers need to include questions about bullying on intake forms to encourage these disclosures. The aim of this article is to examine the current status of cyberbullying prevention and intervention. Research support for several school-based intervention programs is summarised. Recommendations for future research are provided.",
"title": ""
},
{
"docid": "neg:1840601_5",
"text": "The structure of blood vessels in the sclerathe white part of the human eye, is unique for every individual, hence it is best suited for human identification. However, this is a challenging research because it has a high insult rate (the number of occasions the valid user is rejected). In this survey firstly a brief introduction is presented about the sclera based biometric authentication. In addition, a literature survey is presented. We have proposed simplified method for sclera segmentation, a new method for sclera pattern enhancement based on histogram equalization and line descriptor based feature extraction and pattern matching with the help of matching score between the two segment descriptors. We attempt to increase the awareness about this topic, as much of the research is not done in this area.",
"title": ""
},
{
"docid": "neg:1840601_6",
"text": "Nowadays, with the growth in the use of search engines, the extension of spying programs and anti -terrorism prevention, several researches focused on text analysis. In this sense, lemmatization and stemming are two common requirements of these researches. They include reducing different grammatical forms of a word and bring them to a common base form. In what follows, we will discuss these treatment methods on arabic text, especially the Khoja Stemmer, show their limits and provide new tools to improve it.",
"title": ""
},
{
"docid": "neg:1840601_7",
"text": "The problem and the solution.The majority of the literature on creativity has focused on the individual, yet the social environment can influence both the level and frequency of creative behavior. This article reviews the literature for factors related to organizational culture and climate that act as supports and impediments to organizational creativity and innovation. The work of Amabile, Kanter, Van de Ven, Angle, and others is reviewed and synthesized to provide an integrative understanding of the existing literature. Implications for human resource development research and practice are discussed.",
"title": ""
},
{
"docid": "neg:1840601_8",
"text": "How to measure usability is an important question in HCI research and user interface evaluation. We review current practice in measuring usability by categorizing and discussing usability measures from 180 studies published in core HCI journals and proceedings. The discussion distinguish several problems with the measures, including whether they actually measure usability, if they cover usability broadly, how they are reasoned about, and if they meet recommendations on how to measure usability. In many studies, the choice of and reasoning about usability measures fall short of a valid and reliable account of usability as quality-in-use of the user interface being studied. Based on the review, we discuss challenges for studies of usability and for research into how to measure usability. The challenges are to distinguish and empirically compare subjective and objective measures of usability; to focus on developing and employing measures of learning and retention; to study long-term use and usability; to extend measures of satisfaction beyond post-use questionnaires; to validate and standardize the host of subjective satisfaction questionnaires used; to study correlations between usability measures as a means for validation; and to use both micro and macro tasks and corresponding measures of usability. In conclusion, we argue that increased attention to the problems identified and challenges discussed may strengthen studies of usability and usability research. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840601_9",
"text": "n The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the e ± m ± parameter. A phenomenological analysis of the permeability m and permittivity e in dispersive media serves to introduce the corresponding magneticand electric-dipole mechanisms of interaction length with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faradayrotation isolator circulator at 35 GHz (l ~ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (l = 1.55 mm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, e rather than m provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.",
"title": ""
},
{
"docid": "neg:1840601_10",
"text": "In this article, we provide a basic introduction to CMOS image-sensor technology, design and performance limits and present recent developments and future directions in this area. We also discuss image-sensor operation and describe the most popular CMOS image-sensor architectures. We note the main non-idealities that limit CMOS image sensor performance, and specify several key performance measures. One of the most important advantages of CMOS image sensors over CCDs is the ability to integrate sensing with analog and digital processing down to the pixel level. Finally, we focus on recent developments and future research directions that are enabled by pixel-level processing, the applications of which promise to further improve CMOS image sensor performance and broaden their applicability beyond current markets.",
"title": ""
},
{
"docid": "neg:1840601_11",
"text": "We present a new class of methods for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. SpAM is essentially a functional version of the grouped lasso of Yuan and Lin (2006). SpAM is also closely related to the COSSO model of Lin and Zhang (2006), but decouples smoothing and sparsity, enabling the use of arbitrary nonparametric smoothers. We give an analysis of the theoretical properties of sparse additive models, and present empirical results on synthetic and real data, showing that SpAM can be effective in fitting sparse nonparametric models in high dimensional data.",
"title": ""
},
{
"docid": "neg:1840601_12",
"text": "In this paper a criminal detection framework that could help policemen to recognize the face of a criminal or a suspect is proposed. The framework is a client-server video based face recognition surveillance in the real-time. The framework applies face detection and tracking using Android mobile devices at the client side and video based face recognition at the server side. This paper focuses on the development of the client side of the proposed framework, face detection and tracking using Android mobile devices. For the face detection stage, robust Viola-Jones algorithm that is not affected by illuminations is used. The face tracking stage is based on Optical Flow algorithm. Optical Flow is implemented in the proposed framework with two feature extraction methods, Fast Corner Features, and Regular Features. The proposed face detection and tracking is implemented using Android studio and OpenCV library, and tested using Sony Xperia Z2 Android 5.1 Lollipop Smartphone. Experiments show that face tracking using Optical Flow with Regular Features achieves a higher level of accuracy and efficiency than Optical Flow with Fast Corner Features.",
"title": ""
},
{
"docid": "neg:1840601_13",
"text": "Accelerators are special purpose processors designed to speed up compute-intensive sections of applications. Two extreme endpoints in the spectrum of possible accelerators are FPGAs and GPUs, which can often achieve better performance than CPUs on certain workloads. FPGAs are highly customizable, while GPUs provide massive parallel execution resources and high memory bandwidth. Applications typically exhibit vastly different performance characteristics depending on the accelerator. This is an inherent problem attributable to architectural design, middleware support and programming style of the target platform. For the best application-to-accelerator mapping, factors such as programmability, performance, programming cost and sources of overhead in the design flows must be all taken into consideration. In general, FPGAs provide the best expectation of performance, flexibility and low overhead, while GPUs tend to be easier to program and require less hardware resources. We present a performance study of three diverse applications - Gaussian elimination, data encryption standard (DES), and Needleman-Wunsch - on an FPGA, a GPU and a multicore CPU system. We perform a comparative study of application behavior on accelerators considering performance and code complexity. Based on our results, we present an application characteristic to accelerator platform mapping, which can aid developers in selecting an appropriate target architecture for their chosen application.",
"title": ""
},
{
"docid": "neg:1840601_14",
"text": "Raine Mäntysalo The purpose of this article is to make an overview of postWWII urban planning theories from the point of view of participation. How have the ideas of public accountability, deliberative democracy and involvement of special interests developed from one theory to another? The urban planning theories examined are rational-comprehensive planning theory, advocacy planning theory, incrementalist planning theory and the two branches of communicative planning theory: planning as consensus-seeking and planning as management of conflicts.",
"title": ""
},
{
"docid": "neg:1840601_15",
"text": "Preventable behaviors contribute to many life threatening health problems. Behavior-change technologies have been deployed to modify these, but such systems typically draw on traditional behavioral theories that overlook affect. We examine the importance of emotion tracking for behavior change. First, we conducted interviews to explore how emotions influence unwanted behaviors. Next, we deployed a system intervention, in which 35 participants logged information for a self-selected, unwanted behavior (e.g., smoking or overeating) over 21 days. 16 participants engaged in standard behavior tracking using a Fact-Focused system to record objective information about goals. 19 participants used an Emotion-Focused system to record emotional consequences of behaviors. Emotion-Focused logging promoted more successful behavior change and analysis of logfiles revealed mechanisms for success: greater engagement of negative affect for unsuccessful days and increased insight were key to motivating change. We present design implications to improve behavior-change technologies with emotion tracking.",
"title": ""
},
{
"docid": "neg:1840601_16",
"text": "BACKGROUND\nAtrophic scars can complicate moderate and severe acne. There are, at present, several modalities of treatment with different results. Percutaneous collagen induction (PCI) has recently been proposed as a simple and effective therapeutic option for the management of atrophic scars.\n\n\nOBJECTIVE\nThe aim of our study was to analyze the efficacy and safety of percutaneous collagen induction for the treatment of acne scarring in different skin phototypes.\n\n\nMETHODS & MATERIALS\nA total of 60 patients of skin types phototype I to VI were included in the study. They were divided into three groups before beginning treatment: Group A (phototypes I to II), Group B (phototypes III to V), and Group C (phototypes VI). Each patient had three treatments at monthly intervals. The aesthetic improvement was evaluated by using a Global Aesthetic Improvement Scale (GAIS), and analyzed statistically by computerized image analysis of the patients' photographs. The differences in the GAIS scores in the different time-points of each group were found using the Wilcoxon's test for nonparametric-dependent continuous variables. Computerized image analysis of silicone replicas was used to quantify the irregularity of the surface micro-relief with Fast Fourier Transformation (FFT); average values of gray were obtained along the x- and y-axes. The calculated indexes were the integrals of areas arising from the distribution of pixels along the axes.\n\n\nRESULTS\nAll patients completed the study. The Wilcoxon's test for nonparametric-dependent continuous variables showed a statistically significant (p < 0.05) reduction in severity grade of acne scars at T5 compared to baseline (T1). The analysis of the surface micro-relief performed on skin replicas showed a decrease in the degree of irregularity of skin texture in all three groups of patients, with an average reduction of 31% in both axes after three sessions. No short- or long-term dyschromia was observed.\n\n\nCONCLUSION\nPCI offers a simple and safe modality to improve the appearance of acne scars without risk of dyspigmentation in patient of all skin types.",
"title": ""
},
{
"docid": "neg:1840601_17",
"text": "Individuals with Binge Eating Disorder (BED) often evidence comorbid Substance Use Disorders (SUD), resulting in poor outcome. This study is the first to examine treatment outcome for this concurrent disordered population. In this pilot study, 38 individuals diagnosed with BED and SUD participated in a 16-week group Mindfulness-Action Based Cognitive Behavioral Therapy (MACBT). Participants significantly improved on measures of objective binge eating episodes; disordered eating attitudes; alcohol and drug addiction severity; and depression. Taken together, MACBT appears to hold promise in treating individuals with co-existing BED-SUD.",
"title": ""
},
{
"docid": "neg:1840601_18",
"text": "Geometrical validation around the Calpha is described, with a new Cbeta measure and updated Ramachandran plot. Deviation of the observed Cbeta atom from ideal position provides a single measure encapsulating the major structure-validation information contained in bond angle distortions. Cbeta deviation is sensitive to incompatibilities between sidechain and backbone caused by misfit conformations or inappropriate refinement restraints. A new phi,psi plot using density-dependent smoothing for 81,234 non-Gly, non-Pro, and non-prePro residues with B < 30 from 500 high-resolution proteins shows sharp boundaries at critical edges and clear delineation between large empty areas and regions that are allowed but disfavored. One such region is the gamma-turn conformation near +75 degrees,-60 degrees, counted as forbidden by common structure-validation programs; however, it occurs in well-ordered parts of good structures, it is overrepresented near functional sites, and strain is partly compensated by the gamma-turn H-bond. Favored and allowed phi,psi regions are also defined for Pro, pre-Pro, and Gly (important because Gly phi,psi angles are more permissive but less accurately determined). Details of these accurate empirical distributions are poorly predicted by previous theoretical calculations, including a region left of alpha-helix, which rates as favorable in energy yet rarely occurs. A proposed factor explaining this discrepancy is that crowding of the two-peptide NHs permits donating only a single H-bond. New calculations by Hu et al. [Proteins 2002 (this issue)] for Ala and Gly dipeptides, using mixed quantum mechanics and molecular mechanics, fit our nonrepetitive data in excellent detail. To run our geometrical evaluations on a user-uploaded file, see MOLPROBITY (http://kinemage.biochem.duke.edu) or RAMPAGE (http://www-cryst.bioc.cam.ac.uk/rampage).",
"title": ""
},
{
"docid": "neg:1840601_19",
"text": "The authors examined relations between the Big Five personality traits and academic outcomes, specifically SAT scores and grade-point average (GPA). Openness was the strongest predictor of SAT verbal scores, and Conscientiousness was the strongest predictor of both high school and college GPA. These relations replicated across 4 independent samples and across 4 different personality inventories. Further analyses showed that Conscientiousness predicted college GPA, even after controlling for high school GPA and SAT scores, and that the relation between Conscientiousness and college GPA was mediated, both concurrently and longitudinally, by increased academic effort and higher levels of perceived academic ability. The relation between Openness and SAT verbal scores was independent of academic achievement and was mediated, both concurrently and longitudinally, by perceived verbal intelligence. Together, these findings show that personality traits have independent and incremental effects on academic outcomes, even after controlling for traditional predictors of those outcomes. ((c) 2007 APA, all rights reserved).",
"title": ""
}
] |
1840602 | End-to-End People Detection in Crowded Scenes | [
{
"docid": "pos:1840602_0",
"text": "Current high-quality object detection approaches use the same scheme: salience-based object proposal methods followed by post-classification using deep convolutional features. This spurred recent research in improving object proposal methods [18, 32, 15, 11, 2]. However, domain agnostic proposal generation has the principal drawback that the proposals come unranked or with very weak ranking, making it hard to trade-off quality for running time. Also, it raises the more fundamental question of whether high-quality proposal generation requires careful engineering or can be derived just from data alone. We demonstrate that learning-based proposal methods can effectively match the performance of hand-engineered methods while allowing for very efficient runtime-quality trade-offs. Using our new multi-scale convolutional MultiBox (MSC-MultiBox) approach, we substantially advance the state-of-the-art on the ILSVRC 2014 detection challenge data set, with 0.5 mAP for a single model and 0.52 mAP for an ensemble of two models. MSC-Multibox significantly improves the proposal quality over its predecessor Multibox [4] method: AP increases from 0.42 to 0.53 for the ILSVRC detection challenge. Finally, we demonstrate improved bounding-box recall compared to Multiscale Combinatorial Grouping [18] with less proposals on the Microsoft-COCO [14] data set.",
"title": ""
},
{
"docid": "pos:1840602_1",
"text": "We propose a new supervised learning framework for visual object counting tasks, such as estimating the number of cells in a microscopic image or the number of humans in surveillance video frames. We focus on the practically-attractive case when the training images are annotated with dots (one dot per object). Our goal is to accurately estimate the count. However, we evade the hard task of learning to detect and localize individual object instances. Instead, we cast the problem as that of estimating an image density whose integral over any image region gives the count of objects within that region. Learning to infer such density can be formulated as a minimization of a regularized risk quadratic cost function. We introduce a new loss function, which is well-suited for such learning, and at the same time can be computed efficiently via a maximum subarray algorithm. The learning can then be posed as a convex quadratic program solvable with cutting-plane optimization. The proposed framework is very flexible as it can accept any domain-specific visual features. Once trained, our system provides accurate object counts and requires a very small time overhead over the feature extraction step, making it a good candidate for applications involving real-time processing or dealing with huge amount of visual data.",
"title": ""
},
{
"docid": "pos:1840602_2",
"text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"title": ""
}
] | [
{
"docid": "neg:1840602_0",
"text": "In this paper, an impedance control scheme for aerial robotic manipulators is proposed, with the aim of reducing the end-effector interaction forces with the environment. The proposed control has a multi-level architecture, in detail the outer loop is composed by a trajectory generator and an impedance filter that modifies the trajectory to achieve a complaint behaviour in the end-effector space; a middle loop is used to generate the joint space variables through an inverse kinematic algorithm; finally the inner loop is aimed at ensuring the motion tracking. The proposed control architecture has been experimentally tested.",
"title": ""
},
{
"docid": "neg:1840602_1",
"text": "Routing protocols for Wireless Sensor Networks (WSN) are designed to select parent nodes so that data packets can reach their destination in a timely and efficient manner. Typically neighboring nodes with strongest connectivity are more selected as parents. This Greedy Routing approach can lead to unbalanced routing loads in the network. Consequently, the network experiences the early death of overloaded nodes causing permanent network partition. Herein, we propose a framework for load balancing of routing in WSN. In-network path tagging is used to monitor network traffic load of nodes. Based on this, nodes are identified as being relatively overloaded, balanced or underloaded. A mitigation algorithm finds suitable new parents for switching from overloaded nodes. The routing engine of the child of the overloaded node is then instructed to switch parent. A key future of the proposed framework is that it is primarily implemented at the Sink and so requires few changes to existing routing protocols. The framework was implemented in TinyOS on TelosB motes and its performance was assessed in a testbed network and in TOSSIM simulation. The algorithm increased the lifetime of the network by 41 % as recorded in the testbed experiment. The Packet Delivery Ratio was also improved from 85.97 to 99.47 %. Finally a comparative study was performed using the proposed framework with various existing routing protocols.",
"title": ""
},
{
"docid": "neg:1840602_2",
"text": "We present the prenatal ultrasound findings of massive macroglossia in a fetus with prenatally diagnosed Beckwith-Wiedemann syndrome. Three-dimensional surface mode ultrasound was utilized for enhanced visualization of the macroglossia.",
"title": ""
},
{
"docid": "neg:1840602_3",
"text": "Ensemble learning has been proved to improve the generalization ability effectively in both theory and practice. In this paper, we briefly outline the current status of research on it first. Then, a new deep neural network-based ensemble method that integrates filtering views, local views, distorted views, explicit training, implicit training, subview prediction, and Simple Average is proposed for biomedical time series classification. Finally, we validate its effectiveness on the Chinese Cardiovascular Disease Database containing a large number of electrocardiogram recordings. The experimental results show that the proposed method has certain advantages compared to some well-known ensemble methods, such as Bagging and AdaBoost.",
"title": ""
},
{
"docid": "neg:1840602_4",
"text": "Monaural speech separation is a fundamental problem in robust speech processing. Recently, deep neural network (DNN)-based speech separation methods, which predict either clean speech or an ideal time-frequency mask, have demonstrated remarkable performance improvement. However, a single DNN with a given window length does not leverage contextual information sufficiently, and the differences between the two optimization objectives are not well understood. In this paper, we propose a deep ensemble method, named multicontext networks, to address monaural speech separation. The first multicontext network averages the outputs of multiple DNNs whose inputs employ different window lengths. The second multicontext network is a stack of multiple DNNs. Each DNN in a module of the stack takes the concatenation of original acoustic features and expansion of the soft output of the lower module as its input, and predicts the ratio mask of the target speaker; the DNNs in the same module employ different contexts. We have conducted extensive experiments with three speech corpora. The results demonstrate the effectiveness of the proposed method. We have also compared the two optimization objectives systematically and found that predicting the ideal time-frequency mask is more efficient in utilizing clean training speech, while predicting clean speech is less sensitive to SNR variations.",
"title": ""
},
{
"docid": "neg:1840602_5",
"text": "Cloud computing opens a new era in IT as it can provide various elastic and scalable IT services in a pay-as-you-go fashion, where its users can reduce the huge capital investments in their own IT infrastructure. In this philosophy, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using cloud. Existing research work already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, such schemes in existence suffer from several common drawbacks. First, a necessary authorization/authentication process is missing between the auditor and cloud service provider, i.e., anyone can challenge the cloud service provider for a proof of integrity of certain file, which potentially puts the quality of the so-called `auditing-as-a-service' at risk; Second, although some of the recent work based on BLS signature can already support fully dynamic data updates over fixed-size data blocks, they only support updates with fixed-sized blocks as basic unit, which we call coarse-grained updates. As a result, every small update will cause re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis for possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates. Theoretical analysis and experimental results demonstrate that our scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.",
"title": ""
},
{
"docid": "neg:1840602_6",
"text": "Criteria for the diagnosis of vascular dementia (VaD) that are reliable, valid, and readily applicable in a variety of settings are urgently needed for both clinical and research purposes. To address this need, the Neuroepidemiology Branch of the National Institute of Neurological Disorders and Stroke (NINDS) convened an International Workshop with support from the Association Internationale pour la Recherche et l'Enseignement en Neurosciences (AIREN), resulting in research criteria for the diagnosis of VaD. Compared with other current criteria, these guidelines emphasize (1) the heterogeneity of vascular dementia syndromes and pathologic subtypes including ischemic and hemorrhagic strokes, cerebral hypoxic-ischemic events, and senile leukoencephalopathic lesions; (2) the variability in clinical course, which may be static, remitting, or progressive; (3) specific clinical findings early in the course (eg, gait disorder, incontinence, or mood and personality changes) that support a vascular rather than a degenerative cause; (4) the need to establish a temporal relationship between stroke and dementia onset for a secure diagnosis; (5) the importance of brain imaging to support clinical findings; (6) the value of neuropsychological testing to document impairments in multiple cognitive domains; and (7) a protocol for neuropathologic evaluations and correlative studies of clinical, radiologic, and neuropsychological features. These criteria are intended as a guide for case definition in neuroepidemiologic studies, stratified by levels of certainty (definite, probable, and possible). They await testing and validation and will be revised as more information becomes available.",
"title": ""
},
{
"docid": "neg:1840602_7",
"text": "This paper presents a transformerless single-phase inverter topology based on a modified H-bridge-based multilevel converter. The topology comprises two legs, namely, a usual two-level leg and a T-type leg. The latter is based on a usual two-level leg, which has been modified to gain access to the midpoint of the split dc-link by means of a bidirectional switch. The topology is referred as an asymmetrical T-type five-level (5L-T-AHB) inverter. An ad hoc modulation strategy based on sinusoidal pulsewidth modulation is also presented to control the 5L-T-AHB inverter, where the two-level leg is commuted at fundamental frequency. Numerical and experimental results show that the proposed 5L-T-AHB inverter achieves high efficiency, exhibits reduced leakage currents, and complies with the transformerless norms and regulations, which makes it suitable for the transformerless PV inverters market.11This updated version includes experimental evidence, considerations for practical implementation, efficiency studies, visualization of semiconductor losses distribution, a deeper and corrected common mode analysis, and an improved notation among other modifications.",
"title": ""
},
{
"docid": "neg:1840602_8",
"text": "Blood flow measurement using Doppler ultrasound has become a useful tool for diagnosing cardiovascular diseases and as a physiological monitor. Recently, pocket-sized ultrasound scanners have been introduced for portable diagnosis. The present paper reports the implementation of a portable ultrasound pulsed-wave (PW) Doppler flowmeter using a smartphone. A 10-MHz ultrasonic surface transducer was designed for the dynamic monitoring of blood flow velocity. The directional baseband Doppler shift signals were obtained using a portable analog circuit system. After hardware processing, the Doppler signals were fed directly to a smartphone for Doppler spectrogram analysis and display in real time. To the best of our knowledge, this is the first report of the use of this system for medical ultrasound Doppler signal processing. A Couette flow phantom, consisting of two parallel disks with a 2-mm gap, was used to evaluate and calibrate the device. Doppler spectrograms of porcine blood flow were measured using this stand-alone portable device under the pulsatile condition. Subsequently, in vivo portable system verification was performed by measuring the arterial blood flow of a rat and comparing the results with the measurement from a commercial ultrasound duplex scanner. All of the results demonstrated the potential for using a smartphone as a novel embedded system for portable medical ultrasound applications.",
"title": ""
},
{
"docid": "neg:1840602_9",
"text": "Pressure ulcers are a common problem among older adults in all health care settings. Prevalence and incidence estimates vary by setting, ulcer stage, and length of follow-up. Risk factors associated with increased pressure ulcer incidence have been identified. Activity or mobility limitation, incontinence, abnormalities in nutritional status, and altered consciousness are the most consistently reported risk factors for pressure ulcers. Pain, infectious complications, prolonged and expensive hospitalizations, persistent open ulcers, and increased risk of death are all associated with the development of pressure ulcers. The tremendous variability in pressure ulcer prevalence and incidence in health care settings suggests that opportunities exist to improve outcomes for persons at risk for and with pressure ulcers.",
"title": ""
},
{
"docid": "neg:1840602_10",
"text": "This paper presents the design of a hardware-efficient, low-power image processing system for next-generation wireless endoscopy. The presented system is composed of a custom CMOS image sensor, a dedicated image compressor, a forward error correction (FEC) encoder protecting radio transmitted data against random and burst errors, a radio data transmitter, and a controller supervising all operations of the system. The most significant part of the system is the image compressor. It is based on an integer version of a discrete cosine transform and a novel, low complexity yet efficient, entropy encoder making use of an adaptive Golomb-Rice algorithm instead of Huffman tables. The novel hardware-efficient architecture designed for the presented system enables on-the-fly compression of the acquired image. Instant compression, together with elimination of the necessity of retransmitting erroneously received data by their prior FEC encoding, significantly reduces the size of the required memory in comparison to previous systems. The presented system was prototyped in a single, low-power, 65-nm field programmable gate arrays (FPGA) chip. Its power consumption is low and comparable to other application-specific-integrated-circuits-based systems, despite FPGA-based implementation.",
"title": ""
},
{
"docid": "neg:1840602_11",
"text": "Machine reading aims to automatically extract knowledge from text. It is a long-standing goal of AI and holds the promise of revolutionizing Web search and other fields. In this paper, we analyze the core challenges of machine reading and show that statistical relational AI is particularly well suited to address these challenges. We then propose a unifying approach to machine reading in which statistical relational AI plays a central role. Finally, we demonstrate the promise of this approach by presenting OntoUSP, an end-toend machine reading system that builds on recent advances in statistical relational AI and greatly outperforms state-of-theart systems in a task of extracting knowledge from biomedical abstracts and answering questions.",
"title": ""
},
{
"docid": "neg:1840602_12",
"text": "This paper presents a video-based motion modeling technique for capturing physically realistic human motion from monocular video sequences. We formulate the video-based motion modeling process in an image-based keyframe animation framework. The system first computes camera parameters, human skeletal size, and a small number of 3D key poses from video and then uses 2D image measurements at intermediate frames to automatically calculate the \"in between\" poses. During reconstruction, we leverage Newtonian physics, contact constraints, and 2D image measurements to simultaneously reconstruct full-body poses, joint torques, and contact forces. We have demonstrated the power and effectiveness of our system by generating a wide variety of physically realistic human actions from uncalibrated monocular video sequences such as sports video footage.",
"title": ""
},
{
"docid": "neg:1840602_13",
"text": "We present the concept of logarithmic computation for neural networks. We explore how logarithmic encoding of non-uniformly distributed weights and activations is preferred over linear encoding at resolutions of 4 bits and less. Logarithmic encoding enables networks to 1) achieve higher classification accuracies than fixed-point at low resolutions and 2) eliminate bulky digital multipliers. We demonstrate our ideas in the hardware realization, LogNet, an inference engine using only bitshift-add convolutions and weights distributed across the computing fabric. The opportunities from hardware work in synergy with those from the algorithm domain.",
"title": ""
},
{
"docid": "neg:1840602_14",
"text": "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely connected graphs, and can handle different constructions of Laplacian operators. Extensive experimental results show the superior performance of our approach, in comparison to other spectral domain convolutional architectures, on spectral image classification, community detection, vertex classification, and matrix completion tasks.",
"title": ""
},
{
"docid": "neg:1840602_15",
"text": "Deeply embedded domain-specific languages (EDSLs) intrinsically compromise programmer experience for improved program performance. Shallow EDSLs complement them by trading program performance for good programmer experience. We present Yin-Yang, a framework for DSL embedding that uses Scala macros to reliably translate shallow EDSL programs to the corresponding deep EDSL programs. The translation allows program prototyping and development in the user friendly shallow embedding, while the corresponding deep embedding is used where performance is important. The reliability of the translation completely conceals the deep em- bedding from the user. For the DSL author, Yin-Yang automatically generates the deep DSL embeddings from their shallow counterparts by reusing the core translation. This obviates the need for code duplication and leads to reliability by construction.",
"title": ""
},
{
"docid": "neg:1840602_16",
"text": "Intrabody communications (IBC) is a novel communication technique which uses the human body itself as the signal propagation medium. This communication method is categorized as a physical layer of IEEE 802.15.6 or Wireless Body Area Network (WBAN) standard. It is significant to investigate the IBC systems to improve the transceiver design characteristics such as data rate and power consumption. In this paper, we propose a new IBC transmitter implementing pulse position modulation (PPM) scheme based on impulse radio. A FPGA is employed to implement the architecture of a carrier-free PPM transmission. Results demonstrate the data rate of 1.56 Mb/s which is suitable for the galvanic coupling IBC method. The PPM transmitter power consumption is 2.0 mW with 3.3 V supply voltage. Having energy efficiency as low as 1.28 nJ/bit provides an enhanced solution for portable biomedical applications based on body area networks.",
"title": ""
},
{
"docid": "neg:1840602_17",
"text": "Three diierent algorithms for obstacle detection are presented in this paper each based on diierent assumptions. The rst two algorithms are qualitative in that they return only yes/no answers regarding the presence of obstacles in the eld of view; no 3D reconstruction is performed. They have the advantage of fast determination of the existence of obstacles in a scene based on the solvability of a linear system. The rst algorithm uses information about the ground plane, while the second only assumes that the ground is planar. The third algorithm is quantitative in that it continuously estimates the ground plane and reconstructs partial 3D structures by determining the height above the ground plane of each point in the scene. Experimental results are presented for real and simulated data, and the performance of the three algorithms under diierent noise levels is compared in simulation. We conclude that in terms of the robustness of performance, the third algorithm is superior to the other two.",
"title": ""
},
{
"docid": "neg:1840602_18",
"text": "This paper 1 presents an algorithm for automatically detecting bone contours from hand radiographs using active contours. Prior knowledge is first used to locate initial contours for the snakes inside each bone of interest. Next, an adaptive snake algorithm is applied so that parameters are properly adjusted for each bone specifically. We introduce a novel truncation technique to prevent the external forces of the snake from pulling the contour outside the bones boundaries, yielding excelent results.",
"title": ""
},
{
"docid": "neg:1840602_19",
"text": "Finding one's way in a large-scale environment may engage different cognitive processes than following a familiar route. The neural bases of these processes were investigated using functional MRI (fMRI). Subjects found their way in one virtual-reality town and followed a well-learned route in another. In a control condition, subjects followed a visible trail. Within subjects, accurate wayfinding activated the right posterior hippocampus. Between-subjects correlations with performance showed that good navigators (i.e., accurate wayfinders) activated the anterior hippocampus during wayfinding and head of caudate during route following. These results coincide with neurophysiological evidence for distinct response (caudate) and place (hippocampal) representations supporting navigation. We argue that the type of representation used influences both performance and concomitant fMRI activation patterns.",
"title": ""
}
] |
1840603 | Why do narcissists take more risks ? Testing the roles of perceived risks and benefits of risky behaviors | [
{
"docid": "pos:1840603_0",
"text": "Different psychotherapeutic theories provide contradictory accounts of adult narcissism as the product of either parental coldness or excessive parental admiration during childhood. Yet, none of these theories has been tested systematically in a nonclinical sample. The authors compared four structural equation models predicting overt and covert narcissism among 120 United Kingdom adults. Both forms of narcissism were predicted by both recollections of parental coldness and recollections of excessive parental admiration. Moreover, a suppression relationship was detected between these predictors: The effects of each were stronger when modeled together than separately. These effects were found after controlling for working models of attachment; covert narcissism was predicted also by attachment anxiety. This combination of childhood experiences may help to explain the paradoxical combination of grandiosity and fragility in adult narcissism.",
"title": ""
}
] | [
{
"docid": "neg:1840603_0",
"text": "Academic study of cloud computing is an emerging research field in Saudi Arabia. Saudi Arabia represents the largest economy in the Arab Gulf region, which makes it a potential market of cloud computing technologies. This cross-sectional exploratory empirical research is based on technology–organization–environment (TOE) framework, targeting higher education institutions. In this study, the factors that affect the cloud adoption by higher education institutions were identified and tested using SmartPLS software, a powerful statistical analysis tool for structural equation modeling. Three factors were found significant in this context. Relative advantage, complexity and data concern were the most significant factors. The model explained 47.9 % of the total adoption variance. The findings offer education institutions and cloud computing service providers with better understanding of factors affecting the adoption of cloud computing.",
"title": ""
},
{
"docid": "neg:1840603_1",
"text": "Background: While several benefits are attributed to the Internet and video games, an important proportion of the population presents symptoms related to possible new technological addictions and there has been little discussion of treatment of problematic technology use. Although demand for knowledge is growing, only a small number of treatments have been described. Objective: To conduct a systematic review of the literature, to establish Cognitive Behavioral Therapy (CBT) as a possible strategy for treating Internet and video game addictions. Method: The review was conducted in the following databases: Science Direct on Line, PubMed, PsycINFO, Cochrane Clinical Trials Library, BVS and SciELO. The keywords used were: Cognitive Behavioral Therapy; therapy; treatment; with association to the terms Internet addiction and video game addiction. Given the scarcity of studies in the field, no restrictions to the minimum period of publication were made, so that articles found until October 2013 were accounted. Results: Out of 72 articles found, 23 described CBT as a psychotherapy for Internet and video game addiction. The manuscripts showed the existence of case studies and protocols with satisfactory efficacy. Discussion: Despite the novelty of technological dependencies, CBT seems to be applicable and allows an effective treatment for this population. Lemos IL, et al. / Rev Psiq Clín. 2014;41(3):82-8",
"title": ""
},
{
"docid": "neg:1840603_2",
"text": "Cross-modal retrieval has become a highlighted research topic for retrieval across multimedia data such as image and text. A two-stage learning framework is widely adopted by most existing methods based on deep neural network (DNN): The first learning stage is to generate separate representation for each modality and the second learning stage is to get the cross-modal common representation. However the existing methods have three limitations: 1) In the first learning stage they only model intramodality correlation but ignore intermodality correlation with rich complementary context. 2) In the second learning stage they only adopt shallow networks with single-loss regularization but ignore the intrinsic relevance of intramodality and intermodality correlation. 3) Only original instances are considered while the complementary fine-grained clues provided by their patches are ignored. For addressing the above problems this paper proposes a cross-modal correlation learning (CCL) approach with multigrained fusion by hierarchical network and the contributions are as follows: 1) In the first learning stage CCL exploits multilevel association with joint optimization to preserve the complementary context from intramodality and intermodality correlation simultaneously. 2) In the second learning stage a multitask learning strategy is designed to adaptively balance the intramodality semantic category constraints and intermodality pairwise similarity constraints. 3) CCL adopts multigrained modeling which fuses the coarse-grained instances and fine-grained patches to make cross-modal correlation more precise. Comparing with 13 state-of-the-art methods on 6 widely-used cross-modal datasets the experimental results show our CCL approach achieves the best performance.",
"title": ""
},
{
"docid": "neg:1840603_3",
"text": "In this letter, we propose a novel method for classifying ambulatory activities using eight plantar pressure sensors within smart shoes. Using these sensors, pressure data of participants can be collected regarding level walking, stair descent, and stair ascent. Analyzing patterns of the ambulatory activities, we present new features with which to describe the ambulatory activities. After selecting critical features, a multi-class support vector machine algorithm is applied to classify these activities. Applying the proposed method to the experimental database, we obtain recognition rates up to 95.2% after six steps.",
"title": ""
},
{
"docid": "neg:1840603_4",
"text": "The aim of this study is to find a minimal size of text samples for authorship attribution that would provide stable results independent of random noise. A few controlled tests for different sample lengths, languages and genres are discussed and compared. Although I focus on Delta methodology, the results are valid for many other multidimensional methods relying on word frequencies and \"nearest neighbor\" classifications.",
"title": ""
},
{
"docid": "neg:1840603_5",
"text": "A large number of post-transcriptional modifications of transfer RNAs (tRNAs) have been described in prokaryotes and eukaryotes. They are known to influence their stability, turnover, and chemical/physical properties. A specific subset of tRNAs contains a thiolated uridine residue at the wobble position to improve the codon-anticodon interaction and translational accuracy. The proteins involved in tRNA thiolation are reminiscent of prokaryotic sulfur transfer reactions and of the ubiquitylation process in eukaryotes. In plants, some of the proteins involved in this process have been identified and show a high degree of homology to their non-plant equivalents. For other proteins, the identification of the plant homologs is much less clear, due to the low conservation in protein sequence. This manuscript describes the identification of CTU2, the second CYTOPLASMIC THIOURIDYLASE protein of Arabidopsis thaliana. CTU2 is essential for tRNA thiolation and interacts with ROL5, the previously identified CTU1 homolog of Arabidopsis. CTU2 is ubiquitously expressed, yet its activity seems to be particularly important in root tissue. A ctu2 knock-out mutant shows an alteration in root development. The analysis of CTU2 adds a new component to the so far characterized protein network involved in tRNA thiolation in Arabidopsis. CTU2 is essential for tRNA thiolation as a ctu2 mutant fails to perform this tRNA modification. The identified Arabidopsis CTU2 is the first CTU2-type protein from plants to be experimentally verified, which is important considering the limited conservation of these proteins between plant and non-plant species. Based on the Arabidopsis protein sequence, CTU2-type proteins of other plant species can now be readily identified.",
"title": ""
},
{
"docid": "neg:1840603_6",
"text": "Modelling the similarity of sentence pairs is an important problem in natural language processing and information retrieval, with applications in tasks such as paraphrase identification and answer selection in question answering. The Multi-Perspective Convolutional Neural Network (MP-CNN) is a model that improved previous state-of-the-art models in 2015 and has remained a popular model for sentence similarity tasks. However, until now, there has not been a rigorous study of how the model actually achieves competitive accuracy. In this thesis, we report on a series of detailed experiments that break down the contribution of each component of MP-CNN towards its statistical accuracy and how they affect model robustness. We find that two key components of MP-CNN are non-essential to achieve competitive accuracy and they make the model less robust to changes in hyperparameters. Furthermore, we suggest simple changes to the architecture and experimentally show that we improve the accuracy of MP-CNN when we remove these two major components of MP-CNN and incorporate these small changes, pushing its scores closer to more recent works on competitive semantic textual similarity and answer selection datasets, while using eight times fewer parameters.",
"title": ""
},
{
"docid": "neg:1840603_7",
"text": "[Context and motivation] For the past several years, Cyber Physical Systems (CPS) have emerged as a new system type like embedded systems or information systems. CPS are highly context-dependent, observe the world through sensors, act upon it through actuators, and communicate with one another through powerful networks. It has been widely argued that these properties pose new challenges for the development process. [Question/problem] Yet, how these CPS properties impact the development process has thus far been subject to conjecture. An investigation of a development process from a cyber physical perspective has thus far not been undertaken. [Principal ideas/results] In this paper, we conduct initial steps into such an investigation. We present a case study involving the example of a software simulator of an airborne traffic collision avoidance system. [Contribution] The goal of the case study is to investigate which of the challenges from the literature impact the development process of CPS the most.",
"title": ""
},
{
"docid": "neg:1840603_8",
"text": "This paper considers optimal synthesis of a special type of four-bar linkages. Combination of this optimal four-bar linkage with on of it’s cognates and elimination of two redundant cognates will result in a Watt’s six-bar mechanism, which generates straight and parallel motion. This mechanism can be utilized for legged machines. The advantage of this mechanism is that the leg remains straight during it’s contact period and because of it’s parallel motion, the legs can be as wide as desired to increase contact area and decrease the number of legs required to keep body’s stability statically and dynamically. “Genetic algorithm” optimization method is used to find optimal lengths. It is especially useful for problems like the coupler curve equation which are completely nonlinear or extremely difficult to solve.",
"title": ""
},
{
"docid": "neg:1840603_9",
"text": "The task of automatically tracking the visual attention in dynamic visual scenes is highly challenging. To approach it, we propose a Bayesian online learning algorithm. As the visual scene changes and new objects appear, based on a mixture model, the algorithm can identify and tell visual saccades (transitions) from visual fixation clusters (regions of interest). The approach is evaluated on real-world data, collected from eye-tracking experiments in driving sessions.",
"title": ""
},
{
"docid": "neg:1840603_10",
"text": "Driven by the demands on healthcare resulting from the shift toward more sedentary lifestyles, considerable effort has been devoted to the monitoring and classification of human activity. In previous studies, various classification schemes and feature extraction methods have been used to identify different activities from a range of different datasets. In this paper, we present a comparison of 14 methods to extract classification features from accelerometer signals. These are based on the wavelet transform and other well-known time- and frequency-domain signal characteristics. To allow an objective comparison between the different features, we used two datasets of activities collected from 20 subjects. The first set comprised three commonly used activities, namely, level walking, stair ascent, and stair descent, and the second a total of eight activities. Furthermore, we compared the classification accuracy for each feature set across different combinations of three different accelerometer placements. The classification analysis has been performed with robust subject-based cross-validation methods using a nearest-neighbor classifier. The findings show that, although the wavelet transform approach can be used to characterize nonstationary signals, it does not perform as accurately as frequency-based features when classifying dynamic activities performed by healthy subjects. Overall, the best feature sets achieved over 95% intersubject classification accuracy.",
"title": ""
},
{
"docid": "neg:1840603_11",
"text": "Mapping the physical location of nodes within a wireless sensor network (WSN) is critical in many applications such as tracking and environmental sampling. Passive RFID tags pose an interesting solution to localizing nodes because an outside reader, rather than the tag, supplies the power to the tag. Thus, utilizing passive RFID technology allows a localization scheme to not be limited to objects that have wireless communication capability because the technique only requires that the object carries a RFID tag. This paper illustrates a method in which objects can be localized without the need to communicate received signal strength information between the reader and the tagged item. The method matches tag count percentage patterns under different signal attenuation levels to a database of tag count percentages, attenuations and distances from the base station reader.",
"title": ""
},
{
"docid": "neg:1840603_12",
"text": "Pedestrian detection has been an important problem for decades, given its relevance to a number of applications in robotics, including driver assistance systems, road scene understanding and surveillance systems. The two main practical requirements for fielding such systems are very high accuracy and real-time speed: we need pedestrian detectors that are accurate enough to be relied on and are fast enough to run on systems with limited compute power. This paper addresses both of these requirements by combining very accurate deep-learning-based classifiers within very efficient cascade classifier frameworks. Deep neural networks (DNN) have been shown to excel at classification tasks [5], and their ability to operate on raw pixel input without the need to design special features is very appealing. However, deep nets are notoriously slow at inference time. In this paper, we propose an approach that cascades deep nets and fast features, that is both very fast and accurate. We apply it to the challenging task of pedestrian detection. Our algorithm runs in real-time at 15 frames per second (FPS). The resulting approach achieves a 26.2% average miss rate on the Caltech Pedestrian detection benchmark, which is the first work we are aware of that achieves high accuracy while running in real-time. To achieve this, we combine a fast cascade [2] with a cascade of classifiers, which we propose to be DNNs. Our approach is unique, as it is the only one to produce a pedestrian detector at real-time speeds (15 FPS) that is also very accurate. Figure 1 visualizes existing methods as plotted on the accuracy computational time axis, measured on the challenging Caltech pedestrian detection benchmark [4]. As can be seen in this figure, our approach is the only one to reside in the high accuracy, high speed region of space, which makes it particularly appealing for practical applications. Fast Deep Network Cascade. Our main architecture is a cascade structure in which we take advantage of the fast features for elimination, VeryFast [2] as an initial stage and combine it with small and large deep networks [1, 5] for high accuracy. The VeryFast algorithm is a cascade itself, but of boosting classifiers. It reduces recall with each stage, producing a high average miss rate in the end. Since the goal is eliminate many non-pedestrian patches and at the same time keep the recall high, we used only 10% of the stages in that cascade. Namely, we use a cascade of only 200 stages, instead of the 2000 in the original work. The first stage of our deep cascade processes all image patches that have high confidence values and pass through the VeryFast classifier. We here utilize the idea of a tiny convolutional network proposed by our prior work [1]. The tiny deep network has three layers only and features a 5x5 convolution, a 1x1 convolution and a very shallow fully-connected layer of 512 units. It reduces the massive computational time that is needed to evaluate a full DNN at all candidate locations filtered by the previous stage. The speedup produced by the tiny network, is a crucial component in achieving real-time performance in our fast cascade method. The baseline deep neural network is based on the original deep network of Krizhevsky et al [5]. As mentioned, this network in general is extremely slow to be applied alone. To achieve real-time speeds, we first apply it to only the remaining filtered patches from the previous two stages. 
Another key difference is that we reduced the depths of some of the convolutional layers and the sizes of the receptive fields, which is specifically done to gain speed advantage. Runtime. Our deep cascade works at 67ms on a standard NVIDIA K20 Tesla GPU per 640x480 image, which is a runtime of 15 FPS. The time breakdown is as follows. The soft-cascade takes about 7 milliseconds (ms). About 1400 patches are passed through per image from the fast cascade. The tiny DNN runs at 0.67 ms per batch of 128, so it can process the patches in 7.3 ms. The final stage of the cascade (which is the baseline classifier) takes about 53ms. This is an overall runtime of 67ms. Experimental evaluation. We evaluate the performance of the Fast Deep Network Cascade using the training and test protocols established in the Caltech pedestrian benchmark [4]. We tested several scenarios by training on the Caltech data only, denoted as DeepCascade, on an indeFigure 1: Performance of pedestrian detection methods on the accuracy vs speed axis. Our DeepCascade method achieves both smaller missrates and real-time speeds. Methods for which the runtime is more than 5 seconds per image, or is unknown, are plotted on the left hand side. The SpatialPooling+/Katamari methods use additional motion information.",
"title": ""
},
{
"docid": "neg:1840603_13",
"text": "This article presents the event calculus, a logic-based formalism for representing actions and their effects. A circumscriptive solution to the frame problem is deployed which reduces to monotonic predicate completion. Using a number of benchmark examples from the literature, the formalism is shown to apply to a variety of domains, including those featuring actions with indirect effects, actions with non-deterministic effects, concurrent actions, and continuous change.",
"title": ""
},
{
"docid": "neg:1840603_14",
"text": "We consider the stochastic optimization of finite sums over a Riemannian manifold where the functions are smooth and convex. We present MASAGA, an extension of the stochastic average gradient variant SAGA on Riemannian manifolds. SAGA is a variance-reduction technique that typically outperforms methods that rely on expensive full-gradient calculations, such as the stochastic variance-reduced gradient method. We show that MASAGA achieves a linear convergence rate with uniform sampling, and we further show that MASAGA achieves a faster convergence rate with non-uniform sampling. Our experiments show that MASAGA is faster than the recent Riemannian stochastic gradient descent algorithm for the classic problem of finding the leading eigenvector corresponding to the maximum eigenvalue.",
"title": ""
},
{
"docid": "neg:1840603_15",
"text": "Internet of Things is referred to a combination of physical devices having sensors and connection capabilities enabling them to interact with each other (machine to machine) and can be controlled remotely via cloud engine. Success of an IoT device depends on the ability of systems and devices to securely sample, collect, and analyze data, and then transmit over link, protocol, or media selections based on stated requirements, all without human intervention. Among the requirements of the IoT, connectivity is paramount. It's hard to imagine that a single communication technology can address all the use cases possible in home, industry and smart cities. Along with the existing low power technologies like Zigbee, Bluetooth and 6LoWPAN, 802.11 WiFi standards are also making its way into the market with its own advantages in high range and better speed. Along with IEEE, WiFi Alliance has a new standard for the proximity applications. Neighbor Awareness Network (NAN) popularly known as WiFi Aware is that standard which enables low power discovery over WiFi and can light up many proximity based used cases. In this paper we discuss how NAN can influence the emerging IoT market as a connectivity solution for proximity assessment and contextual notifications with its benefits in some of the scenarios. When we consider WiFi the infrastructure already exists in terms of access points all around in public and smart phones or tablets come with WiFi as a default feature hence enabling NAN can be easy and if we can pair them with IoT, many innovative use cases can evolve.",
"title": ""
},
{
"docid": "neg:1840603_16",
"text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.",
"title": ""
},
{
"docid": "neg:1840603_17",
"text": "This report summarizes my overview talk on software clone detection research. It first discusses the notion of software redundancy, cloning, duplication, and similarity. Then, it describes various categorizations of clone types, empirical studies on the root causes for cloning, current opinions and wisdom of consequences of cloning, empirical studies on the evolution of clones, ways to remove, to avoid, and to detect them, empirical evaluations of existing automatic clone detector performance (such as recall, precision, time and space consumption) and their fitness for a particular purpose, benchmarks for clone detector evaluations, presentation issues, and last but not least application of clone detection in other related fields. After each summary of a subarea, I am listing open research questions.",
"title": ""
},
{
"docid": "neg:1840603_18",
"text": "Photovoltaic method is very popular for generating electrical power. Its energy production depends on solar radiation on that location and orientation. Shadow rapidly decreases performance of the Photovoltaic system. In this research, it is being investigated that how exactly real-time shadow can be detected. In principle, 3D city models containing roof structure, vegetation, thematically differentiated surface and texture, are suitable to simulate exact real-time shadow. An automated procedure to measure exact shadow effect from the 3D city models and a long-term simulation model to determine the produced energy from the photovoltaic system is being developed here. In this paper, a method for detecting shadow for direct radiation has been discussed with its result using a 3D city model to perform a solar energy potentiality analysis. Figure 1. Partial Shadow on PV array (Reisa 2011). Former military area Scharnhauser Park shown in figure 2 has been choosen as the case study area for this research. It is an urban conversion and development area of 150 hecta res in the community of Ostfildern on the southern border near Stuttgart with 7000 inhabitants. About 80% heating energy demand of the whole area is supplied by renewable energies and a small portion of electricity is delivered by existing roof top photovoltaic system (Tereci et al, 2009). This has been selected as the study area for this research because of availability CityGML and LIDAR data, building footprints and existing photovoltaic cells on roofs and façades. Land Survey Office Baden-Wüttemberg provides the laser scanning data with a density of 4 points per square meter at a high resolution of 0.2 meter. The paper has been organized with a brief introduction at the beginning explaining background of photovoltaic energy and motivation for this research in. Then the effect of shadow on photovoltaic cells and a methodology for detecting shadow from direct radiation. Then result has been shown applying the methodology and some brief idea about the future work of this research has been presented.",
"title": ""
},
{
"docid": "neg:1840603_19",
"text": "We present a large scale unified natural language inference (NLI) dataset for providing insight into how well sentence representations capture distinct types of reasoning. We generate a large-scale NLI dataset by recasting 11 existing datasets from 7 different semantic tasks. We use our dataset of approximately half a million context-hypothesis pairs to test how well sentence encoders capture distinct semantic phenomena that are necessary for general language understanding. Some phenomena that we consider are event factuality, named entity recognition, figurative language, gendered anaphora resolution, and sentiment analysis, extending prior work that included semantic roles and frame semantic parsing. Our dataset will be available at https:// www.decomp.net, to grow over time as additional resources are recast.",
"title": ""
}
] |
1840604 | The Reversible Residual Network: Backpropagation Without Storing Activations | [
{
"docid": "pos:1840604_0",
"text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.",
"title": ""
},
{
"docid": "pos:1840604_1",
"text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"title": ""
},
{
"docid": "pos:1840604_2",
"text": "We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O( √ n) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(logn) with as little as O(n logn) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30% additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.",
"title": ""
},
{
"docid": "pos:1840604_3",
"text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.",
"title": ""
}
] | [
{
"docid": "neg:1840604_0",
"text": "Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual efforts. In the context of modern plant phenotyping, automating this task is required to meet the need of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally image-based approaches have also received much attention in plant-related studies. Yet a fact is that most image-based systems for plant phenotyping are deployed under controlled laboratory environment. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins and achieves the overall best counting performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences. TasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem.",
"title": ""
},
{
"docid": "neg:1840604_1",
"text": "While gamification is gaining ground in business, marketing, corporate management, and wellness initiatives, its application in education is still an emerging trend. This article presents a study of the published empirical research on the application of gamification to education. The study is limited to papers that discuss explicitly the effects of using game elements in specific educational contexts. It employs a systematic mapping design. Accordingly, a categorical structure for classifying the research results is proposed based on the extracted topics discussed in the reviewed papers. The categories include gamification design principles, game mechanics, context of applying gamification (type of application, educational level, and academic subject), implementation, and evaluation. By mapping the published works to the classification criteria and analyzing them, the study highlights the directions of the currently conducted empirical research on applying gamification to education. It also indicates some major obstacles and needs, such as the need for proper technological support, for controlled studies demonstrating reliable positive or negative results of using specific game elements in particular educational contexts, etc. Although most of the reviewed papers report promising results, more substantial empirical research is needed to determine whether both extrinsic and intrinsic motivation of the learners can be influenced by gamification.",
"title": ""
},
{
"docid": "neg:1840604_2",
"text": "Analysis of vascular geometry is important in many medical imaging applications, such as retinal, pulmonary, and cardiac investigations. In order to make reliable judgments for clinical usage, accurate and robust segmentation methods are needed. Due to the high complexity of biological vasculature trees, manual identification is often too time-consuming and tedious to be used in practice. To design an automated and computerized method, a major challenge is that the appearance of vasculatures in medical images has great variance across modalities and subjects. Therefore, most existing approaches are specially designed for a particular task, lacking the flexibility to be adapted to other circumstances. In this paper, we present a generic approach for vascular structure identification from medical images, which can be used for multiple purposes robustly. The proposed method uses the state-of-the-art deep convolutional neural network (CNN) to learn the appearance features of the target. A Principal Component Analysis (PCA)-based nearest neighbor search is then utilized to estimate the local structure distribution, which is further incorporated within the generalized probabilistic tracking framework to extract the entire connected tree. Qualitative and quantitative results over retinal fundus data demonstrate that the proposed framework achieves comparable accuracy as compared with state-of-the-art methods, while efficiently producing more information regarding the candidate tree structure.",
"title": ""
},
{
"docid": "neg:1840604_3",
"text": "Conduction loss reduction technique using a small resonant capacitor for a phase shift full bridge converter with clamp diodes is proposed in this paper. The proposed technique can be implemented simply by adding a small resonant capacitor beside the leakage inductor of transformer. Since the voltage across the small resonant capacitor is applied to the small leakage inductor of transformer during freewheeling period, the primary current can be decreased rapidly. This results in the reduced conduction loss on the secondary side of transformer while the proposed technique can still guarantee the wide ZVS ranges. The operational principles and analysis are presented. Experimental results show that the proposed reduction technique of conduction loss can be operated properly.",
"title": ""
},
{
"docid": "neg:1840604_4",
"text": "Occupational therapists have used activity analysis to ensure the therapeutic use of activities. Recently, they have begun to explore the affective components of activities. This study explores the feelings (affective responses) that chronic psychiatric patients have toward selected activities commonly used in occupational therapy. Twenty-two participating chronic psychiatric patients were randomly assigned to one of three different activity groups: cooking, craft, or sensory awareness. Immediately following participation, each subject was asked to rate the activity by using Osgood's semantic differential, which measures the evaluation, power, and action factors of affective meaning. Data analysis revealed significant differences between the cooking activity and the other two activities on the evaluation factor. The fact that the three activities were rated differently is evidence that different activities can elicit different responses in one of the target populations of occupational therapy. The implications of these findings to occupational therapists are discussed and areas of future research are indicated.",
"title": ""
},
{
"docid": "neg:1840604_5",
"text": "The development of reliability-based design criteria for surface ship structures needs to consider the following three components: (1) loads, (2) structural strength, and (3) methods of reliability analysis. A methodology for reliability-based design of ship structures is provided in this document. The methodology consists of the following two approaches: (1) direct reliabilitybased design, and (2) load and resistance factor design (LRFD) rules. According to this methodology, loads can be linearly or nonlinearly treated. Also in assessing structural strength, linear or nonlinear analysis can be used. The reliability assessment and reliability-based design can be performed at several levels of a structural system, such as at the hull-girder, grillage, panel, plate and detail levels. A rational treatment of uncertainty is suggested by considering all its types. Also, failure definitions can have significant effects on the assessed reliability, or resulting reliability-based designs. A method for defining and classifying failures at the system level is provided. The method considers the continuous nature of redundancy in ship structures. A bibliography is provided at the end of this document to facilitate future implementation of the methodology.",
"title": ""
},
{
"docid": "neg:1840604_6",
"text": "Supervisory control and data acquisition (SCADA) systems are large-scale industrial control systems often spread across geographically dispersed locations that let human operators control entire physical systems, from a single control room. Early multi-site SCADA systems used closed networks and propriety industrial communication protocols like Modbus, DNP3 etc to reach remote sites. But with time it has become more convenient and more cost-effective to connect them to the Internet. However, internet connections to SCADA systems build in new vulnerabilities, as SCADA systems were not designed with internet security in mind. This can become matter of national security if these systems are power plants, water treatment facilities, or other pieces of critical infrastructure. Compared to IT systems, SCADA systems have a higher requirement concerning reliability, latency and uptime, so it is not always feasible to apply IT security measures deployed in IT systems. This paper provides an overview of security issues and threats in SCADA networks. Next, attention is focused on security assessment of the SCADA. This is followed by an overview of relevant SCADA security solutions. Finally we propose our security solution approach which is embedded in bump-in-the-wire is discussed.",
"title": ""
},
{
"docid": "neg:1840604_7",
"text": "The goal of robot learning from demonstra tion is to have a robot learn from watching a demonstration of the task to be performed In our approach to learning from demon stration the robot learns a reward function from the demonstration and a task model from repeated attempts to perform the task A policy is computed based on the learned reward function and task model Lessons learned from an implementation on an an thropomorphic robot arm using a pendulum swing up task include simply mimicking demonstrated motions is not adequate to per form this task a task planner can use a learned model and reward function to com pute an appropriate policy this model based planning process supports rapid learn ing both parametric and nonparametric models can be learned and used and in corporating a task level direct learning com ponent which is non model based in addi tion to the model based planner is useful in compensating for structural modeling errors and slow model learning",
"title": ""
},
{
"docid": "neg:1840604_8",
"text": "Current study is with the aim to identify similarities and distinctions between irony and sarcasm by adopting quantitative sentiment analysis as well as qualitative content analysis. The result of quantitative sentiment analysis shows that sarcastic tweets are used with more positive tweets than ironic tweets. The result of content analysis corresponds to the result of quantitative sentiment analysis in identifying the aggressiveness of sarcasm. On the other hand, from content analysis it shows that irony owns two senses. The first sense of irony is equal to aggressive sarcasm with speaker awareness. Thus, tweets of first sense of irony may attack a specific target, and the speaker may tag his/her tweet irony because the tweet itself is ironic. These tweets though tagged as irony are in fact sarcastic tweets. Different from this, the tweets of second sense of irony is tagged to classify an event to be ironic. However, from the distribution in sentiment analysis and examples in content analysis, irony seems to be more broadly used in its second sense.",
"title": ""
},
{
"docid": "neg:1840604_9",
"text": "BACKGROUND\nPatients with advanced squamous-cell non-small-cell lung cancer (NSCLC) who have disease progression during or after first-line chemotherapy have limited treatment options. This randomized, open-label, international, phase 3 study evaluated the efficacy and safety of nivolumab, a fully human IgG4 programmed death 1 (PD-1) immune-checkpoint-inhibitor antibody, as compared with docetaxel in this patient population.\n\n\nMETHODS\nWe randomly assigned 272 patients to receive nivolumab, at a dose of 3 mg per kilogram of body weight every 2 weeks, or docetaxel, at a dose of 75 mg per square meter of body-surface area every 3 weeks. The primary end point was overall survival.\n\n\nRESULTS\nThe median overall survival was 9.2 months (95% confidence interval [CI], 7.3 to 13.3) with nivolumab versus 6.0 months (95% CI, 5.1 to 7.3) with docetaxel. The risk of death was 41% lower with nivolumab than with docetaxel (hazard ratio, 0.59; 95% CI, 0.44 to 0.79; P<0.001). At 1 year, the overall survival rate was 42% (95% CI, 34 to 50) with nivolumab versus 24% (95% CI, 17 to 31) with docetaxel. The response rate was 20% with nivolumab versus 9% with docetaxel (P=0.008). The median progression-free survival was 3.5 months with nivolumab versus 2.8 months with docetaxel (hazard ratio for death or disease progression, 0.62; 95% CI, 0.47 to 0.81; P<0.001). The expression of the PD-1 ligand (PD-L1) was neither prognostic nor predictive of benefit. Treatment-related adverse events of grade 3 or 4 were reported in 7% of the patients in the nivolumab group as compared with 55% of those in the docetaxel group.\n\n\nCONCLUSIONS\nAmong patients with advanced, previously treated squamous-cell NSCLC, overall survival, response rate, and progression-free survival were significantly better with nivolumab than with docetaxel, regardless of PD-L1 expression level. (Funded by Bristol-Myers Squibb; CheckMate 017 ClinicalTrials.gov number, NCT01642004.).",
"title": ""
},
{
"docid": "neg:1840604_10",
"text": "In recent years, Steganography and Steganalysis are two important areas of research that involve a number of applications. These two areas of research are important especially when reliable and secure information exchange is required. Steganography is an art of embedding information in a cover image without causing statistically significant variations to the cover image. Steganalysis is the technology that attempts to defeat Steganography by detecting the hidden information and extracting. In this paper a comparative analysis is made to demonstrate the effectiveness of the proposed methods. The effectiveness of the proposed methods has been estimated by computing Mean square error (MSE) and Peak Signal to Noise Ratio (PSNR), Processing time, security.The analysis shows that the BER and PSNR is improved in the LSB Method but security sake DCT is the best method.",
"title": ""
},
{
"docid": "neg:1840604_11",
"text": "We present a new approach for defining groups of populations that are geographically homogeneous and maximally differentiated from each other. As a by-product, it also leads to the identification of genetic barriers between these groups. The method is based on a simulated annealing procedure that aims to maximize the proportion of total genetic variance due to differences between groups of populations (spatial analysis of molecular variance; samova). Monte Carlo simulations were used to study the performance of our approach and, for comparison, the behaviour of the Monmonier algorithm, a procedure commonly used to identify zones of sharp genetic changes in a geographical area. Simulations showed that the samova algorithm indeed finds maximally differentiated groups, which do not always correspond to the simulated group structure in the presence of isolation by distance, especially when data from a single locus are available. In this case, the Monmonier algorithm seems slightly better at finding predefined genetic barriers, but can often lead to the definition of groups of populations not differentiated genetically. The samova algorithm was then applied to a set of European roe deer populations examined for their mitochondrial DNA (mtDNA) HVRI diversity. The inferred genetic structure seemed to confirm the hypothesis that some Italian populations were recently reintroduced from a Balkanic stock, as well as the differentiation of groups of populations possibly due to the postglacial recolonization of Europe or the action of a specific barrier to gene flow.",
"title": ""
},
{
"docid": "neg:1840604_12",
"text": "It is essential to base instruction on a foundation of understanding of children’s thinking, but it is equally important to adopt the longer-term view that is needed to stretch these early competencies into forms of thinking that are complex, multifaceted, and subject to development over years, rather than weeks or months. We pursue this topic through our studies of model-based reasoning. We have identified four forms of models and related modeling practices that show promise for developing model-based reasoning. Models have the fortuitous feature of making forms of student reasoning public and inspectable—not only among the community of modelers, but also to teachers. Modeling provides feedback about student thinking that can guide teaching decisions, an important dividend for improving professional practice.",
"title": ""
},
{
"docid": "neg:1840604_13",
"text": "For a set $P$ of $n$ points in the plane and an integer $k \\leq n$, consider the problem of finding the smallest circle enclosing at least $k$ points of $P$. We present a randomized algorithm that computes in $O( n k )$ expected time such a circle, improving over previously known algorithms. Further, we present a linear time $\\delta$-approximation algorithm that outputs a circle that contains at least $k$ points of $P$ and has radius less than $(1+\\delta)r_{opt}(P,k)$, where $r_{opt}(P,k)$ is the radius of the minimum circle containing at least $k$ points of $P$. The expected running time of this approximation algorithm is $O(n + n \\cdot\\min((1/k\\delta^3) \\log^2 (1/\\delta), k))$.",
"title": ""
},
{
"docid": "neg:1840604_14",
"text": "This study compares the cradle-to-gate total energy and major emissions for the extraction of raw materials, production, and transportation of the common wood building materials from the CORRIM 2004 reports. A life-cycle inventory produced the raw materials, including fuel resources and emission to air, water, and land for glued-laminated timbers, kiln-dried and green softwood lumber, laminated veneer lumber, softwood plywood, and oriented strandboard. Major findings from these comparisons were that the production of wood products, by the nature of the industry, uses a third of their energy consumption from renewable resources and the remainder from fossil-based, non-renewable resources when the system boundaries consider forest regeneration and harvesting, wood products and resin production, and transportation life-cycle stages. When the system boundaries are reduced to a gate-to-gate (manufacturing life-cycle stage) model for the wood products, the biomass component of the manufacturing energy increases to nearly 50% for most products and as high as 78% for lumber production from the Southeast. The manufacturing life-cycle stage consumed the most energy over all the products when resin is considered part of the production process. Extraction of log resources and transportation of raw materials for production had the least environmental impact.",
"title": ""
},
{
"docid": "neg:1840604_15",
"text": "Satisfying the needs of users of online video streaming services requires not only to manage the network Quality of Service (QoS), but also to address the user's Quality of Experience (QoE) expectations. While QoS factors reflect the status of individual networks, they do not comprehensively capture the end-to-end features affecting the quality delivered to the user. In this situation, QoE management is the better option. However, traditionally used QoE management models require human interaction and have stringent requirements in terms of time and complexity. Thus, they fail to achieve successful performance in terms of real-timeliness, accuracy, scalability and adaptability. This dissertation work investigates new methods to bring QoE management to the level required by the real-time management of video services. In this paper, we highlight our main contributions. First, with the aim to perform a combined network-service assessment, we designed an experimental methodology able to map network QoS onto service QoE. Our methodology is meant to provide service and network providers with the means to pinpoint the working boundaries of their video-sets and to predict the effect of network policies on perception. Second, we developed a generic machine learning framework that allows deriving accurate predictive No Reference (NR) assessment metrics, based on simplistic NR QoE methods, that are functionally and computationally viable for real-time QoE evaluation. The tools, methods and conclusions derived from this dissertation conform a solid contribution to QoE management of video streaming services, opening new venues for further research.",
"title": ""
},
{
"docid": "neg:1840604_16",
"text": "Pattern recognition is used to classify the input data into different classes based on extracted key features. Increasing the recognition rate of pattern recognition applications is a challenging task. The spike neural networks inspired from physiological brain architecture, is a neuromorphic hardware implementation of network of neurons. A sample of neuromorphic architecture has two layers of neurons, input and output. The number of input neurons is fixed based on the input data patterns. While the number of outputs neurons can be different. The goal of this paper is performance evaluation of neuromorphic architecture in terms of recognition rates using different numbers of output neurons. For this purpose a simulation environment of N2S3 and MNIST handwritten digits are used. Our simulation results show the recognition rate for various number of output neurons, 20, 30, 50, 100, 200, and 300 is 70%, 74%, 79%, 85%, 89%, and 91%, respectively.",
"title": ""
},
{
"docid": "neg:1840604_17",
"text": "Jennifer L. Docktor, Natalie E. Strand, José P. Mestre, and Brian H. Ross Department of Physics, University of Wisconsin–La Crosse, La Crosse, Wisconsin 54601, USA Department of Physics, University of Illinois, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, Illinois 61801, USA Department of Educational Psychology, University of Illinois, Champaign, Illinois 61820, USA Department of Psychology, University of Illinois, Champaign, Illinois 61820, USA (Received 30 April 2015; published 1 September 2015)",
"title": ""
},
{
"docid": "neg:1840604_18",
"text": "Lately enhancing the capability of network services automatically and dynamically through SDN and CDN/CDNi networks has become a recent topic of research. While, in one hand, these systems can be very beneficial to control and optimize the overall network services that studies the topology, traffic paths, packet handling and such others, on the other hand, the servers in such architectures can also be a potential target for DoS and/or DDoS attacks. We, therefore, propose a mechanism for the SDN based CDNi networks to securely deliver services with a multi-defense strategy against DDoS attacks. Addition of ALTO like servers in such architectures enables mapping a very big network to provide a bird's eye view. We propose an additional marking path map in the ALTO server to trace the request packets. The next defense is a protection switch to protect the main servers. A Management Information Base (MIB) is also proposed in the SDN controller to compare and assess the request traffic coming to the protection switches.",
"title": ""
}
] |
1840605 | Improving the Resolution of CNN Feature Maps Efficiently with Multisampling | [
{
"docid": "pos:1840605_0",
"text": "Spatial pyramid pooling module or encode-decoder structure are used in deep neural networks for semantic segmentation task. The former networks are able to encode multi-scale contextual information by probing the incoming features with filters or pooling operations at multiple rates and multiple effective fields-of-view, while the latter networks can capture sharper object boundaries by gradually recovering the spatial information. In this work, we propose to combine the advantages from both methods. Specifically, our proposed model, DeepLabv3+, extends DeepLabv3 by adding a simple yet effective decoder module to refine the segmentation results especially along object boundaries. We further explore the Xception model and apply the depthwise separable convolution to both Atrous Spatial Pyramid Pooling and decoder modules, resulting in a faster and stronger encoder-decoder network. We demonstrate the effectiveness of the proposed model on PASCAL VOC 2012 and Cityscapes datasets, achieving the test set performance of 89.0% and 82.1% without any post-processing. Our paper is accompanied with a publicly available reference implementation of the proposed models in Tensorflow at https: //github.com/tensorflow/models/tree/master/research/deeplab.",
"title": ""
}
] | [
{
"docid": "neg:1840605_0",
"text": "The accuracy of Automated Speech Recognition (ASR) technology has improved, but it is still imperfect in many settings. Researchers who evaluate ASR performance often focus on improving the Word Error Rate (WER) metric, but WER has been found to have little correlation with human-subject performance on many applications. We propose a new captioning-focused evaluation metric that better predicts the impact of ASR recognition errors on the usability of automatically generated captions for people who are Deaf or Hard of Hearing (DHH). Through a user study with 30 DHH users, we compared our new metric with the traditional WER metric on a caption usability evaluation task. In a side-by-side comparison of pairs of ASR text output (with identical WER), the texts preferred by our new metric were preferred by DHH participants. Further, our metric had significantly higher correlation with DHH participants' subjective scores on the usability of a caption, as compared to the correlation between WER metric and participant subjective scores. This new metric could be used to select ASR systems for captioning applications, and it may be a better metric for ASR researchers to consider when optimizing ASR systems.",
"title": ""
},
{
"docid": "neg:1840605_1",
"text": "The purpose of this study is to evaluate the impact of brand awareness to repurchase intention of customers with trilogy of emotions approach. The study population consisted if all the people in Yazd. As the research sample, 384 people who went to cell phone shopping centers in Yazd province responded to the questionnaire. Cronbach's alpha was used to determine the reliability of the questionnaire, and its values was 0.87. To examine the effects of brand awareness on purchase intention, structural equation modeling and AMOUS and SPSS softwares were used. The results of this study show that consumers cognition does not affect the purchase intention, but the customers’ conation and affection affect the re-purchase intention. In addition, brand awareness affects emotions (cognition, affection, and conation) and consumer purchase intention.",
"title": ""
},
{
"docid": "neg:1840605_2",
"text": "In 1948 Shannon developed fundamental limits on the efficiency of communication over noisy channels. The coding theorem asserts that there are block codes with code rates arbitrarily close to channel capacity and probabilities of error arbitrarily close to zero. Fifty years later, codes for the Gaussian channel have been discovered that come close to these fundamental limits. There is now a substantial algebraic theory of error-correcting codes with as many connections to mathematics as to engineering practice, and the last 20 years have seen the construction of algebraic-geometry codes that can be encoded and decoded in polynomial time, and that beat the Gilbert–Varshamov bound. Given the size of coding theory as a subject, this review is of necessity a personal perspective, and the focus is reliable communication, and not source coding or cryptography. The emphasis is on connecting coding theories for Hamming and Euclidean space and on future challenges, specifically in data networking, wireless communication, and quantum information theory.",
"title": ""
},
{
"docid": "neg:1840605_3",
"text": "In order to improve real-time and robustness of the lane detection and get more ideal lane, in the image preprocessing, the filter is used in strengthening lane information of the binary image, reducing the noise and removing irrelevant information. The lane edge detection is by using Canny operator, then the corner detection method is used in getting the Image corners coordinates and finally using the RANSAC to circulation fit for corners, according to the optimal lanes parameters drawing lane. Through experiment of different scenes, this method can not only effectively rule out linear pixel interference of outside the road in multiple complex environments, but also quickly and accurately identify lane. This method improves the stability of the lane detection to a certain extent, which has good robust and real-time.",
"title": ""
},
{
"docid": "neg:1840605_4",
"text": "The rapid growth in the population density in urban cities and the advancement in technology demands real-time provision of services and infrastructure. Citizens, especially travelers, want to be reached within time to the destination. Consequently, they require to be facilitated with smart and real-time traffic information depending on the current traffic scenario. Therefore, in this paper, we proposed a graph-oriented mechanism to achieve the smart transportation system in the city. We proposed to deploy road sensors to get the overall traffic information as well as the vehicular network to obtain location and speed information of the individual vehicle. These Internet of Things (IoT) based networks generate enormous volume of data, termed as Big Data, depicting the traffic information of the city. To process incoming Big Data from IoT devices, then generating big graphs from the data, and processing them, we proposed an efficient architecture that uses the Giraph tool with parallel processing servers to achieve real-time efficiency. Later, various graph algorithms are used to achieve smart transportation by making real-time intelligent decisions to facilitate the citizens as well as the metropolitan authorities. Vehicular Datasets from various reliable resources representing the real city traffic are used for analysis and evaluation purpose. The system is implemented using Giraph and Spark tool at the top of the Hadoop parallel nodes to generate and process graphs with near real-time. Moreover, the system is evaluated in terms of efficiency by considering the system throughput and processing time. The results show that the proposed system is more scalable and efficient.",
"title": ""
},
{
"docid": "neg:1840605_5",
"text": "The recent explosive growth in convolutional neural network (CNN) research has produced a variety of new architectures for deep learning. One intriguing new architecture is the bilinear CNN (BCNN), which has shown dramatic performance gains on certain fine-grained recognition problems [13]. We apply this new CNN to the challenging new face recognition benchmark, the IARPA Janus Benchmark A (IJB-A) [10]. This is the first widely available public benchmark designed specifically to test face identification in real-world images. It features faces from a large number of identities in challenging real-world conditions. Because the face images were not identified automatically using a computer face detection system, it does not have the bias inherent in such a database. As a result, it includes variations in pose that are more challenging than many other popular benchmarks. In our experiments, we demonstrate the performance of the model trained only on ImageNet, then fine-tuned on the training set of IJB-A, and finally use a moderate-sized external database, FaceScrub [15]. Another feature of this benchmark is that that the testing data consists of collections of samples of a particular identity. We consider two techniques for pooling samples from these collections to improve performance over using only a single image, and we report results for both methods. Our application of this new CNN to the IJB-A results in gains over the published baselines of this new database.",
"title": ""
},
{
"docid": "neg:1840605_6",
"text": "This paper presents a low profile ultrawideband tightly coupled phased array antenna with integrated feedlines. The aperture array consists of planar element pairs with fractal geometry. In each element these pairs are set orthogonal to each other for dual polarisation. The design is an array of closely capacitively coupled pairs of fractal octagonal rings. The adjustment of the capacitive load at the tip end of the elements and the strong mutual coupling between the elements, enables a wideband conformal performance. Adding a ground plane below the array partly compensates for the frequency variation of the array impedance, providing further enhancement in the array bandwidth. Additional improvement is achieved by placing another layer of conductive elements at a defined distance above the radiating elements. A Genetic Algorithm was scripted in MATLAB and combined with the HFSS simulator, providing an easy optimisation tool across the operational bandwidth for the array unit cell design parameters. The proposed antenna shows a wide-scanning ability with a low cross-polarisation level over a wide bandwidth.",
"title": ""
},
{
"docid": "neg:1840605_7",
"text": "We present a qualitative study of hospitality exchange processes that take place via the online peer-to-peer platform Airbnb. We explore 1) what motivates individuals to monetize network hospitality and 2) how the presence of money ties in with the social interaction related to network hospitality. We approach the topic from the perspective of hosts -- that is, Airbnb users who participate by offering accommodation for other members in exchange for monetary compensation. We found that participants were motivated to monetize network hospitality for both financial and social reasons. Our analysis indicates that the presence of money can provide a helpful frame for network hospitality, supporting hosts in their efforts to accomplish desired sociability, select guests consistent with their preferences, and control the volume and type of demand. We conclude the paper with a critical discussion of the implications of our findings for network hospitality and, more broadly, for the so-called sharing economy.",
"title": ""
},
{
"docid": "neg:1840605_8",
"text": "We contribute a dense SLAM system that takes a live stream of depth images as input and reconstructs nonrigid deforming scenes in real time, without templates or prior models. In contrast to existing approaches, we do not maintain any volumetric data structures, such as truncated signed distance function (TSDF) fields or deformation fields, which are performance and memory intensive. Our system works with a flat point (surfel) based representation of geometry, which can be directly acquired from commodity depth sensors. Standard graphics pipelines and general purpose GPU (GPGPU) computing are leveraged for all central operations: i.e., nearest neighbor maintenance, non-rigid deformation field estimation and fusion of depth measurements. Our pipeline inherently avoids expensive volumetric operations such as marching cubes, volumetric fusion and dense deformation field update, leading to significantly improved performance. Furthermore, the explicit and flexible surfel based geometry representation enables efficient tackling of topology changes and tracking failures, which makes our reconstructions consistent with updated depth observations. Our system allows robots maintain a scene description with nonrigidly deformed objects that potentially enables interactions with dynamic working environments.",
"title": ""
},
{
"docid": "neg:1840605_9",
"text": "The majority of deterministic mathematical programming problems have a compact formulation in terms of algebraic equations. Therefore they can easily take advantage of the facilities offered by algebraic modeling languages. These tools allow expressing models by using convenient mathematical notation (algebraic equations) and translate the models into a form understandable by the solvers for mathematical programs. Algebraic modeling languages provide facility for the management of a mathematical model and its data, and access different general-purpose solvers. The use of algebraic modeling languages (AMLs) simplifies the process of building the prototype model and in some cases makes it possible to create and maintain even the production version of the model. As presented in other chapters of this book, stochastic programming (SP) is needed when exogenous parameters of the mathematical programming problem are random. Dealing with stochasticities in planning is not an easy task. In a standard scenario-by-scenario analysis, the system is optimized for each scenario separately. Varying the scenario hypotheses we can observe the different optimal responses of the system and delineate the “strong trends” of the future. Indeed, this scenarioby-scenario approach implicitly assumes perfect foresight. The method provides a first-stage decision, which is valid only for the scenario under consideration. Having as many decisions as there are scenarios leaves the decision-maker without a clear recommendation. In stochastic programming the whole set of scenarios is combined into an event tree, which describes the unfolding of uncertainties over the period of planning. The model takes into account the uncertainties characterizing the scenarios through stochastic programming techniques. This adaptive plan is much closer, in spirit, to the way that decision-makers have to deal with uncertain future",
"title": ""
},
{
"docid": "neg:1840605_10",
"text": "Festivals have been proliferating worldwide, and local authorities are either supporting, or organizing small, local festivals to enhance the attractiveness of the destination for non-local visitors. Festivals are also very effective tools for developing destination image, revitalizing economy, culture, traditions, building civic pride, raising funds for special, civic or charitable projects, and providing opportunities for the community to deal with fi ne arts. Th is situation increases the importance of factors related to the satisfaction and loyalty of festival visitors, especially for small and local festivals. Th erefore, drawing on the existing literature and an assumption that festivalscape is the most important contributor to visitors’ satisfaction and loyalty in the context of a small, local and municipality organized annual festivals, the present study aims to identify factors related to the festivalscape that determine visitors’ satisfaction and loyalty by using a structural equation modeling. Th e study examines several variables as the antecedents of the festival visitors’ satisfaction and loyalty such as staff , festival area, food, souvenir, informational adequacy and convenience. As a result of the analysis, the study reveals three dimensions related to the festivalscape environmental factors which are food, festival area, and convenience and examines how these factors aff ect the visitors’ satisfaction and, in turn, their loyalty.",
"title": ""
},
{
"docid": "neg:1840605_11",
"text": "Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS Forestto systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.",
"title": ""
},
{
"docid": "neg:1840605_12",
"text": "Sustainable production of renewable energy is being hotly debated globally since it is increasingly understood that first generation biofuels, primarily produced from food crops and mostly oil seeds are limited in their ability to achieve targets for biofuel production, climate change mitigation and economic growth. These concerns have increased the interest in developing second generation biofuels produced from non-food feedstocks such as microalgae, which potentially offer greatest opportunities in the longer term. This paper reviews the current status of microalgae use for biodiesel production, including their cultivation, harvesting, and processing. The microalgae species most used for biodiesel production are presented and their main advantages described in comparison with other available biodiesel feedstocks. The various aspects associated with the design of microalgae production units are described, giving an overview of the current state of development of algae cultivation systems (photo-bioreactors and open ponds). Other potential applications and products from microalgae are also presented such as for biological sequestration of CO2, wastewater treatment, in human health, as food additive, and for aquaculture. 2009 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840605_13",
"text": "Mobile systems, such as smartphones and tablets, incorporate a diverse set of I/O devices, such as camera, audio devices, GPU, and sensors. This in turn results in a large number of diverse and customized device drivers running in the operating system kernel of mobile systems. These device drivers contain various bugs and vulnerabilities, making them a top target for kernel exploits [78]. Unfortunately, security analysts face important challenges in analyzing these device drivers in order to find, understand, and patch vulnerabilities. More specifically, using the state-of-the-art dynamic analysis techniques such as interactive debugging, fuzzing, and record-and-replay for analysis of these drivers is difficult, inefficient, or even completely inaccessible depending on the analysis. In this paper, we present Charm1, a system solution that facilitates dynamic analysis of device drivers of mobile systems. Charm’s key technique is remote device driver execution, which enables the device driver to execute in a virtual machine on a workstation. Charm makes this possible by using the actual mobile system only for servicing the low-level and infrequent I/O operations through a low-latency and customized USB channel. Charm does not require any specialized hardware and is immediately available to analysts. We show that it is feasible to apply Charm to various device drivers, including camera, audio, GPU, and IMU sensor drivers, in different mobile systems, including LG Nexus 5X, Huawei Nexus 6P, and Samsung Galaxy S7. In an extensive evaluation, we show that Charm enhances the usability of fuzzing of device drivers, enables record-andreplay of driver’s execution, and facilitates detailed vulnerability analysis. Altogether, these capabilities have enabled us to find 25 bugs in device drivers, analyze 3 existing ones, and even build an arbitrary-code-execution kernel exploit using one of them. 1Charm is open sourced: https://trusslab.github.io/charm/",
"title": ""
},
{
"docid": "neg:1840605_14",
"text": "• Use small, cross-functional teams managing smaller, prioritized tasks • Frequently test incremental project progress against user stories to ensure a match between final product and customer expectation • Utilize the best mix of agile, traditional, and hybrid techniques to meet specific project requirements, recognize and avoid pitfalls, and improve quality • Differentiate between frameworks such as Scrum, Extreme Programming (XP), and Lean, and select the most suitable for the specific domain and project",
"title": ""
},
{
"docid": "neg:1840605_15",
"text": "The Architecture, Engineering & Construction (AEC) sector is a highly fragmented, data intensive, project based industry, involving a number of very different professions and organisations. Projects carried out within this sector involve collaboration between various people, using a variety of different systems. This, along with the industry’s strong data sharing and processing requirements, means that the management of building data is complex and challenging. This paper presents a solution to data sharing requirements of the AEC sector by utilising Cloud Computing. Our solution presents two key contributions, first a governance model for building data, based on extensive research and industry consultation. Second, a prototype implementation of this governance model, utilising the CometCloud autonomic Cloud Computing engine based on the Master/Worker paradigm. We have integrated our prototype with the 3D modelling software Google Sketchup. The approach and prototype presented has applicability in a number of other eScience related applications involving multi-disciplinary, collaborative working using Cloud Computing infrastructure.",
"title": ""
},
{
"docid": "neg:1840605_16",
"text": "The amount of contextual data collected, stored, mined, and shared is increasing exponentially. Street cameras, credit card transactions, chat and Twitter logs, e-mail, web site visits, phone logs and recordings, social networking sites, all are examples of data that persists in a manner not under individual control, leading some to declare the death of privacy. We argue here that the ability to generate convincing fake contextual data can be a basic tool in the fight to preserve privacy. One use for the technology is for an individual to make his actual data indistinguishable amongst a pile of false data.\n In this paper we consider two examples of contextual data, search engine query data and location data. We describe the current state of faking these types of data and our own efforts in this direction.",
"title": ""
},
{
"docid": "neg:1840605_17",
"text": "Government corruption is more prevalent in poor countries than in rich countries. This paper uses cross-industry heterogeneity in growth rates within Vietnam to test empirically whether growth leads to lower corruption. We find that it does. We begin by developing a model of government officials’ choice of how much bribe money to extract from firms that is based on the notion of inter-regional tax competition, and consider how officials’ choices change as the economy grows. We show that economic growth is predicted to decrease the rate of bribe extraction under plausible assumptions, with the benefit to officials of demanding a given share of revenue as bribes outweighed by the increased risk that firms will move elsewhere. This effect is dampened if firms are less mobile. Our empirical analysis uses survey data collected from over 13,000 Vietnamese firms between 2006 and 2010 and an instrumental variables strategy based on industry growth in other provinces. We find, first, that firm growth indeed causes a decrease in bribe extraction. Second, this pattern is particularly true for firms with strong land rights and those with operations in multiple provinces, consistent with these firms being more mobile. Our results suggest that as poor countries grow, corruption could subside “on its own,” and they demonstrate one type of positive feedback between economic growth and good institutions. ∗Contact information: Bai: [email protected]; Jayachandran: [email protected]; Malesky: [email protected]; Olken: [email protected]. We thank Lori Beaman, Raymond Fisman, Chang-Tai Hsieh, Supreet Kaur, Neil McCulloch, Andrei Shleifer, Matthew Stephenson, Eric Verhoogen, and Ekaterina Zhuravskaya for helpful comments.",
"title": ""
},
{
"docid": "neg:1840605_18",
"text": "A compact reconfigurable rectifying antenna (rectenna) has been proposed for 5.2- and 5.8-GHz microwave power transmission. The proposed rectenna consists of a frequency reconfigurable microstrip antenna and a frequency reconfigurable rectifying circuit. Here, the use of the odd-symmetry mode has significantly cut down the antenna size by half. By controlling the switches installed in the antenna and the rectifying circuit, the rectenna is able to switch operation between 5.2 and 5.8 GHz. Simulated conversion efficiencies of 70.5% and 69.4% are achievable at the operating frequencies of 5.2 and 5.8 GHz, respectively, when the rectenna is given with an input power of 16.5 dBm. Experiment has been conducted to verify the design idea. Due to fabrication tolerances and parametric deviation of the actual diode, the resonant frequencies of the rectenna are measured to be 4.9 and 5.9 GHz. When supplied with input powers of 16 and 15 dBm, the measured maximum conversion efficiencies of the proposed rectenna are found to be 65.2% and 64.8% at 4.9 and 5.9 GHz, respectively, which are higher than its contemporary counterparts.",
"title": ""
},
{
"docid": "neg:1840605_19",
"text": "We demonstrate a multimodal dialogue system using reinforcement learning for in-car scenarios, developed at Edinburgh University and Cambridge University for the TALK project1. This prototype is the first “Information State Update” (ISU) dialogue system to exhibit reinforcement learning of dialogue strategies, and also has a fragmentary clarification feature. This paper describes the main components and functionality of the system, as well as the purposes and future use of the system, and surveys the research issues involved in its construction. Evaluation of this system (i.e. comparing the baseline system with handcoded vs. learnt dialogue policies) is ongoing, and the demonstration will show both.",
"title": ""
}
] |
1840606 | On Model Discovery For Hosted Data Science Projects | [
{
"docid": "pos:1840606_0",
"text": "Ground is an open-source data context service, a system to manage all the information that informs the use of data. Data usage has changed both philosophically and practically in the last decade, creating an opportunity for new data context services to foster further innovation. In this paper we frame the challenges of managing data context with basic ABCs: Applications, Behavior, and Change. We provide motivation and design guidelines, present our initial design of a common metamodel and API, and explore the current state of the storage solutions that could serve the needs of a data context service. Along the way we highlight opportunities for new research and engineering solutions. 1. FROM CRISIS TO OPPORTUNITY Traditional database management systems were developed in an era of risk-averse design. The technology itself was expensive, as was the on-site cost of managing it. Expertise was scarce and concentrated in a handful of computing and consulting firms. Two conservative design patterns emerged that lasted many decades. First, the accepted best practices for deploying databases revolved around tight control of schemas and data ingest in support of general-purpose accounting and compliance use cases. Typical advice from data warehousing leaders held that “There is no point in bringing data . . . into the data warehouse environment without integrating it” [15]. Second, the data management systems designed for these users were often built by a single vendor and deployed as a monolithic stack. A traditional DBMS included a consistent storage engine, a dataflow engine, a language compiler and optimizer, a runtime scheduler, a metadata catalog, and facilities for data ingest and queueing—all designed to work closely together. As computing and data have become orders of magnitude more efficient, changes have emerged for both of these patterns. Usage is changing profoundly, as expertise and control shifts from the central accountancy of an IT department to the domain expertise of “business units” tasked with extracting value from data [12]. The changes in economics and usage brought on the “three Vs” of Big Data: Volume, Velocity and Variety. Resulting best practices focus on open-ended schema-on-use data “lakes” and agile development, This article is published under a Creative Commons Attribution License (http://creativecommons.org/licenses/by/3.0/), which permits distribution and reproduction in any medium as well allowing derivative works, provided that you attribute the original work to the author(s) and CIDR 2017. CIDR ’17 January 8-11, 2017, Chaminade, CA, USA in support of exploratory analytics and innovative application intelligence [26]. Second, while many pieces of systems software that have emerged in this space are familiar, the overriding architecture is profoundly different. In today’s leading open source data management stacks, nearly all of the components of a traditional DBMS are explicitly independent and interchangeable. This architectural decoupling is a critical and under-appreciated aspect of the Big Data movement, enabling more rapid innovation and specialization. 1.1 Crisis: Big Metadata An unfortunate consequence of the disaggregated nature of contemporary data systems is the lack of a standard mechanism to assemble a collective understanding of the origin, scope, and usage of the data they manage. 
In the absence of a better solution to this pressing need, the Hive Metastore is sometimes used, but it only serves simple relational schemas—a dead end for representing a Variety of data. As a result, data lake projects typically lack even the most rudimentary information about the data they contain or how it is being used. For emerging Big Data customers and vendors, this Big Metadata problem is hitting a crisis point. Two significant classes of end-user problems follow directly from the absence of shared metadata services. The first is poor productivity. Analysts are often unable to discover what data exists, much less how it has been previously used by peers. Valuable data is left unused and human effort is routinely duplicated—particularly in a schema-on-use world with raw data that requires preparation. “Tribal knowledge” is a common description for how organizations manage this productivity problem. This is clearly not a systematic solution, and scales very poorly as organizations grow. The second problem stemming from the absence of a system to track metadata is governance risk. Data management necessarily entails tracking or controlling who accesses data, what they do with it, where they put it, and how it gets consumed downstream. In the absence of a standard place to store metadata and answer these questions, it is impossible to enforce policies and/or audit behavior. As a result, many administrators marginalize their Big Data stack as a playpen for non-critical data, and thereby inhibit both the adoption and the potential of new technologies. In our experiences deploying and managing systems in production, we have seen the need for a common service layer to support the capture, publishing and sharing of metadata information in a flexible way. The effort in this paper began by addressing that need. 1.2 Opportunity: Data Context The lack of metadata services in the Big Data stack can be viewed as an opportunity: a clean slate to rethink how we track and leverage modern usage of data. Storage economics and schema-on-use agility suggest that the Data Lake movement could go much farther than Data Warehousing in enabling diverse, widely-used central repositories of data that can adapt to new data formats and rapidly changing organizations. In that spirit, we advocate rethinking traditional metadata in a far more comprehensive sense. More generally, what we should strive to capture is the full context of data. To emphasize the conceptual shifts of this data context, and as a complement to the “three Vs” of Big Data, we introduce three key sources of information—the ABCs of Data Context. Each represents a major change from the simple metadata of traditional enterprise data management. Applications: Application context is the core information that describes how raw bits get interpreted for use. In modern agile scenarios, application context is often relativistic (many schemas for the same data) and complex (with custom code for data interpretation). Application context ranges from basic data descriptions (encodings, schemas, ontologies, tags), to statistical models and parameters, to user annotations. All of the artifacts involved—wrangling scripts, view definitions, model parameters, training sets, etc.—are critical aspects of application context. Behavior: This is information about how data was created and used over time. 
In decoupled systems, behavioral context spans multiple services, applications and formats and often originates from high-volume sources (e.g., machine-generated usage logs). Not only must we track upstream lineage—the data sets and code that led to the creation of a data object—we must also track the downstream lineage, including data products derived from this data object. Aside from data lineage, behavioral context includes logs of usage: the “digital exhaust” left behind by computations on the data. As a result, behavioral context metadata can often be larger than the data itself. Change: This is information about the version history of data, code and associated information, including changes over time to both structure and content. Traditional metadata focused on the present, but historical context is increasingly useful in agile organizations. This context can be a linear sequence of versions, or it can encompass branching and concurrent evolution, along with interactions between co-evolving versions. By tracking the version history of all objects spanning code, data, and entire analytics pipelines, we can simplify debugging and enable auditing and counterfactual analysis. Data context services represent an opportunity for database technology innovation, and an urgent requirement for the field. We are building an open-source data context service we call Ground, to serve as a central model, API and repository for capturing the broad context in which data gets used. Our goal is to address practical problems for the Big Data community in the short term and to open up opportunities for long-term research and innovation. In the remainder of the paper we illustrate the opportunities in this space, design requirements for solutions, and our initial efforts to tackle these challenges in open source. 2. DIVERSE USE CASES To illustrate the potential of the Ground data context service, we describe two concrete scenarios in which Ground can aid in data discovery, facilitate better collaboration, protect confidentiality, help diagnose problems, and ultimately enable new value to be captured from existing data. After presenting these scenarios, we explore the design requirements for a data context service. 2.1 Scenario: Context-Enabled Analytics This scenario represents the kind of usage we see in relatively technical organizations making aggressive use of data for machine-learning driven applications like customer targeting. In these organizations, data analysts make extensive use of flexible tools for data preparation and visualization and often have some SQL skills, while data scientists actively prototype and develop custom software for machine learning applications. Janet is an analyst in the Customer Satisfaction department at a large bank. She suspects that the social network behavior of customers can predict if they are likely to close their accounts (customer churn). Janet has access to a rich context-service-enabled data lake and a wide range of tools that she can use to assess her hypothesis. Janet begins by downloading a free sample of a social media feed. She uses an advanced data catalog application (we’ll call it “Catly”) which connects to Ground, recognizes the co",
"title": ""
},
{
"docid": "pos:1840606_1",
"text": "As data-driven methods are becoming pervasive in a wide variety of disciplines, there is an urgent need to develop scalable and sustainable tools to simplify the process of data science, to make it easier for the users to keep track of the analyses being performed and datasets being generated, and to enable the users to understand and analyze the workflows. In this paper, we describe our vision of a unified provenance and metadata management system to support lifecycle management of complex collaborative data science workflows. We argue that the information about the analysis processes and data artifacts can, and should be, captured in a semi-passive manner; and we show that querying and analyzing this information can not only simplify bookkeeping and debugging tasks but also enable a rich new set of capabilities like identifying flaws in the data science process itself. It can also significantly reduce the user time spent in fixing post-deployment problems through automated analysis and monitoring. We have implemented a prototype system, PROVDB, on top of git and Neo4j, and we describe its key features and capabilities.",
"title": ""
},
{
"docid": "pos:1840606_2",
"text": "We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyperparameters. Inspired by the principle of “optimism under uncertainty,” we propose an innovative strategy, Data Allocation using Upper Bounds (DAUB), which robustly achieves these objectives across a variety of real-world datasets. We further develop substantial theoretical support for DAUB in an idealized setting where the expected accuracy of a classifier trained on n samples can be known exactly. Under these conditions we establish a rigorous sub-linear bound on the regret of the approach (in terms of misallocated data), as well as a rigorous bound on suboptimality of the selected classifier. Our accuracy estimates using real-world datasets only entail mild violations of the theoretical scenario, suggesting that the practical behavior of DAUB is likely to approach the idealized behavior.",
"title": ""
}
] | [
{
"docid": "neg:1840606_0",
"text": "This article describes on-going developments of the VENUS European Project (Virtual ExploratioN of Underwater Sites, http://www.venus-project.eu) concerning the first mission to sea in Pianosa Island, Italy in October 2006. The VENUS project aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. In this paper we focus on the underwater photogrammetric approach used to survey the archaeological site of Pianosa. After a brief presentation of the archaeological context we shall see the calibration process in such a context. The next part of this paper is dedicated to the survey: it is divided into two parts: a DTM of the site (combining acoustic bathymetry and photogrammetry) and a specific artefact plotting dedicated to the amphorae present on the site. * Corresponding author. This is useful to know for communication with the appropriate person in cases with more than one author. ** http://cordis.europa.eu/ist/digicult/venus.htm or the project web site : http://www.venus-project.eu 1. VENUS, VIRTUAL EXPLORATION OF UNDERWATER SITES The VENUS project is funded by European Commission, Information Society Technologies (IST) programme of the 6th FP for RTD . It aims at providing scientific methodologies and technological tools for the virtual exploration of deep underwater archaeological sites. (Chapman et alii, 2006). Underwater archaeological sites, for example shipwrecks, offer extraordinary opportunities for archaeologists due to factors such as darkness, low temperatures and a low oxygen rate which are favourable to preservation. On the other hand, these sites can not be experienced first hand and today are continuously jeopardised by activities such as deep trawling that destroy their surface layer. The VENUS project will improve the accessibility of underwater sites by generating thorough and exhaustive 3D records for virtual exploration. The project team plans to survey shipwrecks at various depths and to explore advanced methods and techniques of data acquisition through autonomous or remotely operated unmanned vehicles with innovative sonar and photogrammetry equipment. Research will also cover aspects such as data processing and storage, plotting of archaeological artefacts and information system management. This work will result in a series of best practices and procedures for collecting and storing data. Further, VENUS will develop virtual reality and augmented reality tools for the visualisation of an immersive interaction with a digital model of an underwater site. The model will be made accessible online, both as an example of digital preservation and for demonstrating new facilities of exploration in a safe, cost-effective and pedagogical environment. The virtual underwater site will provide archaeologists with an improved insight into the data and the general public with simulated dives to the site. The VENUS consortium, composed of eleven partners, is pooling expertise in various disciplines: archaeology and underwater exploration, knowledge representation and photogrammetry, virtual reality and digital data preservation. This paper focuses on the first experimentation in Pianosa Island, Tuscany, Italy. The document is structured as follows. 
A short description of the archaeological context, then the next section explains the survey method: calibration, collecting photographs using ROV and divers, photographs orientation and a particular way to measure amphorae with photogrammetry using archaeological knowledge. A section shows 3D results in VRML and finally we present the future planned work. 2. THE UNDERWATER ARCHAEOLOGICAL SITE OF PIANOSA ISLAND The underwater archaeological site of Pianosa, discovered in 1989 by volunteer divers (Giuseppe Adriani, Paolo Vaccari), is located at a depth of 35 m, close to the Scoglio della Scola, in XXI International CIPA Symposium, 01-06 October, Athens, Greece",
"title": ""
},
{
"docid": "neg:1840606_1",
"text": "In this digital age, most business is conducted electronically. This contemporary paradigm creates openings for potentially harmful unanticipated information security incidents of both a criminal or civil nature, with the potential to cause considerable direct and indirect damage to smaller businesses. Electronic evidence is fundamental to the successful handling of such incidents. If an organisation does not prepare proactively for such incidents it is highly likely that important relevant digital evidence will not be available. Not being able to respond effectively could be extremely damaging to smaller companies, as they are unable to absorb losses as easily as larger organisations. In order to prepare smaller businesses for incidents of this nature, the implementation of Digital Forensic Readiness policies and procedures is necessitated. Numerous varying factors such as the perceived high cost, as well as the current lack of forensic skills, make the implementation of Digital Forensic Readiness appear difficult if not infeasible for smaller organisations. In order to solve this problem it is necessary to develop a scalable and flexible framework for the implementation of Digital Forensic Readiness based on the individual risk profile of a small to medium enterprise (SME). This paper aims to determine, from literature, the concepts of Digital Forensic Readiness and how they apply to SMEs. Based on the findings, the aspects of Digital Forensics and organisational characteristics that should be included in such a framework is highlighted.",
"title": ""
},
{
"docid": "neg:1840606_2",
"text": "In this paper, we address a new research problem on active learning from data streams where data volumes grow continuously and labeling all data is considered expensive and impractical. The objective is to label a small portion of stream data from which a model is derived to predict newly arrived instances as accurate as possible. In order to tackle the challenges raised by data streams' dynamic nature, we propose a classifier ensembling based active learning framework which selectively labels instances from data streams to build an accurate classifier. A minimal variance principle is introduced to guide instance labeling from data streams. In addition, a weight updating rule is derived to ensure that our instance labeling process can adaptively adjust to dynamic drifting concepts in the data. Experimental results on synthetic and real-world data demonstrate the performances of the proposed efforts in comparison with other simple approaches.",
"title": ""
},
{
"docid": "neg:1840606_3",
"text": "Vehicular Ad hoc Networks (VANETs) are classified as an application of Mobile Ad-hoc Networks (MANETs) that has the potential in improving road safety and providing Intelligent Transportation System (ITS). Vehicular communication system facilitates communication devices for exchange of information among vehicles and vehicles and Road Side Units (RSUs).The era of vehicular adhoc networks is now gaining attention and momentum. Researchers and developers have built VANET simulation tools to allow the study and evaluation of various routing protocols, various emergency warning protocols and others VANET applications. Simulation of VANET routing protocols and its applications is fundamentally different from MANETs simulation because in VANETs, vehicular environment impose new issues and requirements, such as multi-path fading, roadside obstacles, trip models, traffic flow models, traffic lights, traffic congestion, vehicular speed and mobility, drivers behaviour etc. This paper presents a comparative study of various publicly available VANET simulation tools. Currently, there are network simulators, VANET mobility generators and VANET simulators are publicly available. In particular, this paper contrast their software characteristics, graphical user interface, accuracy of simulation, ease of use, popularity, input requirements, output visualization capabilities etc. Keywords-Ad-hoc network, ITS (Intelligent Transportation System), MANET, Simulation, VANET.",
"title": ""
},
{
"docid": "neg:1840606_4",
"text": "Article history: Available online 26 October 2012 We present an O ( √ n log n)-approximation algorithm for the problem of finding the sparsest spanner of a given directed graph G on n vertices. A spanner of a graph is a sparse subgraph that approximately preserves distances in the original graph. More precisely, given a graph G = (V , E) with nonnegative edge lengths d : E → R 0 and a stretch k 1, a subgraph H = (V , E H ) is a k-spanner of G if for every edge (s, t) ∈ E , the graph H contains a path from s to t of length at most k · d(s, t). The previous best approximation ratio was Õ (n2/3), due to Dinitz and Krauthgamer (STOC ’11). We also improve the approximation ratio for the important special case of directed 3-spanners with unit edge lengths from Õ ( √ n ) to O (n1/3 log n). The best previously known algorithms for this problem are due to Berman, Raskhodnikova and Ruan (FSTTCS ’10) and Dinitz and Krauthgamer. The approximation ratio of our algorithm almost matches Dinitz and Krauthgamer’s lower bound for the integrality gap of a natural linear programming relaxation. Our algorithm directly implies an O (n1/3 log n)-approximation for the 3-spanner problem on undirected graphs with unit lengths. An easy O ( √ n )-approximation algorithm for this problem has been the best known for decades. Finally, we consider the Directed Steiner Forest problem: given a directed graph with edge costs and a collection of ordered vertex pairs, find a minimum-cost subgraph that contains a path between every prescribed pair. We obtain an approximation ratio of O (n2/3+ ) for any constant > 0, which improves the O (n · min(n4/5,m2/3)) ratio due to Feldman, Kortsarz and Nutov (JCSS’12). © 2012 Elsevier Inc. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840606_5",
"text": "BACKGROUND\nBiomarkers have many distinct purposes, and depending on their intended use, the validation process varies substantially.\n\n\nPURPOSE\nThe goal of this article is to provide an introduction to the topic of biomarkers, and then to discuss three specific types of biomarkers, namely, prognostic, predictive, and surrogate.\n\n\nRESULTS\nA principle challenge for biomarker validation from a statistical perspective is the issue of multiplicity. In general, the solution to this multiplicity challenge is well known to statisticians: pre-specification and replication. Critical requirements for prognostic marker validation include uniform treatment, complete follow-up, unbiased case selection, and complete ascertainment of the many possible confounders that exist in the context of an observational sample. In the case of predictive biomarker validation, observational data are clearly inadequate and randomized controlled trials are mandatory. Within the context of randomization, strategies for predictive marker validation can be grouped into two categories: retrospective versus prospective validation. The critical validation criteria for a surrogate endpoint is to ensure that if a trial uses a surrogate endpoint, the trial will result in the same inferences as if the trial had observed the true endpoint. The field of surrogate endpoint validation has now moved to the multi-trial or meta-analytic setting as the preferred method.\n\n\nCONCLUSIONS\nBiomarkers are a highly active research area. For all biomarker developmental and validation studies, the importance of fundamental statistical concepts remains the following: pre-specification of hypotheses, randomization, and replication. Further statistical methodology research in this area is clearly needed as we move forward.",
"title": ""
},
{
"docid": "neg:1840606_6",
"text": "Acromioclavicular (AC) joint separations are common injuries of the shoulder girdle, especially in the young and active population. Typically the mechanism of this injury is a direct force against the lateral aspect of the adducted shoulder, the magnitude of which affects injury severity. While low-grade injuries are frequently managed successfully using non-surgical measures, high-grade injuries frequently warrant surgical intervention to minimize pain and maximize shoulder function. Factors such as duration of injury and activity level should also be taken into account in an effort to individualize each patient's treatment. A number of surgical techniques have been introduced to manage symptomatic, high-grade injuries. The purpose of this article is to review the important anatomy, biomechanical background, and clinical management of this entity.",
"title": ""
},
{
"docid": "neg:1840606_7",
"text": "In recent years, the venous flap has been highly regarded in microsurgical and reconstructive surgeries, especially in the reconstruction of hand and digit injuries. It is easily designed and harvested with good quality. It is thin and pliable, without the need of sacrificing a major artery at the donor site, and has no limitation on the donor site. It can be transferred not only as a pure skin flap, but also as a composite flap including tendons and nerves as well as vein grafts. All these advantages make it an optimal candidate for hand and digit reconstruction when conventional flaps are limited or unavailable. In this article, we review its classifications and the selection of donor sites, update its clinical applications, and summarize its indications for all types of venous flaps in hand and digit reconstruction.",
"title": ""
},
{
"docid": "neg:1840606_8",
"text": "Decisions are often guided by generalizing from past experiences. Fundamental questions remain regarding the cognitive and neural mechanisms by which generalization takes place. Prior data suggest that generalization may stem from inference-based processes at the time of generalization. By contrast, generalization may emerge from mnemonic processes occurring while premise events are encoded. Here, participants engaged in a two-phase learning and generalization task, wherein they learned a series of overlapping associations and subsequently generalized what they learned to novel stimulus combinations. Functional MRI revealed that successful generalization was associated with coupled changes in learning-phase activity in the hippocampus and midbrain (ventral tegmental area/substantia nigra). These findings provide evidence for generalization based on integrative encoding, whereby overlapping past events are integrated into a linked mnemonic representation. Hippocampal-midbrain interactions support the dynamic integration of experiences, providing a powerful mechanism for building a rich associative history that extends beyond individual events.",
"title": ""
},
{
"docid": "neg:1840606_9",
"text": "The EDT2 750V uses a micro pattern trench cell with a narrow mesa for reducing the on-state losses with a tailored channel width for short circuit robustness. To account for high system stray inductances (Lstray) and currents for Full or Hybrid Electric Vehicle inverter applications, it features a 750V voltage rating compared to the predecessor IGBT3 650V by an optimized vertical structure and proper plasma shaping. This plasma distribution not only determines the performance tradeoff between on-state and switching losses, but at the same time defines the surge voltage for a given Lstray*I in the application as visualized in a switch-off loss vs. surge voltage trade-off diagram. Shaping of the feedback capacitance Cgc optimizes the tunability of the switching slopes by means of an external gate resistor for an easier adaption to a wider range of system inductances with low losses.",
"title": ""
},
{
"docid": "neg:1840606_10",
"text": "This paper presents a strategy to generate generic summary of documents using Probabilistic Latent Semantic Indexing. Generally a document contains several topics rather than a single one. Summaries created by human beings tend to cover several topics to give the readers an overall idea about the original document. Hence we can expect that a summary containing sentences from better part of the topic spectrum should make a better summary. PLSI has proven to be an effective method in topic detection. In this paper we present a method for creating extractive summary of the document by using PLSI to analyze the features of document such as term frequency and graph structure. We also show our results, which was evaluated using ROUGE, and compare the results with other techniques, proposed in the past.",
"title": ""
},
{
"docid": "neg:1840606_11",
"text": "Recently, some E-commerce sites launch a new interaction box called Tips on their mobile apps. Users can express their experience and feelings or provide suggestions using short texts typically several words or one sentence. In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings. Jointly modeling these two facets is helpful for designing a better recommendation system. While some existing models integrate text information such as item specifications or user reviews into user and item latent factors for improving the rating prediction, no existing works consider tips for improving recommendation quality. We propose a deep learning based framework named NRT which can simultaneously predict precise ratings and generate abstractive tips with good linguistic quality simulating user experience and feelings. For abstractive tips generation, gated recurrent neural networks are employed to \"translate'' user and item latent representations into a concise sentence. Extensive experiments on benchmark datasets from different domains show that NRT achieves significant improvements over the state-of-the-art methods. Moreover, the generated tips can vividly predict the user experience and feelings.",
"title": ""
},
{
"docid": "neg:1840606_12",
"text": "Neural sequence-to-sequence model has achieved great success in abstractive summarization task. However, due to the limit of input length, most of previous works can only utilize lead sentences as the input to generate the abstractive summarization, which ignores crucial information of the document. To alleviate this problem, we propose a novel approach to improve neural sentence summarization by using extractive summarization, which aims at taking full advantage of the document information as much as possible. Furthermore, we present both of streamline strategy and system combination strategy to achieve the fusion of the contents in different views, which can be easily adapted to other domains. Experimental results on CNN/Daily Mail dataset demonstrate both our proposed strategies can significantly improve the performance of neural sentence summarization.",
"title": ""
},
{
"docid": "neg:1840606_13",
"text": "Intellectual Property issues (IP) is a concern that refrains companies to cooperate in whatever of Open Innovation (OI) processes. Particularly, SME consider open innovation as uncertain, risky processes. Despite the opportunities that online OI platforms offer, SMEs have so far failed to embrace them, and proved reluctant to OI. We intend to find whether special collaborative spaces that facilitate a sort of preventive idea claiming, explicit claiming evolution of defensive publication, as so far patents and publications for prevailing innovation, can be the right complementary instruments in OI as to when stronger IP protection regimes might drive openness by SME in general. These spaces, which we name NIR (Networking Innovation Rooms), are a practical, smart paradigm to boost OI for SME. There users sign smart contracts as NDA which takes charge of timestamping any IP disclosure or creation and declares what corrective actions (if they might apply) might be taken for unauthorised IP usage or disclosure of any of the NDA signers. With Blockchain, a new technology emerges which enables decentralised, fine-grained IP management for OI.",
"title": ""
},
{
"docid": "neg:1840606_14",
"text": "Conceptual natural language processing systems usually rely on case frame instantiation to recognize events and role objects in text. But generating a good set of case frames for a domain is timeconsuming, tedious, and prone to errors of omission. We have developed a corpus-based algorithm for acquiring conceptual case frames empirically from unannotated text. Our algorithm builds on previous research on corpus-based methods for acquiring extraction patterns and semantic lexicons. Given extraction patterns and a semantic lexicon for a domain, our algorithm learns semantic preferences for each extraction pattern and merges the syntactically compatible patterns to produce multi-slot case frames with selectional restrictions. The case frames generate more cohesive output and produce fewer false hits than the original extraction patterns. Our system requires only preclassified training texts and a few hours of manual review to filter the dictionaries, demonstrating that conceptual case frames can be acquired from unannotated text without special training resources.",
"title": ""
},
{
"docid": "neg:1840606_15",
"text": "The discovery of disease-causing mutations typically requires confirmation of the variant or gene in multiple unrelated individuals, and a large number of rare genetic diseases remain unsolved due to difficulty identifying second families. To enable the secure sharing of case records by clinicians and rare disease scientists, we have developed the PhenomeCentral portal (https://phenomecentral.org). Each record includes a phenotypic description and relevant genetic information (exome or candidate genes). PhenomeCentral identifies similar patients in the database based on semantic similarity between clinical features, automatically prioritized genes from whole-exome data, and candidate genes entered by the users, enabling both hypothesis-free and hypothesis-driven matchmaking. Users can then contact other submitters to follow up on promising matches. PhenomeCentral incorporates data for over 1,000 patients with rare genetic diseases, contributed by the FORGE and Care4Rare Canada projects, the US NIH Undiagnosed Diseases Program, the EU Neuromics and ANDDIrare projects, as well as numerous independent clinicians and scientists. Though the majority of these records have associated exome data, most lack a molecular diagnosis. PhenomeCentral has already been used to identify causative mutations for several patients, and its ability to find matching patients and diagnose these diseases will grow with each additional patient that is entered.",
"title": ""
},
{
"docid": "neg:1840606_16",
"text": "The smart grid is an innovative energy network that will improve the conventional electrical grid network to be more reliable, cooperative, responsive, and economical. Within the context of the new capabilities, advanced data sensing, communication, and networking technology will play a significant role in shaping the future of the smart grid. The smart grid will require a flexible and efficient framework to ensure the collection of timely and accurate information from various locations in power grid to provide continuous and reliable operation. This article presents a tutorial on the sensor data collection, communications, and networking issues for the smart grid. First, the applications of data sensing in the smart grid are reviewed. Then, the requirements for data sensing and collection, the corresponding sensors and actuators, and the communication and networking architecture are discussed. The communication technologies and the data communication network architecture and protocols for the smart grid are described. Next, different emerging techniques for data sensing, communications, and sensor data networking are reviewed. The issues related to security of data sensing and communications in the smart grid are then discussed. To this end, the standardization activities and use cases related to data sensing and communications in the smart grid are summarized. Finally, several open issues and challenges are outlined. Copyright © 2012 John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "neg:1840606_17",
"text": "MOTIVATION\nMany problems in data integration in bioinformatics can be posed as one common question: Are two sets of observations generated by the same distribution? We propose a kernel-based statistical test for this problem, based on the fact that two distributions are different if and only if there exists at least one function having different expectation on the two distributions. Consequently we use the maximum discrepancy between function means as the basis of a test statistic. The Maximum Mean Discrepancy (MMD) can take advantage of the kernel trick, which allows us to apply it not only to vectors, but strings, sequences, graphs, and other common structured data types arising in molecular biology.\n\n\nRESULTS\nWe study the practical feasibility of an MMD-based test on three central data integration tasks: Testing cross-platform comparability of microarray data, cancer diagnosis, and data-content based schema matching for two different protein function classification schemas. In all of these experiments, including high-dimensional ones, MMD is very accurate in finding samples that were generated from the same distribution, and outperforms its best competitors.\n\n\nCONCLUSIONS\nWe have defined a novel statistical test of whether two samples are from the same distribution, compatible with both multivariate and structured data, that is fast, easy to implement, and works well, as confirmed by our experiments.\n\n\nAVAILABILITY\nhttp://www.dbs.ifi.lmu.de/~borgward/MMD.",
"title": ""
},
{
"docid": "neg:1840606_18",
"text": "Profiling data to determine metadata about a given dataset is an important and frequent activity of any IT professional and researcher and is necessary for various use-cases. It encompasses a vast array of methods to examine datasets and produce metadata. Among the simpler results are statistics, such as the number of null values and distinct values in a column, its data type, or the most frequent patterns of its data values. Metadata that are more difficult to compute involve multiple columns, namely correlations, unique column combinations, functional dependencies, and inclusion dependencies. Further techniques detect conditional properties of the dataset at hand. This survey provides a classification of data profiling tasks and comprehensively reviews the state of the art for each class. In addition, we review data profiling tools and systems from research and industry. We conclude with an outlook on the future of data profiling beyond traditional profiling tasks and beyond relational databases.",
"title": ""
}
] |
1840607 | Learning an Optimizer for Image Deconvolution | [
{
"docid": "pos:1840607_0",
"text": "Many fundamental image-related problems involve deconvolution operators. Real blur degradation seldom complies with an ideal linear convolution model due to camera noise, saturation, image compression, to name a few. Instead of perfectly modeling outliers, which is rather challenging from a generative model perspective, we develop a deep convolutional neural network to capture the characteristics of degradation. We note directly applying existing deep neural networks does not produce reasonable results. Our solution is to establish the connection between traditional optimization-based schemes and a neural network architecture where a novel, separable structure is introduced as a reliable support for robust deconvolution against artifacts. Our network contains two submodules, both trained in a supervised manner with proper initialization. They yield decent performance on non-blind image deconvolution compared to previous generative-model based methods.",
"title": ""
},
{
"docid": "pos:1840607_1",
"text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.",
"title": ""
}
] | [
{
"docid": "neg:1840607_0",
"text": "In the current world, sports produce considerable data such as players skills, game results, season matches, leagues management, etc. The big challenge in sports science is to analyze this data to gain a competitive advantage. The analysis can be done using several techniques and statistical methods in order to produce valuable information. The problem of modeling soccer data has become increasingly popular in the last few years, with the prediction of results being the most popular topic. In this paper, we propose a Bayesian Model based on rank position and shared history that predicts the outcome of future soccer matches. The model was tested using a data set containing the results of over 200,000 soccer matches from different soccer leagues around the world.",
"title": ""
},
{
"docid": "neg:1840607_1",
"text": "The contradiction between the stated preferences of social media users toward privacy and actual privacy behaviors has suggested a willingness to trade privacy regulation for social goals. This study employs data from a survey of 361 social media users, which collected data on privacy attitudes, online privacy strategies and behaviors, and the uses and gratifications that social media experiences bring. Using canonical correlation, it examines in detail how underlying dimensions of privacy concern relate to specific contexts of social media use, and how these contexts relate to various domains of privacyprotecting behaviors. In addition, this research identifies how specific areas of privacy concern relate to levels of privacy regulation, offering new insight into the privacy paradox. In doing so, this study lends greater nuance to how the dynamic of privacy and sociality is understood and enacted by users, and how privacy management and the motivations underlying media use intersect.",
"title": ""
},
{
"docid": "neg:1840607_2",
"text": "In this paper, we study a new learning paradigm for neural machine translation (NMT). Instead of maximizing the likelihood of the human translation as in previous works, we minimize the distinction between human translation and the translation given by an NMT model. To achieve this goal, inspired by the recent success of generative adversarial networks (GANs), we employ an adversarial training architecture and name it as AdversarialNMT. In Adversarial-NMT, the training of the NMT model is assisted by an adversary, which is an elaborately designed 2D convolutional neural network (CNN). The goal of the adversary is to differentiate the translation result generated by the NMT model from that by human. The goal of the NMT model is to produce high quality translations so as to cheat the adversary. A policy gradient method is leveraged to co-train the NMT model and the adversary. Experimental results on English→French and German→English translation tasks show that Adversarial-NMT can achieve significantly better translation quality than several strong baselines.",
"title": ""
},
{
"docid": "neg:1840607_3",
"text": "The elbow patients herein discussed feature common soft tissue conditions such as tennis elbow, golfers' elbow and olecranon bursitis. Relevant anatomical structures for these conditions can easily be identified and demonstrated by cross examination by instructors and participants. Patients usually present rotator cuff tendinopathy, frozen shoulder, axillary neuropathy and suprascapular neuropathy. The structures involved in tendinopathy and frozen shoulder can be easily identified and demonstrated under normal conditions. The axillary and the suprascapular nerves have surface landmarks but cannot be palpated. In neuropathy however, physical findings in both neuropathies are pathognomonic and will be discussed.",
"title": ""
},
{
"docid": "neg:1840607_4",
"text": "Multiview learning has shown promising potential in many applications. However, most techniques are focused on either view consistency, or view diversity. In this paper, we introduce a novel multiview boosting algorithm, called Boost.SH, that computes weak classifiers independently of each view but uses a shared weight distribution to propagate information among the multiple views to ensure consistency. To encourage diversity, we introduce randomized Boost.SH and show its convergence to the greedy Boost.SH solution in the sense of minimizing regret using the framework of adversarial multiarmed bandits. We also introduce a variant of Boost.SH that combines decisions from multiple experts for recommending views for classification. We propose an expert strategy for multiview learning based on inverse variance, which explores both consistency and diversity. Experiments on biometric recognition, document categorization, multilingual text, and yeast genomic multiview data sets demonstrate the advantage of Boost.SH (85%) compared with other boosting algorithms like AdaBoost (82%) using concatenated views and substantially better than a multiview kernel learning algorithm (74%).",
"title": ""
},
{
"docid": "neg:1840607_5",
"text": "Underactuated systems offer compact design with easy actuation and control but at the cost of limited stable configurations and reduced dexterity compared to the directly driven and fully actuated systems. Here, we propose a compact origami-based design in which we can modulate the material stiffness of the joints and thereby control the stable configurations and the overall stiffness in an underactuated robot. The robotic origami, robogami, design uses multiple functional layers in nominally two-dimensional robots to achieve the desired functionality. To control the stiffness of the structure, we adjust the elastic modulus of a shape memory polymer using an embedded customized stretchable heater. We study the actuation of a robogami finger with three joints and determine its stable configurations and contact forces at different stiffness settings. We monitor the configuration of the finger using feedback from customized curvature sensors embedded in each joint. A scaled down version of the design is used in a two-fingered gripper and different grasp modes are achieved by activating different sets of joints.",
"title": ""
},
{
"docid": "neg:1840607_6",
"text": "This paper presents a system that transforms the speech signals of speakers with physical speech disabilities into a more intelligible orm that can be more easily understood by listeners. These transformations are based on the correction of pronunciation errors y the removal of repeated sounds, the insertion of deleted sounds, the devoicing of unvoiced phonemes, the adjustment of the empo of speech by phase vocoding, and the adjustment of the frequency characteristics of speech by anchor-based morphing of he spectrum. These transformations are based on observations of disabled articulation including improper glottal voicing, lessened ongue movement, and lessened energy produced by the lungs. This system is a substantial step towards full automation in speech ransformation without the need for expert or clinical intervention. Among human listeners, recognition rates increased up to 191% (from 21.6% to 41.2%) relative to the original speech by using he module that corrects pronunciation errors. Several types of modified dysarthric speech signals are also supplied to a standard utomatic speech recognition system. In that study, the proportion of words correctly recognized increased up to 121% (from 72.7% o 87.9%) relative to the original speech, across various parameterizations of the recognizer. This represents a significant advance owards human-to-human assistive communication software and human–computer interaction. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840607_7",
"text": "The chapter 1 on Social Media and Social Computing has documented the nature and characteristics of social networks and community detection. The explanation about the emerging of social networks and their properties constitute this chapter followed by a discussion on social community. The nodes, ties and influence in the social networks are the core of the discussion in the second chapter. Centrality is the core discussion here and the degree of centrality and its measure is explained. Understanding network topology is required for social networks concepts.",
"title": ""
},
{
"docid": "neg:1840607_8",
"text": "Abstract The state of security on the Internet is poor and progress toward increased protection is slow. This has given rise to a class of action referred to as “Ethical Hacking”. Companies are releasing software with little or no testing and no formal verification and expecting consumers to debug their product for them. For dot.com companies time-to-market is vital, security is not perceived as a marketing advantage, and implementing a secure design process an expensive sunk expense such that there is no economic incentive to produce bug-free software. There are even legislative initiatives to release software manufacturers from legal responsibility to their defective software.",
"title": ""
},
{
"docid": "neg:1840607_9",
"text": "The past few years have seen an explosion of interest in the epigenetics of cancer. This has been a consequence of both the exciting coalescence of the chromatin and DNA methylation fields, and the realization that DNA methylation changes are involved in human malignancies. The ubiquity of DNA methylation changes has opened the way to a host of innovative diagnostic and therapeutic strategies. Recent advances attest to the great promise of DNA methylation markers as powerful future tools in the clinic.",
"title": ""
},
{
"docid": "neg:1840607_10",
"text": "Dimensionality reduction is an important aspect in the pattern classification literature, and linear discriminant analysis (LDA) is one of the most widely studied dimensionality reduction technique. The application of variants of LDA technique for solving small sample size (SSS) problem can be found in many research areas e.g. face recognition, bioinformatics, text recognition, etc. The improvement of the performance of variants of LDA technique has great potential in various fields of research. In this paper, we present an overview of these methods. We covered the type, characteristics and taxonomy of these methods which can overcome SSS problem. We have also highlighted some important datasets and software/ packages.",
"title": ""
},
{
"docid": "neg:1840607_11",
"text": "Autonomous long range navigation in partially known planetary-like terrain is an open challenge for robotics. Navigating several hundreds of meters without any human intervention requires the robot to be able to build various representations of its environment, to plan and execute trajectories according to the kind of terrain traversed, to localize itself as it moves, and to schedule, start, control and interrupt these various activities. In this paper, we brie y describe some functionalities that are currently running on board the Marsokhod model robot Lama at LAAS/CNRS. We then focus on the necessity to integrate various instances of the perception and decision functionalities, and on the di culties raised by this integration.",
"title": ""
},
{
"docid": "neg:1840607_12",
"text": "This dissertation investigates the use of hierarchy and problem decomposition as a means of solving large, stochastic, sequential decision problems. These problems are framed as Markov decision problems (MDPs). The new technical content of this dissertation begins with a discussion of the concept of temporal abstraction. Temporal abstraction is shown to be equivalent to the transformation of a policy deened over a region of an MDP to an action in a semi-Markov decision problem (SMDP). Several algorithms are presented for performing this transformation eeciently. This dissertation introduces the HAM method for generating hierarchical, temporally abstract actions. This method permits the partial speciication of abstract actions in a way that corresponds to an abstract plan or strategy. Abstract actions speciied as HAMs can be optimally reened for new tasks by solving a reduced SMDP. The formal results show that traditional MDP algorithms can be used to optimally reene HAMs for new tasks. This can be achieved in much less time than it would take to learn a new policy for the task from scratch. HAMs complement some novel decomposition algorithms that are presented in this dissertation. These algorithms work by constructing a cache of policies for diierent regions of the MDP and then optimally combining the cached solution to produce a global solution that is within provable bounds of the optimal solution. Together, the methods developed in this dissertation provide important tools for 2 producing good policies for large MDPs. Unlike some ad-hoc methods, these methods provide strong formal guarantees. They use prior knowledge in a principled way, and they reduce larger MDPs into smaller ones while maintaining a well-deened relationship between the smaller problem and the larger problem.",
"title": ""
},
{
"docid": "neg:1840607_13",
"text": "This paper outlines possible evolution trends of e-learning, supported by most recent advancements in the World Wide Web. Specifically, we consider a situation in which the Semantic Web technology and tools are widely adopted, and fully integrated within a context of applications exploiting the Internet of Things paradigm. Such a scenario will dramatically impact on learning activities, as well as on teaching strategies and instructional design methodology. In particular, the models characterized by learning pervasiveness and interactivity will be greatly empowered.",
"title": ""
},
{
"docid": "neg:1840607_14",
"text": "Social media such as Twitter have become an important method of communication, with potential opportunities for NLG to facilitate the generation of social media content. We focus on the generation of indicative tweets that contain a link to an external web page. While it is natural and tempting to view the linked web page as the source text from which the tweet is generated in an extractive summarization setting, it is unclear to what extent actual indicative tweets behave like extractive summaries. We collect a corpus of indicative tweets with their associated articles and investigate to what extent they can be derived from the articles using extractive methods. We also consider the impact of the formality and genre of the article. Our results demonstrate the limits of viewing indicative tweet generation as extractive summarization, and point to the need for the development of a methodology for tweet generation that is sensitive to genre-specific issues.",
"title": ""
},
{
"docid": "neg:1840607_15",
"text": "Augmented Reality (AR) was first demonstrated in the 1960s, but only recently have technologies emerged that can be used to easily deploy AR applications to many users. Cameraequipped cell phones with significant processing power and graphics abilities provide an inexpensive and versatile platform for AR applications, while the social networking technology of Web 2.0 provides a large-scale infrastructure for collaboratively producing and distributing geo-referenced AR content. This combination of widely used mobile hardware and Web 2.0 software allows the development of a new type of AR platform that can be used on a global scale. In this paper we describe the Augmented Reality 2.0 concept and present existing work on mobile AR and web technologies that could be used to create AR 2.0 applications.",
"title": ""
},
{
"docid": "neg:1840607_16",
"text": "Computational propaganda has recently exploded into public consciousness. The U.S. presidential campaign of 2016 was marred by evidence, which continues to emerge, of targeted political propaganda and the use of bots to distribute political messages on social media. This computational propaganda is both a social and technical phenomenon. Technical knowledge is necessary to work with the massive databases used for audience targeting; it is necessary to create the bots and algorithms that distribute propaganda; it is necessary to monitor and evaluate the results of these efforts in agile campaigning. Thus, a technical knowledge comparable to those who create and distribute this propaganda is necessary to investigate the phenomenon. However, viewing computational propaganda only from a technical perspective—as a set of variables, models, codes, and algorithms—plays into the hands of those who create it, the platforms that serve it, and the firms that profit from it. The very act of making something technical and impartial makes it seem inevitable and unbiased. This undermines the opportunities to argue for change in the social value and meaning of this content and the structures in which it exists. Bigdata research is necessary to understand the sociotechnical issue of computational propaganda and the influence of technology in politics. However, big data researchers must maintain a critical stance toward the data being used and analyzed so as to ensure that we are critiquing as we go about describing, predicting, or recommending changes. If research studies of computational propaganda and political big data do not engage with the forms of power and knowledge that produce it, then the very possibility for improving the role of social-media platforms in public life evaporates. Definitionally, computational propaganda has two important parts: the technical and the social. Focusing on the technical, Woolley and Howard define computational propaganda as the assemblage of social-media platforms, autonomous agents, and big data tasked with the manipulation of public opinion. In contrast, the social definition of computational propaganda derives from the definition of propaganda—communications that deliberately misrepresent symbols, appealing to emotions and prejudices and bypassing rational thought, to achieve a specific goal of its creators—with computational propaganda understood as propaganda created or disseminated using computational (technical) means. Propaganda has a long history. Scholars who study propaganda as an offline or historical phenomenon have long been split over whether the existence of propaganda is necessarily detrimental to the functioning of democracies. However, the rise of the Internet and, in particular, social media has profoundly changed the landscape of propaganda. It has opened the creation and dissemination of propaganda messages, which were once the province of states and large institutions, to a wide variety of individuals and groups. It has allowed cross-border computational propaganda and interference in domestic political processes by foreign states. The anonymity of the Internet has allowed stateproduced propaganda to be presented as if it were not produced by state actors. The Internet has also provided new affordances for the efficient dissemination of propaganda, through the manipulation of the algorithms and processes that govern online information and through audience targeting based on big data analytics. 
The social effects of the changing nature of propaganda are only just beginning to be understood, and the advancement of this understanding is complicated by the unprecedented marrying of the social and the technical that the Internet age has enabled. The articles in this special issue showcase the state of the art in the use of big data in the study of computational propaganda and the influence of social media on politics. This rapidly emerging field represents a new clash of the highly social and highly technical in both",
"title": ""
},
{
"docid": "neg:1840607_17",
"text": "Two Gram-stain-negative, non-motile, non-spore-forming, rod-shaped bacterial strains, designated 3B-2(T) and 10AO(T), were isolated from a sand sample collected from the west coast of the Korean peninsula by using low-nutrient media, and their taxonomic positions were investigated in a polyphasic study. The strains did not grow on marine agar. They grew optimally at 30 °C and pH 6.5-7.5. Strains 3B-2(T) and 10AO(T) shared 97.5 % 16S rRNA gene sequence similarity and mean level of DNA-DNA relatedness of 12 %. In phylogenetic trees based on 16S rRNA gene sequences, strains 3B-2(T) and 10AO(T), together with several uncultured bacterial clones, formed independent lineages within the evolutionary radiation encompassed by the phylum Bacteroidetes. Strains 3B-2(T) and 10AO(T) contained MK-7 as the predominant menaquinone and iso-C(15 : 0) and C(16 : 1)ω5c as the major fatty acids. The DNA G+C contents of strains 3B-2(T) and 10AO(T) were 42.8 and 44.6 mol%, respectively. Strains 3B-2(T) and 10AO(T) exhibited very low levels of 16S rRNA gene sequence similarity (<85.0 %) to the type strains of recognized bacterial species. These data were sufficient to support the proposal that the novel strains should be differentiated from previously known genera of the phylum Bacteroidetes. On the basis of the data presented, we suggest that strains 3B-2(T) and 10AO(T) represent two distinct novel species of a new genus, for which the names Ohtaekwangia koreensis gen. nov., sp. nov. (the type species; type strain 3B-2(T) = KCTC 23018(T) = CCUG 58939(T)) and Ohtaekwangia kribbensis sp. nov. (type strain 10AO(T) = KCTC 23019(T) = CCUG 58938(T)) are proposed.",
"title": ""
},
{
"docid": "neg:1840607_18",
"text": "It is increasingly recognized that the human planum temporale is not a dedicated language processor, but is in fact engaged in the analysis of many types of complex sound. We propose a model of the human planum temporale as a computational engine for the segregation and matching of spectrotemporal patterns. The model is based on segregating the components of the acoustic world and matching these components with learned spectrotemporal representations. Spectrotemporal information derived from such a 'computational hub' would be gated to higher-order cortical areas for further processing, leading to object recognition and the perception of auditory space. We review the evidence for the model and specific predictions that follow from it.",
"title": ""
},
{
"docid": "neg:1840607_19",
"text": "The multilevel thresholding is an important technique for image processing and pattern recognition. The maximum entropy thresholding has been widely applied in the literature. In this paper, a new multilevel MET algorithm based on the technology of the firefly algorithm is proposed. This proposed method is called the maximum entropy based firefly thresholding method. Four different methods are implemented for comparing to this proposed method: the exhaustive search, the particle swarm optimization, the hybrid cooperative-comprehensive learning based PSO algorithm and the honey bee mating optimization. The experimental results demonstrated that the proposed MEFFT algorithm can search for multiple thresholds which are very close to the optimal ones examined by the exhaustive search method. Compared to the PSO and HCOCLPSO, the segmentation results of using the MEFFT algorithm is significantly improved and the computation time of the proposed MEFFT algorithm is shortest.",
"title": ""
}
] |
1840608 | Radar Cross Section Reduction of a Microstrip Antenna Based on Polarization Conversion Metamaterial | [
{
"docid": "pos:1840608_0",
"text": "In this paper, a novel metamaterial absorber working in the C band frequency range has been proposed to reduce the in-band Radar Cross Section (RCS) of a typical planar antenna. The absorber is first designed in the shape of a hexagonal ring structure having dipoles at the corresponding arms of the rings. The various geometrical parameters of the proposed metamaterial structure have first been optimized using the numerical simulator, and the structure is fabricated and tested. In the second step, the metamaterial absorber is loaded on a microstrip patch antenna working in the same frequency band as that of the metamaterial absorber to reduce the in-band Radar Cross Section (RCS) of the antenna. The prototype is simulated, fabricated and tested. The simulated results show the 99% absorption of the absorber at 6.35 GHz which is in accordance with the measured data. A close agreement between the simulated and the measured results shows that the proposed absorber can be used for the RCS reduction of the planar antenna in order to improve its in-band stealth performance.",
"title": ""
}
] | [
{
"docid": "neg:1840608_0",
"text": "Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models’ usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study population. This article presents the results of a simulation study that examines the performance of likelihood-based tests and the traditionally used Information Criterion (ICs) used for determining the number of classes in mixture modeling. We look at the performance of these tests and indexes for 3 types of mixture models: latent class analysis (LCA), a factor mixture model (FMA), and a growth mixture models (GMM). We evaluate the ability of the tests and indexes to correctly identify the number of classes at three different sample sizes (n D 200, 500, 1,000). Whereas the Bayesian Information Criterion performed the best of the ICs, the bootstrap likelihood ratio test proved to be a very consistent indicator of classes across all of the models considered.",
"title": ""
},
{
"docid": "neg:1840608_1",
"text": "BACKGROUND\nIn Mexico, stunting and anemia have declined but are still high in some regions and subpopulations, whereas overweight and obesity have increased at alarming rates in all age and socioeconomic groups.\n\n\nOBJECTIVE\nThe objective was to describe the coexistence of stunting, anemia, and overweight and obesity at the national, household, and individual levels.\n\n\nDESIGN\nWe estimated national prevalences of and trends for stunting, anemia, and overweight and obesity in children aged <5 y and in school-aged children (5-11 y old) and anemia and overweight and obesity in women aged 20-49 y by using the National Health and Nutrition Surveys conducted in 1988, 1999, 2006, and 2012. With the use of the most recent data (2012), the double burden of malnutrition at the household level was estimated and defined as the coexistence of stunting in children aged <5 y and overweight or obesity in the mother. At the individual level, double burden was defined as concurrent stunting and overweight and obesity in children aged 5-11 y and concurrent anemia and overweight or obesity in children aged 5-11 y and in women. We also tested if the coexistence of the conditions corresponded to expected values, under the assumption of independent distributions of each condition.\n\n\nRESULTS\nAt the household level, the prevalence of concurrent stunting in children aged <5 y and overweight and obesity in mothers was 8.4%; at the individual level, prevalences were 1% for stunting and overweight or obesity and 2.9% for anemia and overweight or obesity in children aged 5-11 y and 7.6% for anemia and overweight or obesity in women. At the household and individual levels in children aged 5-11 y, prevalences of double burden were significantly lower than expected, whereas anemia and the prevalence of overweight or obesity in women were not different from that expected.\n\n\nCONCLUSIONS\nAlthough some prevalences of double burden were lower than expected, assuming independent distributions of the 2 conditions, the coexistence of stunting, overweight or obesity, and anemia at the national, household, and intraindividual levels in Mexico calls for policies and programs to prevent the 3 conditions.",
"title": ""
},
{
"docid": "neg:1840608_2",
"text": "Insect-scale legged robots have the potential to locomote on rough terrain, crawl through confined spaces, and scale vertical and inverted surfaces. However, small scale implies that such robots are unable to carry large payloads. Limited payload capacity forces miniature robots to utilize simple control methods that can be implemented on a simple onboard microprocessor. In this study, the design of a new version of the biologically-inspired Harvard Ambulatory MicroRobot (HAMR) is presented. In order to find the most suitable control inputs for HAMR, maneuverability experiments are conducted for several drive parameters. Ideal input candidates for orientation and lateral velocity control are identified as a result of the maneuverability experiments. Using these control inputs, two simple feedback controllers are implemented to control the orientation and the lateral velocity of the robot. The controllers are used to force the robot to track trajectories with a minimum turning radius of 55 mm and a maximum lateral to normal velocity ratio of 0.8. Due to their simplicity, the controllers presented in this work are ideal for implementation with on-board computation for future HAMR prototypes.",
"title": ""
},
{
"docid": "neg:1840608_3",
"text": "We describe a watermarking scheme for ownership verification and authentication. Depending on the desire of the user, the watermark can be either visible or invisible. The scheme can detect any modification made to the image and indicate the specific locations that have been modified. If the correct key is specified in the watermark extraction procedure, then an output image is returned showing a proper watermark, indicating the image is authentic and has not been changed since the insertion of the watermark. Any modification would be reflected in a corresponding error in the watermark. If the key is incorrect, or if the image was not watermarked, or if the watermarked image is cropped, the watermark extraction algorithm will return an image that resembles random noise. Since it requires a user key during both the insertion and the extraction procedures, it is not possible for an unauthorized user to insert a new watermark or alter the existing watermark so that the resulting image will pass the test. We present secret key and public key versions of the technique.",
"title": ""
},
{
"docid": "neg:1840608_4",
"text": "Crowdsourcing label generation has been a crucial component for many real-world machine learning applications. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of hyperplane binary labeling rules for the Dawid-Skene (and Symmetric DawidSkene ) crowdsourcing model. The bounds can be applied to analyze many commonly used prediction methods, including the majority voting, weighted majority voting and maximum a posteriori (MAP) rules. These bound results can be used to control the error rate and design better algorithms. In particular, under the Symmetric Dawid-Skene model we use simulation to demonstrate that the data-driven EM-MAP rule is a good approximation to the oracle MAP rule which approximately optimizes our upper bound on the mean error rate for any hyperplane binary labeling rule. Meanwhile, the average error rate of the EM-MAP rule is bounded well by the upper bound on the mean error rate of the oracle MAP rule in the simulation.",
"title": ""
},
{
"docid": "neg:1840608_5",
"text": "This thesis performs an empirical analysis of Word2Vec by comparing its output to WordNet, a well-known, human-curated lexical database. It finds that Word2Vec tends to uncover more of certain types of semantic relations than others – with Word2Vec returning more hypernyms, synonomyns and hyponyms than hyponyms or holonyms. It also shows the probability that neighbors separated by a given cosine distance in Word2Vec are semantically related in WordNet. This result both adds to our understanding of the stillunknown Word2Vec and helps to benchmark new semantic tools built from word vectors. Word2Vec, Natural Language Processing, WordNet, Distributional Semantics",
"title": ""
},
{
"docid": "neg:1840608_6",
"text": "Automated text summarization is important to for humans to better manage the massive information explosion. Several machine learning approaches could be successfully used to handle the problem. This paper reports the results of our study to compare the performance between neural networks and support vector machines for text summarization. Both models have the ability to discover non-linear data and are effective model when dealing with large datasets.",
"title": ""
},
{
"docid": "neg:1840608_7",
"text": "Two decades since the idea of using software diversity for security was put forward, ASLR is the only technique to see widespread deployment. This is puzzling since academic security researchers have published scores of papers claiming to advance the state of the art in the area of code randomization. Unfortunately, these improved diversity techniques are generally less deployable than integrity-based techniques, such as control-flow integrity, due to their limited compatibility with existing optimization, development, and distribution practices. This paper contributes yet another diversity technique called pagerando. Rather than trading off practicality for security, we first and foremost aim for deployability and interoperability. Most code randomization techniques interfere with memory sharing and deduplication optimization across processes and virtual machines, ours does not. We randomize at the granularity of individual code pages but never rewrite page contents. This also avoids incompatibilities with code integrity mechanisms that only allow signed code to be mapped into memory and prevent any subsequent changes. On Android, pagerando fully adheres to the default SELinux policies. All practical mitigations must interoperate with unprotected legacy code, our implementation transparently interoperates with unmodified applications and libraries. To support our claims of practicality, we demonstrate that our technique can be integrated into and protect all shared libraries shipped with stock Android 6.0. We also consider hardening of non-shared libraries and executables and other concerns that must be addressed to put software diversity defenses on par with integrity-based mitigations such as CFI.",
"title": ""
},
{
"docid": "neg:1840608_8",
"text": "We consider adaptive meshless discretisation of the Dirichlet problem for Poisson equation based on numerical differentiation stencils obtained with the help of radial basis functions. New meshless stencil selection and adaptive refinement algorithms are proposed in 2D. Numerical experiments show that the accuracy of the solution is comparable with, and often better than that achieved by the mesh-based adaptive finite element method.",
"title": ""
},
{
"docid": "neg:1840608_9",
"text": "Link prediction appears as a central problem of network science, as it calls for unfolding the mechanisms that govern the micro-dynamics of the network. In this work, we are interested in ego-networks, that is the mere information of interactions of a node to its neighbors, in the context of social relationships. As the structural information is very poor, we rely on another source of information to predict links among egos’ neighbors: the timing of interactions. We define several features to capture different kinds of temporal information and apply machine learning methods to combine these various features and improve the quality of the prediction. We demonstrate the efficiency of this temporal approach on a cellphone interaction dataset, pointing out features which prove themselves to perform well in this context, in particular the temporal profile of interactions and elapsed time between contacts.",
"title": ""
},
{
"docid": "neg:1840608_10",
"text": "Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued supports the \"state instability\" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.",
"title": ""
},
{
"docid": "neg:1840608_11",
"text": "In this paper we show that the Euler number of the compactified Jacobian of a rational curve C with locally planar singularities is equal to the multiplicity of the δ-constant stratum in the base of a semi-universal deformation of C. In particular, the multiplicity assigned by Yau, Zaslow and Beauville to a rational curve on a K3 surface S coincides with the multiplicity of the normalisation map in the moduli space of stable maps to S. Introduction Let C be a reduced and irreducible projective curve with singular set Σ ⊂ C and let n : C̃ −→ C be its normalisation. The generalised Jacobian JC of C is an extension of JC̃ by an affine commutative group of dimension δ := dimH0(n∗(OC̃)/OC) = ∑",
"title": ""
},
{
"docid": "neg:1840608_12",
"text": "Functional infrared thermal imaging (fITI) is considered a promising method to measure emotional autonomic responses through facial cutaneous thermal variations. However, the facial thermal response to emotions still needs to be investigated within the framework of the dimensional approach to emotions. The main aim of this study was to assess how the facial thermal variations index the emotional arousal and valence dimensions of visual stimuli. Twenty-four participants were presented with three groups of standardized emotional pictures (unpleasant, neutral and pleasant) from the International Affective Picture System. Facial temperature was recorded at the nose tip, an important region of interest for facial thermal variations, and compared to electrodermal responses, a robust index of emotional arousal. Both types of responses were also compared to subjective ratings of pictures. An emotional arousal effect was found on the amplitude and latency of thermal responses and on the amplitude and frequency of electrodermal responses. The participants showed greater thermal and dermal responses to emotional than to neutral pictures with no difference between pleasant and unpleasant ones. Thermal responses correlated and the dermal ones tended to correlate with subjective ratings. Finally, in the emotional conditions compared to the neutral one, the frequency of simultaneous thermal and dermal responses increased while both thermal or dermal isolated responses decreased. Overall, this study brings convergent arguments to consider fITI as a promising method reflecting the arousal dimension of emotional stimulation and, consequently, as a credible alternative to the classical recording of electrodermal activity. The present research provides an original way to unveil autonomic implication in emotional processes and opens new perspectives to measure them in touchless conditions.",
"title": ""
},
{
"docid": "neg:1840608_13",
"text": "A compact dual-band bandstop filter (BSF) is presented. It combines a conventional open-stub BSF and three spurlines. This filter generates two stopbands at 2.0 GHz and 3.0 GHz with the same circuit size as the conventional BSF.",
"title": ""
},
{
"docid": "neg:1840608_14",
"text": "0167-8655/$ see front matter 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.patrec.2013.07.007 ⇑ Corresponding author at: Department of Computer Science, Triangle Research & Development Center, Kafr Qarea, Israel. Fax: +972 4 6356168. E-mail addresses: [email protected] (R. Saabni), [email protected] (A. Asi), [email protected] (J. El-Sana). 1 These authors contributed equally to this work. Raid Saabni a,b,⇑,1, Abedelkadir Asi , Jihad El-Sana c",
"title": ""
},
{
"docid": "neg:1840608_15",
"text": "Online telemedicine systems are useful due to the possibility of timely and efficient healthcare services. These systems are based on advanced wireless and wearable sensor technologies. The rapid growth in technology has remarkably enhanced the scope of remote health monitoring systems. In this paper, a real-time heart monitoring system is developed considering the cost, ease of application, accuracy, and data security. The system is conceptualized to provide an interface between the doctor and the patients for two-way communication. The main purpose of this study is to facilitate the remote cardiac patients in getting latest healthcare services which might not be possible otherwise due to low doctor-to-patient ratio. The developed monitoring system is then evaluated for 40 individuals (aged between 18 and 66 years) using wearable sensors while holding an Android device (i.e., smartphone under supervision of the experts). The performance analysis shows that the proposed system is reliable and helpful due to high speed. The analyses showed that the proposed system is convenient and reliable and ensures data security at low cost. In addition, the developed system is equipped to generate warning messages to the doctor and patient under critical circumstances.",
"title": ""
},
{
"docid": "neg:1840608_16",
"text": "One of the key aspects in the implementation of reactive behaviour in the Web and, most importantly, in the semantic Web is the development of event detection engines. An event engine detects events occurring in a system and notifies their occurrences to its clients. Although primitive events are useful for modelling a good number of applications, certain other applications require the combination of primitive events in order to support reactive behaviour. This paper presents the implementation of an event detection engine that detects composite events specified by expressions of an illustrative sublanguage of the SNOOP event algebra",
"title": ""
},
{
"docid": "neg:1840608_17",
"text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.",
"title": ""
},
{
"docid": "neg:1840608_18",
"text": "Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes.",
"title": ""
}
] |
1840609 | Engineering Methodologies : A Review of the Waterfall Model and Object-Oriented Approach | [
{
"docid": "pos:1840609_0",
"text": "By its very nature, software development consists of many knowledge-intensive processes. One of the most difficult to model, however, is requirements elicitation. This paper presents a mathematical model of the requirements elicitation process that clearly shows the critical role of knowledge in its performance. One metaprocess of requirements elicitation, selection of an appropriate elicitation technique, is also captured in the model. The values of this model are: (1) improved understanding of what needs to be performed during elicitation helps analysts improve their elicitation efforts, (2) improved understanding of how elicitation techniques are selected helps less experienced analysts be as successful as more experienced analysts, and (3) as we improve our ability to perform elicitation, we improve the likelihood that the systems we create will meet their intended customers’ needs. Many papers have been written that promulgate specific elicitation methods. A few have been written that model elicitation in general. However, none have yet to model elicitation in a way that makes clear the critical role played by knowledge. This paper’s model captures the critical roles played by knowledge in both elicitation and elicitation technique selection.",
"title": ""
}
] | [
{
"docid": "neg:1840609_0",
"text": "BACKGROUND\nThere is conflicting evidence about the relationship between the dose of enteral caloric intake and survival in critically ill patients. The objective of this systematic review and meta-analysis is to compare the effect of lower versus higher dose of enteral caloric intake in adult critically ill patients on outcome.\n\n\nMETHODS\nWe reviewed MEDLINE, EMBASE, Cochrane Central Register of Controlled Trials, Cochrane Database of Systematic Reviews, and Scopus from inception through November 2015. We included randomized and quasi-randomized studies in which there was a significant difference in the caloric intake in adult critically ill patients, including trials in which caloric restriction was the primary intervention (caloric restriction trials) and those with other interventions (non-caloric restriction trials). Two reviewers independently extracted data on study characteristics, caloric intake, and outcomes with hospital mortality being the primary outcome.\n\n\nRESULTS\nTwenty-one trials mostly with moderate bias risk were included (2365 patients in the lower caloric intake group and 2352 patients in the higher caloric group). Lower compared with higher caloric intake was not associated with difference in hospital mortality (risk ratio (RR) 0.953; 95 % confidence interval (CI) 0.838-1.083), ICU mortality (RR 0.885; 95 % CI 0.751-1.042), total nosocomial infections (RR 0.982; 95 % CI 0.878-1.077), mechanical ventilation duration, or length of ICU or hospital stay. Blood stream infections (11 trials; RR 0.718; 95 % CI 0.519-0.994) and incident renal replacement therapy (five trials; RR 0.711; 95 % CI 0.545-0.928) were lower with lower caloric intake. The associations between lower compared with higher caloric intake and primary and secondary outcomes, including pneumonia, were not different between caloric restriction and non-caloric restriction trials, except for the hospital stay which was longer with lower caloric intake in the caloric restriction trials.\n\n\nCONCLUSIONS\nWe found no association between the dose of caloric intake in adult critically ill patients and hospital mortality. Lower caloric intake was associated with lower risk of blood stream infections and incident renal replacement therapy (five trials only). The heterogeneity in the design, feeding route and timing and caloric dose among the included trials could limit our interpretation. Further studies are needed to clarify our findings.",
"title": ""
},
{
"docid": "neg:1840609_1",
"text": "The objective of this paper is to present an approach to electromagnetic field simulation based on the systematic use of the global (i.e. integral) quantities. In this approach, the equations of electromagnetism are obtained directly in a finite form starting from experimental laws without resorting to the differential formulation. This finite formulation is the natural extension of the network theory to electromagnetic field and it is suitable for computational electromagnetics.",
"title": ""
},
{
"docid": "neg:1840609_2",
"text": "The accurate estimation of students’ grades in future courses is important as it can inform the selection of next term’s courses and create personalized degree pathways to facilitate successful and timely graduation. This paper presents future course grade predictions methods based on sparse linear and low-rank matrix factorization models that are specific to each course or student–course tuple. These methods identify the predictive subsets of prior courses on a course-by-course basis and better address problems associated with the not-missing-at-random nature of the student–course historical grade data. The methods were evaluated on a dataset obtained from the University of Minnesota, for two different departments with different characteristics. This evaluation showed that focusing on course-specific data improves the accuracy of grade prediction.",
"title": ""
},
{
"docid": "neg:1840609_3",
"text": "Semi-supervised classifier design that simultaneously utilizes both labeled and unlabeled samples is a major research issue in machine learning. Existing semisupervised learning methods belong to either generative or discriminative approaches. This paper focuses on probabilistic semi-supervised classifier design and presents a hybrid approach to take advantage of the generative and discriminative approaches. Our formulation considers a generative model trained on labeled samples and a newly introduced bias correction model. Both models belong to the same model family. The proposed hybrid model is constructed by combining both generative and bias correction models based on the maximum entropy principle. The parameters of the bias correction model are estimated by using training data, and combination weights are estimated so that labeled samples are correctly classified. We use naive Bayes models as the generative models to apply the hybrid approach to text classification problems. In our experimental results on three text data sets, we confirmed that the proposed method significantly outperformed pure generative and discriminative methods when the classification performances of the both methods were comparable.",
"title": ""
},
{
"docid": "neg:1840609_4",
"text": "High-quality word representations have been very successful in recent years at improving performance across a variety of NLP tasks. These word representations are the mappings of each word in the vocabulary to a real vector in the Euclidean space. Besides high performance on specific tasks, learned word representations have been shown to perform well on establishing linear relationships among words. The recently introduced skipgram model improved performance on unsupervised learning of word embeddings that contains rich syntactic and semantic word relations both in terms of accuracy and speed. Word embeddings that have been used frequently on English language, is not applied to Turkish yet. In this paper, we apply the skip-gram model to a large Turkish text corpus and measured the performance of them quantitatively with the \"question\" sets that we generated. The learned word embeddings and the question sets are publicly available at our website. Keywords—Word embeddings, Natural Language Processing, Deep Learning",
"title": ""
},
{
"docid": "neg:1840609_5",
"text": "With this paper we take a first step to understand the appropriation of social media by the police. For this purpose we analyzed the Twitter communication by the London Metropolitan Police (MET) and the Greater Manchester Police (GMP) during the riots in August 2011. The systematic comparison of tweets demonstrates that the two forces developed very different practices for using Twitter. While MET followed an instrumental approach in their communication, in which the police aimed to remain in a controlled position and keep a distance to the general public, GMP developed an expressive approach, in which the police actively decreased the distance to the citizens. In workshops and interviews, we asked the police officers about their perspectives, which confirmed the identified practices. Our study discusses benefits and risks of the two approaches and the potential impact of social media on the evolution of the role of police in society.",
"title": ""
},
{
"docid": "neg:1840609_6",
"text": "BACKGROUND\nSubfertility and poor nutrition are increasing problems in Western countries. Moreover, nutrition affects fertility in both women and men. In this study, we investigate the association between adherence to general dietary recommendations in couples undergoing IVF/ICSI treatment and the chance of ongoing pregnancy.\n\n\nMETHODS\nBetween October 2007 and October 2010, couples planning pregnancy visiting the outpatient clinic of the Department of Obstetrics and Gynaecology of the Erasmus Medical Centre in Rotterdam, the Netherlands were offered preconception counselling. Self-administered questionnaires on general characteristics and diet were completed and checked during the visit. Six questions, based on dietary recommendations of the Netherlands Nutrition Centre, covered the intake of six main food groups (fruits, vegetables, meat, fish, whole wheat products and fats). Using the questionnaire results, we calculated the Preconception Dietary Risk score (PDR), providing an estimate of nutritional habits. Dietary quality increases with an increasing PDR score. We define ongoing pregnancy as an intrauterine pregnancy with positive heart action confirmed by ultrasound. For this analysis we selected all couples (n=199) who underwent a first IVF/ICSI treatment within 6 months after preconception counselling. We applied adjusted logistic regression analysis on the outcomes of interest using SPSS.\n\n\nRESULTS\nAfter adjustment for age of the woman, smoking of the woman, PDR of the partner, BMI of the couple and treatment indication we show an association between the PDR of the woman and the chance of ongoing pregnancy after IVF/ICSI treatment (odds ratio 1.65, confidence interval: 1.08-2.52; P=0.02]. Thus, a one-point increase in the PDR score associates with a 65% increased chance of ongoing pregnancy.\n\n\nCONCLUSIONS\nOur results show that increasing adherence to Dutch dietary recommendations in women undergoing IVF/ICSI treatment increases the chance of ongoing pregnancy. These data warrant further confirmation in couples achieving a spontaneous pregnancy and in randomized controlled trials.",
"title": ""
},
{
"docid": "neg:1840609_7",
"text": "In this study, it proposes a new optimization algorithm called APRIORI-IMPROVE based on the insufficient of Apriori. APRIORI-IMPROVE algorithm presents optimizations on 2-items generation, transactions compression and so on. APRIORI-IMPROVE uses hash structure to generate L2, uses an efficient horizontal data representation and optimized strategy of storage to save time and space. The performance study shows that APRIORI-IMPROVE is much faster than Apriori.",
"title": ""
},
{
"docid": "neg:1840609_8",
"text": "An array of four uniform half-width microstrip leaky-wave antennas (MLWAs) was designed and tested to obtain maximum radiation in the boresight direction. To achieve this, uniform MLWAs are placed at 90 ° and fed by a single probe at the center. Four beams from four individual branches combine to form the resultant directive beam. The measured matched bandwidth of the array is 300 MHz (3.8-4.1 GHz). Its beam toward boresight occurs over a relatively wide 6.4% (3.8-4.05 GHz) band. The peak measured boresight gain of the array is 10.1 dBi, and its variation within the 250-MHz boresight radiation band is only 1.7 dB.",
"title": ""
},
{
"docid": "neg:1840609_9",
"text": "The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested toward feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective surgery for colorectal cancer (CRC) surgery, using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occur. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (leave-one-out-based test); 2) an intensive-computation statistical criterion (Bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision making phase.",
"title": ""
},
{
"docid": "neg:1840609_10",
"text": "A vehicular ad hoc network (VANET) serves as an application of the intelligent transportation system that improves traffic safety as well as efficiency. Vehicles in a VANET broadcast traffic and safety-related information used by road safety applications, such as an emergency electronic brake light. The broadcast of these messages in an open-access environment makes security and privacy critical and challenging issues in the VANET. A misuse of this information may lead to a traffic accident and loss of human lives atworse and, therefore, vehicle authentication is a necessary requirement. During authentication, a vehicle’s privacy-related data, such as identity and location information, must be kept private. This paper presents an approach for privacy-preserving authentication in a VANET. Our hybrid approach combines the useful features of both the pseudonym-based approaches and the group signature-based approaches to preclude their respective drawbacks. The proposed approach neither requires a vehicle to manage a certificate revocation list, nor indulges vehicles in any group management. The proposed approach utilizes efficient and lightweight pseudonyms that are not only used for message authentication, but also serve as a trapdoor in order to provide conditional anonymity. We present various attack scenarios that show the resilience of the proposed approach against various security and privacy threats. We also provide analysis of computational and communication overhead to show the efficiency of the proposed technique. In addition, we carry out extensive simulations in order to present a detailed network performance analysis. The results show the feasibility of our proposed approach in terms of end-to-end delay and packet delivery ratio.",
"title": ""
},
{
"docid": "neg:1840609_11",
"text": "BACKGROUND AND OBJECTIVE\nThe effective processing of biomedical images usually requires the interoperability of diverse software tools that have different aims but are complementary. The goal of this work is to develop a bridge to connect two of those tools: ImageJ, a program for image analysis in life sciences, and OpenCV, a computer vision and machine learning library.\n\n\nMETHODS\nBased on a thorough analysis of ImageJ and OpenCV, we detected the features of these systems that could be enhanced, and developed a library to combine both tools, taking advantage of the strengths of each system. The library was implemented on top of the SciJava converter framework. We also provide a methodology to use this library.\n\n\nRESULTS\nWe have developed the publicly available library IJ-OpenCV that can be employed to create applications combining features from both ImageJ and OpenCV. From the perspective of ImageJ developers, they can use IJ-OpenCV to easily create plugins that use any functionality provided by the OpenCV library and explore different alternatives. From the perspective of OpenCV developers, this library provides a link to the ImageJ graphical user interface and all its features to handle regions of interest.\n\n\nCONCLUSIONS\nThe IJ-OpenCV library bridges the gap between ImageJ and OpenCV, allowing the connection and the cooperation of these two systems.",
"title": ""
},
{
"docid": "neg:1840609_12",
"text": "This paper presents a simulation framework for pathological gait assistance with a hip exoskeleton. Previously we had developed an event-driven controller for gait assistance [1]. We now simulate (or optimize) the gait assistance in ankle pathologies (e.g., weak dorsiflexion or plantarflexion). It is done by 1) utilizing the neuromuscular walking model, 2) parameterizing assistive torques for swing and stance legs, and 3) performing dynamic optimizations that takes into account the human-robot interactive dynamics. We evaluate the energy expenditures and walking parameters for the different gait types. Results show that each gait type should have a different assistance strategy comparing with the assistance of normal gait. Although we need further studies about the pathologies, our simulation model is feasible to design the gait assistance for the ankle muscle weaknesses.",
"title": ""
},
{
"docid": "neg:1840609_13",
"text": "BACKGROUND & AIMS\nHepatitis B and D viruses (HBV and HDV) are human pathogens with restricted host ranges and high selectivity for hepatocytes; the HBV L-envelope protein interacts specifically with a receptor on these cells. We aimed to identify this receptor and analyze whether it is the recently described sodium-taurocholate co-transporter polypeptide (NTCP), encoded by the SLC10A1 gene.\n\n\nMETHODS\nTo identify receptor candidates, we compared gene expression patterns between differentiated HepaRG cells, which express the receptor, and naïve cells, which do not. Receptor candidates were evaluated by small hairpin RNA silencing in HepaRG cells; the ability of receptor expression to confer binding and infection were tested in transduced hepatoma cell lines. We used interspecies domain swapping to identify motifs for receptor-mediated host discrimination of HBV and HDV binding and infection.\n\n\nRESULTS\nBioinformatic analyses of comparative expression arrays confirmed that NTCP, which was previously identified through a biochemical approach is a bona fide receptor for HBV and HDV. NTCPs from rat, mouse, and human bound Myrcludex B, a peptide ligand derived from the HBV L-protein. Myrcludex B blocked NTCP transport of bile salts; small hairpin RNA-mediated knockdown of NTCP in HepaRG cells prevented their infection by HBV or HDV. Expression of human but not mouse NTCP in HepG2 and HuH7 cells conferred a limited cell-type-related and virus-dependent susceptibility to infection; these limitations were overcome when cells were cultured with dimethyl sulfoxide. We identified 2 short-sequence motifs in human NTCP that were required for species-specific binding and infection by HBV and HDV.\n\n\nCONCLUSIONS\nHuman NTCP is a specific receptor for HBV and HDV. NTCP-expressing cell lines can be efficiently infected with these viruses, and might be used in basic research and high-throughput screening studies. Mapping of motifs in NTCPs have increased our understanding of the species specificities of HBV and HDV, and could lead to small animal models for studies of viral infection and replication.",
"title": ""
},
{
"docid": "neg:1840609_14",
"text": "BACKGROUND\nRecent years have seen an explosion in the availability of data in the chemistry domain. With this information explosion, however, retrieving relevant results from the available information, and organising those results, become even harder problems. Computational processing is essential to filter and organise the available resources so as to better facilitate the work of scientists. Ontologies encode expert domain knowledge in a hierarchically organised machine-processable format. One such ontology for the chemical domain is ChEBI. ChEBI provides a classification of chemicals based on their structural features and a role or activity-based classification. An example of a structure-based class is 'pentacyclic compound' (compounds containing five-ring structures), while an example of a role-based class is 'analgesic', since many different chemicals can act as analgesics without sharing structural features. Structure-based classification in chemistry exploits elegant regularities and symmetries in the underlying chemical domain. As yet, there has been neither a systematic analysis of the types of structural classification in use in chemistry nor a comparison to the capabilities of available technologies.\n\n\nRESULTS\nWe analyze the different categories of structural classes in chemistry, presenting a list of patterns for features found in class definitions. We compare these patterns of class definition to tools which allow for automation of hierarchy construction within cheminformatics and within logic-based ontology technology, going into detail in the latter case with respect to the expressive capabilities of the Web Ontology Language and recent extensions for modelling structured objects. Finally we discuss the relationships and interactions between cheminformatics approaches and logic-based approaches.\n\n\nCONCLUSION\nSystems that perform intelligent reasoning tasks on chemistry data require a diverse set of underlying computational utilities including algorithmic, statistical and logic-based tools. For the task of automatic structure-based classification of chemical entities, essential to managing the vast swathes of chemical data being brought online, systems which are capable of hybrid reasoning combining several different approaches are crucial. We provide a thorough review of the available tools and methodologies, and identify areas of open research.",
"title": ""
},
{
"docid": "neg:1840609_15",
"text": "Students often have their own individual laptop computers in university classes, and researchers debate the potential benefits and drawbacks of laptop use. In the presented research, we used a combination of surveys and in-class observations to study how students use their laptops in an unmonitored and unrestricted class setting—a large lecture-based university class with nearly 3000 enrolled students. By analyzing computer use over the duration of long (165 minute) classes, we demonstrate how computer use changes over time. The observations and studentreports provided similar descriptions of laptop activities. Note taking was the most common use for the computers, followed by the use of social media web sites. Overall, the data show that students engaged in off-task computer activities for nearly two-thirds of the time. An analysis of the frequency of the various laptop activities over time showed that engagement in individual activities varied significantly over the duration of the class.",
"title": ""
},
{
"docid": "neg:1840609_16",
"text": "Search over encrypted data is a technique of great interest in the cloud computing era, because many believe that sensitive data has to be encrypted before outsourcing to the cloud servers in order to ensure user data privacy. Devising an efficient and secure search scheme over encrypted data involves techniques from multiple domains – information retrieval for index representation, algorithms for search efficiency, and proper design of cryptographic protocols to ensure the security and privacy of the overall system. This chapter provides a basic introduction to the problem definition, system model, and reviews the state-of-the-art mechanisms for implementing privacy-preserving keyword search over encrypted data. We also present one integrated solution, which hopefully offer more insights into this important problem.",
"title": ""
},
{
"docid": "neg:1840609_17",
"text": "MicroRNAs (miRNAs) have within the past decade emerged as key regulators of metabolic homoeostasis. Major tissues in intermediary metabolism important during development of the metabolic syndrome, such as β-cells, liver, skeletal and heart muscle as well as adipose tissue, have all been shown to be affected by miRNAs. In the pancreatic β-cell, a number of miRNAs are important in maintaining the balance between differentiation and proliferation (miR-200 and miR-29 families) and insulin exocytosis in the differentiated state is controlled by miR-7, miR-375 and miR-335. MiR-33a and MiR-33b play crucial roles in cholesterol and lipid metabolism, whereas miR-103 and miR-107 regulates hepatic insulin sensitivity. In muscle tissue, a defined number of miRNAs (miR-1, miR-133, miR-206) control myofibre type switch and induce myogenic differentiation programmes. Similarly, in adipose tissue, a defined number of miRNAs control white to brown adipocyte conversion or differentiation (miR-365, miR-133, miR-455). The discovery of circulating miRNAs in exosomes emphasizes their importance as both endocrine signalling molecules and potentially disease markers. Their dysregulation in metabolic diseases, such as obesity, type 2 diabetes and atherosclerosis stresses their potential as therapeutic targets. This review emphasizes current ideas and controversies within miRNA research in metabolism.",
"title": ""
},
{
"docid": "neg:1840609_18",
"text": "In this work we propose an online multi person pose tracking approach which works on two consecutive frames It−1 and It . The general formulation of our temporal network allows to rely on any multi person pose estimation approach as spatial network. From the spatial network we extract image features and pose features for both frames. These features serve as input for our temporal model that predicts Temporal Flow Fields (TFF). These TFF are vector fields which indicate the direction in which each body joint is going to move from frame It−1 to frame It . This novel representation allows to formulate a similarity measure of detected joints. These similarities are used as binary potentials in a bipartite graph optimization problem in order to perform tracking of multiple poses. We show that these TFF can be learned by a relative small CNN network whilst achieving state-of-the-art multi person pose tracking results.",
"title": ""
},
{
"docid": "neg:1840609_19",
"text": "We study the notion of consistency between a 3D shape and a 2D observation and propose a differentiable formulation which allows computing gradients of the 3D shape given an observation from an arbitrary view. We do so by reformulating view consistency using a differentiable ray consistency (DRC) term. We show that this formulation can be incorporated in a learning framework to leverage different types of multi-view observations e.g. foreground masks, depth, color images, semantics etc. as supervision for learning single-view 3D prediction. We present empirical analysis of our technique in a controlled setting. We also show that this approach allows us to improve over existing techniques for single-view reconstruction of objects from the PASCAL VOC dataset.",
"title": ""
}
] |
1840610 | Graph Visualization and Navigation in Information Visualization: A Survey | [
{
"docid": "pos:1840610_0",
"text": "The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.",
"title": ""
}
] | [
{
"docid": "neg:1840610_0",
"text": "Do countries with lower policy-induced barriers to international trade grow faster, once other relevant country characteristics are controlled for? There exists a large empirical literature providing an affirmative answer to this question. We argue that methodological problems with the empirical strategies employed in this literature leave the results open to diverse interpretations. In many cases, the indicators of \"openness\" used by researchers are poor measures of trade barriers or are highly correlated with other sources of bad economic performance. In other cases, the methods used to ascertain the link between trade policy and growth have serious shortcomings. Papers that we review include Dollar (1992), Ben-David (1993), Sachs and Warner (1995), and Edwards (1998). We find little evidence that open trade policies--in the sense of lower tariff and non-tariff barriers to trade--are significantly associated with economic growth. Francisco Rodríguez Dani R odrik Department of Economics John F. Kennedy School of Government University of Maryland Harvard University College Park, MD 20742 79 Kennedy Street Cambridge, MA 02138 Phone: (301) 405-3480 Phone: (617) 495-9454 Fax: (301) 405-3542 Fax: (617) 496-5747 TRADE POLICY AND ECONOMIC GROWTH: A SKEPTIC'S GUIDE TO THE CROSS-NATIONAL EVIDENCE \"It isn't what we don't know that kills us. It's what we know that ain't so.\" -Mark Twain",
"title": ""
},
{
"docid": "neg:1840610_1",
"text": "Automatically segmenting unstructured text strings into structured records is necessary for importing the information contained in legacy sources and text collections into a data warehouse for subsequent querying, analysis, mining and integration. In this paper, we mine tables present in data warehouses and relational databases to develop an automatic segmentation system. Thus, we overcome limitations of existing supervised text segmentation approaches, which require comprehensive manually labeled training data. Our segmentation system is robust, accurate, and efficient, and requires no additional manual effort. Thorough evaluation on real datasets demonstrates the robustness and accuracy of our system, with segmentation accuracy exceeding state of the art supervised approaches.",
"title": ""
},
{
"docid": "neg:1840610_2",
"text": "Supervised learning, more specifically Convolutional Neural Networks (CNN), has surpassed human ability in some visual recognition tasks such as detection of traffic signs, faces and handwritten numbers. On the other hand, even stateof-the-art reinforcement learning (RL) methods have difficulties in environments with sparse and binary rewards. They requires manually shaping reward functions, which might be challenging to come up with. These tasks, however, are trivial to human. One of the reasons that human are better learners in these tasks is that we are embedded with much prior knowledge of the world. These knowledge might be either embedded in our genes or learned from imitation a type of supervised learning. For that reason, the best way to narrow the gap between machine and human learning ability should be to mimic how we learn so well in various tasks by a combination of RL and supervised learning. Our method, which integrates Deep Deterministic Policy Gradients and Hindsight Experience Replay (RL method specifically dealing with sparse rewards) with an experience ranking CNN, provides a significant speedup over the learning curve on simulated robotics tasks. Experience ranking allows high-reward transitions to be replayed more frequently, and therefore help learn more efficiently. Our proposed approach can also speed up learning in any other tasks that provide additional information for experience ranking.",
"title": ""
},
{
"docid": "neg:1840610_3",
"text": "Recent researches on neural network have shown signicant advantage in machine learning over traditional algorithms based on handcraed features and models. Neural network is now widely adopted in regions like image, speech and video recognition. But the high computation and storage complexity of neural network inference poses great diculty on its application. CPU platforms are hard to oer enough computation capacity. GPU platforms are the rst choice for neural network process because of its high computation capacity and easy to use development frameworks. On the other hand, FPGA-based neural network inference accelerator is becoming a research topic. With specically designed hardware, FPGA is the next possible solution to surpass GPU in speed and energy eciency. Various FPGA-based accelerator designs have been proposed with soware and hardware optimization techniques to achieve high speed and energy eciency. In this paper, we give an overview of previous work on neural network inference accelerators based on FPGA and summarize the main techniques used. An investigation from soware to hardware, from circuit level to system level is carried out to complete analysis of FPGA-based neural network inference accelerator design and serves as a guide to future work.",
"title": ""
},
{
"docid": "neg:1840610_4",
"text": "The concept of ecosystem emanates from ecology and subsequently has been broadly used in business studies to describe and investigate complex interrelationships between companies and other organizations. Concepts that are transferred from other disciplines (and used both in research and in practice) can, however, be ambiguous and problematic. For example, the use of the ecosystem concept has been questioned in the literature. To better understand the potential ambiguities between the business ecosystem concept and other related concepts, this study presents a conceptual analysis of business ecosystem. We continue by analytically comparing business ecosystem with other concepts used to describe business relationships, namely industry, population, cluster, and inter-organizational network. The results indicate a need for conceptual clarity when describing business networks. We conclude with a synthesis and discuss under what circumstances using the business ecosystem concept may add value for research and practice. The paper contributes to the business ecosystem literature by positioning the business ecosystem concept in relation to other closely related concepts",
"title": ""
},
{
"docid": "neg:1840610_5",
"text": "Alarm correlation plays an important role in improving the service and reliability in modern telecommunication networks. Most previous research of alarm correlation didnt consider the effects of noise data in the database. This paper focuses on the method of discovering alarm correlation rules from the database containing noise data. We firstly define two parameters Win_freq and Win_add as the measures of noise data and then present the Robust_search algorithm to solve the problem. At different size of Win_freq and Win_add, the experiments on alarm database containing noise data show that the Robust_search Algorithm can discover more rules with the bigger size of Win_add. We also compare two different interestingness measures of confidence and correlation by experiments.",
"title": ""
},
{
"docid": "neg:1840610_6",
"text": "Fast advances in the wireless technology and the intensive penetration of cell phones have motivated banks to spend large budget on building mobile banking systems, but the adoption rate of mobile banking is still underused than expected. Therefore, research to enrich current knowledge about what affects individuals to use mobile banking is required. Consequently, this study employs the Unified Theory of Acceptance and Use of Technology (UTAUT) to investigate what impacts people to adopt mobile banking. Through sampling 441 respondents, this study empirically concluded that individual intention to adopt mobile banking was significantly influenced by social influence, perceived financial cost, performance expectancy, and perceived credibility, in their order of influencing strength. The behavior was considerably affected by individual intention and facilitating conditions. As for moderating effects of gender and age, this study discovered that gender significantly moderated the effects of performance expectancy and perceived financial cost on behavioral intention, and the age considerably moderated the effects of facilitating conditions and perceived self-efficacy on actual adoption behavior.",
"title": ""
},
{
"docid": "neg:1840610_7",
"text": "Ultrasound has been recently proposed as an alternative modality for efficient wireless power transmission (WPT) to biomedical implants with millimeter (mm) dimensions. This paper presents the theory and design methodology of ultrasonic WPT links that involve mm-sized receivers (Rx). For given load <inline-formula><tex-math notation=\"LaTeX\">$(R_{L})$</tex-math></inline-formula> and powering distance <inline-formula><tex-math notation=\"LaTeX\">$(d)$</tex-math></inline-formula>, the optimal geometries of transmitter (Tx) and Rx ultrasonic transducers, including their diameter and thickness, as well as the optimal operation frequency <inline-formula><tex-math notation=\"LaTeX\">$(f_{c})$</tex-math></inline-formula> are found through a recursive design procedure to maximize the power transmission efficiency (PTE). First, a range of realistic <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula>s is found based on the Rx thickness constrain. For a chosen <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula> within the range, the diameter and thickness of the Rx transducer are then swept together to maximize PTE. Then, the diameter and thickness of the Tx transducer are optimized to maximize PTE. Finally, this procedure is repeated for different <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula>s to find the optimal <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula> and its corresponding transducer geometries that maximize PTE. A design example of ultrasonic link has been presented and optimized for WPT to a 1 mm<sup>3</sup> implant, including a disk-shaped piezoelectric transducer on a silicon die. In simulations, a PTE of 2.11% at <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula> of 1.8 MHz was achieved for <inline-formula><tex-math notation=\"LaTeX\">$R_{L}$</tex-math></inline-formula> of 2.5 <inline-formula><tex-math notation=\"LaTeX\">$\\text{k}\\Omega$</tex-math></inline-formula> at <inline-formula><tex-math notation=\"LaTeX\">$d = 3\\ \\text{cm}$</tex-math></inline-formula>. In order to validate our simulations, an ultrasonic link was optimized for a 1 mm<sup>3</sup> piezoelectric transducer mounted on a printed circuit board (PCB), which led to simulated and measured PTEs of 0.65% and 0.66% at <inline-formula><tex-math notation=\"LaTeX\">$f_{c}$</tex-math></inline-formula> of 1.1 MHz for <inline-formula><tex-math notation=\"LaTeX\">$R_{L}$</tex-math></inline-formula> of 2.5 <inline-formula><tex-math notation=\"LaTeX\">$\\text{k}\\Omega$</tex-math></inline-formula> at <inline-formula><tex-math notation=\"LaTeX\">$d = 3\\ \\text{cm}$</tex-math></inline-formula>, respectively.",
"title": ""
},
{
"docid": "neg:1840610_8",
"text": "We study bandlimited signals with fractional Fourier transform (FRFT). We show that if a nonzero signal f is bandlimited with FRFT F/sub /spl alpha// for a certain real /spl alpha/, then it is not bandlimited with FRFT F/sub /spl beta// for any /spl beta/ with /spl beta//spl ne//spl plusmn//spl alpha/+n/spl pi/ for any integer n. This is a generalization of the fact that a nonzero signal can not be both timelimited and bandlimited. We also provide sampling theorems for bandlimited signals with FRFT that are similar to the Shannon sampling theorem.",
"title": ""
},
{
"docid": "neg:1840610_9",
"text": "As many automated test input generation tools for Android need to instrument the system or the app, they cannot be used in some scenarios such as compatibility testing and malware analysis. We introduce DroidBot, a lightweight UI-guided test input generator, which is able to interact with an Android app on almost any device without instrumentation. The key technique behind DroidBot is that it can generate UI-guided test inputs based on a state transition model generated on-the-fly, and allow users to integrate their own strategies or algorithms. DroidBot is lightweight as it does not require app instrumentation, thus users do not need to worry about the inconsistency between the tested version and the original version. It is compatible with most Android apps, and able to run on almost all Android-based systems, including customized sandboxes and commodity devices. Droidbot is released as an open-source tool on GitHub, and the demo video can be found at https://youtu.be/3-aHG_SazMY.",
"title": ""
},
{
"docid": "neg:1840610_10",
"text": "Numerical simulation of quantum systems is crucial to further our understanding of natural phenomena. Many systems of key interest and importance, in areas such as superconducting materials and quantum chemistry, are thought to be described by models which we cannot solve with sufficient accuracy, neither analytically nor numerically with classical computers. Using a quantum computer to simulate such quantum systems has been viewed as a key application of quantum computation from the very beginning of the field in the 1980s. Moreover, useful results beyond the reach of classical computation are expected to be accessible with fewer than a hundred qubits, making quantum simulation potentially one of the earliest practical applications of quantum computers. In this paper we survey the theoretical and experimental development of quantum simulation using quantum computers, from the first ideas to the intense research efforts currently underway.",
"title": ""
},
{
"docid": "neg:1840610_11",
"text": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) isin IRm that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities.",
"title": ""
},
{
"docid": "neg:1840610_12",
"text": "Multi-agent systems are rapidly finding applications in a variety of domains, including robotics, distributed control, telecommunications, economics. Many tasks arising in these domains require that the agents learn behaviors online. A significant part of the research on multi-agent learning concerns reinforcement learning techniques. However, due to different viewpoints on central issues, such as the formal statement of the learning goal, a large number of different methods and approaches have been introduced. In this paper we aim to present an integrated survey of the field. First, the issue of the multi-agent learning goal is discussed, after which a representative selection of algorithms is reviewed. Finally, open issues are identified and future research directions are outlined",
"title": ""
},
{
"docid": "neg:1840610_13",
"text": "Chess programs ha ve three major components: mo ve generation, search, and evaluation. All components are important, although e valuation with its quiescence analysis is the part which mak es each program’ s play unique. The speed of a chess program is a function of its mo ve generation cost, the comple xity of the position under study and the bre vity of its evaluation. Moreimportant, however, is the quality of the mechanisms used to discontinue (prune) search of unprofitable continuations. The most reliable pruning method in popular use is the rob ust alpha-beta algorithm, and its man y supporting aids. These essential parts of g ame-tree searching and pruning are reviewed here, and the performance of refinements, such as aspiration and principal variation search, and aids lik e transposition and history tables are compared. † Much of this article is a re vision of material condensed from an entry entitled ‘‘ Computer Chess Methods, ’’ p repared for theEncyclopedia of Artificial Intellig ence, S. Shapiro (editor), to be published by John W iley & Sons in 1987.The transposition table pseudo code of Figure 7 is similar to that in another paper: ‘ ‘Pa allel Search of Strongly Ordered Game T rees, ’’ T. A. Marsland and M. Campbell, ACM Computing Surveys, Vol 14, No. 4, cop yright 1982, Association for Computing Machinery Inc., and is reprinted by permission. Final draft: ICCA Journal, V ol. 9, No. 1, March 1986, pp. 3-19.",
"title": ""
},
{
"docid": "neg:1840610_14",
"text": "When pedestrians encounter vehicles, they typically stop and wait for a signal from the driver to either cross or wait. What happens when the car is autonomous and there isn’t a human driver to signal them? This paper seeks to address this issue with an intent communication system (ICS) that acts in place of a human driver. This intent system has been developed to take into account the psychology behind what pedestrians are familiar with and what they expect from machines. The system integrates those expectations into the design of physical systems and mathematical algorithms. The goal of the system is to ensure that communication is simple, yet effective without leaving pedestrians with a sense of distrust in autonomous vehicles. To validate the ICS, two types of experiments have been run: field tests with an autonomous vehicle to determine how humans actually interact with the ICS and simulations to account for multiple potential behaviors.The results from both experiments show that humans react positively and more predictably when the intent of the vehicle is communicated compared to when the intent of the vehicle is unknown. In particular, the results from the simulation specifically showed a 142 percent difference between the pedestrian’s trust in the vehicle’s actions when the ICS is enabled and the pedestrian has prior knowledge of the vehicle than when the ICS is not enabled and the pedestrian having no prior knowledge of the vehicle.",
"title": ""
},
{
"docid": "neg:1840610_15",
"text": "Data stream is a potentially massive, continuous, rapid sequence of data information. It has aroused great concern and research upsurge in the field of data mining. Clustering is an effective tool of data mining, so data stream clustering will undoubtedly become the focus of the study in data stream mining. In view of the characteristic of the high dimension, dynamic, real-time, many effective data stream clustering algorithms have been proposed. In addition, data stream information are not deterministic and always exist outliers and contain noises, so developing effective data stream clustering algorithm is crucial. This paper reviews the development and trend of data stream clustering and analyzes typical data stream clustering algorithms proposed in recent years, such as Birch algorithm, Local Search algorithm, Stream algorithm and CluStream algorithm. We also summarize the latest research achievements in this field and introduce some new strategies to deal with outliers and noise data. At last, we put forward the focal points and difficulties of future research for data stream clustering.",
"title": ""
},
{
"docid": "neg:1840610_16",
"text": "Femoroacetabular impingement is a well-documented cause of hip pain. There is, however, increasing evidence for the presence of a previously unrecognised impingement-type condition around the hip - ischiofemoral impingement. This is caused by abnormal contact between the lesser trochanter of the femur and the ischium, and presents as atypical groin and/or posterior buttock pain. The symptoms are gradual in onset and may be similar to those of iliopsoas tendonitis, hamstring injury or bursitis. The presence of ischiofemoral impingement may be indicated by pain caused by a combination of hip extension, adduction and external rotation. Magnetic resonance imaging demonstrates inflammation and oedema in the ischiofemoral space and quadratus femoris, and is distinct from an acute tear. To date this has only appeared in the specialist orthopaedic literature as a problem that has developed after total hip replacement, not in the unreplaced joint.",
"title": ""
},
{
"docid": "neg:1840610_17",
"text": "Billions of dollars of loss are caused every year due to fraudulent credit card transactions. The design of efficient fraud detection algorithms is key for reducing these losses, and more and more algorithms rely on advanced machine learning techniques to assist fraud investigators. The design of fraud detection algorithms is however particularly challenging due to non stationary distribution of the data, highly imbalanced classes distributions and continuous streams of transactions. At the same time public data are scarcely available for confidentiality issues, leaving unanswered many questions about which is the best strategy to deal with them. In this paper we provide some answers from the practitioner’s perspective by focusing on three crucial issues: unbalancedness, non-stationarity and assessment. The analysis is made possible by a real credit card dataset provided by our industrial partner.",
"title": ""
},
{
"docid": "neg:1840610_18",
"text": "The ability to conduct logical reasoning is a fundamental aspect of intelligent behavior, and thus an important problem along the way to human-level artificial intelligence. Traditionally, symbolic logic-based methods from the field of knowledge representation and reasoning have been used to equip agents with capabilities that resemble human logical reasoning qualities. More recently, however, there has been an increasing interest in using machine learning rather than symbolic logic-based formalisms to tackle these tasks. In this paper, we employ state-of-the-art methods for training deep neural networks to devise a novel model that is able to learn how to effectively perform logical reasoning in the form of basic ontology reasoning. This is an important and at the same time very natural logical reasoning task, which is why the presented approach is applicable to a plethora of important real-world problems. We present the outcomes of several experiments, which show that our model learned to perform precise ontology reasoning on diverse and challenging tasks. Furthermore, it turned out that the suggested approach suffers much less from different obstacles that prohibit logic-based symbolic reasoning, and, at the same time, is surprisingly plausible from a biological point of view.",
"title": ""
},
{
"docid": "neg:1840610_19",
"text": "Analysis of satellite images plays an increasingly vital role in environment and climate monitoring, especially in detecting and managing natural disaster. In this paper, we proposed an automatic disaster detection system by implementing one of the advance deep learning techniques, convolutional neural network (CNN), to analysis satellite images. The neural network consists of 3 convolutional layers, followed by max-pooling layers after each convolutional layer, and 2 fully connected layers. We created our own disaster detection training data patches, which is currently focusing on 2 main disasters in Japan and Thailand: landslide and flood. Each disaster's training data set consists of 30000~40000 patches and all patches are trained automatically in CNN to extract region where disaster occurred instantaneously. The results reveal accuracy of 80%~90% for both disaster detection. The results presented here may facilitate improvements in detecting natural disaster efficiently by establishing automatic disaster detection system.",
"title": ""
}
] |
1840611 | Monopole Antenna With Inkjet-Printed EBG Array on Paper Substrate for Wearable Applications | [
{
"docid": "pos:1840611_0",
"text": "Body centric wireless communication is now accepted as an important part of 4th generation (and beyond) mobile communications systems, taking the form of human to human networking incorporating wearable sensors and communications. There are also a number of body centric communication systems for specialized occupations, such as paramedics and fire-fighters, military personnel and medical sensing and support. To support these developments there is considerable ongoing research into antennas and propagation for body centric communications systems, and this paper will summarise some of it, including the characterisation of the channel on the body, the optimisation of antennas for these channels, and communications to medical implants where advanced antenna design and characterisation and modelling of the internal body channel are important research needs. In all of these areas both measurement and simulation pose very different and challenging issues to be faced by the researcher.",
"title": ""
},
{
"docid": "pos:1840611_1",
"text": "In this paper, a review of the authors' work on inkjet-printed flexible antennas, fabricated on paper substrates, is given. This is presented as a system-level solution for ultra-low-cost mass production of UHF radio-frequency identification (RFID) tags and wireless sensor nodes (WSN), in an approach that could be easily extended to other microwave and wireless applications. First, we discuss the benefits of using paper as a substrate for high-frequency applications, reporting its very good electrical/dielectric performance up to at least 1 GHz. The RF characteristics of the paper-based substrate are studied by using a microstrip-ring resonator, in order to characterize the dielectric properties (dielectric constant and loss tangent). We then give details about the inkjet-printing technology, including the characterization of the conductive ink, which consists of nano-silver particles. We highlight the importance of this technology as a fast and simple fabrication technique, especially on flexible organic (e.g., LCP) or paper-based substrates. A compact inkjet-printed UHF ldquopassive RFIDrdquo antenna, using the classic T-match approach and designed to match the IC's complex impedance, is presented as a demonstration prototype for this technology. In addition, we briefly touch upon the state-of-the-art area of fully-integrated wireless sensor modules on paper. We show the first-ever two-dimensional sensor integration with an RFID tag module on paper, as well as the possibility of a three-dimensional multilayer paper-based RF/microwave structure.",
"title": ""
},
{
"docid": "pos:1840611_2",
"text": "The bi-directional beam from an equiangular spiral antenna (EAS) is changed to a unidirectional beam using an electromagnetic band gap (EBG) reflector. The antenna height, measured from the upper surface of the EBG reflector to the spiral arms, is chosen to be extremely small to realize a low-profile antenna: 0.07 wavelength at the lowest analysis frequency of 3 GHz. The analysis shows that the EAS backed by the EBG reflector does not reproduce the inherent wideband axial ratio characteristic observed when the EAS is isolated in free space. The deterioration in the axial ratio is examined by decomposing the total radiation field into two field components: one component from the equiangular spiral and the other from the EBG reflector. The examination reveals that the amplitudes and phases of these two field components do not satisfy the constructive relationship necessary for circularly polarized radiation. Based on this finding, next, the EBG reflector is modified by gradually removing the patch elements from the center region of the reflector, thereby satisfying the required constructive relationship between the two field components. This equiangular spiral with a modified EBG reflector shows wideband characteristics with respect to the axial ratio, input impedance and gain within the design frequency band (4-9 GHz). Note that, for comparison, the antenna characteristics for an EAS isolated in free space and an EAS backed by a perfect electric conductor are also presented.",
"title": ""
}
] | [
{
"docid": "neg:1840611_0",
"text": "Sustainable urban mobility is an important dimension in a Smart City, and one of the key issues for city sustainability. However, innovative and often costly mobility policies and solutions introduced by cities are liable to fail, if not combined with initiatives aimed at increasing the awareness of citizens, and promoting their behavioural change. This paper explores the potential of gamification mechanisms to incentivize voluntary behavioural changes towards sustainable mobility solutions. We present a service-based gamification framework, developed within the STREETLIFE EU Project, which can be used to develop games on top of existing services and systems within a Smart City, and discuss the empirical findings of an experiment conducted in the city of Rovereto on the effectiveness of gamification to promote sustainable urban mobility.",
"title": ""
},
{
"docid": "neg:1840611_1",
"text": "IP-based solutions to accommodate mobile hosts within existing internetworks do not address the distinctive features of wireless mobile computing. IP-based transport protocols thus suffer from poor performance when a mobile host communicates with a host on the fixed network. This is caused by frequent disruptions in network layer connectivity due to — i) mobility and ii) unreliable nature of the wireless link. We describe the design and implementation of I-TCP, which is an indirect transport layer protocol for mobile hosts. I-TCP utilizes the resources of Mobility Support Routers (MSRs) to provide transport layer communication between mobile hosts and hosts on the fixed network. With I-TCP, the problems related to mobility and the unreliability of wireless link are handled entirely within the wireless link; the TCP/IP software on the fixed hosts is not modified. Using I-TCP on our testbed, the throughput between a fixed host and a mobile host improved substantially in comparison to regular TCP.",
"title": ""
},
{
"docid": "neg:1840611_2",
"text": "Governance, Risk and Compliance (GRC) as an integrated concept has gained great interest recently among researchers in the Information Systems (IS) field. The need for more effective and efficient business processes in the area of financial controls drives enterprises to successfully implement GRC systems as an overall goal when they are striving for enterprise value of their integrated systems. The GRC implementation process is a significant parameter influencing the success of operational performance and financial governance and supports the practices for competitive advantage within the organisations. However, GRC literature is limited regarding the analysis of their implementation and adoption success. Therefore, there is a need for further research and contribution in the area of GRC systems and more specifically their implementation process. The research at hand recognizes GRC as a fundamental business requirement and focuses on the need to analyse the implementation process of such enterprise solutions. The research includes theoretical and empirical investigation of the GRC implementation within an enterprise and develops a framework for the analysis of the GRC adoption. The approach suggests that the three success factors (integration, optimisation, information) influence the adoption of the GRC and more specifically their implementation process. The proposed framework followed a case study approach to confirm its functionality and is evaluated through interviews with stakeholders involved in GRC implementations. Furthermore, it can be used by the organisations when considering the adoption of GRC solutions and can also suggest a tool for researchers to analyse and explain further the GRC implementation process.",
"title": ""
},
{
"docid": "neg:1840611_3",
"text": "Progress in Information and Communication Technologies (ICTs) is shaping more and more the healthcare domain. ICTs adoption provides new opportunities, as well as discloses novel and unforeseen application scenarios. As a result, the overall health sector is potentially benefited, as the quality of medical services is expected to be enhanced and healthcare costs are reduced, in spite of the increasing demand due to the aging population. Notwithstanding the above, the scientific literature appears to be still quite scattered and fragmented, also due to the interaction of scientific communities with different background, skills, and approaches. A number of specific terms have become of widespread use (e.g., regarding ICTs-based healthcare paradigms as well as at health-related data formats), but without commonly-agreed definitions. While scientific surveys and reviews have also been proposed, none of them aims at providing a holistic view of how today ICTs are able to support healthcare. This is the more and more an issue, as the integrated application of most if not all the main ICTs pillars is the most agreed upon trend, according to the Industry 4.0 paradigm about ongoing and future industrial revolution. In this paper we aim at shedding light on how ICTs and healthcare are related, identifying the most popular ICTs-based healthcare paradigms, together with the main ICTs backing them. Studying more than 300 papers, we survey outcomes of literature analyses and results from research activities carried out in this field. We characterize the main ICTs-based healthcare paradigms stemmed out in recent years fostered by the evolution of ICTs. Dissecting the scientific literature, we also identify the technological pillars underpinning the novel applications fueled by these technological advancements. Guided by the scientific literature, we review a number of application scenarios gaining momentum thanks to the beneficial impact of ICTs. As the evolution of ICTs enables to gather huge and invaluable data from numerous and highly varied sources in easier ways, here we also focus on the shapes that this healthcare-related data may take. This survey provides an up-to-date picture of the novel healthcare applications enabled by the ICTs advancements, with a focus on their specific hottest research challenges. It helps the interested readership (from both technological and medical fields) not to lose orientation in the complex landscapes possibly generated when advanced ICTs are adopted in application scenarios dictated by the critical healthcare domain.",
"title": ""
},
{
"docid": "neg:1840611_4",
"text": "hlany modern computing environments involve dynamic peer groups. Distributed Simdation, mtiti-user games, conferencing and replicated servers are just a few examples. Given the openness of today’s networks, communication among group members must be secure and, at the same time, efficient. This paper studies the problem of authenticated key agreement. in dynamic peer groups with the emphasis on efficient and provably secure key authentication, key confirmation and integrity. It begins by considering 2-party authenticateed key agreement and extends the restits to Group Dfi*Hehart key agreement. In the process, some new security properties (unique to groups) are discussed.",
"title": ""
},
{
"docid": "neg:1840611_5",
"text": "Cervical cancer represents the second leading cause of death for women worldwide. The importance of the diet and its impact on specific types of neoplasia has been highlighted, focusing again interest in the analysis of dietary phytochemicals. Polyphenols have shown a wide range of cellular effects: they may prevent carcinogens from reaching the targeted sites, support detoxification of reactive molecules, improve the elimination of transformed cells, increase the immune surveillance and the most important factor is that they can influence tumor suppressors and inhibit cellular proliferation, interfering in this way with the steps of carcinogenesis. From the studies reviewed in this paper, it is clear that certain dietary polyphenols hold great potential in the prevention and therapy of cervical cancer, because they interfere in carcinogenesis (in the initiation, development and progression) by modulating the critical processes of cellular proliferation, differentiation, apoptosis, angiogenesis and metastasis. Specifically, polyphenols inhibit the proliferation of HPV cells, through induction of apoptosis, growth arrest, inhibition of DNA synthesis and modulation of signal transduction pathways. The effects of combinations of polyphenols with chemotherapy and radiotherapy used in the treatment of cervical cancer showed results in the resistance of cervical tumor cells to chemo- and radiotherapy, one of the main problems in the treatment of cervical neoplasia that can lead to failure of the treatment because of the decreased efficiency of the therapy.",
"title": ""
},
{
"docid": "neg:1840611_6",
"text": "The localization of photosensitizers in the subcellular compartments during photodynamic therapy (PDT) plays a major role in the cell destruction; therefore, the aim of this study was to investigate the intracellular localization of Chlorin e6-PVP (Photolon™) in malignant and normal cells. Our study involves the characterization of the structural determinants of subcellular localization of Photolon, and how subcellular localization affects the selective toxicity of Photolon towards tumor cells. Using confocal laser scanning microscopy (CLSM) and fluorescent organelle probes; we examined the subcellular localization of Photolon™ in the murine colon carcinoma CT-26 and normal fibroblast (NHLC) cells. Our results demonstrated that after 30 min of incubation, the distribution of Photolon was localized mainly in the cytoplasmic organelles including the mitochondria, lysosomes, Golgi apparatus, around the nuclear envelope and also in the nucleus but not in the endo-plasmic reticulum whereas in NHLC cells, Photolon was found to be localized minimally only in the nucleus not in other organelles studied. The relationship between subcellular localization of Photolon and PDT-induced apoptosis was investigated. Apoptotic cell death was judged by the formation of known apoptotic hallmarks including, the phosphatidylserine externalization (PS), PARP cleavage, a substrate for caspase-3 and the formation of apoptotic nuclei. At the irradiation dose of 1 J/cm2, the percentage of apoptotic cells was 80%, respectively. This study provided substantial evidence that Photolon preferentially localized in the subcellular organelles in the following order: nucleus, mitochondria, lysosomes and the Golgi apparatus and subsequent photodamage of the mitochondria and lyso-somes played an important role in PDT-mediated apoptosis CT-26 cells. Our results based on the cytoplasmic organelles and the intranuclear localization extensively enhance the efficacy of PDT with appropriate photosensitizer and light dose and support the idea that PDT can contribute to elimination of malignant cells by inducing apoptosis, which is of physiological significance.",
"title": ""
},
{
"docid": "neg:1840611_7",
"text": "A combination of techniques that is becoming increasingly popular is the construction of part-based object representations using the outputs of interest-point detectors. Our contributions in this paper are twofold: first, we propose a primal-sketch-based set of image tokens that are used for object representation and detection. Second, top-down information is introduced based on an efficient method for the evaluation of the likelihood of hypothesized part locations. This allows us to use graphical model techniques to complement bottom-up detection, by proposing and finding the parts of the object that were missed by the front-end feature detection stage. Detection results for four object categories validate the merits of this joint top-down and bottom-up approach.",
"title": ""
},
{
"docid": "neg:1840611_8",
"text": "In this paper, a comparative study on frequency and time domain analyses for the evaluation of the seismic response of subsoil to the earthquake shaking is presented. After some remarks on the solutions given by the linear elasticity theory for this type of problem, the use of some widespread numerical codes is illustrated and the results are compared with the available theoretical predictions. Bedrock elasticity, viscous and hysteretic damping, stress-dependency of the stiffness and nonlinear behaviour of the soil are taken into account. A series of comparisons between the results obtained by the different computer programs is shown.",
"title": ""
},
{
"docid": "neg:1840611_9",
"text": "The problem of place recognition appears in different mobile robot navigation problems including localization, SLAM, or change detection in dynamic environments. Whereas this problem has been studied intensively in the context of robot vision, relatively few approaches are available for three-dimensional range data. In this paper, we present a novel and robust method for place recognition based on range images. Our algorithm matches a given 3D scan against a database using point features and scores potential transformations by comparing significant points in the scans. A further advantage of our approach is that the features allow for a computation of the relative transformations between scans which is relevant for registration processes. Our approach has been implemented and tested on different 3D data sets obtained outdoors. In several experiments we demonstrate the advantages of our approach also in comparison to existing techniques.",
"title": ""
},
{
"docid": "neg:1840611_10",
"text": "The lethal(3)malignant brain tumor [t(3)mbt] gene causes, when mutated, malignant growth of the adult optic neuroblasts and ganglion mother cells in the larval brain and imaginal disc overgrowth. Via overlapping deficiencies a genomic region of approximately 6.0 kb was identified, containing l(3)mbt+ gene sequences. The l(3)mbt+ gene encodes seven transcripts of 5.8 kb, 5.65 kb, 5.35 kb, 5.25 kb, 5.0 kb, 4.4 kb and 1.8 kb. The putative MBT163 protein, encompassing 1477 amino acids, is proline-rich and contains a novel zinc finger. In situ hybridizations of whole mount embryos and larval tissues revealed l(3)mbt+ RNA ubiquitously present in stage 1 embryos and throughout embryonic development in most tissues. In third instar larvae l(3)mbt+ RNA is detected in the adult optic anlagen and the imaginal discs, the tissues directly affected by l(3)mbt mutations, but also in tissues, showing normal development in the mutant, such as the gut, the goblet cells and the hematopoietic organs.",
"title": ""
},
{
"docid": "neg:1840611_11",
"text": "Accurately forecasting pollution concentration of PM2.5 can provide early warning for the government to alert the persons suffering from air pollution. Many existing approaches fail at providing favorable results duo to shallow architecture in forecasting model that can not learn suitable features. In addition, multiple meteorological factors increase the difficulty for understanding the influence of the PM2.5 concentration. In this paper, a deep neural network is proposed for accurately forecasting PM2.5 pollution concentration based on manifold learning. Firstly, meteorological factors are specified by the manifold learning method, reducing the dimension without any expert knowledge. Secondly, a deep belief network (DBN) is developed to learn the features of the input candidates obtained by the manifold learning and the one-day ahead PM2.5 concentration. Finally, the deep features are modeled by a regression neural network, and the local PM2.5 forecast is yielded. The addressed model is evaluated by the dataset in the period of 28/10/2013 to 31/3/2017 in Chongqing municipality of China. The study suggests that deep learning is a promising technique in PM2.5 concentration forecasting based on the manifold learning.",
"title": ""
},
{
"docid": "neg:1840611_12",
"text": "OBJECTIVE\nTo perform a cross-cultural adaptation of the Portuguese version of the Maslach Burnout Inventory for students (MBI-SS), and investigate its reliability, validity and cross-cultural invariance.\n\n\nMETHODS\nThe face validity involved the participation of a multidisciplinary team. Content validity was performed. The Portuguese version was completed in 2009, on the internet, by 958 Brazilian and 556 Portuguese university students from the urban area. Confirmatory factor analysis was carried out using as fit indices: the χ²/df, the Comparative Fit Index (CFI), the Goodness of Fit Index (GFI) and the Root Mean Square Error of Approximation (RMSEA). To verify the stability of the factor solution according to the original English version, cross-validation was performed in 2/3 of the total sample and replicated in the remaining 1/3. Convergent validity was estimated by the average variance extracted and composite reliability. The discriminant validity was assessed, and the internal consistency was estimated by the Cronbach's alpha coefficient. Concurrent validity was estimated by the correlational analysis of the mean scores of the Portuguese version and the Copenhagen Burnout Inventory, and the divergent validity was compared to the Beck Depression Inventory. The invariance of the model between the Brazilian and the Portuguese samples was assessed.\n\n\nRESULTS\nThe three-factor model of Exhaustion, Disengagement and Efficacy showed good fit (c 2/df = 8.498, CFI = 0.916, GFI = 0.902, RMSEA = 0.086). The factor structure was stable (λ:χ²dif = 11.383, p = 0.50; Cov: χ²dif = 6.479, p = 0.372; Residues: χ²dif = 21.514, p = 0.121). Adequate convergent validity (VEM = 0.45;0.64, CC = 0.82;0.88), discriminant (ρ² = 0.06;0.33) and internal consistency (α = 0.83;0.88) were observed. The concurrent validity of the Portuguese version with the Copenhagen Inventory was adequate (r = 0.21, 0.74). The assessment of the divergent validity was impaired by the approach of the theoretical concept of the dimensions Exhaustion and Disengagement of the Portuguese version with the Beck Depression Inventory. Invariance of the instrument between the Brazilian and Portuguese samples was not observed (λ:χ²dif = 84.768, p<0.001; Cov: χ²dif = 129.206, p < 0.001; Residues: χ²dif = 518.760, p < 0.001).\n\n\nCONCLUSIONS\nThe Portuguese version of the Maslach Burnout Inventory for students showed adequate reliability and validity, but its factor structure was not invariant between the countries, indicating the absence of cross-cultural stability.",
"title": ""
},
{
"docid": "neg:1840611_13",
"text": "The Elliptic Curve Digital Signature Algorithm (ECDSA) is the elliptic curve analogue of the Digital Signature Algorithm (DSA). It was accepted in 1999 as an ANSI standard and in 2000 as IEEE and NIST standards. It was also accepted in 1998 as an ISO standard and is under consideration for inclusion in some other ISO standards. Unlike the ordinary discrete logarithm problem and the integer factorization problem, no subexponential-time algorithm is known for the elliptic curve discrete logarithm problem. For this reason, the strength-per-key-bit is substantially greater in an algorithm that uses elliptic curves. This paper describes the ANSI X9.62 ECDSA, and discusses related security, implementation, and interoperability issues.",
"title": ""
},
{
"docid": "neg:1840611_14",
"text": "This revision to the EEG Guidelines is an update incorporating current EEG technology and practice. The role of the EEG in making the determination of brain death is discussed as are suggested technical criteria for making the diagnosis of electrocerebral inactivity.",
"title": ""
},
{
"docid": "neg:1840611_15",
"text": "This study investigates the case-based learning experience of 133 undergraduate veterinarian science students. Using qualitative methodologies from relational Student Learning Research, variation in the quality of the learning experience was identified, ranging from coherent, deep, quality experiences of the cases, to experiences that separated significant aspects, such as the online case histories, laboratory test results, and annotated images emphasizing symptoms, from the meaning of the experience. A key outcome of this study was that a significant percentage of the students surveyed adopted a poor approach to learning with online resources in a blended experience even when their overall learning experience was related to cohesive conceptions of veterinary science, and that the difference was even more marked for less successful students. The outcomes from the study suggest that many students are unsure of how to approach the use of online resources in ways that are likely to maximise benefits for learning in blended experiences, and that the benefits from case-based learning such as authenticity and active learning can be threatened if issues closely associated with qualitative variation arising from incoherence in the experience are not addressed.",
"title": ""
},
{
"docid": "neg:1840611_16",
"text": "The availability of multiple, essentially complete genome sequences of prokaryotes and eukaryotes spurred both the demand and the opportunity for the construction of an evolutionary classification of genes from these genomes. Such a classification system based on orthologous relationships between genes appears to be a natural framework for comparative genomics and should facilitate both functional annotation of genomes and large-scale evolutionary studies. We describe here a major update of the previously developed system for delineation of Clusters of Orthologous Groups of proteins (COGs) from the sequenced genomes of prokaryotes and unicellular eukaryotes and the construction of clusters of predicted orthologs for 7 eukaryotic genomes, which we named KOGs after euk aryotic o rthologous g roups. The COG collection currently consists of 138,458 proteins, which form 4873 COGs and comprise 75% of the 185,505 (predicted) proteins encoded in 66 genomes of unicellular organisms. The euk aryotic o rthologous g roups (KOGs) include proteins from 7 eukaryotic genomes: three animals (the nematode Caenorhabditis elegans, the fruit fly Drosophila melanogaster and Homo sapiens), one plant, Arabidopsis thaliana, two fungi (Saccharomyces cerevisiae and Schizosaccharomyces pombe), and the intracellular microsporidian parasite Encephalitozoon cuniculi. The current KOG set consists of 4852 clusters of orthologs, which include 59,838 proteins, or ~54% of the analyzed eukaryotic 110,655 gene products. Compared to the coverage of the prokaryotic genomes with COGs, a considerably smaller fraction of eukaryotic genes could be included into the KOGs; addition of new eukaryotic genomes is expected to result in substantial increase in the coverage of eukaryotic genomes with KOGs. Examination of the phyletic patterns of KOGs reveals a conserved core represented in all analyzed species and consisting of ~20% of the KOG set. This conserved portion of the KOG set is much greater than the ubiquitous portion of the COG set (~1% of the COGs). In part, this difference is probably due to the small number of included eukaryotic genomes, but it could also reflect the relative compactness of eukaryotes as a clade and the greater evolutionary stability of eukaryotic genomes. The updated collection of orthologous protein sets for prokaryotes and eukaryotes is expected to be a useful platform for functional annotation of newly sequenced genomes, including those of complex eukaryotes, and genome-wide evolutionary studies.",
"title": ""
},
{
"docid": "neg:1840611_17",
"text": "Reaction-time and eye-fixation data are analyzed to investigate how people infer the kinematics of simple mechanical systems (pulley systems) from diagrams showing their static configuration. It is proposed that this mental animation process involves decomposing the representation of a pulley system into smaller units corresponding to the machine components and animating these components in a sequence corresponding to the causal sequence of events in the machine's operation. Although it is possible for people to make inferences against the chain of causality in the machine, these inferences are more difficult, and people have a preference for inferences in the direction of causality. The mental animation process reflects both capacity limitations and limitations of mechanical knowledge.",
"title": ""
},
{
"docid": "neg:1840611_18",
"text": "Neural networks have had many great successes in recent years, particularly with the advent of deep learning and many novel training techniques. One issue that has prevented reinforcement learning from taking full advantage of scalable neural networks is that of catastrophic forgetting. The latter affects supervised learning systems when highly correlated input samples are presented, as well as when input patterns are non-stationary. However, most real-world problems are non-stationary in nature, resulting in prolonged periods of time separating inputs drawn from different regions of the input space. Unfortunately, reinforcement learning presents a worst-case scenario when it comes to precipitating catastrophic forgetting in neural networks. Meaningful training examples are acquired as the agent explores different regions of its state/action space. When the agent is in one such region, only highly correlated samples from that region are typically acquired. Moreover, the regions that the agent is likely to visit will depend on its current policy, suggesting that an agent that has a good policy may avoid exploring particular regions. The confluence of these factors means that without some mitigation techniques, supervised neural networks as function approximation in temporal-difference learning will only be applicable to the simplest test cases. In this work, we develop a feed forward neural network architecture that mitigates catastrophic forgetting by partitioning the input space in a manner that selectively activates a different subset of hidden neurons for each region of the input space. We demonstrate the effectiveness of the proposed framework on a cart-pole balancing problem for which other neural network architectures exhibit training instability likely due to catastrophic forgetting. We demonstrate that our technique produces better results, particularly with respect to a performance-stability measure.",
"title": ""
},
{
"docid": "neg:1840611_19",
"text": "There has been much recent work on training neural attention models at the sequencelevel using either reinforcement learning-style methods or by optimizing the beam. In this paper, we survey a range of classical objective functions that have been widely used to train linear models for structured prediction and apply them to neural sequence to sequence models. Our experiments show that these losses can perform surprisingly well by slightly outperforming beam search optimization in a like for like setup. We also report new state of the art results on both IWSLT’14 German-English translation as well as Gigaword abstractive summarization. On the large WMT’14 English-French task, sequence-level training achieves 41.5 BLEU which is on par with the state of the art.1",
"title": ""
}
] |
1840612 | Sentence Ranking with the Semantic Link Network in Scientific Paper | [
{
"docid": "pos:1840612_0",
"text": "‘Description’: ‘Microsoft will accelerate your journey to cloud computing with an! agile and responsive datacenter built from your existing technology investments.’,! ‘DisplayUrl’: ‘www.microsoft.com/en-us/server-cloud/ datacenter/virtualization.aspx’,! ‘ID’: ‘a42b0908-174e-4f25-b59c-70bdf394a9da’,! ‘Title’: ‘Microsoft | Server & Cloud | Datacenter | Virtualization ...’,! ‘Url’: ‘http://www.microsoft.com/en-us/server-cloud/datacenter/ virtualization.aspx’,! ...! Data! #Topics: 228! #Candidate Labels: ~6,000! Domains: BLOGS, BOOKS, NEWS, PUBMED! Candidate labels rated by humans (0-3) ! Published by Lau et al. (2011). 4. Scoring Candidate Labels! Candidate Label: L = {w1, w2, ..., wm}! Scoring Function: Task: The aim of the task is to associate labels with automatically generated topics.",
"title": ""
}
] | [
{
"docid": "neg:1840612_0",
"text": "This study investigated the extent of young adults’ (N = 393; 17–30 years old) experience of cyberbullying, from the perspectives of cyberbullies and cyber-victims using an online questionnaire survey. The overall prevalence rate shows cyberbullying is still present after the schooling years. No significant gender differences were noted, however females outnumbered males as cyberbullies and cyber-victims. Overall no significant differences were noted for age, but younger participants were found to engage more in cyberbullying activities (i.e. victims and perpetrators) than the older participants. Significant differences were noted for Internet frequency with those spending 2–5 h online daily reported being more victimized and engage in cyberbullying than those who spend less than an hour daily. Internet frequency was also found to significantly predict cyber-victimization and cyberbullying, indicating that as the time spent on Internet increases, so does the chances to be bullied and to bully someone. Finally, a positive significant association was observed between cyber-victims and cyberbullies indicating that there is a tendency for cyber-victims to become cyberbullies, and vice versa. Overall it can be concluded that cyberbullying incidences are still taking place, even though they are not as rampant as observed among the younger users. 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840612_1",
"text": "This paper presents a novel tightly-coupled monocular visual-inertial Simultaneous Localization and Mapping algorithm, which provides accurate and robust localization within the globally consistent map in real time on a standard CPU. This is achieved by firstly performing the visual-inertial extended kalman filter(EKF) to provide motion estimate at a high rate. However the filter becomes inconsistent due to the well known linearization issues. So we perform a keyframe-based visual-inertial bundle adjustment to improve the consistency and accuracy of the system. In addition, a loop closure detection and correction module is also added to eliminate the accumulated drift when revisiting an area. Finally, the optimized motion estimates and map are fed back to the EKF-based visual-inertial odometry module, thus the inconsistency and estimation error of the EKF estimator are reduced. In this way, the system can continuously provide reliable motion estimates for the long-term operation. The performance of the algorithm is validated on public datasets and real-world experiments, which proves the superiority of the proposed algorithm.",
"title": ""
},
{
"docid": "neg:1840612_2",
"text": "Cell phones are a pervasive new communication technology, especially among college students. This paper examines college students cell phone usage from a behavioral and psychological perspective. Utilizing both qualitative (focus groups) and quantitative (survey) approaches, the study suggests these individuals use the devices for a variety of purposes: to help them feel safe, for financial benefits, to manage time efficiently, to keep in touch with friends and family members, et al. The degree to which the individuals are dependent on the cell phones and what they view as the negatives of their utilization are also examined. The findings suggest people have various feelings and attitudes toward cell phone usage. This study serves as a foundation on which future studies will be built. 2003 Elsevier Science Ltd. All rights reserved.",
"title": ""
},
{
"docid": "neg:1840612_3",
"text": "Emerging low-power radio triggering techniques for wireless motes are a promising approach to prolong the lifetime of Wireless Sensor Networks (WSNs). By allowing nodes to activate their main transceiver only when data need to be transmitted or received, wake-up-enabled solutions virtually eliminate the need for idle listening, thus drastically reducing the energy toll of communication. In this paper we describe the design of a novel wake-up receiver architecture based on an innovative pass-band filter bank with high selectivity capability. The proposed concept, demonstrated by a prototype implementation, combines both frequency-domain and time-domain addressing space to allow selective addressing of nodes. To take advantage of the functionalities of the proposed receiver, as well as of energy-harvesting capabilities modern sensor nodes are equipped with, we present a novel wake-up-enabled harvesting-aware communication stack that supports both interest dissemination and converge casting primitives. This stack builds on the ability of the proposed WuR to support dynamic address assignment, which is exploited to optimize system performance. Comparison against traditional WSN protocols shows that the proposed concept allows to optimize performance tradeoffs with respect to existing low-power communication stacks.",
"title": ""
},
{
"docid": "neg:1840612_4",
"text": "Money laundering has become of increasing concern to law makers in recent years, principally because of its associations with terrorism. Recent legislative changes in the United Kingdom mean that auditors risk becoming state law enforcement agents in the private sector. We examine this legislation from the perspective of the changing nature of the relationship between auditors and the state, and the surveillant assemblage within which this is located. Auditors are statutorily obliged to file Suspicious Activity Reports (SARs) into an online database, ELMER, but without much guidance regarding how suspicion is determined. Criminal rather than civil or regulatory sanctions apply to auditors’ instances of non-compliance. This paper evaluates the surveillance implications of the legislation for auditors through lenses developed in the accounting and sociological literature by Brivot andGendron, Neu andHeincke, Deleuze and Guattari, and Haggerty and Ericson. It finds that auditors are generating information flows which are subsequently reassembled into discrete and virtual ‘data doubles’ to be captured and utilised by authorised third parties for unknown purposes. The paper proposes that the surveillant assemblage has extended into the space of the auditor-client relationship, but this extension remains inhibited as a result of auditors’ relatively weak level of engagement in providing SARs, thereby pointing to a degree of resistance in professional service firms regarding the deployment of regulation that compromises the foundations of this",
"title": ""
},
{
"docid": "neg:1840612_5",
"text": "Sparse singular value decomposition (SSVD) is proposed as a new exploratory analysis tool for biclustering or identifying interpretable row-column associations within high-dimensional data matrices. SSVD seeks a low-rank, checkerboard structured matrix approximation to data matrices. The desired checkerboard structure is achieved by forcing both the left- and right-singular vectors to be sparse, that is, having many zero entries. By interpreting singular vectors as regression coefficient vectors for certain linear regressions, sparsity-inducing regularization penalties are imposed to the least squares regression to produce sparse singular vectors. An efficient iterative algorithm is proposed for computing the sparse singular vectors, along with some discussion of penalty parameter selection. A lung cancer microarray dataset and a food nutrition dataset are used to illustrate SSVD as a biclustering method. SSVD is also compared with some existing biclustering methods using simulated datasets.",
"title": ""
},
{
"docid": "neg:1840612_6",
"text": "In many applications based on the use of unmanned aerial vehicles (UAVs), it is possible to establish a cluster of UAVs in which each UAV knows the other vehicle's position. Assuming that the common channel condition between any two nodes of UAVs is line-of-sight (LOS), the time and energy consumption for data transmission on each path that connecting two nodes may be estimated by a node itself. In this paper, we use a modified Bellman-Ford algorithm to find the best selection of relay nodes in order to minimize the time and energy consumption for data transmission between any UAV node in the cluster and the UAV acting as the cluster head. This algorithm is applied with a proposed cooperative MAC protocol that is compatible with the IEEE 802.11 standard. The evaluations under data saturation conditions illustrate noticeable benefits in successful packet delivery ratio, average delay, and in particular the cost of time and energy.",
"title": ""
},
{
"docid": "neg:1840612_7",
"text": "We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form–function relationship in language, our “composed” word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).",
"title": ""
},
{
"docid": "neg:1840612_8",
"text": "The past several years have seen remarkable progress in generative models which produce convincing samples of images and other modalities. A shared component of many powerful generative models is a decoder network, a parametric deep neural net that defines a generative distribution. Examples include variational autoencoders, generative adversarial networks, and generative moment matching networks. Unfortunately, it can be difficult to quantify the performance of these models because of the intractability of log-likelihood estimation, and inspecting samples can be misleading. We propose to use Annealed Importance Sampling for evaluating log-likelihoods for decoder-based models and validate its accuracy using bidirectional Monte Carlo. The evaluation code is provided at https:// github.com/tonywu95/eval_gen. Using this technique, we analyze the performance of decoder-based models, the effectiveness of existing log-likelihood estimators, the degree of overfitting, and the degree to which these models miss important modes of the data distribution.",
"title": ""
},
{
"docid": "neg:1840612_9",
"text": "relatedness between terms using the links found within their corresponding Wikipedia articles. Unlike other techniques based on Wikipedia, WLM is able to provide accurate measures efficiently, using only the links between articles rather than their textual content. Before describing the details, we first outline the other systems to which it can be compared. This is followed by a description of the algorithm, and its evaluation against manually-defined ground truth. The paper concludes with a discussion of the strengths and weaknesses of the new approach. Abstract",
"title": ""
},
{
"docid": "neg:1840612_10",
"text": "The fitness of an evolutionary individual can be understood in terms of its two basic components: survival and reproduction. As embodied in current theory, trade-offs between these fitness components drive the evolution of life-history traits in extant multicellular organisms. Here, we argue that the evolution of germ-soma specialization and the emergence of individuality at a new higher level during the transition from unicellular to multicellular organisms are also consequences of trade-offs between the two components of fitness-survival and reproduction. The models presented here explore fitness trade-offs at both the cell and group levels during the unicellular-multicellular transition. When the two components of fitness negatively covary at the lower level there is an enhanced fitness at the group level equal to the covariance of components at the lower level. We show that the group fitness trade-offs are initially determined by the cell level trade-offs. However, as the transition proceeds to multicellularity, the group level trade-offs depart from the cell level ones, because certain fitness advantages of cell specialization may be realized only by the group. The curvature of the trade-off between fitness components is a basic issue in life-history theory and we predict that this curvature is concave in single-celled organisms but becomes increasingly convex as group size increases in multicellular organisms. We argue that the increasingly convex curvature of the trade-off function is driven by the initial cost of reproduction to survival which increases as group size increases. To illustrate the principles and conclusions of the model, we consider aspects of the biology of the volvocine green algae, which contain both unicellular and multicellular members.",
"title": ""
},
{
"docid": "neg:1840612_11",
"text": "We present a novel hierarchical force-directed method for drawing large graphs. The algorithm produces a graph embedding in an Euclidean space E of any dimension. A two or three dimensional drawing of the graph is then obtained by projecting a higher-dimensional embedding into a two or three dimensional subspace of E. Projecting high-dimensional drawings onto two or three dimensions often results in drawings that are “smoother” and more symmetric. Among the other notable features of our approach are the utilization of a maximal independent set filtration of the set of vertices of a graph, a fast energy function minimization strategy, efficient memory management, and an intelligent initial placement of vertices. Our implementation of the algorithm can draw graphs with tens of thousands of vertices using a negligible amount of memory in less than one minute on a mid-range PC.",
"title": ""
},
{
"docid": "neg:1840612_12",
"text": "In this paper we present a framework for grasp planning with a humanoid robot arm and a five-fingered hand. The aim is to provide the humanoid robot with the ability of grasping objects that appear in a kitchen environment. Our approach is based on the use of an object model database that contains the description of all the objects that can appear in the robot workspace. This database is completed with two modules that make use of this object representation: an exhaustive offline grasp analysis system and a real-time stereo vision system. The offline grasp analysis system determines the best grasp for the objects by employing a simulation system, together with CAD models of the objects and the five-fingered hand. The results of this analysis are added to the object database using a description suited to the requirements of the grasp execution modules. A stereo camera system is used for a real-time object localization using a combination of appearance-based and model-based methods. The different components are integrated in a controller architecture to achieve manipulation task goals for the humanoid robot",
"title": ""
},
{
"docid": "neg:1840612_13",
"text": "PURPOSE OF REVIEW\nTo analyze the role of prepuce preservation in various disorders and discuss options available to reconstruct the prepuce.\n\n\nRECENT FINDINGS\nThe prepuce can be preserved in selected cases of penile degloving procedures, phimosis or hypospadias repair, and penile cancer resection. There is no clear evidence that debilitating and persistent preputial lymphedema develops after a prepuce-sparing penile degloving procedure. In fact, the prepuce can at times be preserved even if lymphedema develops. The prepuce can potentially be preserved in both phimosis and hypospadias repair. Penile cancer localized to the prepuce can be excised using Mohs' micrographic surgery without compromising survival. Reconstruction of the prepuce still remains a theoretical topic. There has been no study that has systematically evaluated efficacy of any reconstructive procedures.\n\n\nSUMMARY\nThe standard practice for preputial disorders remains circumcision. However, prepuce preservation is often technically feasible without compromising treatment. Preservative surgery combined with reconstruction may lead to better patient satisfaction and quality of life.",
"title": ""
},
{
"docid": "neg:1840612_14",
"text": "BACKGROUND\nPast research has found that playing a classic prosocial video game resulted in heightened prosocial behavior when compared to a control group, whereas playing a classic violent video game had no effect. Given purported links between violent video games and poor social behavior, this result is surprising. Here our aim was to assess whether this finding may be due to the specific games used. That is, modern games are experienced differently from classic games (more immersion in virtual environments, more connection with characters, etc.) and it may be that playing violent video games impacts prosocial behavior only when contemporary versions are used.\n\n\nMETHODS AND FINDINGS\nExperiments 1 and 2 explored the effects of playing contemporary violent, non-violent, and prosocial video games on prosocial behavior, as measured by the pen-drop task. We found that slight contextual changes in the delivery of the pen-drop task led to different rates of helping but that the type of game played had little effect. Experiment 3 explored this further by using classic games. Again, we found no effect.\n\n\nCONCLUSIONS\nWe failed to find evidence that playing video games affects prosocial behavior. Research on the effects of video game play is of significant public interest. It is therefore important that speculation be rigorously tested and findings replicated. Here we fail to substantiate conjecture that playing contemporary violent video games will lead to diminished prosocial behavior.",
"title": ""
},
{
"docid": "neg:1840612_15",
"text": "OBJECTIVES\nTo test a brief, non-sectarian program of meditation training for effects on perceived stress and negative emotion, and to determine effects of practice frequency and test the moderating effects of neuroticism (emotional lability) on treatment outcome.\n\n\nDESIGN AND SETTING\nThe study used a single-group, open-label, pre-test post-test design conducted in the setting of a university medical center.\n\n\nPARTICIPANTS\nHealthy adults (N=200) interested in learning meditation for stress-reduction were enrolled. One hundred thirty-three (76% females) completed at least 1 follow-up visit and were included in data analyses.\n\n\nINTERVENTION\nParticipants learned a simple mantra-based meditation technique in 4, 1-hour small-group meetings, with instructions to practice for 15-20 minutes twice daily. Instruction was based on a psychophysiological model of meditation practice and its expected effects on stress.\n\n\nOUTCOME MEASURES\nBaseline and monthly follow-up measures of Profile of Mood States; Perceived Stress Scale; State-Trait Anxiety Inventory (STAI); and Brief Symptom Inventory (BSI). Practice frequency was indexed by monthly retrospective ratings. Neuroticism was evaluated as a potential moderator of treatment effects.\n\n\nRESULTS\nAll 4 outcome measures improved significantly after instruction, with reductions from baseline that ranged from 14% (STAI) to 36% (BSI). More frequent practice was associated with better outcome. Higher baseline neuroticism scores were associated with greater improvement.\n\n\nCONCLUSIONS\nPreliminary evidence suggests that even brief instruction in a simple meditation technique can improve negative mood and perceived stress in healthy adults, which could yield long-term health benefits. Frequency of practice does affect outcome. Those most likely to experience negative emotions may benefit the most from the intervention.",
"title": ""
},
{
"docid": "neg:1840612_16",
"text": "Traditional vision-based localization methods such as visual SLAM suffer from practical problems in outdoor environments such as unstable feature detection and inability to perform location recognition under lighting, perspective, weather and appearance change. Additionally map construction on a large scale in these systems presents its own challenges. In this work, we present a novel method for precisely localizing vehicles on the road using signs marked on the road (road markings), which have the advantage of being distinct and easy to detect, their detection being robust under changes in lighting and weather. Our method uses corners detected on road markings to perform localization in global coordinates. The method consists of two phases - a mapping phase when a high-quality GPS device is used to automatically survey road marks and add them to a light-weight “map” or database, and a localization phase where road mark detection and look-up in the map, combined with visual odometry, produces precise localization. We present experiments using a real-time implementation operating in a car that demonstrates the improved localization robustness and accuracy of our system even when using road marks alone. However, in this case the trajectory between road marks has to be filled-in by visual odometry, which contributes drift. Hence, we also present a mechanism for combining road-mark-based maps with sparse feature-based maps that results in greater accuracy still. We see our use of road marks as a significant step in the general trend of using higher-level features for improved localization performance irrespective of environment conditions.",
"title": ""
},
{
"docid": "neg:1840612_17",
"text": "Interactive visualization requires the translation of data into a screen space of limited resolution. While currently ignored by most visualization models, this translation entails a loss of information and the introduction of a number of artifacts that can be useful, (e.g., aggregation, structures) or distracting (e.g., over-plotting, clutter) for the analysis. This phenomenon is observed in parallel coordinates, where overlapping lines between adjacent axes form distinct patterns, representing the relation between variables they connect. However, even for a small number of dimensions, the challenge is to effectively convey the relationships for all combinations of dimensions. The size of the dataset and a large number of dimensions only add to the complexity of this problem. To address these issues, we propose Pargnostics, parallel coordinates diagnostics, a model based on screen-space metrics that quantify the different visual structures. Pargnostics metrics are calculated for pairs of axes and take into account the resolution of the display as well as potential axis inversions. Metrics include the number of line crossings, crossing angles, convergence, overplotting, etc. To construct a visualization view, the user can pick from a ranked display showing pairs of coordinate axes and the structures between them, or examine all possible combinations of axes at once in a matrix display. Picking the best axes layout is an NP-complete problem in general, but we provide a way of automatically optimizing the display according to the user's preferences based on our metrics and model.",
"title": ""
},
{
"docid": "neg:1840612_18",
"text": "Human visual behaviour has significant potential for activity recognition and computational behaviour analysis, but previous works focused on supervised methods and recognition of predefined activity classes based on short-term eye movement recordings. We propose a fully unsupervised method to discover users' everyday activities from their long-term visual behaviour. Our method combines a bag-of-words representation of visual behaviour that encodes saccades, fixations, and blinks with a latent Dirichlet allocation (LDA) topic model. We further propose different methods to encode saccades for their use in the topic model. We evaluate our method on a novel long-term gaze dataset that contains full-day recordings of natural visual behaviour of 10 participants (more than 80 hours in total). We also provide annotations for eight sample activity classes (outdoor, social interaction, focused work, travel, reading, computer work, watching media, eating) and periods with no specific activity. We show the ability of our method to discover these activities with performance competitive with that of previously published supervised methods.",
"title": ""
},
{
"docid": "neg:1840612_19",
"text": "The trend towards more commercial-off-the-shelf (COTS) components in complex safety-critical systems is increasing the difficulty of verifying system correctness. Runtime verification (RV) is a lightweight technique to verify that certain properties hold over execution traces. RV is usually implemented as runtime monitors that can be used as runtime fault detectors or test oracles to analyze a system under test for bad behaviors. Most existing RV methods utilize some form of system or code instrumentation and thus are not designed to monitor potentially black-box COTS components. This thesis presents a suitable runtime monitoring framework for monitoring safety-critical embedded systems with black-box components. We provide an end-to-end framework including proven correct monitoring algorithms, a formal specification language with semi-formal techniques to map the system onto our formal system trace model, specification design patterns to aid translating informal specifications into the formal specification language, and a safety-case pattern example showing the argument that our monitor design can be safely integrated with a target system. We utilized our monitor implementation to check test logs from several system tests. We show the monitor being used to check system test logs offline for interesting properties. We also performed real-time replay of logs from a system network bus, demonstrating the feasibility of our embedded monitor implementation in real-time operation.",
"title": ""
}
]