Dataset schema:
- query_id: string (length 32)
- query: string (length 5 to 5.38k)
- positive_passages: list (length 1 to 26)
- negative_passages: list (length 7 to 100)
- subset: string (7 classes)
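The rows reproduced below follow this schema. As a quick illustration, a row can be checked programmatically; the sketch below is plain Python with no external dependencies, and the `validate_row` helper and the truncated `example` record are hypothetical illustrations, not part of any official dataset tooling.

```python
# Minimal sketch of validating one row of this dump against the schema
# above. validate_row and the abbreviated example record are
# illustrative; real rows carry full passage texts.

def validate_row(row):
    """Check one record: 32-char query_id, a query string, non-empty
    passage lists whose items carry docid/text/title, and a subset label."""
    assert isinstance(row["query_id"], str) and len(row["query_id"]) == 32
    assert isinstance(row["query"], str) and len(row["query"]) >= 5
    for key in ("positive_passages", "negative_passages"):
        passages = row[key]
        assert isinstance(passages, list) and len(passages) >= 1
        for p in passages:
            assert {"docid", "text", "title"} <= set(p.keys())
    assert isinstance(row["subset"], str)
    return True

example = {
    "query_id": "5a7052cb7df7235f112f0d4f750339a0",
    "query": "Exploring ROI size in deep learning based lipreading",
    "positive_passages": [
        {"docid": "7fe3cf6b8110c324a98a90f31064dadb", "text": "...", "title": ""}
    ],
    "negative_passages": [
        {"docid": "335daed2a03f710d25e1e0a43c600453", "text": "...", "title": ""}
    ],
    "subset": "scidocsrr",
}

print(validate_row(example))  # True
```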
5a7052cb7df7235f112f0d4f750339a0
Exploring ROI size in deep learning based lipreading
[ { "docid": "7fe3cf6b8110c324a98a90f31064dadb", "text": "Traditional visual speech recognition systems consist of two stages, feature extraction and classification. Recently, several deep learning approaches have been presented which automatically extract features from the mouth images and aim to replace the feature extraction stage. However, research on joint learning of features and classification is very limited. In this work, we present an end-to-end visual speech recognition system based on Long-Short Memory (LSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and perform classification and also achieves state-of-the-art performance in visual speech classification. The model consists of two streams which extract features directly from the mouth and difference images, respectively. The temporal dynamics in each stream are modelled by an LSTM and the fusion of the two streams takes place via a Bidirectional LSTM (BLSTM). An absolute improvement of 9.7% over the base line is reported on the OuluVS2 database, and 1.5% on the CUAVE database when compared with other methods which use a similar visual front-end.", "title": "" } ]
[ { "docid": "335daed2a03f710d25e1e0a43c600453", "text": "The Digital Bibliography and Library Project (DBLP) is a popular computer science bibliography website hosted at the University of Trier in Germany. It currently contains 2,722,212 computer science publications with additional information about the authors and conferences, journals, or books in which these are published. Although the database covers the majority of papers published in this field of research, it is still hard to browse the vast amount of textual data manually to find insights and correlations in it, in particular time-varying ones. This is also problematic if someone is merely interested in all papers of a specific topic and possible correlated scientific words which may hint at related papers. To close this gap, we propose an interactive tool which consists of two separate components, namely data analysis and data visualization. We show the benefits of our tool and explain how it might be used in a scenario where someone is confronted with the task of writing a state-of-the art report on a specific topic. We illustrate how data analysis, data visualization, and the human user supported by interaction features can work together to find insights which makes typical literature search tasks faster.", "title": "" }, { "docid": "a601abae0a3d54d4aa3ecbb4bd09755a", "text": "Article history: Received 27 March 2008 Received in revised form 2 September 2008 Accepted 20 October 2008", "title": "" }, { "docid": "51fb43ac979ce0866eb541adc145ba70", "text": "In many cooperatively breeding species, group members form a dominance hierarchy or queue to inherit the position of breeder. Models aimed at understanding individual variation in helping behavior, however, rarely take into account the effect of dominance rank on expected future reproductive success and thus the potential direct fitness costs of helping. 
Here we develop a kin-selection model of helping behavior in multimember groups in which only the highest ranking individual breeds. Each group member can invest in the dominant’s offspring at a cost to its own survivorship. The model predicts that lower ranked subordinates, who have a smaller probability of inheriting the group, should work harder than higher ranked subordinates. This prediction holds regardless of whether the intrinsic mortality rate of subordinates increases or decreases with rank. The prediction does not necessarily hold, however, where the costs of helping are higher for lower ranked individuals: a situation that may be common in vertebrates. The model makes two further testable predictions: that the helping effort of an individual of given rank should be lower in larger groups, and the reproductive success of dominants should be greater where group members are more closely related. Empirical evidence for these predictions is discussed. We argue that the effects of rank on stable helping effort may explain why attempts to correlate individual helping effort with relatedness in cooperatively breeding species have met with limited success.", "title": "" }, { "docid": "e8b199733c0304731a60db7c42987cf6", "text": "This ethnographic study of 22 diverse families in the San Francisco Bay Area provides a holistic account of parents' attitudes about their children's use of technology. We found that parents from different socioeconomic classes have different values and practices around technology use, and that those values and practices reflect structural differences in their everyday lives. Calling attention to class differences in technology use challenges the prevailing practice in human-computer interaction of designing for those similar to oneself, which often privileges middle-class values and practices. 
By discussing the differences between these two groups and the advantages of researching both, this research highlights the benefits of explicitly engaging with socioeconomic status as a category of analysis in design.", "title": "" }, { "docid": "2ec9ac2c283fa0458eb97d1e359ec358", "text": "Multiple automakers have in development or in production automated driving systems (ADS) that offer freeway-pilot functions. This type of ADS is typically limited to restricted-access freeways only, that is, the transition from manual to automated modes takes place only after the ramp merging process is completed manually. One major challenge to extend the automation to ramp merging is that the automated vehicle needs to incorporate and optimize long-term objectives (e.g. successful and smooth merge) when near-term actions must be safely executed. Moreover, the merging process involves interactions with other vehicles whose behaviors are sometimes hard to predict but may influence the merging vehicle's optimal actions. To tackle such a complicated control problem, we propose to apply Deep Reinforcement Learning (DRL) techniques for finding an optimal driving policy by maximizing the long-term reward in an interactive environment. Specifically, we apply a Long Short-Term Memory (LSTM) architecture to model the interactive environment, from which an internal state containing historical driving information is conveyed to a Deep Q-Network (DQN). The DQN is used to approximate the Q-function, which takes the internal state as input and generates Q-values as output for action selection. With this DRL architecture, the historical impact of interactive environment on the long-term reward can be captured and taken into account for deciding the optimal control policy. 
The proposed architecture has the potential to be extended and applied to other autonomous driving scenarios such as driving through a complex intersection or changing lanes under varying traffic flow conditions.", "title": "" }, { "docid": "6566ad2c654274105e94f99ac5e20401", "text": "This paper presents a universal morphological feature schema that represents the finest distinctions in meaning that are expressed by overt, affixal inflectional morphology across languages. This schema is used to universalize data extracted from Wiktionary via a robust multidimensional table parsing algorithm and feature mapping algorithms, yielding 883,965 instantiated paradigms in 352 languages. These data are shown to be effective for training morphological analyzers, yielding significant accuracy gains when applied to Durrett and DeNero’s (2013) paradigm learning framework.", "title": "" }, { "docid": "405bae0d413aa4b5fef0ac8b8c639235", "text": "Leukocyte adhesion deficiency (LAD) type III is a rare syndrome characterized by severe recurrent infections, leukocytosis, and increased bleeding tendency. All integrins are normally expressed yet a defect in their activation leads to the observed clinical manifestations. Less than 20 patients have been reported world wide and the primary genetic defect was identified in some of them. Here we describe the clinical features of patients in whom a mutation in the calcium and diacylglycerol-regulated guanine nucleotide exchange factor 1 (CalDAG GEF1) was found and compare them to other cases of LAD III and to animal models harboring a mutation in the CalDAG GEF1 gene. 
The hallmarks of the syndrome are recurrent infections accompanied by severe bleeding episodes distinguished by osteopetrosis-like bone abnormalities and neurodevelopmental defects.", "title": "" }, { "docid": "4a761bed54487cb9c34fc0ff27883944", "text": "We show that unsupervised training of latent capsule layers using only the reconstruction loss, without masking to select the correct output class, causes a loss of equivariances and other desirable capsule qualities. This implies that supervised capsules networks can’t be very deep. Unsupervised sparsening of latent capsule layer activity both restores these qualities and appears to generalize better than supervised masking, while potentially enabling deeper capsules networks. We train a sparse, unsupervised capsules network of similar geometry to (Sabour et al., 2017) on MNIST (LeCun et al., 1998) and then test classification accuracy on affNIST using an SVM layer. Accuracy is improved from benchmark 79% to 90%.", "title": "" }, { "docid": "c0762517ebbae00ab5ee1291460c164c", "text": "This paper compares various topologies for a 6.6kW on-board charger (OBC) to find a suitable topology. In general, an OBC consists of two stages: a power factor correction (PFC) stage and a DC-DC converter stage. Conventional boost PFC, interleaved boost PFC, and semi-bridgeless PFC are considered as PFC circuits, and full-bridge converter, phase-shift full-bridge converter, and series resonant converter are taken into account for the DC-DC converter circuit. The design process of each topology is presented. Then, loss analysis is performed in order to calculate the efficiency of each topology for the PFC circuit and the DC-DC converter circuit. In addition, the volume of magnetic components and the number of semiconductor elements are considered. 
Based on these results, a topology selection guideline according to the system specification of the 6.6kW OBC is proposed.", "title": "" }, { "docid": "12274a9b350f1d1f7a3eb0cd865f260c", "text": "A large amount of multimedia data (e.g., image and video) is now available on the Web. A multimedia entity does not appear in isolation, but is accompanied by various forms of metadata, such as surrounding text, user tags, ratings, comments, etc. Mining these textual metadata has been found to be effective in facilitating multimedia information processing and management. A wealth of research efforts has been dedicated to text mining in multimedia. This chapter provides a comprehensive survey of recent research efforts. Specifically, the survey focuses on four aspects: (a) surrounding text mining; (b) tag mining; (c) joint text and visual content mining; and (d) cross text and visual content mining. Furthermore, open research issues are identified based on the current research efforts.", "title": "" }, { "docid": "7f71e539817c80aaa0a4fe3b68d76948", "text": "We propose to help weakly supervised object localization for classes where location annotations are not available, by transferring things and stuff knowledge from a source set with available annotations. The source and target classes might share similar appearance (e.g. bear fur is similar to cat fur) or appear against similar background (e.g. horse and sheep appear against grass). To exploit this, we acquire three types of knowledge from the source set: a segmentation model trained on both thing and stuff classes; similarity relations between target and source classes; and co-occurrence relations between thing and stuff classes in the source. The segmentation model is used to generate thing and stuff segmentation maps on a target image, while the class similarity and co-occurrence knowledge help refine them. 
We then incorporate these maps as new cues into a multiple instance learning (MIL) framework, propagating the transferred knowledge from the pixel level to the object proposal level. In extensive experiments, we conduct our transfer from the PASCAL Context dataset (source) to the ILSVRC, COCO and PASCAL VOC 2007 datasets (targets). We evaluate our transfer across widely different thing classes, including some that are not similar in appearance, but appear against similar background. The results demonstrate significant improvement over standard MIL, and we outperform the state-of-the-art in the transfer setting.", "title": "" }, { "docid": "a3585d424a54c31514aba579b80d8231", "text": "The vast majority of today's critical infrastructure is supported by numerous feedback control loops and an attack on these control loops can have disastrous consequences. This is a major concern since modern control systems are becoming large and decentralized and thus more vulnerable to attacks. This paper is concerned with the estimation and control of linear systems when some of the sensors or actuators are corrupted by an attacker. We give a new simple characterization of the maximum number of attacks that can be detected and corrected as a function of the pair (A,C) of the system and we show in particular that it is impossible to accurately reconstruct the state of a system if more than half the sensors are attacked. In addition, we show how the design of a secure local control loop can improve the resilience of the system. When the number of attacks is smaller than a threshold, we propose an efficient algorithm inspired by techniques in compressed sensing to estimate the state of the plant despite attacks. We give a theoretical characterization of the performance of this algorithm and we show on numerical simulations that the method is promising and allows the state to be reconstructed accurately despite attacks. 
Finally, we consider the problem of designing output-feedback controllers that stabilize the system despite sensor attacks. We show that a principle of separation between estimation and control holds and that the design of resilient output feedback controllers can be reduced to the design of resilient state estimators.", "title": "" }, { "docid": "07941e1f7a8fd0bbc678b641b80dc037", "text": "This contribution presents a very brief and critical discussion on automated machine learning (AutoML), which is categorized here into two classes, referred to as narrow AutoML and generalized AutoML, respectively. The conclusions yielded from this discussion can be summarized as follows: (1) most existing research on AutoML belongs to the class of narrow AutoML; (2) advances in narrow AutoML are mainly motivated by commercial needs, while any possible benefit obtained is definitely at a cost of increase in computing burdens; (3) the concept of generalized AutoML has a strong tie in spirit with artificial general intelligence (AGI), also called “strong AI”, for which obstacles abound for obtaining pivotal progress.", "title": "" }, { "docid": "ff20e5cd554cd628eba07776fa9a5853", "text": "We describe our early experience in applying our console log mining techniques [19, 20] to logs from production Google systems with thousands of nodes. This data set is five orders of magnitude in size and contains almost 20 times as many message types as the Hadoop data set we used in [19]. It also has many properties that are unique to large scale production deployments (e.g., the system stays on for several months and multiple versions of the software can run concurrently). Our early experience shows that our techniques, including source code based log parsing, state and sequence based feature creation and problem detection, work well on this production data set. 
We also discuss our experience in using our log parser to assist the log sanitization.", "title": "" }, { "docid": "8fe6e954db9080e233bbc6dbf8117914", "text": "This document defines a deterministic digital signature generation procedure. Such signatures are compatible with standard Digital Signature Algorithm (DSA) and Elliptic Curve Digital Signature Algorithm (ECDSA) digital signatures and can be processed with unmodified verifiers, which need not be aware of the procedure described therein. Deterministic signatures retain the cryptographic security features associated with digital signatures but can be more easily implemented in various environments, since they do not need access to a source of high-quality randomness. Status of This Memo This document is not an Internet Standards Track specification; it is published for informational purposes. This is a contribution to the RFC Series, independently of any other RFC stream. The RFC Editor has chosen to publish this document at its discretion and makes no statement about its value for implementation or deployment. Documents approved for publication by the RFC Editor are not a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document.", "title": "" }, { "docid": "04f705462bdd34a8d82340fb59264a51", "text": "This paper describes EmoTweet-28, a carefully curated corpus of 15,553 tweets annotated with 28 emotion categories for the purpose of training and evaluating machine learning models for emotion classification. EmoTweet-28 is, to date, the largest tweet corpus annotated with fine-grained emotion categories. 
The corpus contains annotations for four facets of emotion: valence, arousal, emotion category and emotion cues. We first used small-scale content analysis to inductively identify a set of emotion categories that characterize the emotions expressed in microblog text. We then expanded the size of the corpus using crowdsourcing. The corpus encompasses a variety of examples including explicit and implicit expressions of emotions as well as tweets containing multiple emotions. EmoTweet-28 represents an important resource to advance the development and evaluation of more emotion-sensitive systems.", "title": "" }, { "docid": "0a3f5ff37c49840ec8e59cbc56d31be2", "text": "Convolutional neural networks (CNNs) are well known for producing state-of-the-art recognizers for document processing [1]. However, they can be difficult to implement and are usually slower than traditional multi-layer perceptrons (MLPs). We present three novel approaches to speeding up CNNs: a) unrolling convolution, b) using BLAS (basic linear algebra subroutines), and c) using GPUs (graphic processing units). Unrolled convolution converts the processing in each convolutional layer (both forward-propagation and back-propagation) into a matrix-matrix product. The matrix-matrix product representation of CNNs makes their implementation as easy as MLPs. BLAS is used to efficiently compute matrix products on the CPU. We also present a pixel shader based GPU implementation of CNNs. Results on character recognition problems indicate that unrolled convolution with BLAS produces a dramatic 2.4X−3.0X speedup. The GPU implementation is even faster and produces a 3.1X−4.1X speedup.", "title": "" }, { "docid": "f733b53147ce1765709acfcba52c8bbf", "text": "BACKGROUND\nIt is important to evaluate the impact of cannabis use on onset and course of psychotic illness, as the increasing number of novice cannabis users may translate into a greater public health burden. 
This study aims to examine the relationship between adolescent onset of regular marijuana use and age of onset of prodromal symptoms, or first episode psychosis, and the manifestation of psychotic symptoms in those adolescents who use cannabis regularly.\n\n\nMETHODS\nA review was conducted of the current literature for youth who initiated cannabis use prior to the age of 18 and experienced psychotic symptoms at, or prior to, the age of 25. Seventeen studies met eligibility criteria and were included in this review.\n\n\nRESULTS\nThe current weight of evidence supports the hypothesis that early initiation of cannabis use increases the risk of early onset psychotic disorder, especially for those with a preexisting vulnerability and who have greater severity of use. There is also a dose-response association between cannabis use and symptoms, such that those who use more tend to experience greater number and severity of prodromal and diagnostic psychotic symptoms. Those with early-onset psychotic disorder and comorbid cannabis use show a poorer course of illness in regards to psychotic symptoms, treatment, and functional outcomes. However, those with early initiation of cannabis use appear to show a higher level of social functioning than non-cannabis users.\n\n\nCONCLUSIONS\nAdolescent initiation of cannabis use is associated, in a dose-dependent fashion, with emergence and severity of psychotic symptoms and functional impairment such that those who initiate use earlier and use at higher frequencies demonstrate poorer illness and treatment outcomes. These associations appear more robust for adolescents at high risk for developing a psychotic disorder.", "title": "" }, { "docid": "f59adaac85f7131bf14335dad2337568", "text": "Product search is an important part of online shopping. In contrast to many search tasks, the objectives of product search are not confined to retrieving relevant products. 
Instead, it focuses on finding items that satisfy the needs of individuals and lead to a user purchase. The unique characteristics of product search make search personalization essential for both customers and e-shopping companies. Purchase behavior is highly personal in online shopping and users often provide rich feedback about their decisions (e.g. product reviews). However, the severe mismatch found in the language of queries, products and users makes traditional retrieval models based on bag-of-words assumptions less suitable for personalization in product search. In this paper, we propose a hierarchical embedding model to learn semantic representations for entities (i.e. words, products, users and queries) from different levels with their associated language data. Our contributions are three-fold: (1) our work is one of the initial studies on personalized product search; (2) our hierarchical embedding model is the first latent space model that jointly learns distributed representations for queries, products and users with a deep neural network; (3) each component of our network is designed as a generative model so that the whole structure is explainable and extendable. Following the methodology of previous studies, we constructed personalized product search benchmarks with Amazon product data. Experiments show that our hierarchical embedding model significantly outperforms existing product search baselines on multiple benchmark datasets.", "title": "" } ]
scidocsrr
ffbcb3fd0a81574fee47ea757dcc44e4
Estimation accuracy of a vector-controlled frequency converter used in the determination of the pump system operating state
[ { "docid": "63755caaaad89e0ef6a687bb5977f5de", "text": "rotor field orientation stator field orientation stator model rotor model MRAS, observers, Kalman filter parasitic properties field angle estimation Abstract — Controlled induction motor drives without mechanical speed sensors at the motor shaft have the attractions of low cost and high reliability. To replace the sensor, the information on the rotor speed is extracted from measured stator voltages and currents at the motor terminals. Vector controlled drives require estimating the magnitude and spatial orientation of the fundamental magnetic flux waves in the stator or in the rotor. Open loop estimators or closed loop observers are used for this purpose. They differ with respect to accuracy, robustness, and sensitivity against model parameter variations. Dynamic performance and steady-state speed accuracy in the low speed range can be achieved by exploiting parasitic effects of the machine. The overview in this paper uses signal flow graphs of complex space vector quantities to provide an insightful description of the systems used in sensorless control of induction motors.", "title": "" }, { "docid": "b8ce74fc2a02a1a5c2d93e2922529bb0", "text": "The basic evolution of direct torque control from other drive types is explained. Qualitative comparisons with other drives are included. The basic concepts behind direct torque control are clarified. An explanation of direct self-control and the field orientation concepts implemented in the adaptive motor model block is presented. The reliance of the control method on fast processing techniques is stressed. The theoretical foundations for the control concept are provided in summary format. Information on the ancillary control blocks outside the basic direct torque control is given. The implementation of special functions directly related to the control approach is described. Finally, performance data from an actual system is presented.", "title": "" } ]
[ { "docid": "530ef3f5d2f7cb5cc93243e2feb12b8e", "text": "Online personal health record (PHR) enables patients to manage their own medical records in a centralized way, which greatly facilitates the storage, access and sharing of personal health data. With the emergence of cloud computing, it is attractive for the PHR service providers to shift their PHR applications and storage into the cloud, in order to enjoy the elastic resources and reduce the operational cost. However, by storing PHRs in the cloud, the patients lose physical control to their personal health data, which makes it necessary for each patient to encrypt her PHR data before uploading to the cloud servers. Under encryption, it is challenging to achieve fine-grained access control to PHR data in a scalable and efficient way. For each patient, the PHR data should be encrypted so that it is scalable with the number of users having access. Also, since there are multiple owners (patients) in a PHR system and every owner would encrypt her PHR files using a different set of cryptographic keys, it is important to reduce the key distribution complexity in such multi-owner settings. Existing cryptographic enforced access control schemes are mostly designed for the single-owner scenarios. In this paper, we propose a novel framework for access control to PHRs within cloud computing environment. To enable fine-grained and scalable access control for PHRs, we leverage attribute based encryption (ABE) techniques to encrypt each patients’ PHR data. To reduce the key distribution complexity, we divide the system into multiple security domains, where each domain manages only a subset of the users. In this way, each patient has full control over her own privacy, and the key management complexity is reduced dramatically. 
Our proposed scheme is also flexible, in that it supports efficient and on-demand revocation of user access rights, and break-glass access under emergency scenarios.", "title": "" }, { "docid": "f6679ca9f6c9efcb4093a33af15176d3", "text": "This paper reports our recent finding that a laser that is radiated on a thin light-absorbing elastic medium attached on the skin can elicit a tactile sensation of mechanical tap. Laser radiation to the elastic medium creates inner elastic waves on the basis of thermoelastic effects, which subsequently move the medium and stimulate the skin. We characterize the associated stimulus by measuring its physical properties. In addition, the perceptual identity of the stimulus is confirmed by comparing it to mechanical and electrical stimuli by means of perceptual spaces. All evidence claims that indirect laser radiation conveys a sensation of short mechanical tap with little individual difference. To the best of our knowledge, this is the first study that discovers the possibility of using indirect laser radiation for mid-air tactile rendering.", "title": "" }, { "docid": "63efc8aecf9b28b2a2bbe4514ed3a7fe", "text": "Reading is a hobby to open the knowledge windows. Besides, it can provide the inspiration and spirit to face this life. By this way, concomitant with the technology development, many companies serve the e-book or book in soft file. The system of this book of course will be much easier. No worry to forget bringing the statistics and chemometrics for analytical chemistry book. You can open the device and get the book by on-line.", "title": "" }, { "docid": "53c962bd71abbe13d59e03e01c19d82e", "text": "Correctness of SQL queries is usually tested by executing the queries on one or more datasets. Erroneous queries are often the results of small changes or mutations of the correct query. A mutation Q′ of a query Q is killed by a dataset D if Q(D) ≠ Q′(D). 
Earlier work on the XData system showed how to generate datasets that kill all mutations in a class of mutations that included join type and comparison operation mutations. In this paper, we extend the XData data generation techniques to handle a wider variety of SQL queries and a much larger class of mutations. We have also built a system for grading SQL queries using the datasets generated by XData. We present a study of the effectiveness of the datasets generated by the extended XData approach, using a variety of queries including queries submitted by students as part of a database course. We show that the XData datasets outperform predefined datasets as well as manual grading done earlier by teaching assistants, while also avoiding the drudgery of manual correction. Thus, we believe that our techniques will be of great value to database course instructors and TAs, particularly to those of MOOCs. It will also be valuable to database application developers and testers for testing SQL queries.", "title": "" }, { "docid": "55f80d7b459342a41bb36a5c0f6f7e0d", "text": "A smart phone is a handheld device that combines the functionality of a cellphone, a personal digital assistant (PDA) and other information appliances such a music player. These devices can however be used in a crime and would have to be quickly analysed for evidence. This data is collected using either a forensic tool which resides on a PC or specialised hardware. This paper proposes the use of an on-phone forensic tool to collect the contents of the device and store it on removable storage. This approach requires less equipment and can retrieve the volatile information that resides on the phone such as running processes. 
The paper discusses the Symbian operating system, the evidence that is stored on the device and contrasts the approach with that followed by other tools.", "title": "" }, { "docid": "17ccae5f98711c8698f0fb4a449a591f", "text": "Blind image deconvolution: theory and applications. Images are ubiquitous and indispensable in science and everyday life. Mirroring the abilities of our own human visual system, it is natural to display observations of the world in graphical form. Images are obtained in areas ranging from everyday photography to astronomy, remote sensing, medical imaging, and microscopy. In each case, there is an underlying object or scene we wish to observe; the original or true image is the ideal representation of the observed scene. Yet the observation process is never perfect: there is uncertainty in the measurements, occurring as blur, noise, and other degradations in the recorded images. Digital image restoration aims to recover an estimate of the original image from the degraded observations. The key to being able to solve this ill-posed inverse problem is proper incorporation of prior knowledge about the original image into the restoration process. Classical image restoration seeks an estimate of the true image assuming the blur is known. In contrast, blind image restoration tackles the much more difficult, but realistic, problem where the degradation is unknown. In general, the degradation is nonlinear (including, for example, saturation and quantization) and spatially varying (non uniform motion, imperfect optics); however, for most of the work, it is assumed that the observed image is the output of a Linear Spatially Invariant (LSI) system to which noise is added. Therefore it becomes a Blind Deconvolution (BD) problem, with the unknown blur represented as a Point Spread Function (PSF). 
Classical restoration has matured since its inception, in the context of space exploration in the 1960s, and numerous techniques can be found in the literature (for recent reviews see [1, 2]). These differ primarily in the prior information about the image they include to perform the restoration task. The earliest algorithms to tackle the BD problem appeared as long ago as the mid-1970s [3, 4], and attempted to identify known patterns in the blur; a small but dedicated effort followed through the late 1980s (see for instance [5, 6, 7, 8, 9]), and a resurgence was seen in the 1990s (see the earlier reviews in [10, 11]). Since then, the area has been extensively explored by the signal processing, astronomical, and optics communities. Many of the BD algorithms have their roots in estimation theory, linear algebra, and numerical analysis. An important question …", "title": "" }, { "docid": "b386c24fc4412d050c1fb71692540b45", "text": "In this paper, we consider the problem of approximating the densest subgraph in the dynamic graph stream model. In this model of computation, the input graph is defined by an arbitrary sequence of edge insertions and deletions and the goal is to analyze properties of the resulting graph given memory that is sub-linear in the size of the stream. We present a single-pass algorithm that returns a (1 + ε) approximation of the maximum density with high probability; the algorithm uses O(ε^{-2} n polylog n) space, processes each stream update in polylog(n) time, and uses poly(n) post-processing time where n is the number of nodes. The space used by our algorithm matches the lower bound of Bahmani et al. (PVLDB 2012) up to a poly-logarithmic factor for constant ε. The best existing results for this problem were established recently by Bhattacharya et al. (STOC 2015). 
They presented a (2 + ε) approximation algorithm using similar space and another algorithm that both processed each update and maintained a (4 + ε) approximation of the current maximum density in polylog(n) time per-update.", "title": "" }, { "docid": "494b375064fbbe012b382d0ad2db2900", "text": "You are smart to question how different medications interact when used concurrently. Champix, called Chantix in the United States and globally by its generic name varenicline [2], is a prescription medication that can help individuals quit smoking by partially stimulating nicotine receptors in cells throughout the body. Nicorette gum, a type of nicotine replacement therapy (NRT), is also a tool to help smokers quit by providing individuals with the nicotine they crave by delivering the substance in controlled amounts through the lining of the mouth. NRT is available in many other forms including lozenges, patches, inhalers, and nasal sprays. The short answer is that there is disagreement among researchers about whether or not there are negative consequences to chewing nicotine gum while taking varenicline. While some studies suggest no harmful side effects to using them together, others have found adverse effects from using both at the same time. So, what does the current evidence say?", "title": "" }, { "docid": "f11ee9f354936eefa539d9aa518ac6b1", "text": "This paper presents a modified priority based probe algorithm for deadlock detection and resolution in distributed database systems. The original priority based probe algorithm was presented by Sinha and Natarajan based on work by Chandy, Misra, and Haas. Various examples are used to show that the original priority based algorithm either fails to detect deadlocks or reports deadlocks which do not exist in many situations. A modified algorithm which eliminates these problems is proposed. This algorithm has been tested through simulation and appears to be error free. 
Finally, the performance of the modified algorithm is briefly discussed.", "title": "" }, { "docid": "225fa1a3576bc8cea237747cb25fc38d", "text": "Common video systems for laparoscopy provide the surgeon a two-dimensional image (2D), where information on spatial depth can be derived only from secondary spatial depth cues and experience. Although the advantage of stereoscopy for surgical task efficiency has been clearly shown, several attempts to introduce three-dimensional (3D) video systems into clinical routine have failed. The aim of this study is to evaluate users’ performances in standardised surgical phantom model tasks using 3D HD visualisation compared with 2D HD regarding precision and working speed. This comparative study uses a 3D HD video system consisting of a dual-channel laparoscope, a stereoscopic camera, a camera controller with two separate outputs and a wavelength multiplex stereoscopic monitor. Each of 20 medical students and 10 laparoscopically experienced surgeons (more than 100 laparoscopic cholecystectomies each) pre-selected in a stereo vision test were asked to perform one task to familiarise themselves with the system and subsequently a set of five standardised tasks encountered in typical surgical procedures. The tasks were performed under either 3D or 2D conditions at random choice and subsequently repeated under the other vision condition. Predefined errors were counted, and time needed was measured. In four of the five tasks the study participants made fewer mistakes in 3D than in 2D vision. In four of the tasks they needed significantly more time in the 2D mode. Both the student group and the surgeon group showed similarly improved performance, while the surgeon group additionally saved more time on difficult tasks. 
This study shows that 3D HD using a state-of-the-art 3D monitor permits superior task efficiency, even as compared with the latest 2D HD video systems.", "title": "" }, { "docid": "13449fab143effbaf5408ce4abcdbeea", "text": "Extractive summarization typically uses sentences as summarization units. In contrast, joint compression and summarization can use smaller units such as words and phrases, resulting in summaries containing more information. The goal of compressive summarization is to find a subset of words that maximizes the total score of concepts and cutting dependency arcs under the grammar constraints and summary length constraint. We propose an efficient decoding algorithm for fast compressive summarization using graph cuts. Our approach first relaxes the length constraint using Lagrangian relaxation. Then we propose to bound the relaxed objective function by the supermodular binary quadratic programming problem, which can be solved efficiently using graph max-flow/min-cut. Since finding the tightest lower bound suffers from local optimality, we use convex relaxation for initialization. Experimental results on the TAC2008 dataset demonstrate our method achieves a competitive ROUGE score and has good readability, while being much faster than the integer linear programming (ILP) method.", "title": "" }, { "docid": "b72f5bfc24139c309c196d80956a2241", "text": "The SIMC method for PID controller tuning (Skogestad 2003) has already found widespread industrial usage in Norway. This chapter gives an updated overview of the method, mainly from a user’s point of view. The basis for the SIMC method is a first-order plus time delay model, and we present a new effective method to obtain the model from a simple closed-loop experiment. 
An important advantage of the SIMC rule is that there is a single tuning parameter (τc) that gives a good balance between the PID parameters (Kc,τI ,τD), and which can be adjusted to get a desired trade-off between performance (“tight” control) and robustness (“smooth” control). Compared to the original paper of Skogestad (2003), the choice of the tuning parameter τc is discussed in more detail, and lower and upper limits are presented for tight and smooth tuning, respectively. Finally, the optimality of the SIMC PI rules is studied by comparing the performance (IAE) versus robustness (Ms) trade-off with the Pareto-optimal curve. The difference is small which leads to the conclusion that the SIMC rules are close to optimal. The only exception is for pure time delay processes, so we introduce the “improved” SIMC rule to improve the performance for this case. Chapter for PID book (planned: Springer, 2011, Editor: R. Vilanova) This version: September 7, 2011 Sigurd Skogestad Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim, e-mail: [email protected] Chriss Grimholt Department of Chemical Engineering, Norwegian University of Science and Technology (NTNU), Trondheim", "title": "" }, { "docid": "91eecde9d0e3b67d7af0194782923ead", "text": "The burden of entry into mobile crowdsensing (MCS) is prohibitively high for human-subject researchers who lack a technical orientation. As a result, the benefits of MCS remain beyond the reach of research communities (e.g., psychologists) whose expertise in the study of human behavior might advance applications and understanding of MCS systems. This paper presents Sensus, a new MCS system for human-subject studies that bridges the gap between human-subject researchers and MCS methods. 
Sensus alleviates technical burdens with on-device, GUI-based design of sensing plans, simple and efficient distribution of sensing plans to study participants, and uniform participant experience across iOS and Android devices. Sensing plans support many hardware and software sensors, automatic deployment of sensor-triggered surveys, and double-blind assignment of participants within randomized controlled trials. Sensus offers these features to study designers without requiring knowledge of markup and programming languages. We demonstrate the feasibility of using Sensus within two human-subject studies, one in psychology and one in engineering. Feedback from non-technical users indicates that Sensus is an effective and low-burden system for MCS-based data collection and analysis.", "title": "" }, { "docid": "b856dcd9db802260ca22e7b426b87afa", "text": "This research seeks to validate a comprehensive model of quality in the context of e-business systems. It also extends the UTAUT model with e-quality, trust, and satisfaction constructs. The proposed model brings together extant research on systems and data quality, trust, and satisfaction and provides an important cluster of antecedents to eventual technology acceptance via constructs of behavioral intention to use and actual system usage.", "title": "" }, { "docid": "369e5fb60d3afc993821159b64bc3560", "text": "For five years, we collected annual snapshots of file-system metadata from over 60,000 Windows PC file systems in a large corporation. In this article, we use these snapshots to study temporal changes in file size, file age, file-type frequency, directory size, namespace structure, file-system population, storage capacity and consumption, and degree of file modification. We present a generative model that explains the namespace structure and the distribution of directory sizes. 
We find significant temporal trends relating to the popularity of certain file types, the origin of file content, the way the namespace is used, and the degree of variation among file systems, as well as more pedestrian changes in size and capacities. We give examples of consequent lessons for designers of file systems and related software.", "title": "" }, { "docid": "b620dd7e1db47db6c37ea3bcd2d83744", "text": "Software failures due to configuration errors are commonplace as computer systems continue to grow larger and more complex. Troubleshooting these configuration errors is a major administration cost, especially in server clusters where problems often go undetected without user interference. This paper presents CODE–a tool that automatically detects software configuration errors. Our approach is based on identifying invariant configuration access rules that predict what access events follow what contexts. It requires no source code, application-specific semantics, or heavyweight program analysis. Using these rules, CODE can sift through a voluminous number of events and detect deviant program executions. This is in contrast to previous approaches that focus on only diagnosis. In our experiments, CODE successfully detected a real configuration error in one of our deployment machines, in addition to 20 user-reported errors that we reproduced in our test environment. When analyzing month-long event logs from both user desktops and production servers, CODE yielded a low false positive rate. The efficiency of CODE makes it feasible to be deployed as a practical management tool with low overhead.", "title": "" }, { "docid": "3ea104489fb5ac5b3e671659f8498530", "text": "In this paper, we present our work on humor recognition on Twitter, which will facilitate affect and sentiment analysis in the social network. 
The central question of what makes a tweet (Twitter post) humorous drives us to design humor-related features, which are derived from influential humor theories, linguistic norms, and affective dimensions. Using machine learning techniques, we are able to recognize humorous tweets with high accuracy and F-measure. More importantly, we single out features that contribute to distinguishing non-humorous tweets from humorous tweets, and humorous tweets from other short humorous texts (non-tweets). This proves that humorous tweets possess discernible characteristics that are neither found in plain tweets nor in humorous non-tweets. We believe our novel findings will inform and inspire the burgeoning field of computational humor research in social media.", "title": "" }, { "docid": "8d24516bda25e60bf68362a88668f675", "text": "Plenoptic cameras, constructed with internal microlens arrays, focus those microlenses at infinity in order to sample the 4D radiance directly at the microlenses. The consequent assumption is that each microlens image is completely defocused with respect to the image created by the main camera lens and the outside object. As a result, only a single pixel in the final image can be rendered from it, resulting in disappointingly low resolution. In this paper, we present a new approach to lightfield capture and image rendering that interprets the microlens array as an imaging system focused on the focal plane of the main camera lens. This approach captures a lightfield with significantly higher spatial resolution than the traditional approach, allowing us to render high resolution images that meet the expectations of modern photographers. 
Although the new approach samples the lightfield with reduced angular density, analysis and experimental results demonstrate that there is sufficient parallax to completely support lightfield manipulation algorithms such as refocusing and novel views", "title": "" }, { "docid": "119215115226e0bd3ee4c2762433aad5", "text": "Super-coiled polymer (SCP) artificial muscles have many attractive properties, such as high energy density, large contractions, and good dynamic range. To fully utilize them for robotic applications, it is necessary to determine how to scale them up effectively. Bundling of SCP actuators, as though they are individual threads in woven textiles, can demonstrate the versatility of SCP actuators and artificial muscles in general. However, this versatility comes with a need to understand how different bundling techniques can be achieved with these actuators and how they may trade off in performance. This letter presents the first quantitative comparison, analysis, and modeling of bundled SCP actuators. By exploiting weaving and braiding techniques, three new types of bundled SCP actuators are created: woven bundles, two-dimensional, and three-dimensional braided bundles. The bundle performance is adjustable by employing different numbers of individual actuators. Experiments are conducted to characterize and compare the force, strain, and speed of different bundles, and a linear model is proposed to predict their performance. This work lays the foundation for model-based SCP-actuated textiles, and physically scaling robots that employ SCP actuators as the driving mechanism.", "title": "" } ]
scidocsrr
8d85a5075e6ae5ee69d0ad8f11759355
Contactless payment systems based on RFID technology
[ { "docid": "8ae12d8ef6e58cb1ac376eb8c11cd15a", "text": "This paper surveys recent technical research on the problems of privacy and security for radio frequency identification (RFID). RFID tags are small, wireless devices that help identify objects and people. Thanks to dropping cost, they are likely to proliferate into the billions in the next several years-and eventually into the trillions. RFID tags track objects in supply chains, and are working their way into the pockets, belongings, and even the bodies of consumers. This survey examines approaches proposed by scientists for privacy protection and integrity assurance in RFID systems, and treats the social and technical context of their work. While geared toward the nonspecialist, the survey may also serve as a reference for specialist readers.", "title": "" } ]
[ { "docid": "ab08118b53dd5eee3579260e8b23a9c5", "text": "We have trained a deep (convolutional) neural network to predict the ground-state energy of an electron in four classes of confining two-dimensional electrostatic potentials. On randomly generated potentials, for which there is no analytic form for either the potential or the ground-state energy, the neural network model was able to predict the ground-state energy to within chemical accuracy, with a median absolute error of 1.49 mHa. We also investigate the performance of the model in predicting other quantities such as the kinetic energy and the first excited-state energy of random potentials. While we demonstrated this approach on a simple, tractable problem, the transferability and excellent performance of the resulting model suggests further applications of deep neural networks to problems of electronic structure.", "title": "" }, { "docid": "d8bb742d4d341a4919132408100fcfa5", "text": "In this study we represent malware as opcode sequences and detect it using a deep belief network (DBN). Compared with traditional shallow neural networks, DBNs can use unlabeled data to pretrain a multi-layer generative model, which can better represent the characteristics of data samples. We compare the performance of DBNs with that of three baseline malware detection models, which use support vector machines, decision trees, and the k-nearest neighbor algorithm as classifiers. The experiments demonstrate that the DBN model provides more accurate detection than the baseline models. When additional unlabeled data are used for DBN pretraining, the DBNs perform better than the other detection models. We also use the DBNs as an autoencoder to extract the feature vectors of executables. 
The experiments indicate that the autoencoder can effectively model the underlying structure of input data and significantly reduce the dimensions of feature vectors.", "title": "" }, { "docid": "f6c7cf332ad766a0f915ddcace8d5a83", "text": "Despite the recent trend of increasingly large datasets for object detection, there still exist many classes with few training examples. To overcome this lack of training data for certain classes, we propose a novel way of augmenting the training data for each class by borrowing and transforming examples from other classes. Our model learns which training instances from other classes to borrow and how to transform the borrowed examples so that they become more similar to instances from the target class. Our experimental results demonstrate that our new object detector, with borrowed and transformed examples, improves upon the current state-of-the-art detector on the challenging SUN09 object detection dataset. Thesis Supervisor: Antonio Torralba Title: Associate Professor of Electrical Engineering and Computer Science", "title": "" }, { "docid": "2a45f4ed21d9534a937129532cb32020", "text": "BACKGROUND\nCore stability training has grown in popularity over 25 years, initially for back pain prevention or therapy. Subsequently, it developed as a mode of exercise training for health, fitness and sport. The scientific basis for traditional core stability exercise has recently been questioned and challenged, especially in relation to dynamic athletic performance. Reviews have called for clarity on what constitutes anatomy and function of the core, especially in healthy and uninjured people. Clinical research suggests that traditional core stability training is inappropriate for development of fitness for heath and sports performance. However, commonly used methods of measuring core stability in research do not reflect functional nature of core stability in uninjured, healthy and athletic populations. 
Recent reviews have proposed a more dynamic, whole body approach to training core stabilization, and research has begun to measure and report the efficacy of these modes of training. The purpose of this study was to assess the extent to which these developments have informed people currently working and participating in sport.\n\n\nMETHODS\nAn online survey questionnaire was developed around common themes on core stability training as defined in the current scientific literature and circulated to a sample population of people working and participating in sport. Survey results were assessed against key elements of the current scientific debate.\n\n\nRESULTS\nPerceptions on anatomy and function of the core were gathered from a representative cohort of athletes, coaches, sports science and sports medicine practitioners (n = 241), along with their views on effectiveness of various current and traditional exercise training modes. The most popular method of testing and measuring core function was subjective assessment through observation (43%), while a quarter (22%) believed there was no effective method of measurement. Perceptions of people in sport reflect the scientific debate, and practitioners have adopted a more functional approach to core stability training. There was strong support for loaded, compound exercises performed upright, compared to moderate support for traditional core stability exercises. Half of the participants (50%) in the survey, however, still support traditional isolation core stability training.\n\n\nCONCLUSION\nPerceptions in applied practice on core stability training for dynamic athletic performance are aligned to a large extent to the scientific literature.", "title": "" }, { "docid": "8860af067ed1af9aba072d85f3e6171b", "text": "In this paper, we introduce a new channel pruning method to accelerate very deep convolutional neural networks. 
Given a trained CNN model, we propose an iterative two-step algorithm to effectively prune each layer, by a LASSO regression based channel selection and least square reconstruction. We further generalize this algorithm to multi-layer and multi-branch cases. Our method reduces the accumulated error and enhances the compatibility with various architectures. Our pruned VGG-16 achieves state-of-the-art results with a 5× speed-up along with only a 0.3% increase in error. More importantly, our method is able to accelerate modern networks like ResNet and Xception, suffering only 1.4% and 1.0% accuracy loss under a 2× speed-up respectively, which is significant.", "title": "" }, { "docid": "bb65f9fec86c2f66b5b61be527b2bdf4", "text": "Success in natural language inference (NLI) should require a model to understand both lexical and compositional semantics. However, through adversarial evaluation, we find that several state-of-the-art models with diverse architectures are over-relying on the former and fail to use the latter. Further, this compositionality unawareness is not reflected via standard evaluation on current datasets. We show that removing RNNs in existing models or shuffling input words during training does not induce large performance loss despite the explicit removal of compositional information. Therefore, we propose a compositionality-sensitivity testing setup that analyzes models on natural examples from existing datasets that cannot be solved via lexical features alone (i.e., on which a bag-of-words model gives a high probability to one wrong label), hence revealing the models’ actual compositionality awareness. We show that this setup not only highlights the limited compositional ability of current NLI models, but also differentiates model performance based on design, e.g., separating shallow bag-of-words models from deeper, linguistically-grounded tree-based models. 
Our evaluation setup is an important analysis tool: complementing currently existing adversarial and linguistically driven diagnostic evaluations, and exposing opportunities for future work on evaluating models’ compositional understanding.", "title": "" }, { "docid": "e61a0ba24db737d42a730d5738583ffa", "text": "We present a logical formalism for expressing properties of continuous time Markov chains. The semantics for such properties arise as a natural extension of previous work on discrete time Markov chains to continuous time. The major result is that the verification problem is decidable; this is shown using results in algebraic and transcendental number theory.", "title": "" }, { "docid": "ad6763de671234eb48b3629c25ab9113", "text": "Photovoltaic (PV) system performance is influenced by several factors, including irradiance, temperature, shading, degradation, mismatch losses, soiling, etc. Shading of a PV array, in particular, either complete or partial, can have a significant impact on its power output and energy yield, depending on array configuration, shading pattern, and the bypass diodes incorporated in the PV modules. In this paper, the effect of partial shading on multicrystalline silicon (mc-Si) PV modules is investigated. A PV module simulation model implemented in P-Spice is first employed to quantify the effect of partial shading on the I-V curve and the maximum power point (MPP) voltage and power. Then, generalized formulae are derived, which permit accurate enough evaluation of MPP voltage and power of mc-Si PV modules, without the need to resort to detailed modeling and simulation. The equations derived are validated via experimental results.", "title": "" }, { "docid": "cf9fe52efd734c536d0a7daaf59a9bcd", "text": "Image-based sequence recognition has been a long-standing research topic in computer vision. 
In this paper, we investigate the problem of scene text recognition, which is among the most important and challenging tasks in image-based sequence recognition. A novel neural network architecture, which integrates feature extraction, sequence modeling and transcription into a unified framework, is proposed. Compared with previous systems for scene text recognition, the proposed architecture possesses four distinctive properties: (1) It is end-to-end trainable, in contrast to most of the existing algorithms whose components are separately trained and tuned. (2) It naturally handles sequences in arbitrary lengths, involving no character segmentation or horizontal scale normalization. (3) It is not confined to any predefined lexicon and achieves remarkable performances in both lexicon-free and lexicon-based scene text recognition tasks. (4) It generates an effective yet much smaller model, which is more practical for real-world application scenarios. The experiments on standard benchmarks, including the IIIT-5K, Street View Text and ICDAR datasets, demonstrate the superiority of the proposed algorithm over the prior arts. Moreover, the proposed algorithm performs well in the task of image-based music score recognition, which evidently verifies the generality of it.", "title": "" }, { "docid": "2e65ae613aa80aac27d5f8f6e00f5d71", "text": "Industrial systems, e.g., wind turbines, generate big amounts of data from reliable sensors with high velocity. As it is unfeasible to store and query such big amounts of data, only simple aggregates are currently stored. However, aggregates remove fluctuations and outliers that can reveal underlying problems and limit the knowledge to be gained from historical data. As a remedy, we present the distributed Time Series Management System (TSMS) ModelarDB that uses models to store sensor data. 
We thus propose an online, adaptive multi-model compression algorithm that maintains data values within a user-defined error bound (possibly zero). We also propose (i) a database schema to store time series as models, (ii) methods to push-down predicates to a key-value store utilizing this schema, (iii) optimized methods to execute aggregate queries on models, (iv) a method to optimize execution of projections through static code-generation, and (v) dynamic extensibility that allows new models to be used without recompiling the TSMS. Further, we present a general modular distributed TSMS architecture and its implementation, ModelarDB, as a portable library, using Apache Spark for query processing and Apache Cassandra for storage. An experimental evaluation shows that, unlike current systems, ModelarDB hits a sweet spot and offers fast ingestion, good compression, and fast, scalable online aggregate query processing at the same time. This is achieved by dynamically adapting to data sets using multiple models. The system degrades gracefully as more outliers occur and the actual errors are much lower than the bounds. PVLDB Reference Format: Søren Kejser Jensen, Torben Bach Pedersen, Christian Thomsen. ModelarDB: Modular Model-Based Time Series Management with Spark and Cassandra. PVLDB, 11(11): 1688-1701, 2018. DOI: https://doi.org/10.14778/3236187.3236215", "title": "" }, { "docid": "c7237823182b47cc03c70937bbbb0be0", "text": "To discover patterns in historical data, climate scientists have applied various clustering methods with the goal of identifying regions that share some common climatological behavior. However, past approaches are limited by the fact that they either consider only a single time period (snapshot) of multivariate data, or they consider only a single variable by using the time series data as a multi-dimensional feature vector. In both cases, potentially useful information may be lost. 
Moreover, clusters in high-dimensional data space can be difficult to interpret, prompting the need for a more effective data representation. We address both of these issues by employing a complex network (graph) to represent climate data, a more intuitive model that can be used for analysis while also having a direct mapping to the physical world for interpretation. A cross correlation function is used to weight network edges, thus respecting the temporal nature of the data, and a community detection algorithm identifies multivariate clusters. Examining networks for consecutive periods allows us to study structural changes over time. We show that communities have a climatological interpretation and that disturbances in structure can be an indicator of climate events (or lack thereof). Finally, we discuss how this model can be applied for the discovery of more complex concepts such as unknown teleconnections or the development of multivariate climate indices and predictive insights.", "title": "" }, { "docid": "552d9591ea3bebb0316fb4111707b3a3", "text": "The long jump has been widely studied in recent years. Two models exist in the literature which define the relationship between selected variables that affect performance. Both models suggest that the critical phase of the long jump event is the touch-down to take-off phase, as it is in this phase that the necessary vertical velocity is generated. Many three dimensional studies of the long jump exist, but the only studies to have reported detailed data on this phase were two-dimensional in nature. In these, the poor relationships obtained between key variables and performance led to the suggestion that there may be some relevant information in data in the third dimension. The aims of this study were to conduct a three-dimensional analysis of the touch-down to take-off phase in the long jump and to explore the interrelationships between key variables. 
Fourteen male long jumpers were filmed using three-dimensional methods during the finals of the 1994 (n = 8) and 1995 (n = 6) UK National Championships. Various key variables for the long jump were used in a series of correlational and multiple regression analyses. The relationships between key variables when correlated directly one-to-one were generally poor. However, when analysed using a multiple regression approach, a series of variables was identified which supported the general principles outlined in the two models. These variables could be interpreted in terms of speed, technique and strength. We concluded that in the long jump, variables that are important to performance are interdependent and can only be identified by using appropriate statistical techniques. This has implications for a better understanding of the long jump event and it is likely that this finding can be generalized to other technical sports skills.", "title": "" }, { "docid": "cc08118c532cbe4665f8a3ac8b7d5fd7", "text": "We evaluated the use of gamification to facilitate a student-centered learning environment within an undergraduate Year 2 Personal and Professional Development (PPD) course. In addition to face-to-face classroom practices, an information technology-based gamified system with a range of online learning activities was presented to students as support material. The implementation of the gamified course lasted two academic terms. The subsequent evaluation from a cohort of 136 students indicated that student performance was significantly higher among those who participated in the gamified system than in those who engaged with the nongamified, traditional delivery, while behavioral engagement in online learning activities was positively related to course performance, after controlling for gender, attendance, and Year 1 PPD performance. 
Two interesting phenomena appeared when we examined the influence of student background: female students participated significantly more in online learning activities than male students, and students with jobs engaged significantly more in online learning activities than students without jobs. The gamified course design advocated in this work may have significant implications for educators who wish to develop engaging technology-mediated learning environments that enhance students’ learning, or for a broader base of professionals who wish to engage a population of potential users, such as managers engaging employees or marketers engaging customers.", "title": "" }, { "docid": "71a262b1c91c89f379527b271e45e86e", "text": "Geospatial object detection from high spatial resolution (HSR) remote sensing imagery is a heated and challenging problem in the field of automatic image interpretation. Despite convolutional neural networks (CNNs) having facilitated the development in this domain, the computation efficiency under real-time application and the accurate positioning on relatively small objects in HSR images are two noticeable obstacles which have largely restricted the performance of detection methods. To tackle the above issues, we first introduce semantic segmentation-aware CNN features to activate the detection feature maps from the lowest level layer. In conjunction with this segmentation branch, another module which consists of several global activation blocks is proposed to enrich the semantic information of feature maps from higher level layers. Then, these two parts are integrated and deployed into the original single shot detection framework. Finally, we use the modified multi-scale feature maps with enriched semantics and multi-task training strategy to achieve end-to-end detection with high efficiency. 
Extensive experiments and comprehensive evaluations on a publicly available 10-class object detection dataset have demonstrated the superiority of the presented method.", "title": "" }, { "docid": "f8fc595f60fda530cc7796dbba83481c", "text": "This paper proposes a pseudo random number generator using an Elman neural network. The proposed neural network is a recurrent neural network able to generate pseudo-random numbers from the weight matrices obtained from the layer weights of the Elman network. The proposed method is not computationally demanding and is easy to implement for varying bit sequences. The random numbers generated using our method have been subjected to a frequency test and the ENT test program. The results show that recurrent neural networks can be used as a pseudo random number generator (PRNG).", "title": "" }, { "docid": "2e7513624eed605a4e0da539162dd715", "text": "In the domain of Internet of Things (IoT), applications are modeled to understand and react based on existing contextual and situational parameters. This work implements a management flow for the abstraction of real world objects and virtual composition of those objects to provide IoT services. We also present a real world knowledge model that aggregates constraints defining a situation, which is then used to detect and anticipate future potential situations. It is implemented based on reasoning and machine learning mechanisms. This work showcases a prototype implementation of the architectural framework in a smart home scenario, targeting two functionalities: actuation and automation based on the imposed constraints and thereby responding to situations and also adapting to the user preferences.
It thus provides a productive integration of heterogeneous devices, IoT platforms, and cognitive technologies to improve the services provided to the user.", "title": "" }, { "docid": "d09f433d8b9776e45fd3a9516cde004d", "text": "The review focuses on one growing dimension of health care globalisation - medical tourism, whereby consumers elect to travel across borders or to overseas destinations to receive their treatment. Such treatments include cosmetic and dental surgery; cardio, orthopaedic and bariatric surgery; IVF treatment; and organ and tissue transplantation. The review sought to identify the medical tourist literature for out-of-pocket payments, focusing wherever possible on evidence and experience pertaining to patients in mid-life and beyond. Despite increasing media interest and coverage, hard empirical findings pertaining to out-of-pocket medical tourism are rare. Despite a number of countries offering relatively low-cost treatments, we know very little about many of the numbers and key indicators on medical tourism. The narrative review traverses discussion on medical tourist markets, consumer choice, clinical outcomes, quality and safety, and ethical and legal dimensions. The narrative review draws attention to gaps in research evidence and strengthens the call for more empirical research on the role, process and outcomes of medical tourism. In concluding it makes suggestions for the content of such a strategy.", "title": "" }, { "docid": "0fac1fde74f99bd6b4e9338f54ec41d6", "text": "This thesis addresses total variation (TV) image restoration and blind image deconvolution. Classical image processing problems, such as deblurring, call for some kind of regularization. Total variation is among the state-of-the-art regularizers, as it provides a good balance between the ability to describe piecewise smooth images and the complexity of the resulting algorithms.
In this thesis, we propose a minimization algorithm for TV-based image restoration that belongs to the majorization-minimization class (MM). The proposed algorithm is similar to the known iteratively re-weighted least squares (IRLS) approach, although it constitutes an original interpretation of this method from the MM perspective. The problem of choosing the regularization parameter is also addressed in this thesis. A new Bayesian method is introduced to automatically estimate the parameter, by assigning it a non-informative prior, followed by integration based on an approximation of the associated partition function. The proposed minimization problem, also addressed using the MM framework, results in an update rule for the regularization parameter, and can be used with any TV-based image deblurring algorithm. Blind image deconvolution is the third topic of this thesis. We consider the case of linear motion blurs. We propose a new discretization of the motion blur kernel, and a new estimation algorithm to recover the motion blur parameters (orientation and length) from blurred natural images, based on the Radon transform of the spectrum of the blurred images.", "title": "" }, { "docid": "fbd390ed58529fc5dc552d7550168546", "text": "Recently, tuple-stores have become pivotal structures in many information systems. Their ability to handle large datasets makes them important in an era with unprecedented amounts of data being produced and exchanged. However, these tuple-stores typically rely on structured peer-to-peer protocols which assume moderately stable environments. Such an assumption does not always hold for very large-scale systems on the scale of thousands of machines. In this paper we present a novel approach to the design of a tuple-store. Our approach follows a stratified design based on an unstructured substrate. We focus on this substrate and how the use of epidemic protocols allows reaching high dependability and scalability.", "title": "" } ]
scidocsrr
981451d8cc78bac714c568f7e27729a1
Lossy Image Compression with Compressive Autoencoders
[ { "docid": "e2009f56982f709671dcfe43048a8919", "text": "Probabilistic generative models can be used for compression, denoising, inpainting, texture synthesis, semi-supervised learning, unsupervised feature learning, and other tasks. Given this wide range of applications, it is not surprising that a lot of heterogeneity exists in the way these models are formulated, trained, and evaluated. As a consequence, direct comparison between models is often difficult. This article reviews mostly known but often underappreciated properties relating to the evaluation and interpretation of generative models with a focus on image models. In particular, we show that three of the currently most commonly used criteria—average log-likelihood, Parzen window estimates, and visual fidelity of samples—are largely independent of each other when the data is high-dimensional. Good performance with respect to one criterion therefore need not imply good performance with respect to the other criteria. Our results show that extrapolation from one criterion to another is not warranted and generative models need to be evaluated directly with respect to the application(s) they were intended for. In addition, we provide examples demonstrating that Parzen window estimates should generally be avoided.", "title": "" }, { "docid": "9ece98aee7056ff6c686c12bcdd41d31", "text": "Modeling the distribution of natural images is challenging, partly because of strong statistical dependencies which can extend over hundreds of pixels. Recurrent neural networks have been successful in capturing long-range dependencies in a number of problems but only recently have found their way into generative image models. We here introduce a recurrent image model based on multidimensional long short-term memory units which are particularly suited for image modeling due to their spatial structure. Our model scales to images of arbitrary size and its likelihood is computationally tractable. 
We find that it outperforms the state of the art in quantitative comparisons on several image datasets and produces promising results when used for texture synthesis and inpainting.", "title": "" } ]
[ { "docid": "45c6d576e6c8e1dbd731126c4fb36b62", "text": "Marine debris is listed among the major perceived threats to biodiversity, and is cause for particular concern due to its abundance, durability and persistence in the marine environment. An extensive literature search reviewed the current state of knowledge on the effects of marine debris on marine organisms. 340 original publications reported encounters between organisms and marine debris, involving 693 species. Plastic debris accounted for 92% of encounters between debris and individuals. Numerous direct and indirect consequences were recorded, with the potential for sublethal effects of ingestion an area of considerable uncertainty and concern. Comparison to the IUCN Red List highlighted that at least 17% of species affected by entanglement and ingestion were listed as threatened or near threatened. Hence where marine debris combines with other anthropogenic stressors it may affect populations, trophic interactions and assemblages.", "title": "" }, { "docid": "9081cb169f74b90672f84afa526f40b3", "text": "The paper presents an analysis of the main mechanisms of decryption of SSL/TLS traffic. Methods and technologies for detecting malicious activity in encrypted traffic that are used by leading companies are also considered. Also, an approach for intercepting and decrypting traffic transmitted over SSL/TLS is developed, tested and proposed. The developed approach has been automated and can be used for remote listening of the network, which allows transmitted data to be decrypted in a mode close to real time.", "title": "" }, { "docid": "a4c17b823d325ed5f339f78cd4d1e9ab", "text": "A 34–40 GHz VCO fabricated in 65 nm digital CMOS technology is demonstrated in this paper. The VCO uses a combination of switched capacitors and varactors for tuning and has a maximum Kvco of 240 MHz/V.
It exhibits a phase noise of better than −98 dBc/Hz @ 1-MHz offset across the band while consuming 12 mA from a 1.2-V supply, an FOMT of −182.1 dBc/Hz. A cascode buffer following the VCO consumes 11 mA to deliver a 0 dBm LO signal to a 50Ω load.", "title": "" }, { "docid": "371ab49af58c0eb4dc55f3fdf1c741f0", "text": "Reinforcement learning has shown promise in learning policies that can solve complex problems. However, manually specifying a good reward function can be difficult, especially for intricate tasks. Inverse reinforcement learning offers a useful paradigm to learn the underlying reward function directly from expert demonstrations. Yet in reality, the corpus of demonstrations may contain trajectories arising from a diverse set of underlying reward functions rather than a single one. Thus, in inverse reinforcement learning, it is useful to consider such a decomposition. The options framework in reinforcement learning is specifically designed to decompose policies in a similar light. We therefore extend the options framework and propose a method to simultaneously recover reward options in addition to policy options. We leverage adversarial methods to learn joint reward-policy options using only observed expert states. We show that this approach works well in both simple and complex continuous control tasks and shows significant performance increases in one-shot transfer learning.", "title": "" }, { "docid": "4915acc826761f950783d9d4206857c0", "text": "The cognitive modulation of pain is influenced by a number of factors ranging from attention, beliefs, conditioning, expectations, mood, and the regulation of emotional responses to noxious sensory events. Recently, mindfulness meditation has been found to attenuate pain through some of these mechanisms including enhanced cognitive and emotional control, as well as altering the contextual evaluation of sensory events.
This review discusses the brain mechanisms involved in mindfulness meditation-related pain relief across different meditative techniques, expertise and training levels, experimental procedures, and neuroimaging methodologies. Converging lines of neuroimaging evidence reveal that mindfulness meditation-related pain relief is associated with unique appraisal cognitive processes depending on expertise level and meditation tradition. Moreover, it is postulated that mindfulness meditation-related pain relief may share a common final pathway with other cognitive techniques in the modulation of pain.", "title": "" }, { "docid": "afe1711ee0fbd412f0b425c488f46fbc", "text": "The Iterated Prisoner’s Dilemma has guided research on social dilemmas for decades. However, it distinguishes between only two atomic actions: cooperate and defect. In real world prisoner’s dilemmas, these choices are temporally extended and different strategies may correspond to sequences of actions, reflecting grades of cooperation. We introduce a Sequential Prisoner’s Dilemma (SPD) game to better capture the aforementioned characteristics. In this work, we propose a deep multiagent reinforcement learning approach that investigates the evolution of mutual cooperation in SPD games. Our approach consists of two phases. The first phase is offline: it synthesizes policies with different cooperation degrees and then trains a cooperation degree detection network. The second phase is online: an agent adaptively selects its policy based on the detected degree of opponent cooperation. The effectiveness of our approach is demonstrated in two representative SPD 2D games: the Apple-Pear game and the Fruit Gathering game. 
Experimental results show that our strategy can avoid being exploited by exploitative opponents and achieve cooperation with cooperative opponents.", "title": "" }, { "docid": "0206cbec556e66fd19aa42c610cdccfa", "text": "The adoption of the General Data Protection Regulation (GDPR) is a major concern for data controllers of the public and private sector, as they are obliged to conform to the new principles and requirements for managing personal data. In this paper, we propose that the data controllers adopt the concept of the Privacy Level Agreement. We present a metamodel for PLAs to support privacy management, based on analysis of privacy threats, vulnerabilities and trust relationships in their Information Systems, whilst complying with laws and regulations, and we illustrate the relevance of the metamodel with the GDPR.", "title": "" }, { "docid": "e7a86eeb576d4aca3b5e98dc53fcb52d", "text": "Dictionary methods for cross-language information retrieval give performance below that for mono-lingual retrieval. Failure to translate multi-term phrases has been shown to be one of the factors responsible for the errors associated with dictionary methods. First, we study the importance of phrasal translation for this approach. Second, we explore the role of phrases in query expansion via local context analysis and local feedback and show how they can be used to significantly reduce the error associated with automatic dictionary translation.", "title": "" }, { "docid": "85c360e0354e5eab69dc26b7a2dd715e", "text": "Department of Information Technology, Matoshri College of Engineering & Research Centre, Eklahare, Nashik, India. Waste management is one of the primary problems that the world faces, irrespective of whether a country is developed or developing.
The key issue in waste management is that public garbage bins overflow well before the commencement of the next cleaning cycle. This in turn leads to hazards such as bad odor and ugliness at the site, which may become the root cause for the spread of various diseases. To avoid such hazardous scenarios and maintain public cleanliness and health, this work builds a smart garbage system. The main theme of the work is to develop a smart, intelligent garbage alert system for proper garbage management. This paper proposes a smart alert system for garbage clearance that sends an alert signal to the municipal web server for instant cleaning of the dustbin, with proper verification based on the level of garbage filling. The process is aided by an ultrasonic sensor interfaced with an Arduino UNO, which checks the level of garbage in the dustbin and sends an alert to the municipal web server once the bin is full. After cleaning the dustbin, the driver confirms the task of emptying the garbage with the aid of an RFID tag. RFID is a computing technology used for the verification process; in addition, it enhances the smart garbage alert system by providing automatic identification of the garbage filled in the dustbin and sending the clean-up status to the server, affirming that the work is done. The whole process is upheld by an embedded module integrating RFID and IoT facilities. The real-time status of waste collection can be monitored and followed up by the municipal authority with the aid of this system, and the necessary remedial or alternative measures can be adopted. An Android application is developed and linked to a web server to communicate the alerts from the microcontroller to the urban office and to perform remote monitoring of the cleaning process done by the workers, thereby reducing the manual effort of monitoring and verification.
The notifications are sent to the Android application using a Wi-Fi module.", "title": "" }, { "docid": "5aab6cd36899f3d5e3c93cf166563a3e", "text": "Vein images generally appear darker with low contrast, which requires contrast enhancement during preprocessing to design a satisfactory hand vein recognition system. However, the modification introduced by contrast enhancement (CE) is reported to bring side effects through pixel intensity distribution adjustments. Furthermore, the inevitable results of fake vein generation or information loss occur and make nearly all vein recognition systems unconvincing. In this paper, a “CE-free” quality-specific vein recognition system is proposed, and three improvements are involved. First, a high-quality lab-vein capturing device is designed to solve the problem of low contrast from the view of hardware improvement. Then, a high-quality lab-made database is established. Second, the CFISH score, a fast and effective measurement for vein image quality evaluation, is proposed to obtain a quality index of lab-made vein images. Then, unsupervised $K$-means with optimized initialization and convergence condition is designed with the quality index to obtain the grouping results of the database, namely, low quality (LQ) and high quality (HQ). Finally, discriminative local binary pattern (DLBP) is adopted as the basis for feature extraction. For the HQ image, DLBP is adopted directly for feature extraction, and for the LQ one, CE_DLBP is utilized for discriminative feature extraction. Based on the lab-made database, rigorous experiments are conducted to demonstrate the effectiveness and feasibility of the proposed system. What is more, an additional experiment with the PolyU database illustrates its generalization ability and robustness.", "title": "" }, { "docid": "83102f60343312aa0cc510550c196ae3", "text": "A method for the on-line calibration of a circuit board trace resistance at the output of a buck converter is described.
The input current is measured with a precision resistor and processed to obtain a dc reference for the output current. The voltage drop across a trace resistance at the output is amplified with a gain that is adaptively adjusted to match the dc reference. This method is applied to obtain an accurate and high-bandwidth measurement of the load current in the modern microprocessor voltage regulator application (VRM), thus enabling an accurate dc load-line regulation as well as a fast transient response. Experimental results show an accuracy well within the tolerance band of this application, and exceeding all other popular methods.", "title": "" }, { "docid": "c934f44f485f41676dfed35afbf2d1f2", "text": "Many icon taxonomy systems have been developed by researchers that organise icons based on their graphic elements. Most of these taxonomies classify icons according to how abstract or concrete they are. Categories however overlap and different researchers use different terminology, sometimes to describe what in essence is the same thing. This paper describes nine taxonomies and compares the terminologies they use. Aware of the lack of icon taxonomy systems in the field of icon design, the authors provide an overview of icon taxonomy and develop an icon taxonomy system that could bring practical benefits to the performance of computer related tasks.", "title": "" }, { "docid": "de364eb64d2377c278cd71d98c2c0729", "text": "In recent years methods of data analysis for point processes have received some attention, for example, by Cox & Lewis (1966) and Lewis (1964). In particular Bartlett (1963a, b) has introduced methods of analysis based on the point spectrum. Theoretical models are relatively sparse. In this paper the theoretical properties of a class of processes with particular reference to the point spectrum or corresponding covariance density functions are discussed. 
A particular result is a self-exciting process with the same second-order properties as a certain doubly stochastic process. These are not distinguishable by methods of data analysis based on these properties.", "title": "" }, { "docid": "e625c5dc123f0b1e7394c4bae47f7cd8", "text": "Interconnected embedded devices are increasingly used in various scenarios, including industrial control, building automation, or emergency communication. As these systems commonly process sensitive information or perform safety critical tasks, they become appealing targets for cyber attacks. A promising technique to remotely verify the safe and secure operation of networked embedded devices is remote attestation. However, existing attestation protocols only protect against software attacks or show very limited scalability. In this paper, we present the first scalable attestation protocol for interconnected embedded devices that is resilient to physical attacks. Based on the assumption that physical attacks require an adversary to capture and disable devices for some time, our protocol identifies devices with compromised hardware and software. Compared to existing solutions, our protocol reduces communication complexity and runtimes by orders of magnitude, precisely identifies compromised devices, supports highly dynamic and partitioned network topologies, and is robust against failures. We show the security of our protocol and evaluate it in static as well as dynamic network topologies. 
Our results demonstrate that our protocol is highly efficient in well-connected networks and robust to network disruptions.", "title": "" }, { "docid": "36a0bdd558de66f1126bbaea287d882a", "text": "BACKGROUND\nThe aim of this paper was to summarise the anatomical knowledge on the subject of the maxillary nerve and its branches, and to show the clinical usefulness of such information in producing anaesthesia in the region of the maxilla.\n\n\nMATERIALS AND METHODS\nA literature search was performed in Pubmed, Scopus, Web of Science and Google Scholar databases, including studies published up to June 2014, with no lower data limit.\n\n\nRESULTS\nThe maxillary nerve (V2) is the middle-sized branch of the trigeminal nerve - the largest of the cranial nerves. The V2 is a purely sensory nerve supplying the maxillary teeth and gingiva, the adjoining part of the cheek, hard and soft palate mucosa, pharynx, nose, dura mater, skin of temple, face, lower eyelid and conjunctiva, upper lip, labial glands, oral mucosa, mucosa of the maxillary sinus, as well as the mobile part of the nasal septum. The branches of the maxillary nerve can be divided into four groups depending on the place of origin i.e. in the cranium, in the sphenopalatine fossa, in the infraorbital canal, and on the face.\n\n\nCONCLUSIONS\nThis review summarises the data on the anatomy and variations of the maxillary nerve and its branches. A thorough understanding of the anatomy will allow for careful planning and execution of anaesthesiological and surgical procedures involving the maxillary nerve and its branches.", "title": "" }, { "docid": "5515e892363c3683e39c6d5ec4abe22d", "text": "Government agencies are investing a considerable amount of resources into improving security systems as a result of recent terrorist events that dangerously exposed flaws and weaknesses in today’s safety mechanisms. Badge or password-based authentication procedures are too easy to hack.
Biometrics represents a valid alternative but it suffers from drawbacks as well. Iris scanning, for example, is very reliable but too intrusive; fingerprints are socially accepted, but not applicable to non-consenting people. On the other hand, face recognition represents a good compromise between what’s socially acceptable and what’s reliable, even when operating under controlled conditions. In the last decade, many algorithms based on linear/nonlinear methods, neural networks, wavelets, etc. have been proposed. Nevertheless, the Face Recognition Vendor Test 2002 showed that most of these approaches encountered problems in outdoor conditions. This lowered their reliability compared to state-of-the-art biometrics. This paper provides an “ex cursus” of recent face recognition research trends in 2D imagery and 3D model based algorithms. To simplify comparisons across different approaches, tables containing different collections of parameters (such as input size, recognition rate, number of addressed problems) are provided. This paper concludes by proposing possible future directions. © 2007 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "815e0ad06fdc450aa9ba3f56ab19ab05", "text": "A member of the Liliaceae family, garlic (Allium sativum) is highly regarded throughout the world for both its medicinal and culinary value. Early men of medicine such as Hippocrates, Pliny and Aristotle encouraged a number of therapeutic uses for this botanical. Today, it is commonly used in many cultures as a seasoning or spice. Garlic also stands as the second most utilized supplement. With its sulfur-containing compounds, high trace mineral content, and enzymes, garlic has shown anti-viral, anti-bacterial, anti-fungal and antioxidant abilities.
Diseases that may be helped or prevented by garlic’s medicinal actions include Alzheimer’s Disease, cancer, cardiovascular disease (including atherosclerosis, strokes, hypertension, thrombosis and hyperlipidemias), children’s conditions, dermatologic applications, stress, and infections. Some research points to possible benefits in diabetes, drug toxicity, and osteoporosis.", "title": "" }, { "docid": "ff8cc7166b887990daa6ef355695e54f", "text": "The knowledge-based theory of the firm suggests that knowledge is the organizational asset that enables sustainable competitive advantage in hypercompetitive environments. The emphasis on knowledge in today’s organizations is based on the assumption that barriers to the transfer and replication of knowledge endow it with strategic importance. Many organizations are developing information systems designed specifically to facilitate the sharing and integration of knowledge. Such systems are referred to as Knowledge Management Systems (KMS). Because KMS are just beginning to appear in organizations, little research and field data exist to guide the development and implementation of such systems or to guide expectations of the potential benefits of such systems.
This study provides an analysis of current practices and outcomes of KMS and the nature of KMS as they are evolving in fifty organizations. The findings suggest that interest in KMS across a variety of industries is very high, the technological foundations are varied, and the major", "title": "" }, { "docid": "793cd937ea1fc91e73735b2b8246f1f5", "text": "Using data from a national probability sample of heterosexual U.S. adults (N = 2,281), the present study describes the distribution and correlates of men’s and women’s attitudes toward transgender people. Feeling thermometer ratings of transgender people were strongly correlated with attitudes toward gay men, lesbians, and bisexuals, but were significantly less favorable. Attitudes toward transgender people were more negative among heterosexual men than women. Negative attitudes were associated with endorsement of a binary conception of gender; higher levels of psychological authoritarianism, political conservatism, and anti-egalitarianism, and (for women) religiosity; and lack of personal contact with sexual minorities. In regression analysis, sexual prejudice accounted for much of the variance in transgender attitudes, but respondent gender, educational level, authoritarianism, anti-egalitarianism, and (for women) religiosity remained significant predictors with sexual prejudice statistically controlled. Implications and directions for future research on attitudes toward transgender people are discussed.", "title": "" } ]
scidocsrr
29fb6d39a7bbf4fceac6f6b6d3a18387
Designing the digital workplace of the future - what scholars recommend to practitioners
[ { "docid": "5c96222feacb0454d353dcaa1f70fb83", "text": "Geographically dispersed teams are rarely 100% dispersed. However, by focusing on teams that are either fully dispersed or fully co-located, team research to date has lived on the ends of a spectrum at which relatively few teams may actually work. In this paper, we develop a more robust view of geographic dispersion in teams. Specifically, we focus on the spatialtemporal distances among team members and the configuration of team members across sites (independent of the spatial and temporal distances separating those sites). To better understand the nature of dispersion, we develop a series of five new measures and explore their relationships with communication frequency data from a sample of 182 teams (of varying degrees of dispersion) from a Fortune 500 telecommunications firm. We conclude with recommendations regarding the use of different measures and important questions that they could help address. Geographic Dispersion in Teams 1", "title": "" } ]
[ { "docid": "1a3cad2f10dd5c6a5aacb3676ca8917a", "text": "BACKGROUND\nRecent findings suggest that the mental health costs of unemployment are related to both short- and long-term mental health scars. The main policy tools for dealing with young people at risk of labor market exclusion are Active Labor Market Policy programs for youths (youth programs). There has been little research on the potential effects of participation in youth programs on mental health and even less on whether participation in such programs alleviates the long-term mental health scarring caused by unemployment. This study compares exposure to open youth unemployment and exposure to youth program participation between ages 18 and 21 in relation to adult internalized mental health immediately after the end of the exposure period at age 21 and two decades later at age 43.\n\n\nMETHODS\nThe study uses a five wave Swedish 27-year prospective cohort study consisting of all graduates from compulsory school in an industrial town in Sweden initiated in 1981. Of the original 1083 participants 94.3% of those alive were still participating at the 27-year follow up. Exposure to open unemployment and youth programs were measured between ages 18-21. Mental health, indicated through an ordinal level three item composite index of internalized mental health symptoms (IMHS), was measured pre-exposure at age 16 and post exposure at ages 21 and 42. Ordinal regressions of internalized mental health at ages 21 and 43 were performed using the Polytomous Universal Model (PLUM). Models were controlled for pre-exposure internalized mental health as well as other available confounders.\n\n\nRESULTS\nResults show strong and significant relationships between exposure to open youth unemployment and IMHS at age 21 (OR = 2.48, CI = 1.57-3.60) as well as at age 43 (OR = 1.71, CI = 1.20-2.43). 
No such significant relationship is observed for exposure to youth programs at age 21 (OR = 0.95, CI = 0.72-1.26) or at age 43 (OR = 1.23, CI = 0.93-1.63).\n\n\nCONCLUSIONS\nA considered and consistent active labor market policy directed at youths could potentially reduce the short- and long-term mental health costs of youth unemployment.", "title": "" }, { "docid": "e206ea46c20fb0ceb03ad8b535eadebc", "text": "Recently, increasing attention has been directed to the study of the speech emotion recognition, in which global acoustic features of an utterance are mostly used to eliminate the content differences. However, the expression of speech emotion is a dynamic process, which is reflected through dynamic durations, energies, and some other prosodic information when one speaks. In this paper, a novel local dynamic pitch probability distribution feature, which is obtained by drawing the histogram, is proposed to improve the accuracy of speech emotion recognition. Compared with most of the previous works using global features, the proposed method takes advantage of the local dynamic information conveyed by the emotional speech. Several experiments on Berlin Database of Emotional Speech are conducted to verify the effectiveness of the proposed method. The experimental results demonstrate that the local dynamic information obtained with the proposed method is more effective for speech emotion recognition than the traditional global features.", "title": "" }, { "docid": "65ed76ddd6f7fd0aea717d2e2643dd16", "text": "In semi-supervised learning, a number of labeled examples are usually required for training an initial weakly useful predictor which is in turn used for exploiting the unlabeled examples. However, in many real-world applications there may exist very few labeled training examples, which makes the weakly useful predictor difficult to generate, and therefore these semisupervised learning methods cannot be applied. 
This paper proposes a method working under a two-view setting. By taking advantage of the correlations between the views using canonical component analysis, the proposed method can perform semi-supervised learning with only one labeled training example. Experiments and an application to content-based image retrieval validate the effectiveness of the proposed method.", "title": "" }, { "docid": "2a3f5f621195c036064e3d8c0b9fc884", "text": "This paper describes our system for the CoNLL 2016 Shared Task’s supplementary task on Discourse Relation Sense Classification. Our official submission employs a Logistic Regression classifier with several cross-argument similarity features based on word embeddings and performs with overall F-scores of 64.13 for the Dev set, 63.31 for the Test set and 54.69 for the Blind set, ranking first in the Overall ranking for the task. We compare the feature-based Logistic Regression classifier to different Convolutional Neural Network architectures. After the official submission we enriched our model for Non-Explicit relations by including similarities of explicit connectives with the relation arguments, and part of speech similarities based on modal verbs. This improved our Non-Explicit result by 1.46 points on the Dev set and by 0.36 points on the Blind set.", "title": "" }, { "docid": "80fed8845ca14843855383d714600960", "text": "In this paper, a methodology is developed to use data acquisition derived from condition monitoring and standard diagnosis for rehabilitation purposes of transformers. The interpretation and understanding of the test data are obtained from international test standards to determine the current condition of transformers. In an attempt to ascertain monitoring priorities, the effective test methods are selected for transformer diagnosis. 
In particular, the standardization of diagnostic and analytical techniques is being improved, which will enable field personnel to use the test results more easily and will reduce the need for interpretation by experts. In addition, the advanced method has the potential to greatly reduce the time and increase the accuracy of diagnostics. The important aim of the standardization is to develop multiple diagnostic models that combine results from the different tests and give an overall assessment of reliability and maintenance for transformers.", "title": "" }, { "docid": "7355bf66dac6e027c1d6b4c2631d8780", "text": "Cannabidiol is a component of marijuana that does not activate cannabinoid receptors, but moderately inhibits the degradation of the endocannabinoid anandamide. We previously reported that an elevation of anandamide levels in cerebrospinal fluid inversely correlated to psychotic symptoms. Furthermore, enhanced anandamide signaling led to a lower transition rate from initial prodromal states into frank psychosis as well as postponed transition. In our translational approach, we performed a double-blind, randomized clinical trial of cannabidiol vs amisulpride, a potent antipsychotic, in acute schizophrenia to evaluate the clinical relevance of our initial findings. Either treatment was safe and led to significant clinical improvement, but cannabidiol displayed a markedly superior side-effect profile. Moreover, cannabidiol treatment was accompanied by a significant increase in serum anandamide levels, which was significantly associated with clinical improvement. The results suggest that inhibition of anandamide deactivation may contribute to the antipsychotic effects of cannabidiol, potentially representing a completely new mechanism in the treatment of schizophrenia.", "title": "" }, { "docid": "bdfb3a761d7d9dbb96fa4f07bc2c1f89", "text": "We present an algorithm for recognition and reconstruction of scanned 3D indoor scenes. 
3D indoor reconstruction is particularly challenging due to object interferences, occlusions and overlapping which yield incomplete yet very complex scene arrangements. Since it is hard to assemble scanned segments into complete models, traditional methods for object recognition and reconstruction would be inefficient. We present a search-classify approach which interleaves segmentation and classification in an iterative manner. Using a robust classifier we traverse the scene and gradually propagate classification information. We reinforce classification by a template fitting step which yields a scene reconstruction. We deform-to-fit templates to classified objects to resolve classification ambiguities. The resulting reconstruction is an approximation which captures the general scene arrangement. Our results demonstrate successful classification and reconstruction of cluttered indoor scenes, captured in just a few minutes.", "title": "" }, { "docid": "036ac7fc6886f1f7d1734be18a11951f", "text": "Often the challenge associated with tasks like fraud and spam detection is the lack of all likely patterns needed to train suitable supervised learning models. This problem is accentuated when the fraudulent patterns are not only scarce but also change over time. Fraudulent patterns change because fraudsters continue to innovate novel ways to circumvent measures put in place to prevent fraud. Limited data and continuously changing patterns make learning significantly difficult. We hypothesize that good behavior does not change with time and that data points representing good behavior have a consistent spatial signature under different groupings. Based on this hypothesis we propose an approach that detects outliers in large data sets by assigning a consistency score to each data point using an ensemble of clustering methods. Our main contribution is a novel method that can detect outliers in large datasets and is robust to changing patterns. 
We also argue that area under the ROC curve, although a commonly used metric to evaluate outlier detection methods, is not the right metric. Since outlier detection problems have a skewed distribution of classes, precision-recall curves are better suited because precision compares false positives to true positives (outliers) rather than true negatives (inliers) and therefore is not affected by the problem of class imbalance. We show empirically that area under the precision-recall curve is better than ROC as an evaluation metric. The proposed approach is tested on the modified version of the Landsat satellite dataset, the modified version of the ann-thyroid dataset and a large real world credit card fraud detection dataset available through Kaggle, where we show significant improvement over the baseline methods.", "title": "" }, { "docid": "02c41dae589c89c297977d48b90f7218", "text": "Recently new data center topologies have been proposed that offer higher aggregate bandwidth and location independence by creating multiple paths in the core of the network. To effectively use this bandwidth requires ensuring different flows take different paths, which poses a challenge.\n Plainly put, there is a mismatch between single-path transport and the multitude of available network paths. We propose a natural evolution of data center transport from TCP to multipath TCP. We show that multipath TCP can effectively and seamlessly use available bandwidth, providing improved throughput and better fairness in these new topologies when compared to single path TCP and randomized flow-level load balancing. We also show that multipath TCP outperforms laggy centralized flow scheduling without needing centralized control or additional infrastructure.", "title": "" }, { "docid": "b783e3a8b9aaec7114603bafffcb5bfd", "text": "Smallholder agriculture has long served as the dominant economic activity for people in sub-Saharan Africa, and it will remain enormously important for the foreseeable future. But the size of the sector does not necessarily imply that investments in the smallholder sector will yield high social benefits in comparison to other possible uses of development resources. Large changes could potentially affect the viability of smallholder systems, emanating from shifts in technology, markets, climate and the global environment. 
The priorities for development policy will vary across and within countries due to the highly heterogeneous nature of the smallholder sector.", "title": "" }, { "docid": "ff952443eef41fb430ff2831b5ee33d5", "text": "The increasing activity in the Intelligent Transportation Systems (ITS) area faces a strong limitation: the slow pace at which the automotive industry is making cars \"smarter\". On the contrary, the smartphone industry is advancing quickly. Existing smartphones are endowed with multiple wireless interfaces and high computational power, being able to perform a wide variety of tasks. By combining smartphones with existing vehicles through an appropriate interface we are able to move closer to the smart vehicle paradigm, offering the user new functionalities and services when driving. In this paper we propose an Android-based application that monitors the vehicle through an On Board Diagnostics (OBD-II) interface, being able to detect accidents. Our proposed application estimates the G force experienced by the passengers in case of a frontal collision, which is used together with airbag triggers to detect accidents. The application reacts to positive detection by sending details about the accident through either e-mail or SMS to pre-defined destinations, immediately followed by an automatic phone call to the emergency services. Experimental results using a real vehicle show that the application is able to react to accident events in less than 3 seconds, a very low time, validating the feasibility of smartphone based solutions for improving safety on the road.", "title": "" }, { "docid": "d141c13cea52e72bb7b84d3546496afb", "text": "A number of resource-intensive applications, such as augmented reality, natural language processing, object recognition, and multimedia-based software are pushing the computational and energy boundaries of smartphones. 
Cloud-based services augment the resource-scarce capabilities of smartphones while offloading compute-intensive methods to resource-rich cloud servers. The amalgam of cloud and mobile computing technologies has ushered in the rise of the Mobile Cloud Computing (MCC) paradigm, which envisions operating smartphones and modern mobile devices beyond their intrinsic capabilities. System virtualization, application virtualization, and dynamic binary translation (DBT) techniques are required to address the heterogeneity of smartphone and cloud architectures. However, most of the current research work has only focused on the offloading of virtualized applications while giving limited consideration to native code offloading. Moreover, researchers have not attended to the requirements of multimedia based applications in MCC offloading frameworks. In this study, we present a survey and taxonomy of state-of-the-art MCC frameworks, DBT techniques for native offloading, and cross-platform execution techniques for multimedia based applications. We survey the MCC frameworks from the perspective of offload enabling techniques. We focus on native code offloading frameworks and analyze the DBT and emulation techniques of smartphones (ARM) on cloud server (x86) architectures. Furthermore, we debate the open research issues and challenges to native offloading of multimedia based smartphone applications.", "title": "" }, { "docid": "d269ebe2bc6ab4dcaaac3f603037b846", "text": "The contribution of power production by photovoltaic (PV) systems to the electricity supply is constantly increasing. An efficient use of the fluctuating solar power production will highly benefit from forecast information on the expected power production. This forecast information is necessary for the management of the electricity grids and for solar energy trading. 
This paper presents an approach to predict regional PV power output based on forecasts up to three days ahead provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). Focus of the paper is the description and evaluation of the approach of irradiance forecasting, which is the basis for PV power prediction. One day-ahead irradiance forecasts for single stations in Germany show a rRMSE of 36%. For regional forecasts, forecast accuracy is increasing in dependency on the size of the region. For the complete area of Germany, the rRMSE amounts to 13%. Besides the forecast accuracy, also the specification of the forecast uncertainty is an important issue for an effective application. We present and evaluate an approach to derive weather specific prediction intervals for irradiance forecasts. The accuracy of PV power prediction is investigated in a case study.", "title": "" }, { "docid": "f3e219c14f495762a2a6ced94708a477", "text": "We present novel empirical observations regarding how stochastic gradient descent (SGD) navigates the loss landscape of over-parametrized deep neural networks (DNNs). These observations expose the qualitatively different roles of learning rate and batch-size in DNN optimization and generalization. Specifically we study the DNN loss surface along the trajectory of SGD by interpolating the loss surface between parameters from consecutive iterations and tracking various metrics during training. We find that the loss interpolation between parameters before and after each training iteration’s update is roughly convex with a minimum (valley floor) in between for most of the training. Based on this and other metrics, we deduce that for most of the training update steps, SGD moves in valley like regions of the loss surface by jumping from one valley wall to another at a height above the valley floor. 
This ’bouncing between walls at a height’ mechanism helps SGD traverse larger distances for small batch sizes and large learning rates, which we find play qualitatively different roles in the dynamics. While a large learning rate maintains a large height from the valley floor, a small batch size injects noise facilitating exploration. We find this mechanism is crucial for generalization because the valley floor has barriers and this exploration above the valley floor allows SGD to quickly travel far away from the initialization point (without being affected by barriers) and find flatter regions, corresponding to better generalization.", "title": "" }, { "docid": "9f2ade778dce9e007e9f5fa47af861b2", "text": "The potency of the environment to shape brain function changes dramatically across the lifespan. Neural circuits exhibit profound plasticity during early life and are later stabilized. A focus on the cellular and molecular bases of these developmental trajectories has begun to unravel mechanisms, which control the onset and closure of such critical periods. Two important concepts have emerged from the study of critical periods in the visual cortex: (1) excitatory-inhibitory circuit balance is a trigger; and (2) molecular \"brakes\" limit adult plasticity. The onset of the critical period is determined by the maturation of specific GABA circuits. Targeting these circuits using pharmacological or genetic approaches can trigger premature onset or induce a delay. These manipulations are so powerful that animals of identical chronological age may be at the peak, before, or past their plastic window. Thus, critical period timing per se is plastic. Conversely, one of the outcomes of normal development is to stabilize the neural networks initially sculpted by experience. Rather than being passively lost, the brain's intrinsic potential for plasticity is actively dampened. 
This is demonstrated by the late expression of brake-like factors, which reversibly limit excessive circuit rewiring beyond a critical period. Interestingly, many of these plasticity regulators are found in the extracellular milieu. Understanding why so many regulators exist, how they interact and, ultimately, how to lift them in noninvasive ways may hold the key to novel therapies and lifelong learning.", "title": "" }, { "docid": "3f4fcbc355d7f221eb6c9bc4a26b0448", "text": "BACKGROUND\nMost of our social interactions involve perception of emotional information from the faces of other people. Furthermore, such emotional processes are thought to be aberrant in a range of clinical disorders, including psychosis and depression. However, the exact neurofunctional maps underlying emotional facial processing are not well defined.\n\n\nMETHODS\nTwo independent researchers conducted separate comprehensive PubMed (1990 to May 2008) searches to find all functional magnetic resonance imaging (fMRI) studies using a variant of the emotional faces paradigm in healthy participants. The search terms were: \"fMRI AND happy faces,\" \"fMRI AND sad faces,\" \"fMRI AND fearful faces,\" \"fMRI AND angry faces,\" \"fMRI AND disgusted faces\" and \"fMRI AND neutral faces.\" We extracted spatial coordinates and inserted them in an electronic database. We performed activation likelihood estimation analysis for voxel-based meta-analyses.\n\n\nRESULTS\nOf the originally identified studies, 105 met our inclusion criteria. The overall database consisted of 1785 brain coordinates that yielded an overall sample of 1600 healthy participants. Quantitative voxel-based meta-analysis of brain activation provided neurofunctional maps for 1) main effect of human faces; 2) main effect of emotional valence; and 3) modulatory effect of age, sex, explicit versus implicit processing and magnetic field strength. 
Processing of emotional faces was associated with increased activation in a number of visual, limbic, temporoparietal and prefrontal areas; the putamen; and the cerebellum. Happy, fearful and sad faces specifically activated the amygdala, whereas angry or disgusted faces had no effect on this brain region. Furthermore, amygdala sensitivity was greater for fearful than for happy or sad faces. Insular activation was selectively reported during processing of disgusted and angry faces. However, insular sensitivity was greater for disgusted than for angry faces. Conversely, neural response in the visual cortex and cerebellum was observable across all emotional conditions.\n\n\nLIMITATIONS\nAlthough the activation likelihood estimation approach is currently one of the most powerful and reliable meta-analytical methods in neuroimaging research, it is insensitive to effect sizes.\n\n\nCONCLUSION\nOur study has detailed neurofunctional maps to use as normative references in future fMRI studies of emotional facial processing in psychiatric populations. We found selective differences between neural networks underlying the basic emotions in limbic and insular brain regions.", "title": "" }, { "docid": "03aac64e2d209d628874614d061b90f9", "text": "Patterns of reading development were examined in native English-speaking (L1) children and children who spoke English as a second language (ESL). Participants were 978 (790 L1 speakers and 188 ESL speakers) Grade 2 children involved in a longitudinal study that began in kindergarten. In kindergarten and Grade 2, participants completed standardized and experimental measures including reading, spelling, phonological processing, and memory. All children received phonological awareness instruction in kindergarten and phonics instruction in Grade 1. By the end of Grade 2, the ESL speakers' reading skills were comparable to those of L1 speakers, and ESL speakers even outperformed L1 speakers on several measures. 
The findings demonstrate that a model of early identification and intervention for children at risk is beneficial for ESL speakers and also suggest that the effects of bilingualism on the acquisition of early reading skills are not negative and may be positive.", "title": "" }, { "docid": "0d95c132ff0dcdb146ed433987c426cf", "text": "A smart connected car in conjunction with the Internet of Things (IoT) is an emerging topic. The fundamental concept of the smart connected car is connectivity, and such connectivity can be provided by three aspects, such as Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I), and Vehicle-to-Everything (V2X). To meet the aspects of V2V and V2I connectivity, we developed modules in accordance with international standards with respect to On-Board Diagnostics II (OBDII) and 4G Long Term Evolution (4G-LTE) to obtain and transmit vehicle information. We also developed software to visually check information provided by our modules. Information related to a user’s driving, which is transmitted to a cloud-based Distributed File System (DFS), was then analyzed for the purpose of big data analysis to provide information on driving habits to users. Yet, since this work is an ongoing research project, we focus on proposing an idea of system architecture and design in terms of big data analysis. Therefore, our contributions through this work are as follows: (1) Develop modules based on Controller Area Network (CAN) bus, OBDII, and 4G-LTE; (2) Develop software to check vehicle information on a PC; (3) Implement a database related to vehicle diagnostic codes; (4) Propose system architecture and design for big data analysis.", "title": "" }, { "docid": "cad8b81a115a2a59c8e3e6d44519b850", "text": "Inter-cell interference is the main obstacle for increasing the network capacity of Long-Term Evolution Advanced (LTE-A) system. Interference Cancellation (IC) is a promising way to improve spectral efficiency. 
3rd Generation Partnership Project (3GPP) has launched a new research project, Network-Assisted Interference Cancellation and Suppression (NAICS), in LTE Rel-12. Advanced receivers used in NAICS include the maximum likelihood (ML) receiver and the symbol-level IC (SLIC) receiver. These receivers require certain interference parameters, such as the rank indicator (RI), precoding matrix indicator (PMI) and modulation level (MOD). This paper presents a new IC receiver based on detection. We obtain a clean interfering signal with the aid of detection and use it in SLIC. The clean interfering signal makes the estimation of the interfering transmitted signal more accurate, so interference cancellation is more effective. We also improve the method of interference parameter estimation so that it avoids estimating power boosting and the precoding matrix simultaneously. The simulation results show that the performance of the proposed SLIC is better than that of traditional SLIC and close to ML.", "title": "" }, { "docid": "58e84998bca4d4d9368f5bc5879e64c0", "text": "This paper summarizes the effect of age, gender and race on electrocardiographic parameters. The conduction system and heart muscles undergo degenerative changes with advancing age, so these parameters change. The ECG parameters also change under certain diseases. It is therefore essential to know the normal limits of these parameters for diagnostic purposes under the influence of age, gender and race. Automated ECG analysis systems require the normal limits of these parameters. The age and gender of the population clearly influence the normal limits of ECG parameters. However, further investigation of the effect of Body Mass Index on a cross-section of the population is warranted.", "title": "" } ]
scidocsrr
de73727725559471811181920e733481
Moving average reversion strategy for on-line portfolio selection
[ { "docid": "dc187c1fb2af0cfdf0d39295151f9075", "text": "Online portfolio selection is a fundamental problem in computational finance, which has been extensively studied across several research communities, including finance, statistics, artificial intelligence, machine learning, and data mining. This article aims to provide a comprehensive survey and a structural understanding of online portfolio selection techniques published in the literature. From an online machine learning perspective, we first formulate online portfolio selection as a sequential decision problem, and then we survey a variety of state-of-the-art approaches, which are grouped into several major categories, including benchmarks, Follow-the-Winner approaches, Follow-the-Loser approaches, Pattern-Matching--based approaches, and Meta-Learning Algorithms. In addition to the problem formulation and related algorithms, we also discuss the relationship of these algorithms with the capital growth theory so as to better understand the similarities and differences of their underlying trading ideas. This article aims to provide a timely and comprehensive survey for both machine learning and data mining researchers in academia and quantitative portfolio managers in the financial industry to help them understand the state of the art and facilitate their research and practical applications. We also discuss some open issues and evaluate some emerging new trends for future research.", "title": "" } ]
[ { "docid": "ae92750b161381ac02c8600eb4c93beb", "text": "Text-based password authentication schemes tend to be more vulnerable to attacks such as shoulder-surfing and hidden cameras. To overcome the vulnerabilities of traditional methods, visual or graphical password schemes have been developed as possible alternative solutions to text-based password schemes. Because simply adopting graphical password authentication also has some drawbacks, schemes using both graphics and text have been developed. In this paper, we propose a hybrid password authentication scheme based on shape and text. It uses shapes of strokes on the grid as the original passwords and allows users to log in with text passwords via traditional input devices. The method provides strong resistance to hidden-camera and shoulder-surfing attacks. Moreover, the scheme has high scalability and flexibility to enhance the security of the authentication process. An analysis of the security level of this approach is also discussed.", "title": "" }, { "docid": "f5f6036fa3f8c16ad36b3c65794fc86b", "text": "Cloud computing has become the buzzword in the industry today. Though it is not an entirely new concept, in today’s digital age it has become ubiquitous due to the proliferation of the Internet, broadband, mobile devices, better bandwidth and mobility requirements for end-users (be they consumers, SMEs or enterprises). In this paper, the focus is on the perceived inclination of micro and small businesses (SMEs or SMBs) toward cloud computing and the benefits reaped by them. This paper presents five factors influencing the cloud usage by this business community, whose needs and business requirements are very different from those of large enterprises. Firstly, ease of use and convenience is the biggest favorable factor, followed by security and privacy, and then comes cost reduction. 
The fourth factor, reliability, is ignored, as SMEs do not consider the cloud reliable. Last but not least, SMEs do not want to use the cloud for sharing and collaboration and prefer their conventional methods for sharing and collaborating with their stakeholders.", "title": "" }, { "docid": "790ac9330d698cf5d6f3f8fc7891f090", "text": "It is well known that the convergence rate of the expectation-maximization (EM) algorithm can be faster than those of conventional first-order iterative algorithms when the overlap in the given mixture is small. But this argument has not been mathematically proved yet. This article studies this problem asymptotically in the setting of Gaussian mixtures under the theoretical framework of Xu and Jordan (1996). It has been proved that the asymptotic convergence rate of the EM algorithm for Gaussian mixtures locally around the true solution is o(e^{0.5-ε}(Θ*)), where ε > 0 is an arbitrarily small number, o(x) means that it is a higher-order infinitesimal as x → 0, and e(Θ*) is a measure of the average overlap of Gaussians in the mixture. In other words, the large sample local convergence rate for the EM algorithm tends to be asymptotically superlinear when e(Θ*) tends to zero.", "title": "" }, { "docid": "cb59a7493f6b9deee4691e6f97c93a1f", "text": "AIMS AND OBJECTIVES\nThis integrative review of the literature addresses undergraduate nursing students' attitudes towards and use of research and evidence-based practice, and factors influencing this. Current use of research and evidence within practice, and the influences and perceptions of students in using these tools in the clinical setting are explored.\n\n\nBACKGROUND\nEvidence-based practice is an increasingly critical aspect of quality health care delivery, with nurses requiring skills in sourcing relevant information to guide the care they provide. Yet, barriers to engaging in evidence-based practice remain.
To increase nurses' use of evidence-based practice within healthcare settings, the concepts and skills required must be introduced early in their career. To date, however, there is little evidence to show if and how this inclusion makes a difference.\n\n\nDESIGN\nIntegrative literature review.\n\n\nMETHODS\nProQuest, Summon, Science Direct, Ovid, CIAP, Google Scholar and SAGE databases were searched, and snowball search strategies were used. One hundred and eighty-one articles were reviewed. Articles were then discarded for irrelevance. Nine articles discussed student attitudes and utilisation of research and evidence-based practice.\n\n\nRESULTS\nFactors surrounding the attitudes and use of research and evidence-based practice were identified, and included the students' capability beliefs, the students' attitudes, and the attitudes and support capabilities of wards/preceptors.\n\n\nCONCLUSIONS\nUndergraduate nursing students are generally positive toward using research for evidence-based practice, but experience a lack of support and opportunity. These students face cultural and attitudinal disadvantage, and lack confidence to practice independently. Further research and collaboration between educational facilities and clinical settings may improve utilisation.\n\n\nRELEVANCE TO CLINICAL PRACTICE\nThis paper adds further discussion to the topic from the perspective of undergraduate students and new graduate nurses, including the influences surrounding them.", "title": "" }, { "docid": "a09fb2b15ebf81006ccda273a141412a", "text": "Computing containment relations between massive collections of sets is a fundamental operation in data management, for example in graph analytics and data mining applications. Motivated by recent hardware trends, in this paper we present two novel solutions for computing set-containment joins over massive sets: the Patricia Trie-based Signature Join (PTSJ) and PRETTI+, a Patricia trie enhanced extension of the state-of-the-art PRETTI join.
The compact trie structure not only enables efficient use of main-memory, but also significantly boosts the performance of both approaches. By carefully analyzing the algorithms and conducting extensive experiments with various synthetic and real-world datasets, we show that, in many practical cases, our algorithms are an order of magnitude faster than the state-of-the-art.", "title": "" }, { "docid": "b0991cd60b3e94c0ed3afede89e13f36", "text": "It has been established that incorporating word cluster features derived from large unlabeled corpora can significantly improve prediction of linguistic structure. While previous work has focused primarily on English, we extend these results to other languages along two dimensions. First, we show that these results hold true for a number of languages across families. Second, and more interestingly, we provide an algorithm for inducing cross-lingual clusters and we show that features derived from these clusters significantly improve the accuracy of cross-lingual structure prediction. Specifically, we show that by augmenting direct-transfer systems with cross-lingual cluster features, the relative error of delexicalized dependency parsers, trained on English treebanks and transferred to foreign languages, can be reduced by up to 13%. When applying the same method to direct transfer of named-entity recognizers, we observe relative improvements of up to 26%.", "title": "" }, { "docid": "19aa8d26eae39aa1360aba38aaefc29e", "text": "We present a matrix factorization model inspired by challenges we encountered while working on the Xbox movies recommendation system. The item catalog in a recommender system is typically equipped with meta-data features in the form of labels. However, only part of these features are informative or useful with regard to collaborative filtering. 
By incorporating a novel sparsity prior on feature parameters, the model automatically discerns and utilizes informative features while simultaneously pruning non-informative features.\n The model is designed for binary feedback, which is common in many real-world systems where numeric rating data is scarce or non-existent. However, the overall framework is applicable to any likelihood function. Model parameters are estimated with a Variational Bayes inference algorithm, which is robust to over-fitting and does not require cross-validation and fine tuning of regularization coefficients. The efficacy of our method is illustrated on a sample from the Xbox movies dataset as well as on the publicly available MovieLens dataset. In both cases, the proposed solution provides superior predictive accuracy, especially for long-tail items. We then demonstrate the feature selection capabilities and compare against the common case of simple Gaussian priors. Finally, we show that even without features, our model performs better than a baseline model trained with the popular stochastic gradient descent approach.", "title": "" }, { "docid": "601748e27c7b3eefa4ff29252b42bf93", "text": "A simple, fast method is presented for the interpolation of texture coordinates and shading parameters for polygons viewed in perspective. The method has application in scan conversion algorithms like z-buffer and painter's algorithms that perform screen space interpolation of shading parameters such as texture coordinates, colors, and normal vectors. Some previous methods perform linear interpolation in screen space, but this is rotationally variant, and in the case of texture mapping, causes a disturbing \"rubber sheet\" effect. To correctly compute the nonlinear, projective transformation between screen space and parameter space, we use rational linear interpolation across the polygon, performing several divisions at each pixel.
We present simpler formulas for setting up these interpolation computations, reducing the setup cost per polygon to nil and reducing the cost per vertex to a handful of divisions. Additional keywords: incremental, perspective, projective, affine.", "title": "" }, { "docid": "77059bf4b66792b4f34bc78bbb0b373a", "text": "Cloud computing systems host most of today's commercial business applications, yielding them high revenue, which makes them a target of cyber attacks. This emphasizes the need for a digital forensic mechanism for the cloud environment. Conventional digital forensics cannot be directly presented as a cloud forensic solution due to the multi-tenancy and virtualization of resources prevalent in the cloud. In cloud forensics, the data to be inspected are cloud component logs, virtual machine disk images, volatile memory dumps, console logs and network captures. In this paper, we present a remote evidence collection and pre-processing framework using Struts and the Hadoop distributed file system. Collection of VM disk images, logs, etc., is initiated through a pull model when triggered by the investigator, whereas the cloud node periodically pushes network captures to HDFS. Pre-processing steps such as clustering and correlation of logs and VM disk images are carried out through Mahout and Weka to implement cross-drive analysis.", "title": "" }, { "docid": "03f99359298276cb588eb8fa85f1e83e", "text": "In recent years, there has been a growing interest in wireless sensor networks (WSNs) for a variety of applications such as localization and real-time positioning. Different approaches based on artificial intelligence are applied to solve common issues in WSNs and improve network performance.
This paper presents a survey of machine learning techniques for localization in WSNs using the Received Signal Strength Indicator.", "title": "" }, { "docid": "c4616ae56dd97595f63b60abc2bea55c", "text": "Driven by the challenges of rapid urbanization, cities are determined to implement advanced socio-technological changes and transform into smarter cities. The success of such transformation, however, greatly relies on a thorough understanding of the city's states of spatiotemporal flux. The ability to understand such fluctuations in context and in terms of interdependencies that exist among various entities across time and space is crucial, if cities are to maintain their smart growth. Here, we introduce a Smart City Digital Twin paradigm that can enable increased visibility into cities' human-infrastructure-technology interactions, in which spatiotemporal fluctuations of the city are integrated into an analytics platform at the real-time intersection of reality-virtuality. Through learning and exchange of spatiotemporal information with the city, enabled through virtualization and the connectivity offered by Internet of Things (IoT), this Digital Twin of the city becomes smarter over time, able to provide predictive insights into the city's smarter performance and growth.", "title": "" }, { "docid": "14fe96edca3ae38979c5d72f1d8aef40", "text": "How can prior knowledge on the transformation invariances of a domain be incorporated into the architecture of a neural network? We propose Equivariant Transformers (ETs), a family of differentiable image-to-image mappings that improve the robustness of models towards pre-defined continuous transformation groups. Through the use of specially-derived canonical coordinate systems, ETs incorporate functions that are equivariant by construction with respect to these transformations. We show empirically that ETs can be flexibly composed to improve model robustness towards more complicated transformation groups in several parameters.
On a real-world image classification task, ETs improve the sample efficiency of ResNet classifiers, achieving relative improvements in error rate of up to 15% in the limited data regime while increasing model parameter count by less than 1%.", "title": "" }, { "docid": "15208617386aeb77f73ca7c2b7bb2656", "text": "Multiplication is the basic building block for DSP processors, image processing and many other applications. Over the years the computational complexities of algorithms used in Digital Signal Processors (DSPs) have gradually increased. This requires a parallel array multiplier to achieve high execution speed or to meet the performance demands. A typical implementation of such an array multiplier is the Braun design. The Braun multiplier is a type of parallel array multiplier. The architecture of the Braun multiplier mainly consists of Carry Save Adders, an array of AND gates and one Ripple Carry Adder. In this research work, a new design of the Braun multiplier is proposed, which uses a very fast parallel prefix adder (Kogge-Stone Adder) in place of the Ripple Carry Adder. The architecture of the standard Braun multiplier is modified in this work to reduce the delay due to the Ripple Carry Adder and perform faster multiplication of two binary numbers. This research also presents a comparative study of FPGA implementations on Spartan2 and Spartan2E for the new multiplier design and the standard Braun multiplier. The RTL design of the proposed new Braun multiplier and the standard Braun multiplier is done using Verilog HDL. The simulation is performed using ModelSim. The Xilinx ISE design tool is used for FPGA implementation.
Comparative results show that the modified design is more effective in terms of delay than the standard design.", "title": "" }, { "docid": "17eded575bf5e123030b93ec5dc19bc5", "text": "Our research is aimed at developing a quantitative approach for assessing supply chain resilience to disasters, a topic that has been discussed primarily in a qualitative manner in the literature. For this purpose, we propose a simulation-based framework that incorporates concepts of resilience into the process of supply chain design. In this context, resilience is defined as the ability of a supply chain system to reduce the probabilities of disruptions, to reduce the consequences of those disruptions, and to reduce the time to recover normal performance. The decision framework incorporates three determinants of supply chain resilience (density, complexity, and node criticality) and discusses their relationship to the occurrence of disruptions, to the impacts of those disruptions on the performance of a supply chain system and to the time needed for recovery. Different preliminary strategies for evaluating supply chain resilience to disasters are identified, and directions for future research are discussed.", "title": "" }, { "docid": "5b4045a80ae584050a9057ba32c9296b", "text": "Electro-rheological (ER) fluids are smart fluids which can transform into a solid-like phase by applying an electric field. This process is reversible and can be strategically used to build fluidic components for innovative soft robots capable of soft locomotion. In this work, we show the potential applications of ER fluids to build valves that simplify the design of fluidic-based soft robots. We propose the design and development of a composite ER valve, aimed at controlling the flexibility of soft robot bodies by controlling the ER fluid flow.
We then show how a number of such soft components can be embodied in a simple crawling soft robot (Wormbot); in a locomotion mechanism capable of forward motion through rotation; and in a tendon-driven continuum arm. All these embodiments show how the simplification of the hydraulic circuits relies on the simple structure of ER valves. Finally, we present preliminary experiments to characterize the behavior of Wormbot in terms of actuation forces.", "title": "" }, { "docid": "e060548f90eb06f359b2d8cfcf713c29", "text": "Objective\nTo conduct a systematic review of deep learning models for electronic health record (EHR) data, and illustrate various deep learning architectures for analyzing different data sources and their target applications. We also highlight ongoing research and identify open challenges in building deep learning models of EHRs.\n\n\nDesign/method\nWe searched PubMed and Google Scholar for papers on deep learning studies using EHR data published between January 1, 2010, and January 31, 2018. We summarize them according to these axes: types of analytics tasks, types of deep learning model architectures, special challenges arising from health data and tasks and their potential solutions, as well as evaluation strategies.\n\n\nResults\nWe surveyed and analyzed multiple aspects of the 98 articles we found and identified the following analytics tasks: disease detection/classification, sequential prediction of clinical events, concept embedding, data augmentation, and EHR data privacy. We then studied how deep architectures were applied to these tasks. We also discussed some special challenges arising from modeling EHR data and reviewed a few popular approaches. Finally, we summarized how performance evaluations were conducted for each task.\n\n\nDiscussion\nDespite the early success in using deep learning for health analytics applications, there still exist a number of issues to be addressed.
We discuss them in detail including data and label availability, the interpretability and transparency of the model, and ease of deployment.", "title": "" }, { "docid": "521fd4ce53761c9bda64b13a91513c18", "text": "The importance of organizational agility in a competitive environment is nowadays widely recognized and accepted. However, despite this awareness, the availability of tools and methods that support an organization in assessing and improving their organizational agility is scarce. Therefore, this study introduces the Organizational Agility Maturity Model in order to provide an easy-to-use yet powerful assessment tool for organizations in the software and IT service industry. Based on a design science research approach with a comprehensive literature review and an empirical investigation utilizing factor analysis, both scientific rigor as well as practical relevance is ensured. The applicability is further demonstrated by a cluster analysis identifying patterns of organizational agility that fit to the maturity model. The Organizational Agility Maturity Model further contributes to the field by providing a theoretically and empirically grounded structure of organizational agility supporting the efforts of developing a common understanding of the concept.", "title": "" }, { "docid": "44dfc8c3c5c1f414197ad7cd8dedfb2e", "text": "In this paper, we propose a framework for formation stabilization of multiple autonomous vehicles in a distributed fashion. Each vehicle is assumed to have simple dynamics, i.e. a double-integrator, with a directed (or an undirected) information flow over the formation graph of the vehicles. Our goal is to find a distributed control law (with an efficient computational cost) for each vehicle that makes use of limited information regarding the state of other vehicles. 
Here, the key idea in formation stabilization is the use of natural potential functions obtained from structural constraints of a desired formation in a way that leads to a collision-free, distributed, and bounded state feedback law for each vehicle.", "title": "" }, { "docid": "2acdc7dfe5ae0996ef0234ec51a34fe5", "text": "The on-line, or automatic, visual inspection of a PCB is the very first examination before its electronic testing. This inspection mainly checks for missing or wrongly placed components on the PCB. A missing electronic component does not necessarily damage the PCB. But if a component that can be placed in only one way has been soldered the other way around, it will be damaged, and there is a chance that other components may also get damaged. To avoid this, an automatic visual inspection is needed that can detect missing or wrongly placed electronic components. In this work, an automatic machine vision system is proposed for inspecting PCBs for any missing component as compared with a standard one. The system primarily consists of two parts: 1) the learning process, where the system is trained on the standard PCB, and 2) the inspection process, where the PCB under test is inspected for any missing component as compared with the standard one. The proposed system can be deployed on a manufacturing line at a much more affordable price compared to other commercial inspection systems.", "title": "" } ]
scidocsrr
64d42a604baece201ba258cf06ac275b
DCN+: Mixed Objective and Deep Residual Coattention for Question Answering
[ { "docid": "4337f8c11a71533d38897095e5e6847a", "text": "A number of recent works have proposed attention models for Visual Question Answering (VQA) that generate spatial maps highlighting image regions relevant to answering the question. In this paper, we argue that in addition to modeling “where to look” or visual attention, it is equally important to model “what words to listen to” or question attention. We present a novel co-attention model for VQA that jointly reasons about image and question attention. In addition, our model reasons about the question (and consequently the image via the co-attention mechanism) in a hierarchical fashion via a novel 1-dimensional convolution neural networks (CNN). Our model improves the state-of-the-art on the VQA dataset from 60.3% to 60.5%, and from 61.6% to 63.3% on the COCO-QA dataset. By using ResNet, the performance is further improved to 62.1% for VQA and 65.4% for COCO-QA.1. 1 Introduction Visual Question Answering (VQA) [2, 7, 16, 17, 29] has emerged as a prominent multi-discipline research problem in both academia and industry. To correctly answer visual questions about an image, the machine needs to understand both the image and question. Recently, visual attention based models [20, 23–25] have been explored for VQA, where the attention mechanism typically produces a spatial map highlighting image regions relevant to answering the question. So far, all attention models for VQA in literature have focused on the problem of identifying “where to look” or visual attention. In this paper, we argue that the problem of identifying “which words to listen to” or question attention is equally important. Consider the questions “how many horses are in this image?” and “how many horses can you see in this image?\". They have the same meaning, essentially captured by the first three words. 
A machine that attends to the first three words would arguably be more robust to linguistic variations irrelevant to the meaning and answer of the question. Motivated by this observation, in addition to reasoning about visual attention, we also address the problem of question attention. Specifically, we present a novel multi-modal attention model for VQA with the following two unique features: Co-Attention: We propose a novel mechanism that jointly reasons about visual attention and question attention, which we refer to as co-attention. Unlike previous works, which only focus on visual attention, our model has a natural symmetry between the image and question, in the sense that the image representation is used to guide the question attention and the question representation(s) are used to guide image attention. Question Hierarchy: We build a hierarchical architecture that co-attends to the image and question at three levels: (a) word level, (b) phrase level and (c) question level. At the word level, we embed the words to a vector space through an embedding matrix. At the phrase level, 1-dimensional convolution neural networks are used to capture the information contained in unigrams, bigrams and trigrams. The source code can be downloaded from https://github.com/jiasenlu/HieCoAttenVQA 30th Conference on Neural Information Processing Systems (NIPS 2016), Barcelona, Spain. Figure 1: Flowchart of our proposed hierarchical co-attention model.
Given a question, we extract its word level, phrase level and question level embeddings. At each level, we apply co-attention on both the image and question. The final answer prediction is based on all the co-attended image and question features. Specifically, we convolve word representations with temporal filters of varying support, and then combine the various n-gram responses by pooling them into a single phrase level representation. At the question level, we use recurrent neural networks to encode the entire question. For each level of the question representation in this hierarchy, we construct joint question and image co-attention maps, which are then combined recursively to ultimately predict a distribution over the answers. Overall, the main contributions of our work are: • We propose a novel co-attention mechanism for VQA that jointly performs question-guided visual attention and image-guided question attention. We explore this mechanism with two strategies, parallel and alternating co-attention, which are described in Sec. 3.3; • We propose a hierarchical architecture to represent the question, and consequently construct image-question co-attention maps at 3 different levels: word level, phrase level and question level. These co-attended features are then recursively combined from word level to question level for the final answer prediction; • At the phrase level, we propose a novel convolution-pooling strategy to adaptively select the phrase sizes whose representations are passed to the question level representation; • Finally, we evaluate our proposed model on two large datasets, VQA [2] and COCO-QA [17]. We also perform ablation studies to quantify the roles of different components in our model.", "title": "" } ]
[ { "docid": "236896835b48994d7737b9152c0e435f", "text": "A network is said to show assortative mixing if the nodes in the network that have many connections tend to be connected to other nodes with many connections. Here we measure mixing patterns in a variety of networks and find that social networks are mostly assortatively mixed, but that technological and biological networks tend to be disassortative. We propose a model of an assortatively mixed network, which we study both analytically and numerically. Within this model we find that networks percolate more easily if they are assortative and that they are also more robust to vertex removal.", "title": "" }, { "docid": "bfa2f3edf0bd1c27bfe3ab90dde6fd75", "text": "Sophorolipids are biosurfactants belonging to the class of the glycolipid, produced mainly by the osmophilic yeast Candida bombicola. Structurally they are composed by a disaccharide sophorose (2’-O-β-D-glucopyranosyl-β-D-glycopyranose) which is linked β -glycosidically to a long fatty acid chain with generally 16 to 18 atoms of carbon with one or more unsaturation. They are produced as a complex mix containing up to 40 molecules and associated isomers, depending on the species which produces it, the substrate used and the culture conditions. They present properties which are very similar or superior to the synthetic surfactants and other biosurfactants with the advantage of presenting low toxicity, higher biodegradability, better environmental compatibility, high selectivity and specific activity in a broad range of temperature, pH and salinity conditions. Its biological activities are directly related with its chemical structure. 
Sophorolipids possess great potential for application in areas such as food, bioremediation, cosmetics, pharmaceuticals, biomedicine, nanotechnology and enhanced oil recovery.", "title": "" }, { "docid": "a5a1dd08d612db28770175cc578dd946", "text": "A novel soft-robotic gripper design is presented, with three soft bending fingers and one passively adaptive palm. Each soft finger comprises two ellipse-profiled pneumatic chambers. Combined with the adaptive palm and the surface patterned feature, the soft gripper could achieve 40-N grasping force in practice, 10 times the self-weight, at a very low actuation pressure below 100 kPa. With the novel soft finger design, the gripper could pick up small objects, as well as conform to large convex-shape objects with reliable contact. The fabrication process was presented in detail, involving commercial-grade three-dimensional printing and molding of silicone rubber. The fabricated actuators and gripper were tested on a dedicated platform, showing the gripper could reliably grasp objects of various shapes and sizes, even with external disturbances.", "title": "" }, { "docid": "56287b9aea445b570aa7fe77f1b7751a", "text": "Unsupervised machine translation—i.e., not assuming any cross-lingual supervision signal, whether a dictionary, translations, or comparable corpora—seems impossible, but nevertheless, Lample et al. (2018a) recently proposed a fully unsupervised machine translation (MT) model. The model relies heavily on an adversarial, unsupervised alignment of word embedding spaces for bilingual dictionary induction (Conneau et al., 2018), which we examine here. Our results identify the limitations of current unsupervised MT: unsupervised bilingual dictionary induction performs much worse on morphologically rich languages that are not dependent marking, when monolingual corpora from different domains or different embedding algorithms are used.
We show that a simple trick, exploiting a weak supervision signal from identical words, enables more robust induction, and establish a near-perfect correlation between unsupervised bilingual dictionary induction performance and a previously unexplored graph similarity metric.", "title": "" }, { "docid": "dd4cc15729f65a0102028949b34cc56f", "text": "Autonomous vehicles platooning has received considerable attention in recent years, due to its potential to significantly benefit road transportation, improving traffic efficiency, enhancing road safety and reducing fuel consumption. The Vehicular ad hoc Networks and the de facto vehicular networking standard IEEE 802.11p communication protocol are key tools for the deployment of platooning applications, since the cooperation among vehicles is based on a reliable communication structure. However, vehicular networks can suffer different security threats. Indeed, in collaborative driving applications, the sudden appearance of a malicious attack can mainly compromise: (i) the correctness of data traffic flow on the vehicular network by sending malicious messages that alter the platoon formation and its coordinated motion; (ii) the safety of platooning application by altering vehicular network communication capability. In view of the fact that cyber attacks can lead to dangerous implications for the security of autonomous driving systems, it is fundamental to consider their effects on the behavior of the interconnected vehicles, and to try to limit them from the control design stage. To this aim, in this work we focus on some relevant types of malicious threats that affect the platoon safety, i.e. application layer attacks (Spoofing and Message Falsification) and network layer attacks (Denial of Service and Burst Transmission), and we propose a novel collaborative control strategy for enhancing the protection level of autonomous platoons. 
The control protocol is designed and validated both analytically and experimentally, for the considered malicious attack scenarios and for different communication topology structures. The effectiveness of the proposed strategy is shown by using PLEXE, a state-of-the-art inter-vehicular communications and mobility simulator that includes basic building blocks for platooning. A detailed experimental analysis discloses the robustness of the proposed approach and its capability of reacting to the effects of malicious attacks.", "title": "" }, { "docid": "c6bd4cd6f90abf20f2619b1d1af33680", "text": "General human action recognition requires understanding of various visual cues. In this paper, we propose a network architecture that computes and integrates the most important visual cues for action recognition: pose, motion, and the raw images. For the integration, we introduce a Markov chain model which adds cues successively. The resulting approach is efficient and applicable to action classification as well as to spatial and temporal action localization. The two contributions clearly improve the performance over respective baselines. The overall approach achieves state-of-the-art action classification performance on HMDB51, J-HMDB and NTU RGB+D datasets. Moreover, it yields state-of-the-art spatio-temporal action localization results on UCF101 and J-HMDB.", "title": "" }, { "docid": "384dfe9f80cd50ce3a41cd0fdc494e43", "text": "Optical Character Recognition (OCR) systems often generate errors for images with noise or with low scanning resolution. In this paper, a novel approach is proposed that can be used to improve and restore the quality of any clean lower-resolution image for easy recognition by the OCR process. The method relies on the production of four copies of the original image so that each picture undergoes different restoration processes. These four copies of the images are then passed to a single OCR engine in parallel.
In addition to that, the method does not need any traditional alignment between the four resulting texts, which would be time-consuming and require complex computation. It implements a new procedure to choose the best among them and can be applied without prior training on errors. The experimental results show an improvement of more than 67% in word error rate for low-resolution images.", "title": "" }, { "docid": "29822df06340218a43fbcf046cbeb264", "text": "Twitter provides search services to help people find new users to follow by recommending popular users or their friends' friends. However, these services do not offer the most relevant users to follow for a user. Furthermore, Twitter does not yet provide search services to find the most interesting tweet messages for a user. In this paper, we propose TWITOBI, a recommendation system for Twitter using probabilistic modeling for collaborative filtering which can recommend top-K users to follow and top-K tweets to read for a user. Our novel probabilistic model utilizes not only tweet messages but also the relationships between users. We develop an estimation algorithm for learning our model parameters and present its parallelized algorithm using MapReduce to handle large data. Our performance study with real-life data sets confirms the effectiveness and scalability of our algorithms.", "title": "" }, { "docid": "f095118c63d1531ebdbaec3565b0d91f", "text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. 
We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. These shortcomings should be addressed in future research.", "title": "" }, { "docid": "47ae3428ecddd561b678e5715dfd59ab", "text": "Social media have become an established feature of the dynamic information space that emerges during crisis events. 
Both emergency responders and the public use these platforms to search for, disseminate, challenge, and make sense of information during crises. In these situations rumors also proliferate, but just how fast such information can spread is an open question. We address this gap, modeling the speed of information transmission to compare retransmission times across content and context features. We specifically contrast rumor-affirming messages with rumor-correcting messages on Twitter during a notable hostage crisis to reveal differences in transmission speed. Our work has important implications for the growing field of crisis informatics.", "title": "" }, { "docid": "e3664eb9901464d6af312e817393e712", "text": "The security of computer systems fundamentally relies on memory isolation, e.g., kernel address ranges are marked as non-accessible and are protected from user access. In this paper, we present Meltdown. Meltdown exploits side effects of out-of-order execution on modern processors to read arbitrary kernel-memory locations including personal data and passwords. Out-of-order execution is an indispensable performance feature and present in a wide range of modern processors. The attack is independent of the operating system, and it does not rely on any software vulnerabilities. Meltdown breaks all security guarantees provided by address space isolation as well as paravirtualized environments and, thus, every security mechanism building upon this foundation. On affected systems, Meltdown enables an adversary to read memory of other processes or virtual machines in the cloud without any permissions or privileges, affecting millions of customers and virtually every user of a personal computer. We show that the KAISER defense mechanism for KASLR has the important (but inadvertent) side effect of impeding Meltdown. 
We stress that KAISER must be deployed immediately to prevent large-scale exploitation of this severe information leakage.", "title": "" }, { "docid": "ae9469b80390e5e2e8062222423fc2cd", "text": "Social media such as those hosted on popular photo-sharing websites have been attracting increasing attention in recent years. As a type of user-generated data, wisdom of the crowd is embedded inside such social media. In particular, millions of users upload their photos to Flickr, many associated with temporal and geographical information. In this paper, we investigate how to rank the trajectory patterns mined from the uploaded photos with geotags and timestamps. The main objective is to reveal the collective wisdom recorded in the seemingly isolated photos and the individual travel sequences reflected by the geo-tagged photos. Instead of focusing on mining frequent trajectory patterns from geo-tagged social media, we put more effort into ranking the mined trajectory patterns and diversifying the ranking results. Through leveraging the relationships among users, locations and trajectories, we rank the trajectory patterns. We then use an exemplar-based algorithm to diversify the results in order to discover the representative trajectory patterns. We have evaluated the proposed framework on 12 different cities using a Flickr dataset and demonstrated its effectiveness.", "title": "" }, { "docid": "55a29653163bdf9599bf595154a99a25", "text": "The effect of steel slag aggregate aging on the mechanical properties of high performance concrete is analysed in this paper. The effect of different aging periods of steel slag aggregate on mechanical properties of high performance concrete is studied. It was observed that properties of this concrete are affected by the steel slag aggregate aging process. The compressive strength increases with an increase in the aging period of steel slag aggregate. 
The flexural strength, Young's modulus, and impact strength of the concrete increase at a rate similar to that of the compressive strength. The workability and the abrasion loss of concrete decrease with an increase of the steel slag aggregate aging period.", "title": "" }, { "docid": "424bf67761e234f6cf85eacabf38a502", "text": "Due to the poor efficiency of Incandescent Lamps (ILs), Fluorescent Lamps (FLs) and Compact Fluorescent Lamps (CFLs) are increasingly used in residential and commercial applications. This proliferation of FLs and CFLs increases the harmonics level in distribution systems, which could affect power systems and end users. In order to quantify the harmonics produced by FLs and CFLs precisely, accurate modelling of these loads is required. Matlab Simulink is used to model and simulate the full models of FLs and CFLs to give results close to the experimental measurements. Moreover, a Constant Load Power (CLP) model is also modelled and its results are compared with the full models of FLs and CFLs. This CLP model is much faster to simulate and easier to model than the full model. Such models help engineers and researchers to evaluate the harmonics that exist within households and commercial buildings.", "title": "" }, { "docid": "69dc7ae1e3149d475dabb4bbf8f05172", "text": "Knowledge about entities is essential for natural language understanding. This knowledge includes several facts about entities such as their names, properties, relations and types. This data is usually stored in large scale structures called knowledge bases (KB) and therefore building and maintaining KBs is very important. Examples of such KBs are Wikipedia, Freebase and Google knowledge graph. Incompleteness is unfortunately a reality for every KB, because the world is changing – new entities are emerging, and existing entities are getting new properties. Therefore, we always need to update KBs. 
To do so, we propose an information extraction method that processes large raw corpora in order to gather knowledge about entities. We focus on extraction of entity types and address the task of fine-grained entity typing: given a KB and a large corpus of text with mentions of entities in the KB, find all fine-grained types of the entities. For example given a large corpus and the entity “Barack Obama” we need to find all his types including PERSON, POLITICIAN, and AUTHOR. Artificial neural networks (NNs) have shown promising results in different machine learning problems. Distributed representation (embedding) is an effective way of representing data for NNs. In this work, we introduce two models for fine-grained entity typing using NNs with distributed representations of language units: (i) A global model that predicts types of an entity based on its global representation learned from the entity’s name and contexts. (ii) A context model that predicts types of an entity based on its context-level predictions. Each of the two proposed models has some specific properties. For the global model, learning high quality entity representations is crucial because it is the only source used for the predictions. Therefore, we introduce representations using name and contexts of entities on three levels of entity, word, and character. We show each has complementary information and a multi-level representation is the best. For the context model, we need to use distant supervision since the contextlevel labels are not available for entities. Distant supervised labels are noisy and this harms the performance of models. Therefore, we introduce and apply new algorithms for noise mitigation using multi-instance learning.", "title": "" }, { "docid": "afefd32f480dbb5880eea1d9e489147e", "text": "Creating mechanical automata that can walk in stable and pleasing manners is a challenging task that requires both skill and expertise. 
We propose to use computational design to offset the technical difficulties of this process. A simple drag-and-drop interface allows casual users to create personalized walking toys from a library of pre-defined template mechanisms. Provided with this input, our method leverages physical simulation and evolutionary optimization to refine the mechanical designs such that the resulting toys are able to walk. The optimization process is guided by an intuitive set of objectives that measure the quality of the walking motions. We demonstrate our approach on a set of simulated mechanical toys with different numbers of legs and various distinct gaits. Two fabricated prototypes showcase the feasibility of our designs.", "title": "" }, { "docid": "5fbedf5f399ee19d083a73f962cd9f29", "text": "A 70 mm-open-ended coaxial line probe was developed to perform measurements of the dielectric properties of large concrete samples. The complex permittivity was measured in the frequency range 50 MHz – 1.5 GHz during the hardening process of the concrete. As expected, strong dependence of water content was observed.", "title": "" }, { "docid": "1a77d9ee6da4620b38efec315c6357a1", "text": "The authors present a new approach to culture and cognition, which focuses on the dynamics through which specific pieces of cultural knowledge (implicit theories) become operative in guiding the construction of meaning from a stimulus. Whether a construct comes to the fore in a perceiver's mind depends on the extent to which the construct is highly accessible (because of recent exposure). In a series of cognitive priming experiments, the authors simulated the experience of bicultural individuals (people who have internalized two cultures) of switching between different cultural frames in response to culturally laden symbols. 
The authors discuss how this dynamic, constructivist approach illuminates (a) when cultural constructs are potent drivers of behavior and (b) how bicultural individuals may control the cognitive effects of culture.", "title": "" }, { "docid": "cba3209a27e1332f25f29e8b2c323d37", "text": "One of the technologies that has been showing possibilities of application in educational environments is the Augmented Reality (AR), in addition to its application to other fields such as tourism, advertising, video games, among others. The present article shows the results of an experiment carried out at the National University of Colombia, with the design and construction of augmented learning objects for the seventh and eighth grades of secondary education, which were tested and evaluated by students of a school in the department of Caldas. The study confirms the potential of this technology to support educational processes represented in the creation of digital resources for mobile devices. The development of learning objects in AR for mobile devices can support teachers in the integration of information and communication technologies (ICT) in the teaching-learning processes.", "title": "" } ]
scidocsrr
d47312497b8018730d33a0545a46c4fa
Animated narrative visualization for video clickstream data
[ { "docid": "5f04fcacc0dd325a1cd3ba5a846fe03f", "text": "Web clickstream data are routinely collected to study how users browse the web or use a service. It is clear that the ability to recognize and summarize user behavior patterns from such data is valuable to e-commerce companies. In this paper, we introduce a visual analytics system to explore the various user behavior patterns reflected by distinct clickstream clusters. In a practical analysis scenario, the system first presents an overview of clickstream clusters using a Self-Organizing Map with Markov chain models. Then the analyst can interactively explore the clusters through an intuitive user interface. He can either obtain summarization of a selected group of data or further refine the clustering result. We evaluated our system using two different datasets from eBay. Analysts who were working on the same data have confirmed the system's effectiveness in extracting user behavior patterns from complex datasets and enhancing their ability to reason.", "title": "" } ]
[ { "docid": "f29d0ea5ff5c96dadc440f4d4aa229c6", "text": "Wikipedia infoboxes are a valuable source of structured knowledge for global knowledge sharing. However, infobox information is very incomplete and imbalanced among the Wikipedias in different languages. It is a promising but challenging problem to utilize the rich structured knowledge from a source language Wikipedia to help complete the missing infoboxes for a target language. In this paper, we formulate the problem of cross-lingual knowledge extraction from multilingual Wikipedia sources, and present a novel framework, called WikiCiKE, to solve this problem. An instancebased transfer learning method is utilized to overcome the problems of topic drift and translation errors. Our experimental results demonstrate that WikiCiKE outperforms the monolingual knowledge extraction method and the translation-based method.", "title": "" }, { "docid": "97a1d44956f339a678da4c7a32b63bf6", "text": "As a first step towards agents learning to communicate about their visual environment, we propose a system that, given visual representations of a referent (CAT) and a context (SOFA), identifies their discriminative attributes, i.e., properties that distinguish them (has_tail). Moreover, although supervision is only provided in terms of discriminativeness of attributes for pairs, the model learns to assign plausible attributes to specific objects (SOFA-has_cushion). Finally, we present a preliminary experiment confirming the referential success of the predicted discriminative attributes.", "title": "" }, { "docid": "98ff207ca344eb058c6bf7ba87751822", "text": "Ultra-wideband radar is an excellent tool for nondestructive examination of walls and highway structures. Therefore often steep edged narrow pulses with rise-, fall-times in the range of 100 ps are used. For digitizing of the reflected pulses a down conversion has to be accomplished. 
A new low cost sampling down converter with a sampling phase detector for use in ultra-wideband radar applications is presented.", "title": "" }, { "docid": "d14812771115b4736c6d46aecadb2d8a", "text": "This article reports on a helical spring-like piezoresistive graphene strain sensor formed within a microfluidic channel. The helical spring has a tubular hollow structure and is made of a thin graphene layer coated on the inner wall of the channel using an in situ microfluidic casting method. The helical shape allows the sensor to flexibly respond to both tensile and compressive strains in a wide dynamic detection range from 24 compressive strain to 20 tensile strain. Fabrication of the sensor involves embedding a helical thin metal wire with a plastic wrap into a precursor solution of an elastomeric polymer, forming a helical microfluidic channel by removing the wire from cured elastomer, followed by microfluidic casting of a graphene thin layer directly inside the helical channel. The wide dynamic range, in conjunction with mechanical flexibility and stretchability of the sensor, will enable practical wearable strain sensor applications where large strains are often involved.", "title": "" }, { "docid": "0f927fc7b8005ee6bb6ec22d8070a062", "text": "We propose a Dynamic-Spatial-Attention (DSA) Recurrent Neural Network (RNN) for anticipating accidents in dashcam videos (Fig. 1). Our DSA-RNN learns to (1) distribute soft-attention to candidate objects dynamically to gather subtle cues and (2) model the temporal dependencies of all cues to robustly anticipate an accident. Anticipating accidents is much less addressed than anticipating events such as changing a lane, making a turn, etc., since accidents are rare to be observed and can happen in many different ways mostly in a sudden. 
To overcome these challenges, we (1) utilize a state-of-the-art object detector [3] to detect candidate objects, and (2) incorporate full-frame and object-based appearance and motion features in our model. We also harvest a diverse dataset of 678 dashcam accident videos on the web (Fig. 3). The dataset is unique, since various accidents (e.g., a motorbike hits a car, a car hits another car, etc.) occur in all videos. We manually mark the time-location of accidents and use them as supervision to train and evaluate our method. We show that our method anticipates accidents about 2 seconds before they occur with 80% recall and 56.14% precision. Most importantly, it achieves the highest mean average precision (74.35%), outperforming other baselines without attention or RNN. Fu-Hsiang Chan, Yu-Ting Chen, Yu Xiang, Min Sun", "title": "" }, { "docid": "a4267e0cd6300dc128bfe9de62322ac7", "text": "According to the most common definition, idioms are linguistic expressions whose overall meaning cannot be predicted from the meanings of the constituent parts. Although we agree with the traditional view that there is no complete predictability, we suggest that there is a great deal of systematic conceptual motivation for the meaning of most idioms. Since most idioms are based on conceptual metaphors and metonymies, systematic motivation arises from sets of 'conceptual mappings or correspondences' that obtain between a source and a target domain in the sense of Lakoff and Kövecses (1987). We distinguish among three aspects of idiomatic meaning. First, the general meaning of idioms appears to be determined by the particular 'source domains' that apply to a particular target domain. Second, more specific aspects of idiomatic meaning are provided by the 'ontological mapping' that applies to a given idiomatic expression. Third, connotative aspects of idiomatic meaning can be accounted for by 'epistemic correspondences'. Finally, we also present an informal experimental study, the results of
which show that the cognitive semantic view can facilitate the learning of idioms for non-native speakers.", "title": "" }, { "docid": "c38a6685895c23620afb6570be4c646b", "text": "Today, artificial neural networks (ANNs) are widely used in a variety of applications, including speech recognition, face detection, disease diagnosis, etc. As an emerging branch of ANNs, Long Short-Term Memory (LSTM) is a recurrent neural network (RNN) that contains complex computational logic. To achieve high accuracy, researchers often build large-scale LSTM networks, which are time-consuming and power-consuming. In this paper, we present a hardware accelerator for the LSTM neural network layer based on the FPGA Zedboard and use pipeline methods to parallelize the forward computing process. We also implement a sparse LSTM hidden layer, which consumes fewer storage resources than the dense network. Our accelerator is power-efficient and has a higher speed than an ARM Cortex-A9 processor.", "title": "" }, { "docid": "a3f2e552e5bbf2b4bab55963ee84915d", "text": "-risks, and balance formalization and portfolio management, improvement.", "title": "" }, { "docid": "e1096df0a86d37c11ed4a31d9e67ac6e", "text": "............................................................................................................................................... 4", "title": "" }, { "docid": "f5d412649f974245fb7142ea66e3e794", "text": "Inflammation clearly occurs in pathologically vulnerable regions of the Alzheimer's disease (AD) brain, and it does so with the full complexity of local peripheral inflammatory responses. In the periphery, degenerating tissue and the deposition of highly insoluble abnormal materials are classical stimulants of inflammation. Likewise, in the AD brain, damaged neurons and neurites and highly insoluble amyloid beta peptide deposits and neurofibrillary tangles provide obvious stimuli for inflammation. 
Because these stimuli are discrete, microlocalized, and present from early preclinical to terminal stages of AD, local upregulation of complement, cytokines, acute phase reactants, and other inflammatory mediators is also discrete, microlocalized, and chronic. Cumulated over many years, direct and bystander damage from AD inflammatory mechanisms is likely to significantly exacerbate the very pathogenic processes that gave rise to it. Thus, animal models and clinical studies, although still in their infancy, strongly suggest that AD inflammation significantly contributes to AD pathogenesis. By better understanding AD inflammatory and immunoregulatory processes, it should be possible to develop anti-inflammatory approaches that may not cure AD but will likely help slow the progression or delay the onset of this devastating disorder.", "title": "" }, { "docid": "73a998535ab03730595ce5d9c1f071f7", "text": "This article familiarizes counseling psychologists with qualitative research methods in psychology developed in the tradition of European phenomenology. A brief history includes some of Edmund Husserl’s basic methods and concepts, the adoption of existential-phenomenology among psychologists, and the development and formalization of qualitative research procedures in North America. The choice points and alternatives in phenomenological research in psychology are delineated. The approach is illustrated by a study of a recovery program for persons repeatedly hospitalized for chronic mental illness. Phenomenological research is compared with other qualitative methods, and some of its benefits for counseling psychology are identified.", "title": "" }, { "docid": "8bd9e3fe5d2b6fe8d58a86baf3de3522", "text": "Hand pose estimation from single depth images is an essential topic in computer vision and human computer interaction. Despite recent advancements in this area promoted by convolutional neural networks, accurate hand pose estimation is still a challenging problem. 
In this paper we propose a novel approach named Pose guided structured Region Ensemble Network (Pose-REN) to boost the performance of hand pose estimation. Under the guidance of an initially estimated pose, the proposed method extracts regions from the feature maps of a convolutional neural network and generates more optimal and representative features for hand pose estimation. The extracted feature regions are then integrated hierarchically according to the topology of hand joints by tree-structured fully connected layers to regress the refined hand pose. The final hand pose is obtained by an iterative cascaded method. Comprehensive experiments on public hand pose datasets demonstrate that our proposed method outperforms state-of-the-art algorithms.", "title": "" }, { "docid": "78539b627037a491dade4a1e8abdaa0b", "text": "Scholarly citations from one publication to another, expressed as reference lists within academic articles, are core elements of scholarly communication. Unfortunately, they usually can be accessed en masse only by paying significant subscription fees to commercial organizations, while those few services that do make them available for free impose strict limitations on their reuse. In this paper we provide an overview of the OpenCitations Project (http://opencitations.net) undertaken to remedy this situation, and of its main product, the OpenCitations Corpus, which is an open repository of accurate bibliographic citation data harvested from the scholarly literature, made available in RDF under a Creative Commons public domain dedication. RASH version: https://w3id.org/oc/paper/occ-lisc2016.html", "title": "" }, { "docid": "f6df414f8f61dbdab32be2f05d921cb8", "text": "The task of discriminating one object from another is almost trivial for a human being. However, this task is computationally taxing for most modern machine learning methods, whereas we perform this task with ease given very few examples for learning. 
It has been proposed that the quick grasp of a concept may come from the shared knowledge between the new example and examples previously learned. We believe that the key to one-shot learning is the sharing of common parts, as each part holds immense amounts of information on how a visual concept is constructed. We propose an unsupervised method for learning a compact dictionary of image patches representing meaningful components of objects. Using those patches as features, we build a compositional model that outperforms a number of popular algorithms on a one-shot learning task. We demonstrate the effectiveness of this approach on hand-written digits and show that this model generalizes to multiple datasets.", "title": "" }, { "docid": "6c9c06604d5ef370b803bb54b4fe1e0c", "text": "Self-paced learning and hard example mining re-weight training instances to improve learning accuracy. This paper presents two improved alternatives based on lightweight estimates of sample uncertainty in stochastic gradient descent (SGD): the variance in predicted probability of the correct class across iterations of minibatch SGD, and the proximity of the correct class probability to the decision threshold. Extensive experimental results on six datasets show that our methods reliably improve accuracy in various network architectures, including additional gains on top of other popular training techniques, such as residual learning, momentum, ADAM, batch normalization, dropout, and distillation.", "title": "" }, { "docid": "eaf7f022e04a27c1616bff2d052d0e06", "text": "The human hand moves in complex and high-dimensional ways, making estimation of 3D hand pose configurations from images alone a challenging task. In this work we propose a method to learn a statistical hand model represented by a cross-modal trained latent space via a generative deep neural network. 
We derive an objective function from the variational lower bound of the VAE framework and jointly optimize the resulting cross-modal KL-divergence and the posterior reconstruction objective, naturally admitting a training regime that leads to a coherent latent space across multiple modalities such as RGB images, 2D keypoint detections or 3D hand configurations. Additionally, it grants a straightforward way of using semi-supervision. This latent space can be directly used to estimate 3D hand poses from RGB images, outperforming the state-of-the art in different settings. Furthermore, we show that our proposed method can be used without changes on depth images and performs comparably to specialized methods. Finally, the model is fully generative and can synthesize consistent pairs of hand configurations across modalities. We evaluate our method on both RGB and depth datasets and analyze the latent space qualitatively.", "title": "" }, { "docid": "b50efa7b82d929c1b8767e23e8359a06", "text": "Intrusion detection (ID) is an important component of infrastructure protection mechanisms. Intrusion detection systems (IDSs) need to be accurate, adaptive, and extensible. Given these requirements and the complexities of today's network environments, we need a more systematic and automated IDS development process rather that the pure knowledge encoding and engineering approaches. This article describes a novel framework, MADAM ID, for Mining Audit Data for Automated Models for Instrusion Detection. This framework uses data mining algorithms to compute activity patterns from system audit data and extracts predictive features from the patterns. It then applies machine learning algorithms to the audit records taht are processed according to the feature definitions to generate intrusion detection rules. Results from the 1998 DARPA Intrusion Detection Evaluation showed that our ID model was one of the best performing of all the participating systems. 
We also briefly discuss our experience in converting the detection models produced by off-line data mining programs to real-time modules of existing IDSs.", "title": "" }, { "docid": "58fbd637f7c044aeb0d55ba015c70f61", "text": "This paper outlines an innovative software development that utilizes Quality of Service (QoS) and parallel technologies in Cisco Catalyst Switches to increase the analytical performance of a Network Intrusion Detection and Protection System (NIDPS) when deployed in highspeed networks. We have designed a real network to present experiments that use a Snort NIDPS. Our experiments demonstrate the weaknesses of NIDPSes, such as inability to process multiple packets and propensity to drop packets in heavy traffic and high-speed networks without analysing them. We tested Snort’s analysis performance, gauging the number of packets sent, analysed, dropped, filtered, injected, and outstanding. We suggest using QoS configuration technologies in a Cisco Catalyst 3560 Series Switch and parallel Snorts to improve NIDPS performance and to reduce the number of dropped packets. Our results show that our novel configuration improves performance.", "title": "" } ]
scidocsrr
ce005239bc1f2180ad8508470e4a168d
Agent-based decision-making process in airport ground handling management
[ { "docid": "b20aa2222759644b4b60b5b450424c9e", "text": "Manufacturing has faced significant changes in recent years, namely the move from a local economy towards a global and competitive economy, with markets demanding highly customized products of high quality at lower costs, and with short life cycles. In this environment, manufacturing enterprises, to remain competitive, must respond closely to customer demands by improving their flexibility and agility, while maintaining their productivity and quality. Dynamic response to emergence is becoming a key issue in the manufacturing field because traditional manufacturing control systems are built upon rigid control architectures, which cannot respond efficiently and effectively to dynamic change. In these circumstances, the current challenge is to develop manufacturing control systems that exhibit intelligence, robustness and adaptation to environmental changes and disturbances. The introduction of multi-agent systems and holonic manufacturing systems paradigms addresses these requirements, bringing the advantages of modularity, decentralization, autonomy, scalability and reusability. This paper surveys the literature in manufacturing control systems using distributed artificial intelligence techniques, namely multi-agent systems and holonic manufacturing systems principles. The paper also discusses the reasons for the weak adoption of these approaches by industry and points out the challenges and research opportunities for the future. © 2008 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "36b609f1c748154f0f6193c6578acec9", "text": "Effective supply chain design calls for robust analytical models and design tools. Previous works in this area are mostly Operations Research oriented without considering manufacturing aspects. 
Recently, researchers have begun to realize that the decision and integration effort in supply chain design should be driven by the manufactured product, specifically, product characteristics and product life cycle. In addition, decision-making processes should be guided by a comprehensive set of performance metrics. In this paper, we relate product characteristics to supply chain strategy and adopt supply chain operations reference (SCOR) model level I performance metrics as the decision criteria. An integrated analytic hierarchy process (AHP) and preemptive goal programming (PGP) based multi-criteria decision-making methodology is then developed to take into account both qualitative and quantitative factors in supplier selection. While the AHP process matches product characteristics with supplier characteristics (using supplier ratings derived from pairwise comparisons) to qualitatively determine supply chain strategy, PGP mathematically determines the optimal order quantity from the chosen suppliers. Since PGP uses AHP ratings as input, the variations of pairwise comparisons in AHP will influence the final order quantity. Therefore, users of this methodology should put greater emphasis on the AHP process to ensure the accuracy of supplier ratings. © 2003 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "df7a68ebb9bc03d8a73a54ab3474373f", "text": "We report on the implementation of a color-capable sub-pixel resolving optofluidic microscope based on the pixel super-resolution algorithm and sequential RGB illumination, for low-cost on-chip color imaging of biological samples with sub-cellular resolution.", "title": "" }, { "docid": "720a3d65af4905cbffe74ab21d21dd3f", "text": "Fluorescent carbon nanoparticles or carbon quantum dots (CQDs) are a new class of carbon nanomaterials that have emerged recently and have garnered much interest as potential competitors to conventional semiconductor quantum dots. In addition to their comparable optical properties, CQDs have the desired advantages of low toxicity, environmental friendliness, low cost, and simple synthetic routes. Moreover, surface passivation and functionalization of CQDs allow for the control of their physicochemical properties. Since their discovery, CQDs have found many applications in the fields of chemical sensing, biosensing, bioimaging, nanomedicine, photocatalysis and electrocatalysis. This article reviews the progress in the research and development of CQDs with an emphasis on their synthesis, functionalization and technical applications along with some discussion on challenges and perspectives in this exciting and promising field.", "title": "" }, { "docid": "6f1e71399e5786eb9c3923a1e967cd8f", "text": "This Working Paper should not be reported as representing the views of the IMF. The views expressed in this Working Paper are those of the author(s) and do not necessarily represent those of the IMF or IMF policy. Working Papers describe research in progress by the author(s) and are published to elicit comments and to further debate. 
Using a dataset which breaks down FDI flows into primary, secondary and tertiary sector investments and a GMM dynamic approach to address concerns about endogeneity, the paper analyzes various macroeconomic, developmental, and institutional/qualitative determinants of FDI in a sample of emerging market and developed economies. While FDI flows into the primary sector show little dependence on any of these variables, secondary and tertiary sector investments are affected in different ways by countries’ income levels and exchange rate valuation, as well as development indicators such as financial depth and school enrollment, and institutional factors such as judicial independence and labor market flexibility. Finally, we find that the effect of these factors often differs between advanced and emerging economies. JEL Classification Numbers: F21, F23", "title": "" }, { "docid": "39d15901cd5fbd1629d64a165a94c5f5", "text": "This paper shows how to use modular Marx multilevel converter diode (M3CD) modules to apply unipolar or bipolar high-voltage pulses for pulsed power applications. The M3CD cells allow the assembly of a multilevel converter without needing complex algorithms and parameter measurement to balance the capacitor voltages. This paper also explains how to supply all the modular cells in order to ensure galvanic isolation between control circuits and power circuits. The experimental results for a generator with seven levels, and unipolar and bipolar pulses into resistive, inductive, and capacitive loads are presented.", "title": "" }, { "docid": "01e064e0f2267de5a26765f945114a6e", "text": "In this paper, we make contact with the field of nonparametric statistics and present a development and generalization of tools and results for use in image processing and reconstruction. In particular, we adapt and expand kernel regression ideas for use in image denoising, upscaling, interpolation, fusion, and more. 
Furthermore, we establish key relationships with some popular existing methods and show how several of these algorithms, including the recently popularized bilateral filter, are special cases of the proposed framework. The resulting algorithms and analyses are amply illustrated with practical examples", "title": "" }, { "docid": "4d445832d38c288b1b59a3df7b38eb1b", "text": "UNLABELLED\nThe aim of this prospective study was to assess the predictive value of (18)F-FDG PET/CT imaging for pathologic response to neoadjuvant chemotherapy (NACT) and outcome in inflammatory breast cancer (IBC) patients.\n\n\nMETHODS\nTwenty-three consecutive patients (51 y ± 12.7) with newly diagnosed IBC, assessed by PET/CT at baseline (PET1), after the third course of NACT (PET2), and before surgery (PET3), were included. The patients were divided into 2 groups according to pathologic response as assessed by the Sataloff classification: pathologic complete response for complete responders (stage TA and NA or NB) and non-pathologic complete response for noncomplete responders (not stage A for tumor or not stage NA or NB for lymph nodes). In addition to maximum standardized uptake value (SUVmax) measurements, a global breast metabolic tumor volume (MTV) was delineated using a semiautomatic segmentation method. Changes in SUVmax and MTV between PET1 and PET2 (ΔSUV1-2; ΔMTV1-2) and PET1 and PET3 (ΔSUV1-3; ΔMTV1-3) were measured.\n\n\nRESULTS\nMean SUVmax on PET1, PET2, and PET3 did not statistically differ between the 2 pathologic response groups. On receiver-operating-characteristic analysis, a 72% cutoff for ΔSUV1-3 provided the best performance to predict residual disease, with sensitivity, specificity, and accuracy of 61%, 80%, and 65%, respectively. On univariate analysis, the 72% cutoff for ΔSUV1-3 was the best predictor of distant metastasis-free survival (P = 0.05). 
On multivariate analysis, the 72% cutoff for ΔSUV1-3 was an independent predictor of distant metastasis-free survival (P = 0.01).\n\n\nCONCLUSION\nOur results emphasize the good predictive value of change in SUVmax between baseline and before surgery to assess pathologic response and survival in IBC patients undergoing NACT.", "title": "" }, { "docid": "53a67740e444b5951bc6ab257236996e", "text": "Although human perception appears to be automatic and unconscious, complex sensory mechanisms exist that form the preattentive component of understanding and lead to awareness. Considerable research has been carried out into these preattentive mechanisms and computational models have been developed for similar problems in the fields of computer vision and speech analysis. The focus here is to explore aural and visual information in video streams for modeling attention and detecting salient events. The separate aural and visual modules may convey explicit, complementary or mutually exclusive information around the detected audiovisual events. Based on recent studies on perceptual and computational attention modeling, we formulate measures of attention using features of saliency for the audiovisual stream. Audio saliency is captured by signal modulations and related multifrequency band features, extracted through nonlinear operators and energy tracking. Visual saliency is measured by means of a spatiotemporal attention model driven by various feature cues (intensity, color, motion). Features from both modules mapped to one-dimensional, time-varying saliency curves, from which statistics of salient segments can be extracted and important audio or visual events can be detected through adaptive, threshold-based mechanisms. Audio and video curves are integrated in a single attention curve, where events may be enhanced, suppressed or vanished. 
Salient events from the audiovisual curve are detected through geometrical features such as local extrema, sharp transitions and level sets. The potential of inter-module fusion and audiovisual event detection is demonstrated in applications such as video key-frame selection, video skimming and video annotation.", "title": "" }, { "docid": "c7160e93c9cce017adc1200dc7d597f2", "text": "The transcription factor, nuclear factor erythroid 2 p45-related factor 2 (Nrf2), acts as a sensor of oxidative or electrophilic stresses and plays a pivotal role in redox homeostasis. Oxidative or electrophilic agents cause a conformational change in the Nrf2 inhibitory protein Keap1 inducing the nuclear translocation of the transcription factor which, through its binding to the antioxidant/electrophilic response element (ARE/EpRE), regulates the expression of antioxidant and detoxifying genes such as heme oxygenase 1 (HO-1). Nrf2 and HO-1 are frequently upregulated in different types of tumours and correlate with tumour progression, aggressiveness, resistance to therapy, and poor prognosis. This review focuses on the Nrf2/HO-1 stress response mechanism as a promising target for anticancer treatment which is able to overcome resistance to therapies.", "title": "" }, { "docid": "f2c203e9364fee062747468dc7995429", "text": "Microinverters are module-level power electronic (MLPE) systems that are expected to have a service life of more than 25 years. The general practice for providing assurance in long-term reliability under humid climatic conditions is to subject the microinverters to a ‘damp heat test’ at 85°C/85%RH for 1000hrs as recommended in the IEC 61215 standard. However, there is limited understanding of the correlation between the said ‘damp heat’ test and field conditions for microinverters. In this paper, a physics-of-failure (PoF)-based approach is used to correlate the damp heat test to field conditions. 
Results of the PoF approach indicate that even 3000hrs at 85°C/85%RH may not be sufficient to guarantee a 25-year service life in certain places in the world. Furthermore, we also demonstrate that use of Miami, FL weathering data as a benchmark for defining damp heat test durations will not be sufficient to guarantee a 25-year service life. Finally, when tests were conducted at 85°C/85%RH for more than 3000hrs, it was found that the PV connectors are likely to fail before the actual power electronics could fail.", "title": "" }, { "docid": "bd8f4d5181d0b0bcaacfccd6fb0edd8b", "text": "Mass deployment of RF identification (RFID) is hindered by its cost per tag. The main cost comes from the application-specific integrated circuit (ASIC) chip set in a tag. A chipless tag costs less than a cent, and these have the potential for mass deployment for low-cost, item-level tagging as the replacement technology for optical barcodes. Chipless RFID tags can be directly printed on paper or plastic packets just like barcodes. They are highly useful for automatic identification and authentication, supply-chain automation, and medical applications. Among their potential industrial applications are authentication of polymer bank notes; scanning of credit cards, library cards, and the like; tracking of inventory in retail settings; and identification of pathology and other medical test samples.", "title": "" }, { "docid": "f44b5199f93d4b441c125ac55e4e0497", "text": "A modified method for better superpixel generation based on simple linear iterative clustering (SLIC) is presented and named BSLIC in this paper. By initializing cluster centers in a hexagonal distribution and performing k-means clustering in a limited region, the generated superpixels are shaped into regular and compact hexagons. 
The additional cluster centers are initialized as edge pixels to improve boundary adherence, which is further promoted by incorporating the boundary term into the distance calculation of the k-means clustering. The Berkeley Segmentation Dataset BSDS500 is used to qualitatively and quantitatively evaluate the proposed BSLIC method. Experimental results show that BSLIC achieves an excellent compromise between boundary adherence and regularity of size and shape. In comparison with SLIC, the boundary adherence of BSLIC is increased by at most 12.43% for boundary recall and 3.51% for under-segmentation error.", "title": "" }, { "docid": "54bee01d53b8bcb6ca067493993b4ff3", "text": "Large, relational factor graphs with structure defined by first-order logic or other languages give rise to notoriously difficult inference problems. Because unrolling the structure necessary to represent distributions over all hypotheses has exponential blow-up, solutions are often derived from MCMC. However, because of limitations in the design and parameterization of the jump function, these sampling-based methods suffer from local minima—the system must transition through lower-scoring configurations before arriving at a better MAP solution. This paper presents a new method of explicitly selecting fruitful downward jumps by leveraging reinforcement learning (RL) to model delayed reward with a log-linear function approximation of residual future score improvement. Our method provides dramatic empirical success, producing new state-of-the-art results on a complex joint model of ontology alignment, with a 48% reduction in error over state-of-the-art in that domain.", "title": "" }, { "docid": "6f1fc6a07d0beb235f5279e17a46447f", "text": "Nowadays, automatic multidocument text summarization systems can successfully retrieve the summary sentences from the input documents. 
However, such systems have many limitations, such as inaccurate extraction of essential sentences, low coverage, poor coherence among sentences, and redundancy. This paper introduces a new timestamp concept combined with a Naïve Bayesian Classification approach for multidocument text summarization. The timestamp gives the summary an ordered structure, which yields a more coherent summary. It extracts the most relevant information from the multiple documents. A scoring strategy is also used to calculate scores for words to obtain the word frequency. Linguistic quality is estimated in terms of readability and comprehensibility. In order to show the efficiency of the proposed method, this paper presents a comparison between the proposed method and the existing MEAD algorithm. The timestamp procedure is also applied to the MEAD algorithm, and its results are examined against those of the proposed method. The results show that the proposed method requires less time than the existing MEAD algorithm to execute the summarization process. Moreover, the proposed method achieves better precision, recall, and F-score than the existing clustering with lexical chaining approach.", "title": "" }, { "docid": "fad164e21c7ec013450a8b96d75d9457", "text": "Pinterest is a visual discovery tool for collecting and organizing content on the Web with over 70 million users. Users “pin” images, videos, articles, products, and other objects they find on the Web, and organize them into boards by topic. Other users can repin these and also follow other users or boards. Each user organizes things differently, and this produces a vast amount of human-curated content. For example, someone looking to decorate their home might pin many images of furniture that fits their taste. These curated collections produce a large number of associations between pins, and we investigate how to leverage these associations to surface personalized content to users. 
Little prior work has been done on the Pinterest network due to the lack of available data. We first performed an analysis on a representative sample of the Pinterest network. After analyzing the network, we created recommendation systems, suggesting pins that users would be likely to repin or like based on their previous interactions on Pinterest. We created recommendation systems using four approaches: a baseline recommendation system using the power law distribution of the images; a content-based filtering algorithm; and two collaborative filtering algorithms, one based on one-mode projection of a bipartite graph, and the second using a label propagation approach.", "title": "" }, { "docid": "05477664471a71eebc26d59aed9b0350", "text": "This article serves as a quick reference for respiratory alkalosis. Guidelines for analysis and causes, signs, and a stepwise approach are presented.", "title": "" }, { "docid": "9078698db240725e1eb9d1f088fb05f4", "text": "Broadcasting is a common operation in a network to resolve many issues. In a mobile ad hoc network (MANET) in particular, due to host mobility, such operations are expected to be executed more frequently (such as finding a route to a particular host, paging a particular host, and sending an alarm signal). Because radio signals are likely to overlap with others in a geographical area, a straightforward broadcasting by flooding is usually very costly and will result in serious redundancy, contention, and collision, which we call the broadcast storm problem. In this paper, we identify this problem by showing how serious it is through analyses and simulations. We propose several schemes to reduce redundant rebroadcasts and differentiate timing of rebroadcasts to alleviate this problem. 
Simulation results are presented, which show different levels of improvement over the basic flooding approach.", "title": "" }, { "docid": "cb641fc639b86abadec4f85efc226c14", "text": "The modernization of the US electric power infrastructure, especially in light of its aging, overstressed networks; shifts in social, energy and environmental policies; and new vulnerabilities, is a national concern. Our systems are required to be more adaptive and secure than ever before. Consumers are also demanding increased power quality and reliability of supply and delivery. As such, power industries, government and national laboratories and consortia have developed increased interest in what is now called the Smart Grid of the future. The paper outlines Smart Grid intelligent functions that advance interactions of agents such as telecommunication, control, and optimization to achieve adaptability, self-healing, efficiency and reliability of power systems. The author also presents a special case for the development of Dynamic Stochastic Optimal Power Flow (DSOPF) technology as a tool needed in Smart Grid design. The integration of DSOPF to achieve the design goals with advanced DMS capabilities is discussed herein. This reference paper also outlines a research focus for developing the next generation of advanced tools for efficient and flexible power systems operation and control.", "title": "" }, { "docid": "d775cdc31c84d94d95dc132b88a37fae", "text": "Image guided filtering has been widely used in many image processing applications. However, it is a local filtering method and has limited propagation ability. In this paper, we propose a new image filtering method: nonlocal image guided averaging (NLGA). Derived from a nonlocal linear model, the proposed method can utilize the nonlocal similarity of the guidance image, so that it can propagate nonlocal information reliably. 
Consequently, NLGA can obtain sharper filtering results in edge regions and smoother results in smooth regions. It shows superiority over image guided filtering in different applications, such as image dehazing, depth map super-resolution and image denoising.", "title": "" }, { "docid": "4630ade03760cb8ec1da11b16703b3f1", "text": "Dengue infection is a major cause of morbidity and mortality in Malaysia. To date, much research on dengue infection conducted in Malaysia has been published. One hundred and sixty-six articles related to dengue in Malaysia were found from a search through a database dedicated to indexing all original data relevant to medicine published between the years 2000-2013. Ninety articles with clinical relevance and future research implications were selected and reviewed. These papers showed evidence of an exponential increase in the disease epidemic and a varying pattern of prevalent dengue serotypes at different times. The early febrile phase of dengue infection consists of an undifferentiated fever. Clinical suspicion and the ability to identify patients at risk of severe dengue infection are important. Treatment of dengue infection involves judicious use of volume expanders and supportive care. Potential future research areas are discussed to narrow our current knowledge gaps on dengue infection.", "title": "" }, { "docid": "6cb480efca7138e26ce484eb28f0caec", "text": "Given the demand for authentic personal interactions over social media, it is unclear how much firms should actively manage their social media presence. We study this question empirically in a healthcare setting. We show empirically that active social media management drives more user-generated content. However, we find that this is due to an increase in incremental user postings from an organization’s employees rather than from its clients. 
This result holds when we explore exogenous variation in social media policies, employees and clients that are explained by medical marketing laws, medical malpractice laws and distortions in Medicare incentives. Further examination suggests that content being generated mainly by employees can be avoided if a firm’s postings are entirely client-focused. However, empirically the majority of firm postings seem not to be specifically targeted to clients’ interests, instead highlighting more general observations or achievements of the firm itself. We show that untargeted postings like this provoke activity by employees rather than clients. This may not be a bad thing, as employee-generated content may help with employee motivation, recruitment or retention, but it does suggest that social media should not be funded or managed exclusively as a marketing function of the firm. ∗Economics Department, University of Virginia, Charlottesville, VA and RAND Corporation †MIT Sloan School of Management, MIT, Cambridge, MA and NBER ‡All errors are our own.", "title": "" } ]
scidocsrr
ce039a9a63bbaf7898379e83b597090f
On brewing fresh espresso: LinkedIn's distributed data serving platform
[ { "docid": "1ac8e84ada32efd6f6c7c9fdfd969ec0", "text": "Spanner is Google's scalable, multi-version, globally-distributed, and synchronously-replicated database. It provides strong transactional semantics, consistent replication, and high performance reads and writes for a variety of Google's applications. I'll discuss the design and implementation of Spanner, as well as some of the lessons we have learned along the way. I'll also discuss some open challenges that we still see in building scalable distributed storage systems.", "title": "" } ]
[ { "docid": "7974d8e70775f1b7ef4d8c9aefae870e", "text": "Low-rank decomposition plays a central role in accelerating convolutional neural network (CNN), and the rank of decomposed kernel-tensor is a key parameter that determines the complexity and accuracy of a neural network. In this paper, we define rank selection as a combinatorial optimization problem and propose a methodology to minimize network complexity while maintaining the desired accuracy. Combinatorial optimization is not feasible due to search space limitations. To restrict the search space and obtain the optimal rank, we define the space constraint parameters with a boundary condition. We also propose a linearly-approximated accuracy function to predict the fine-tuned accuracy of the optimized CNN model during the cost reduction. Experimental results on AlexNet and VGG-16 show that the proposed rank selection algorithm satisfies the accuracy constraint. Our method combined with truncated-SVD outperforms state-of-the-art methods in terms of inference and training time at almost the same accuracy.", "title": "" }, { "docid": "80c21770ada160225e17cb9673fff3b3", "text": "This paper describes a model to address the task of named-entity recognition on Indonesian microblog messages due to its usefulness for higher-level tasks or text mining applications on Indonesian microblogs. We view our task as a sequence labeling problem using machine learning approach. We also propose various word-level and orthographic features, including the ones that are specific to the Indonesian language. Finally, in our experiment, we compared our model with a baseline model previously proposed for Indonesian formal documents, instead of microblog messages. 
Our contribution is two-fold: (1) we developed an NER tool for Indonesian microblog messages, a task never addressed before, and (2) we developed an NER corpus containing around 600 Indonesian microblog messages available for future development.", "title": "" }, { "docid": "8b515e03e551d120db9ce670d930adeb", "text": "In this letter, a broadband planar substrate truncated tapered microstrip line-to-dielectric image line transition on a single substrate is proposed. The design uses a substrate truncated microstrip line, which helps to minimize the losses due to surface wave generation on a thick microstrip line. Generalized empirical equations are proposed for the transition design and validated for different dielectric constants in the millimeter-wave frequency band. Full-wave simulations are carried out using a high frequency structural simulator. A back-to-back Ku-band transition prototype is fabricated and measured. The measured return loss for the 80-mm-long structure is better than 10 dB and the insertion loss is better than 2.5 dB in the entire Ku-band (40% impedance bandwidth).", "title": "" }, { "docid": "a93361b09b4aaf1385569a9efce7087e", "text": "Cortical surface mapping has been widely used to compensate for individual variability of cortical shape and topology in anatomical and functional studies. While many surface mapping methods were proposed based on landmarks, curves, spherical or native cortical coordinates, few studies have extensively and quantitatively evaluated surface mapping methods across different methodologies. In this study we compared five cortical surface mapping algorithms, including large deformation diffeomorphic metric mapping (LDDMM) for curves (LDDMM-curve), for surfaces (LDDMM-surface), multi-manifold LDDMM (MM-LDDMM), FreeSurfer, and CARET, using 40 MRI scans and 10 simulated datasets. 
We computed curve variation errors and surface alignment consistency for assessing the mapping accuracy of local cortical features (e.g., gyral/sulcal curves and sulcal regions) and the curvature correlation for measuring the mapping accuracy in terms of overall cortical shape. In addition, the simulated datasets facilitated the investigation of mapping error distribution over the cortical surface when the MM-LDDMM, FreeSurfer, and CARET mapping algorithms were applied. Our results revealed that the LDDMM-curve, MM-LDDMM, and CARET approaches best aligned the local curve features with their own curves. The MM-LDDMM approach was also found to be the best in aligning the local regions and cortical folding patterns (e.g., curvature) as compared to the other mapping approaches. The simulation experiment showed that the MM-LDDMM mapping yielded less local and global deformation errors than the CARET and FreeSurfer mappings.", "title": "" }, { "docid": "33b281b2f3509a6fdc3fd5f17f219820", "text": "Personal robots will contribute mobile manipulation capabilities to our future smart homes. In this paper, we propose a low-cost object localization system that uses static devices with Bluetooth capabilities, which are distributed in an environment, to detect and localize active Bluetooth beacons and mobile devices. This system can be used by a robot to coarsely localize objects in retrieval tasks. We attach small Bluetooth low energy tags to objects and require at least four static Bluetooth receivers. While commodity Bluetooth devices could be used, we have built low-cost receivers from Raspberry Pi computers. The location of a tag is estimated by lateration of its received signal strengths. 
In experiments, we evaluate accuracy and timing of our approach, and report on the successful demonstration at the RoboCup German Open 2014 competition in Magdeburg.", "title": "" }, { "docid": "7ce147a433a376dd1cc0f7f09576e1bd", "text": "Introduction Dissolution testing is routinely carried out in the pharmaceutical industry to determine the rate of dissolution of solid dosage forms. In addition to being a regulatory requirement, in-vitro dissolution testing is used to assist with formulation design, process development, and the demonstration of batch-to-batch reproducibility in production. The most common of such dissolution test apparatuses is the USP Dissolution Test Apparatus II, consisting of an unbaffled vessel stirred by a paddle, whose dimensions, characteristics, and operating conditions are detailed by the USP (Cohen et al., 1990; The United States Pharmacopeia & The National Formulary, 2004).", "title": "" }, { "docid": "eb962e14f34ea53dec660dfe304756b0", "text": "It is difficult to train a personalized task-oriented dialogue system because the data collected from each individual is often insufficient. Personalized dialogue systems trained on a small dataset can overfit and make it difficult to adapt to different user needs. One way to solve this problem is to consider a collection of multiple users’ data as a source domain and an individual user’s data as a target domain, and to perform a transfer learning from the source to the target domain. By following this idea, we propose the “PETAL” (PErsonalized Task-oriented diALogue), a transfer learning framework based on POMDP to learn a personalized dialogue system. The system first learns common dialogue knowledge from the source domain and then adapts this knowledge to the target user. This framework can avoid the negative transfer problem by considering differences between source and target users. 
The policy in the personalized POMDP can learn to choose different actions appropriately for different users. Experimental results on real-world coffee-shopping data and simulation data show that our personalized dialogue system can choose different optimal actions for different users, and thus effectively improve the dialogue quality under the personalized setting.", "title": "" }, { "docid": "6cd301f1b6ffe64f95b7d63eb0356a87", "text": "The purpose of this study is to analyze the factors affecting consumers' online shopping behavior, one of the most important issues in the fields of e-commerce and marketing. However, there is very limited knowledge about online consumer behavior because it is a complicated socio-technical phenomenon and involves too many factors. One objective of this study is to address the shortcomings of previous studies, which did not examine the main factors that influence online shopping behavior. This goal has been pursued by using a model examining the impact of perceived risks, infrastructural variables and return policy on attitude toward online shopping behavior, and of subjective norms, perceived behavioral control, domain specific innovativeness and attitude on online shopping behavior, as the hypotheses of the study. To investigate these hypotheses, 200 questionnaires were distributed among online stores in Iran. Respondents to the questionnaire were consumers of online stores in Iran who were randomly selected. Finally, regression analysis was applied to the data in order to test the hypotheses of the study. This study can be considered applied research in terms of its purpose and a descriptive survey with regard to its nature and method (correlational). The study identified that financial risks and non-delivery risk negatively affected attitude toward online shopping. Results also indicated that domain specific innovativeness and subjective norms positively affect online shopping behavior. 
Furthermore, attitude toward online shopping positively affected online shopping behavior of consumers.", "title": "" }, { "docid": "7a8ded6daecbee4492f19ef85c92b0fd", "text": "Sleep problems have become epidemic and traditional research has discovered many causes of poor sleep. The purpose of this study was to complement existing research by using a salutogenic or health origins framework to investigate the correlates of good sleep. The analysis for this study used the National College Health Assessment data that included 54,111 participants at 71 institutions. Participants were randomly selected or were in randomly selected classrooms. Results of these analyses indicated that males and females who reported \"good sleep\" were more likely to have engaged regularly in physical activity, felt less exhausted, were more likely to have a healthy Body Mass Index (BMI), and also performed better academically. In addition, good male sleepers experienced less anxiety and had less back pain. Good female sleepers also had fewer abusive relationships and fewer broken bones, were more likely to have been nonsmokers and were not binge drinkers. Despite the limitations of this exploratory study, these results are compelling; however, they suggest the need for future research to clarify the identified relationships.", "title": "" }, { "docid": "a6f534f6d6a27b076cee44a8a188bb72", "text": "Managing models requires extracting information from them and modifying them, and this is performed through queries. Queries can be executed at the model or at the persistence-level. Both are complementary but while model-level queries are closer to modelling engineers, persistence-level queries are specific to the persistence technology and leverage its capabilities. This paper presents MQT, an approach that translates EOL (model-level queries) to SQL (persistence-level queries) at runtime.
Runtime translation provides several benefits: (i) queries are executed only when the information is required; (ii) context and metamodel information is used to get more performant translated queries; and (iii) it supports translating query programs using variables and dependent queries. The translation process used by MQT is described through two examples, and we also evaluate the performance of the approach.", "title": "" }, { "docid": "87b5c0021e513898693e575ca5479757", "text": "We present a statistical mechanics model of deep feed forward neural networks (FFN). Our energy-based approach naturally explains several known results and heuristics, providing a solid theoretical framework and new instruments for a systematic development of FFN. We infer that FFN can be understood as performing three basic steps: encoding, representation validation and propagation. We obtain a set of natural activations – such as sigmoid, tanh and ReLu – together with a state-of-the-art one, recently obtained by Ramachandran et al. [1] using an extensive search algorithm. We term this activation ESP (Expected Signal Propagation), explain its probabilistic meaning, and study the eigenvalue spectrum of the associated Hessian on classification tasks. We find that ESP allows for faster training and more consistent performances over a wide range of network architectures.", "title": "" }, { "docid": "058a128a15c7d0e343adb3ada80e18d3", "text": "PURPOSE OF REVIEW\nOdontogenic causes of sinusitis are frequently missed; clinicians often overlook odontogenic disease when examining individuals with symptomatic rhinosinusitis. Conventional treatments for chronic rhinosinusitis (CRS) will often fail in odontogenic sinusitis.
There have been several recent developments in the understanding of mechanisms, diagnosis, and treatment of odontogenic sinusitis, and clinicians should be aware of these advances to best treat this patient population.\n\n\nRECENT FINDINGS\nThe majority of odontogenic disease is caused by periodontitis and iatrogenesis. Notably, dental pain or dental hypersensitivity is very commonly absent in odontogenic sinusitis, and symptoms are very similar to those seen in CRS overall. Unilaterality of nasal obstruction and foul nasal drainage are most suggestive of odontogenic sinusitis, but computed tomography is the gold standard for diagnosis. Conventional panoramic radiographs are very poorly suited to rule out odontogenic sinusitis, and cannot be relied on to identify disease. There does not appear to be an optimal sequence of treatment for odontogenic sinusitis; the dental source should be addressed and ESS is frequently also necessary to alleviate symptoms.\n\n\nSUMMARY\nOdontogenic sinusitis has distinct pathophysiology, diagnostic considerations, microbiology, and treatment strategies when compared with chronic rhinosinusitis. Clinicians who can accurately identify odontogenic sources can increase the efficacy of medical and surgical treatments and improve patient outcomes.", "title": "" }, { "docid": "e2e640c34a9c30a24b068afa23f916d4", "text": "BACKGROUND\nMicrosurgical resection of arteriovenous malformations (AVMs) located in the language and motor cortex is associated with the risk of neurological deterioration, yet electrocortical stimulation mapping has not been widely used.\n\n\nOBJECTIVE\nTo demonstrate the usefulness of intraoperative mapping with language/motor AVMs.\n\n\nMETHODS\nDuring an 11-year period, mapping was used in 12 of 431 patients (2.8%) undergoing AVM resection (5 patients with language and 7 patients with motor AVMs).
Language mapping was performed under awake anesthesia and motor mapping under general anesthesia.\n\n\nRESULTS\nIdentification of a functional cortex enabled its preservation in 11 patients (92%), guided dissection through overlying sulci down to the nidus in 3 patients (25%), and influenced the extent of resection in 4 patients (33%). Eight patients (67%) had complete resections. Four patients (33%) had incomplete resections, with circumferentially dissected and subtotally disconnected AVMs left in situ, attached to areas of eloquence and with preserved venous drainage. All were subsequently treated with radiosurgery. At follow-up, 6 patients recovered completely, 3 patients were neurologically improved, and 3 patients had new neurological deficits.\n\n\nCONCLUSION\nIndications for intraoperative mapping include preoperative functional imaging that identifies the language/motor cortex adjacent to the AVM; larger AVMs with higher Spetzler-Martin grades; and patients presenting with unruptured AVMs without deficits. Mapping identified the functional cortex, promoted careful tissue handling, and preserved function. Mapping may guide dissection to AVMs beneath the cortical surface, and it may impact the decision to resect the AVM completely. More conservative, subtotal circumdissections followed by radiosurgery may be an alternative to observation or radiosurgery alone in patients with larger language/motor cortex AVMs.", "title": "" }, { "docid": "aee5eb38d6cbcb67de709a30dd37c29a", "text": "Correct disassembly of the HIV-1 capsid shell, called uncoating, is increasingly recognised as central for multiple steps during retroviral replication. However, the timing, localisation and mechanism of uncoating are poorly understood and progress in this area is hampered by difficulties in measuring the process. Previous work suggested that uncoating occurs soon after entry of the viral core into the cell, but recent studies report later uncoating, at or in the nucleus. 
Furthermore, inhibiting reverse transcription delays uncoating, linking these processes. Here, we have used a combined approach of experimental interrogation of viral mutants and mathematical modelling to investigate the timing of uncoating with respect to reverse transcription. By developing a minimal, testable model and employing multiple uncoating assays to overcome the disadvantages of each single assay, we find that uncoating is not concomitant with the initiation of reverse transcription. Instead, uncoating appears to be triggered once reverse transcription reaches a certain stage, namely shortly after first strand transfer. Using multiple approaches, we have identified a point during reverse transcription that induces uncoating of the HIV-1 CA shell. We propose that uncoating initiates after the first strand transfer of reverse transcription.", "title": "" }, { "docid": "554a0628270978757eda989c67ac3416", "text": "Accurate rainfall forecasting is very important for agriculture-dependent countries like India. Rainfall prediction is important for analyzing crop productivity, the use of water resources, and the pre-planning of water resources. Statistical techniques for rainfall forecasting cannot perform well for long-term rainfall forecasting due to the dynamic nature of climate phenomena. Artificial Neural Networks (ANNs) have become very popular, and prediction using ANN is one of the most widely used techniques for rainfall forecasting. This paper provides a detailed survey and comparison of different neural network architectures used by researchers for rainfall forecasting. The paper also discusses the issues that arise when applying different neural networks for yearly/monthly/daily rainfall forecasting.
Moreover, the paper also presents different accuracy measures used by researchers for evaluating performance of ANN.", "title": "" }, { "docid": "4592c8f5758ccf20430dbec02644c931", "text": "Taylor & Francis makes every effort to ensure the accuracy of all the information (the “Content”) contained in the publications on our platform. However, Taylor & Francis, our agents, and our licensors make no representations or warranties whatsoever as to the accuracy, completeness, or suitability for any purpose of the Content. Any opinions and views expressed in this publication are the opinions and views of the authors, and are not the views of or endorsed by Taylor & Francis. The accuracy of the Content should not be relied upon and should be independently verified with primary sources of information. Taylor and Francis shall not be liable for any losses, actions, claims, proceedings, demands, costs, expenses, damages, and other liabilities whatsoever or howsoever caused arising directly or indirectly in connection with, in relation to or arising out of the use of the Content.", "title": "" }, { "docid": "3b000325d8324942fc192c3df319c21d", "text": "The proposed automatic bone age estimation system was based on the phalanx geometric characteristics and carpals fuzzy information. The system could do automatic calibration by analyzing the geometric properties of hand images. Physiological and morphological features are extracted from medius image in segmentation stage. Back-propagation, radial basis function, and support vector machine neural networks were applied to classify the phalanx bone age. In addition, the proposed fuzzy bone age (BA) assessment was based on normalized bone area ratio of carpals. The result reveals that the carpal features can effectively reduce classification errors when age is less than 9 years old. Meanwhile, carpal features will become less influential to assess BA when children grow up to 10 years old. 
On the other hand, phalanx features become the significant parameters to depict the bone maturity from 10 years old to adult stage. Owing to these properties, the proposed novel BA assessment system combined the phalangeal and carpal assessments. Furthermore, the system adopted not only neural network classifiers but also fuzzy bone age confinement, and achieved results close to being clinically practical.", "title": "" }, { "docid": "61a9bc06d96eb213ed5142bfa47920b9", "text": "This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.", "title": "" }, { "docid": "e4179fd890a55f829e398a6f80f1d26a", "text": "This paper presents a soft-start circuit that adopts a pulse-skipping control to prevent inrush current and output voltage overshoot during the start-up period of dc-dc converters. The purpose of the pulse-skipping control is to significantly restrain the increasing rate of the reference voltage of the error amplifier.
Thanks to the pulse-skipping mechanism and the duty cycle minimization, the soft-start-up time can be extended and the restriction of the charging current and the capacitance can be relaxed. The proposed soft-start circuit is fully integrated on chip without external components, leading to a reduction in PCB area and cost. A current-mode buck converter is implemented with TSMC 0.35-μm 2P4M CMOS process. Simulation results show the output voltage of the buck converter increases smoothly and inrush current is less than 300 mA.", "title": "" }, { "docid": "69504625b05c735dd80135ef106a8677", "text": "The amount of videos available on the Web is growing explosively. While some videos are very interesting and receive high rating from viewers, many of them are less interesting or even boring. This paper conducts a pilot study on the understanding of human perception of video interestingness, and demonstrates a simple computational method to identify more interesting videos. To this end we first construct two datasets of Flickr and YouTube videos respectively. Human judgements of interestingness are collected and used as the groundtruth for training computational models. We evaluate several off-the-shelf visual and audio features that are potentially useful for predicting interestingness on both datasets. Results indicate that audio and visual features are equally important and the combination of both modalities shows very promising results.", "title": "" } ]
scidocsrr
b2f87f4a0421f6a01a15ce452ee81fc3
Dataset for forensic analysis of B-tree file system
[ { "docid": "61953281f4b568ad15e1f62be9d68070", "text": "Most of the effort in today’s digital forensics community lies in the retrieval and analysis of existing information from computing systems. Little is being done to increase the quantity and quality of the forensic information on today’s computing systems. In this paper we pose the question of what kind of information is desired on a system by a forensic investigator. We give an overview of the information that exists on current systems and discuss its shortcomings. We then examine the role that file system metadata plays in digital forensics and analyze what kind of information is desirable for different types of forensic investigations, how feasible it is to obtain it, and discuss issues about storing the information.", "title": "" } ]
[ { "docid": "ff572d9c74252a70a48d4ba377f941ae", "text": "This paper considers how design fictions in the form of 'imaginary abstracts' can be extended into complete 'fictional papers'. Imaginary abstracts are a type of design fiction that are usually included within the content of 'real' research papers; they comprise brief accounts of fictional problem frames, prototypes, user studies and findings. Design fiction abstracts have been proposed as a means to move beyond solutionism to explore the potential societal value and consequences of new HCI concepts. In this paper we contrast the properties of imaginary abstracts with the properties of a published paper that presents fictional research, Game of Drones. We extend the notion of imaginary abstracts so that, rather than including fictional abstracts within a 'non-fiction' research paper, Game of Drones is fiction from start to finish (except for the concluding paragraph where the fictional nature of the paper is revealed). In this paper we review the scope of design fiction in HCI research before contrasting the properties of imaginary abstracts with the properties of our example fictional research paper. We argue that there are clear merits and weaknesses to both approaches, but when used tactfully and carefully fictional research papers may further empower HCI's burgeoning design discourse with compelling new methods.", "title": "" }, { "docid": "e668ffe258772aa5eb425cdfa5edb5ed", "text": "A novel method of on-line 2,2′-Azinobis-(3-ethylbenzthiazoline-6-sulphonate)-Capillary Electrophoresis-Diode Array Detector (on-line ABTS+-CE-DAD) was developed to screen the major antioxidants from complex herbal medicines. ABTS+, one of the well-known oxygen free radicals, was first integrated into the capillary. For simultaneously detecting and separating ABTS+ and the chemical components of herbal medicines, some conditions were optimized.
The on-line ABTS+-CE-DAD method has successfully been used to screen the main antioxidants from Shuxuening injection (SI), an herbal medicines injection. Under the optimum conditions, nine ingredients of SI including clitorin, rutin, isoquercitrin, Quercetin-3-O-D-glucosyl]-(1-2)-L-rhamnoside, kaempferol-3-O-rutinoside, kaempferol-7-O-β-D-glucopyranoside, apigenin-7-O-Glucoside, quercetin-3-O-[2-O-(6-O-p-hydroxyl-E-coumaroyl)-D-glucosyl]-(1-2)-L-rhamnoside, 3-O-{2-O-[6-O-(p-hydroxyl-E-coumaroyl)-glucosyl]}-(1-2) rhamnosyl kaempfero were separated and identified as the major antioxidants. There is a linear relationship between the total amount of major antioxidants and total antioxidative activity of SI with a linear correlation coefficient of 0.9456. All the Relative standard deviations of recovery, precision and stability were below 7.5%. Based on these results, these nine ingredients could be selected as combinatorial markers to evaluate quality control of SI. It was concluded that on-line ABTS+-CE-DAD method was a simple, reliable and powerful tool to screen and quantify active ingredients for evaluating quality of herbal medicines.", "title": "" }, { "docid": "5404c00708c64d9f254c25f0065bc13c", "text": "In this paper, we discuss the problem of automatic skin lesion analysis, specifically melanoma detection and semantic segmentation. We accomplish this by using deep learning techniques to perform classification on publicly available dermoscopic images. Skin cancer, of which melanoma is a type, is the most prevalent form of cancer in the US and more than four million cases are diagnosed in the US every year. In this work, we present our efforts towards an accessible, deep learning-based system that can be used for skin lesion classification, thus leading to an improved melanoma screening system. For classification, a deep convolutional neural network architecture is first implemented over the raw images. 
In addition, hand-coded features such as 166-D color histogram distribution, edge histogram and Multiscale Color local binary patterns are extracted from the images and presented to a random forest classifier. The average of the outputs from the two mentioned classifiers is taken as the final classification result. The classification task achieves an accuracy of 80.3%, AUC score of 0.69 and a precision score of 0.81. For segmentation, we implement a convolutional-deconvolutional architecture and the segmentation model achieves a Dice coefficient of 73.5%.", "title": "" }, { "docid": "8e8c566d93f11bd96318978dd4b21ed1", "text": "Recently, neural-network based word embedding models have been shown to produce high-quality distributional representations capturing both semantic and syntactic information. In this paper, we propose a grouping-based context predictive model by considering the interactions of context words, which generalizes the widely used CBOW model and Skip-Gram model. In particular, the words within a context window are split into several groups with a grouping function, where words in the same group are combined while different groups are treated as independent. To determine the grouping function, we propose a relatedness hypothesis stating the relationship among context words and propose several context grouping methods. Experimental results demonstrate better representations can be learned with suitable context groups.", "title": "" }, { "docid": "fe1bc993047a95102f4331f57b1f9197", "text": "Document classification tasks were primarily tackled at word level. Recent research that works with character-level inputs shows several benefits over word-level approaches such as natural incorporation of morphemes and better handling of rare words. We propose a neural network architecture that utilizes both convolution and recurrent layers to efficiently encode character inputs. 
We validate the proposed model on eight large scale document classification tasks and compare with character-level convolution-only models. It achieves comparable performances with much less parameters.", "title": "" }, { "docid": "b6e5f04832ece23bf74e49a3dd191eef", "text": "Integration of knowledge concerning circadian rhythms, metabolic networks, and sleep-wake cycles is imperative for unraveling the mysteries of biological cycles and their underlying mechanisms. During the last decade, enormous progress in circadian biology research has provided a plethora of new insights into the molecular architecture of circadian clocks. However, the recent identification of autonomous redox oscillations in cells has expanded our view of the clockwork beyond conventional transcription/translation feedback loop models, which have been dominant since the first circadian period mutants were identified in fruit fly. Consequently, non-transcriptional timekeeping mechanisms have been proposed, and the antioxidant peroxiredoxin proteins have been identified as conserved markers for 24-hour rhythms. Here, we review recent advances in our understanding of interdependencies amongst circadian rhythms, sleep homeostasis, redox cycles, and other cellular metabolic networks. We speculate that systems-level investigations implementing integrated multi-omics approaches could provide novel mechanistic insights into the connectivity between daily cycles and metabolic systems.", "title": "" }, { "docid": "935ebaec03bd12c85731eb42abcd578e", "text": "Utilization of polymers as biomaterials has greatly impacted the advancement of modern medicine. Specifically, polymeric biomaterials that are biodegradable provide the significant advantage of being able to be broken down and removed after they have served their function. Applications are wide ranging with degradable polymers being used clinically as surgical sutures and implants. 
In order to fit functional demand, materials with desired physical, chemical, biological, biomechanical and degradation properties must be selected. Fortunately, a wide range of natural and synthetic degradable polymers has been investigated for biomedical applications with novel materials constantly being developed to meet new challenges. This review summarizes the most recent advances in the field over the past 4 years, specifically highlighting new and interesting discoveries in tissue engineering and drug delivery applications.", "title": "" }, { "docid": "9002cca44b21fb7923ae18ced55bbcc2", "text": "Species extinctions pose serious threats to the functioning of ecological communities worldwide. We used two qualitative and quantitative pollination networks to simulate extinction patterns following three removal scenarios: random removal and systematic removal of the strongest and weakest interactors. We accounted for pollinator behaviour by including potential links into temporal snapshots (12 consecutive 2-week networks) to reflect mutualists' ability to 'switch' interaction partners (re-wiring). Qualitative data suggested a linear or slower than linear secondary extinction while quantitative data showed sigmoidal decline of plant interaction strength upon removal of the strongest interactor. Temporal snapshots indicated greater stability of re-wired networks over static systems. Tolerance of generalized networks to species extinctions was high in the random removal scenario, with an increase in network stability if species formed new interactions. Anthropogenic disturbance, however, that promote the extinction of the strongest interactors might induce a sudden collapse of pollination networks.", "title": "" }, { "docid": "b67acf80642aa2ba8ba01c362303857c", "text": "Storm has long served as the main platform for real-time analytics at Twitter. 
However, as the scale of data being processed in real-time at Twitter has increased, along with an increase in the diversity and the number of use cases, many limitations of Storm have become apparent. We need a system that scales better, has better debug-ability, has better performance, and is easier to manage -- all while working in a shared cluster infrastructure. We considered various alternatives to meet these needs, and in the end concluded that we needed to build a new real-time stream data processing system. This paper presents the design and implementation of this new system, called Heron. Heron is now the de facto stream data processing engine inside Twitter, and in this paper we also share our experiences from running Heron in production. In this paper, we also provide empirical evidence demonstrating the efficiency and scalability of Heron.", "title": "" }, { "docid": "c12d534d219e3d249ba3da1c0956c540", "text": "Within the research on Micro Aerial Vehicles (MAVs), the field on flight control and autonomous mission execution is one of the most active. A crucial point is the localization of the vehicle, which is especially difficult in unknown, GPS-denied environments. This paper presents a novel vision based approach, where the vehicle is localized using a downward looking monocular camera. A state-of-the-art visual SLAM algorithm tracks the pose of the camera, while, simultaneously, building an incremental map of the surrounding region. Based on this pose estimation a LQG/LTR based controller stabilizes the vehicle at a desired setpoint, making simple maneuvers possible like take-off, hovering, setpoint following or landing. Experimental data show that this approach efficiently controls a helicopter while navigating through an unknown and unstructured environment. 
To the best of our knowledge, this is the first work describing a micro aerial vehicle able to navigate through an unexplored environment (independently of any external aid like GPS or artificial beacons), which uses a single camera as only exteroceptive sensor.", "title": "" }, { "docid": "933807e4458fb12ad45a3e951f53bb6d", "text": "Zusammenfassung Es wird eine neuartige hybride Systemarchitektur für kontinuierliche Steuerungsund Regelungssysteme mit diskreten Entscheidungsfindungsprozessen vorgestellt. Die Funktionsweise wird beispielhaft für das hochautomatisierte Fahren auf Autobahnen und den Nothalteassistenten dargestellt. Da für einen zukünftigen Einsatz derartiger Systeme deren Robustheit entscheidend ist, wurde diese bei der Entwicklung des Ansatzes in den Mittelpunkt gestellt. Summary An innovative hybrid system structure for continuous control systems with discrete decisionmaking processes is presented. The functionality is demonstrated on a highly automated driving system on freeways and on the emergency stop assistant. Due to the fact that the robustness will be a determining factor for future usage of these systems, the presented structure focuses on this feature.", "title": "" }, { "docid": "4a83c053ed9c17ed99262d926394ec83", "text": "Multiangle social network recommendation algorithms (MSN) and a new assessment method, called similarity network evaluation (SNE), are both proposed. From the viewpoint of six dimensions, the MSN are classified into six algorithms, including user-based algorithm from resource point (UBR), user-based algorithm from tag point (UBT), resource-based algorithm from tag point (RBT), resource-based algorithm from user point (RBU), tag-based algorithm from resource point (TBR), and tag-based algorithm from user point (TBU). Compared with the traditional recall/precision (RP) method, the SNE is more simple, effective, and visualized. 
The simulation results show that TBR and UBR are the best algorithms, RBU and TBU are the worst ones, and UBT and RBT are in the medium levels.", "title": "" }, { "docid": "af1257e27c0a6010a902e78dc8301df4", "text": "A 20-MHz to 3-GHz wide-range multiphase delay-locked loop (DLL) has been realized in 90-nm CMOS technology. The proposed delay cell extends the operation frequency range. A scaling circuit is adopted to lower the large delay gain when the frequency of the input clock is low. The core area of this DLL is 0.005 mm2. The measured power consumption values are 0.4 and 3.6 mW for input clocks of 20 MHz and 3 GHz, respectively. The measured peak-to-peak and root-mean-square jitters are 2.3 and 16 ps at 3 GHz, respectively.", "title": "" }, { "docid": "dca156a404916f2ab274406ad565e391", "text": "Liang Zhou, member IEEE and YiFeng Wu, member IEEE Transphorm, Inc. 75 Castilian Dr., Goleta, CA, 93117 USA [email protected] Abstract: This paper presents a true bridgeless totem-pole Power-Factor-Correction (PFC) circuit using GaN HEMT. Enabled by a diode-free GaN power HEMT bridge with low reverse-recovery charge, very-high-efficiency single-phase AC-DC conversion is realized using a totem-pole topology without the limit of forward voltage drop from a fast diode. When implemented with a pair of sync-rec MOSFETs for line rectification, 99% efficiency is achieved at 230V ac input and 400 dc output in continuous-current mode.", "title": "" }, { "docid": "2dd8b7004f45ae374a72e2c7d40b0892", "text": "In this letter, a multifeed tightly coupled patch array antenna capable of broadband operation is analyzed and designed. First, an antenna array composed of infinite elements with each element excited by a feed is proposed. To produce specific polarized radiation efficiently, a new patch element is proposed, and its characteristics are studied based on a 2-port network model. 
Full-wave simulation results show that the infinite antenna array exhibits both a high efficiency and desirable radiation pattern in a wide frequency band (10 dB bandwidth) from 1.91 to 5.35 GHz (94.8%). Second, to validate its outstanding performance, a realistic finite 4 × 4 antenna prototype is designed, fabricated, and measured in our laboratory. The experimental results agree well with simulated ones, where the frequency bandwidth (VSWR < 2) is from 2.5 to 3.8 GHz (41.3%). The inherent compact size, light weight, broad bandwidth, and good radiation characteristics make this array antenna a promising candidate for future communication and advanced sensing systems.", "title": "" }, { "docid": "21b07dc04d9d964346748eafe3bcfc24", "text": "Online social data like user-generated content, expressed or implicit relations among people, and behavioral traces are at the core of many popular web applications and platforms, driving the research agenda of researchers in both academia and industry. The promises of social data are many, including the understanding of \"what the world thinks»» about a social issue, brand, product, celebrity, or other entity, as well as enabling better decision-making in a variety of fields including public policy, healthcare, and economics. However, many academics and practitioners are increasingly warning against the naive usage of social data. They highlight that there are biases and inaccuracies occurring at the source of the data, but also introduced during data processing pipeline; there are methodological limitations and pitfalls, as well as ethical boundaries and unexpected outcomes that are often overlooked. Such an overlook can lead to wrong or inappropriate results that can be consequential.", "title": "" }, { "docid": "57c780448d8771a0d22c8ed147032a71", "text": "“Social TV” is a term that broadly describes the online social interactions occurring between viewers while watching television. 
In this paper, we show that TV networks can derive value from social media content placed in shows because it leads to increased word of mouth via online posts, and it highly correlates with TV show related sales. In short, we show that TV event triggers change the online behavior of viewers. In this paper, we first show that using social media content on the televised American reality singing competition, The Voice, led to increased social media engagement during the TV broadcast. We then illustrate that social media buzz about a contestant after a performance is highly correlated with song sales from that contestant’s performance. We believe this to be the first study linking TV content to buzz and sales in real time.", "title": "" }, { "docid": "e2f2961ab8c527914c3d23f8aa03e4bf", "text": "Pedestrian detection based on the combination of convolutional neural network (CNN) and traditional handcrafted features (i.e., HOG+LUV) has achieved great success. In general, HOG+LUV are used to generate the candidate proposals and then CNN classifies these proposals. Despite its success, there is still room for improvement. For example, CNN classifies these proposals by the fully connected layer features, while proposal scores and the features in the inner-layers of CNN are ignored. In this paper, we propose a unifying framework called multi-layer channel features (MCF) to overcome the drawback. It first integrates HOG+LUV with each layer of CNN into a multi-layer image channels. Based on the multi-layer image channels, a multi-stage cascade AdaBoost is then learned. The weak classifiers in each stage of the multi-stage cascade are learned from the image channels of corresponding layer. Experiments on Caltech data set, INRIA data set, ETH data set, TUD-Brussels data set, and KITTI data set are conducted. With more abundant features, an MCF achieves the state of the art on Caltech pedestrian data set (i.e., 10.40% miss rate). 
Using new and accurate annotations, an MCF achieves 7.98% miss rate. As many non-pedestrian detection windows can be quickly rejected by the first few stages, it accelerates detection speed by 1.43 times. By eliminating the highly overlapped detection windows with lower scores after the first stage, it is 4.07 times faster with negligible performance loss.", "title": "" }, { "docid": "9b17dd1fc2c7082fa8daecd850fab91c", "text": "This paper presents all the stages of development of a solar tracker for a photovoltaic panel. The system was built around a microcontroller, designed as an embedded control system. It has a database of horizontal-axis orientation angles; therefore it needs no sensor input signal and functions as an open-loop control system. Combining the above-mentioned characteristics in one device makes the tracker a new active-type technique. It is also a rotational robot with 1 degree of freedom.", "title": "" } ]
scidocsrr
c5f63d9c38b752c288cf04ab7a471093
Non-Negative Matrix Factorization Revisited: Uniqueness and Algorithm for Symmetric Decomposition
[ { "docid": "9c949a86346bda32a73f986651ab8067", "text": "Nonnegative matrix factorization (NMF) and its extensions such as Nonnegative Tensor Factorization (NTF) have become prominent techniques for blind source separation (BSS), analysis of image databases, data mining and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF/NTF, as well as sparse nonnegative coding and representation, that has many potential applications in computational neuroscience, multisensory processing, compressed sensing and multidimensional data analysis. We have developed a class of optimized local algorithms which are referred to as Hierarchical Alternating Least Squares (HALS) algorithms. For these purposes, we have performed sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the Alpha and Beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS) not only for the over-determined case but also for an under-determined (over-complete) case (i.e., for a system which has fewer sensors than sources) if data are sufficiently sparse. The NMF learning rules are extended and generalized for N-th order nonnegative tensor factorization (NTF). Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially, with usage of multilayer hierarchical NMF approach [3]. key words: Nonnegative matrix factorization (NMF), nonnegative tensor factorizations (NTF), nonnegative PARAFAC, model reduction, feature extraction, compression, denoising, multiplicative local learning (adaptive) algorithms, Alpha and Beta divergences.", "title": "" } ]
[ { "docid": "0960aa1abdac4254b84912b14d653ba9", "text": "Latent Dirichlet Allocation (LDA) mining thematic structure of documents plays an important role in natural language processing and machine learning areas. However, the probability distribution from LDA only describes the statistical relationship of occurrences in the corpus and usually in practice, probability is not the best choice for feature representations. Recently, embedding methods have been proposed to represent words and documents by learning essential concepts and representations, such as Word2Vec and Doc2Vec. The embedded representations have shown more effectiveness than LDA-style representations in many tasks. In this paper, we propose the Topic2Vec approach which can learn topic representations in the same semantic vector space with words, as an alternative to probability distribution. The experimental results show that Topic2Vec achieves interesting and meaningful results.", "title": "" }, { "docid": "2a4439b4368af6317b14d6de03b27e44", "text": "We introduce an algorithm for tracking deformable objects from a sequence of point clouds. The proposed tracking algorithm is based on a probabilistic generative model that incorporates observations of the point cloud and the physical properties of the tracked object and its environment. We propose a modified expectation maximization algorithm to perform maximum a posteriori estimation to update the state estimate at each time step. Our modification makes it practical to perform the inference through calls to a physics simulation engine. This is significant because (i) it allows for the use of highly optimized physics simulation engines for the core computations of our tracking algorithm, and (ii) it makes it possible to naturally, and efficiently, account for physical constraints imposed by collisions, grasping actions, and material properties in the observation updates. 
Even in the presence of the relatively large occlusions that occur during manipulation tasks, our algorithm is able to robustly track a variety of types of deformable objects, including ones that are one-dimensional, such as ropes; two-dimensional, such as cloth; and three-dimensional, such as sponges. Our implementation can track these objects in real time.", "title": "" }, { "docid": "a25041f4b95b68d2b8b9356d2f383b69", "text": "The authors review evidence that self-control may consume a limited resource. Exerting self-control may consume self-control strength, reducing the amount of strength available for subsequent self-control efforts. Coping with stress, regulating negative affect, and resisting temptations require self-control, and after such self-control efforts, subsequent attempts at self-control are more likely to fail. Continuous self-control efforts, such as vigilance, also degrade over time. These decrements in self-control are probably not due to negative moods or learned helplessness produced by the initial self-control attempt. These decrements appear to be specific to behaviors that involve self-control; behaviors that do not require self-control neither consume nor require self-control strength. It is concluded that the executive component of the self--in particular, inhibition--relies on a limited, consumable resource.", "title": "" }, { "docid": "ab83fb07e4f9f70a3e4f22620ba551fc", "text": "OBJECTIVES:Biliary cannulation is frequently the most difficult component of endoscopic retrograde cholangiopancreatography (ERCP). Techniques employed to improve safety and efficacy include wire-guided access and the use of sphincterotomes. However, a variety of options for these techniques are available and optimum strategies are not defined. We assessed whether the use of endoscopist- vs. assistant-controlled wire guidance and small vs. 
standard-diameter sphincterotomes improves safety and/or efficacy of bile duct cannulation. METHODS: Patients were randomized using a 2 × 2 factorial design to initial cannulation attempt with endoscopist- vs. assistant-controlled wire systems (1:1 ratio) and small (3.9Fr tip) vs. standard (4.4Fr tip) sphincterotomes (1:1 ratio). The primary efficacy outcome was successful deep bile duct cannulation within 8 attempts. Sample size of 498 was planned to demonstrate a significant increase in cannulation of 10%. Interim analysis was planned after 200 patients, with a stopping rule pre-defined for a significant difference in the composite safety end point (pancreatitis, cholangitis, bleeding, and perforation). RESULTS: The study was stopped after the interim analysis, with 216 patients randomized, due to a significant difference in the safety end point with endoscopist- vs. assistant-controlled wire guidance (3/109 (2.8%) vs. 12/107 (11.2%), P=0.016), primarily due to a lower rate of post-ERCP pancreatitis (3/109 (2.8%) vs. 10/107 (9.3%), P=0.049). The difference in successful biliary cannulation for endoscopist- vs. assistant-controlled wire guidance was −0.5% (95% CI −12.0 to 11.1%) and for small vs. standard sphincterotomes −0.9% (95% CI −12.5 to 10.6%). CONCLUSIONS: Use of the endoscopist- rather than assistant-controlled wire guidance for bile duct cannulation reduces complications of ERCP such as pancreatitis.", "title": "" }, { "docid": "b3e1bdd7cfca17782bde698297e191ab", "text": "Synthetic aperture radar (SAR) raw signal simulation is a powerful tool for designing new sensors, testing processing algorithms, planning missions, and devising inversion algorithms. In this paper, a spotlight SAR raw signal simulator for distributed targets is presented. The proposed procedure is based on a Fourier domain analysis: a proper analytical reformulation of the spotlight SAR raw signal expression is presented. 
It is shown that this reformulation allows us to design a very efficient simulation scheme that employs fast Fourier transform codes. Accordingly, the computational load is dramatically reduced with respect to a time-domain simulation and this, for the first time, makes spotlight simulation of extended scenes feasible.", "title": "" }, { "docid": "155c9444bfdb61352eddd7140ae75125", "text": "To the best of our knowledge, we present the first hardware implementation of isogeny-based cryptography available in the literature. Particularly, we present the first implementation of the supersingular isogeny Diffie-Hellman (SIDH) key exchange, which features quantum-resistance. We optimize this design for speed by creating a high throughput multiplier unit, taking advantage of parallelization of arithmetic in $\\mathbb {F}_{p^{2}}$ , and minimizing pipeline stalls with optimal scheduling. Consequently, our results are also faster than software libraries running affine SIDH even on Intel Haswell processors. For our implementation at 85-bit quantum security and 128-bit classical security, we generate ephemeral public keys in 1.655 million cycles for Alice and 1.490 million cycles for Bob. We generate the shared secret in an additional 1.510 million cycles for Alice and 1.312 million cycles for Bob. On a Virtex-7, these results are approximately 1.5 times faster than known software implementations running the same 512-bit SIDH. Our results and observations show that the isogeny-based schemes can be implemented with high efficiency on reconfigurable hardware.", "title": "" }, { "docid": "7f9a565c10fdee58cbe76b7e9351f037", "text": "The effects of iron substitution on the structural and magnetic properties of the GdCo(12-x)Fe(x)B6 (0 ≤ x ≤ 3) series of compounds have been studied. All of the compounds form in the rhombohedral SrNi12B6-type structure and exhibit ferrimagnetic behaviour below room temperature: T(C) decreases from 158 K for x = 0 to 93 K for x = 3. 
(155)Gd Mössbauer spectroscopy indicates that the easy magnetization axis changes from axial to basal-plane upon substitution of Fe for Co. This observation has been confirmed using neutron powder diffraction. The axial to basal-plane transition is remarkably sensitive to the Fe content and comparison with earlier (57)Fe-doping studies suggests that the boundary lies below x = 0.1.", "title": "" }, { "docid": "e668f84e16a5d17dff7d638a5543af82", "text": "Mining topics in Twitter is increasingly attracting more attention. However, the shortness and informality of tweets leads to extreme sparse vector representation with a large vocabulary, which makes the conventional topic models (e.g., Latent Dirichlet Allocation) often fail to achieve high quality underlying topics. Luckily, tweets always show up with rich user-generated hash tags as keywords. In this paper, we propose a novel topic model to handle such semi-structured tweets, denoted as Hash tag Graph based Topic Model (HGTM). By utilizing relation information between hash tags in our hash tag graph, HGTM establishes word semantic relations, even if they haven't co-occurred within a specific tweet. In addition, we enhance the dependencies of both multiple words and hash tags via latent variables (topics) modeled by HGTM. We illustrate that the user-contributed hash tags could serve as weakly-supervised information for topic modeling, and hash tag relation could reveal the semantic relation between tweets. Experiments on a real-world twitter data set show that our model provides an effective solution to discover more distinct and coherent topics than the state-of-the-art baselines and has a strong ability to control sparseness and noise in tweets.", "title": "" }, { "docid": "211058f2d0d5b9cf555a6e301cd80a5d", "text": "We present a method based on header paths for efficient and complete extraction of labeled data from tables meant for humans. 
Although many table configurations yield to the proposed syntactic analysis, some require access to semantic knowledge. Clicking on one or two critical cells per table, through a simple interface, is sufficient to resolve most of these problem tables. Header paths, a purely syntactic representation of visual tables, can be transformed (\"factored\") into existing representations of structured data such as category trees, relational tables, and RDF triples. From a random sample of 200 web tables from ten large statistical web sites, we generated 376 relational tables and 34,110 subject-predicate-object RDF triples.", "title": "" }, { "docid": "b52bfe9169e1b68fec9ec11b76f458f9", "text": "Copyright (©) 1999–2003 R Foundation for Statistical Computing. Permission is granted to make and distribute verbatim copies of this manual provided the copyright notice and this permission notice are preserved on all copies. Permission is granted to copy and distribute modified versions of this manual under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one. Permission is granted to copy and distribute translations of this manual into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the R Development Core Team.", "title": "" }, { "docid": "27d0d038c827884b50d1932945a29d94", "text": "0957-4174/$ see front matter 2010 Elsevier Ltd. A doi:10.1016/j.eswa.2010.10.024 E-mail addresses: [email protected], ca Software engineering discipline contains several prediction approaches such as test effort prediction, correction cost prediction, fault prediction, reusability prediction, security prediction, effort prediction, and quality prediction. However, most of these prediction approaches are still in preliminary phase and more research should be conducted to reach robust models. 
Software fault prediction is the most popular research area in these prediction approaches and recently several research centers started new projects on this area. In this study, we investigated 90 software fault prediction papers published between year 1990 and year 2009 and then we categorized these papers according to the publication year. This paper surveys the software engineering literature on software fault prediction and both machine learning based and statistical based approaches are included in this survey. Papers explained in this article reflect the outline of what was published so far, but naturally this is not a complete review of all the papers published so far. This paper will help researchers to investigate the previous studies from metrics, methods, datasets, performance evaluation metrics, and experimental results perspectives in an easy and effective manner. Furthermore, current trends are introduced and discussed. 2010 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "db0c7a200d76230740e027c2966b066c", "text": "BACKGROUND\nPromotion and provision of low-cost technologies that enable improved water, sanitation, and hygiene (WASH) practices are seen as viable solutions for reducing high rates of morbidity and mortality due to enteric illnesses in low-income countries. A number of theoretical models, explanatory frameworks, and decision-making models have emerged which attempt to guide behaviour change interventions related to WASH. The design and evaluation of such interventions would benefit from a synthesis of this body of theory informing WASH behaviour change and maintenance.\n\n\nMETHODS\nWe completed a systematic review of existing models and frameworks through a search of related articles available in PubMed and in the grey literature. Information on the organization of behavioural determinants was extracted from the references that fulfilled the selection criteria and synthesized. 
Results from this synthesis were combined with other relevant literature, and from feedback through concurrent formative and pilot research conducted in the context of two cluster-randomized trials on the efficacy of WASH behaviour change interventions to inform the development of a framework to guide the development and evaluation of WASH interventions: the Integrated Behavioural Model for Water, Sanitation, and Hygiene (IBM-WASH).\n\n\nRESULTS\nWe identified 15 WASH-specific theoretical models, behaviour change frameworks, or programmatic models, of which 9 addressed our review questions. Existing models under-represented the potential role of technology in influencing behavioural outcomes, focused on individual-level behavioural determinants, and had largely ignored the role of the physical and natural environment. IBM-WASH attempts to correct this by acknowledging three dimensions (Contextual Factors, Psychosocial Factors, and Technology Factors) that operate on five levels (structural, community, household, individual, and habitual).\n\n\nCONCLUSIONS\nA number of WASH-specific models and frameworks exist, yet with some limitations. The IBM-WASH model aims to provide both a conceptual and practical tool for improving our understanding and evaluation of the multi-level multi-dimensional factors that influence water, sanitation, and hygiene practices in infrastructure-constrained settings. We outline future applications of our proposed model as well as future research priorities needed to advance our understanding of the sustained adoption of water, sanitation, and hygiene technologies and practices.", "title": "" }, { "docid": "346bedcddf74d56db8b2d5e8b565efef", "text": "Ulric Neisser (Chair), Emory University; Gwyneth Boodoo, Educational Testing Service, Princeton, New Jersey; Thomas J. Bouchard, Jr., University of Minnesota, Minneapolis; A. Wade Boykin, Howard University; Nathan Brody, Wesleyan University; Stephen J. Ceci, Cornell University; Diane F. Halpern, California State University, San Bernardino; John C. Loehlin, University of Texas, Austin; Robert Perloff, University of Pittsburgh; Robert J. Sternberg, Yale University; Susana Urbina, University of North Florida.", "title": "" }, { "docid": "08847edfd312791b67c34b79d362cde7", "text": "We describe a formally well founded approach to link data and processes conceptually, based on adopting UML class diagrams to represent data, and BPMN to represent the process. The UML class diagram together with a set of additional process variables, called Artifact, form the information model of the process. All activities of the BPMN process refer to such an information model by means of OCL operation contracts. We show that the resulting semantics, while abstract, is fully executable. We also provide an implementation of the executor.", "title": "" }, { "docid": "279302300cbdca5f8d7470532928f9bd", "text": "The problem of feature selection is a difficult combinatorial task in Machine Learning and of high practical relevance, e.g. in bioinformatics. Genetic Algorithms (GAs) offer a natural way to solve this problem. In this paper we present a special Genetic Algorithm, which especially takes into account the existing bounds on the generalization error for Support Vector Machines (SVMs). This new approach is compared to the traditional method of performing cross-validation and to other existing algorithms for feature selection.", "title": "" }, { "docid": "455a2974a8cda70c6b72819d96c867d9", "text": "We have developed a Cu-Cu/adhesives hybrid bonding technique using collective cutting of Cu bumps and adhesives in order to achieve high density 2.5D/3D integration. It is considered that progression of high density interconnection leads to lower height of bonding electrodes, resulting in a narrow gap between ICs. 
Therefore, it is difficult to fill adhesive into such a narrow gap between ICs after bonding. Thus, we consider that hybrid bonding of pre-applied adhesives and Cu-Cu thermocompression bonding must be advantageous, in terms of void-less bonding and minimized bonding stress thanks to the adhesives, and also low electrical resistance thanks to Cu-Cu solid diffusion bonding. In the present study, we adopted the following process: first, adhesives were spin coated on the wafer with Cu posts and then pre-baked. After that, the pre-applied adhesives and Cu bumps were simultaneously cut by a single crystal diamond bite. We found that both the adhesive and Cu post surfaces after cutting are highly smooth, with roughness of less than 10 nm, and dishing phenomena, which might occur in a typical CMP process, could not be seen on the cut Cu post/adhesive surfaces.", "title": "" }, { "docid": "4fa43a3d0631d9cd2cdc87e9f0c97136", "text": "Recent trends in how video games are played have pushed for the need to revise the game engine architecture. Indeed, game players are more mobile, using smartphones and tablets that lack CPU resources compared to PCs and dedicated boxes. Two emerging solutions, cloud gaming and computing offload, would represent the next steps toward improving the game player experience. By consequence, dissecting and analyzing game engine performance would help to better understand how to move in these new directions, which is so far missing in the literature. In this paper, we fill this gap by analyzing and evaluating one of the most popular game engines, namely Unity3D. First, we dissected the Unity3D architecture and modules. 
A benchmark was then used to evaluate the CPU and GPU performances of the different modules constituting Unity3D, for five representative games.", "title": "" }, { "docid": "6bdf0850725f091fea6bcdf7961e27d0", "text": "The aim of this review is to document the advantages of exclusive breastfeeding along with concerns which may hinder the practice of breastfeeding and focuses on the appropriateness of complementary feeding and feeding difficulties which infants encounter. Breastfeeding, as recommended by the World Health Organisation, is the most cost effective way for reducing childhood morbidity such as obesity, hypertension and gastroenteritis as well as mortality. There are several factors that either promote or act as barriers to good infant nutrition. Factors which influence breastfeeding practice in terms of initiation, exclusivity and duration are namely breast engorgement, sore nipples, milk insufficiency and availability of various infant formulas. On the other hand, introduction of complementary foods, also known as weaning, is done around 4 to 6 months and mothers usually should start with home-made nutritious food. Difficulties encountered during the weaning process are often refusal to eat followed by vomiting, colic, allergic reactions and diarrhoea. key words: Exclusive breastfeeding, Weaning, Complementary feeding, Feeding difficulties.", "title": "" }, { "docid": "6c58c147bef99a2408859bdfa63da3a7", "text": "We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or ε-greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates near-optimality in a tabula rasa learning context. 
More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.", "title": "" }, { "docid": "3cf7fc89e6a9b7295079dd74014f166b", "text": "BACKGROUND\nHigh-resolution MRI has been shown to be capable of identifying plaque constituents, such as the necrotic core and intraplaque hemorrhage, in human carotid atherosclerosis. The purpose of this study was to evaluate differential contrast-weighted images, specifically a multispectral MR technique, to improve the accuracy of identifying the lipid-rich necrotic core and acute intraplaque hemorrhage in vivo.\n\n\nMETHODS AND RESULTS\nEighteen patients scheduled for carotid endarterectomy underwent a preoperative carotid MRI examination in a 1.5-T GE Signa scanner using a protocol that generated 4 contrast weightings (T1, T2, proton density, and 3D time of flight). MR images of the vessel wall were examined for the presence of a lipid-rich necrotic core and/or intraplaque hemorrhage. Ninety cross sections were compared with matched histological sections of the excised specimen in a double-blinded fashion. Overall accuracy (95% CI) of multispectral MRI was 87% (80% to 94%), sensitivity was 85% (78% to 92%), and specificity was 92% (86% to 98%). There was good agreement between MRI and histological findings, with a value of kappa=0.69 (0.53 to 0.85).\n\n\nCONCLUSIONS\nMultispectral MRI can identify the lipid-rich necrotic core in human carotid atherosclerosis in vivo with high sensitivity and specificity. This MRI technique provides a noninvasive tool to study the pathogenesis and natural history of carotid atherosclerosis. Furthermore, it will permit a direct assessment of the effect of pharmacological therapy, such as aggressive lipid lowering, on plaque lipid composition.", "title": "" } ]
scidocsrr
e60df0a203c3d0a5152375c99dfb9fe7
The relationship between social network usage and some personality traits
[ { "docid": "7ede96303aa3c7f98f60cb545d51ccae", "text": "The explosion in social networking sites such as MySpace, Facebook, Bebo and Friendster is widely regarded as an exciting opportunity, especially for youth. Yet the public response tends to be one of puzzled dismay regarding, supposedly, a generation with many friends but little sense of privacy and a narcissistic fascination with self-display. This article explores teenagers’ practices of social networking in order to uncover the subtle connections between online opportunity and risk. While younger teenagers relish the opportunities to continuously recreate a highly decorated, stylistically elaborate identity, older teenagers favour a plain aesthetic that foregrounds their links to others, thus expressing a notion of identity lived through authentic relationships. The article further contrasts teenagers’ graded conception of “friends” with the binary 1 Published as Livingstone, S. (2008) Taking risky opportunities in youthful content creation: teenagers’ use of social networking sites for intimacy, privacy and self-expression. New Media & Society, 10(3): 393-411. Available in Sage Journal Online (Sage Publications Ltd. – All rights reserved): http://nms.sagepub.com/content/10/3/393.abstract 2 Thanks to the Research Council of Norway for funding the Mediatized Stories: Mediation Perspectives On Digital Storytelling Among Youth of which this project is part. I also thank David Brake, Shenja van der Graaf, Angela Jones, Ellen Helsper, Maria Kyriakidou, Annie Mullins, Toshie Takahashi, and two anonymous reviewers for their comments on an earlier version of this article. Last, thanks to the teenagers who participated in this project. 3 Sonia Livingstone is Professor of Social Psychology in the Department of Media and Communications at the London School of Economics and Political Science. 
She is author or editor of ten books and 100+ academic articles and chapters in the fields of media audiences, children and the internet, domestic contexts of media use and media literacy. Recent books include Young People and New Media (Sage, 2002), The Handbook of New Media (edited, with Leah Lievrouw, Sage, 2006), and Public Connection? Media Consumption and the Presumption of Attention (with Nick Couldry and Tim Markham, Palgrave, 2007). She currently directs the thematic research network, EU Kids Online, for the EC’s Safer Internet Plus programme. Email [email protected]", "title": "" }, { "docid": "e66fb8ed9e26b058a419d34d9c015a4c", "text": "Children and adolescents now communicate online to form and/or maintain relationships with friends, family, and strangers. Relationships in \"real life\" are important for children's and adolescents' psychosocial development; however, they can be difficult for those who experience feelings of loneliness and/or social anxiety. The aim of this study was to investigate differences in usage of online communication patterns between children and adolescents with and without self-reported loneliness and social anxiety. Six hundred twenty-six students ages 10 to 16 years completed a survey on the amount of time they spent communicating online, the topics they discussed, the partners they engaged with, and their purposes for communicating over the Internet. Participants were administered a shortened version of the UCLA Loneliness Scale and an abbreviated subscale of the Social Anxiety Scale for Adolescents (SAS-A). Additionally, age and gender differences in usage of the online communication patterns were examined across the entire sample. Findings revealed that children and adolescents who self-reported being lonely communicated online significantly more frequently about personal and intimate topics than did those who did not self-report being lonely. 
The former were motivated to use online communication significantly more frequently to compensate for their weaker social skills to meet new people. Results suggest that Internet usage allows them to fulfill critical needs of social interactions, self-disclosure, and identity exploration. Future research, however, should explore whether or not the benefits derived from online communication may also facilitate lonely children's and adolescents' offline social relationships.", "title": "" } ]
[ { "docid": "97abbb650710386d1e28533e8134c42c", "text": "Airway pressure limitation is now a largely accepted strategy in adult respiratory distress syndrome (ARDS) patients; however, some debate persists about the exact level of plateau pressure which can be safely used. The objective of the present study was to examine if the echocardiographic evaluation of right ventricular function performed in ARDS may help to answer this question. For more than 20 years, we have regularly monitored right ventricular function by echocardiography in ARDS patients, during two different periods, a first (1980–1992) where airway pressure was not limited, and a second (1993–2006) where airway pressure was limited. By pooling our data, we can observe the effect of a large range of plateau pressure upon mortality rate and incidence of acute cor pulmonale. In this whole group of 352 ARDS patients, mortality rate and incidence of cor pulmonale were 80 and 56%, respectively, when plateau pressure was > 35 cmH2O; 42 and 32%, respectively, when plateau pressure was between 27 and 35 cmH2O; and 30 and 13%, respectively, when plateau pressure was < 27 cmH2O. Moreover, a clear interaction between plateau pressure and cor pulmonale was evidenced: whereas the odds ratio of dying for an increase in plateau pressure from 18–26 to 27–35 cmH2O in patients without cor pulmonale was 1.05 (p = 0.635), it was 3.32 in patients with cor pulmonale (p < 0.034). We hypothesize that monitoring of right ventricular function by echocardiography at bedside might help to control the safety of plateau pressure used in ARDS.", "title": "" }, { "docid": "ff95e468402fde74e334b83e2a1f1d23", "text": "The composition of fatty acids in the diets of both human and domestic animal species can regulate inflammation through the biosynthesis of potent lipid mediators. The substrates for lipid mediator biosynthesis are derived primarily from membrane phospholipids and reflect dietary fatty acid intake.
Inflammation can be exacerbated with intake of certain dietary fatty acids, such as some ω-6 polyunsaturated fatty acids (PUFA), and subsequent incorporation into membrane phospholipids. Inflammation, however, can be resolved with ingestion of other fatty acids, such as ω-3 PUFA. The influence of dietary PUFA on phospholipid composition is shaped by factors that control phospholipid biosynthesis within cellular membranes, such as preferential incorporation of some fatty acids, competition between newly ingested PUFA and fatty acids released from stores such as adipose, and the impacts of carbohydrate metabolism and physiological state. The objective of this review is to explain these factors as potential obstacles to manipulating PUFA composition of tissue phospholipids by specific dietary fatty acids. A better understanding of the factors that influence how dietary fatty acids can be incorporated into phospholipids may lead to nutritional intervention strategies that optimize health.", "title": "" }, { "docid": "721b0ac6cc52ea434e51d95376cf0a60", "text": "Recent years have witnessed the rise of accurate but obscure decision systems which hide the logic of their internal decision processes from the users. The lack of explanations for the decisions of black box systems is a key ethical issue, and a limitation to the adoption of machine learning components in socially sensitive and safety-critical contexts. In this paper we focus on the problem of black box outcome explanation, i.e., explaining the reasons for the decision taken on a specific instance. We propose LORE, an agnostic method able to provide interpretable and faithful explanations. LORE first learns a local interpretable predictor on a synthetic neighborhood generated by a genetic algorithm.
Then it derives from the logic of the local interpretable predictor a meaningful explanation consisting of: a decision rule, which explains the reasons for the decision; and a set of counterfactual rules, suggesting the changes in the instance’s features that lead to a different outcome. Extensive experiments show that LORE outperforms existing methods and baselines both in the quality of explanations and in the accuracy in mimicking the black box.", "title": "" }, { "docid": "de9aa1b5c6e61da518e87a55d02c45e9", "text": "A novel type of dual-mode microstrip bandpass filter using degenerate modes of a meander loop resonator has been developed for miniaturization of high selectivity narrowband microwave bandpass filters. A filter of this type having a 2.5% bandwidth at 1.58 GHz was designed and fabricated. The measured filter performance is presented.", "title": "" }, { "docid": "1986179d7d985114fa14bbbe01770d8a", "text": "A low-power consumption, small-size smart antenna, named electronically steerable parasitic array radiator (ESPAR), has been designed. Beamforming is achieved by tuning the load reactances at parasitic elements surrounding the active central element. A fast beamforming algorithm based on simultaneous perturbation stochastic approximation with a maximum cross correlation coefficient criterion is proposed. The simulation and experimental results validate the algorithm. In an environment where the signal-to-interference ratio is 0 dB, the algorithm converges within 50 iterations and achieves an output signal-to-interference-plus-noise ratio of 10 dB. With the fast beamforming ability and its low-power consumption attribute, the ESPAR antenna makes the mass deployment of smart antenna technologies practical.", "title": "" }, { "docid": "88d377a1317eb45b8650947af5883255", "text": "Social entrepreneurship has attracted increasing interest among scholars, yet we still know relatively little about the particular dynamics and processes involved.
This paper aims at contributing to the field of social entrepreneurship by clarifying key elements, providing working definitions, and illuminating the social entrepreneurship process. In the first part of the paper we review the existing literature. In the second part we develop a model of how intentions to create a social venture (the tangible outcome of social entrepreneurship) are formed. Combining insights from traditional entrepreneurship literature and anecdotal evidence in the field of social entrepreneurship, we propose that behavioral intentions to create a social venture are influenced, first, by perceived social venture desirability, which is affected by attitudes such as empathy and moral judgment, and second, by perceived social venture feasibility, which is facilitated by social support and self-efficacy beliefs.", "title": "" }, { "docid": "c117da74c302d9e108970854d79e54fd", "text": "Entailment recognition is a primary generic task in natural language inference, whose focus is to detect whether the meaning of one expression can be inferred from the meaning of the other. Accordingly, many NLP applications would benefit from high coverage knowledgebases of paraphrases and entailment rules. To this end, learning such knowledgebases from the Web is especially appealing due to its huge size as well as its highly heterogeneous content, allowing for a more scalable rule extraction of various domains. However, the scalability of state-of-the-art entailment rule acquisition approaches from the Web is still limited. We present a fully unsupervised learning algorithm for Web-based extraction of entailment relations. We focus on increased scalability and generality with respect to prior work, with the potential of a large-scale Web-based knowledgebase. Our algorithm takes as its input a lexical–syntactic template and searches the Web for syntactic templates that participate in an entailment relation with the input template.
Experiments show promising results, achieving performance similar to a state-of-the-art unsupervised algorithm, operating over an offline corpus, but with the benefit of learning rules for different domains with no additional effort.", "title": "" }, { "docid": "96053a9bd2faeff5ddf61f15f2b989c4", "text": "Poly(vinyl alcohol) cryogel, PVA-C, is presented as a tissue-mimicking material, suitable for application in magnetic resonance (MR) imaging and ultrasound imaging. A 10% by weight poly(vinyl alcohol) in water solution was used to form PVA-C, which is solidified through a freeze-thaw process. The number of freeze-thaw cycles affects the properties of the material. The ultrasound and MR imaging characteristics were investigated using cylindrical samples of PVA-C. The speed of sound was found to range from 1520 to 1540 m s(-1), and the attenuation coefficients were in the range of 0.075-0.28 dB (cm MHz)(-1). T1 and T2 relaxation values were found to be 718-1034 ms and 108-175 ms, respectively. We also present applications of this material in an anthropomorphic brain phantom, a multi-volume stenosed vessel phantom and breast biopsy phantoms. Some suggestions are made for how best to handle this material in the phantom design and development process.", "title": "" }, { "docid": "6465daca71e18cb76ec5442fb94f625a", "text": "In this paper, we show how an open-source, language-independent proofreading tool has been built. Many languages lack contextual proofreading tools; for many others, only partial solutions are available. Using existing, largely language-independent tools and collaborative processes it is possible to develop a practical style and grammar checker and to fight the digital divide in countries where commercial linguistic application software is unavailable or too expensive for average users. 
The described solution depends on relatively easily available language resources and does not require a fully formalized grammar or a deep parser, yet it can detect many frequent context-dependent spelling mistakes, as well as grammatical, punctuation, usage, and stylistic errors. Copyright © 2010 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "2c289744ea8ae9d8f0c6ce4ba356b6cb", "text": "The mission of the IPTS is to provide customer-driven support to the EU policy-making process by researching science-based responses to policy challenges that have both a socioeconomic and a scientific or technological dimension. Legal Notice Neither the European Commission nor any person acting on behalf of the Commission is responsible for the use which might be made of this publication. (*) Certain mobile telephone operators do not allow access to 00 800 numbers or these calls may be billed.", "title": "" }, { "docid": "8d79675b0db5d84251bea033808396c3", "text": "This paper discusses verification and validation of simulation models. The different approaches to deciding model validity are presented; how model verification and validation relate to the model development process are discussed; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are described; ways to document results are given; and a recommended procedure is presented.", "title": "" }, { "docid": "61bb811aa336e77d2549c51939f9668d", "text": "Policy languages (such as privacy and rights) have had little impact on the wider community. Now that Social Networks have taken off, the need to revisit Policy languages and realign them towards Social Networks requirements has become more apparent. One such language is explored as to its applicability to the Social Networks masses.
We also argue that policy languages alone are not sufficient and thus they should be paired with reasoning mechanisms to provide precise and unambiguous execution models of the policies. To this end we propose a computationally oriented model to represent, reason with and execute policies for Social Networks.", "title": "" }, { "docid": "713c7761ecba317bdcac451fcc60e13d", "text": "We describe a method for automatically transcribing guitar tablatures from audio signals in accordance with the player's proficiency, to support a guitar player's practice. The system estimates the multiple pitches in each time frame and the optimal fingering considering playability and player's proficiency. It combines a conventional multipitch estimation method with a basic dynamic programming method. The difficulty of the fingerings can be changed by tuning the parameter representing the relative weights of the acoustical reproducibility and the fingering easiness. Experiments conducted using synthesized guitar audio signals to evaluate the transcribed tablatures in terms of the multipitch estimation accuracy and fingering easiness demonstrated that the system can simplify the fingering with higher precision of multipitch estimation results than the conventional method.", "title": "" }, { "docid": "b2e62194ce1eb63e0d13659a546db84b", "text": "With the rapid advance of mobile computing technology and wireless networking, there has been a significant increase in mobile subscriptions. This drives a strong demand for mobile cloud applications and services for mobile device users. This opens up a great business and research opportunity in mobile cloud computing (MCC). This paper first discusses the market trend and related business driving forces and opportunities. Then it presents an overview of MCC in terms of its concepts, distinct features, research scope and motivations, as well as advantages and benefits. Moreover, it discusses its opportunities, issues and challenges.
Furthermore, the paper highlights a research roadmap for MCC.", "title": "" }, { "docid": "3c33528735b53a4f319ce4681527c163", "text": "Within the past two years, important advances have been made in modeling credit risk at the portfolio level. Practitioners and policy makers have invested in implementing and exploring a variety of new models individually. Less progress has been made, however, with comparative analyses. Direct comparison often is not straightforward, because the different models may be presented within rather different mathematical frameworks. This paper offers a comparative anatomy of two especially influential benchmarks for credit risk models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We then design simulation exercises which evaluate the effect of each of these differences individually. JEL Codes: G31, C15, G11 ∗The views expressed herein are my own and do not necessarily reflect those of the Board of Governors or its staff. I would like to thank David Jones for drawing my attention to this issue, and for his helpful comments. I am also grateful to Mark Carey for data and advice useful in calibration of the models, and to Chris Finger and Tom Wilde for helpful comments. Please address correspondence to the author at Division of Research and Statistics, Mail Stop 153, Federal Reserve Board, Washington, DC 20551, USA. Phone: (202)452-3705. Fax: (202)452-5295. Email: 〈[email protected]〉. Over the past decade, financial institutions have developed and implemented a variety of sophisticated models of value-at-risk for market risk in trading portfolios. 
These models have gained acceptance not only among senior bank managers, but also in amendments to the international bank regulatory framework. Much more recently, important advances have been made in modeling credit risk in lending portfolios. The new models are designed to quantify credit risk on a portfolio basis, and thus have application in control of risk concentration, evaluation of return on capital at the customer level, and more active management of credit portfolios. Future generations of today’s models may one day become the foundation for measurement of regulatory capital adequacy. Two of the models, J.P. Morgan’s CreditMetrics and Credit Suisse Financial Product’s CreditRisk+, have been released freely to the public since 1997 and have quickly become influential benchmarks. Practitioners and policy makers have invested in implementing and exploring each of the models individually, but have made less progress with comparative analyses. The two models are intended to measure the same risks, but impose different restrictions and distributional assumptions, and suggest different techniques for calibration and solution. Thus, given the same portfolio of credit exposures, the two models will, in general, yield differing evaluations of credit risk. Determining which features of the models account for differences in output would allow us a better understanding of the sensitivity of the models to the particular assumptions they employ. Unfortunately, direct comparison of the models is not straightforward, because the two models are presented within rather different mathematical frameworks. The CreditMetrics model is familiar to econometricians as an ordered probit model. Credit events are driven by movements in underlying unobserved latent variables. The latent variables are assumed to depend on external “risk factors.” Common dependence on the same risk factors gives rise to correlations in credit events across obligors. 
The CreditRisk+ model is based instead on insurance industry models of event risk. Instead of a latent variable, each obligor has a default probability. The default probabilities are not constant over time, but rather increase or decrease in response to background macroeconomic factors. To the extent that two obligors are sensitive to the same set of background factors, their default probabilities will move together. These co-movements in probability give rise to correlations in defaults. CreditMetrics and CreditRisk+ may serve essentially the same function, but they appear to be constructed quite differently. This paper offers a comparative anatomy of CreditMetrics and CreditRisk+. We show that, despite differences on the surface, the underlying mathematical structures are similar. The structural parallels provide intuition for the relationship between the two models and allow us to describe quite precisely where the models differ in functional form, distributional assumptions, and reliance on approximation formulae. We can then design simulation exercises which evaluate the effect of these differences individually. We proceed as follows. Section 1 presents a summary of the CreditRisk+ model, and introduces a restricted version of CreditMetrics. The restrictions are imposed to facilitate direct comparison of CreditMetrics and CreditRisk+. While some of the richness of the full CreditMetrics implementation is sacrificed, the essential mathematical characteristics of the model are preserved. Our", "title": "" }, { "docid": "564185f1eaa04d4d968ffcae05f030f5", "text": "Municipal solid waste is a major challenge facing developing countries [1]. The amount of waste generated by developing countries is increasing as a result of urbanisation and economic growth [2]. In Africa and other developing countries waste is disposed of in poorly managed landfills, controlled and uncontrolled dumpsites, increasing environmental health risks [3].
Households have a major role to play in reducing the amount of waste sent to landfills [4]. Recycling is accepted by developing and developed countries as one of the best solutions in municipal solid waste management [5]. Households influence the quality and amount of recyclable material recovery [1]. Separation of waste at source can reduce contamination of recyclable waste material. Households are the key role players in ensuring that waste is separated at source and their willingness to participate in source separation of waste should be encouraged by municipalities and local regulatory authorities [6,7].", "title": "" }, { "docid": "f249a6089a789e52eeadc8ae16213bc1", "text": "We have collected a new face data set that will facilitate research in the problem of frontal to profile face verification `in the wild'. The aim of this data set is to isolate the factor of pose variation in terms of extreme poses like profile, where many features are occluded, along with other `in the wild' variations. We call this data set the Celebrities in Frontal-Profile (CFP) data set. We find that human performance on Frontal-Profile verification in this data set is only slightly worse (94.57% accuracy) than that on Frontal-Frontal verification (96.24% accuracy). However we evaluated many state-of-the-art algorithms, including Fisher Vector, Sub-SML and a Deep learning algorithm. We observe that all of them degrade more than 10% from Frontal-Frontal to Frontal-Profile verification. The Deep learning implementation, which performs comparably to humans on Frontal-Frontal, performs significantly worse (84.91% accuracy) on Frontal-Profile.
This suggests that there is a gap between human performance and automatic face recognition methods for large pose variation in unconstrained images.", "title": "" }, { "docid": "89ae73a8337870e8ef5e078de7bf2f58", "text": "In grid connected photovoltaic (PV) systems, the maximum power point tracking (MPPT) algorithm plays an important role in optimizing the solar energy efficiency. In this paper, a new artificial neural network (ANN) based MPPT method is proposed for finding the maximum power point (MPP) quickly and accurately. For the first time, a combined method is proposed, built on the ANN-based PV model method and the incremental conductance (IncCond) method. The advantage of the ANN-based PV model method is fast MPP approximation, based on the ability of the ANN to use the parameters of the PV array employed. The advantage of the IncCond method is the ability to locate the exact MPP based on the feedback voltage and current, regardless of the characteristics of the PV array. The effectiveness of the proposed algorithm is validated by simulation using Matlab/Simulink and by experimental results on a Xilinx Virtex II Pro field programmable gate array (FPGA) kit.", "title": "" }, { "docid": "9737feb4befdaf995b1f9e88535577ec", "text": "This paper addresses the problem of detecting the presence of malware that leave periodic traces in network traffic. This characteristic behavior of malware was found to be surprisingly prevalent in a parallel study. To this end, we propose a visual analytics solution that supports both automatic detection and manual inspection of periodic signals hidden in network traffic. The detected periodic signals are visually verified in an overview using a circular graph and two stacked histograms as well as in detail using deep packet inspection. Our approach offers the capability to detect complex periodic patterns, but avoids the unverifiability issue often encountered in related work.
The periodicity assumption imposed on malware behavior is a relatively weak assumption, but initial evaluations with a simulated scenario as well as a publicly available network capture demonstrate its applicability.", "title": "" }, { "docid": "ac529a455bcefa58abafa6c679bec2b4", "text": "This article presents near-optimal guarantees for stable and robust image recovery from undersampled noisy measurements using total variation minimization. In particular, we show that from O(s log(N)) nonadaptive linear measurements, an image can be reconstructed to within the best s-term approximation of its gradient up to a logarithmic factor, and this factor can be removed by taking slightly more measurements. Along the way, we prove a strengthened Sobolev inequality for functions lying in the null space of suitably incoherent matrices.", "title": "" } ]
scidocsrr
c95113263d1ab33b8fa34bfec122bcff
CoBoLD — A bonding mechanism for modular self-reconfigurable mobile robots
[ { "docid": "9055008e0c6837b6c9b494922eb0770a", "text": "One of the primary impediments to building ensembles of modular robots is the complexity and number of mechanical mechanisms used to construct the individual modules. As part of the Claytronics project - which aims to build very large ensembles of modular robots - we investigate how to simplify each module by eliminating moving parts and reducing the number of mechanical mechanisms on each robot by using force-at-a-distance actuators. Additionally, we are also investigating the feasibility of using these unary actuators to improve docking performance, implement intermodule adhesion, power transfer, communication, and sensing. In this paper we describe our most recent results in the magnetic domain, including our first design sufficiently robust to operate reliably in groups greater than two modules. Our work should be seen as an extension of systems such as Fracta [9], and a contrasting line of inquiry to several other researchers' prior efforts that have used magnetic latching to attach modules to one another but relied upon a powered hinge [10] or telescoping mechanism [12] within each module to facilitate self-reconfiguration.", "title": "" }, { "docid": "6befac01d5a3f21100a54de43ee62845", "text": "Robots used for tasks in space have strict requirements. Modular reconfigurable robots have a variety of attributes that are advantageous for these conditions including the ability to serve as many tools at once saving weight, packing into compressed forms saving space and having large redundancy to increase robustness. Self-reconfigurable systems can also self-repair as well as automatically adapt to changing conditions or ones that were not anticipated. PolyBot may serve well in the space manipulation and surface mobility class of space applications.", "title": "" } ]
[ { "docid": "1cd77d97f27b45d903ffcecda02795a5", "text": "Molecular machine learning has been maturing rapidly over the last few years. Improved methods and the presence of larger datasets have enabled machine learning algorithms to make increasingly accurate predictions about molecular properties. However, algorithmic progress has been limited due to the lack of a standard benchmark to compare the efficacy of proposed methods; most new algorithms are benchmarked on different datasets making it challenging to gauge the quality of proposed methods. This work introduces MoleculeNet, a large scale benchmark for molecular machine learning. MoleculeNet curates multiple public datasets, establishes metrics for evaluation, and offers high quality open-source implementations of multiple previously proposed molecular featurization and learning algorithms (released as part of the DeepChem open source library). MoleculeNet benchmarks demonstrate that learnable representations are powerful tools for molecular machine learning and broadly offer the best performance. However, this result comes with caveats. Learnable representations still struggle to deal with complex tasks under data scarcity and highly imbalanced classification. For quantum mechanical and biophysical datasets, the use of physics-aware featurizations can be more important than choice of particular learning algorithm.", "title": "" }, { "docid": "58317baa129fd1f164813dcaf566b543", "text": "Affective image understanding has been extensively studied in the last decade since more and more users express emotion via visual contents. While current algorithms based on convolutional neural networks aim to distinguish emotional categories in a discrete label space, the task is inherently ambiguous. This is mainly because emotional labels with the same polarity (i.e., positive or negative) are highly related, which is different from concrete object concepts such as cat, dog and bird. 
To the best of our knowledge, few methods focus on leveraging this characteristic of emotions for affective image understanding. In this work, we address the problem of understanding affective images via deep metric learning and propose a multi-task deep framework to optimize both retrieval and classification goals. We propose the sentiment constraints adapted from the triplet constraints, which are able to explore the hierarchical relation of emotion labels. We further exploit the sentiment vector as an effective representation to distinguish affective images utilizing the texture representation derived from convolutional layers. Extensive evaluations on four widely-used affective datasets, i.e., Flickr and Instagram, IAPSa, Art Photo, and Abstract Paintings, demonstrate that the proposed algorithm performs favorably against the state-of-the-art methods on both affective image retrieval and classification tasks.", "title": "" }, { "docid": "6fe39cbe3811ac92527ba60620b39170", "text": "Providing accurate information about a human's state and activity is one of the most important elements in Ubiquitous Computing. Various applications can be enabled if one's state and activity can be recognized. Due to its low deployment cost and non-intrusive sensing nature, Wi-Fi based activity recognition has become a promising, emerging research area. In this paper, we survey the state of the art of the area from four aspects: historical overview, theories and models, key techniques, and applications. In addition to summarizing the principles and achievements of existing work, we also highlight some open issues and research directions in this emerging area.", "title": "" }, { "docid": "5570d8a799dfffa220e5d81a03468a45", "text": "Several results have appeared that show significant reductions in time for matrix multiplication, singular value decomposition, and linear (ℓ2) regression, all based on data-dependent random sampling.
Our key idea is that low dimensional embeddings can be used to eliminate data dependence and provide more versatile, linear-time, pass-efficient matrix computation. Our main contribution is summarized as follows. 1) Independent of the results of Har-Peled and of Deshpande and Vempala, one of the first - and to the best of our knowledge the most efficient - relative error (1 + ε)‖A − A_k‖_F approximation algorithms for the singular value decomposition of an m × n matrix A with M non-zero entries that requires 2 passes over the data and runs in time O((M(k/ε + k log k) + (n + m)(k/ε + k log k)²) log(1/δ)). 2) The first o(nd²) time (1 + ε) relative error approximation algorithm for n × d linear (ℓ2) regression. 3) A matrix multiplication and norm approximation algorithm that easily applies to implicitly given matrices and can be used as a black box probability boosting tool.", "title": "" }, { "docid": "39ccad7a2c779e277194e958820b82ad", "text": "Smart cities are struggling with using public space efficiently and decreasing pollution at the same time. For this, governments have embraced smart parking initiatives, which should result in high utilization of public space and minimization of driving, thereby reducing car emissions. Yet, simply opening data about the availability of public spaces results in more congestion, as multiple cars might be heading for the same parking space. In this work, we propose a Multiple Criteria based Parking space Reservation (MCPR) algorithm for reserving a parking space for a user in a fair way. Users' requirements are the main driving factor for the algorithm and are used as criteria in MCPR. To evaluate the algorithm, simulations for three sets of user preferences were made. The simulation results show that the algorithm satisfied the users' requests fairly for all three preferences.
The algorithm helps users to automatically find a parking space according to the users' requirements. The algorithm can be used in a smart parking system to search for a parking space on behalf of the user and send parking space information to the user.", "title": "" }, { "docid": "3012eafa396cc27e8b05fd71dd9bc13b", "text": "An assessment of Herman and Chomsky’s 1988 five-filter propaganda model suggests it is mainly valuable for identifying areas in which researchers should look for evidence of collaboration (whether intentional or otherwise) between mainstream media and the propaganda aims of the ruling establishment. The model does not identify methodologies for determining the relative weight of independent filters in different contexts, something that would be useful in its future development. There is a lack of precision in the characterization of some of the filters. The model privileges the structural factors that determine propagandized news selection, and therefore eschews or marginalizes intentionality. This paper extends the model to include the “buying out” of journalists or their publications by intelligence and related special interest organizations. It applies the extended six-filter model to controversies over reporting by The New York Times of the build-up towards the US invasion of Iraq in 2003, the issue of weapons of mass destruction in general, and the reporting of The New York Times correspondent Judith Miller in particular, in the context of broader critiques of US mainstream media war coverage. The controversies helped elicit evidence of the operation of some filters of the propaganda model, including dependence on official sources, fear of flak, and ideological convergence. The paper finds that the filter of routine news operations needs to be counterbalanced by its opposite, namely non-routine abuses of standard operating procedures.
While evidence of the operation of other filters was weaker, this is likely due to difficulties of observability, as there are powerful deductive reasons for maintaining all six filters within the framework of media propaganda analysis.", "title": "" }, { "docid": "cb6d60c4948bcf2381cb03a0e7dc8312", "text": "While humor has been historically studied from a psychological, cognitive and linguistic standpoint, its study from a computational perspective is an area yet to be explored in Computational Linguistics. There exist some previous works, but a characterization of humor that allows its automatic recognition and generation is far from being specified. In this work we build a crowdsourced corpus of labeled tweets, annotated according to their humor value, letting the annotators subjectively decide which are humorous. A humor classifier for Spanish tweets is assembled based on supervised learning, reaching a precision of 84% and a recall of 69%.", "title": "" }, { "docid": "a7181a3ddebed92d352ecf67e76c6e81", "text": "Empirical, hypothesis-driven experimentation is at the heart of the scientific discovery process and has become commonplace in human-factors related fields. To enable the integration of visual analytics in such experiments, we introduce VEEVVIE, the Visual Explorer for Empirical Visualization, VR and Interaction Experiments. VEEVVIE is comprised of a back-end ontology which can model several experimental designs encountered in these fields. This formalization allows VEEVVIE to capture experimental data in a query-able form and makes it accessible through a front-end interface. This front-end offers several multi-dimensional visualization widgets with built-in filtering and highlighting functionality. VEEVVIE is also expandable to support custom experimental measurements and data types through a plug-in visualization widget architecture. 
We demonstrate VEEVVIE through several case studies of visual analysis, performed on the design and data collected during an experiment on the scalability of high-resolution, immersive, tiled-display walls.", "title": "" }, { "docid": "ad950cf335913941803a7af7cba969d3", "text": "Storage systems rely on maintenance tasks, such as backup and layout optimization, to ensure data availability and good performance. These tasks access large amounts of data and can significantly impact foreground applications. We argue that storage maintenance can be performed more efficiently by prioritizing processing of data that is currently cached in memory. Data can be cached either due to other maintenance tasks requesting it previously, or due to overlapping foreground I/O activity.\n We present Duet, a framework that provides notifications about page-level events to maintenance tasks, such as a page being added or modified in memory. Tasks use these events as hints to opportunistically process cached data. We show that tasks using Duet can complete maintenance work more efficiently because they perform fewer I/O operations. The I/O reduction depends on the amount of data overlap with other maintenance tasks and foreground applications. Consequently, Duet's efficiency increases with additional tasks because opportunities for synergy appear more often.", "title": "" }, { "docid": "c508f62dfd94d3205c71334638790c54", "text": "Financial and capital markets (especially stock markets) are considered high return investment fields, which at the same time are dominated by uncertainty and volatility. Stock market prediction tries to reduce this uncertainty and consequently the risk. As stock markets are influenced by many economic, political and even psychological factors, it is very difficult to forecast the movement of future values. 
Since classical statistical methods (primarily technical and fundamental analysis) are unable to deal with the non-linearity in the data, the use of more advanced forecasting procedures became necessary. Financial prediction is an active research area, and neural networks have been proposed as one of the most promising methods for such predictions. Artificial Neural Networks (ANNs) simulate the learning capability of the human brain. NNs are able to find accurate solutions in a complex, noisy environment and can even deal efficiently with partial information. In the last decade ANNs have been widely used for predicting financial markets, because they are capable of detecting and reproducing linear and nonlinear relationships among a set of variables. Furthermore, they have the potential to learn the underlying mechanics of stock markets, i.e. to capture the complex dynamics and non-linearity of stock market time series. In this study we review some financial time series analysis concepts and theories linked to stock markets, as well as the neural-network-based systems and hybrid techniques that have been used to solve several forecasting problems concerning the capital, financial and stock markets. Building on these experimental results, we develop and implement a multilayer feedforward neural network based financial time series forecasting system. This system is used to predict the future index values of major US and European stock exchanges and the evolution of interest rates, as well as the future stock prices of some large US companies (primarily from the IT branch).", "title": "" }, { "docid": "26439bd538c8f0b5d6fba3140e609aab", "text": "A planar antenna with a broadband feeding structure is presented and analyzed for ultrawideband applications. The proposed antenna consists of a suspended radiator fed by an n-shape microstrip feed. 
Study shows that this antenna achieves an impedance bandwidth from 3.1-5.1 GHz (48%) for a reflection coefficient of |S11| < -10 dB, and an average gain of 7.7 dBi. Stable boresight radiation patterns are achieved across the entire operating frequency band by suppressing the high order mode resonances. This design exhibits good mechanical tolerance and manufacturability.", "title": "" }, { "docid": "25b183ce7ecc4b9203686c7ea68aacea", "text": "A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual inspection of sampled images, and often offer only fairly limited scalability. In this paper, we propose new techniques that employ a classification–based perspective to evaluate synthetic GAN distributions and their capability to accurately reflect the essential properties of the training data. These techniques require only minimal human supervision and can easily be scaled and adapted to evaluate a variety of state-of-the-art GANs on large, popular datasets. Our analysis indicates that GANs have significant problems in reproducing the more distributional properties of the training dataset. In particular, when seen through the lens of classification, the diversity of GAN data is orders of magnitude less than that of the original data.", "title": "" }, { "docid": "813a0d47405d133263deba0da6da27a8", "text": "The demands on dielectric material measurements have increased over the years as electrical components have been miniaturized and device frequency bands have increased. Well-characterized dielectric measurements on thin materials are needed for circuit design, minimization of crosstalk, and characterization of signal-propagation speed. Bulk material applications have also increased. 
For accurate dielectric measurements, each measurement band and material geometry requires specific fixtures. Engineers and researchers must carefully match their material system and uncertainty requirements to the best available measurement system. Broadband measurements require transmission-line methods, and accurate measurements on low-loss materials are performed in resonators. The development of the most accurate methods for each application requires accurate fixture selection in terms of field geometry, accurate field models, and precise measurement apparatus.", "title": "" }, { "docid": "40649a3bc0ea3ac37ed99dca22e52b92", "text": "This paper presents a 40 Gb/s serial-link receiver including an adaptive equalizer and a CDR circuit. A parallel-path equalizing filter is used to compensate the high-frequency loss in copper cables. The adaptation is performed by only varying the gain in the high-pass path, which allows a single loop for proper control and completely removes the RC filters used for separately extracting the high- and low-frequency contents of the signal. A full-rate bang-bang phase detector with only five latches is proposed in the following CDR circuit. Minimizing the number of latches saves the power consumption and the area occupied by inductors. The performance is also improved by avoiding complicated routing of high-frequency signals. The receiver is able to recover 40 Gb/s data passing through a 4 m cable with 10 dB loss at 20 GHz. For an input PRBS of 2^7-1, the recovered clock jitter is 0.3 ps (rms) and 4.3 ps (peak-to-peak). The retimed data exhibits 500 mV (peak-to-peak) output swing and 9.6 ps (peak-to-peak) jitter with BER < 10^-12. Fabricated in 90 nm CMOS technology, the receiver consumes 115 mW, of which 58 mW is dissipated in the equalizer and 57 mW in the CDR.", "title": "" }, { "docid": "e0b1056544c3dc5c3b6f5bc072a72831", "text": "In recent years, unfolding iterative algorithms as neural networks has become an empirical success in solving sparse recovery problems. 
However, its theoretical understanding is still immature, which prevents us from fully utilizing the power of neural networks. In this work, we study unfolded ISTA (Iterative Shrinkage Thresholding Algorithm) for sparse signal recovery. We introduce a weight structure that is necessary for asymptotic convergence to the true sparse signal. With this structure, unfolded ISTA can attain a linear convergence, which is better than the sublinear convergence of ISTA/FISTA in general cases. Furthermore, we propose to incorporate thresholding in the network to perform support selection, which is easy to implement and able to boost the convergence rate both theoretically and empirically. Extensive simulations, including sparse vector recovery and a compressive sensing experiment on real image data, corroborate our theoretical results and demonstrate their practical usefulness. We have made our codes publicly available.", "title": "" }, { "docid": "37c8fa72d0959a64460dbbe4fdb8c296", "text": "This paper presents a model which generates architectural layout for a single flat having regular shaped spaces: Bedroom, Bathroom, Kitchen, Balcony, Living and Dining Room. Using constraints at two levels: Topological (Adjacency, Compactness, Vaastu, and Open and Closed face constraints) and Dimensional (Length to Width ratio constraint), Genetic Algorithms have been used to generate the topological arrangement of spaces in the layout and further, if required, feasibility has been dimensionally analyzed. Further, easy evacuation from the selected layout in case of adversity has been proposed using Dijkstra's Algorithm. Later the proposed model has been tested for efficiency using various test cases. 
This paper also presents a classification and categorization of various problems of space planning.", "title": "" }, { "docid": "885b7e9fb662d938fc8264597fa070b8", "text": "Learning word embeddings on large unlabeled corpus has been shown to be successful in improving many natural language tasks. The most efficient and popular approaches learn or retrofit such representations using additional external data. Resulting embeddings are generally better than their corpus-only counterparts, although such resources cover a fraction of words in the vocabulary. In this paper, we propose a new approach, Dict2vec, based on one of the largest yet refined datasource for describing words – natural language dictionaries. Dict2vec builds new word pairs from dictionary entries so that semantically-related words are moved closer, and negative sampling filters out pairs whose words are unrelated in dictionaries. We evaluate the word representations obtained using Dict2vec on eleven datasets for the word similarity task and on four datasets for a text classification task.", "title": "" }, { "docid": "95410e1bfb8a5f42ff949d061b1cd4b9", "text": "This paper presents a high-level hand feature extraction method for real-time gesture recognition. Firstly, the fingers are modelled as cylindrical objects due to their parallel edge feature. Then a novel algorithm is proposed to directly extract fingers from salient hand edges. Considering the hand geometrical characteristics, the hand posture is segmented and described based on the finger positions, palm center location and wrist position. A weighted radial projection algorithm with the origin at the wrist position is applied to localize each finger. The developed system can not only extract extensional fingers but also flexional fingers with high accuracy. Furthermore, hand rotation and finger angle variation have no effect on the algorithm performance. 
The orientation of the gesture can be calculated without the aid of arm direction and it would not be disturbed by the bare arm area. Experiments have been performed to demonstrate that the proposed method can directly extract high-level hand feature and estimate hand poses in real-time.", "title": "" }, { "docid": "ceedf70c92099fc8612a38f91f2c9507", "text": "Recent work has demonstrated the value of social media monitoring for health surveillance (e.g., tracking influenza or depression rates). It is an open question whether such data can be used to make causal inferences (e.g., determining which activities lead to increased depression rates). Even in traditional, restricted domains, estimating causal effects from observational data is highly susceptible to confounding bias. In this work, we estimate the effect of exercise on mental health from Twitter, relying on statistical matching methods to reduce confounding bias. We train a text classifier to estimate the volume of a user’s tweets expressing anxiety, depression, or anger, then compare two groups: those who exercise regularly (identified by their use of physical activity trackers like Nike+), and a matched control group. We find that those who exercise regularly have significantly fewer tweets expressing depression or anxiety; there is no significant difference in rates of tweets expressing anger. We additionally perform a sensitivity analysis to investigate how the many experimental design choices in such a study impact the final conclusions, including the quality of the classifier and the construction of the control group.", "title": "" }, { "docid": "3eb8a99236905f59af8a32e281189925", "text": "F2FS is a Linux file system designed to perform well on modern flash storage devices. The file system builds on append-only logging and its key design decisions were made with the characteristics of flash storage in mind. 
This paper describes the main design ideas, data structures, algorithms and the resulting performance of F2FS. Experimental results highlight the desirable performance of F2FS; on a state-of-the-art mobile system, it outperforms EXT4 under synthetic workloads by up to 3.1× (iozone) and 2× (SQLite). It reduces elapsed time of several realistic workloads by up to 40%. On a server system, F2FS is shown to perform better than EXT4 by up to 2.5× (SATA SSD) and 1.8× (PCIe SSD).", "title": "" } ]
scidocsrr
9edda51d7574f2e83972bb4d6b033a3f
A review of methods for automatic understanding of natural language mathematical problems
[ { "docid": "de43054eb774df93034ffc1976a932b7", "text": "Recent experiments in programming natural language question-answering systems are reviewed to summarize the methods that have been developed for syntactic, semantic, and logical analysis of English strings. It is concluded that at least minimally effective techniques have been devised for answering questions from natural language subsets in small scale experimental systems and that a useful paradigm has evolved to guide research efforts in the field. Current approaches to semantic analysis and logical inference are seen to be effective beginnings but of questionable generality with respect either to subtle aspects of meaning or to applications over large subsets of English. Generalizing from current small-scale experiments to language-processing systems based on dictionaries with thousands of entries—with correspondingly large grammars and semantic systems—may entail a new order of complexity and require the invention and development of entirely different approaches to semantic analysis and question answering.", "title": "" } ]
[ { "docid": "d9e09589352431cafb6e579faf91afa8", "text": "The purpose of this study was to investigate the effects of training muscle groups 1 day per week using a split-body routine (SPLIT) vs. 3 days per week using a total-body routine (TOTAL) on muscular adaptations in well-trained men. Subjects were 20 male volunteers (height = 1.76 ± 0.05 m; body mass = 78.0 ± 10.7 kg; age = 23.5 ± 2.9 years) recruited from a university population. Participants were pair matched according to baseline strength and then randomly assigned to 1 of the 2 experimental groups: a SPLIT, where multiple exercises were performed for a specific muscle group in a session with 2-3 muscle groups trained per session (n = 10) or a TOTAL, where 1 exercise was performed per muscle group in a session with all muscle groups trained in each session (n = 10). Subjects were tested pre- and poststudy for 1 repetition maximum strength in the bench press and squat, and muscle thickness (MT) of forearm flexors, forearm extensors, and vastus lateralis. Results showed significantly greater increases in forearm flexor MT for TOTAL compared with SPLIT. No significant differences were noted in maximal strength measures. The findings suggest a potentially superior hypertrophic benefit to higher weekly resistance training frequencies.", "title": "" }, { "docid": "166b16222ecc15048972e535dbf4cb38", "text": "Fingerprint matching systems generally use four types of representation schemes: grayscale image, phase image, skeleton image, and minutiae, among which minutiae-based representation is the most widely adopted one. The compactness of minutiae representation has created an impression that the minutiae template does not contain sufficient information to allow the reconstruction of the original grayscale fingerprint image. This belief has now been shown to be false; several algorithms have been proposed that can reconstruct fingerprint images from minutiae templates. 
These techniques try to either reconstruct the skeleton image, which is then converted into the grayscale image, or reconstruct the grayscale image directly from the minutiae template. However, they have a common drawback: Many spurious minutiae not included in the original minutiae template are generated in the reconstructed image. Moreover, some of these reconstruction techniques can only generate a partial fingerprint. In this paper, a novel fingerprint reconstruction algorithm is proposed to reconstruct the phase image, which is then converted into the grayscale image. The proposed reconstruction algorithm not only gives the whole fingerprint, but the reconstructed fingerprint contains very few spurious minutiae. Specifically, a fingerprint image is represented as a phase image which consists of the continuous phase and the spiral phase (which corresponds to minutiae). An algorithm is proposed to reconstruct the continuous phase from minutiae. The proposed reconstruction algorithm has been evaluated with respect to the success rates of type-I attack (match the reconstructed fingerprint against the original fingerprint) and type-II attack (match the reconstructed fingerprint against different impressions of the original fingerprint) using a commercial fingerprint recognition system. Given the reconstructed image from our algorithm, we show that both types of attacks can be successfully launched against a fingerprint recognition system.", "title": "" }, { "docid": "0ff76204fcdf1a7cf2a6d13a5d3b1597", "text": "In this study, we found that the optimum take-off angle for a long jumper may be predicted by combining the equation for the range of a projectile in free flight with the measured relations between take-off speed, take-off height and take-off angle for the athlete. The prediction method was evaluated using video measurements of three experienced male long jumpers who performed maximum-effort jumps over a wide range of take-off angles. 
To produce low take-off angles the athletes used a long and fast run-up, whereas higher take-off angles were produced using a progressively shorter and slower run-up. For all three athletes, the take-off speed decreased and the take-off height increased as the athlete jumped with a higher take-off angle. The calculated optimum take-off angles were in good agreement with the athletes' competition take-off angles.", "title": "" }, { "docid": "a7bd7a5b7d79ce8c5691abfdcecfeec7", "text": "We consider the problems of learning forward models that map state to high-dimensional images and inverse models that map high-dimensional images to state in robotics. Specifically, we present a perceptual model for generating video frames from state with deep networks, and provide a framework for its use in tracking and prediction tasks. We show that our proposed model greatly outperforms standard deconvolutional methods and GANs for image generation, producing clear, photo-realistic images. We also develop a convolutional neural network model for state estimation and compare the result to an Extended Kalman Filter to estimate robot trajectories. We validate all models on a real robotic system.", "title": "" }, { "docid": "82a3fe6dfa81e425eb3aa3404799e72d", "text": "ABSTRACT: The nonlinear control problem for a missile autopilot is quick adaptation and minimization of the error between the desired acceleration and the response of the nonlinear missile model. For this, several missile controllers have been proposed, based either on nonlinear control or on linear control designed for the linearized missile system. In this paper a linear controller of the dynamic matrix type is proposed for the linear model of the missile. In the first section, an approximate two-degrees-of-freedom missile model, known as the Horton model, is introduced. Then, the nonlinear model is converted into an observable and controllable model based on input-state feedback linearization. 
Finally, for the controller design, dynamic matrix flight control is used; this is one of the linear predictive control design methods based on the system's step response information. The controller is a recursive method which calculates the evolution of the system input by defining and optimizing a cost function using the system's dynamic matrix. Based on the applied inputs and previous output information, the missile acceleration is then calculated. Unlike other controllers, this controller does not require an interaction model or an accurate plant model, although it introduces prediction and control horizons, which non-predictive methods do not have.", "title": "" }, { "docid": "a984a54369a1db6a0165a96695c94de5", "text": "IT projects have certain features that make them different from other engineering projects. These include increased complexity and higher chances of project failure. To increase the chances of an IT project to be perceived as successful by all the parties involved in the project from its conception, development and implementation, it is necessary to identify at the outset of the project what the important factors influencing that success are. Current methodologies and tools used for identifying, classifying and evaluating the indicators of success in IT projects have several limitations that can be overcome by employing the new methodology presented in this paper. This methodology is based on using Fuzzy Cognitive Maps (FCM) for mapping success, modelling Critical Success Factors (CSFs) perceptions and the relations between them. This is an area where FCM has never been applied before. 
The applicability of the FCM methodology is demonstrated through a case study based on a new project idea, the Mobile Payment System (MPS) Project, related to the fast evolving world of mobile telecommunications.", "title": "" }, { "docid": "bbe59dd74c554d92167f42701a1f8c3d", "text": "Finding subgraph isomorphisms is an important problem in many applications which deal with data modeled as graphs. While this problem is NP-hard, in recent years, many algorithms have been proposed to solve it in a reasonable time for real datasets using different join orders, pruning rules, and auxiliary neighborhood information. However, since they have not been empirically compared one another in most research work, it is not clear whether the later work outperforms the earlier work. Another problem is that reported comparisons were often done using the original authors’ binaries which were written in different programming environments. In this paper, we address these serious problems by re-implementing five state-of-the-art subgraph isomorphism algorithms in a common code base and by comparing them using many real-world datasets and their query loads. Through our in-depth analysis of experimental results, we report surprising empirical findings.", "title": "" }, { "docid": "627d938cf2194cd0cab09f36a0bd50a9", "text": "This chapter focuses on the why, what, and how of bodily expression analysis for automatic affect recognition. It first asks the question of ‘why bodily expression?’ and attempts to find answers by reviewing the latest bodily expression perception literature. The chapter then turns its attention to the question of ‘what are the bodily expressions recognized automatically?’ by providing an overview of the automatic bodily expression recognition literature. 
The chapter then provides representative answers to how bodily expression analysis can aid affect recognition by describing three case studies: (1) data acquisition and annotation of the first publicly available database of affective face-and-body displays (i.e., the FABO database); (2) a representative approach for affective state recognition from face-and-body display by detecting the space-time interest points in video and using Canonical Correlation Analysis (CCA) for fusion, and (3) a representative approach for explicit detection of the temporal phases (segments) of affective states (start/end of the expression and its subdivision into phases such as neutral, onset, apex, and offset) from bodily expressions. The chapter concludes by summarizing the main challenges faced and discussing how we can advance the state of the art in the field.", "title": "" }, { "docid": "e9aea5919d3d38184fc13c10f1751293", "text": "The distinct protein aggregates that are found in Alzheimer's, Parkinson's, Huntington's and prion diseases seem to cause these disorders. Small intermediates — soluble oligomers — in the aggregation process can confer synaptic dysfunction, whereas large, insoluble deposits might function as reservoirs of the bioactive oligomers. These emerging concepts are exemplified by Alzheimer's disease, in which amyloid β-protein oligomers adversely affect synaptic structure and plasticity. Findings in other neurodegenerative diseases indicate that a broadly similar process of neuronal dysfunction is induced by diffusible oligomers of misfolded proteins.", "title": "" }, { "docid": "a271371ba28be10b67e31ecca6f3aa88", "text": "The toxicity and repellency of the bioactive chemicals of clove (Syzygium aromaticum) powder, eugenol, eugenol acetate, and beta-caryophyllene were evaluated against workers of the red imported fire ant, Solenopsis invicta Buren. Clove powder applied at 3 and 12 mg/cm2 provided 100% ant mortality within 6 h, and repelled 99% within 3 h. 
Eugenol was the fastest acting compound against red imported fire ant compared with eugenol acetate, beta-caryophyllene, and clove oil. The LT50 values inclined exponentially with the increase in the application rate of the chemical compounds tested. However, repellency did not increase with the increase in the application rate of the chemical compounds tested, but did with the increase in exposure time. Eugenol, eugenol acetate, as well as beta-caryophyllene and clove oil may provide another tool for red imported fire ant integrated pest management, particularly in situations where conventional insecticides are inappropriate.", "title": "" }, { "docid": "88f60c6835fed23e12c56fba618ff931", "text": "Design of fault tolerant systems is a popular subject in flight control system design. In particular, adaptive control approach has been successful in recovering aircraft in a wide variety of different actuator/sensor failure scenarios. However, if the aircraft goes under a severe actuator failure, control system might not be able to adapt fast enough to changes in the dynamics, which would result in performance degradation or even loss of the aircraft. Inspired by the recent success of deep learning applications, this work builds a hybrid recurren-t/convolutional neural network model to estimate adaptation parameters for aircraft dynamics under actuator/engine faults. The model is trained offline from a database of different failure scenarios. In case of an actuator/engine failure, the model identifies adaptation parameters and feeds this information to the adaptive control system, which results in significantly faster convergence of the controller coefficients. 
The developed control system is implemented on a nonlinear 6-DOF F-16 aircraft, and the results show that the proposed architecture is especially beneficial in severe failure scenarios.", "title": "" }, { "docid": "c0a05cad5021b1e779682b50a53f25fd", "text": "We initiate the formal study of functional encryption by giving precise definitions of the concept and its security. Roughly speaking, functional encryption supports restricted secret keys that enable a key holder to learn a specific function of encrypted data, but learn nothing else about the data. For example, given an encrypted program the secret key may enable the key holder to learn the output of the program on a specific input without learning anything else about the program. We show that defining security for functional encryption is non-trivial. First, we show that a natural game-based definition is inadequate for some functionalities. We then present a natural simulation-based definition and show that it (provably) cannot be satisfied in the standard model, but can be satisfied in the random oracle model. We show how to map many existing concepts to our formalization of functional encryption and conclude with several interesting open problems in this young area.", "title": "" }, { "docid": "240c47d27533069f339d8eb090a637a9", "text": "This paper discusses the active and reactive power control method for a modular multilevel converter (MMC) based grid-connected PV system. The voltage vector space analysis is performed by using average value models for the feasibility analysis of reactive power compensation (RPC). 
The proposed double-loop control strategy enables the PV system to handle unidirectional active power flow and bidirectional reactive power flow. Experiments have been performed on a laboratory-scaled modular multilevel PV inverter. The experimental results verify the correctness and feasibility of the proposed strategy.", "title": "" }, { "docid": "3ae5e7ac5433f2449cd893e49f1b2553", "text": "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.", "title": "" }, { "docid": "1dfa61f341919dcb4169c167a92c2f43", "text": "This paper presents an algorithm for the detection of micro-crack defects in the multicrystalline solar cells. This detection goal is very challenging due to the presence of various types of image anomalies like dislocation clusters, grain boundaries, and other artifacts due to the spurious discontinuities in the gray levels. In this work, an algorithm featuring an improved anisotropic diffusion filter and advanced image segmentation technique is proposed. The methods and procedures are assessed using 600 electroluminescence images, comprising 313 intact and 287 defected samples. 
Results indicate that the methods and procedures can accurately detect micro-crack in solar cells with sensitivity, specificity, and accuracy averaging at 97%, 80%, and 88%, respectively.", "title": "" }, { "docid": "0c842ef34f1924e899e408309f306640", "text": "A single-tube 5' nuclease multiplex PCR assay was developed on the ABI 7700 Sequence Detection System (TaqMan) for the detection of Neisseria meningitidis, Haemophilus influenzae, and Streptococcus pneumoniae from clinical samples of cerebrospinal fluid (CSF), plasma, serum, and whole blood. Capsular transport (ctrA), capsulation (bexA), and pneumolysin (ply) gene targets specific for N. meningitidis, H. influenzae, and S. pneumoniae, respectively, were selected. Using sequence-specific fluorescent-dye-labeled probes and continuous real-time monitoring, accumulation of amplified product was measured. Sensitivity was assessed using clinical samples (CSF, serum, plasma, and whole blood) from culture-confirmed cases for the three organisms. The respective sensitivities (as percentages) for N. meningitidis, H. influenzae, and S. pneumoniae were 88.4, 100, and 91.8. The primer sets were 100% specific for the selected culture isolates. The ctrA primers amplified meningococcal serogroups A, B, C, 29E, W135, X, Y, and Z; the ply primers amplified pneumococcal serotypes 1, 2, 3, 4, 5, 6, 7, 8, 9, 10A, 11A, 12, 14, 15B, 17F, 18C, 19, 20, 22, 23, 24, 31, and 33; and the bexA primers amplified H. influenzae types b and c. Coamplification of two target genes without a loss of sensitivity was demonstrated. The multiplex assay was then used to test a large number (n = 4,113) of culture-negative samples for the three pathogens. Cases of meningococcal, H. influenzae, and pneumococcal disease that had not previously been confirmed by culture were identified with this assay. 
The ctrA primer set used in the multiplex PCR was found to be more sensitive (P < 0.0001) than the ctrA primers that had been used for meningococcal PCR testing at that time.", "title": "" }, { "docid": "c699ce2a06276f722bf91806378b11eb", "text": "The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines.", "title": "" }, { "docid": "0ac7db546c11b9d18897ceeb2e5be70f", "text": "A backstepping approach is proposed in this paper to cope with the failure of a quadrotor propeller. The presented methodology supposes to turn off also the motor which is opposite to the broken one. In this way, a birotor configuration with fixed propellers is achieved. The birotor is controlled to follow a planned emergency landing trajectory. Theory shows that the birotor can reach any point in the Cartesian space losing the possibility to control the yaw angle. Simulation tests are employed to validate the proposed controller design.", "title": "" }, { "docid": "0521fe73626d12a3962934cf2b2ee2e9", "text": "General as well as the MSW management in Thailand is reviewed in this paper. 
Topics include the MSW generation, sources, composition, and trends. The review, then, moves to sustainable solutions for MSW management, sustainable alternative approaches with an emphasis on an integrated MSW management. Information of waste in Thailand is also given at the beginning of this paper for better understanding of later contents. It is clear that no one single method of MSW disposal can deal with all materials in an environmentally sustainable way. As such, a suitable approach in MSW management should be an integrated approach that could deliver both environmental and economic sustainability. With increasing environmental concerns, the integrated MSW management system has a potential to maximize the useable waste materials as well as produce energy as a by-product. In Thailand, the compositions of waste (86%) are mainly organic waste, paper, plastic, glass, and metal. As a result, the waste in Thailand is suitable for an integrated MSW management. Currently, the Thai national waste management policy starts to encourage the local administrations to gather into clusters to establish central MSW disposal facilities with suitable technologies and reducing the disposal cost based on the amount of MSW generated. Keywords— MSW, management, sustainable, Thailand", "title": "" }, { "docid": "70b900d196f689caf9c3051cc27792ae", "text": "This paper describes the hardware and software design of the kidsize humanoid robot systems of the Darmstadt Dribblers in 2007. The robots are used as a vehicle for research in control of locomotion and behavior of autonomous humanoid robots and robot teams with many degrees of freedom and many actuated joints. The Humanoid League of RoboCup provides an ideal testbed for such aspects of dynamics in motion and autonomous behavior as the problem of generating and maintaining statically or dynamically stable bipedal locomotion is predominant for all types of vision guided motions during a soccer game. 
A modular software architecture as well as further technologies have been developed for the efficient and effective implementation and testing of modules for sensing, planning, behavior, and actions of humanoid robots.", "title": "" } ]
scidocsrr
a49cedfb08b746c108e496b0c9f8fa5e
An Ensemble Approach for Incremental Learning in Nonstationary Environments
[ { "docid": "101af2d0539fa1470e8acfcf7c728891", "text": "OnlineEnsembleLearning", "title": "" }, { "docid": "fc5782aa3152ca914c6ca5cf1aef84eb", "text": "We introduce Learn++, an algorithm for incremental training of neural network (NN) pattern classifiers. The proposed algorithm enables supervised NN paradigms, such as the multilayer perceptron (MLP), to accommodate new data, including examples that correspond to previously unseen classes. Furthermore, the algorithm does not require access to previously used data during subsequent incremental learning sessions, yet at the same time, it does not forget previously acquired knowledge. Learn++ utilizes ensemble of classifiers by generating multiple hypotheses using training data sampled according to carefully tailored distributions. The outputs of the resulting classifiers are combined using a weighted majority voting procedure. We present simulation results on several benchmark datasets as well as a real-world classification task. Initial results indicate that the proposed algorithm works rather well in practice. A theoretical upper bound on the error of the classifiers constructed by Learn++ is also provided.", "title": "" } ]
[ { "docid": "8f2be7a7f6b5f5ba1412e8635a6aa755", "text": "In this paper, we propose to infer music genre embeddings from audio datasets carrying semantic information about genres. We show that such embeddings can be used for disambiguating genre tags (identification of different labels for the same genre, tag translation from a tag system to another, inference of hierarchical taxonomies on these genre tags). These embeddings are built by training a deep convolutional neural network genre classifier with large audio datasets annotated with a flat tag system. We show empirically that they make it possible to retrieve the original taxonomy of a tag system, spot duplicate tags and translate tags from a tag system to another.", "title": "" }, { "docid": "16a30db315374b42d721a91bb5549763", "text": "The display units integrated in today's head-mounted displays (HMDs) provide only a limited field of view (FOV) to the virtual world. In order to present an undistorted view to the virtual environment (VE), the perspective projection used to render the VE has to be adjusted to the limitations caused by the HMD characteristics. In particular, the geometric field of view (GFOV), which defines the virtual aperture angle used for rendering of the 3D scene, is set up according to the display's field of view. A discrepancy between these two fields of view distorts the geometry of the VE in a way that either minifies or magnifies the imagery displayed to the user. Discrepancies between the geometric and physical FOV cause the imagery to be minified or magnified. This distortion has the potential to negatively or positively affect a user's perception of the virtual space, sense of presence, and performance on visual search tasks. In this paper we analyze if a user is consciously aware of perspective distortions of the VE displayed in the HMD.
We introduce a psychophysical calibration method to determine the HMD's actual field of view, which may vary from the nominal values specified by the manufacturer. Furthermore, we conducted an experiment to identify perspective projections for HMDs which are identified as natural by subjects---even if these perspectives deviate from the perspectives that are inherently defined by the display's field of view. We found that subjects evaluate a field of view as natural when it is larger than the actual field of view of the HMD---in some cases up to 50%.", "title": "" }, { "docid": "325d6c44ef7f4d4e642e882a56f439b7", "text": "In announcing the news that “post-truth” is the Oxford Dictionaries’ 2016 word of the year, the Chicago Tribune declared that “Truth is dead. Facts are passé.”1 Politicians have shoveled this mantra our direction for centuries, but during this past presidential election, they really rubbed our collective faces in it. To be fair, the word “post” isn’t to be taken to mean “after,” as in its normal sense, but rather as “irrelevant.” Careful observers of the recent US political campaigns came to appreciate this difference. Candidates spewed streams of rhetorical effluent that didn’t even pretend to pass the most perfunctory fact-checking smell test. As the Tribune noted, far too many voters either didn’t notice or didn’t care. That said, recognizing an unwelcome phenomenon isn’t the same as legitimizing it, and now the Oxford Dictionaries group has gone too far toward the latter. They say “post-truth” captures the “ethos, mood or preoccupations of [2016] to have lasting potential as a word of cultural significance.”1 I emphatically disagree. I don’t know what post-truth did capture, but it didn’t capture that. We need a phrase for the 2016 mood that’s a better fit. I propose the term “gaudy facts,” for it emphasizes the garish and tawdry nature of the recent political dialog. 
Further, “gaudy facts” has the advantage of avoiding the word truth altogether, since there’s precious little of that in political discourse anyway. I think our new term best captures the ethos and mood of today’s political delusionists. There’s no ground truth data in sight, all claims are imaginary and unsupported without pretense of facts, and distortion is reality. This seems to fit our present experience well. The only tangible remnant of reality that isn’t subsumed under our new term is the speakers’ underlying narcissism, but at least we’re closer than we were with “post-truth.” We need to forever banish the association of the word “truth” with “politics”—these two terms just don’t play well with each other. Lies, Damn Lies, and Fake News", "title": "" }, { "docid": "55658c75bcc3a12c1b3f276050f28355", "text": "Sensing systems such as biomedical implants, infrastructure monitoring systems, and military surveillance units are constrained to consume only picowatts to nanowatts in standby and active mode, respectively. This tight power budget places ultra-low power demands on all building blocks in the systems. This work proposes a voltage reference for use in such ultra-low power systems, referred to as the 2T voltage reference, which has been demonstrated in silicon across three CMOS technologies. Prototype chips in 0.13 μm show a temperature coefficient of 16.9 ppm/°C (best) and line sensitivity of 0.033%/V, while consuming 2.22 pW in 1350 μm2. The lowest functional Vdd is 0.5 V. The proposed design improves energy efficiency by 2 to 3 orders of magnitude while exhibiting better line sensitivity and temperature coefficient in less area, compared to other nanowatt voltage references. For process spread analysis, 49 dies are measured across two runs, showing the design exhibits comparable spreads in TC and output voltage to existing voltage references in the literature.
Digital trimming is demonstrated, and assisted one temperature point digital trimming, guided by initial samples with two temperature point trimming, enables TC < 50 ppm/°C and ±0.35% output precision across all 25 dies. Ease of technology portability is demonstrated with silicon measurement results in 65 nm, 0.13 μm, and 0.18 μm CMOS technologies.", "title": "" }, { "docid": "4edb9dea1e949148598279c0111c4531", "text": "This paper presents a design of highly effective triple band microstrip antenna for wireless communication applications. The triple band design is a metamaterial-based design for WLAN and WiMAX (2.4/3.5/5.6 GHz) applications. The triple band response is obtained by etching two circular and one rectangular split ring resonator (SRR) unit cells on the ground plane of a conventional patch operating at 3.56 GHz. The circular cells are introduced to resonate at 5.3 GHz for the upper WiMAX band, while the rectangular cell is designed to resonate at 2.45 GHz for the lower WLAN band. Furthermore, a novel complementary H-shaped unit cell oriented above the triple band antenna is proposed. The proposed H-shaped is being used as a lens to significantly increase the antenna gain. To investigate the left-handed behavior of the proposed H-shaped, extensive parametric study for the placement of each unit cell including the metamaterial lens, which is the main parameter affecting the antenna performance, is presented and discussed comprehensively. Good consistency between the measured and simulated results is achieved. The proposed antenna meets the requirements of WiMAX and WLAN standards with high peak realized gain.", "title": "" }, { "docid": "c93c690ecb038a87c351d9674f0a881a", "text": "Foot-operated computer interfaces have been studied since the inception of human--computer interaction.
Thanks to the miniaturisation and decreasing cost of sensing technology, there is an increasing interest exploring this alternative input modality, but no comprehensive overview of its research landscape. In this survey, we review the literature on interfaces operated by the lower limbs. We investigate the characteristics of users and how they affect the design of such interfaces. Next, we describe and analyse foot-based research prototypes and commercial systems in how they capture input and provide feedback. We then analyse the interactions between users and systems from the perspective of the actions performed in these interactions. Finally, we discuss our findings and use them to identify open questions and directions for future research.", "title": "" }, { "docid": "3ec603c63166167c88dc6d578a7c652f", "text": "Peer-to-peer (P2P) lending or crowdlending, is a recent innovation allows a group of individual or institutional lenders to lend funds to individuals or businesses in return for interest payment on top of capital repayments. The rapid growth of P2P lending marketplaces has heightened the need to develop a support system to help lenders make sound lending decisions. But realizing such system is challenging in the absence of formal credit data used by the banking sector. In this paper, we attempt to explore the possible connections between user credit risk and how users behave in the lending sites. We present the first analysis of user detailed clickstream data from a large P2P lending provider. Our analysis reveals that the users’ sequences of repayment histories and financial activities in the lending site, have significant predictive value for their future loan repayments. In the light of this, we propose a deep architecture named DeepCredit, to automatically acquire the knowledge of credit risk from the sequences of activities that users conduct on the site. 
Experiments on our large-scale real-world dataset show that our model generates a high accuracy in predicting both loan delinquency and default, and significantly outperforms a number of baselines and competitive alternatives.", "title": "" }, { "docid": "5dba3258382d9781287cdcb6b227153c", "text": "Mobile sensing systems employ various sensors in smartphones to extract human-related information. As the demand for sensing systems increases, a more effective mechanism is required to sense information about human life. In this paper, we present a systematic study on the feasibility and gaining properties of a crowdsensing system that primarily concerns sensing WiFi packets in the air. We propose that this method is effective for estimating urban mobility by using only a small number of participants. During a seven-week deployment, we collected smartphone sensor data, including approximately four million WiFi packets from more than 130,000 unique devices in a city. Our analysis of this dataset examines core issues in urban mobility monitoring, including feasibility, spatio-temporal coverage, scalability, and threats to privacy. Collectively, our findings provide valuable insights to guide the development of new mobile sensing systems for urban life monitoring.", "title": "" }, { "docid": "bfc8a36a8b3f1d74bad5f2e25ad3aae5", "text": "This paper presents a novel ac-dc power factor correction (PFC) power conversion architecture for a single-phase grid interface. The proposed architecture has significant advantages for achieving high efficiency, good power factor, and converter miniaturization, especially in low-to-medium power applications. The architecture enables twice-line-frequency energy to be buffered at high voltage with a large voltage swing, enabling reduction in the energy buffer capacitor size and the elimination of electrolytic capacitors. 
While this architecture can be beneficial with a variety of converter topologies, it is especially suited for the system miniaturization by enabling designs that operate at high frequency (HF, 3-30 MHz). Moreover, we introduce circuit implementations that provide efficient operation in this range. The proposed approach is demonstrated for an LED driver converter operating at a (variable) HF switching frequency (3-10 MHz) from 120 Vac, and supplying a 35 Vdc output at up to 30 W. The prototype converter achieves high efficiency (92%) and power factor (0.89), and maintains a good performance over a wide load range. Owing to the architecture and HF operation, the prototype achieves a high “box” power density of 50 W/in3 (“displacement” power density of 130 W/in3), with miniaturized inductors, ceramic energy buffer capacitors, and a small-volume EMI filter.", "title": "" }, { "docid": "fe5377214840549fbbb6ad520592191d", "text": "The ability to exert an appropriate amount of force on brain tissue during surgery is an important component of instrument handling. It allows surgeons to achieve the surgical objective effectively while maintaining a safe level of force in tool-tissue interaction. At the present time, this knowledge, and hence skill, is acquired through experience and is qualitatively conveyed from an expert surgeon to trainees. These forces can be assessed quantitatively by retrofitting surgical tools with sensors, thus providing a mechanism for improved performance and safety of surgery, and enhanced surgical training. This paper presents the development of a force-sensing bipolar forceps, with installation of a sensory system, that is able to measure and record interaction forces between the forceps tips and brain tissue in real time. This research is an extension of a previous research where a bipolar forceps was instrumented to measure dissection and coagulation forces applied in a single direction. 
Here, a planar forceps with two sets of strain gauges in two orthogonal directions was developed to enable measuring the forces with a higher accuracy. Implementation of two strain gauges allowed compensation of strain values due to deformations of the forceps in other directions (axial stiffening) and provided more accurate forces during microsurgery. An experienced neurosurgeon performed five neurosurgical tasks using the axial setup and repeated the same tasks using the planar device. The experiments were performed on cadaveric brains. Both setups were shown to be capable of measuring real-time interaction forces. Comparing the two setups, under the same experimental condition, indicated that the peak and mean forces quantified by planar forceps were at least 7% and 10% less than those of axial tool, respectively; therefore, utilizing readings of all strain gauges in planar forceps provides more accurate values of both peak and mean forces than axial forceps. Cross-correlation analysis between the two force signals obtained, one from each cadaveric practice, showed a high similarity between the two force signals.", "title": "" }, { "docid": "a6d4b6a0cd71a8e64c9a2429b95cd7da", "text": "Creativity research has traditionally focused on human creativity, and even more specifically, on the psychology of individual creative people. In contrast, computational creativity research involves the development and evaluation of creativity in a computational system. As we study the effect of scaling up from the creativity of a computational system and individual people to large numbers of diverse computational agents and people, we have a new perspective: creativity can ascribed to a computational agent, an individual person, collectives of people and agents and/or their interaction. By asking “Who is being creative?” this paper examines the source of creativity in computational and collective creativity. 
A framework based on ideation and interaction provides a way of characterizing existing research in computational and collective creativity and identifying directions for future research. Human and Computational Creativity Creativity is a topic of philosophical and scientific study considering the scenarios and human characteristics that facilitate creativity as well as the properties of computational systems that exhibit creative behavior. “The four Ps of creativity”, as introduced in Rhodes (1987) and more recently summarized by Runco (2011), decompose the complexity of creativity into separate but related influences: • Person: characteristics of the individual, • Product: an outcome focus on ideas, • Press: the environmental and contextual factors, • Process: cognitive process and thinking techniques. While the four Ps are presented in the context of the psychology of human creativity, they can be modified for computational creativity if process includes a computational process. The study of human creativity has a focus on the characteristics and cognitive behavior of creative people and the environments in which creativity is facilitated. The study of computational creativity, while inspired by concepts of human creativity, is often expressed in the formal language of search spaces and algorithms. Why do we ask who is being creative? Firstly, there is an increasing interest in understanding computational systems that can formalize or model creative processes and therefore exhibit creative behaviors or acts. Yet there are still skeptics that claim computers aren’t creative, the computer is just following instructions. Second and in contrast, there is increasing interest in computational systems that encourage and enhance human creativity that make no claims about whether the computer is being or could be creative. 
Finally, as we develop more capable socially intelligent computational systems and systems that enable collective intelligence among humans and computers, the boundary between human creativity and computer creativity blurs. As the boundary blurs, we need to develop ways of recognizing creativity that makes no assumptions about whether the creative entity is a person, a computer, a potentially large group of people, or the collective intelligence of human and computational entities. This paper presents a framework that characterizes the source of creativity from two perspectives, ideation and interaction, as a guide to current and future research in computational and collective creativity. Creativity: Process and Product Understanding the nature of creativity as process and product is critical in computational creativity if we want to avoid any bias that only humans are creative and computers are not. While process and product in creativity are tightly coupled in practice, a distinction between the two provides two ways of recognizing computational creativity by describing the characteristics of a creative process and separately, the characteristics of a creative product. Studying and describing the processes that generate creative products focus on the cognitive behavior of a creative person or the properties of a computational system, and describing ways of recognizing a creative product focus on the characteristics of the result of a creative process. When describing creative processes there is an assumption that there is a space of possibilities. Boden (2003) refers to this as conceptual spaces and describes these spaces as structured styles of thought. In computational systems such a space is called a state space. How such spaces are changed, or the relationship between the set of known products, the space of possibilities, and the potentially creative product, is the basis for describing processes that can generate potentially creative artifacts. 
There are many accounts of the processes for generating creative products. Two sources are described here: Boden (2003) from the philosophical and artificial intelligence perspective and Gero (2000) from the design science perspective. Boden (2003) describes three ways in which creative products can be generated: combination, exploration, and transformation: each one describes the way in which the conceptual space of known products provides a basis for generating a creative product and how the conceptual space changes as a result of the creative artifact. Combination brings together two or more concepts in ways that haven’t occurred in existing products. Exploration finds concepts in parts of the space that have not been considered in existing products. Transformation modifies concepts in the space to generate products that change the boundaries of the space. Gero (2000) describes computational processes for creative design as combination, transformation, analogy, emergence, and first principles. Combination and transformation are similar to Boden’s processes. Analogy transfers concepts from a source product that may be in a different conceptual space to a target product to generate a novel product in the target’s space. Emergence is a process that finds new underlying structures in a concept that give rise to a new product, effectively a re-representation process. First principles as a process generates new products without relying on concepts as defined in existing products. While these processes provide insight into the nature of creativity and provide a basis for computational creativity, they have little to say about how we recognize a creative product. As we move towards computational systems that enhance or contribute to human creativity, the articulation of process models for generating creative artifacts does not provide an evaluation of the product.
Computational systems that generate creative products need evaluation criteria that are independent of the process by which the product was generated. There are also numerous approaches to defining characteristics of creative products as the basis for evaluating or assessing creativity. Boden (2003) claims that novelty and value are the essential criteria and that other aspects, such as surprise, are kinds of novelty or value. Wiggins (2006) often uses value to indicate all valuable aspects of a creative products, yet provides definitions for novelty and value as different features that are relevant to creativity. Oman and Tumer (2009) combine novelty and quality to evaluate individual ideas in engineering design as a relative measure of creativity. Shah, Smith, and Vargas-Hernandez (2003) associate creative design with ideation and develop metrics for novelty, variety, quality, and quantity of ideas. Wiggins (2006) argues that surprise is a property of the receiver of a creative artifact, that is, it is an emotional response. Cropley and Cropley (2005) propose four broad properties of products that can be used to describe the level and kind of creativity they possess: effectiveness, novelty, elegance, genesis. Besemer and O'Quin (1987) describe a Creative Product Semantic Scale which defines the creativity of products in three dimensions: novelty (the product is original, surprising and germinal), resolution (the product is valuable, logical, useful, and understandable), and elaboration and synthesis (the product is organic, elegant, complex, and well-crafted). Horn and Salvendy (2006) after doing an analysis of many properties of creative products, report on consumer perception of creativity in three critical perceptions: affect (our emotional response to the product), importance, and novelty. 
Goldenberg and Mazursky (2002) report on research that has found the observable characteristics of creativity in products to include \"original, of value, novel, interesting, elegant, unique, surprising.\" Amabile (1982) says it most clearly when she summarizes the social psychology literature on the assessment of creativity: While most definitions of creativity refer to novelty, appropriateness, and surprise, current creativity tests or assessment techniques are not closely linked to these criteria. She further argues that “There is no clear, explicit statement of the criteria that conceptually underlie the assessment procedures.” In response to an inability to establish and define criteria for evaluating creativity that is acceptable to all domains, Amabile (1982, 1996) introduced a Consensual Assessment Technique (CAT) in which creativity is assessed by a group of judges that are knowledgeable of the field. Since then, several scales for assisting human evaluators have been developed to guide human evaluators, for example, Besemer and O'Quin's (1999) Creative Product Semantic Scale, Reis and Renzulli's (1991) Student Product Assessment Form, and Cropley et al’s (2011) Creative Solution Diagnosis Scale. Maher (2010) presents an AI approach to evaluating creativity of a product by measuring novelty, value and surprise that provides a formal model for evaluating creative products. Novelty is a measure of how different the product is from existing products and is measured as a distance from clusters of other products in a conceptual space, characterizing the artifact as similar but different. Value is a measure of how the creative product co", "title": "" }, { "docid": "c8453255bf200ed841229f5e637b2074", "text": "One very simple interpretation of calibration is to adjust a set of parameters associated with a computational science and engineering code so that the model agreement is maximized with respect to a set of experimental data. 
One very simple interpretation of validation is to quantify our belief in the predictive capability of a computational code through comparison with a set of experimental data. Uncertainty in both the data and the code are important and must be mathematically understood to correctly perform both calibration and validation. Sensitivity analysis, being an important methodology in uncertainty analysis, is thus important to both calibration and validation. In this paper, we intend to clarify the language just used and express some opinions on the associated issues. We will endeavor to identify some technical challenges that must be resolved for successful validation of a predictive modeling capability. One of these challenges is a formal description of a “model discrepancy” term. Another challenge revolves around the general adaptation of abstract learning theory as a formalism that potentially encompasses both calibration and validation in the face of model uncertainty. © 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "c93c0966ef744722d58bbc9170e9a8ab", "text": "Past research has generated mixed support among social scientists for the utility of social norms in accounting for human behavior. We argue that norms do have a substantial impact on human action; however, the impact can only be properly recognized when researchers (a) separate 2 types of norms that at times act antagonistically in a situation—injunctive norms (what most others approve or disapprove) and descriptive norms (what most others do)—and (b) focus Ss' attention principally on the type of norm being studied. 
In 5 natural settings, focusing Ss on either the descriptive norms or the injunctive norms regarding littering caused the Ss' littering decisions to change only in accord with the dictates of the then more salient type of norm.", "title": "" }, { "docid": "1b30c14536db1161b77258b1ce213fbb", "text": "Click-through rate (CTR) prediction and relevance ranking are two fundamental problems in web advertising. In this study, we address the problem of modeling the relationship between CTR and relevance for sponsored search. We used normalized relevance scores comparable across all queries to represent relevance when modeling with CTR, instead of directly using human judgment labels or relevance scores valid only within same query. We classified clicks by identifying their relevance quality using dwell time and session information, and compared all clicks versus selective clicks effects when modeling relevance.\n Our results showed that the cleaned click signal outperforms raw click signal and others we explored, in terms of relevance score fitting. The cleaned clicks include clicks with dwell time greater than 5 seconds and last clicks in session. Besides traditional thoughts that there is no linear relation between click and relevance, we showed that the cleaned click based CTR can be fitted well with the normalized relevance scores using a quadratic regression model. This relevance-click model could help to train ranking models using processed click feedback to complement expensive human editorial relevance labels, or better leverage relevance signals in CTR prediction.", "title": "" }, { "docid": "2bdf2abea3e137645f53d8a9b36327ad", "text": "The use of a general-purpose code, COLSYS, is described. The code is capable of solving mixed-order systems of boundary-value problems in ordinary differential equations. The method of spline collocation at Gaussian points is implemented using a B-spline basis. 
Approximate solutions are computed on a sequence of automatically selected meshes until a user-specified set of tolerances is satisfied. A damped Newton's method is used for the nonlinear iteration. The code has been found to be particularly effective for difficult problems. It is intended that a user be able to use COLSYS easily after reading its algorithm description. The use of the code is then illustrated by examples demonstrating its effectiveness and capabilities.", "title": "" }, { "docid": "2df35b05a40a646ba6f826503955601a", "text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM) and the RealSense facial detection system can be used to track patient demeanour for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and provide demeanour information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.", "title": "" }, { "docid": "57d40d18977bc332ba16fce1c3cf5a66", "text": "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. 
In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.", "title": "" }, { "docid": "d741b6f33ccfae0fc8f4a79c5c8aa9cb", "text": "A nonlinear optimal controller with a fuzzy gain scheduler has been designed and applied to a Line-Of-Sight (LOS) stabilization system. Use of Linear Quadratic Regulator (LQR) theory is an optimal and simple manner of solving many control engineering problems. However, this method cannot be utilized directly for multigimbal LOS systems since they are nonlinear in nature. To adapt LQ controllers to nonlinear systems at least a linearization of the model plant is required. When the linearized model is only valid within the vicinity of an operating point a gain scheduler is required. Therefore, a Takagi-Sugeno Fuzzy Inference System gain scheduler has been implemented, which keeps the asymptotic stability performance provided by the optimal feedback gain approach. The simulation results illustrate that the proposed controller is capable of overcoming disturbances and maintaining a satisfactory tracking performance. Keywords—Fuzzy Gain-Scheduling, Gimbal, Line-Of-Sight Stabilization, LQR, Optimal Control", "title": "" }, { "docid": "2950e3c1347c4adeeb2582046cbea4b8", "text": "We present Mime, a compact, low-power 3D sensor for unencumbered free-form, single-handed gestural interaction with head-mounted displays (HMDs). 
Mime introduces a real-time signal processing framework that combines a novel three-pixel time-of-flight (TOF) module with a standard RGB camera. The TOF module achieves accurate 3D hand localization and tracking, and it thus enables motion-controlled gestures. The joint processing of 3D information with RGB image data enables finer, shape-based gestural interaction.\n Our Mime hardware prototype achieves fast and precise 3D gestural control. Compared with state-of-the-art 3D sensors like TOF cameras, the Microsoft Kinect and the Leap Motion Controller, Mime offers several key advantages for mobile applications and HMD use cases: very small size, daylight insensitivity, and low power consumption. Mime is built using standard, low-cost optoelectronic components and promises to be an inexpensive technology that can either be a peripheral component or be embedded within the HMD unit. We demonstrate the utility of the Mime sensor for HMD interaction with a variety of application scenarios, including 3D spatial input using close-range gestures, gaming, on-the-move interaction, and operation in cluttered environments and in broad daylight conditions.", "title": "" }, { "docid": "3fd6a5960d40fa98051f7178b1abb8bd", "text": "On average, resource-abundant countries have experienced lower growth over the last four decades than their resource-poor counterparts. But the most interesting aspect of the paradox of plenty is not the average effect of natural resources, but its variation. For every Nigeria or Venezuela there is a Norway or a Botswana. Why do natural resources induce prosperity in some countries but stagnation in others? This paper gives an overview of the dimensions along which resource-abundant winners and losers differ. In light of this, it then discusses different theory models of the resource curse, with a particular emphasis on recent developments in political economy.", "title": "" } ]
scidocsrr
366061cc202731f6c17afeb18d38db19
The DSM diagnostic criteria for gender identity disorder in adolescents and adults.
[ { "docid": "e61d7b44a39c5cc3a77b674b2934ba40", "text": "The sexual behaviors and attitudes of male-to-female (MtF) transsexuals have not been investigated systematically. This study presents information about sexuality before and after sex reassignment surgery (SRS), as reported by 232 MtF patients of one surgeon. Data were collected using self-administered questionnaires. The mean age of participants at time of SRS was 44 years (range, 18-70 years). Before SRS, 54% of participants had been predominantly attracted to women and 9% had been predominantly attracted to men. After SRS, these figures were 25% and 34%, respectively.Participants' median numbers of sexual partners before SRS and in the last 12 months after SRS were 6 and 1, respectively. Participants' reported number of sexual partners before SRS was similar to the number of partners reported by male participants in the National Health and Social Life Survey (NHSLS). After SRS, 32% of participants reported no sexual partners in the last 12 months, higher than reported by male or female participants in the NHSLS. Bisexual participants reported more partners before and after SRS than did other participants. 49% of participants reported hundreds of episodes or more of sexual arousal to cross-dressing or cross-gender fantasy (autogynephilia) before SRS; after SRS, only 3% so reported. More frequent autogynephilic arousal after SRS was correlated with more frequent masturbation, a larger number of sexual partners, and more frequent partnered sexual activity. 85% of participants experienced orgasm at least occasionally after SRS and 55% ejaculated with orgasm.", "title": "" }, { "docid": "a4a15096e116a6afc2730d1693b1c34f", "text": "The present study reports on the construction of a dimensional measure of gender identity (gender dysphoria) for adolescents and adults. 
The 27-item gender identity/gender dysphoria questionnaire for adolescents and adults (GIDYQ-AA) was administered to 389 university students (heterosexual and nonheterosexual) and 73 clinic-referred patients with gender identity disorder. Principal axis factor analysis indicated that a one-factor solution, accounting for 61.3% of the total variance, best fits the data. Factor loadings were all >or= .30 (median, .82; range, .34-.96). A mean total score (Cronbach's alpha, .97) was computed, which showed strong evidence for discriminant validity in that the gender identity patients had significantly more gender dysphoria than both the heterosexual and nonheterosexual university students. Using a cut-point of 3.00, we found the sensitivity was 90.4% for the gender identity patients and specificity was 99.7% for the controls. The utility of the GIDYQ-AA is discussed.", "title": "" } ]
[ { "docid": "1f629796e9180c14668e28b83dc30675", "text": "In this article we tackle the issue of searchable encryption with a generalized query model. Departing from many previous works that focused on queries consisting of a single keyword, we consider the case of queries consisting of arbitrary boolean expressions on keywords, that is to say conjunctions and disjunctions of keywords and their complement. Our construction of boolean symmetric searchable encryption BSSE is mainly based on the orthogonalization of the keyword field according to the Gram-Schmidt process. Each document stored in an outsourced server is associated with a label which contains all the keywords corresponding to the document, and searches are performed by way of a simple inner product. Furthermore, the queries in the BSSE scheme are randomized. This randomization hides the search pattern of the user since the search results cannot be associated deterministically to queries. We formally define an adaptive security model for the BSSE scheme. In addition, the search complexity is in $O(n)$ where $n$ is the number of documents stored in the outsourced server.", "title": "" }, { "docid": "98aec0805e83e344a6b9898fb65e1a11", "text": "Technology offers the potential to objectively monitor people's eating and activity behaviors and encourage healthier lifestyles. BALANCE is a mobile phone-based system for long term wellness management. The BALANCE system automatically detects the user's caloric expenditure via sensor data from a Mobile Sensing Platform unit worn on the hip. Users manually enter information on foods eaten via an interface on an N95 mobile phone. Initial validation experiments measuring oxygen consumption during treadmill walking and jogging show that the system's estimate of caloric output is within 87% of the actual value. 
Future work will refine and continue to evaluate the system's efficacy and develop more robust data input and activity inference methods.", "title": "" }, { "docid": "670b58d379b7df273309e55cf8e25db4", "text": "In this paper, we introduce a new large-scale dataset of ships, called SeaShips, which is designed for training and evaluating ship object detection algorithms. The dataset currently consists of 31 455 images and covers six common ship types (ore carrier, bulk cargo carrier, general cargo ship, container ship, fishing boat, and passenger ship). All of the images are from about 10 080 real-world video segments, which are acquired by the monitoring cameras in a deployed coastline video surveillance system. They are carefully selected to mostly cover all possible imaging variations, for example, different scales, hull parts, illumination, viewpoints, backgrounds, and occlusions. All images are annotated with ship-type labels and high-precision bounding boxes. Based on the SeaShips dataset, we present the performance of three detectors as a baseline to do the following: 1) elementarily summarize the difficulties of the dataset for ship detection; 2) show detection results for researchers using the dataset; and 3) make a comparison to identify the strengths and weaknesses of the baseline algorithms. In practice, the SeaShips dataset would hopefully advance research and applications on ship detection.", "title": "" }, { "docid": "a76ba02ef0f87a41cdff1a4046d4bba1", "text": "This paper proposes two RF self-interference cancellation techniques. Their small form-factor enables full-duplex communication links for small-to-medium size portable devices and hence promotes the adoption of full-duplex in mass-market applications and next-generation standards, e.g. IEEE802.11 and 5G. 
Measured prototype implementations of an electrical balance duplexer and a dual-polarized antenna both achieve >50 dB self-interference suppression at RF, operating in the ISM band at 2.45GHz.", "title": "" }, { "docid": "0be3de2b6f0dd5d3158cc7a98286d571", "text": "The use of tablet PCs is spreading rapidly, and accordingly users browsing and inputting personal information in public spaces can often be seen by third parties. Unlike conventional mobile phones and notebook PCs equipped with distinct input devices (e.g., keyboards), tablet PCs have touchscreen keyboards for data input. Such integration of display and input device increases the potential for harm when the display is captured by malicious attackers. This paper presents the description of reconstructing tablet PC displays via measurement of electromagnetic (EM) emanation. In conventional studies, such EM display capture has been achieved by using non-portable setups. Those studies also assumed that a large amount of time was available in advance of capture to obtain the electrical parameters of the target display. In contrast, this paper demonstrates that such EM display capture is feasible in real time by a setup that fits in an attaché case. The screen image reconstruction is achieved by performing a prior course profiling and a complemental signal processing instead of the conventional fine parameter tuning. Such complemental processing can eliminate the differences of leakage parameters among individuals and therefore correct the distortions of images. The attack distance, 2 m, makes this method a practical threat to general tablet PCs in public places. This paper discusses possible attack scenarios based on the setup described above. 
In addition, we describe a mechanism of EM emanation from tablet PCs and a countermeasure against such EM display capture.", "title": "" }, { "docid": "b0cba371bb9628ac96a9ae2bb228f5a9", "text": "Graph-based recommendation approaches can model associations between users and items alongside additional contextual information. Recent studies demonstrated that representing features extracted from social media (SM) auxiliary data, like friendships, jointly with traditional users/items ratings in the graph, contribute to recommendation accuracy. In this work, we take a step further and propose an extended graph representation that includes socio-demographic and personal traits extracted from the content posted by the user on SM. Empirical results demonstrate that processing unstructured textual information collected from Twitter and representing it in structured form in the graph improves recommendation performance, especially in cold start conditions.", "title": "" }, { "docid": "f5703292e4c722332dcd85b172a3d69e", "text": "Since an ever-increasing part of the population makes use of social media in their day-to-day lives, social media data is being analysed in many different disciplines. The social media analytics process involves four distinct steps, data discovery, collection, preparation, and analysis. While there is a great deal of literature on the challenges and difficulties involving specific data analysis methods, there hardly exists research on the stages of data discovery, collection, and preparation. To address this gap, we conducted an extended and structured literature analysis through which we identified challenges addressed and solutions proposed. The literature search revealed that the volume of data was most often cited as a challenge by researchers. In contrast, other categories have received less attention. Based on the results of the literature search, we discuss the most important challenges for researchers and present potential solutions. 
The findings are used to extend an existing framework on social media analytics. The article provides benefits for researchers and practitioners who wish to collect and analyse social media data.", "title": "" }, { "docid": "4ae0bb75493e5d430037ba03fcff4054", "text": "David Moher is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, and the Department of Epidemiology and Community Medicine, Faculty of Medicine, University of Ottawa, Ottawa, Ontario, Canada. Alessandro Liberati is at the Università di Modena e Reggio Emilia, Modena, and the Centro Cochrane Italiano, Istituto Ricerche Farmacologiche Mario Negri, Milan, Italy. Jennifer Tetzlaff is at the Ottawa Methods Centre, Ottawa Hospital Research Institute, Ottawa, Ontario. Douglas G Altman is at the Centre for Statistics in Medicine, University of Oxford, Oxford, United Kingdom. Membership of the PRISMA Group is provided in the Acknowledgements.", "title": "" }, { "docid": "9a5ef746c96a82311e3ebe8a3476a5f4", "text": "A magnetic-tip steerable needle is presented with application to aiding deep brain stimulation electrode placement. The magnetic needle is 1.3mm in diameter at the tip with a 0.7mm diameter shaft, which is selected to match the size of a deep brain stimulation electrode. The tip orientation is controlled by applying torques to the embedded neodymium-iron-boron permanent magnets with a clinically-sized magnetic-manipulation system. The prototype design is capable of following trajectories under human-in-the-loop control with minimum bend radii of 100mm without inducing tissue damage and down to 30mm if some tissue damage is tolerable. The device can be retracted and redirected to reach multiple targets with a single insertion point.", "title": "" }, { "docid": "3d8cd89ae0b69ff4820f253aec3dbbeb", "text": "The importance of information as a resource for economic growth and education is steadily increasing. 
Due to technological advances in computer industry and the explosive growth of the Internet much valuable information will be available in digital libraries. This paper introduces a system that aims to support a user's browsing activities in document sets retrieved from a digital library. Latent Semantic Analysis is applied to extract salient semantic structures and citation patterns of documents stored in a digital library in a computationally expensive batch job. At retrieval time, cluster techniques are used to organize retrieved documents into clusters according to the previously extracted semantic similarities. A modified Boltzman algorithm [1] is employed to spatially organize the resulting clusters and their documents in the form of a three-dimensional information landscape or \"i-scape\". The i-scape is then displayed for interactive exploration via a multi-modal, virtual reality CAVE interface [8]. Users' browsing activities are recorded and user models are extracted to give newcomers online help based on previous navigation activity as well as to enable experienced users to recognize and exploit past user traces. In this way, the system provides interactive services to assist users in the spatial navigation, interpretation, and detailed exploration of potentially large document sets matching a query.", "title": "" }, { "docid": "2cd5075ed124f933fe56fe1dd566df22", "text": "We introduce MIDI-VAE, a neural network model based on Variational Autoencoders that is capable of handling polyphonic music with multiple instrument tracks, as well as modeling the dynamics of music by incorporating note durations and velocities. We show that MIDI-VAE can perform style transfer on symbolic music by automatically changing pitches, dynamics and instruments of a music piece from, e.g., a Classical to a Jazz style. We evaluate the efficacy of the style transfer by training separate style validation classifiers. 
Our model can also interpolate between short pieces of music, produce medleys and create mixtures of entire songs. The interpolations smoothly change pitches, dynamics and instrumentation to create a harmonic bridge between two music pieces. To the best of our knowledge, this work represents the first successful attempt at applying neural style transfer to complete musical compositions.", "title": "" }, { "docid": "c536e79078d7d5778895e5ac7f02c95e", "text": "Block-based programming languages like Scratch, Alice and Blockly are becoming increasingly common as introductory languages in programming education. There is substantial research showing that these visual programming environments are suitable for teaching programming concepts. But, what do people do when they use Scratch? In this paper we explore the characteristics of Scratch programs. To this end we have scraped the Scratch public repository and retrieved 250,000 projects. We present an analysis of these projects in three different dimensions. Initially, we look at the types of blocks used and the size of the projects. We then investigate complexity, used abstractions and programming concepts. Finally we detect code smells such as large scripts, dead code and duplicated code blocks. Our results show that 1) most Scratch programs are small, however Scratch programs consisting of over 100 sprites exist, 2) programming abstraction concepts like procedures are not commonly used and 3) Scratch programs do suffer from code smells including large scripts and unmatched broadcast signals.", "title": "" }, { "docid": "b5d22d191745e4b94c6b7784b52c8ed8", "text": "One of the biggest problems of SMEs is their tendencies to financial distress because of insufficient finance background. In this study, an early warning system (EWS) model based on data mining for financial risk detection is presented. CHAID algorithm has been used for development of the EWS. 
Developed EWS can be served like a tailor made financial advisor in decision making process of the firms with its automated nature to the ones who have inadequate financial background. Besides, an application of the model implemented which covered 7853 SMEs based on Turkish Central Bank (TCB) 2007 data. By using EWS model, 31 risk profiles, 15 risk indicators, 2 early warning signals, and 4 financial road maps has been determined for financial risk mitigation. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4ad261905326b55a40569ebbc549a67c", "text": "OBJECTIVES\nTo analyze the Spanish experience in an international study which evaluated tocilizumab in patients with rheumatoid arthritis (RA) and an inadequate response to conventional disease-modifying antirheumatic drugs (DMARDs) or tumor necrosis factor inhibitors (TNFis) in a clinical practice setting.\n\n\nMATERIAL AND METHODS\nSubanalysis of 170 patients with RA from Spain who participated in a phase IIIb, open-label, international clinical trial. Patients presented inadequate response to DMARDs or TNFis. They received 8mg/kg of tocilizumab every 4 weeks in combination with a DMARD or as monotherapy during 20 weeks. Safety and efficacy of tocilizumab were analyzed. Special emphasis was placed on differences between failure to a DMARD or to a TNFi and the need to switch to tocilizumab with or without a washout period in patients who had previously received TNFi.\n\n\nRESULTS\nThe most common adverse events were infections (25%), increased total cholesterol (38%) and transaminases (15%). Five patients discontinued the study due to an adverse event. After six months of tocilizumab treatment, 71/50/30% of patients had ACR 20/50/70 responses, respectively. 
A higher proportion of TNFi-naive patients presented an ACR20 response: 76% compared to 64% in the TNFi group with previous washout and 66% in the TNFi group without previous washout.\n\n\nCONCLUSIONS\nSafety results were consistent with previous results in patients with RA and an inadequate response to DMARDs or TNFis. Tocilizumab is more effective in patients who did not respond to conventional DMARDs than in patients who did not respond to TNFis.", "title": "" }, { "docid": "a87c60deb820064abaa9093398937ff3", "text": "Cardiac arrhythmia is one of the most important indicators of heart disease. Premature ventricular contractions (PVCs) are a common form of cardiac arrhythmia caused by ectopic heartbeats. The detection of PVCs by means of ECG (electrocardiogram) signals is important for the prediction of possible heart failure. This study focuses on the classification of PVC heartbeats from ECG signals and, in particular, on the performance evaluation of selected features using genetic algorithms (GA) to the classification of PVC arrhythmia. The objective of this study is to apply GA as a feature selection method to select the best feature subset from 200 time series features and to integrate these best features to recognize PVC forms. Neural networks, support vector machines and k-nearest neighbour classification algorithms were used. Findings were expressed in terms of accuracy, sensitivity, and specificity for the MIT-BIH Arrhythmia Database. The results showed that the proposed model achieved higher accuracy rates than those of other works on this topic.", "title": "" }, { "docid": "5ea912d602b0107ae9833292da22b800", "text": "We propose a novel and flexible anchor mechanism named MetaAnchor for object detection frameworks. Unlike many previous detectors model anchors via a predefined manner, in MetaAnchor anchor functions could be dynamically generated from the arbitrary customized prior boxes. 
Taking advantage of weight prediction, MetaAnchor is able to work with most of the anchor-based object detection systems such as RetinaNet. Compared with the predefined anchor scheme, we empirically find that MetaAnchor is more robust to anchor settings and bounding box distributions; in addition, it also shows the potential on transfer tasks. Our experiment on COCO detection task shows that MetaAnchor consistently outperforms the counterparts in various scenarios.", "title": "" }, { "docid": "866b95a50dede975eeff9aeec91a610b", "text": "In this paper, we focus on differential privacy preserving spectral graph analysis. Spectral graph analysis deals with the analysis of the spectra (eigenvalues and eigenvector components) of the graph’s adjacency matrix or its variants. We develop two approaches to computing the ε-differential eigen decomposition of the graph’s adjacency matrix. The first approach, denoted as LNPP, is based on the Laplace Mechanism that calibrates Laplace noise on the eigenvalues and every entry of the eigenvectors based on their sensitivities. We derive the global sensitivities of both eigenvalues and eigenvectors based on the matrix perturbation theory. Because the output eigenvectors after perturbation are no longer orthonormal, we postprocess the output eigenvectors by using the state-of-the-art vector orthogonalization technique. The second approach, denoted as SBMF, is based on the exponential mechanism and the properties of the matrix Bingham-von Mises-Fisher density for network data spectral analysis. We prove that the sampling procedure achieves differential privacy. We conduct empirical evaluation on a real social network data and compare the two approaches in terms of utility preservation (the accuracy of spectra and the accuracy of low rank approximation) under the same differential privacy threshold. 
Our empirical evaluation results show that LNPP generally incurs smaller utility loss.", "title": "" }, { "docid": "a7317f3f1b4767f20c38394e519fa0d8", "text": "The development of the concept of burden for use in research lacks consistent conceptualization and operational definitions. The purpose of this article is to analyze the concept of burden in an effort to promote conceptual clarity. The technique advocated by Walker and Avant is used to analyze this concept. Critical attributes of burden include subjective perception, multidimensional phenomena, dynamic change, and overload. Predisposing factors are caregiver's characteristics, the demands of caregivers, and the involvement in caregiving. The consequences of burden generate problems in care-receiver, caregiver, family, and health care system. Overall, this article enables us to advance this concept, identify the different sources of burden, and provide directions for nursing intervention.", "title": "" } ]
scidocsrr
d2a63e643af6ee3a04fedd62eab491d7
Effect of prebiotic intake on gut microbiota, intestinal permeability and glycemic control in children with type 1 diabetes: study protocol for a randomized controlled trial
[ { "docid": "656baf66e6dd638d9f48ea621593bac3", "text": "Recent evidence suggests that a particular gut microbial community may favour occurrence of the metabolic diseases. Recently, we reported that high-fat (HF) feeding was associated with higher endotoxaemia and lower Bifidobacterium species (spp.) caecal content in mice. We therefore tested whether restoration of the quantity of caecal Bifidobacterium spp. could modulate metabolic endotoxaemia, the inflammatory tone and the development of diabetes. Since bifidobacteria have been reported to reduce intestinal endotoxin levels and improve mucosal barrier function, we specifically increased the gut bifidobacterial content of HF-diet-fed mice through the use of a prebiotic (oligofructose [OFS]). Compared with normal chow-fed control mice, HF feeding significantly reduced intestinal Gram-negative and Gram-positive bacteria including levels of bifidobacteria, a dominant member of the intestinal microbiota, which is seen as physiologically positive. As expected, HF-OFS-fed mice had totally restored quantities of bifidobacteria. HF-feeding significantly increased endotoxaemia, which was normalised to control levels in HF-OFS-treated mice. Multiple-correlation analyses showed that endotoxaemia significantly and negatively correlated with Bifidobacterium spp., but no relationship was seen between endotoxaemia and any other bacterial group. Finally, in HF-OFS-treated-mice, Bifidobacterium spp. significantly and positively correlated with improved glucose tolerance, glucose-induced insulin secretion and normalised inflammatory tone (decreased endotoxaemia, plasma and adipose tissue proinflammatory cytokines). Together, these findings suggest that the gut microbiota contribute towards the pathophysiological regulation of endotoxaemia and set the tone of inflammation for occurrence of diabetes and/or obesity. 
Thus, it would be useful to develop specific strategies for modifying gut microbiota in favour of bifidobacteria to prevent the deleterious effect of HF-diet-induced metabolic diseases.", "title": "" } ]
[ { "docid": "e02207c42eda7ec15db5dcd26ee55460", "text": "This paper focuses on a new task, i.e. transplanting a category-and-task-specific neural network to a generic, modular network without strong supervision. We design an functionally interpretable structure for the generic network. Like building LEGO blocks, we teach the generic network a new category by directly transplanting the module corresponding to the category from a pre-trained network with a few or even without sample annotations. Our method incrementally adds new categories to the generic network but does not affect representations of existing categories. In this way, our method breaks the typical bottleneck of learning a net for massive tasks and categories, i.e. the requirement of collecting samples for all tasks and categories at the same time before the learning begins. Thus, we use a new distillation algorithm, namely back-distillation, to overcome specific challenges of network transplanting. Our method without training samples even outperformed the baseline with 100 training samples.", "title": "" }, { "docid": "c6aed5c5e899898083f33eb5f42d4706", "text": "Intelligent systems often depend on data provided by information agents, for example, sensor data or crowdsourced human computation. Providing accurate and relevant data requires costly effort that agents may not always be willing to provide. Thus, it becomes important not only to verify the correctness of data, but also to provide incentives so that agents that provide highquality data are rewarded while those that do not are discouraged by low rewards. We cover different settings and the assumptions they admit, including sensing, human computation, peer grading, reviews, and predictions. We survey different incentive mechanisms, including proper scoring rules, prediction markets and peer prediction, Bayesian Truth Serum, Peer Truth Serum, Correlated Agreement, and the settings where each of them would be suitable. 
As an alternative, we also consider reputation mechanisms. We complement the game-theoretic analysis with practical examples of applications in prediction platforms, community sensing, and peer grading.", "title": "" }, { "docid": "a9338a9f699bdc8d0085105e3ad217d1", "text": "The femoral head receives blood supply mainly from the deep branch of the medial femoral circumflex artery (MFCA). In previous studies we have performed anatomical dissections of 16 specimens and subsequently visualised the arteries supplying the femoral head in 55 healthy individuals. In this further radiological study we compared the arterial supply of the femoral head in 35 patients (34 men and one woman, mean age 37.1 years (16 to 64)) with a fracture/dislocation of the hip with a historical control group of 55 hips. Using CT angiography, we identified the three main arteries supplying the femoral head: the deep branch and the postero-inferior nutrient artery both arising from the MFCA, and the piriformis branch of the inferior gluteal artery. It was possible to visualise changes in blood flow after fracture/dislocation. Our results suggest that blood flow is present after reduction of the dislocated hip. The deep branch of the MFCA was patent and contrast-enhanced in 32 patients, and the diameter of this branch was significantly larger in the fracture/dislocation group than in the control group (p = 0.022). In a subgroup of ten patients with avascular necrosis (AVN) of the femoral head, we found a contrast-enhanced deep branch of the MFCA in eight hips. Two patients with no blood flow in any of the three main arteries supplying the femoral head developed AVN.", "title": "" }, { "docid": "1bb54da28e139390c2176ae244066575", "text": "A novel non-parametric, multi-variate quickest detection method is proposed for cognitive radios (CRs) using both energy and cyclostationary features. The proposed approach can be used to track state dynamics of communication channels. 
This capability can be useful for both dynamic spectrum sharing (DSS) and future CRs, as in practice, centralized channel synchronization is unrealistic and the prior information of the statistics of channel usage is, in general, hard to obtain. The proposed multi-variate non-parametric average sample power and cyclostationarity-based quickest detection scheme is shown to achieve better performance compared to traditional energy-based schemes. We also develop a parallel on-line quickest detection/off-line change-point detection algorithm to achieve self-awareness of detection delays and false alarms for future automation. Compared to traditional energy-based quickest detection schemes, the proposed multi-variate non-parametric quickest detection scheme has comparable computational complexity. The simulated performance shows improvements in terms of small detection delays and significantly higher percentage of spectrum utilization.", "title": "" }, { "docid": "309981f955187cef50b5b1f4527d97af", "text": "BACKGROUND\nDetection of unknown risks with marketed medicines is key to securing the optimal care of individual patients and to reducing the societal burden from adverse drug reactions. Large collections of individual case reports remain the primary source of information and require effective analytics to guide clinical assessors towards likely drug safety signals. Disproportionality analysis is based solely on aggregate numbers of reports and naively disregards report quality and content. However, these latter features are the very fundament of the ensuing clinical assessment.\n\n\nOBJECTIVE\nOur objective was to develop and evaluate a data-driven screening algorithm for emerging drug safety signals that accounts for report quality and content.\n\n\nMETHODS\nvigiRank is a predictive model for emerging safety signals, here implemented with shrinkage logistic regression to identify predictive variables and estimate their respective contributions. 
The variables considered for inclusion capture different aspects of strength of evidence, including quality and clinical content of individual reports, as well as trends in time and geographic spread. A reference set of 264 positive controls (historical safety signals from 2003 to 2007) and 5,280 negative controls (pairs of drugs and adverse events not listed in the Summary of Product Characteristics of that drug in 2012) was used for model fitting and evaluation; the latter used fivefold cross-validation to protect against over-fitting. All analyses were performed on a reconstructed version of VigiBase(®) as of 31 December 2004, at around which time most safety signals in our reference set were emerging.\n\n\nRESULTS\nThe following aspects of strength of evidence were selected for inclusion into vigiRank: the numbers of informative and recent reports, respectively; disproportional reporting; the number of reports with free-text descriptions of the case; and the geographic spread of reporting. vigiRank offered a statistically significant improvement in area under the receiver operating characteristics curve (AUC) over screening based on the Information Component (IC) and raw numbers of reports, respectively (0.775 vs. 0.736 and 0.707, cross-validated).\n\n\nCONCLUSIONS\nAccounting for multiple aspects of strength of evidence has clear conceptual and empirical advantages over disproportionality analysis. vigiRank is a first-of-its-kind predictive model to factor in report quality and content in first-pass screening to better meet tomorrow's post-marketing drug safety surveillance needs.", "title": "" }, { "docid": "da3650998a4bd6ea31467daa631d0e05", "text": "Consideration of facial muscle dynamics is underappreciated among clinicians who provide injectable filler treatment. Injectable fillers are customarily used to fill static wrinkles, folds, and localized areas of volume loss, whereas neuromodulators are used to address excessive muscle movement. 
However, a more comprehensive understanding of the role of muscle function in facial appearance, taking into account biomechanical concepts such as the balance of activity among synergistic and antagonistic muscle groups, is critical to restoring facial appearance to that of a typical youthful individual with facial esthetic treatments. Failure to fully understand the effects of loss of support (due to aging or congenital structural deficiency) on muscle stability and interaction can result in inadequate or inappropriate treatment, producing an unnatural appearance. This article outlines these concepts to provide an innovative framework for an understanding of the role of muscle movement on facial appearance and presents cases that illustrate how modulation of muscle movement with injectable fillers can address structural deficiencies, rebalance abnormal muscle activity, and restore facial appearance. This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors www.springer.com/00266.", "title": "" }, { "docid": "1813bda6a39855ce9e5caf24536425c1", "text": "This article presents VineSens, a hardware and software platform for supporting the decision-making of the vine grower. VineSens is based on a wireless sensor network system composed by autonomous and self-powered nodes that are deployed throughout a vineyard. Such nodes include sensors that allow us to obtain detailed knowledge on different viticulture processes. Thanks to the use of epidemiological models, VineSens is able to propose a custom control plan to prevent diseases like one of the most feared by vine growers: downy mildew. VineSens generates alerts that warn farmers about the measures that have to be taken and stores the historical weather data collected from different spots of the vineyard. 
Such data can then be accessed through a user-friendly web-based interface that can be reached over the Internet from desktop or mobile devices. VineSens was deployed at the beginning of 2016 in a vineyard in the Ribeira Sacra area (Galicia, Spain) and, since then, its hardware and software have been tested to prevent the development of downy mildew, showing during its first season that the system can lead to substantial savings, decrease the amount of phytosanitary products applied, and, as a consequence, yield a more ecologically sustainable and healthy wine.", "title": "" }, { "docid": "38fab4cc5cffea363eecbc8b2f2c6088", "text": "Domain adaptation algorithms are useful when the distributions of the training and the test data are different. In this paper, we focus on the problem of instrumental variation and time-varying drift in the field of sensors and measurement, which can be viewed as discrete and continuous distributional change in the feature space. We propose maximum independence domain adaptation (MIDA) and semi-supervised MIDA to address this problem. Domain features are first defined to describe the background information of a sample, such as the device label and acquisition time. Then, MIDA learns a subspace which has maximum independence with the domain features, so as to reduce the interdomain discrepancy in distributions. A feature augmentation strategy is also designed to project samples according to their backgrounds so as to improve the adaptation. The proposed algorithms are flexible and fast. Their effectiveness is verified by experiments on synthetic datasets and four real-world ones on sensors, measurement, and computer vision. 
They can greatly enhance the practicability of sensor systems, as well as extend the application scope of existing domain adaptation algorithms by uniformly handling different kinds of distributional change.", "title": "" }, { "docid": "6a1e5ffdcac8d22cfc8f9c2fc1ca0e17", "text": "Magnetic resonance images (MRI) play an important role in supporting and substituting clinical information in the diagnosis of multiple sclerosis (MS) disease by revealing lesions in brain MR images. In this paper, an algorithm for MS lesion segmentation from brain MR images is presented. We revisit the modification of properties of fuzzy c-means algorithms and Canny edge detection. By modifying the fuzzy c-means clustering algorithm and applying the Canny edge-detection principle, a relationship between MS lesions and edge detection is established. For the special case of FCM, we derive a sufficient condition and clustering parameters, allowing identification of them as (local) minima of the objective function.", "title": "" }, { "docid": "2a67a524cb3279967207b1fa8748cd04", "text": "Recent work in Information Retrieval (IR) using Deep Learning models has yielded state of the art results on a variety of IR tasks. Deep neural networks (DNN) are capable of learning ideal representations of data during the training process, removing the need for independently extracting features. However, the structures of these DNNs are often tailored to perform on specific datasets. In addition, IR tasks deal with text at varying levels of granularity from single factoids to documents containing thousands of words. In this paper, we examine the role of the granularity on the performance of common state of the art DNN structures in IR.", "title": "" }, { "docid": "57224fab5298169be0da314e55ca6b43", "text": "Although users’ preference is semantically reflected in the free-form review texts, this wealth of information was not fully exploited for learning recommender models. 
Specifically, almost all existing recommendation algorithms only exploit rating scores in order to find users’ preference, but ignore the review texts accompanied with rating information. In this paper, we propose a novel matrix factorization model (called TopicMF) which simultaneously considers the ratings and accompanied review texts. Experimental results on 22 real-world datasets show the superiority of our model over the state-of-the-art models, demonstrating its effectiveness for recommendation tasks.", "title": "" }, { "docid": "7423711eab3ab9054618c8445099671d", "text": "A worldview (or “world view”) is a set of assumptions about physical and social reality that may have powerful effects on cognition and behavior. Lacking a comprehensive model or formal theory up to now, the construct has been underused. This article advances theory by addressing these gaps. Worldview is defined. Major approaches to worldview are critically reviewed. Lines of evidence are described regarding worldview as a justifiable construct in psychology. Worldviews are distinguished from schemas. A collated model of a worldview’s component dimensions is described. An integrated theory of worldview function is outlined, relating worldview to personality traits, motivation, affect, cognition, behavior, and culture. A worldview research agenda is outlined for personality and social psychology (including positive and peace psychology).", "title": "" }, { "docid": "0d802fea4e3d9324ba46c35e5a002b6a", "text": "Hyponatremia is common in both inpatients and outpatients. Medications are often the cause of acute or chronic hyponatremia. Measuring the serum osmolality, urine sodium concentration and urine osmolality will help differentiate among the possible causes. Hyponatremia in the physical states of extracellular fluid (ECF) volume contraction and expansion can be easy to diagnose but often proves difficult to manage. 
In patients with these states or with normal or near-normal ECF volume, the syndrome of inappropriate secretion of antidiuretic hormone is a diagnosis of exclusion, requiring a thorough search for all other possible causes. Hyponatremia should be corrected at a rate similar to that at which it developed. When symptoms are mild, hyponatremia should be managed conservatively, with therapy aimed at removing the offending cause. When symptoms are severe, therapy should be aimed at more aggressive correction of the serum sodium concentration, typically with intravenous therapy in the inpatient setting.", "title": "" }, { "docid": "3072c5458a075e6643a7679ccceb1417", "text": "A novel interleaved flyback converter with leakage energy recycled is proposed. The proposed converter is combined with dual-switch dual-transformer flyback topology. Two clamping diodes are used to reduce the voltage stress on power switches to the input voltage level and also to recycle leakage inductance energy to the input voltage and capacitor. Besides, the interleaved control is implemented to reduce the output current ripple. In addition, the voltage on the primary windings is reduced to the half of the input voltage and thus reducing the turns ratio of transformers to improve efficiency. The operating principle and the steady state analysis of the proposed converter are discussed in detail. Finally, an experimental prototype is implemented with 400V input voltage, 24V/300W output to verify the feasibility of the proposed converter. The experimental results reveals that the highest efficiency of the proposed converter is 94.42%, the full load efficiency is 92.7%, and the 10% load efficiency is 92.61%.", "title": "" }, { "docid": "3680f9e5dd82e6f3d5d410f2725225cc", "text": "This work contributes several new elements to the quest for a biologically plausible implementation of backprop in brains. 
We introduce a very general and abstract framework for machine learning, in which the quantities of interest are defined implicitly through an energy function. In this framework, only one kind of neural computation is involved both for the first phase (when the prediction is made) and the second phase (after the target is revealed), like the contrastive Hebbian learning algorithm in the continuous Hopfield model for example. Contrary to automatic differentiation in computational graphs (i.e. standard backprop), there is no need for special computation in the second phase of our framework. One advantage of our framework over contrastive Hebbian learning is that the second phase corresponds to only nudging the first-phase fixed point towards a configuration that reduces prediction error. In the case of a multi-layer supervised neural network, the output units are slightly nudged towards their target, and the perturbation introduced at the output layer propagates backward in the network. The signal ’back-propagated’ during this second phase actually contains information about the error derivatives, which we use to implement a learning rule proved to perform gradient descent with respect to an objective function.", "title": "" }, { "docid": "31a0b00a79496deccdec4d3629fcbf88", "text": "The Human Papillomavirus (HPV) E6 protein is one of three oncoproteins encoded by the virus. It has long been recognized as a potent oncogene and is intimately associated with the events that result in the malignant conversion of virally infected cells. In order to understand the mechanisms by which E6 contributes to the development of human malignancy many laboratories have focused their attention on identifying the cellular proteins with which E6 interacts. 
In this review we discuss these interactions in the light of their respective contributions to the malignant progression of HPV transformed cells.", "title": "" }, { "docid": "5e59888b6e0c562d546618dd95fa00b8", "text": "The massive acceleration of the nitrogen cycle as a result of the production and industrial use of artificial nitrogen fertilizers worldwide has enabled humankind to greatly increase food production, but it has also led to a host of environmental problems, ranging from eutrophication of terrestrial and aquatic systems to global acidification. The findings of many national and international research programmes investigating the manifold consequences of human alteration of the nitrogen cycle have led to a much improved understanding of the scope of the anthropogenic nitrogen problem and possible strategies for managing it. Considerably less emphasis has been placed on the study of the interactions of nitrogen with the other major biogeochemical cycles, particularly that of carbon, and how these cycles interact with the climate system in the presence of the ever-increasing human intervention in the Earth system. With the release of carbon dioxide (CO2) from the burning of fossil fuels pushing the climate system into uncharted territory, which has major consequences for the functioning of the global carbon cycle, and with nitrogen having a crucial role in controlling key aspects of this cycle, questions about the nature and importance of nitrogen–carbon–climate interactions are becoming increasingly pressing. The central question is how the availability of nitrogen will affect the capacity of Earth’s biosphere to continue absorbing carbon from the atmosphere (see page 289), and hence continue to help in mitigating climate change. 
Addressing this and other open issues with regard to nitrogen–carbon–climate interactions requires an Earth-system perspective that investigates the dynamics of the nitrogen cycle in the context of a changing carbon cycle, a changing climate and changes in human actions.", "title": "" }, { "docid": "513add6fe4a18ed30e97a2c61ebd59ea", "text": "Holistic driving scene understanding is a critical step toward intelligent transportation systems. It involves different levels of analysis, interpretation, reasoning and decision making. In this paper, we propose a 3D dynamic scene analysis framework as the first step toward driving scene understanding. Specifically, given a sequence of synchronized 2D and 3D sensory data, the framework systematically integrates different perception modules to obtain 3D position, orientation, velocity and category of traffic participants and the ego car in a reconstructed 3D semantically labeled traffic scene. We implement this framework and demonstrate the effectiveness in challenging urban driving scenarios. The proposed framework builds a foundation for higher level driving scene understanding problems such as intention and motion prediction of surrounding entities, ego motion planning, and decision making.", "title": "" }, { "docid": "b1272039194d07ff9b7568b7f295fbfb", "text": "Protein catalysis requires the atomic-level orchestration of side chains, substrates and cofactors, and yet the ability to design a small-molecule-binding protein entirely from first principles with a precisely predetermined structure has not been demonstrated. Here we report the design of a novel protein, PS1, that binds a highly electron-deficient non-natural porphyrin at temperatures up to 100 °C. The high-resolution structure of holo-PS1 is in sub-Å agreement with the design. The structure of apo-PS1 retains the remote core packing of the holoprotein, with a flexible binding region that is predisposed to ligand binding with the desired geometry. 
Our results illustrate the unification of core packing and binding-site definition as a central principle of ligand-binding protein design.", "title": "" }, { "docid": "a3cd3ec70b5d794173db36cb9a219403", "text": "We consider the problem of grasping novel objects in cluttered environments. If a full 3-d model of the scene were available, one could use the model to estimate the stability and robustness of different grasps (formalized as form/force-closure, etc); in practice, however, a robot facing a novel object will usually be able to perceive only the front (visible) faces of the object. In this paper, we propose an approach to grasping that estimates the stability of different grasps, given only noisy estimates of the shape of visible portions of an object, such as that obtained from a depth sensor. By combining this with a kinematic description of a robot arm and hand, our algorithm is able to compute a specific positioning of the robot’s fingers so as to grasp an object. We test our algorithm on two robots (with very different arms/manipulators, including one with a multi-fingered hand). We report results on the task of grasping objects of significantly different shapes and appearances than ones in the training set, both in highly cluttered and in uncluttered environments. We also apply our algorithm to the problem of unloading items from a dishwasher. Introduction We consider the problem of grasping novel objects, in the presence of significant amounts of clutter. A key challenge in this setting is that a full 3-d model of the scene is typically not available. Instead, a robot’s depth sensors can usually estimate only the shape of the visible portions of the scene. In this paper, we propose an algorithm that, given such partial models of the scene, selects a grasp—that is, a configuration of the robot’s arm and fingers—to try to pick up an object. 
If a full 3-d model (including the occluded portions of a scene) were available, then methods such as form- and force-closure (Mason and Salisbury 1985; Bicchi and Kumar 2000; Pollard 2004) and other grasp quality metrics (Pelossof et al. 2004; Hsiao, Kaelbling, and Lozano-Perez 2007; Ciocarlie, Goldfeder, and Allen 2007) can be used to try to find a good grasp. However, given only the point cloud returned by stereo vision or other depth sensors, a straightforward application of these ideas is impossible, since we do not have a model of the occluded portions of the scene. Copyright © 2008, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Figure 1: Image of an environment (left) and the 3-d point cloud (right) returned by the Swissranger depth sensor. In detail, we will consider a robot that uses a camera, together with a depth sensor, to perceive a scene. The depth sensor returns a “point cloud,” corresponding to 3-d locations that it has found on the front unoccluded surfaces of the objects. (See Fig. 1.) Such point clouds are typically noisy (because of small errors in the depth estimates); but more importantly, they are also incomplete. This work builds on Saxena et al. (2006a; 2006b; 2007; 2008), which applied supervised learning to identify visual properties that indicate good grasps, given a 2-d image of the scene. However, their algorithm only chose a 3-d “grasp point”—that is, the 3-d position (and 3-d orientation; Saxena et al. 2007) of the center of the end-effector. Thus, it did not generalize well to more complex arms and hands, such as multi-fingered hands, where one has to not only choose the 3-d position (and orientation) of the hand, but also address the high-DOF problem of choosing the positions of all the fingers. Our approach begins by computing a number of features of grasp quality, using both 2-d image and 3-d point cloud features. 
For example, the 3-d data is used to compute a number of grasp quality metrics, such as the degree to which the fingers are exerting forces normal to the surfaces of the object, and the degree to which they enclose the object. Using such features, we then apply a supervised learning algorithm to estimate the degree to which different configurations of the full arm and fingers reflect good grasps. We test our algorithm on two robots, on a variety of objects of shapes very different from ones in the training set, including a ski boot, a coil of wire, and a game controller. For example, standard stereo vision fails to return depth values for textureless portions of the object, thus its point clouds are typically very sparse. Further, the Swissranger gives few points only because of its low spatial resolution of 144 × 176. Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence (2008)", "title": "" } ]
scidocsrr
becd7ce11f5b60485a157d56f110813c
A Systematic Classification of Knowledge, Reasoning, and Context within the ARC Dataset
[ { "docid": "b4ab51818d868b2f9796540c71a7bd17", "text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.", "title": "" }, { "docid": "fa6f272026605bddf1b18c8f8234dba6", "text": "tion can machines think? by replacing it with another, namely can a machine pass the imitation game (the Turing test). In the years since, this test has been criticized as being a poor replacement for the original enquiry (for example, Hayes and Ford [1995]), which raises the question: what would a better replacement be? In this article, we argue that standardized tests are an effective and practical assessment of many aspects of machine intelligence, and should be part of any comprehensive measure of AI progress. While a crisp definition of machine intelligence remains elusive, we can enumerate some general properties we might expect of an intelligent machine. The list is potentially long (for example, Legg and Hutter [2007]), but should at least include the ability to (1) answer a wide variety of questions, (2) answer complex questions, (3) demonstrate commonsense and world knowledge, and (4) acquire new knowledge scalably. In addition, a suitable test should be clearly measurable, graduated (have a variety of levels of difficulty), not gameable, ambitious but realistic, and motivating. There are many other requirements we might add (for example, capabilities in robotics, vision, dialog), and thus any comprehensive measure of AI is likely to require a battery of different tests. 
However, standardized tests meet a surprising number of requirements, including the four listed, and thus should be a key component of a future battery of tests. As we will show, the tests require answering a wide variety of questions, including those requiring commonsense and world knowledge. In addition, they meet all the practical requirements, a huge advantage for any component of a future test of AI.", "title": "" }, { "docid": "bbee52ebe65b2f7b8d0356a3fbdb80bf", "text": "Science Study Book Corpus Document Filter [...] enters a d orbital. The valence electrons (those added after the last noble gas configuration) in these elements include the ns and (n – 1) d electrons. The official IUPAC definition of transition elements specifies those with partially filled d orbitals. Thus, the elements with completely filled orbitals (Zn, Cd, Hg, as well as Cu, Ag, and Au in Figure 6.30) are not technically transition elements. However, the term is frequently used to refer to the entire d block (colored yellow in Figure 6.30), and we will adopt this usage in this textbook. Inner transition elements are metallic elements in which the last electron added occupies an f orbital.", "title": "" } ]
[ { "docid": "2d4fd6da60cad3b6a427bd406f16d6fa", "text": "BACKGROUND\nVarious cutaneous side-effects, including exanthema, pruritus, urticaria and Lyell or Stevens-Johnson syndrome, have been reported with meropenem (carbapenem), a rarely-prescribed antibiotic. Levofloxacin (fluoroquinolone), a more frequently prescribed antibiotic, has similar cutaneous side-effects, as well as photosensitivity. We report a case of cutaneous hyperpigmentation induced by meropenem and levofloxacin.\n\n\nPATIENTS AND METHODS\nA 67-year-old male was treated with meropenem (1g×4 daily), levofloxacin (500mg twice daily) and amikacin (500mg daily) for 2 weeks, followed by meropenem, levofloxacin and rifampicin (600mg twice daily) for 4 weeks for osteitis of the fifth metatarsal. Three weeks after initiation of antibiotic therapy, dark hyperpigmentation appeared on the lower limbs, predominantly on the anterior aspects of the legs. Histology revealed dark, perivascular and interstitial deposits throughout the dermis, which stained with both Fontana-Masson and Perls stains. Infrared microspectroscopy revealed meropenem in the dermis of involved skin. After withdrawal of the antibiotics, the pigmentation subsided slowly.\n\n\nDISCUSSION\nSimilar cases of cutaneous hyperpigmentation have been reported after use of minocycline. In these cases, histological examination also showed iron and/or melanin deposits within the dermis, but the nature of the causative pigment remains unclear. In our case, infrared spectroscopy enabled us to identify meropenem in the dermis. Two cases of cutaneous hyperpigmentation have been reported following use of levofloxacin, and the results of histological examination were similar. 
This is the first case of cutaneous hyperpigmentation induced by meropenem.", "title": "" }, { "docid": "61c4e955604011a9b9a50ccbd2858070", "text": "This paper presents a second-order pulsewidth modulation (PWM) feedback loop to improve the power supply rejection (PSR) of open-loop PWM class-D amplifiers (CDAs). PSR of the audio amplifier has always been a key parameter in mobile phone applications. In contrast to class-AB amplifiers, the poor PSR performance has always been the major drawback for CDAs with a half-bridge connected power stage. The proposed PWM feedback loop is fabricated using GLOBALFOUNDRIES' 0.18-μm CMOS process technology. The measured PSR is more than 80 dB and the measured total harmonic distortion is less than 0.04% with a 1-kHz input sinusoidal test tone.", "title": "" }, { "docid": "354a136fc9bc939906ae1c347fd21d9c", "text": "Due to the growing popularity of indoor location-based services, indoor data management has received significant research attention in the past few years. However, we observe that the existing indexing and query processing techniques for the indoor space do not fully exploit the properties of the indoor space. Consequently, they provide below-par performance which makes them unsuitable for large indoor venues with high query workloads. In this paper, we propose two novel indexes called Indoor Partitioning Tree (IP-Tree) and Vivid IP-Tree (VIP-Tree) that are carefully designed by utilizing the properties of indoor venues. The proposed indexes are lightweight, have small pre-processing cost and provide near-optimal performance for shortest distance and shortest path queries. We also present efficient algorithms for other spatial queries such as k nearest neighbors queries and range queries. 
Our extensive experimental study on real and synthetic data sets demonstrates that our proposed indexes outperform the existing algorithms by several orders of magnitude.", "title": "" }, { "docid": "f032d36e081d2b5a4b0408b8f9b77954", "text": "BACKGROUND\nMalnutrition is still highly prevalent in developing countries. Schoolchildren may also be at high nutritional risk, not only under-five children. However, their nutritional status is poorly documented, particularly in urban areas. The paucity of information hinders the development of relevant nutrition programs for schoolchildren. The aim of this study carried out in Ouagadougou was to assess the nutritional status of schoolchildren attending public and private schools.\n\n\nMETHODS\nThe study was carried out to provide baseline data for the implementation and evaluation of the Nutrition Friendly School Initiative of WHO. Six intervention schools and six matched control schools were selected and a sample of 649 schoolchildren (48% boys) aged 7-14 years old from 8 public and 4 private schools were studied. Anthropometric and haemoglobin measurements, along with thyroid palpation, were performed. Serum retinol was measured in a random sub-sample of children (N = 173). WHO criteria were used to assess nutritional status. Chi square and independent t-test were used for proportions and mean comparisons between groups.\n\n\nRESULTS\nMean age of the children (48% boys) was 11.5 ± 1.2 years. Micronutrient malnutrition was highly prevalent, with 38.7% low serum retinol and 40.4% anaemia. The prevalence of stunting was 8.8% and that of thinness, 13.7%. The prevalence of anaemia (p = 0.001) and vitamin A deficiency (p < 0.001) was significantly higher in public than private schools. Goitre was not detected. Overweight/obesity was low (2.3%) and affected significantly more children in private schools (p = 0.009) and younger children (7-9 y) (p < 0.05). 
Thinness and stunting were significantly higher in peri-urban compared to urban schools (p < 0.05 and p = 0.004 respectively). Almost 15% of the children presented at least two nutritional deficiencies.\n\n\nCONCLUSION\nThis study shows that malnutrition and micronutrient deficiencies are also widely prevalent in schoolchildren in cities, and it underlines the need for nutrition interventions to target them.", "title": "" }, { "docid": "69de53fde2c621d3fa9763fab700f37a", "text": "Every year the number of installed wind power plants in the world increases. The horizontal axis wind turbine is the most common type of turbine but there exist other types. Here, three different wind turbines are considered; the horizontal axis wind turbine and two different concepts of vertical axis wind turbines; the Darrieus turbine and the H-rotor. This paper aims at making a comparative study of these three different wind turbines from the most important aspects including structural dynamics, control systems, maintenance, manufacturing and electrical equipment. A case study is presented where three different turbines are compared to each other. Furthermore, a study of blade areas for different turbines is presented. The vertical axis wind turbine appears to be advantageous to the horizontal axis wind turbine in several aspects.", "title": "" }, { "docid": "5ea45a4376e228b3eacebb8dd8e290d2", "text": "The sharing economy has quickly become a very prominent subject of research in the broader computing literature and in the human–computer interaction (HCI) literature more specifically. When other computing research areas have experienced similarly rapid growth (e.g. human computation, eco-feedback technology), early stage literature reviews have proved useful and influential by identifying trends and gaps in the literature of interest and by providing key directions for short- and long-term future work. 
In this paper, we seek to provide the same benefits with respect to computing research on the sharing economy. Specifically, following the suggested approach of prior computing literature reviews, we conducted a systematic review of sharing economy articles published in the Association for Computing Machinery Digital Library to investigate the state of sharing economy research in computing. We performed this review with two simultaneous foci: a broad focus toward the computing literature more generally and a narrow focus specifically on HCI literature. We collected a total of 112 sharing economy articles published between 2008 and 2017 and through our analysis of these papers, we make two core contributions: (1) an understanding of the computing community's contributions to our knowledge about the sharing economy, and specifically the role of the HCI community in these contributions (i.e., what has been done) and (2) a discussion of under-explored and unexplored aspects of the sharing economy that can serve as a partial research agenda moving forward (i.e., what is next to do).", "title": "" }, { "docid": "d8b0ef94385d1379baeb499622253a02", "text": "Mining association rules associates events that took place together. In market basket analysis, these discovered rules associate items purchased together. Items that are not part of a transaction are not considered. In other words, typical association rules do not take into account items that are part of the domain but that are not together part of a transaction. Association rules are based on frequencies and count the transactions where items occur together. However, counting absences of items is prohibitive if the number of possible items is very large, which is typically the case. Nonetheless, knowing the relationship between the absence of an item and the presence of another can be very important in some applications. These rules are called negative association rules. 
We review current approaches for mining negative association rules and we discuss limitations and future research directions.", "title": "" }, { "docid": "a72ca91ab3d89e5918e8e13f98dc4a7d", "text": "We describe the Lightweight Communications and Marshalling (LCM) library for message passing and data marshalling. The primary goal of LCM is to simplify the development of low-latency message passing systems, especially for real-time robotics research applications.", "title": "" }, { "docid": "548a3bd89ca480788c258c54e67fb406", "text": "Document summarization and keyphrase extraction are two related tasks in the IR and NLP fields, and both of them aim at extracting condensed representations from a single text document. Existing methods for single document summarization and keyphrase extraction usually make use of only the information contained in the specified document. This article proposes using a small number of nearest neighbor documents to improve document summarization and keyphrase extraction for the specified document, under the assumption that the neighbor documents could provide additional knowledge and more clues. The specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results on the Document Understanding Conference (DUC) benchmark datasets demonstrate the effectiveness and robustness of our proposed approaches. 
The cross-document sentence relationships in the expanded document set are validated to be beneficial to single document summarization, and the word cooccurrence relationships in the neighbor documents are validated to be very helpful to single document keyphrase extraction.", "title": "" }, { "docid": "2f7944399a1f588d1b11d3cf7846af1c", "text": "Corrosion can cause section loss or cracks in the steel members which is one of the most important causes of deterioration of steel bridges. For some critical components of a steel bridge, it is fatal and could even cause the collapse of the whole bridge. Nowadays the most common approach to steel bridge inspection is visual inspection by inspectors with inspection trucks. This paper mainly presents a climbing robot with magnetic wheels which can move on the surface of steel bridge. Experiment results shows that the climbing robot can move on the steel bridge freely without disrupting traffic to reduce the risks to the inspectors.", "title": "" }, { "docid": "18dbbf0338d138f71a57b562883f0677", "text": "We present the analytical capability of TecDEM, a MATLAB toolbox used in conjunction with Global DEMs for the extraction of tectonic geomorphologic information. TecDEM includes a suite of algorithms to analyze topography, extracted drainage networks and sub-basins. The aim of part 2 of this paper series is the generation of morphometric maps for surface dynamics and basin analysis. TecDEM therefore allows the extraction of parameters such as isobase, incision, drainage density and surface roughness maps. We also provide tools for basin asymmetry and hypsometric analysis. These are efficient graphical user interfaces (GUIs) for mapping drainage deviation from basin mid-line and basin hypsometry. 
A morphotectonic interpretation of the Kaghan Valley (Northern Pakistan) is performed with TecDEM and the findings indicate a high correlation between surface dynamics and basin analysis parameters with neotectonic features in the study area.", "title": "" }, { "docid": "beff14cfa1d0e5437a81584596e666ea", "text": "Graphene has exceptional optical, mechanical, and electrical properties, making it an emerging material for novel optoelectronics, photonics, and flexible transparent electrode applications. However, the relatively high sheet resistance of graphene is a major constraint for many of these applications. Here we propose a new approach to achieve low sheet resistance in large-scale CVD monolayer graphene using nonvolatile ferroelectric polymer gating. In this hybrid structure, large-scale graphene is heavily doped up to 3 × 10(13) cm(-2) by nonvolatile ferroelectric dipoles, yielding a low sheet resistance of 120 Ω/□ at ambient conditions. The graphene-ferroelectric transparent conductors (GFeTCs) exhibit more than 95% transmittance from the visible to the near-infrared range owing to the highly transparent nature of the ferroelectric polymer. Together with its excellent mechanical flexibility, chemical inertness, and the simple fabrication process of ferroelectric polymers, the proposed GFeTCs represent a new route toward large-scale graphene-based transparent electrodes and optoelectronics.", "title": "" }, { "docid": "6b2c009eca44ea374bb5f1164311e593", "text": "The ECG signal has been shown to contain relevant information for human identification. Even though results validate the potential of these signals, data acquisition methods and apparatus explored so far compromise user acceptability, requiring the acquisition of ECG at the chest. 
In this paper, we propose a finger-based ECG biometric system that uses signals collected at the fingers, through a minimally intrusive 1-lead ECG setup using Ag/AgCl electrodes without gel as the interface with the skin. The collected signal is significantly more noisy than the ECG acquired at the chest, motivating the application of feature extraction and signal processing techniques to the problem. Time domain ECG signal processing is performed, which comprises the usual steps of filtering, peak detection, heartbeat waveform segmentation, and amplitude normalization, plus an additional step of time normalization. Through a simple minimum distance criterion between the test patterns and the enrollment database, results have revealed this to be a promising technique for biometric applications.", "title": "" }, { "docid": "b5df59d926ca4778c306b255d60870a1", "text": "In this paper the transcription and evaluation of the corpus DIMEx100 for Mexican Spanish is presented. First we describe the corpus and explain the linguistic and computational motivation for its design and collection process; then, the phonetic antecedents and the alphabet adopted for the transcription task are presented; the corpus has been transcribed at three different granularity levels, which are also specified in detail. The corpus statistics for each transcription level are also presented. A set of phonetic rules describing phonetic context observed empirically in spontaneous conversation is also validated with the transcription. The corpus has been used for the construction of acoustic models and a phonetic dictionary for the construction of a speech recognition system. Initial performance results suggest that the data can be used to train good quality acoustic models.", "title": "" }, { "docid": "20705a14783c89ac38693b2202363c1f", "text": "This paper analyzes the effect of employee recognition, pay, and benefits on job satisfaction. 
In this cross-sectional study, survey responses from university students in the U.S. (n = 457), Malaysia (n = 347) and Vietnam (n = 391) were analyzed. Employee recognition, pay, and benefits were found to have a significant impact on job satisfaction, regardless of home country income level (high, middle or low income) and culture (collectivist or individualist). However, the effect of benefits on job satisfaction was significantly more important for U.S. respondents than for respondents from Malaysia and Vietnam. The authors conclude that both financial and nonfinancial rewards have a role in influencing job satisfaction, which ultimately impacts employee performance. Theoretical and practical implications for developing effective recruitment and retention policies for employees are also discussed.", "title": "" }, { "docid": "ab98f6dc31d080abdb06bb9b4dba798e", "text": "In TEFL, it is often stated that communication presupposes comprehension. The main purpose of readability studies is thus to measure the comprehensibility of a piece of writing. In this regard, different readability measures were initially devised to help educators select passages suitable for both children and adults. However, readability formulas can certainly be extremely helpful in the realm of EFL reading. They were originally designed to assess the suitability of books for students at particular grade levels or ages. Nevertheless, they can be used as basic tools in determining certain crucial EFL text-characteristics instrumental in the skill of reading and its related issues. The aim of the present paper is to familiarize the readers with the most frequently used readability formulas as well as the pros and cons views toward the use of such formulas. Of course, this part mostly illustrates studies done on readability formulas with the results obtained. 
The main objective of this part is to help readers to become familiar with the background of the formulas, the theory on which they stand, what they are good for and what they are not with regard to a number of studies cited in this section.", "title": "" }, { "docid": "c0745b124949fdb5c9ffc54d66da1789", "text": "Anemia resulting from iron deficiency is one of the most prevalent diseases in the world. As iron has important roles in several biological processes such as oxygen transport, DNA synthesis and cell growth, there is a high need for iron therapies that result in high iron bioavailability with minimal toxic effects to treat patients suffering from anemia. This study aims to develop a novel oral iron-complex formulation based on hemin-loaded polymeric micelles composed of the biodegradable and thermosensitive polymer methoxy-poly(ethylene glycol)-b-poly[N-(2-hydroxypropyl)methacrylamide-dilactate], abbreviated as mPEG-b-p(HPMAm-Lac2). Hemin-loaded micelles were prepared by addition of hemin dissolved in DMSO:DMF (1:9, one volume) to an aqueous polymer solution (nine volumes) of mPEG-b-p(HPMAm-Lac2) followed by rapidly heating the mixture at 50°C to form hemin-loaded micelles that remain intact at room and physiological temperature. The highest loading capacity for hemin in mPEG-b-p(HPMAm-Lac2) micelles was 3.9%. The average particle diameter of the hemin-micelles ranged from 75 to 140nm, depending on the concentration of hemin solution that was used to prepare the micelles. The hemin-loaded micelles were stable at pH 2 for at least 3 h which covers the residence time of the formulation in the stomach after oral administration and up to 17 h at pH 7.4 which is sufficient time for uptake of the micelles by the enterocytes. Importantly, incubation of Caco-2 cells with hemin-micelles for 24 h at 37°C resulted in ferritin levels of 2500ng/mg protein which is about 10-fold higher than levels observed in cells incubated with iron sulfate under the same conditions. 
The hemin formulation also demonstrated superior cell viability compared to iron sulfate with and without ascorbic acid. The study presented here demonstrates the development of a promising novel iron complex for oral delivery.", "title": "" }, { "docid": "1d4a116465d9c50f085b18d526119a90", "text": "In this paper, we investigate the efficiency of FPGA implementations of AES and AES-like ciphers, especially in the context of authenticated encryption. We consider the encryption/decryption and the authentication/verification structures of OCB-like modes (like OTR or SCT modes). Their main advantage is that they are fully parallelisable. While this feature has already been used to increase the throughput/performance of hardware implementations, it is usually overlooked while comparing different ciphers. We show how to use it with zero area overhead, leading to a very significant efficiency gain. Additionally, we show that using FPGA technology mapping instead of logic optimization, the area of both the linear and non-linear parts of the round function of several AES-like primitives can be reduced, without affecting the run-time performance. We provide the implementation results of two multi-stream implementations of both the LED and AES block ciphers. The AES implementation in this paper achieves an efficiency of 38 Mbps/slice, which is the most efficient implementation in literature, to the best of our knowledge. For LED, it achieves 2.5 Mbps/slice on Spartan 3 FPGA, which is 2.57x better than the previous implementation. Besides, we use our new techniques to optimize the FPGA implementation of the CAESAR candidate Deoxys-I in both the encryption only and encryption/decryption settings. 
Finally, we show that the efficiency gains of the proposed techniques extend to other technologies, such as ASIC, as well.", "title": "" }, { "docid": "8390fd7e559832eea895fabeb48c3549", "text": "An algorithm is presented to perform connected component labeling of images of arbitrary dimension that are represented by a linear bintree. The bintree is a generalization of the quadtree data structure that enables dealing with images of arbitrary dimension. The linear bintree is a pointerless representation. The algorithm uses an active border which is represented by linked lists instead of arrays. This results in a significant reduction in the space requirements, thereby making it feasible to process three- and higher dimensional images. Analysis of the execution time of the algorithm shows almost linear behavior with respect to the number of leaf nodes in the image, and empirical tests are in agreement. The algorithm can be modified easily to compute a (d − 1)-dimensional boundary measure (e.g., perimeter in two dimensions and surface area in three dimensions) with linear", "title": "" }, { "docid": "fc9ec90a7fb9c18a5209f462d21cf0e1", "text": "The demand for accurate and reliable positioning in industrial applications, especially in robotics and high-precision machines, has led to the increased use of Harmonic Drives. The unique performance features of harmonic drives, such as high reduction ratio and high torque capacity in a compact geometry, justify their widespread application. However, nonlinear torsional compliance and friction are the most fundamental problems in these components and accurate modelling of the dynamic behaviour is expected to improve the performance of the system. This paper offers a model for torsional compliance of harmonic drives. A statistical measure of variation is defined, by which the reliability of the estimated parameters for different operating conditions, as well as the accuracy and integrity of the proposed model, are quantified. 
The model performance is assessed by simulation to verify the experimental results. Two test setups have been developed and built, which are employed to evaluate experimentally the behaviour of the system. Each setup comprises a different type of harmonic drive, namely the high load torque and the low load torque harmonic drive. The results show an accurate match between the simulation torque obtained from the identified model and the measured torque from the experiment, which indicates the reliability of the proposed model.", "title": "" } ]
scidocsrr
3b7f92cc3a43d3f7693a835df61c006d
Deep Convolutional Neural Networks for Spatiotemporal Crime Prediction
[ { "docid": "c39fe902027ba5cb5f0fa98005596178", "text": "Twitter is used extensively in the United States as well as globally, creating many opportunities to augment decision support systems with Twitter-driven predictive analytics. Twitter is an ideal data source for decision support: its users, who number in the millions, publicly discuss events, emotions, and innumerable other topics; its content is authored and distributed in real time at no charge; and individual messages (also known as tweets) are often tagged with precise spatial and temporal coordinates. This article presents research investigating the use of spatiotemporally tagged tweets for crime prediction. We use Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States. We then incorporate these topics into a crime prediction model and show that, for 19 of the 25 crime types we studied, the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation. We identify a number of performance bottlenecks that could impact the use of Twitter in an actual decision support system. We also point out important areas of future work for this research, including deeper semantic analysis of message content, temporal modeling, and incorporation of auxiliary data sources. This research has implications specifically for criminal justice decision makers in charge of resource allocation for crime prevention. 
More generally, this research has implications for decision makers concerned with geographic spaces occupied by Twitter-using individuals.", "title": "" }, { "docid": "3a58c1a2e4428c0b875e1202055e5b13", "text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.", "title": "" } ]
[ { "docid": "eb83f7367ba11bb5582864a08bb746ff", "text": "Probabilistic inference algorithms for finding the most probable explanation, the maximum a posteriori hypothesis, and the maximum expected utility and for updating belief are reformulated as an elimination-type algorithm called bucket elimination. This emphasizes the principle common to many of the algorithms appearing in that literature and clarifies their relationship to nonserial dynamic programming algorithms. We also present a general way of combining conditioning and elimination within this framework. Bounds on complexity are given for all the algorithms as a function of the problem's structure.", "title": "" }, { "docid": "303d489a5f2f1cf021a1854b6c2724e0", "text": "RGB-D cameras provide both color images and per-pixel depth estimates. The richness of this data and the recent development of low-cost sensors have combined to present an attractive opportunity for mobile robotics research. In this paper, we describe a system for visual odometry and mapping using an RGB-D camera, and its application to autonomous flight. By leveraging results from recent state-of-the-art algorithms and hardware, our system enables 3D flight in cluttered environments using only onboard sensor data. All computation and sensing required for local position control are performed onboard the vehicle, reducing the dependence on unreliable wireless links. However, even with accurate 3D sensing and position estimation, some parts of the environment have more perceptual structure than others, leading to state estimates that vary in accuracy across the environment. If the vehicle plans a path without regard to how well it can localize itself along that path, it runs the risk of becoming lost or worse. 
We show how the Belief Roadmap (BRM) algorithm (Prentice and Roy, 2009), a belief space extension of the Probabilistic Roadmap algorithm, can be used to plan vehicle trajectories that incorporate the sensing model of the RGB-D camera. We evaluate the effectiveness of our system for controlling a quadrotor micro air vehicle, demonstrate its use for constructing detailed 3D maps of an indoor environment, and discuss its limitations.", "title": "" }, { "docid": "379df071aceaee1be2228070f0245257", "text": "This paper reports a SiC-based solid-state circuit breaker (SSCB) with an adjustable current-time (I-t) tripping profile for both ultrafast short circuit protection and overload protection. The tripping time ranges from 0.5 microsecond to 10 seconds for a fault current ranging from 0.8X to 10X of the nominal current. The I-t tripping profile, adjustable by choosing different resistance values in the analog control circuit, can help avoid nuisance tripping of the SSCB due to inrush transient current. The maximum thermal capability of the 1200V SiC JFET static switch in the SSCB is investigated to set a practical thermal limit for the I-t tripping profile. Furthermore, a low fault current ‘blind zone’ limitation of the prior SSCB design is discussed and a new circuit solution is proposed to operate the SSCB even under a low fault current condition. 
Both simulation and experimental results are reported.", "title": "" }, { "docid": "6018c84c0e5666b5b4615766a5bb98a9", "text": "We introduce instancewise feature selection as a methodology for model interpretation. Our method is based on learning a function to extract a subset of features that are most informative for each given example. This feature selector is trained to maximize the mutual information between selected features and the response variable, where the conditional distribution of the response variable given the input is the model to be explained. We develop an efficient variational approximation to the mutual information, and show the effectiveness of our method on a variety of synthetic and real data sets using both quantitative metrics and human evaluation.", "title": "" }, { "docid": "e3557b0f064d848c5a9127a0c3d5f1db", "text": "Understanding the behaviors of a software system is very important for performing daily system maintenance tasks. In practice, one way to gain knowledge about the runtime behavior of a system is to manually analyze system logs collected during the system executions. With the increasing scale and complexity of software systems, it has become challenging for system operators to manually analyze system logs. To address these challenges, in this paper, we propose a new approach for contextual analysis of system logs for understanding a system's behaviors. In particular, we first use execution patterns to represent execution structures reflected by a sequence of system logs, and propose an algorithm to mine execution patterns from the program logs. The mined execution patterns correspond to different execution paths of the system. Based on these execution patterns, our approach further learns essential contextual factors (e.g., the occurrences of specific program logs with specific parameter values) that cause a specific branch or path to be executed by the system. 
The mining and learning results can help system operators to understand a software system's runtime execution logic and behaviors during various tasks such as system problem diagnosis. We demonstrate the feasibility of our approach upon two real-world software systems (Hadoop and Ethereal).", "title": "" }, { "docid": "cf3e66247ab575b5a8e5fe1678c209bd", "text": "Metamorphic testing (MT) is an effective methodology for testing those so-called ``non-testable'' programs (e.g., scientific programs), where it is sometimes very difficult for testers to know whether the outputs are correct. In metamorphic testing, metamorphic relations (MRs) (which specify how particular changes to the input of the program under test would change the output) play an essential role. However, testers may typically have to obtain MRs manually.\n In this paper, we propose a search-based approach to automatic inference of polynomial MRs for a program under test. In particular, we use a set of parameters to represent a particular class of MRs, which we refer to as polynomial MRs, and turn the problem of inferring MRs into a problem of searching for suitable values of the parameters. We then dynamically analyze multiple executions of the program, and use particle swarm optimization to solve the search problem. To improve the quality of inferred MRs, we further use MR filtering to remove some inferred MRs.\n We also conducted three empirical studies to evaluate our approach using four scientific libraries (including 189 scientific functions). From our empirical results, our approach is able to infer many high-quality MRs in acceptable time (i.e., from 9.87 seconds to 1231.16 seconds), which are effective in detecting faults with no false detection.", "title": "" }, { "docid": "cac379c00a4146acd06c446358c3e95a", "text": "In this work, a new base station antenna is proposed. Two separate frequency bands with separate radiating elements are used in each band. 
The frequency band separation ratio is about 1.3:1. These elements are arranged with different spacing (wider spacing for the lower frequency band, and narrower spacing for the higher frequency band). Isolation between bands inherently exists in this approach. This avoids the grating lobe effect, and mitigates the beam narrowing (dispersion) seen with fixed element spacing covering the whole wide bandwidth. A new low-profile cross dipole is designed, which is integrated in the array with an EBG/AMC structure for reducing the size of low band elements and decreasing coupling at high band.", "title": "" }, { "docid": "58390e457d03dfec19b0ae122a7c0e0b", "text": "A single-fed CP stacked patch antenna is proposed to cover all the GPS bands, including E5a/E5b for the Galileo system. The small aperture size (lambda/8 at the L5 band) and the single feeding property make this antenna a promising element for small GPS arrays. The design procedures and antenna performances are presented, and issues related to coupling between array elements are discussed.", "title": "" }, { "docid": "efced3407e46faf9fa43ce299add28f4", "text": "This is a pilot study of the use of “Flash cookies” by popular websites. We find that more than 50% of the sites in our sample are using Flash cookies to store information about the user. Some are using it to “respawn” or re-instantiate HTTP cookies deleted by the user. Flash cookies often share the same values as HTTP cookies, and are even used on government websites to assign unique values to users. Privacy policies rarely disclose the presence of Flash cookies, and user controls for effectuating privacy preferences are", "title": "" }, { "docid": "47b5e127b64cf1842841afcdb67d6d84", "text": "This work describes the aerodynamic characteristics of an aircraft wing model with and without a bird-feather-like winglet.
The aerofoil used to construct the whole structure is the NACA 653-218 rectangular wing, and this aerofoil has been used to compare the results with previous research using winglets. The model of the rectangular wing with a bird-feather-like winglet was fabricated in polystyrene before being designed using CATIA P3 V5R13 software and finally fabricated in wood. The experimental analysis of the aerodynamic characteristics of the rectangular wing without a winglet, the wing with a horizontal winglet, and the wing with a 60-degree-inclination winglet for Reynolds numbers 1.66×10, 2.08×10 and 2.50×10 has been carried out in an open-loop low-speed wind tunnel at the Aerodynamics laboratory in Universiti Putra Malaysia. The experimental results show a 25-30% reduction in drag coefficient and a 10-20% increase in lift coefficient by using the bird-feather-like winglet at an angle of attack of 8 degrees. Keywords—Aerofoil, Wind tunnel, Winglet, Drag Coefficient.", "title": "" }, { "docid": "c9be394df8b4827c57c5413fc28b47e8", "text": "An important prerequisite for successful usage of computer systems and other interactive technology is a basic understanding of the symbols and interaction patterns used in them. This aspect of the broader construct “computer literacy” is used as an indicator in the computer literacy scale, which proved to be an economical, reliable and valid instrument for the assessment of computer literacy in older adults.", "title": "" }, { "docid": "d6976dd4280c0534049c33ff9efb2058", "text": "Bitcoin, as well as many of its successors, requires the whole transaction record to be reliably acquired by all nodes to prevent double-spending. Recently, many blockchains have been proposed to achieve scale-out throughput by letting nodes only acquire a fraction of the whole transaction set. However, these schemes, e.g., sharding and off-chain techniques, suffer from a degradation in decentralization or the capacity of fault tolerance.
In this paper, we show that the complete set of transactions is not a necessity for the prevention of double-spending if the properties of value transfers are fully explored. In other words, we show that a value-transfer ledger like Bitcoin has the potential to scale out by its nature without sacrificing security or decentralization. Firstly, we give a formal definition for the value-transfer ledger and its distinct features from a generic database. Then, we introduce the blockchain structure with a shared main chain for consensus and an individual chain for each node for recording transactions. A locally executable validation scheme is proposed with uncompromising validity and consistency. A beneficial consequence of our design is that nodes will spontaneously try to reduce their transmission cost by only providing the transactions needed to show that their transactions are not double-spent. As a result, the network is sharded as each node only acquires part of the transaction record and a scale-out throughput could be achieved, which we call \"spontaneous sharding\".
The views expressed in this information product are those of the author(s) and do not necessarily reflect the views of FAO.", "title": "" }, { "docid": "a29b94fb434ec5899ede49ff18561610", "text": "Contrary to the classical (time-triggered) principle that calculates the control signal in a periodic fashion, an event-driven control is computed and updated only when a certain condition is satisfied. This notably makes it possible to save computations in the control task while ensuring equivalent performance. In this paper, we develop and implement such strategies to control a nonlinear and unstable system, namely the inverted pendulum. We are first interested in the stabilization of the pendulum near its inverted position and propose an event-based control approach. This notably demonstrates the efficiency of the event-based scheme even in the case where the system has to be actively actuated to remain upright. We then study the swinging of the pendulum up to the desired position and propose a low-cost control law based on an energy function. The switch between both strategies is also analyzed. A real-time experiment is carried out and shows that reductions of about 98% and 50% in the number of samples compared to the classical scheme are achieved for the swing-up and stabilization parts, respectively.
Our system collects 171,168 binary relations from ReVerb, and is able to produce top-ranking relation schemas with a mean reciprocal rank of 0.337.", "title": "" }, { "docid": "b4ecb4c62562517b9b16088ad8ae8c22", "text": "This article presents the results of video-based Human Robot Interaction (HRI) trials which investigated people’s perceptions of different robot appearances and associated attention-seeking features and behaviors displayed by robots with different appearance and behaviors. The HRI trials studied the participants’ preferences for various features of robot appearance and behavior, as well as their personality attributions towards the robots compared to their own personalities. Overall, participants tended to prefer robots with more human-like appearance and attributes. However, systematic individual differences in the dynamic appearance ratings are not consistent with a universal effect. Introverts and participants with lower emotional stability tended to prefer the mechanical looking appearance to a greater degree than other participants. It is also shown that it is possible to rate individual elements of a particular robot’s behavior and then assess the contribution, or otherwise, of that element to the overall perception of the robot by people. Relating participants’ dynamic appearance ratings of individual robots to independent static appearance ratings provided evidence that could be taken to support a portion of the left hand side of Mori’s theoretically proposed ‘uncanny valley’ diagram. Suggestions for future work are outlined. I. INTRODUCTION Robots that are currently commercially available for use in a domestic environment and which have human interaction features are often orientated towards toy or entertainment functions.
In the future, a robot companion which is to find a more generally useful place within a human oriented domestic environment, and thus sharing a private home with a person or family, must satisfy two main criteria (Dautenhahn et al. (2005); Syrdal et al. (2006); Woods et al. (2007)): It must be able to perform a range of useful tasks or functions. It must carry out these tasks or functions in a manner that is socially acceptable and comfortable for people it shares the environment with and/or it interacts with. The technical challenges in getting a robot to perform useful tasks are extremely difficult, and many researchers are currently researching into the technical capabilities that will be required to perform useful functions in a human centered environment including navigation, manipulation, vision, speech, sensing, safety, system integration and planning. The second criteria is arguably equally important, because if the robot does not exhibit socially acceptable behavior, then people may reject the robot if it is annoying, irritating, unsettling or frightening to human users. Therefore: How can a robot behave in a socially acceptable manner? Research into social robots is generally contained within the rapidly developing field of Human-Robot Interaction (HRI). For an overview of socially interactive robots (robots designed to interact with humans in a social way) see Fong et al. (2003). Relevant examples of studies and investigations into human reactions to robots include: Goetz et al. (2003) where issues of robot appearance, behavior and task domains were investigated, and Severinson-Eklundh et al. (2003) which documents a longitudinal HRI trial investigating the human perspective of using a robotic assistant over several weeks . Khan (1998), Scopelliti et al. (2004) and Dautenhahn et al. (2005) have surveyed peoples’ views of domestic robots in order to aid the development of an initial design specification for domestic or servant robots. Kanda et al. 
(2004) presents results from a longitudinal HRI trial with a robot as a social partner and peer tutor aiding children learning English.", "title": "" }, { "docid": "5d002ab84e1a6034d2751f0807d914ac", "text": "We live in a world with a population of more than 7.1 billion; have we ever imagined how many leaders we have? Yes, most of us are followers; we live in a world where we follow what has been commanded. The intention of this paper is to equip everyone with some knowledge of how we can identify who leaders are, whether you are one of them, and how we can help ourselves and others develop leadership qualities. The model highlights various traits which are very necessary for leadership. This paper has been put together after probing almost 30 other research papers. The principal result we arrived at was that the major/essential traits identified in a leader are Honesty, Integrity, Drive (Achievement, Motivation, Ambition, Energy, Tenacity and Initiative), Self Confidence, Vision and Cognitive Ability. The key finding also says that people with such qualities are not necessarily in politics, but come from various walks of life, such as major organizations and different cultures, backgrounds, educations and ethnicities. We also found that merely possessing such traits does not guarantee leadership success, as evidence shows that effective leaders are different in nature from most other people in certain key respects. So, let us go through the paper to enhance our mental abilities to search for the leaders out there.", "title": "" }, { "docid": "5b9c12c1d65ab52d1a7bb6575c6c0bb1", "text": "The purpose of image enhancement is to process an acquired image for better contrast and visibility of features of interest for visual examination as well as subsequent computer-aided analysis and diagnosis. Therefore, we have proposed an algorithm for medical image enhancement.
In the study, we used top-hat transform, contrast limited histogram equalization and anisotropic diffusion filter methods. The system results are quite satisfactory for many different medical images like lung, breast, brain, knee, etc.", "title": "" }, { "docid": "9bbee4d4c1040b5afd92910ce23d5ba5", "text": "BACKGROUND\nNovel interventions for treatment-resistant depression (TRD) in adolescents are urgently needed. Ketamine has been studied in adults with TRD, but little information is available for adolescents. This study investigated efficacy and tolerability of intravenous ketamine in adolescents with TRD, and explored clinical response predictors.\n\n\nMETHODS\nAdolescents, 12-18 years of age, with TRD (failure to respond to two previous antidepressant trials) were administered six ketamine (0.5 mg/kg) infusions over 2 weeks. Clinical response was defined as a 50% decrease in Children's Depression Rating Scale-Revised (CDRS-R); remission was CDRS-R score ≤28. Tolerability assessment included monitoring vital signs and dissociative symptoms using the Clinician-Administered Dissociative States Scale (CADSS).\n\n\nRESULTS\nThirteen participants (mean age 16.9 years, range 14.5-18.8 years, eight biologically male) completed the protocol. Average decrease in CDRS-R was 42.5% (p = 0.0004). Five (38%) adolescents met criteria for clinical response. Three responders showed sustained remission at 6-week follow-up; relapse occurred within 2 weeks for the other two responders. Ketamine infusions were generally well tolerated; dissociative symptoms and hemodynamic symptoms were transient. Higher dose was a significant predictor of treatment response.\n\n\nCONCLUSIONS\nThese results demonstrate the potential role for ketamine in treating adolescents with TRD. Limitations include the open-label design and small sample; future research addressing these issues is needed to confirm these results.
Additionally, evidence suggested a dose-response relationship; future studies are needed to optimize dose. Finally, questions remain regarding the long-term safety of ketamine as a depression treatment; more information is needed before broader clinical use.", "title": "" }, { "docid": "6e77a99b6b0ddf18560580fed1ca5bbe", "text": "Theoretical analysis of the connection between taxation and risk-taking has mainly been concerned with the effect of taxes on portfolio decisions of consumers, Mossin (1968b) and Stiglitz (1969). However, there are some problems which are not naturally classified under this heading and which, although of considerable practical interest, have been left out of the theoretical discussions. One such problem is tax evasion. This takes many forms, and one can hardly hope to give a completely general analysis of all these. Our objective in this paper is therefore the more limited one of analyzing the individual taxpayer’s decision on whether and to what extent to avoid taxes by deliberate underreporting. On the one hand our approach is related to the studies of economics of criminal activity, as e.g. in the papers by Becker (1968) and by Tulkens and Jacquemin (1971). On the other hand it is related to the analysis of optimal portfolio and insurance policies in the economics of uncertainty, as in the work by Arrow (1970), Mossin (1968a) and several others. We shall start by considering a simple static model where this decision is the only one with which the individual is concerned, so that we ignore the interrelationships that probably exist with other types of economic choices. After a detailed study of this simple case (sections
scidocsrr
f0b9c8ab88c8d01722e7b6b391497182
EX2: Exploration with Exemplar Models for Deep Reinforcement Learning
[ { "docid": "4fc6ac1b376c965d824b9f8eb52c4b50", "text": "Efficient exploration remains a major challenge for reinforcement learning (RL). Common dithering strategies for exploration, such as -greedy, do not carry out temporally-extended (or deep) exploration; this can lead to exponentially larger data requirements. However, most algorithms for statistically efficient RL are not computationally tractable in complex environments. Randomized value functions offer a promising approach to efficient exploration with generalization, but existing algorithms are not compatible with nonlinearly parameterized value functions. As a first step towards addressing such contexts we develop bootstrapped DQN. We demonstrate that bootstrapped DQN can combine deep exploration with deep neural networks for exponentially faster learning than any dithering strategy. In the Arcade Learning Environment bootstrapped DQN substantially improves learning speed and cumulative performance across most games.", "title": "" }, { "docid": "3181171d92ce0a8d3a44dba980c0cc5f", "text": "Exploration in complex domains is a key challenge in reinforcement learning, especially for tasks with very sparse rewards. Recent successes in deep reinforcement learning have been achieved mostly using simple heuristic exploration strategies such as -greedy action selection or Gaussian control noise, but there are many tasks where these methods are insufficient to make any learning progress. Here, we consider more complex heuristics: efficient and scalable exploration strategies that maximize a notion of an agent’s surprise about its experiences via intrinsic motivation. We propose to learn a model of the MDP transition probabilities concurrently with the policy, and to form intrinsic rewards that approximate the KL-divergence of the true transition probabilities from the learned model. One of our approximations results in using surprisal as intrinsic motivation, while the other gives the k-step learning progress. 
We show that our incentives enable agents to succeed in a wide range of environments with high-dimensional state spaces and very sparse rewards, including continuous control tasks and games in the Atari RAM domain, outperforming several other heuristic exploration techniques.", "title": "" } ]
[ { "docid": "7b5be6623ad250bea3b84c86c6fb0000", "text": "HTTP video streaming, employed by most of the video-sharing websites, allows users to control the video playback using, for example, pausing and switching the bit rate. These user-viewing activities can be used to mitigate the temporal structure impairments of the video quality. On the other hand, other activities, such as mouse movement, do not help reduce the impairment level. In this paper, we have performed subjective experiments to analyze user-viewing activities and correlate them with network path performance and user quality of experience. The results show that network measurement alone may miss important information about user dissatisfaction with the video quality. Moreover, video impairments can trigger user-viewing activities, notably pausing and reducing the screen size. By including the pause events into the prediction model, we can increase its explanatory power.", "title": "" }, { "docid": "2997be0d8b1f7a183e006eba78135b13", "text": "The basic mechanics of human locomotion are associated with vaulting over stiff legs in walking and rebounding on compliant legs in running. However, while rebounding legs well explain the stance dynamics of running, stiff legs cannot reproduce that of walking. With a simple bipedal spring-mass model, we show that not stiff but compliant legs are essential to obtain the basic walking mechanics; incorporating the double support as an essential part of the walking motion, the model reproduces the characteristic stance dynamics that result in the observed small vertical oscillation of the body and the observed out-of-phase changes in forward kinetic and gravitational potential energies. 
Exploring the parameter space of this model, we further show that it not only combines the basic dynamics of walking and running in one mechanical system, but also reveals these gaits to be just two out of the many solutions to legged locomotion offered by compliant leg behaviour and accessed by energy or speed.", "title": "" }, { "docid": "7deac3cbb3a30914412db45f69fb27f1", "text": "This paper presents the design, numerical analysis and measurements of a planar bypass balun that provides 1:4 impedance transformations between the unbalanced microstrip (MS) and balanced coplanar strip line (CPS). This type of balun is suitable for operation with small antennas fed with a balanced (parallel wire) transmission line, i.e. wire, planar dipoles and loop antennas. The balun has been applied to textile CPS-fed loop antennas, designed for operations below 1 GHz. The performance of a loop antenna with the balun is described, as well as an idea of incorporating rigid circuits with flexible textile structures.
The proposed approach for haptic object recognition and the new tactile sensor system are evaluated with an anthropomorphic robot hand.", "title": "" }, { "docid": "3daa9fc7d434f8a7da84dd92f0665564", "text": "In this article we analyze the response of Time of Flight cameras (active sensors) for close range imaging under three different illumination conditions and compare the results with stereo vision (passive) sensors. Time of Flight sensors are sensitive to ambient light and have low resolution but deliver high frame rate accurate depth data under suitable conditions. We introduce some metrics for performance evaluation over a small region of interest. Based on these metrics, we analyze and compare depth imaging of a leaf under indoor (room) and outdoor (shadow and sunlight) conditions by varying exposures of the sensors. Performance of three different time of flight cameras (PMD CamBoard, PMD CamCube and SwissRanger SR4000) is compared against selected stereo-correspondence algorithms (local correlation and graph cuts). PMD CamCube has better cancellation of sunlight, followed by CamBoard, while SwissRanger SR4000 performs poorly under sunlight. Stereo vision is more robust to ambient illumination and provides high resolution depth data but it is constrained by texture of the object along with computational efficiency. The graph-cut-based stereo correspondence algorithm can better retrieve the shape of the leaves but is computationally much more expensive as compared to local correlation. Finally, we propose a method to increase the dynamic range of the ToF cameras for a scene involving both shadow and sunlight exposures at the same time using camera flags (PMD) or confidence matrix (SwissRanger).
They either train a Convolutional Neural Network to directly regress from an image to a 3D pose, which ignores the dependencies between human joints, or model these dependencies via a max-margin structured learning framework, which involves a high computational cost at inference time. In this paper, we introduce a Deep Learning regression architecture for structured prediction of 3D human pose from monocular images or 2D joint location heatmaps that relies on an overcomplete autoencoder to learn a high-dimensional latent pose representation and accounts for joint dependencies. We further propose an efficient Long Short-Term Memory network to enforce temporal consistency on 3D pose predictions. We demonstrate that our approach achieves state-of-the-art performance both in terms of structure preservation and prediction accuracy on standard 3D human pose estimation benchmarks.", "title": "" }, { "docid": "2c3142a8432d8ef14047c0b827d76c29", "text": "During the early phase of replication, HIV reverse transcribes its RNA and crosses the nuclear envelope while escaping host antiviral defenses. The host factor Cyclophilin A (CypA) is essential for these steps and binds the HIV capsid; however, the mechanism underlying this effect remains elusive. Here, we identify related capsid mutants in HIV-1, HIV-2, and SIVmac that are restricted by CypA. This antiviral restriction of mutated viruses is conserved across species and prevents nuclear import of the viral cDNA. Importantly, the inner nuclear envelope protein SUN2 is required for the antiviral activity of CypA. We show that wild-type HIV exploits SUN2 in primary CD4+ T cells as an essential host factor that is required for the positive effects of CypA on reverse transcription and infection. 
Altogether, these results establish essential CypA-dependent functions of SUN2 in HIV infection at the nuclear envelope.", "title": "" }, { "docid": "96edea15b87643d4501d10ad9e386c70", "text": "As the visualization field matures, an increasing number of general toolkits are developed to cover a broad range of applications. However, no general tool can incorporate the latest capabilities for all possible applications, nor can the user interfaces and workflows be easily adjusted to accommodate all user communities. As a result, users will often choose either substandard solutions presented in familiar, customized tools or assemble a patchwork of individual applications glued through ad-hoc scripts and extensive, manual intervention. Instead, we need the ability to easily and rapidly assemble the best-in-task tools into custom interfaces and workflows to optimally serve any given application community. Unfortunately, creating such meta-applications at the API or SDK level is difficult, time consuming, and often infeasible due to the sheer variety of data models, design philosophies, limits in functionality, and the use of closed commercial systems. In this paper, we present the ManyVis framework which enables custom solutions to be built both rapidly and simply by allowing coordination and communication across existing unrelated applications. ManyVis allows users to combine software tools with complementary characteristics into one virtual application driven by a single, custom-designed interface.", "title": "" }, { "docid": "c3566f9addba75542296f41be2bd604e", "text": "We consider the problem of content-based spam filtering for short text messages that arise in three contexts: mobile (SMS) communication, blog comments, and email summary information such as might be displayed by a low-bandwidth client. Short messages often consist of only a few words, and therefore present a challenge to traditional bag-of-words based spam filters.
Using three corpora of short messages and message fields derived from real SMS, blog, and spam messages, we evaluate feature-based and compression-model-based spam filters. We observe that bag-of-words filters can be improved substantially using different features, while compression-model filters perform quite well as-is. We conclude that content filtering for short messages is surprisingly effective.", "title": "" }, { "docid": "b02bcb7e0d7669b69130604157c27c08", "text": "The success of Android phones makes them a prominent target for malicious software, in particular since the Android permission system turned out to be inadequate to protect the user against security and privacy threats. This work presents AppGuard, a powerful and flexible system for the enforcement of user-customizable security policies on untrusted Android applications. AppGuard does not require any changes to a smartphone’s firmware or root access. Our system offers complete mediation of security-relevant methods based on callee-site inline reference monitoring. We demonstrate the general applicability of AppGuard by several case studies, e.g., removing permissions from overly curious apps as well as defending against several recent real-world attacks on Android phones. Our technique exhibits very little space and runtime overhead. AppGuard is publicly available, has been invited to the Samsung Apps market, and has had more than 500,000 downloads so far.", "title": "" }, { "docid": "96331faaf58a7e1b651a12047a1c7455", "text": "The goal of scattered data interpolation techniques is to construct a (typically smooth) function from a set of unorganized samples. These techniques have a wide range of applications in computer graphics. For instance they can be used to model a surface from a set of sparse samples, to reconstruct a BRDF from a set of measurements, to interpolate motion capture data, or to compute the physical properties of a fluid. 
This course will survey and compare scattered interpolation algorithms and describe their applications in computer graphics. Although the course is focused on applying these techniques, we will introduce some of the underlying mathematical theory and briefly mention numerical considerations.", "title": "" }, { "docid": "e510143a57bc2c0da2c745ec6f53572a", "text": "Lately, fire outbreaks have become a common issue, and their occurrence can cause severe damage to nature and human property. Thus, fire detection, which is important for protecting human life and property, has received increasing attention in recent years. This paper focuses on an algorithm for fire detection using image processing techniques, i.e., colour pixel classification. This fire detection system does not require any special type of sensor, and it can monitor large areas, depending on the quality of the camera used. The objective of this research is to design a methodology for fire detection using images as input. The proposed algorithm uses colour pixel classification. The system applies image enhancement techniques and RGB and YCbCr colour models with given conditions to separate fire pixels from the background, and isolates luminance from chrominance in the original image to detect fire. The proposed system achieved a 90% fire detection rate on average.", "title": "" }, { "docid": "04af9445949dec4a0ba16fe54b7a9e62", "text": "In the last five years, deep learning methods and particularly Convolutional Neural Networks (CNNs) have exhibited excellent accuracies in many pattern classification problems. Most of the state-of-the-art models apply data-augmentation techniques at the training stage. This paper provides a brief tutorial on data preprocessing and shows its benefits by using the competitive MNIST handwritten digits classification problem.
We show and analyze the impact of different preprocessing techniques on the performance of three CNNs, LeNet, Network3 and DropConnect, together with their ensembles. The analyzed transformations are centering, elastic deformation, translation, rotation, and different combinations of them. Our analysis demonstrates that data-preprocessing techniques, such as the combination of elastic deformation and rotation, together with ensembles have a high potential to further improve the state-of-the-art accuracy in MNIST classification.", "title": "" }, { "docid": "b0ce4a13ea4a2401de4978b6859c5ef2", "text": "We propose to unify a variety of existing semantic classification tasks, such as semantic role labeling, anaphora resolution, and paraphrase detection, under the heading of Recognizing Textual Entailment (RTE). We present a general strategy to automatically generate one or more sentential hypotheses based on an input sentence and pre-existing manual semantic annotations. The resulting suite of datasets enables us to probe a statistical RTE model’s performance on different aspects of semantics. We demonstrate the value of this approach by investigating the behavior of a popular neural network RTE model.", "title": "" }, { "docid": "dfb125e8ae2b65540c14482fe5fe26a5", "text": "Considering the shift of museums towards digital experiences that can satiate the interests of their young audiences, we suggest an integrated schema for socially engaging large visitor groups. As a means to present our position we propose a framework for audience involvement with complex educational material, combining serious games and virtual environments along with a theory of contextual learning in museums. We describe the research methodology for validating our framework, including the description of a testbed application and results from existing studies with children in schools, summer camps, and a museum.
Such findings serve both as evidence for the applicability of our position and as a guidepost for the direction we should move to foster richer social engagement of young crowds.", "title": "" }, { "docid": "78c3573511176ba63e2cf727e09c7eb4", "text": "Human aesthetic preference in the visual domain is reviewed from definitional, methodological, empirical, and theoretical perspectives. Aesthetic science is distinguished from the perception of art and from philosophical treatments of aesthetics. The strengths and weaknesses of important behavioral techniques are presented and discussed, including two-alternative forced-choice, rank order, subjective rating, production/adjustment, indirect, and other tasks. Major findings are reviewed about preferences for colors (single colors, color combinations, and color harmony), spatial structure (low-level spatial properties, shape properties, and spatial composition within a frame), and individual differences in both color and spatial structure. Major theoretical accounts of aesthetic response are outlined and evaluated, including explanations in terms of mere exposure effects, arousal dynamics, categorical prototypes, ecological factors, perceptual and conceptual fluency, and the interaction of multiple components. The results of the review support the conclusion that aesthetic response can be studied rigorously and meaningfully within the framework of scientific psychology.", "title": "" }, { "docid": "a53904f277c06e32bd6ad148399443c6", "text": "Big data is flowing into every area of our life, professional and personal. Big data is defined as datasets whose size is beyond the ability of typical software tools to capture, store, manage and analyze, due to the time and memory complexity. Velocity is one of the main properties of big data. In this demo, we present SAMOA (Scalable Advanced Massive Online Analysis), an open-source platform for mining big data streams.
It provides a collection of distributed streaming algorithms for the most common data mining and machine learning tasks such as classification, clustering, and regression, as well as programming abstractions to develop new algorithms. It features a pluggable architecture that allows it to run on several distributed stream processing engines such as Storm, S4, and Samza. SAMOA is written in Java and is available at http://samoa-project.net under the Apache Software License version 2.0.", "title": "" }, { "docid": "5c2f115e0159d15a87904e52879c1abf", "text": "Current approaches for visual--inertial odometry (VIO) are able to attain highly accurate state estimation via nonlinear optimization. However, real-time optimization quickly becomes infeasible as the trajectory grows over time; this problem is further emphasized by the fact that inertial measurements come at high rate, hence, leading to the fast growth of the number of variables in the optimization. In this paper, we address this issue by preintegrating inertial measurements between selected keyframes into single relative motion constraints. Our first contribution is a preintegration theory that properly addresses the manifold structure of the rotation group. We formally discuss the generative measurement model as well as the nature of the rotation noise and derive the expression for the maximum a posteriori state estimator. Our theoretical development enables the computation of all necessary Jacobians for the optimization and a posteriori bias correction in analytic form. The second contribution is to show that the preintegrated inertial measurement unit model can be seamlessly integrated into a visual--inertial pipeline under the unifying framework of factor graphs. This enables the application of incremental-smoothing algorithms and the use of a structureless model for visual measurements, which avoids optimizing over the 3-D points, further accelerating the computation. 
We perform an extensive evaluation of our monocular VIO pipeline on real and simulated datasets. The results confirm that our modeling effort leads to an accurate state estimation in real time, outperforming state-of-the-art approaches.", "title": "" }, { "docid": "37a91db42be93afebb02a60cd9a7b339", "text": "We present a novel method for image-text multi-modal representation learning. To our knowledge, this work is the first approach to apply the concept of adversarial learning to multi-modal learning without exploiting image-text pair information to learn a multi-modal feature. We use only category information, in contrast with most previous methods, which use image-text pair information for multi-modal embedding. In this paper, we show that a multi-modal feature can be achieved without image-text pair information, and that our method produces more similar distributions of image and text in the multi-modal feature space than other methods that use image-text pair information. We also show that our multi-modal feature carries universal semantic information, even though it was trained for category prediction. Our model is trained end-to-end by backpropagation, is intuitive, and is easily extended to other multi-modal learning work.", "title": "" }, { "docid": "c2722939dca35be6fd8662c6b77cee1d", "text": "The cost of moving and storing data is still a fundamental concern for computer architects. Inefficient handling of data can be attributed to conventional architectures being oblivious to the nature of the values that these data bits carry. We observe the phenomenon of spatio-value similarity, where data elements that are approximately similar in value exhibit spatial regularity in memory. This is inherent to 1) the data values of real-world applications, and 2) the way we store data structures in memory. We propose the Bunker Cache, a design that maps similar data to the same cache storage location based solely on their memory address, sacrificing some application quality for greater efficiency.
The Bunker Cache enables performance gains (ranging from 1.08x to 1.19x) via reduced cache misses and energy savings (ranging from 1.18x to 1.39x) via reduced off-chip memory accesses and lower cache storage requirements. The Bunker Cache requires only modest changes to cache indexing hardware, integrating easily into commodity systems.", "title": "" } ]
subset: scidocsrr
query_id: bed598d119ebf08545c93e7c90802bc1
query: Mash: fast genome and metagenome distance estimation using MinHash
[ { "docid": "6059b4bbf5d269d0a5f1f596b48c1acb", "text": "The mathematical concept of document resemblance captures well the informal notion of syntactic similarity. The resemblance can be estimated using a fixed size “sketch” for each document. For a large collection of documents (say hundreds of millions) the size of this sketch is of the order of a few hundred bytes per document. However, for efficient large scale web indexing it is not necessary to determine the actual resemblance value: it suffices to determine whether newly encountered documents are duplicates or near-duplicates of documents already indexed. In other words, it suffices to determine whether the resemblance is above a certain threshold. In this talk we show how this determination can be made using a ”sample” of less than 50 bytes per document. The basic approach for computing resemblance has two aspects: first, resemblance is expressed as a set (of strings) intersection problem, and second, the relative size of intersections is evaluated by a process of random sampling that can be done independently for each document. The process of estimating the relative size of intersection of sets and the threshold test discussed above can be applied to arbitrary sets, and thus might be of independent interest. The algorithm for filtering near-duplicate documents discussed here has been successfully implemented and has been used for the last three years in the context of the AltaVista search engine.", "title": "" }, { "docid": "faac043b0c32bad5a44d52b93e468b78", "text": "Comparative genomic analyses of primates offer considerable potential to define and understand the processes that mold, shape, and transform the human genome. However, primate taxonomy is both complex and controversial, with marginal unifying consensus of the evolutionary hierarchy of extant primate species. 
Here we provide new genomic sequence (~8 Mb) from 186 primates representing 61 (~90%) of the described genera, and we include outgroup species from Dermoptera, Scandentia, and Lagomorpha. The resultant phylogeny is exceptionally robust and illuminates events in primate evolution from ancient to recent, clarifying numerous taxonomic controversies and providing new data on human evolution. Ongoing speciation, reticulate evolution, ancient relic lineages, unequal rates of evolution, and disparate distributions of insertions/deletions among the reconstructed primate lineages are uncovered. Our resolution of the primate phylogeny provides an essential evolutionary framework with far-reaching applications including: human selection and adaptation, global emergence of zoonotic diseases, mammalian comparative genomics, primate taxonomy, and conservation of endangered species.", "title": "" }, { "docid": "252f4bcaeb5612a3018578ec2008dd71", "text": "Kraken is an ultrafast and highly accurate program for assigning taxonomic labels to metagenomic DNA sequences. Previous programs designed for this task have been relatively slow and computationally expensive, forcing researchers to use faster abundance estimation programs, which only classify small subsets of metagenomic data. Using exact alignment of k-mers, Kraken achieves classification accuracy comparable to the fastest BLAST program. In its fastest mode, Kraken classifies 100 base pair reads at a rate of over 4.1 million reads per minute, 909 times faster than Megablast and 11 times faster than the abundance estimation program MetaPhlAn. Kraken is available at http://ccb.jhu.edu/software/kraken/ .", "title": "" } ]
[ { "docid": "15de232c8daf22cf1a1592a21e1d9df3", "text": "This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze different methods for combining multimodal representations. Based on this methodological inventory, we discuss the benefit of multimodal grounding for a variety of language processing tasks and the challenges that arise. We particularly focus on multimodal grounding of verbs which play a crucial role for the compositional power of language. Title and Abstract in German Multimodale konzeptuelle Verankerung für die automatische Sprachverarbeitung Dieser Überblick erörtert, wie aktuelle Entwicklungen in der automatischen Verarbeitung multimodaler Inhalte die konzeptuelle Verankerung sprachlicher Inhalte erleichtern können. Die automatischen Methoden zur Verarbeitung multimodaler Inhalte werden zunächst hinsichtlich der zugrundeliegenden kognitiven Modelle menschlicher Informationsverarbeitung kategorisiert. Daraus ergeben sich verschiedene Methoden um Repräsentationen unterschiedlicher Modalitäten miteinander zu kombinieren. Ausgehend von diesen methodischen Grundlagen wird diskutiert, wie verschiedene Forschungsprobleme in der automatischen Sprachverarbeitung von multimodaler Verankerung profitieren können und welche Herausforderungen sich dabei ergeben. Ein besonderer Schwerpunkt wird dabei auf die multimodale konzeptuelle Verankerung von Verben gelegt, da diese eine wichtige kompositorische Funktion erfüllen.", "title": "" }, { "docid": "51e307584d6446ba2154676d02d2cc84", "text": "This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. 
Architectures that are relevant to multimedia learning include Paivio’s dual coding theory, Baddeley’s working memory model, Engelkamp’s multimodal theory, Sweller’s cognitive load theory, Mayer’s multimedia learning theory, and Nathan’s ANIMATE theory. The discussion emphasizes the interplay between traditional research studies and instructional applications of this research for increasing recall, reducing interference, minimizing cognitive load, and enhancing understanding. Tentative conclusions are that (a) there is general agreement among the different architectures, which differ in focus; (b) learners’ integration of multiple codes is underspecified in the models; (c) animated instruction is not required when mental simulations are sufficient; (d) actions must be meaningful to be successful; and (e) multimodal instruction is superior to targeting modality-specific individual differences.", "title": "" }, { "docid": "7db6124dc1f196ec2067a2d9dc7ba028", "text": "We describe a graphical representation of probabilistic relationships-an alternative to the Bayesian network-called a dependency network. Like a Bayesian network, a dependency network has a graph and a probability component. The graph component is a (cyclic) directed graph such that a node's parents render that node independent of all other nodes in the network. The probability component consists of the probability of a node given its parents for each node (as in a Bayesian network). We identify several basic properties of this representation, and describe its use in collaborative filtering (the task of predicting preferences) and the visualization of predictive relationships.", "title": "" }, { "docid": "789a9d6e2a007938fa8f1715babcabd2", "text": "We present a novel framework that enables efficient probabilistic inference in large-scale scientific models by allowing the execution of existing domain-specific simulators as probabilistic programs, resulting in highly interpretable posterior inference. 
Our framework is general purpose and scalable, and is based on a crossplatform probabilistic execution protocol through which an inference engine can control simulators in a language-agnostic way. We demonstrate the technique in particle physics, on a scientifically accurate simulation of the τ (tau) lepton decay, which is a key ingredient in establishing the properties of the Higgs boson. Highenergy physics has a rich set of simulators based on quantum field theory and the interaction of particles in matter. We show how to use probabilistic programming to perform Bayesian inference in these existing simulator codebases directly, in particular conditioning on observable outputs from a simulated particle detector to directly produce an interpretable posterior distribution over decay pathways. Inference efficiency is achieved via inference compilation where a deep recurrent neural network is trained to parameterize proposal distributions and control the stochastic simulator in a sequential importance sampling scheme, at a fraction of the computational cost of Markov chain Monte Carlo sampling.", "title": "" }, { "docid": "c692dd35605c4af62429edef6b80c121", "text": "As one of the most important mid-level features of music, chord contains rich information of harmonic structure that is useful for music information retrieval. In this paper, we present a chord recognition system based on the N-gram model. The system is time-efficient, and its accuracy is comparable to existing systems. We further propose a new method to construct chord features for music emotion classification and evaluate its performance on commercial song recordings. Experimental results demonstrate the advantage of using chord features for music classification and retrieval.", "title": "" }, { "docid": "e6c7d1db1e1cfaab5fdba7dd1146bcd2", "text": "We define the object detection from imagery problem as estimating a very large but extremely sparse bounding box dependent probability distribution. 
Subsequently we identify a sparse distribution estimation scheme, Directed Sparse Sampling, and employ it in a single end-to-end CNN based detection model. This methodology extends and formalizes previous state-of-the-art detection models with an additional emphasis on high evaluation rates and reduced manual engineering. We introduce two novelties, a corner based region-of-interest estimator and a deconvolution based CNN model. The resulting model is scene adaptive, does not require manually defined reference bounding boxes and produces highly competitive results on MSCOCO, Pascal VOC 2007 and Pascal VOC 2012 with real-time evaluation rates. Further analysis suggests our model performs particularly well when fine-grained object localization is desirable. We argue that this advantage stems from the significantly larger set of available regions-of-interest relative to other methods. Source-code is available from: https://github.com/lachlants/denet", "title": "" }, { "docid": "566913d3a3d2e8fe24d6f5ff78440b94", "text": "We describe a Digital Advertising System Simulation (DASS) for modeling advertising and its impact on user behavior. DASS is both flexible and general, and can be applied to research on a wide range of topics, such as digital attribution, ad fatigue, campaign optimization, and marketing mix modeling. This paper introduces the basic DASS simulation framework and illustrates its application to digital attribution. We show that common position-based attribution models fail to capture the true causal effects of advertising across several simple scenarios. These results lay a groundwork for the evaluation of more complex attribution models, and the development of improved models.", "title": "" }, { "docid": "e2d0a4d2c2c38722d9e9493cf506fc1c", "text": "This paper describes two Global Positioning System (GPS) based attitude determination algorithms which contain steps of integer ambiguity resolution and attitude computation.
The first algorithm extends the ambiguity function method to account for the unique requirement of attitude determination. The second algorithm explores the artificial neural network approach to find the attitude. A test platform is set up for verifying these algorithms.", "title": "" }, { "docid": "56a2279c9c3bcbddf03561bec2508f81", "text": "The article introduces a framework for users' design quality judgments based on Adaptive Decision Making theory. The framework describes judgment on quality attributes (usability, content/functionality, aesthetics, customisation and engagement) with dependencies on decision making arising from the user's background, task and context. The framework is tested and refined by three experimental studies. The first two assessed judgment of quality attributes of websites with similar content but radically different designs for aesthetics and engagement. Halo effects were demonstrated whereby attribution of good quality on one attribute positively influenced judgment on another, even in the face of objective evidence to the contrary (e.g., usability errors). Users' judgment was also shown to be susceptible to framing effects of the task and their background. These appear to change the importance order of the quality attributes; hence, quality assessment of a design appears to be very context dependent. The third study assessed the influence of customisation by experiments on mobile services applications, and demonstrated that evaluation of customisation depends on the users' needs and motivation. The results are discussed in the context of the literature on aesthetic judgment, user experience and trade-offs between usability and hedonic/ludic design qualities.", "title": "" }, { "docid": "8477b50ea5b4dd76f0bf7190ba05c284", "text": "It is shown how Conceptual Graphs and Formal Concept Analysis may be combined to obtain a formalization of Elementary Logic which is useful for knowledge representation and processing. 
For this, a translation of conceptual graphs to formal contexts and concept lattices is described through an example. Using a suitable mathematization of conceptual graphs, basics of a unified mathematical theory for Elementary Logic are proposed.", "title": "" }, { "docid": "f48ee93659a25bee9a49e8be6c789987", "text": "what design is from a theoretical point of view, which is a role of the descriptive model. However, descriptive models are not necessarily helpful in directly deriving either the architecture of intelligent CAD or the knowledge representation for intelligent CAD. For this purpose, we need a computable design process model that should coincide, at least to some extent, with a cognitive model that explains actual design activities. One of the major problems in developing so-called intelligent computer-aided design (CAD) systems (ten Hagen and Tomiyama 1987) is the representation of design knowledge, which is a two-part process: the representation of design objects and the representation of design processes. We believe that intelligent CAD systems will be fully realized only when these two types of representation are integrated. Progress has been made in the representation of design objects, as can be seen, for example, in geometric modeling; however, almost no significant results have been seen in the representation of design processes, which implies that we need a design theory to formalize them. According to Finger and Dixon (1989), design process models can be categorized into a descriptive model that explains how design is done, a cognitive model that explains the designer’s behavior, a prescriptive model that shows how design must be done, and a computable model that expresses a method by which a computer can accomplish a task. A design theory for intelligent CAD is not useful when it is merely descriptive or cognitive; it must also be computable.
We need a general model of design.", "title": "" }, { "docid": "f3b1e1c9effb7828a62187e9eec5fba7", "text": "Histone modifications and chromatin-associated protein complexes are crucially involved in the control of gene expression, supervising cell fate decisions and differentiation. Many promoters in embryonic stem (ES) cells harbor a distinctive histone modification signature that combines the activating histone H3 Lys 4 trimethylation (H3K4me3) mark and the repressive H3K27me3 mark. These bivalent domains are considered to poise expression of developmental genes, allowing timely activation while maintaining repression in the absence of differentiation signals. Recent advances shed light on the establishment and function of bivalent domains; however, their role in development remains controversial, not least because suitable genetic models to probe their function in developing organisms are missing. Here, we explore avenues to and from bivalency and propose that bivalent domains and associated chromatin-modifying complexes safeguard proper and robust differentiation.", "title": "" }, { "docid": "5203f520e6992ae6eb2e8cb28f523f6a", "text": "Integrons can insert and excise antibiotic resistance genes on plasmids in bacteria by site-specific recombination. Class 1 integrons code for an integrase, IntI1 (337 amino acids in length), and are generally borne on elements derived from Tn5090, such as that found in the central part of Tn21. A second class of integron is found on transposon Tn7 and its relatives. We have completed the sequence of the Tn7 integrase gene, intI2, which contains an internal stop codon. This codon was found to be conserved among intI2 genes on three other Tn7-like transposons harboring different cassettes. The predicted peptide sequence (IntI2*) is 325 amino acids long and is 46% identical to IntI1.
In order to detect recombination activity, the internal stop codon at position 179 in the parental allele was changed to a triplet coding for glutamic acid. The sequences flanking the cassette arrays in the class 1 and 2 integrons are not closely related, but a common pool of mobile cassettes is used by the different integron classes; two of the three antibiotic resistance cassettes on Tn7 and its close relatives are also found in various class 1 integrons. We also observed a fourth excisable cassette downstream of those described previously in Tn7. The fourth cassette encodes a 165-amino-acid protein of unknown function with 6.5 contiguous repeats of a sequence coding for 7 amino acids. IntI2*179E promoted site-specific excision of each of the cassettes in Tn7 at different frequencies. The integrases from Tn21 and Tn7 showed limited cross-specificity in that IntI1 could excise all cassettes from both Tn21 and Tn7. However, we did not observe a corresponding excision of the aadA1 cassette from Tn21 by IntI2*179E.", "title": "" }, { "docid": "9fdd2b84fc412e03016a12d951e4be01", "text": "We examine the implications of shape on the process of finding dense correspondence and half-occlusions for a stereo pair of images. The desired property of the disparity map is that it should be a piecewise continuous function which is consistent with the images and which has the minimum number of discontinuities. To zeroth order, piecewise continuity becomes piecewise constancy. Using this approximation, we first discuss an approach for dealing with such a fronto-parallel shapeless world, and the problems involved therein. We then introduce horizontal and vertical slant to create a first order approximation to piecewise continuity. In particular, we emphasize the following geometric fact: a horizontally slanted surface (i.e., having depth variation in the direction of the separation of the two cameras) will appear horizontally stretched in one image as compared to the other image. 
Thus, while corresponding two images, N pixels on a scanline in one image may correspond to a different number of pixels M in the other image. This leads to three important modifications to existing stereo algorithms: (a) due to unequal sampling, existing intensity matching metrics must be modified, (b) unequal numbers of pixels in the two images must be allowed to correspond to each other, and (c) the uniqueness constraint, which is often used for detecting occlusions, must be changed to an interval uniqueness constraint. We also discuss the asymmetry between vertical and horizontal slant, and the central role of non-horizontal edges in the context of vertical slant. Using experiments, we discuss cases where existing algorithms fail, and how the incorporation of these new constraints provides correct results.", "title": "" }, { "docid": "2e0585860c1fa533412ff1fea76632cb", "text": "Author Co-citation Analysis (ACA) has long been used as an effective method for identifying the intellectual structure of a research domain, but it relies on simple co-citation counting, which does not take the citation content into consideration. The present study proposes a new method for measuring the similarity between co-cited authors by considering author's citation content. We collected the full-text journal articles in the information science domain and extracted the citing sentences to calculate their similarity distances. We compared our method with traditional ACA and found out that our approach, while displaying a similar intellectual structure for the information science domain as the other baseline methods, also provides more details about the sub-disciplines in the domain than with traditional ACA.", "title": "" }, { "docid": "5447d3fe8ed886a8792a3d8d504eaf44", "text": "Glucose-responsive delivery of insulin mimicking the function of pancreatic β-cells to achieve meticulous control of blood glucose (BG) would revolutionize diabetes care. 
Here the authors report the development of a new glucose-responsive insulin delivery system based on the potential interaction between the glucose derivative-modified insulin (Glc-Insulin) and glucose transporters on erythrocytes (or red blood cells, RBCs) membrane. After being conjugated with the glucosamine, insulin can efficiently bind to RBC membranes. The binding is reversible in the setting of hyperglycemia, resulting in fast release of insulin and subsequent drop of BG level in vivo. The delivery vehicle can be further simplified utilizing injectable polymeric nanocarriers coated with RBC membrane and loaded with Glc-Insulin. The described work is the first demonstration of utilizing RBC membrane to achieve smart insulin delivery with fast responsiveness.", "title": "" }, { "docid": "f10d79d1eb6d3ec994c1ec7ec3769437", "text": "The security of embedded devices often relies on the secrecy of proprietary cryptographic algorithms. These algorithms and their weaknesses are frequently disclosed through reverse-engineering software, but it is commonly thought to be too expensive to reconstruct designs from a hardware implementation alone. This paper challenges that belief by presenting an approach to reverse-engineering a cipher from a silicon implementation. Using this mostly automated approach, we reveal a cipher from an RFID tag that is not known to have a software or micro-code implementation. We reconstruct the cipher from the widely used Mifare Classic RFID tag by using a combination of image analysis of circuits and protocol analysis. Our analysis reveals that the security of the tag is even below the level that its 48-bit key length suggests due to a number of design flaws. Weak random numbers and a weakness in the authentication protocol allow for pre-computed rainbow tables to be used to find any key in a matter of seconds. Our approach of deducing functionality from circuit images is mostly automated, hence it is also feasible for large chips. 
The assumption that algorithms can be kept secret should therefore be avoided for any type of silicon chip. Il faut qu’il n’exige pas le secret, et qu’il puisse sans inconvénient tomber entre les mains de l’ennemi. ([A cipher] must not depend on secrecy, and it must not matter if it falls into enemy hands.) August Kerckhoffs, La Cryptographie Militaire, January 1883 [13]", "title": "" }, { "docid": "2da44919966d841d4a1d6f3cc2a648e9", "text": "A composite cavity-backed folded sectorial bowtie antenna (FSBA) is proposed and investigated in this paper, which is differentially fed by an SMA connector through a balun, i.e. a transition from a microstrip line to a parallel stripline. The composite cavity as a general case, consisting of a conical part and a cylindrical rim, can be tuned freely from a cylindrical to a cup-shaped one. Parametric studies are performed to optimize the antenna performance. Experimental results reveal that it can achieve an impedance bandwidth of 143% for SWR ≤ 2, a broadside gain of 8-15.3 dBi, and stable radiation pattern over the whole operating band. The total electrical dimensions are 0.66λm in diameter and 0.16λm in height, where λm is the free-space wavelength at the lower edge of the operating frequency band.", "title": "" }, { "docid": "228678ad5d18d21d4bc7c1819329274f", "text": "Intentional frequency perturbation by recently researched active islanding detection techniques for inverter based distributed generation (DG) defines new threshold settings for the frequency relays. This innovation has enabled the modern frequency relays to operate inside the non-detection zone (NDZ) of the conventional frequency relays. However, the effect of such perturbation on the performance of the rate of change of frequency (ROCOF) relays has not been researched so far. 
This paper evaluates the performance of ROCOF relays under such perturbations for an inverter interfaced DG and proposes an algorithm along with new threshold settings to enable it to work under the NDZ. The proposed algorithm is able to differentiate between an islanding and a non-islanding event. The operating principle of the relay is based on low frequency current injection through grid side voltage source converter (VSC) control of a doubly fed induction generator (DFIG) and, therefore, the relay is defined as an “active ROCOF relay”. Simulations are done in MATLAB.", "title": "" }, { "docid": "4e6ca2d20e904a0eb72fcdcd1164a5e2", "text": "Fraudulent activities (e.g., suspicious credit card transactions, financial reporting fraud, and money laundering) are critical concerns to various entities including banks, insurance companies, and public service organizations. Typically, these activities lead to detrimental effects on the victims such as a financial loss. Over the years, fraud analysis techniques underwent a rigorous development. However, lately, the advent of Big Data led to vigorous advancement of these techniques since Big Data resulted in extensive opportunities to combat financial frauds. Given the massive amount of data that investigators need to sift through, integrating massive volumes of data from multiple heterogeneous sources (e.g., social media, blogs) to find fraudulent patterns is emerging as a feasible approach.", "title": "" } ]
scidocsrr
93f55dd33860b0763d6a60c00ecb3596
Socially Aware Networking: A Survey
[ { "docid": "7fc6e701aacc7d014916b9b47b01be16", "text": "We compare recent approaches to community structure identification in terms of sensitivity and computational cost. The recently proposed modularity measure is revisited and the performance of the methods as applied to ad hoc networks with known community structure, is compared. We find that the most accurate methods tend to be more computationally expensive, and that both aspects need to be considered when choosing a method for practical purposes. The work is intended as an introduction as well as a proposal for a standard benchmark test of community detection methods.", "title": "" } ]
[ { "docid": "bfe9b8e84da087cfd3a3d8ece6dc9b9d", "text": "Microblog ranking is a hot research topic in recent years. Most of the related works apply the TF-IDF metric for calculating content similarity while neglecting semantic similarity. Moreover, most existing search engines, which retrieve the microblog list by string matching the search keywords, are not competent to provide a reliable list for users when dealing with polysemy and synonymy. Besides, treating all users with the same authority for all topics is intuitively not ideal. In this paper, a comprehensive strategy for microblog ranking is proposed. First, we extend the conventional TF-IDF based content similarity by exploiting knowledge from WordNet. Then, we further incorporate a new feature for microblog ranking, namely the topical relation between the search keyword and its retrieval. Author topical authority is also incorporated into the ranking framework as an important feature for microblog ranking. Gradient Boosting Decision Tree (GBDT) is then employed to train the ranking model with multiple features involved. We conduct thorough experiments on a large-scale real-world Twitter dataset and demonstrate that our proposed approach outperforms a number of existing approaches in discovering higher quality and more related microblogs.", "title": "" }, { "docid": "84a2d26a0987a79baf597508543f39b6", "text": "In order to recommend products to users we must ultimately predict how a user will respond to a new product. To do so we must uncover the implicit tastes of each user as well as the properties of each product. For example, in order to predict whether a user will enjoy Harry Potter, it helps to identify that the book is about wizards, as well as the user's level of interest in wizardry. User feedback is required to discover these latent product and user dimensions. Such feedback often comes in the form of a numeric rating accompanied by review text. 
However, traditional methods often discard review text, which makes user and product latent dimensions difficult to interpret, since they ignore the very text that justifies a user's rating. In this paper, we aim to combine latent rating dimensions (such as those of latent-factor recommender systems) with latent review topics (such as those learned by topic models like LDA). Our approach has several advantages. Firstly, we obtain highly interpretable textual labels for latent rating dimensions, which helps us to `justify' ratings with text. Secondly, our approach more accurately predicts product ratings by harnessing the information present in review text; this is especially true for new products and users, who may have too few ratings to model their latent factors, yet may still provide substantial information from the text of even a single review. Thirdly, our discovered topics can be used to facilitate other tasks such as automated genre discovery, and to identify useful and representative reviews.", "title": "" }, { "docid": "a4c76e58074a42133a59a31d9022450d", "text": "This article reviews a free-energy formulation that advances Helmholtz's agenda to find principles of brain function based on conservation laws and neuronal energy. It rests on advances in statistical physics, theoretical biology and machine learning to explain a remarkable range of facts about brain structure and function. We could have just scratched the surface of what this formulation offers; for example, it is becoming clear that the Bayesian brain is just one facet of the free-energy principle and that perception is an inevitable consequence of active exchange with the environment. Furthermore, one can see easily how constructs like memory, attention, value, reinforcement and salience might disclose their simple relationships within this framework.", "title": "" }, { "docid": "2bf0219394d87654d2824c805844fcaa", "text": "Wei-yu Kevin Chiang • Dilip Chhajed • James D. 
Hess Department of Information Systems, University of Maryland at Baltimore County, Baltimore, Maryland 21250 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 Department of Business Administration, University of Illinois at Urbana–Champaign, Champaign, Illinois 61820 [email protected][email protected][email protected]", "title": "" }, { "docid": "91e8516d2e7e1e9de918251ac694ee08", "text": "High performance 3D integration Systems need a higher interconnect density between the die than traditional μbump interconnects can offer. For ultra-fine pitches interconnect pitches below 5μm a different solution is required. This paper describes a hybrid wafer-to-wafer (W2W) bonding approach that uses Cu damascene patterned surface bonding, allowing to scale down the interconnection pitch below 5 μm, potentially even down to 1μm, depending on the achievable W2W bonding accuracy. The bonding method is referred to as hybrid bonding since the bonding of the Cu/dielectric damascene surfaces leads simultaneously to metallic and dielectric bonding. In this paper, the integration flow for 300mm hybrid wafer bonding at 3.6μm and 1.8μm pitch will be described using a novel, alternative, non-oxide Cu/dielectric damascene process. Optimization of the surface preparation before bonding will be discussed. Of particular importance is the wafer chemical-mechanical-polishing (CMP) process and the pre-bonding wafer treatment. Using proper surface activation and very low roughness dielectrics, void-free room temperature bonding can be achieved. High bonding strengths are obtained, even using low temperature anneal (250°C). The process flow also integrates the use of a 5μm diameter, 50μm deep via-middle through-silicon-vias (TSV) to connect the wafer interfaces to the external wafer backside.", "title": "" }, { "docid": "a76ba02ef0f87a41cdff1a4046d4bba1", "text": "This paper proposes two RF self-interference cancellation techniques. 
Their small form-factor enables full-duplex communication links for small-to-medium size portable devices and hence promotes the adoption of full-duplex in mass-market applications and next-generation standards, e.g. IEEE802.11 and 5G. Measured prototype implementations of an electrical balance duplexer and a dual-polarized antenna both achieve >50 dB self-interference suppression at RF, operating in the ISM band at 2.45GHz.", "title": "" }, { "docid": "6f265af3f4f93fcce13563cac14b5774", "text": "Inorganic pyrophosphate (PP(i)) produced by cells inhibits mineralization by binding to crystals. Its ubiquitous presence is thought to prevent \"soft\" tissues from mineralizing, whereas its degradation to P(i) in bones and teeth by tissue-nonspecific alkaline phosphatase (Tnap, Tnsalp, Alpl, Akp2) may facilitate crystal growth. Whereas the crystal binding properties of PP(i) are largely understood, less is known about its effects on osteoblast activity. We have used MC3T3-E1 osteoblast cultures to investigate the effect of PP(i) on osteoblast function and matrix mineralization. Mineralization in the cultures was dose-dependently inhibited by PP(i). This inhibition could be reversed by Tnap, but not if PP(i) was bound to mineral. PP(i) also led to increased levels of osteopontin (Opn) induced via the Erk1/2 and p38 MAPK signaling pathways. Opn regulation by PP(i) was also insensitive to foscarnet (an inhibitor of phosphate uptake) and levamisole (an inhibitor of Tnap enzymatic activity), suggesting that increased Opn levels did not result from changes in phosphate. Exogenous OPN inhibited mineralization, but dephosphorylation by Tnap reversed this effect, suggesting that OPN inhibits mineralization via its negatively charged phosphate residues and that like PP(i), hydrolysis by Tnap reduces its mineral inhibiting potency. 
Using enzyme kinetic studies, we have shown that PP(i) inhibits Tnap-mediated P(i) release from beta-glycerophosphate (a commonly used source of organic phosphate for culture mineralization studies) through a mixed type of inhibition. In summary, PP(i) prevents mineralization in MC3T3-E1 osteoblast cultures by at least three different mechanisms that include direct binding to growing crystals, induction of Opn expression, and inhibition of Tnap activity.", "title": "" }, { "docid": "e1b536458ddc8603b281bac69e6bd2e8", "text": "We present highly integrated sensor-actuator-controller units (SAC units), addressing the increasing need for easy to use components in the design of modern high-performance robotic systems. Following strict design principles and an electro-mechanical co-design from the beginning on, our development resulted in highly integrated SAC units. Each SAC unit includes a motor, a gear unit, an IMU, sensors for torque, position and temperature as well as all necessary embedded electronics for control and communication over a high-speed EtherCAT bus. Key design considerations were easy to use interfaces and a robust cabling system. Using slip rings to electrically connect the input and output side, the units allow continuous rotation even when chained along a robotic arm. The experimental validation shows the potential of the new SAC units regarding the design of humanoid robots.", "title": "" }, { "docid": "288845120cdf96a20850b3806be3d89a", "text": "DNA replicases are multicomponent machines that have evolved clever strategies to perform their function. Although the structure of DNA is elegant in its simplicity, the job of duplicating it is far from simple. At the heart of the replicase machinery is a heteropentameric AAA+ clamp-loading machine that couples ATP hydrolysis to load circular clamp proteins onto DNA. The clamps encircle DNA and hold polymerases to the template for processive action. 
Clamp-loader and sliding clamp structures have been solved in both prokaryotic and eukaryotic systems. The heteropentameric clamp loaders are circular oligomers, reflecting the circular shape of their respective clamp substrates. Clamps and clamp loaders also function in other DNA metabolic processes, including repair, checkpoint mechanisms, and cell cycle progression. Twin polymerases and clamps coordinate their actions with a clamp loader and yet other proteins to form a replisome machine that advances the replication fork.", "title": "" }, { "docid": "0b4c076b80d91eb20ef71e63f17e9654", "text": "Current sports injury reporting systems lack a common conceptual basis. We propose a conceptual foundation as a basis for the recording of health problems associated with participation in sports, based on the notion of impairment used by the World Health Organization. We provide definitions of sports impairment concepts to represent the perspectives of health services, the participants in sports and physical exercise themselves, and sports institutions. For each perspective, the duration of the causative event is used as the norm for separating concepts into those denoting impairment conditions sustained instantly and those developing gradually over time. Regarding sports impairment sustained in isolated events, 'sports injury' denotes the loss of bodily function or structure that is the object of observations in clinical examinations; 'sports trauma' is defined as an immediate sensation of pain, discomfort or loss of functioning that is the object of athlete self-evaluations; and 'sports incapacity' is the sidelining of an athlete because of a health evaluation made by a legitimate sports authority that is the object of time loss observations. 
Correspondingly, sports impairment caused by excessive bouts of physical exercise is denoted as 'sports disease' (overuse syndrome) when observed by health service professionals during clinical examinations, 'sports illness' when observed by the athlete in self-evaluations, and 'sports sickness' when recorded as time loss from sports participation by a sports body representative. We propose a concerted development effort in this area that takes advantage of concurrent ontology management resources and involves the international sporting community in building terminology systems that have broad relevance.", "title": "" }, { "docid": "915b9627736c6ae916eafcd647cb39af", "text": "This paper describes a methodology for automated recognition of complex human activities. The paper proposes a general framework which reliably recognizes high-level human actions and human-human interactions. Our approach is a description-based approach, which enables a user to encode the structure of a high-level human activity as a formal representation. Recognition of human activities is done by semantically matching constructed representations with actual observations. The methodology uses a context-free grammar (CFG) based representation scheme as a formal syntax for representing composite activities. Our CFG-based representation enables us to define complex human activities based on simpler activities or movements. Our system takes advantage of both statistical recognition techniques from computer vision and knowledge representation concepts from traditional artificial intelligence. In the low-level of the system, image sequences are processed to extract poses and gestures. Based on the recognition of gestures, the high-level of the system hierarchically recognizes composite actions and interactions occurring in a sequence of image frames. The concept of hallucinations and a probabilistic semantic-level recognition algorithm is introduced to cope with imperfect lower-layers. 
As a result, the system recognizes human activities including ‘fighting’ and ‘assault’, which are high-level activities with which previous systems had difficulties. The experimental results show that our system reliably recognizes sequences of complex human activities with a high recognition rate.", "title": "" }, { "docid": "8b863cd49dfe5edc2d27a0e9e9db0429", "text": "This paper presents an annotation scheme for adding entity and event target annotations to the MPQA corpus, a rich span-annotated opinion corpus. The new corpus promises to be a valuable new resource for developing systems for entity/event-level sentiment analysis. Such systems, in turn, would be valuable in NLP applications such as Automatic Question Answering. We introduce the idea of entity and event targets (eTargets), describe the annotation scheme, and present the results of an agreement study.", "title": "" }, { "docid": "d6a6cadd782762e4591447b7dd2c870a", "text": "OBJECTIVE\nThe objective of this study was to assess the effects of participation in a mindfulness meditation-based stress reduction program on mood disturbance and symptoms of stress in cancer outpatients.\n\n\nMETHODS\nA randomized, wait-list controlled design was used. A convenience sample of eligible cancer patients enrolled after giving informed consent and were randomly assigned to either an immediate treatment condition or a wait-list control condition. Patients completed the Profile of Mood States and the Symptoms of Stress Inventory both before and after the intervention. The intervention consisted of a weekly meditation group lasting 1.5 hours for 7 weeks plus home meditation practice.\n\n\nRESULTS\nNinety patients (mean age, 51 years) completed the study. The group was heterogeneous in type and stage of cancer. Patients' mean preintervention scores on dependent measures were equivalent between groups. 
After the intervention, patients in the treatment group had significantly lower scores on Total Mood Disturbance and subscales of Depression, Anxiety, Anger, and Confusion and more Vigor than control subjects. The treatment group also had fewer overall Symptoms of Stress; fewer Cardiopulmonary and Gastrointestinal symptoms; less Emotional Irritability, Depression, and Cognitive Disorganization; and fewer Habitual Patterns of stress. Overall reduction in Total Mood Disturbance was 65%, with a 31% reduction in Symptoms of Stress.\n\n\nCONCLUSIONS\nThis program was effective in decreasing mood disturbance and stress symptoms in both male and female patients with a wide variety of cancer diagnoses, stages of illness, and ages. cancer, stress, mood, intervention, mindfulness.", "title": "" }, { "docid": "bed89842ee325f9dc662d63c07f34726", "text": "Analysis of flows such as human movement can help spatial planners better understand territorial patterns in urban environments. In this paper, we describe FlowSampler, an interactive visual interface designed for spatial planners to gather, extract and analyse human flows in geolocated social media data. Our system adopts a graph-based approach to infer movement pathways from spatial point type data and expresses the resulting information through multiple linked multiple visualisations to support data exploration. We describe two use cases to demonstrate the functionality of our system and characterise how spatial planners utilise it to address analytical task.", "title": "" }, { "docid": "1ddbe5990a1fc4fe22a9788c77307a9f", "text": "The DENDRAL and Meta-DENDRAL programs are products of a large, interdisciplinary group of Stanford University scientists concerned with many and highly varied aspects of the mechanization ofscientific reasoningand theformalization of scientific knowledge for this purpose. 
An early motivation for our work was to explore the power of existing AI methods, such as heuristic search, for reasoning in difficult scientific problems [7]. Another concern has been to exploit the AI methodology to understand better some fundamental questions in the philosophy of science, for example the processes by which explanatory hypotheses are discovered or judged adequate [18]. From the start, the project has had an applications dimension [9, 10, 27]. It has sought to develop \"expert level\" agents to assist in the solution of problems in their discipline that require complex symbolic reasoning. The applications dimension is the focus of this paper. In order to achieve high performance, the DENDRAL programs incorporate large amounts of knowledge about the area of science to which they are applied, structure elucidation in organic chemistry. A \"smart assistant\" for a chemist needs to be able to perform many tasks as well as an expert, but need not necessarily understand the domain at the same theoretical level as the expert. The over-all structure elucidation task is described below (Section 2) followed by a description of the role of the DENDRAL programs within that framework (Section 3). The Meta-DENDRAL programs (Section 4) use a weaker body of knowledge about the domain of mass spectrometry because their task is to formulate rules of mass spectrometry by induction from empirical data. A strong model of the domain would bias the rules unnecessarily.", "title": "" }, { "docid": "b41c0a4e2a312d74d9a244e01fc76d66", "text": "There is a growing interest in studying the adoption of m-payments but literature on the subject is still in its infancy and no empirical research relating to this has been conducted in the context of the UK to date. 
The aim of this study is to unveil the current situation in m-payment adoption research and provide future research direction through the development of a research model for the examination of factors affecting m-payment adoption in the UK context. Following an extensive search of the literature, this study finds that 186 relationships between independent and dependent variables have been analysed by 32 existing empirical m-payment and m-banking adoption studies. From analysis of these relationships the most significant factors found to influence adoption are uncovered and an extension of UTAUT2 with the addition of perceived risk and trust is proposed to increase the applicability of UTAUT2 to the m-payment context.", "title": "" }, { "docid": "18ef3fbade2856543cae1fcc563c1c43", "text": "This paper highlights the various machine learning techniques adapted so far for identifying different network attacks and suggests a preferable Intrusion Detection System (IDS) given the available system resources, while optimizing speed and accuracy. With a booming number of intruders and hackers in today's vast and sophisticated computerized world, it is unceasingly challenging to identify unknown attacks in promising time with no false positives and no false negatives. Principal Component Analysis (PCA) curtails the amount of data to be compared by reducing their dimensions prior to classification, which results in a reduction of detection time. In this paper, PCA is adopted to reduce a higher dimensional dataset to a lower dimensional one. It is accomplished by converting network packet header fields into a vector and then applying PCA over the high dimensional dataset to reduce its dimension. 
The reduced dimension dataset is tested with Support Vector Machines (SVM), K-Nearest Neighbors (KNN), the J48 Tree algorithm, the Random Forest Tree classification algorithm, the Adaboost algorithm, the Nearest Neighbors generalized Exemplars algorithm, the Naive Bayes probabilistic classifier and the Voting Features Interval classification algorithm. Obtained results demonstrate detection accuracy and computational efficiency with minimal false alarms and less system resource utilization. Experimental results are compared with respect to detection rate and detection time, and it is found that TREE classification algorithms achieved superior results over other algorithms. The whole experiment is conducted using the KDD99 data set.", "title": "" }, { "docid": "fac3285b06bd12db0cef95bb854d4480", "text": "The design of a novel and versatile single-port quad-band patch antenna is presented. The antenna is capable of supporting a maximum of four operational sub-bands, with the inherent capability to enhance or suppress any resonance(s) of interest. In addition, circular-polarisation is also achieved at the low frequency band, to demonstrate the polarisation agility. A prototype model of the antenna has been fabricated and its performance experimentally validated. The antenna's single layer and low-profile configuration makes it suitable for mobile user terminals and its cavity-backed feature results in low levels of coupling.", "title": "" }, { "docid": "fa34cdffb421f2c514d5bacbc6776ae9", "text": "A review on various CMOS voltage level shifters is presented in this paper. A voltage level-shifter shifts the level of input voltage to desired output voltage. Voltage Level Shifter circuits are compared with respect to output voltage level, power consumption and delay. Systems often require voltage level translation devices to allow interfacing between integrated circuit devices built from different voltage technologies. 
The choice of the proper voltage level translation device depends on many factors and will affect the performance and efficiency of the circuit application.", "title": "" }, { "docid": "14ca9dfee206612e36cd6c3b3e0ca61e", "text": "Radio-frequency identification (RFID) technology promises to revolutionize the way we track items in supply chain, retail store, and asset management applications. The size and different characteristics of RFID data pose many interesting challenges in the current data management systems. In this paper, we provide a brief overview of RFID technology and highlight a few of the data management challenges that we believe are suitable topics for exploratory research.", "title": "" } ]
scidocsrr
f0431a47bb75b36308a735769caad188
Stacked convolutional auto-encoders for steganalysis of digital images
[ { "docid": "f0b522d7f3a0eeb6cb951356407cf15a", "text": "Today, the most accurate steganalysis methods for digital media are built as supervised classifiers on feature vectors extracted from the media. The tool of choice for the machine learning seems to be the support vector machine (SVM). In this paper, we propose an alternative and well-known machine learning tool-ensemble classifiers implemented as random forests-and argue that they are ideally suited for steganalysis. Ensemble classifiers scale much more favorably w.r.t. the number of training examples and the feature dimensionality with performance comparable to the much more complex SVMs. The significantly lower training complexity opens up the possibility for the steganalyst to work with rich (high-dimensional) cover models and train on larger training sets-two key elements that appear necessary to reliably detect modern steganographic algorithms. Ensemble classification is portrayed here as a powerful developer tool that allows fast construction of steganography detectors with markedly improved detection accuracy across a wide range of embedding methods. The power of the proposed framework is demonstrated on three steganographic methods that hide messages in JPEG images.", "title": "" }, { "docid": "33069cfad58493e2f2fdd3effcdf0279", "text": "Recent findings [HOT06] have made possible the learning of deep layered hierarchical representations of data mimicking the brain's working. It is hoped that this paradigm will unlock some of the power of the brain and lead to advances towards true AI. In this thesis I implement and evaluate state-of-the-art deep learning models and, using these as building blocks, I investigate the hypothesis that predicting the time-to-time sensory input is a good learning objective. 
I introduce the Predictive Encoder (PE) and show that a simple non-regularized learning rule, minimizing prediction error on natural video patches leads to receptive fields similar to those found in Macaque monkey visual area V1. I scale this model to video of natural scenes by introducing the Convolutional Predictive Encoder (CPE) and show similar results. Both models can be used in deep architectures as a deep learning module.", "title": "" } ]
[ { "docid": "0241cef84d46b942ee32fc7345874b90", "text": "A total of eight appendices (Appendix 1 through Appendix 8) and an associated reference for these appendices have been placed here. In addition, there is currently a search engine located at to assist users in identifying BPR techniques and tools.", "title": "" }, { "docid": "30817500bafa489642779975875e270f", "text": "We consider the hypothesis testing problem of detecting a shift between the means of two multivariate normal distributions in the high-dimensional setting, allowing for the data dimension p to exceed the sample size n. Our contribution is a new test statistic for the two-sample test of means that integrates a random projection with the classical Hotelling T 2 statistic. Working within a high-dimensional framework that allows (p, n) → ∞, we first derive an asymptotic power function for our test, and then provide sufficient conditions for it to achieve greater power than other state-of-the-art tests. Using ROC curves generated from simulated data, we demonstrate superior performance against competing tests in the parameter regimes anticipated by our theoretical results. Lastly, we illustrate an advantage of our procedure with comparisons on a high-dimensional gene expression dataset involving the discrimination of different types of cancer.", "title": "" }, { "docid": "ef1c42ff8348aa9c20a65dafdb98e93e", "text": "This study investigates the influence of online news and clickbait headlines on online users’ emotional arousal and behavior. An experiment was conducted to examine the level of arousal in three online news headline groups—news headlines, clickbait headlines, and control headlines. Arousal was measured by two different measurement approaches—pupillary response recorded by an eye-tracking device and selfassessment manikin (SAM) reported in a survey. 
Overall, the findings suggest that certain clickbait headlines can evoke users’ arousal which subsequently drives intention to read news stories. Arousal scores assessed by the pupillary response and SAM are consistent when the level of emotional arousal is high.", "title": "" }, { "docid": "8d07f52f154f81ce9dedd7c5d7e3182d", "text": "We present a 3D face reconstruction system that takes as input either one single view or several different views. Given a facial image, we first classify the facial pose into one of five predefined poses, then detect two anchor points that are then used to detect a set of predefined facial landmarks. Based on these initial steps, for a single view we apply a warping process using a generic 3D face model to build a 3D face. For multiple views, we apply sparse bundle adjustment to reconstruct 3D landmarks which are used to deform the generic 3D face model. Experimental results on the Color FERET and CMU multi-PIE databases confirm our framework is effective in creating realistic 3D face models that can be used in many computer vision applications, such as 3D face recognition at a distance.", "title": "" }, { "docid": "8d9246e7780770b5f7de9ef0adbab3e6", "text": "This paper proposes a self-adaption Kalman observer (SAKO) used in a permanent-magnet synchronous motor (PMSM) servo system. The proposed SAKO can make up measurement noise of the absolute encoder with limited resolution ratio and avoid differentiating process and filter delay of the traditional speed measuring methods. To be different from the traditional Kalman observer, the proposed observer updates the gain matrix by calculating the measurement noise at the current time. The variable gain matrix is used to estimate and correct the observed position, speed, and load torque to solve the problem that the motor speed calculated by the traditional methods is prone to large speed error and time delay when PMSM runs at low speeds. 
The state variables observed by the proposed observer are used as the speed feedback signals and compensation signal of the load torque disturbance in PMSM servo system. The simulations and experiments prove that the SAKO can observe speed and load torque precisely and timely and that the feedforward and feedback control system of PMSM can improve the speed tracking ability.", "title": "" }, { "docid": "984f7a2023a14efbbd5027abfc12a586", "text": "Name ambiguity stems from the fact that many people or objects share identical names in the real world. Such name ambiguity decreases the performance of document retrieval, Web search, information integration, and may cause confusion in other applications. Due to the same name spellings and lack of information, it is a nontrivial task to distinguish them accurately. In this article, we focus on investigating the problem in digital libraries to distinguish publications written by authors with identical names. We present an effective framework named GHOST (abbreviation for GrapHical framewOrk for name diSambiguaTion), to solve the problem systematically. We devise a novel similarity metric, and utilize only one type of attribute (i.e., coauthorship) in GHOST. Given the similarity matrix, intermediate results are grouped into clusters with a recently introduced powerful clustering algorithm called Affinity Propagation. In addition, as a complementary technique, user feedback can be used to enhance the performance. We evaluated the framework on the real DBLP and PubMed datasets, and the experimental results show that GHOST can achieve both high precision and recall.", "title": "" }, { "docid": "e0d8d1f65424080d538d87564783bdbb", "text": "Many deals that look good on paper never materialize into value-creating endeavors. Often, the problem begins at the negotiating table. In fact, the very person everyone thinks is pivotal to a deal's success--the negotiator--is often the one who undermines it. 
That's because most negotiators have a deal maker mind-set: They see the signed contract as the final destination rather than the start of a cooperative venture. What's worse, most companies reward negotiators on the basis of the number and size of the deals they're signing, giving them no incentive to change. The author asserts that organizations and negotiators must transition from a deal maker mentality--which involves squeezing your counterpart for everything you can get--to an implementation mind-set--which sets the stage for a healthy working relationship long after the ink has dried. Achieving an implementation mind-set demands five new approaches. First, start with the end in mind: Negotiation teams should carry out a \"benefit of hindsight\" exercise to imagine what sorts of problems they'll have encountered 12 months down the road. Second, help your counterpart prepare. Surprise confers advantage only because the other side has no time to think through all the implications of a proposal. If they agree to something they can't deliver, it will affect you both. Third, treat alignment as a shared responsibility. After all, if the other side's interests aren't aligned, it's your problem, too. Fourth, send one unified message. Negotiators should brief implementation teams on both sides together so everyone has the same information. And fifth, manage the negotiation like a business exercise: Combine disciplined negotiation preparation with post-negotiation reviews. Above all, companies must remember that the best deals don't end at the negotiating table--they begin there.", "title": "" }, { "docid": "07a048f6d960a3e11433bd10a4d40836", "text": "This paper presents a survey of topological spatial logics, taking as its point of departure the interpretation of the modal logic S4 due to McKinsey and Tarski. 
We consider the effect of extending this logic with the means to represent topological connectedness, focusing principally on the issue of computational complexity. In particular, we draw attention to the special problems which arise when the logics are interpreted not over arbitrary topological spaces, but over (low-dimensional) Euclidean spaces.", "title": "" }, { "docid": "05941fa5fe1d7728d9bce44f524ff17f", "text": "legend N2D N1D 2LPEG N2D vs. 2LPEG N1D vs. 2LPEG EFFICACY Primary analysis set, n = 275 Primary analysis set, n = 275 Primary analysis set, n = 272 Primary endpoint: Patients with successful overall bowel cleansing efficacy (HCS) [n] 253 (92.0%) 245 (89.1%) 238 (87.5%) -4.00%* [0.055] -6.91%* [0.328] Supportive secondary endpoint: Patients with successful overall bowel cleansing efficacy (BBPS) [n] 249 (90.5%) 243 (88.4%) 232 (85.3%) n.a. n.a. Primary endpoint: Excellent plus Good cleansing rate in colon ascendens (primary analysis set) [n] 87 (31.6%) 93 (33.8%) 41 (15.1%) 8.11%* [<0.001] 10.32%* [<0.001] Key secondary endpoint: Adenoma detection rate, colon ascendens 11.6% 11.6% 8.1% -4.80%; 12.00%** [0.106] -4.80%; 12.00%** [0.106] Key secondary endpoint: Adenoma detection rate, overall colon 26.6% 27.6% 26.8% -8.47%; 8.02%** [0.569] -7.65%; 9.11%** [0.455] Key secondary endpoint: Polyp detection rate, colon ascendens 23.3% 18.6% 16.2% -1.41%; 15.47%** [0.024] -6.12%; 10.82%** [0.268] Key secondary endpoint: Polyp detection rate, overall colon 44.0% 45.1% 44.5% -8.85%; 8.00%** [0.579] –7.78%; 9.09%** [0.478] Compliance rates (min 75% of both doses taken) [n] 235 (85.5%) 233 (84.7%) 245 (90.1%) n.a. n.a. SAFETY Safety set, n = 262 Safety set, n = 269 Safety set, n = 263 All treatment-emergent adverse events [n] 77 89 53 n.a. n.a. Patients with any related treatment-emergent adverse event [n] 30 (11.5%) 40 (14.9%) 20 (7.6%) n.a. n.a. * = 97.5% 1-sided CI; ** = 95% 2-sided CI; n.a. = not applicable. 
United European Gastroenterology Journal 4(5S) A219", "title": "" }, { "docid": "4507c71798a856be64381d7098f30bf4", "text": "Adversarial examples are intentionally crafted data with the purpose of deceiving neural networks into misclassification. When we talk about strategies to create such examples, we usually refer to perturbation-based methods that fabricate adversarial examples by applying invisible perturbations onto normal data. The resulting data reserve their visual appearance to human observers, yet can be totally unrecognizable to DNN models, which in turn leads to completely misleading predictions. In this paper, however, we consider crafting adversarial examples from existing data as a limitation to example diversity. We propose a non-perturbationbased framework that generates native adversarial examples from class-conditional generative adversarial networks. As such, the generated data will not resemble any existing data and thus expand example diversity, raising the difficulty in adversarial defense. We then extend this framework to pre-trained conditional GANs, in which we turn an existing generator into an \"adversarial-example generator\". We conduct experiments on our approach for MNIST and CIFAR10 datasets and have satisfactory results, showing that this approach can be a potential alternative to previous attack strategies.", "title": "" }, { "docid": "6e52471655da243e278f121cd1b12596", "text": "Finite element method (FEM) is a powerful tool in analysis of electrical machines however, the computational cost is high depending on the geometry of analyzed machine. In synchronous reluctance machines (SyRM) with transversally laminated rotors, the anisotropy of magnetic circuit is provided by flux barriers which can be of various shapes. Flux barriers of shape based on Zhukovski's curves seem to provide very good electromagnetic properties of the machine. 
Complex geometry requires a fine mesh which increases computational cost when performing finite element analysis. By using magnetic equivalent circuit (MEC) it is possible to obtain good accuracy at low cost. This paper presents magnetic equivalent circuit of SyRM with new type of flux barriers. Numerical calculation of flux barriers' reluctances will be also presented.", "title": "" }, { "docid": "4aec1d1c4f4ca3990836a5d15fba81c7", "text": "People with higher cognitive ability (or “IQ”) differ from those with lower cognitive ability in a variety of important and unimportant ways. On average, they live longer, earn more, have larger working memories, faster reaction times and are more susceptible to visual illusions (Jensen, 1998). Despite the diversity of phenomena related to IQ, few have attempted to understand—or even describe—its influences on judgment and decision making. Studies on time preference, risk preference, probability weighting, ambiguity aversion, endowment effects, anchoring and other widely researched topics rarely make any reference to the possible effects of cognitive abilities (or cognitive traits). Decision researchers may neglect cognitive ability because they are more interested in the average effect of some experimental manipulation. On this view, individual differences (in intelligence or anything else) are regarded as a nuisance—as just another source of “unexplained” variance. Second, most studies are conducted on college undergraduates, who are widely perceived as fairly homogenous. Third, characterizing performance differences on cognitive tasks requires terms (“IQ” and “aptitudes” and such) that many object to because of their association with discriminatory policies. In short, researchers may be reluctant to study something they do not find interesting, that is not perceived to vary much within the subject pool conveniently obtained, and that will just get them into trouble anyway. 
But as Lubinski and Humphreys (1997) note, a neglected aspect does not cease to operate because it is neglected, and there is no good reason for ignoring the possibility that general intelligence or various more specific cognitive abilities are important causal determinants of decision making. To provoke interest in this", "title": "" }, { "docid": "0bf150f6cd566c31ec840a57d8d2fa55", "text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of querylike programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.", "title": "" }, { "docid": "16118317af9ae39ee95765616c5506ed", "text": "Generative Adversarial Networks (GANs) are shown to be successful at generating new and realistic samples including 3D object models. 
Conditional GAN, a variant of GANs, allows generating samples in given conditions. However, objects generated for each condition are different and it does not allow generation of the same object in different conditions. In this paper, we first adapt conditional GAN, which is originally designed for 2D image generation, to the problem of generating 3D models in different rotations. We then propose a new approach to guide the network to generate the same 3D sample in different and controllable rotation angles (sample pairs). Unlike previous studies, the proposed method does not require modification of the standard conditional GAN architecture and it can be integrated into the training step of any conditional GAN. Experimental results and visual comparison of 3D models show that the proposed method is successful at generating model pairs in different conditions.", "title": "" }, { "docid": "52722e0d7a11f2deccf5dec893a8febb", "text": "With more than 340 million messages that are posted on Twitter every day, the amount of duplicate content as well as the demand for appropriate duplicate detection mechanisms is increasing tremendously. Yet there exists little research that aims at detecting near-duplicate content on microblogging platforms. We investigate the problem of near-duplicate detection on Twitter and introduce a framework that analyzes the tweets by comparing (i) syntactical characteristics, (ii) semantic similarity, and (iii) contextual information. Our framework provides different duplicate detection strategies that, among others, make use of external Web resources which are referenced from microposts. Machine learning is exploited in order to learn patterns that help identifying duplicate content. We put our duplicate detection framework into practice by integrating it into Twinder, a search engine for Twitter streams. An in-depth analysis shows that it allows Twinder to diversify search results and improve the quality of Twitter search. 
We conduct extensive experiments in which we (1) evaluate the quality of different strategies for detecting duplicates, (2) analyze the impact of various features on duplicate detection, (3) investigate the quality of strategies that classify to what exact level two microposts can be considered as duplicates and (4) optimize the process of identifying duplicate content on Twitter. Our results prove that semantic features which are extracted by our framework can boost the performance of detecting duplicates.", "title": "" }, { "docid": "f1a4874767c7b4e0c45a97e516b885d0", "text": "It is proposed to use weighted least-norm solution to avoid joint limits for redundant joint manipulators. A comparison is made with the gradient projection method for avoiding joint limits. While the gradient projection method provides the optimal direction for the joint velocity vector within the null space, its magnitude is not unique and is adjusted by a scalar coefficient chosen by trial and error. It is shown in this paper that one fixed value of the scalar coefficient is not suitable even in a small workspace. The proposed manipulation scheme automatically chooses an appropriate magnitude of the self-motion throughout the workspace. This scheme, unlike the gradient projection method, guarantees joint limit avoidance, and also minimizes unnecessary self-motion. It was implemented and tested for real-time control of a seven-degree-offreedom (7-DOF) Robotics Research Corporation (RRC) manipulator.", "title": "" }, { "docid": "b5fe13becf36cdc699a083b732dc5d6a", "text": "The stability of two-dimensional, linear, discrete systems is examined using the 2-D matrix Lyapunov equation. 
While the existence of a positive definite solution pair to the 2-D Lyapunov equation is sufficient for stability, the paper proves that such existence is not necessary for stability, disproving a long-standing conjecture.", "title": "" }, { "docid": "c7e5a93ecc6714ffbb39809fb64b440c", "text": "This study investigated the role of self-directed learning (SDL) in problem-based learning (PBL) and examined how SDL relates to self-regulated learning (SRL). First, it is explained how SDL is implemented in PBL environments. Similarities between SDL and SRL are highlighted. However, both concepts differ on important aspects. SDL includes an additional premise of giving students a broader role in the selection and evaluation of learning materials. SDL can encompass SRL, but the opposite does not hold. Further, a review of empirical studies on SDL and SRL in PBL was conducted. Results suggested that SDL and SRL are developmental processes, that the “self” aspect is crucial, and that PBL can foster SDL. It is concluded that conceptual clarity of what SDL entails and guidance for both teachers and students can help PBL to bring forth self-directed learners.", "title": "" }, { "docid": "b12b500f7c6ac3166eb4fbdd789196ea", "text": "Theory of Mind (ToM) is the ability to attribute thoughts, intentions and beliefs to others. This involves component processes, including cognitive perspective taking (cognitive ToM) and understanding emotions (affective ToM). This study assessed the distinction and overlap of neural processes involved in these respective components, and also investigated their development between adolescence and adulthood. While data suggest that ToM develops between adolescence and adulthood, these populations have not been compared on cognitive and affective ToM domains. 
Using fMRI with 15 adolescent (aged 11-16 years) and 15 adult (aged 24-40 years) males, we assessed neural responses during cartoon vignettes requiring cognitive ToM, affective ToM or physical causality comprehension (control). An additional aim was to explore relationships between fMRI data and self-reported empathy. Both cognitive and affective ToM conditions were associated with neural responses in the classic ToM network across both groups, although only affective ToM recruited medial/ventromedial PFC (mPFC/vmPFC). Adolescents additionally activated vmPFC more than did adults during affective ToM. The specificity of the mPFC/vmPFC response during affective ToM supports evidence from lesion studies suggesting that vmPFC may integrate affective information during ToM. Furthermore, the differential neural response in vmPFC between adult and adolescent groups indicates developmental changes in affective ToM processing.", "title": "" } ]
scidocsrr
f25a8834dab8ee8f17dcef2f09d5c613
A Tutorial on Deep Learning for Music Information Retrieval
[ { "docid": "e5ec413c71f8f4012a94e20f7a575e68", "text": "It is clear that the learning speed of feedforward neural networks is in general far slower than required and it has been a major bottleneck in their applications for past decades. Two key reasons behind may be: 1) the slow gradient-based learning algorithms are extensively used to train neural networks, and 2) all the parameters of the networks are tuned iteratively by using such learning algorithms. Unlike these traditional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden layer feedforward neural networks (SLFNs) which randomly chooses the input weights and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide the best generalization performance at extremely fast learning speed. The experimental results based on real-world benchmarking function approximation and classification problems including large complex applications show that the new algorithm can produce best generalization performance in some cases and can learn much faster than traditional popular learning algorithms for feedforward neural networks.", "title": "" }, { "docid": "e4197f2d23fdbec9af85954c40ca46da", "text": "In this work we investigate the applicability of unsupervised feature learning methods to the task of automatic genre prediction of music pieces. More specifically we evaluate a framework that recently has been successfully used to recognize objects in images. We first extract local patches from the time-frequency transformed audio signal, which are then pre-processed and used for unsupervised learning of an overcomplete dictionary of local features. For learning we either use a bootstrapped k-means clustering approach or select features randomly. We further extract feature responses in a convolutional manner and train a linear SVM for classification. 
We extensively evaluate the approach on the GTZAN dataset, emphasizing the influence of important design choices such as dimensionality reduction, pooling and patch dimension on the classification accuracy. We show that convolutional extraction of local feature responses is crucial to reach high performance. Furthermore we find that using this approach, simple and fast learning techniques such as k-means or randomly selected features are competitive with previously published results which also learn features from audio signals.", "title": "" } ]
[ { "docid": "66b104459bdfc063cf7559c363c5802f", "text": "We present a new local strategy to solve incremental learning tasks. Applied to Support Vector Machines based on local kernel, it allows to avoid re-learning of all the parameters by selecting a working subset where the incremental learning is performed. Automatic selection procedure is based on the estimation of generalization error by using theoretical bounds that involve the margin notion. Experimental simulation on three typical datasets of machine learning give promising results.", "title": "" }, { "docid": "0188eb4ef8a87b6cee8657018360fa69", "text": "This paper presents a pattern division multiple access (PDMA) concept for cellular future radio access (FRA) towards the 2020s information society. Different from the current LTE radio access scheme (until Release 11), PDMA is a novel non-orthogonal multiple access technology based on the total optimization of multiple user communication system. It considers joint design from both transmitter and receiver. At the receiver, multiple users are detected by successive interference cancellation (SIC) detection method. Numerical results show that the PDMA system based on SIC improve the average sum rate of users over the orthogonal system with affordable complexity.", "title": "" }, { "docid": "cab673895969ded614a4063d19777f4d", "text": "Functional magnetic resonance imaging was used to assess the cortical areas active during the observation of mouth actions performed by humans and by individuals belonging to other species (monkey and dog). Two types of actions were presented: biting and oral communicative actions (speech reading, lip-smacking, barking). As a control, static images of the same actions were shown. 
Observation of biting, regardless of the species of the individual performing the action, determined two activation foci (one rostral and one caudal) in the inferior parietal lobule and an activation of the pars opercularis of the inferior frontal gyrus and the adjacent ventral premotor cortex. The left rostral parietal focus (possibly BA 40) and the left premotor focus were very similar in all three conditions, while the right side foci were stronger during the observation of actions made by conspecifics. The observation of speech reading activated the left pars opercularis of the inferior frontal gyrus, the observation of lip-smacking activated a small focus in the pars opercularis bilaterally, and the observation of barking did not produce any activation in the frontal lobe. Observation of all types of mouth actions induced activation of extrastriate occipital areas. These results suggest that actions made by other individuals may be recognized through different mechanisms. Actions belonging to the motor repertoire of the observer (e.g., biting and speech reading) are mapped on the observer's motor system. Actions that do not belong to this repertoire (e.g., barking) are essentially recognized based on their visual properties. We propose that when the motor representation of the observed action is activated, the observer gains knowledge of the observed action in a personal perspective, while this perspective is lacking when there is no motor activation.", "title": "" }, { "docid": "f6ae71fee81a8560f37cb0dccfd1e3cd", "text": "Linguistic research to date has determined many of the principles that govern the structure of the spatial schemas represented by closed-class forms across the world’s languages. (Contributing to this cumulative understanding have, for example, been Gruber 1965, Fillmore 1968, Leech 1969, Clark 1973, Bennett 1975, Herskovits 1982, Jackendoff 1983, Zubin and Svorou 1984, as well as myself, Talmy 1983, 2000a, 2000b). 
It is now feasible to integrate these principles and to determine the comprehensive system they belong to for spatial structuring in spoken language. The finding here is that this system has three main parts: the componential, the compositional, and the augmentive.", "title": "" }, { "docid": "6e82e635682cf87a84463f01c01a1d33", "text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.", "title": "" }, { "docid": "67dedca1dbdf5845b32c74e17fc42eb6", "text": "How much trust a user places in a recommender is crucial to the uptake of the recommendations. Although prior work established various factors that build and sustain user trust, their comparative impact has not been studied in depth. 
This paper presents the results of a crowdsourced study examining the impact of various recommendation interfaces and content selection strategies on user trust. It evaluates the subjective ranking of nine key factors of trust grouped into three dimensions and examines the differences observed with respect to users' personality traits.", "title": "" }, { "docid": "5cd3abebf4d990bb9196b7019b29c568", "text": "Wearing comfort of clothing is dependent on air permeability, moisture absorbency and wicking properties of fabric, which are related to the porosity of fabric. In this work, a plug-in is developed using Python script and incorporated in Abaqus/CAE for the prediction of porosity of plain weft knitted fabrics. The Plug-in is able to automatically generate 3D solid and multifilament weft knitted fabric models and accurately determine the porosity of fabrics in two steps. In this work, plain weft knitted fabrics made of monofilament, multifilament and spun yarn made of staple fibers were used to evaluate the effectiveness of the developed plug-in. In the case of staple fiber yarn, intra yarn porosity was considered in the calculation of porosity. The first step is to develop a 3D geometrical model of plain weft knitted fabric and the second step is to calculate the porosity of the fabric by using the geometrical parameter of 3D weft knitted fabric model generated in step one. The predicted porosity of plain weft knitted fabric is extracted in the second step and is displayed in the message area. The predicted results obtained from the plug-in have been compared with the experimental results obtained from previously developed models; they agreed well.", "title": "" }, { "docid": "bd3f7e8e4416f67cb6e26ce0575af624", "text": "Soft materials are being adopted in robotics in order to facilitate biomedical applications and in order to achieve simpler and more capable robots. 
One route to simplification is to design the robot's body using `smart materials' that carry the burden of control and actuation. Metamaterials enable just such rational design of the material properties. Here we present a soft robot that exploits mechanical metamaterials for the intrinsic synchronization of two passive clutches which contact its travel surface. Doing so allows it to move through an enclosed passage with an inchworm motion propelled by a single actuator. Our soft robot consists of two 3D-printed metamaterials that implement auxetic and normal elastic properties. The design, fabrication and characterization of the metamaterials are described. In addition, a working soft robot is presented. Since the synchronization mechanism is a feature of the robot's material body, we believe that the proposed design will enable compliant and robust implementations that scale well with miniaturization.", "title": "" }, { "docid": "a2d699f3c600743c732b26071639038a", "text": "A novel rectifying circuit topology is proposed for converting electromagnetic pulse waves (PWs), that are collected by a wideband antenna, into dc voltage. The typical incident signal considered in this paper consists of 10-ns pulses modulated around 2.4 GHz with a repetition period of 100 ns. The proposed rectifying circuit topology comprises a double-current architecture with inductances that collect the energy during the pulse delivery as well as an output capacitance that maintains the dc output voltage between the pulses. Experimental results show that the efficiency of the rectifier reaches 64% for a mean available incident power of 4 dBm. Similar performances are achieved when a wideband antenna is combined with the rectifier in order to realize a rectenna. By increasing the repetition period of the incident PWs to 400 ns, the rectifier still operates with an efficiency of 52% for a mean available incident pulse power of −8 dBm. 
Finally, the proposed PW rectenna is tested for a wireless energy transmission application in a low- $Q$ cavity. The time reversal technique is applied to focus PWs around the desired rectenna. Results show that the rectenna is still efficient when noisy PW is handled.", "title": "" }, { "docid": "829f94e5e649d9b3501953e6b418bc11", "text": "Most modern hypervisors offer powerful resource control primitives such as reservations, limits, and shares for individual virtual machines (VMs). These primitives provide a means to dynamic vertical scaling of VMs in order for the virtual applications to meet their respective service level objectives (SLOs). VMware DRS offers an additional resource abstraction of a resource pool (RP) as a logical container representing an aggregate resource allocation for a collection of VMs. In spite of the abundant research on translating application performance goals to resource requirements, the implementation of VM vertical scaling techniques in commercial products remains limited. In addition, no prior research has studied automatic adjustment of resource control settings at the resource pool level. In this paper, we present AppRM, a tool that automatically sets resource controls for both virtual machines and resource pools to meet application SLOs. AppRM contains a hierarchy of virtual application managers and resource pool managers. At the application level, AppRM translates performance objectives into the appropriate resource control settings for the individual VMs running that application. At the resource pool level, AppRM ensures that all important applications within the resource pool can meet their performance targets by adjusting controls at the resource pool level. Experimental results under a variety of dynamically changing workloads composed by multi-tiered applications demonstrate the effectiveness of AppRM. 
In all cases, AppRM is able to deliver application performance satisfaction without manual intervention.", "title": "" }, { "docid": "4e19a7342ff32f82bc743f40b3395ee3", "text": "The face image is the most accessible biometric modality which is used for highly accurate face recognition systems, while it is vulnerable to many different types of presentation attacks. Face anti-spoofing is a very critical step before feeding the face image to biometric systems. In this paper, we propose a novel two-stream CNN-based approach for face anti-spoofing, by extracting the local features and holistic depth maps from the face images. The local features facilitate CNN to discriminate the spoof patches independent of the spatial face areas. On the other hand, holistic depth map examine whether the input image has a face-like depth. Extensive experiments are conducted on the challenging databases (CASIA-FASD, MSU-USSA, and Replay Attack), with comparison to the state of the art.", "title": "" }, { "docid": "3b549ddb51daba4fa5a0db8fa281ff7e", "text": "We propose a method for learning from streaming visual data using a compact, constant size representation of all the data that was seen until a given moment. Specifically, we construct a “coreset” representation of streaming data using a parallelized algorithm, which is an approximation of a set with relation to the squared distances between this set and all other points in its ambient space. We learn an adaptive object appearance model from the coreset tree in constant time and logarithmic space and use it for object tracking by detection. Our method obtains excellent results for object tracking on three standard datasets over more than 100 videos. The ability to summarize data efficiently makes our method ideally suited for tracking in long videos in presence of space and time constraints. We demonstrate this ability by outperforming a variety of algorithms on the TLD dataset with 2685 frames on average. 
This coreset-based learning approach can be applied to both real-time learning of small, varied data and fast learning of big data.", "title": "" },
GLOX has also been identified in plants. Although widely distributed, only a few examples of characterized GLOX exist. The first characterized fungal GLOX was isolated from Phanerochaete chrysosporium. The GLOX from Ustilago maydis has a role in filamentous growth and pathogenicity. More recently, two other glyoxal oxidases from the fungus Pycnoporus cinnabarinus were also characterized. In plants, GLOX from Vitis pseudoreticulata was found to be implicated in grapevine defence mechanisms. Fungal GLOX were found to be activated by peroxidases in vitro, suggesting a synergistic and regulatory relationship between these enzymes. The substrates oxidized by GLOX are mainly aldehydes generated during lignin and carbohydrate degradation. The reactions catalysed by this enzyme, such as the oxidation of toxic molecules and the production of valuable compounds (organic acids), make GLOX a promising target for biotechnological applications. This aspect of GLOX remains new and needs to be investigated.", "title": "" },
We compare the usual 3 × 3 matrix parameterization with a parameterization that combines 4 fixed points in one of the images with 4 variable points in the other image. We empirically show that this 4-point parameterization is far superior. We also compare both parameterizations with a variety of direct parameterizations. In the case of unknown relative orientation, we compare with a direct parameterization of the plane equation, and the rotation and translation of the camera(s). We show that the direct parameterization is both less accurate and far less robust than the 4-point parameterization. We explain the poor performance using a measure of independence of the Jacobian images. In the fully calibrated setting, the direct parameterization just consists of 3 parameters of the plane equation. We show that this parameterization is far more robust than the 4-point parameterization, but only approximately as accurate. In the case of a moving stereo rig we find that the direct parameterization of the plane equation, camera rotation and translation performs very well, both in terms of accuracy and robustness. This is in contrast to the corresponding direct parameterization in the case of unknown relative orientation. Finally, we illustrate the use of plane estimation in 2 automotive applications.", "title": "" },
The map provides radiation hybrid coverage of 99 percent and physical coverage of 94 percent of the human genome. The map also represents an early step in an international project to generate a transcript map of the human genome, with more than 3235 expressed sequences localized. The STSs in the map provide a scaffold for initiating large-scale sequencing of the human genome.", "title": "" }, { "docid": "2b952c455c9f8daa7f6c0c024620aef8", "text": "Broadband use is booming around the globe as the infrastructure is built to provide high speed Internet and Internet Protocol television (IPTV) services. Driven by fierce competition and the search for increasing average revenue per user (ARPU), operators are evolving so they can deliver services within the home that involve a wide range of technologies, terminals, and appliances, as well as software that is increasingly rich and complex. “It should all work” is the key theme on the end user's mind, yet call centers are confronted with a multitude of consumer problems. The demarcation point between provider network and home network is blurring, in fact, if not yet in the consumer's mind. In this context, operators need to significantly rethink service lifecycle management. This paper explains how home and access support systems cover the most critical part of the network in service delivery. They build upon the inherent operation support features of access multiplexers, network termination devices, and home devices to allow the planning, fulfillment, operation, and assurance of new services.", "title": "" }, { "docid": "8baa6af3ee08029f0a555e4f4db4e218", "text": "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. 
Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question/answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. Our models also obtain competitive results on GEO880 without any dataset-specific engineering.", "title": "" } ]
scidocsrr
886e646e5ea0c0497984ecd7cb60ff9b
Sequence Discriminative Training for Offline Handwriting Recognition by an Interpolated CTC and Lattice-Free MMI Objective Function
[ { "docid": "6dfc558d273ec99ffa7dc638912d272c", "text": "Recurrent neural networks (RNNs) with Long Short-Term memory cells currently hold the best known results in unconstrained handwriting recognition. We show that their performance can be greatly improved using dropout - a recently proposed regularization method for deep architectures. While previous works showed that dropout gave superior performance in the context of convolutional networks, it had never been applied to RNNs. In our approach, dropout is carefully used in the network so that it does not affect the recurrent connections, hence the power of RNNs in modeling sequences is preserved. Extensive experiments on a broad range of handwritten databases confirm the effectiveness of dropout on deep architectures even when the network mainly consists of recurrent and shared connections.", "title": "" }, { "docid": "7f7a67af972d26746ce1ae0c7ec09499", "text": "We describe Microsoft's conversational speech recognition system, in which we combine recent developments in neural-network-based acoustic and language modeling to advance the state of the art on the Switchboard recognition task. Inspired by machine learning ensemble techniques, the system uses a range of convolutional and recurrent neural networks. I-vector modeling and lattice-free MMI training provide significant gains for all acoustic model architectures. Language model rescoring with multiple forward and backward running RNNLMs, and word posterior-based system combination provide a 20% boost. The best single system uses a ResNet architecture acoustic model with RNNLM rescoring, and achieves a word error rate of 6.9% on the NIST 2000 Switchboard task. The combined system has an error rate of 6.2%, representing an improvement over previously reported results on this benchmark task.", "title": "" } ]
[ { "docid": "bedc7de2ede206905e89daf61828f868", "text": "Spectral graph partitioning provides a powerful approach to image segmentation. We introduce an alternate idea that finds partitions with a small isoperimetric constant, requiring solution to a linear system rather than an eigenvector problem. This approach produces the high quality segmentations of spectral methods, but with improved speed and stability.", "title": "" }, { "docid": "2923d1776422a1f44395f169f0d61995", "text": "Rolling upgrade consists of upgrading progressively the servers of a distributed system to reduce service downtime.Upgrading a subset of servers requires a well-engineered cluster membership protocol to maintain, in the meantime, the availability of the system state. Existing cluster membership reconfigurations, like CoreOS etcd, rely on a primary not only for reconfiguration but also for storing information. At any moment, there can be at most one primary, whose replacement induces disruption. We propose Rollup, a non-disruptive rolling upgrade protocol with a fast consensus-based reconfiguration. Rollup relies on a candidate leader only for the reconfiguration and scalable biquorums for service requests. While Rollup implements a non-disruptive cluster membership protocol, it does not offer a full-fledged coordination service. We analyzed Rollup theoretically and experimentally on an isolated network of 26 physical machines and an Amazon EC2 cluster of 59 virtual machines. Our results show an 8-fold speedup compared to a rolling upgrade based on a primary for reconfiguration.", "title": "" }, { "docid": "4d6ca3875418dedcd0b71bc13b1a529d", "text": "Leadership is one of the most discussed and important topics in the social sciences especially in organizational theory and management. Generally leadership is the process of influencing group activities towards the achievement of goals. 
A great deal of research has been conducted in this area. Some researchers investigated whether individual characteristics such as demographics, skills and abilities, and personality traits predict leadership effectiveness. Different theories, leadership styles and models have been propounded to explain the leadership phenomenon and to help leaders influence their followers towards achieving organizational goals. Today, with the changes in organizations and the business environment, leadership styles and theories have changed as well. In this paper, we review the newly emerging leadership theories and styles that respond to the needs of organizations. Leadership styles and theories are examined to gain a deeper understanding of the new trends in leadership and to help managers and organizations choose an appropriate style of leadership. Key words: new emerging styles, new theories, leadership, organization", "title": "" },
We then evaluate the effectiveness of incorporating the Doppler rate in simulations and on two sets of real data.", "title": "" },
This paper first proposes a general framework for video affective content analysis, which includes video content, emotional descriptors, and users' spontaneous nonverbal responses, as well as the relationships between the three. Then, we survey current research in both direct and implicit video affective content analysis, with a focus on direct video affective content analysis. Lastly, we identify several challenges in this field and put forward recommendations for future research.", "title": "" }, { "docid": "97ca52a74f6984cda706b54830c58fd8", "text": "In this paper, we study a novel approach for named entity recognition (NER) and mention detection in natural language processing. Instead of treating NER as a sequence labelling problem, we propose a new local detection approach, which rely on the recent fixed-size ordinally forgetting encoding (FOFE) method to fully encode each sentence fragment and its left/right contexts into a fixed-size representation. Afterwards, a simple feedforward neural network is used to reject or predict entity label for each individual fragment. The proposed method has been evaluated in several popular NER and mention detection tasks, including the CoNLL 2003 NER task and TAC-KBP2015 and TAC-KBP2016 Trilingual Entity Discovery and Linking (EDL) tasks. Our methods have yielded pretty strong performance in all of these examined tasks. This local detection approach has shown many advantages over the traditional sequence labelling methods.", "title": "" }, { "docid": "8eeb8fba948b37b4e9489c472cb1506a", "text": "Total Quality Management (TQM) has become, according to one source, 'as pervasive a part of business thinking as quarterly financial results,' and yet TQM's role as a strategic resource remains virtually unexamined in strategic management research. 
Drawing on the resource approach and other theoretical perspectives, this article examines TQM as a potential source of sustainable competitive advantage, reviews existing empirical evidence, and reports findings from a new empirical study of TQM's performance consequences. The findings suggest that most features generally associated with TQM—such as quality training, process improvement, and benchmarking—do not generally produce advantage, but that certain tacit, behavioral, imperfectly imitable features—such as open culture, employee empowerment, and executive commitment—can produce advantage. The author concludes that these tacit resources, and not TQM tools and techniques, drive TQM success, and that organizations that acquire them can outperform competitors with or without the accompanying TQM ideology.", "title": "" }, { "docid": "dbc57902c0655f1bdb3f7dbdcdb6fd5c", "text": "In this paper, a progressive learning technique for multi-class classification is proposed. This newly developed learning technique is independent of the number of class constraints and it can learn new classes while still retaining the knowledge of previous classes. Whenever a new class (non-native to the knowledge learnt thus far) is encountered, the neural network structure gets remodeled automatically by facilitating new neurons and interconnections, and the parameters are calculated in such a way that it retains the knowledge learnt thus far. This technique is suitable for realworld applications where the number of classes is often unknown and online learning from real-time data is required. The consistency and the complexity of the progressive learning technique are analyzed. Several standard datasets are used to evaluate the performance of the developed technique. A comparative study shows that the developed technique is superior. 
Key Words—Classification, machine learning, multi-class, sequential learning, progressive learning.", "title": "" }, { "docid": "1f4e15c44b4700598701667fa5baaaef", "text": "We present the new HippoCampus micro underwater vehicle, first introduced in [1]. It is designed for monitoring confined fluid volumes. These tightly constrained settings demand agile vehicle dynamics. Moreover, we adapt a robust attitude control scheme for aerial drones to the underwater domain. We demonstrate the performance of the controller with a challenging maneuver. A submerged Furuta pendulum is stabilized by HippoCampus after a swing-up. The experimental results reveal the robustness of the control method, as the system quickly recovers from strong physical disturbances, which are applied to the system.", "title": "" }, { "docid": "fb79df27fa2a5b1af8d292af8d53af6e", "text": "This paper presents a proportional integral derivative (PID) controller with a derivative filter coefficient to control a twin rotor multiple input multiple output system (TRMS), which is a nonlinear system with two degrees of freedom and cross couplings. The mathematical modeling of TRMS is done using MATLAB/Simulink. The simulation results are compared with the results of conventional PID controller. The results of proposed PID controller with derivative filter shows better transient and steady state response as compared to conventional PID controller.", "title": "" }, { "docid": "c6bdd8d88dd2f878ddc6f2e8be39aa78", "text": "A wide variety of non-photorealistic rendering techniques make use of random variation in the placement or appearance of primitives. In order to avoid the \"shower-door\" effect, this random variation should move with the objects in the scene. Here we present coherent noise tailored to this purpose. We compute the coherent noise with a specialized filter that uses the depth and velocity fields of a source sequence. 
The computation is fast and suitable for interactive applications like games.", "title": "" }, { "docid": "2737e9e01f00db8fa568ae1fe5881a5e", "text": "Resonant converters which use a small DC bus capacitor to achieve high power factor are desirable for low cost Inductive Power Transfer (IPT) applications but produce amplitude modulated waveforms which are then present on any coupled load. The modulated coupled voltage produces pulse currents which could be used for battery charging purposes. In order to understand the effects of such pulse charging, two Lithium Iron Phosphate (LiFePO4) batteries underwent 2000 cycles of charge and discharging cycling utilizing both pulse and DC charging profiles. The cycling results show that such pulse charging is comparable to conventional DC charging and may be suitable for low cost battery charging applications without impacting battery life.", "title": "" }, { "docid": "e37805ea3c4e25ab49dc4f7992d8e7c6", "text": "Curriculum learning (CL) or self-paced learning (SPL) represents a recently proposed learning regime inspired by the learning process of humans and animals that gradually proceeds from easy to more complex samples in training. The two methods share a similar conceptual learning paradigm, but differ in specific learning schemes. In CL, the curriculum is predetermined by prior knowledge, and remain fixed thereafter. Therefore, this type of method heavily relies on the quality of prior knowledge while ignoring feedback about the learner. In SPL, the curriculum is dynamically determined to adjust to the learning pace of the leaner. However, SPL is unable to deal with prior knowledge, rendering it prone to overfitting. In this paper, we discover the missing link between CL and SPL, and propose a unified framework named self-paced curriculum leaning (SPCL). SPCL is formulated as a concise optimization problem that takes into account both prior knowledge known before training and the learning progress during training. 
In comparison to human education, SPCL is analogous to an “instructor-student-collaborative” learning mode, as opposed to “instructor-driven” in CL or “student-driven” in SPL. Empirically, we show the advantage of SPCL on two tasks. Curriculum learning (Bengio et al. 2009) and self-paced learning (Kumar, Packer, and Koller 2010) have been attracting increasing attention in the field of machine learning and artificial intelligence. Both learning paradigms are inspired by the learning principle underlying the cognitive process of humans and animals, which generally starts with learning easier aspects of a task and then gradually takes more complex examples into consideration. The intuition can be explained by analogy to human education, in which a pupil is supposed to understand elementary algebra before he or she can learn more advanced algebra topics. This learning paradigm has been empirically demonstrated to be instrumental in avoiding bad local minima and in achieving a better generalization result (Khan, Zhu, and Mutlu 2011; Basu and Christensen 2013; Tang et al. 2012). A curriculum determines a sequence of training samples, which essentially corresponds to a list of samples ranked in ascending order of learning difficulty. A major disparity between curriculum learning (CL) and self-paced learning (SPL) lies in the derivation of the curriculum. In CL, the curriculum is assumed to be given by an oracle beforehand, and remains fixed thereafter. In SPL, the curriculum is dynamically generated by the learner itself, according to what the learner has already learned. The advantage of CL includes the flexibility to incorporate prior knowledge from various sources.
Its drawback stems from the fact that the curriculum design is determined independently of the subsequent learning, which may result in inconsistency between the fixed curriculum and the dynamically learned models. From the optimization perspective, since the learning proceeds iteratively, there is no guarantee that the predetermined curriculum can even lead to a converged solution. SPL, on the other hand, formulates the learning problem as a concise biconvex problem, where the curriculum design is embedded and jointly learned with model parameters. Therefore, the learned model is consistent. However, SPL is limited in incorporating prior knowledge into learning, rendering it prone to overfitting. Ignoring prior knowledge is less reasonable when reliable prior information is available. Since both methods have their advantages, it is difficult to judge which one is better in practice. In this paper, we discover the missing link between CL and SPL. We formally propose a unified framework called Self-paced Curriculum Learning (SPCL). SPCL represents a general learning paradigm that combines the merits of both CL and SPL. On one hand, it inherits and further generalizes the theory of SPL. On the other hand, SPCL addresses the drawback of SPL by introducing a flexible way to incorporate prior knowledge. This paper also discusses concrete implementations within the proposed framework, which can be useful for solving various problems. This paper offers a compelling insight into the relationship between the existing CL and SPL methods. Their relation can be intuitively explained in the context of human education, in which SPCL represents an “instructor-student-collaborative” learning paradigm, as opposed to “instructor-driven” in CL or “student-driven” in SPL. In SPCL, instructors provide prior knowledge on a weak learning sequence of samples, while leaving students the freedom to decide the actual curriculum according to their learning pace.
Since an optimal curriculum for the instructor may not necessarily be optimal for all students, we hypothesize that given reasonable prior knowledge, the curriculum devised by instructors and students together can be expected to be better than the curriculum designed by either party alone. Empirically, we substantiate this hypothesis by demonstrating that the proposed method outperforms both CL and SPL on two tasks. The rest of the paper is organized as follows. We first briefly introduce the background knowledge on CL and SPL. Then we propose the model and the algorithm of SPCL. After that, we discuss concrete implementations of SPCL. The experimental results and conclusions are presented in the last two sections.", "title": "" },
Processing surplus mango fruits into flour to be used as a functional ingredient appears to be a good preservation method to ensure its extended consumption.\n\n\nRESULTS\nIn the present study, the chemical composition, bioactive/antioxidant compounds and functional properties of green and ripe mango (Mangifera indica var. Chokanan) peel and pulp flours were evaluated. Compared to commercial wheat flour, mango flours were significantly low in moisture and protein, but were high in crude fiber, fat and ash content. Mango flour showed a balance between soluble and insoluble dietary fiber proportions, with total dietary fiber content ranging from 3.2 to 5.94 g kg⁻¹. Mango flours exhibited high values for bioactive/antioxidant compounds compared to wheat flour. The water absorption capacity and oil absorption capacity of mango flours ranged from 0.36 to 0.87 g kg⁻¹ and from 0.18 to 0.22 g kg⁻¹, respectively.\n\n\nCONCLUSION\nResults of this study showed mango peel flour to be a rich source of dietary fiber with good antioxidant and functional properties, which could be a useful ingredient for new functional food formulations.", "title": "" }, { "docid": "754115ea561f99d9d185e90b7a67acb3", "text": "The danger of SQL injections has been known for more than a decade but injection attacks have led the OWASP top 10 for years and still are one of the major reasons for devastating attacks on web sites. As about 24% percent of the top 10 million web sites are built upon the content management system WordPress, it's no surprise that content management systems in general and WordPress in particular are frequently targeted. To understand how the underlying security bugs can be discovered and exploited by attackers, 199 publicly disclosed SQL injection exploits for WordPress and its plugins have been analyzed. 
The steps an attacker would take to uncover and utilize these bugs are followed in order to gain access to the underlying database through automated, dynamic vulnerability scanning with well-known, freely available tools. Previous studies have shown that the majority of the security bugs are caused by the same programming errors as 10 years ago and state that the complexity of finding and exploiting them has not increased significantly. Furthermore, they claim that although the complexity has not increased, automated tools still do not detect the majority of bugs. The results of this paper show that tools for automated, dynamic vulnerability scanning only play a subordinate role in developing exploits. The reason for this is that only a small percentage of attack vectors can be found during the detection phase. So even if the complexity of exploiting an attack vector has not increased, this attack vector has to be found in the first place, which is the major challenge for this kind of tool. Therefore, from today's perspective, a combination with manual and/or static analysis is essential when testing for security vulnerabilities.", "title": "" },
It has several therapeutic uses: haemostatic, antidiarrhetic, antiulcer, antimicrobial, antiviral, wound healing, antitumor, anti-inflammatory, antioxidant, etc. Besides these medicinal applications, it is used as a coloring material and varnish, and also has applications in folk magic. These red saps and resins are derived from a number of disparate taxa. Despite its wide uses, little research has been done to know about its true source, quality control and clinical applications. In this review, we have tried to overview different sources of Dragon's blood, its source-wise chemical constituents and therapeutic uses. In addition, an attempt has been made to review the techniques used for its quality control and safety.", "title": "" }, { "docid": "72c79181572c836cb92aac8fe7a14c5d", "text": "When automatic plagiarism detection is carried out considering a reference corpus, a suspicious text is compared to a set of original documents in order to relate the plagiarised text fragments to their potential source. One of the biggest difficulties in this task is to locate plagiarised fragments that have been modified (by rewording, insertion or deletion, for example) from the source text. The definition of proper text chunks as comparison units of the suspicious and original texts is crucial for the success of this kind of application. Our experiments with the METER corpus show that the best results are obtained when considering low level word n-gram comparisons (n = {2, 3}).", "title": "" } ]
scidocsrr
a05a6184d933b9ebb2532954976fe785
Word2Vec and Doc2Vec in Unsupervised Sentiment Analysis of Clinical Discharge Summaries
[ { "docid": "cfbf63d92dfafe4ac0243acdff6cf562", "text": "In this paper we present a linguistic resource for the lexical representation of affective knowledge. This resource (named WORDNET-AFFECT) was developed starting from WORDNET, through a selection and tagging of a subset of synsets representing the affective", "title": "" }, { "docid": "6b693af5ed67feab686a9a92e4329c94", "text": "Physicians and nurses express their judgments and observations towards a patient’s health status in clinical narratives. Thus, their judgments are explicitly or implicitly included in patient records. To get impressions on the current health situation of a patient or on changes in the status, analysis and retrieval of this subjective content is crucial. In this paper, we approach this question as a sentiment analysis problem and analyze the feasibility of assessing these judgments in clinical text by means of general sentiment analysis methods. Specifically, the word usage in clinical narratives and in a general text corpus is compared. The linguistic characteristics of judgments in clinical narratives are collected. Besides, the requirements for sentiment analysis and retrieval from clinical narratives are derived.", "title": "" } ]
[ { "docid": "5394ca3d404c23a03bb123070855bf3c", "text": "UNLABELLED\nA previously characterized rice hull smoke extract (RHSE) was tested for bactericidal activity against Salmonella Typhimurium using the disc-diffusion method. The minimum inhibitory concentration (MIC) value of RHSE was 0.822% (v/v). The in vivo antibacterial activity of RHSE (1.0%, v/v) was also examined in a Salmonella-infected Balb/c mouse model. Mice infected with a sublethal dose of the pathogens were administered intraperitoneally a 1.0% solution of RHSE at four 12-h intervals during the 48-h experimental period. The results showed that RHSE inhibited bacterial growth by 59.4%, 51.4%, 39.6%, and 28.3% compared to 78.7%, 64.6%, 59.2%, and 43.2% inhibition with the medicinal antibiotic vancomycin (20 mg/mL). By contrast, 4 consecutive administrations at 12-h intervals elicited the most effective antibacterial effect of 75.0% and 85.5% growth reduction of the bacteria by RHSE and vancomycin, respectively. The combination of RHSE and vancomycin acted synergistically against the pathogen. The inclusion of RHSE (1.0% v/w) as part of a standard mouse diet fed for 2 wk decreased mortality of 10 mice infected with lethal doses of the Salmonella. Photomicrographs of histological changes in liver tissues show that RHSE also protected the liver against Salmonella-induced pathological necrosis lesions. 
These beneficial results suggest that the RHSE has the potential to complement wood-derived smokes as antimicrobial flavor formulations for application to human foods and animal feeds.\n\n\nPRACTICAL APPLICATION\nThe new antimicrobial and anti-inflammatory rice hull derived liquid smoke has the potential to complement widely used wood-derived liquid smokes as an antimicrobial flavor and health-promoting formulation for application to foods.", "title": "" }, { "docid": "923eee773a2953468bfd5876e0393d4d", "text": "Latent variable time-series models are among the most heavily used tools from machine learning and applied statistics. These models have the advantage of learning latent structure both from noisy observations and from the temporal ordering in the data, where it is assumed that meaningful correlation structure exists across time. A few highly-structured models, such as the linear dynamical system with linear-Gaussian observations, have closed-form inference procedures (e.g. the Kalman Filter), but this case is an exception to the general rule that exact posterior inference in more complex generative models is intractable. Consequently, much work in time-series modeling focuses on approximate inference procedures for one particular class of models. Here, we extend recent developments in stochastic variational inference to develop a ‘black-box’ approximate inference technique for latent variable models with latent dynamical structure. We propose a structured Gaussian variational approximate posterior that carries the same intuition as the standard Kalman filter-smoother but, importantly, permits us to use the same inference approach to approximate the posterior of much more general, nonlinear latent variable generative models. 
We show that our approach recovers accurate estimates in the case of basic models with closed-form posteriors, and more interestingly performs well in comparison to variational approaches that were designed in a bespoke fashion for specific non-conjugate models.", "title": "" }, { "docid": "6c15e15bddca3cf7a197eec0cf560448", "text": "Enterprises and service providers are increasingly looking to global service delivery as a means for containing costs while improving the quality of service delivery. However, it is often difficult to effectively manage the conflicting needs associated with dynamic customer workload, strict service level constraints, and efficient service personnel organization. In this paper we propose a dynamic approach for workload and personnel management, where organization of personnel is dynamically adjusted based upon differences between observed and target service level metrics. Our approach consists of constructing a dynamic service delivery organization and developing a feedback control mechanism for dynamic workload management. We demonstrate the effectiveness of the proposed approach in an IT incident management example designed based on a large service delivery environment handling more than ten thousand service requests over a period of six months.", "title": "" }, { "docid": "b899a5effd239f1548128786d5ae3a8f", "text": "As fault diagnosis and prognosis systems in aerospace applications become more capable, the ability to utilize information supplied by them becomes increasingly important. While certain types of vehicle health data can be effectively processed and acted upon by crew or support personnel, others, due to their complexity or time constraints, require either automated or semi-automated reasoning. 
Prognostics-enabled Decision Making (PDM) is an emerging research area that aims to integrate prognostic health information and knowledge about the future operating conditions into the process of selecting subsequent actions for the system. The newly developed PDM algorithms require suitable software and hardware platforms for testing under realistic fault scenarios. The paper describes the development of such a platform, based on the K11 planetary rover prototype. A variety of injectable fault modes are being investigated for electrical, mechanical, and power subsystems of the testbed, along with methods for data collection and processing. In addition to the hardware platform, a software simulator with matching capabilities has been developed. The simulator allows for prototyping and initial validation of the algorithms prior to their deployment on the K11. The simulator is also available to the PDM algorithms to assist with the reasoning process. A reference set of diagnostic, prognostic, and decision making algorithms is also described, followed by an overview of the current test scenarios and the results of their execution on the simulator. Edward Balaban et.al. This is an open-access article distributed under the terms of the Creative Commons Attribution 3.0 United States License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.", "title": "" }, { "docid": "0591acdb82c352362de74d6daef10539", "text": "In this paper we report on our ongoing studies around the application of Augmented Reality methods to support the order picking process of logistics applications. Order picking is the gathering of goods out of a prepared range of items following some customer orders. We named the visual support of this order picking process using Head-mounted Displays “Pick-by-Vision”. 
This work presents the case study of bringing our previously developed Pick-by-Vision system from the lab to an experimental factory hall to evaluate it under more realistic conditions. This includes the execution of two user studies. In the first one we compared our Pick-by-Vision system with and without tracking to picking using a paper list to check picking performance and quality in general. In a second test we had subjects using the Pick-by-Vision system continuously for two hours to gain in-depth insight into the longer use of our system, checking user strain besides the general performance. Furthermore, we report on the general obstacles of trying to use HMD-based AR in an industrial setup and discuss our observations of user behaviour.", "title": "" }, { "docid": "d194d474676e5ee3113c588de30496c7", "text": "While studies of social movements have mostly examined prevalent public discourses, undercurrents, the backstage practices consisting of meaning-making processes, narratives, and situated work, have received less attention. Through a qualitative interview study with sixteen participants, we examine the role of social media in supporting the undercurrents of the Umbrella Movement in Hong Kong. Interviews focused on an intense period of the movement exemplified by sit-in activities inspired by Occupy Wall Street in the USA. Whereas the use of Facebook for public discourse was similar to what has been reported in other studies, we found that an ecology of social media tools such as Facebook, WhatsApp, Telegram, and Google Docs mediated undercurrents that served to ground the public discourse of the movement. We discuss how the undercurrents sustained and developed public discourses in concrete ways.", "title": "" }, { "docid": "9eedeec21ab380c0466ed7edfe7c745d", "text": "In this paper, we study the effect of using n-grams (sequences of words of length n) for text categorization. 
We use an efficient algorithm for generating such n-gram features in two benchmark domains, the 20 newsgroups data set and 21,578 REUTERS newswire articles. Our results with the rule learning algorithm RIPPER indicate that, after the removal of stop words, word sequences of length 2 or 3 are most useful. Using longer sequences reduces classification performance.", "title": "" }, { "docid": "3299c32ee123e8c5fb28582e5f3a8455", "text": "Software defects, commonly known as bugs, present a serious challenge for system reliability and dependability. Once a program failure is observed, the debugging activities to locate the defects are typically nontrivial and time consuming. In this paper, we propose a novel automated approach to pin-point the root-causes of software failures.\n Our proposed approach consists of three steps. The first step is bug prediction, which leverages the existing work on anomaly-based bug detection as exceptional behavior during program execution has been shown to frequently point to the root cause of a software failure. The second step is bug isolation, which eliminates false-positive bug predictions by checking whether the dynamic forward slices of bug predictions lead to the observed program failure. The last step is bug validation, in which the isolated anomalies are validated by dynamically nullifying their effects and observing if the program still fails. The whole bug prediction, isolation and validation process is fully automated and can be implemented with efficient architectural support. Our experiments with 6 programs and 7 bugs, including a real bug in the gcc 2.95.2 compiler, show that our approach is highly effective at isolating only the relevant anomalies. 
Compared to state-of-art debugging techniques, our proposed approach pinpoints the defect locations more accurately and presents the user with a much smaller code set to analyze.", "title": "" }, { "docid": "7d308c302065253ee1adbffad04ff3f1", "text": "Cloud computing opens a new era in IT as it can provide various elastic and scalable IT services in a pay-as-you-go fashion, where its users can reduce the huge capital investments in their own IT infrastructure. In this philosophy, users of cloud storage services no longer physically maintain direct control over their data, which makes data security one of the major concerns of using cloud. Existing research work already allows data integrity to be verified without possession of the actual data file. When the verification is done by a trusted third party, this verification process is also called data auditing, and this third party is called an auditor. However, such schemes in existence suffer from several common drawbacks. First, a necessary authorization/authentication process is missing between the auditor and cloud service provider, i.e., anyone can challenge the cloud service provider for a proof of integrity of certain file, which potentially puts the quality of the so-called `auditing-as-a-service' at risk; Second, although some of the recent work based on BLS signature can already support fully dynamic data updates over fixed-size data blocks, they only support updates with fixed-sized blocks as basic unit, which we call coarse-grained updates. As a result, every small update will cause re-computation and updating of the authenticator for an entire file block, which in turn causes higher storage and communication overheads. In this paper, we provide a formal analysis for possible types of fine-grained data updates and propose a scheme that can fully support authorized auditing and fine-grained update requests. 
Based on our scheme, we also propose an enhancement that can dramatically reduce communication overheads for verifying small updates. Theoretical analysis and experimental results demonstrate that our scheme can offer not only enhanced security and flexibility, but also significantly lower overhead for big data applications with a large number of frequent small updates, such as applications in social media and business transactions.", "title": "" }, { "docid": "cebc36cd572740069ab22e8181c405c4", "text": "Dealing with high-dimensional input spaces, like visual input, is a challenging task for reinforcement learning (RL). Neuroevolution (NE), used for continuous RL problems, has to either reduce the problem dimensionality by (1) compressing the representation of the neural network controllers or (2) employing a pre-processor (compressor) that transforms the high-dimensional raw inputs into low-dimensional features. In this paper, we are able to evolve extremely small recurrent neural network (RNN) controllers for a task that previously required networks with over a million weights. The high-dimensional visual input, which the controller would normally receive, is first transformed into a compact feature vector through a deep, max-pooling convolutional neural network (MPCNN). Both the MPCNN preprocessor and the RNN controller are evolved successfully to control a car in the TORCS racing simulator using only visual input. This is the first use of deep learning in the context evolutionary RL.", "title": "" }, { "docid": "bd681720305b4dbfca49c3c90ee671be", "text": "This document describes an extension of the One-Time Password (OTP) algorithm, namely the HMAC-based One-Time Password (HOTP) algorithm, as defined in RFC 4226, to support the time-based moving factor. The HOTP algorithm specifies an event-based OTP algorithm, where the moving factor is an event counter. The present work bases the moving factor on a time value. 
A time-based variant of the OTP algorithm provides short-lived OTP values, which are desirable for enhanced security. The proposed algorithm can be used across a wide range of network applications, from remote Virtual Private Network (VPN) access and Wi-Fi network logon to transaction-oriented Web applications. The authors believe that a common and shared algorithm will facilitate adoption of two-factor authentication on the Internet by enabling interoperability across commercial and open-source implementations. (IETF). It represents the consensus of the IETF community. It has received public review and has been approved for publication by the Internet Engineering Steering Group (IESG). Not all documents approved by the IESG are a candidate for any level of Internet Standard; see Section 2 of RFC 5741. Information about the current status of this document, any errata, and how to provide feedback on it may be obtained at in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Simplified BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Simplified BSD License.", "title": "" }, { "docid": "6cabc50fda1107a61c2704c4917b9501", "text": "A vehicle tracking system is very useful for tracking the movement of a vehicle from any location at any time. In this work, real time Google map and Arduino based vehicle tracking system is implemented with Global Positioning System (GPS) and Global system for mobile communication (GSM) technology. GPS module provides geographic coordinates at regular time intervals. Then the GSM module transmits the location of vehicle to cell phone of owner/user in terms of latitude and longitude. At the same time, location is displayed on LCD. 
Finally, Google map displays the location and name of the place on cell phone. Thus, owner/user will be able to continuously monitor a moving vehicle using the cell phone. In order to show the feasibility and effectiveness of the system, this work presents experimental result of the vehicle tracking system. The proposed system is user friendly and ensures safety and surveillance at low maintenance cost.", "title": "" }, { "docid": "1324ee90acbdfe27a14a0d86d785341a", "text": "Though autonomous vehicles are currently operating in several places, many important questions within the field of autonomous vehicle research remain to be addressed satisfactorily. In this paper, we examine the role of communication between pedestrians and autonomous vehicles at unsignalized intersections. The nature of interaction between pedestrians and autonomous vehicles remains mostly in the realm of speculation currently. Of course, pedestrian’s reactions towards autonomous vehicles will gradually change over time owing to habituation, but it is clear that this topic requires urgent and ongoing study, not least of all because engineers require some working model for pedestrian-autonomous-vehicle communication. Our paper proposes a decision-theoretic model that expresses the interaction between a pedestrian and a vehicle. The model considers the interaction between a pedestrian and a vehicle as expressed an MDP, based on prior work conducted by psychologists examining similar experimental conditions. We describe this model and our simulation study of behavior it exhibits. The preliminary results on evaluating the behavior of the autonomous vehicle are promising and we believe it can help reduce the data needed to develop fuller models.", "title": "" }, { "docid": "b6ee2327d8e7de5ede72540a378e69a0", "text": "Heads of Government from Asia and the Pacific have committed to a malaria-free region by 2030. 
In 2015, the total number of confirmed cases reported to the World Health Organization by 22 Asia Pacific countries was 2,461,025. However, this was likely a gross underestimate due in part to incidence data not being available from the wide variety of known sources. There is a recognized need for an accurate picture of malaria over time and space to support the goal of elimination. A survey was conducted to gain a deeper understanding of the collection of malaria incidence data for surveillance by National Malaria Control Programmes in 22 countries identified by the Asia Pacific Leaders Malaria Alliance. In 2015–2016, a short questionnaire on malaria surveillance was distributed to 22 country National Malaria Control Programmes (NMCP) in the Asia Pacific. It collected country-specific information about the extent of inclusion of the range of possible sources of malaria incidence data and the role of the private sector in malaria treatment. The findings were used to produce recommendations for the regional heads of government on improving malaria surveillance to inform regional efforts towards malaria elimination. A survey response was received from all 22 target countries. Most of the malaria incidence data collected by NMCPs originated from government health facilities, while many did not collect comprehensive data from mobile and migrant populations, the private sector or the military. All data from village health workers were included by 10/20 countries and some by 5/20. Other sources of data included by some countries were plantations, police and other security forces, sentinel surveillance sites, research or academic institutions, private laboratories and other government ministries. Malaria was treated in private health facilities in 19/21 countries, while anti-malarials were available in private pharmacies in 16/21 and private shops in 6/21. Most countries use primarily paper-based reporting. 
Most collected malaria incidence data in the Asia Pacific is from government health facilities while data from a wide variety of other known sources are often not included in national surveillance databases. In particular, there needs to be a concerted regional effort to support inclusion of data on mobile and migrant populations and the private sector. There should also be an emphasis on electronic reporting and data harmonization across organizations. This will provide a more accurate and up to date picture of the true burden and distribution of malaria and will be of great assistance in helping realize the goal of malaria elimination in the Asia Pacific by 2030.", "title": "" }, { "docid": "3a95b876619ce4b666278810b80cae77", "text": "On 14 November 2016, northeastern South Island of New Zealand was struck by a major moment magnitude (Mw) 7.8 earthquake. Field observations, in conjunction with interferometric synthetic aperture radar, Global Positioning System, and seismology data, reveal this to be one of the most complex earthquakes ever recorded. The rupture propagated northward for more than 170 kilometers along both mapped and unmapped faults before continuing offshore at the island’s northeastern extent. Geodetic and field observations reveal surface ruptures along at least 12 major faults, including possible slip along the southern Hikurangi subduction interface; extensive uplift along much of the coastline; and widespread anelastic deformation, including the ~8-meter uplift of a fault-bounded block. This complex earthquake defies many conventional assumptions about the degree to which earthquake ruptures are controlled by fault segmentation and should motivate reevaluation of these issues in seismic hazard models.", "title": "" }, { "docid": "0f6dbf39b8e06a768b3d2b769327168d", "text": "In this paper, we focus on how to boost the multi-view clustering by exploring the complementary information among multi-view features. 
A multi-view clustering framework, called Diversity-induced Multi-view Subspace Clustering (DiMSC), is proposed for this task. In our method, we extend the existing subspace clustering into the multi-view domain, and utilize the Hilbert Schmidt Independence Criterion (HSIC) as a diversity term to explore the complementarity of multi-view representations, which could be solved efficiently by using the alternating minimizing optimization. Compared to other multi-view clustering methods, the enhanced complementarity reduces the redundancy between the multi-view representations, and improves the accuracy of the clustering results. Experiments on both image and video face clustering well demonstrate that the proposed method outperforms the state-of-the-art methods.", "title": "" }, { "docid": "5b4045a80ae584050a9057ba32c9296b", "text": "Electro-rheological (ER) fluids are smart fluids which can transform into solid-like phase by applying an electric field. This process is reversible and can be strategically used to build fluidic components for innovative soft robots capable of soft locomotion. In this work, we show the potential applications of ER fluids to build valves that simplify design of fluidic based soft robots. We propose the design and development of a composite ER valve, aimed at controlling the flexibility of soft robots bodies by controlling the ER fluid flow. We present how an ad hoc number of such soft components can be embodied in a simple crawling soft robot (Wormbot); in a locomotion mechanism capable of forward motion through rotation; and, in a tendon driven continuum arm. All these embodiments show how simplification of the hydraulic circuits relies on the simple structure of ER valves. 
Finally, we address preliminary experiments to characterize the behavior of Wormbot in terms of actuation forces.", "title": "" }, { "docid": "190bc8482b4bdc8662be25af68adb2c0", "text": "The goal of all vitreous surgery is to perform the desired intraoperative intervention with minimum collateral damage in the most efficient way possible. An understanding of the principles of fluidics is of importance to all vitreoretinal surgeons to achieve these aims. Advances in technology mean that surgeons are being given increasing choice in the settings they are able to select for surgery. Manufacturers are marketing systems with aspiration driven by peristaltic, Venturi and hybrid pumps. Increasingly fast cut rates are offered with optimised, and in some cases surgeon-controlled, duty cycles. Function-specific cutters are becoming available and narrow-gauge instrumentation is evolving to meet surgeon demands with higher achievable flow rates. In parallel with the developments in outflow technology, infusion systems are advancing with lowering flow resistance and intraocular pressure control to improve fluidic stability during surgery. This review discusses the important aspects of fluidic technology so that surgeons can select the optimum machine parameters to carry out safe and effective surgery.", "title": "" }, { "docid": "43a94e75e054f0245bdfc92c5217ce44", "text": "Fine-grained image categories recognition is a challenging task aiming at distinguishing objects belonging to the same basic-level category, such as leaf or mushroom. It is a useful technique that can be applied for species recognition, face verification, etc. Most of the existing methods have difficulties in automatically detecting discriminative object components. In this paper, we propose a new fine-grained image categorization model that can be deemed as an improved version of spatial pyramid matching (SPM). 
Instead of the conventional SPM that enumeratively conducts cell-to-cell matching between images, the proposed model combines multiple cells into cellets that are highly responsive to object fine-grained categories. In particular, we describe object components by cellets that connect spatially adjacent cells from the same pyramid level. Straightforwardly, image categorization can be cast as the matching between cellets extracted from pairwise images. Toward an effective matching process, a hierarchical sparse coding algorithm is derived that represents each cellet by a linear combination of the basis cellets. Further, a linear discriminant analysis (LDA)-like scheme is employed to select the cellets with high discrimination. On the basis of the feature vector built from the selected cellets, fine-grained image categorization is conducted by training a linear SVM. Experimental results on the Caltech-UCSD birds, the Leeds butterflies, and the COSMIC insects data sets demonstrate our model outperforms the state-of-the-art. Besides, the visualized cellets show discriminative object parts are localized accurately.", "title": "" }, { "docid": "21aedc605ab5c9ef5416091adc407396", "text": "This paper presents the basic results for using the parallel coordinate representation as a high dimensional data analysis tool. Several alternatives are reviewed. The basic algorithm for parallel coordinates is laid out and a discussion of its properties as a projective transformation is shown. Several of the duality results are discussed along with their interpretations as data analysis tools. A discussion of permutations of the parallel coordinate axes is given and some examples are given. Some extensions of the parallel coordinate idea are given. The paper closes with a discussion of implementation and some of our experiences are relayed. 
This research was supported by the Air Force Office of Scientific Research under grant number AFOSR-870179, by the Army Research Office under contract number DAAL03-87-K-0087 and by the National Science Foundation under grant number DMS-8701931. Hyperdimensional Data Analysis Using Parallel Coordinates", "title": "" } ]
scidocsrr
1b77ce3e83e9bfa07c05622e803ebfdf
Mechanical design and basic analysis of a modular robot with special climbing and manipulation functions
[ { "docid": "7eba5af9ca0beaf8cbac4afb45e85339", "text": "This paper is concerned with the derivation of the kinematics model of the University of Tehran-Pole Climbing Robot (UT-PCR). As the first step, an appropriate set of coordinates is selected and used to describe the state of the robot. Nonholonomic constraints imposed by the wheels are then expressed as a set of differential equations. By describing these equations in terms of the state of the robot an underactuated driftless nonlinear control system with affine inputs that governs the motion of the robot is derived. A set of experimental results are also given to show the capability of the UT-PCR in climbing a stepped pole.", "title": "" } ]
[ { "docid": "ddb2ba1118e28acf687208bff99ce53a", "text": "We show that information about social relationships can be used to improve user-level sentiment analysis. The main motivation behind our approach is that users that are somehow \"connected\" may be more likely to hold similar opinions; therefore, relationship information can complement what we can extract about a user's viewpoints from their utterances. Employing Twitter as a source for our experimental data, and working within a semi-supervised framework, we propose models that are induced either from the Twitter follower/followee network or from the network in Twitter formed by users referring to each other using \"@\" mentions. Our transductive learning results reveal that incorporating social-network information can indeed lead to statistically significant sentiment classification improvements over the performance of an approach based on Support Vector Machines having access only to textual features.", "title": "" }, { "docid": "3458fb52eba9aa39896c1d7e3b3dc738", "text": "The rising popularity of Android and the GUI-driven nature of its apps have motivated the need for applicable automated GUI testing techniques. Although exhaustive testing of all possible combinations is the ideal upper bound in combinatorial testing, it is often infeasible, due to the combinatorial explosion of test cases. This paper presents TrimDroid, a framework for GUI testing of Android apps that uses a novel strategy to generate tests in a combinatorial, yet scalable, fashion. It is backed with automated program analysis and formally rigorous test generation engines. TrimDroid relies on program analysis to extract formal specifications. These specifications express the app's behavior (i.e., control flow between the various app screens) as well as the GUI elements and their dependencies. The dependencies among the GUI elements comprising the app are used to reduce the number of combinations with the help of a solver. 
Our experiments have corroborated TrimDroid's ability to achieve a comparable coverage as that possible under exhaustive GUI testing using significantly fewer test cases.", "title": "" }, { "docid": "772df08be1a3c3ea0854603727727c63", "text": "This paper presents a low profile ultrawideband tightly coupled phased array antenna with integrated feedlines. The aperture array consists of planar element pairs with fractal geometry. In each element these pairs are set orthogonal to each other for dual polarisation. The design is an array of closely capacitively coupled pairs of fractal octagonal rings. The adjustment of the capacitive load at the tip end of the elements and the strong mutual coupling between the elements, enables a wideband conformal performance. Adding a ground plane below the array partly compensates for the frequency variation of the array impedance, providing further enhancement in the array bandwidth. Additional improvement is achieved by placing another layer of conductive elements at a defined distance above the radiating elements. A Genetic Algorithm was scripted in MATLAB and combined with the HFSS simulator, providing an easy optimisation tool across the operational bandwidth for the array unit cell design parameters. The proposed antenna shows a wide-scanning ability with a low cross-polarisation level over a wide bandwidth.", "title": "" }, { "docid": "e913a4d2206be999f0278d48caa4708a", "text": "Widespread deployment of the Internet enabled building of an emerging IT delivery model, i.e., cloud computing. Albeit cloud computing-based services have rapidly developed, their security aspects are still at the initial stage of development. In order to preserve cybersecurity in cloud computing, cybersecurity information that will be exchanged within it needs to be identified and discussed. For this purpose, we propose an ontological approach to cybersecurity in cloud computing. 
We build an ontology for cybersecurity operational information based on actual cybersecurity operations mainly focused on non-cloud computing. In order to discuss necessary cybersecurity information in cloud computing, we apply the ontology to cloud computing. Through the discussion, we identify essential changes in cloud computing such as data-asset decoupling and clarify the cybersecurity information required by the changes such as data provenance and resource dependency information.", "title": "" }, { "docid": "56205e79e706e05957cb5081d6a8348a", "text": "Corpus-based set expansion (i.e., finding the “complete” set of entities belonging to the same semantic class, based on a given corpus and a tiny set of seeds) is a critical task in knowledge discovery. It may facilitate numerous downstream applications, such as information extraction, taxonomy induction, question answering, and web search. To discover new entities in an expanded set, previous approaches either make one-time entity ranking based on distributional similarity, or resort to iterative pattern-based bootstrapping. The core challenge for these methods is how to deal with noisy context features derived from free-text corpora, which may lead to entity intrusion and semantic drifting. In this study, we propose a novel framework, SetExpan, which tackles this problem, with two techniques: (1) a context feature selection method that selects clean context features for calculating entity-entity distributional similarity, and (2) a ranking-based unsupervised ensemble method for expanding entity set based on denoised context features. Experiments on three datasets show that SetExpan is robust and outperforms previous state-of-the-art methods in terms of mean average precision.", "title": "" }, { "docid": "0084faef0e08c4025ccb3f8fd50892f1", "text": "Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. 
Security of confidential information has always been a major issue, from past times to the present. It has always been an interesting topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore, from time to time researchers have developed many techniques to fulfill secure transfer of data, and steganography is one of them. In this paper we have proposed a new technique of image steganography, i.e., Hash-LSB with RSA algorithm, for providing more security to data as well as to our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits into the LSB of RGB pixel values of the cover image. This technique makes sure that the message has been encrypted before hiding it into a cover image. If in any case the cipher text is revealed from the cover image, an intermediate person other than the receiver cannot access the message, as it is in encrypted form.", "title": "" }, { "docid": "0f80933b5302bd6d9595234ff8368ac4", "text": "We show how a simple convolutional neural network (CNN) can be trained to accurately and robustly regress 6 degrees of freedom (6DoF) 3D head pose, directly from image intensities. We further explain how this FacePoseNet (FPN) can be used to align faces in 2D and 3D as an alternative to explicit facial landmark detection for these tasks. We claim that in many cases the standard means of measuring landmark detector accuracy can be misleading when comparing different face alignments. Instead, we compare our FPN with existing methods by evaluating how they affect face recognition accuracy on the IJB-A and IJB-B benchmarks: using the same recognition pipeline, but varying the face alignment method. Our results show that (a) better landmark detection accuracy measured on the 300W benchmark does not necessarily imply better face recognition accuracy. (b) Our FPN provides superior 2D and 3D face alignment on both benchmarks. 
Finally, (c), FPN aligns faces at a small fraction of the computational cost of comparably accurate landmark detectors. For many purposes, FPN is thus a far faster and far more accurate face alignment method than using facial landmark detectors.", "title": "" }, { "docid": "b012b434060ccc2c4e8c67d42e43728a", "text": "With rapid development, wireless sensor networks (WSNs) have been focused on improving the performance consist of energy efficiency, communication effectiveness, and system throughput. Many novel mechanisms have been implemented by adapting the social behaviors of natural creatures, such as bats, birds, ants, fish and honeybees. These systems are known as nature inspired systems or swarm intelligence in in order to provide optimization strategies, handle large-scale networks and avoid resource constraints. Spider monkey optimization (SMO) is a recent addition to the family of swarm intelligence algorithms by structuring the social foraging behavior of spider monkeys. In this paper, we aim to study the mechanism of SMO in the field of WSNs, formulating the mathematical model of the behavior patterns which cluster-based Spider Monkey Optimization (SMO-C) approach is adapted. In addition, our proposed methodology based on the Spider Monkey's behavioral structure aims to improve the traditional routing protocols in term of low-energy consumption and system quality of the network.", "title": "" }, { "docid": "71ca5a461ff8eb6fc33c1a272c4acfac", "text": "We introduce a tree manipulation language, Fast, that overcomes technical limitations of previous tree manipulation languages, such as XPath and XSLT which do not support precise program analysis, or TTT and Tiburon which only support trees over finite alphabets. At the heart of Fast is a combination of SMT solvers and tree transducers, enabling it to model programs whose input and output can range over any decidable theory. The language can express multiple applications. 
We write an HTML “sanitizer” in Fast and obtain results comparable to leading libraries but with smaller code. Next we show how augmented reality “tagging” applications can be checked for potential overlap in milliseconds using Fast type checking. We show how transducer composition enables deforestation for improved performance. Overall, we strike a balance between expressiveness and precise analysis that works for a large class of important tree-manipulating programs.", "title": "" }, { "docid": "26ead0555a416c62a2153f29c5d95c25", "text": "BACKGROUND\nAgricultural systems are amended ecosystems with a variety of properties. Modern agroecosystems have tended towards high through-flow systems, with energy supplied by fossil fuels directed out of the system (either deliberately for harvests or accidentally through side effects). In the coming decades, resource constraints over water, soil, biodiversity and land will affect agricultural systems. Sustainable agroecosystems are those tending to have a positive impact on natural, social and human capital, while unsustainable systems feed back to deplete these assets, leaving fewer for the future. Sustainable intensification (SI) is defined as a process or system where agricultural yields are increased without adverse environmental impact and without the conversion of additional non-agricultural land. The concept does not articulate or privilege any particular vision or method of agricultural production. Rather, it emphasizes ends rather than means, and does not pre-determine technologies, species mix or particular design components. The combination of the terms 'sustainable' and 'intensification' is an attempt to indicate that desirable outcomes around both more food and improved environmental goods and services could be achieved by a variety of means. 
Nonetheless, it remains controversial to some.\n\n\nSCOPE AND CONCLUSIONS\nThis review analyses recent evidence of the impacts of SI in both developing and industrialized countries, and demonstrates that both yield and natural capital dividends can occur. The review begins with analysis of the emergence of combined agricultural-environmental systems, the environmental and social outcomes of recent agricultural revolutions, and analyses the challenges for food production this century as populations grow and consumption patterns change. Emergent criticisms are highlighted, and the positive impacts of SI on food outputs and renewable capital assets detailed. It concludes with observations on policies and incentives necessary for the wider adoption of SI, and indicates how SI could both promote transitions towards greener economies as well as benefit from progress in other sectors.", "title": "" }, { "docid": "be9cb16913cabce783a16998fb5023b7", "text": "Unlike conventional hydro and tidal barrage installations, water current turbines in open flow can generate power from flowing water with almost zero environmental impact, over a much wider range of sites than those available for conventional tidal power generation. Recent developments in current turbine design are reviewed and some potential advantages of ducted or “diffuser-augmented” current turbines are explored. These include improved safety, protection from weed growth, increased power output and reduced turbine and gearbox size for a given power output. Ducted turbines are not subject to the so-called Betz limit, which defines an upper limit of 59.3% of the incident kinetic energy that can be converted to shaft power by a single actuator disk turbine in open flow. For ducted turbines the theoretical limit depends on (i) the pressure difference that can be created between duct inlet and outlet, and (ii) the volumetric flow through the duct. 
These factors in turn depend on the shape of the duct and the ratio of duct area to turbine area. Previous investigations by others have found a theoretical limit for a diffuser-augmented wind turbine of about 3.3 times the Betz limit, and a model diffuser-augmented wind turbine has extracted 4.25 times the power extracted by the same turbine without a diffuser. In the present study, similar principles applied to a water turbine have so far achieved an augmentation factor of 3 at an early stage of the investigation.", "title": "" }, { "docid": "102a9eb7ba9f65a52c6983d74120430e", "text": "A key aim of social psychology is to understand the psychological processes through which independent variables affect dependent variables in the social domain. This objective has given rise to statistical methods for mediation analysis. In mediation analysis, the significance of the relationship between the independent and dependent variables has been integral in theory testing, being used as a basis to determine (1) whether to proceed with analyses of mediation and (2) whether one or several proposed mediator(s) fully or partially accounts for an effect. Synthesizing past research and offering new arguments, we suggest that the collective evidence raises considerable concern that the focus on the significance between the independent and dependent variables, both before and after mediation tests, is unjustified and can impair theory development and testing. To expand theory involving social psychological processes, we argue that attention in mediation analysis should be shifted towards assessing the magnitude and significance of indirect effects. Understanding the psychological processes by which independent variables affect dependent variables in the social domain has long been of interest to social psychologists. 
Although moderation approaches can test competing psychological mechanisms (e.g., Petty, 2006; Spencer, Zanna, & Fong, 2005), mediation is typically the standard for testing theories regarding process (e.g., Baron & Kenny, 1986; James & Brett, 1984; Judd & Kenny, 1981; MacKinnon, 2008; MacKinnon, Lockwood, Hoffman, West, & Sheets, 2002; Muller, Judd, & Yzerbyt, 2005; Preacher & Hayes, 2004; Preacher, Rucker, & Hayes, 2007; Shrout & Bolger, 2002). For example, dual process models of persuasion (e.g., Petty & Cacioppo, 1986) often distinguish among competing accounts by measuring the postulated underlying process (e.g., thought favorability, thought confidence) and examining their viability as mediators (Tormala, Briñol, & Petty, 2007). Thus, deciding on appropriate requirements for mediation is vital to theory development. Supporting the high status of mediation analysis in our field, MacKinnon, Fairchild, and Fritz (2007) report that research in social psychology accounts for 34% of all mediation tests in psychology more generally. In our own analysis of journal articles published from 2005 to 2009, we found that approximately 59% of articles in the Journal of Personality and Social Psychology (JPSP) and 65% of articles in Personality and Social Psychology Bulletin (PSPB) included at least one mediation test. Consistent with the observations of MacKinnon et al., we found that the bulk of these analyses continue to follow the causal steps approach outlined by Baron and Kenny (1986). 
Social and Personality Psychology Compass 5/6 (2011): 359–371, 10.1111/j.1751-9004.2011.00355.x © 2011 The Authors, © 2011 Blackwell Publishing Ltd. The current article examines the viability of the causal steps approach in which the significance of the relationship between an independent variable (X) and a dependent variable (Y) is tested both before and after controlling for a mediator (M) in order to examine the validity of a theory specifying mediation. Traditionally, the X → Y relationship is tested prior to mediation to determine whether there is an effect to mediate, and it is also tested after introducing a potential mediator to determine whether that mediator fully or partially accounts for the effect. At first glance, the requirement of a significant X → Y association prior to examining mediation seems reasonable. If there is no significant X → Y relationship, how can there be any mediation of it? Furthermore, the requirement that X → Y become nonsignificant when controlling for the mediator seems sensible in order to claim ‘full mediation'. What is the point of hypothesizing or testing for additional mediators if the inclusion of one mediator renders the initial relationship indistinguishable from zero? Despite the intuitive appeal of these requirements, the present article raises serious concerns about their use.", "title": "" }, { "docid": "5e8f2e9d799b865bb16bd3a68003db73", "text": "A robust road markings detection algorithm is a fundamental component of intelligent vehicles' autonomous navigation in urban environment. This paper presents an algorithm for detecting road markings including zebra crossings, stop lines and lane markings to provide road information for intelligent vehicles. 
First, to eliminate the impact of the perspective effect, an Inverse Perspective Mapping (IPM) transformation is applied to the images grabbed by the camera; the region of interest (ROI) is extracted from the IPM image by low-level processing. Then, different algorithms are adopted to extract zebra crossings, stop lines and lane markings. The experiments on a large number of street scenes in different conditions demonstrate the effectiveness of the proposed algorithm.", "title": "" }, { "docid": "58c238443e7fbe7043cfa4c67b28dbb2", "text": "In the fall of 2013, we offered an open online Introduction to Recommender Systems through Coursera, while simultaneously offering a for-credit version of the course on-campus using the Coursera platform and a flipped classroom instruction model. As the goal of offering this course was to experiment with this type of instruction, we performed extensive evaluation including surveys of demographics, self-assessed skills, and learning intent; we also designed a knowledge-assessment tool specifically for the subject matter in this course, administering it before and after the course to measure learning, and again 5 months later to measure retention. We also tracked students through the course, including separating out students enrolled for credit from those enrolled only for the free, open course.\n Students had significant knowledge gains across all levels of prior knowledge and across all demographic categories. The main predictor of knowledge gain was effort expended in the course. Students also had significant knowledge retention after the course. Both of these results are limited to the sample of students who chose to complete our knowledge tests. Student completion of the course was hard to predict, with few factors contributing predictive power; the main predictor of completion was intent to complete. 
Students who chose a concepts-only track with hand exercises achieved the same level of knowledge of recommender systems concepts as those who chose a programming track and its added assignments, though the programming students gained additional programming knowledge. Based on the limited data we were able to gather, face-to-face students performed as well as the online-only students or better; they preferred this format to traditional lecture for reasons ranging from pure convenience to the desire to watch videos at a different pace (slower for English language learners; faster for some native English speakers). This article also includes our qualitative observations, lessons learned, and future directions.", "title": "" }, { "docid": "f69b170e9ccd7f04cbc526373b0ad8ee", "text": "meaning (overall M = 5.89) and significantly higher than with any of the other three abstract meanings (overall M = 2.05, all ps < .001). Procedure. Under a cover story of studying advertising slogans, participants saw one of the 22 target brands and thought about its abstract concept in memory. They were then presented, on a single screen, with four alternative slogans (in random order) for the target brand and were asked to rank the slogans, from 1 (“best”) to 4 (“worst”), in terms of how well the slogan fits the image of the target brand. Each slogan was intended to distinctively communicate the abstract meaning associated with one of the four high-levelmeaning associated with one of the four high-level brand value dimensions uncovered in the pilot study. After a series of filler tasks, participants indicated their attitude toward the brand on a seven-point scale (1 = “very unfavorable,” and 7 = “very favorable”). Ranking of the slogans. We conducted separate nonparametric Kruskal-Wallis tests on each country’s data to evaluate differences in the rank order for each of the four slogans among the four types of brand concepts. 
In all countries, the tests were significant (the United States: all χ²(3, N = 539) ≥ 145.4, all ps < .001; China: all χ²(3, N = 208) ≥ 52.8, all ps < .001; Canada: all χ²(3, N = 380) ≥ 33.3, all ps < .001; Turkey: all χ²(3, N = 380) ≥ 51.0, all ps < .001). We pooled the data from the four countries and conducted follow-up tests to evaluate pairwise differences in the rank order of each slogan among the four brand concepts, controlling for Type I error across tests using the Bonferroni approach. The results of these tests indicated that each slogan was ranked at the top in terms of favorability when it matched the brand concept (self-enhancement brand concept: M(self-enhancement slogan) = 1.77; openness brand [Figure 2: Structural Relations Among Value Dimensions from Multidimensional Scaling (Pilot: Study 1); b = benevolence, t = tradition, c = conformity, sec = security; axes: Self-Enhancement, Individual Concerns, Collective Concerns]", "title": "" }, { "docid": "badb04b676d3dab31024e8033fc8aec4", "text": "Review was undertaken from February 1969 to January 1998 at the State forensic science center (Forensic Science) in Adelaide, South Australia, of all cases of murder-suicide involving children <16 years of age. A total of 13 separate cases were identified involving 30 victims, all of whom were related to the perpetrators. There were 7 male and 6 female perpetrators (age range, 23-41 years; average, 31 years) consisting of 6 mothers, 6 father/husbands, and 1 uncle/son-in-law. The 30 victims consisted of 11 daughters, 11 sons, 1 niece, 1 mother-in-law, and 6 wives of the assailants. The 23 children were aged from 10 months to 15 years (average, 6.0 years). The 6 mothers murdered 9 children and no spouses, with 3 child survivors. The 6 fathers murdered 13 children and 6 wives, with 1 child survivor. This study has demonstrated a higher percentage of female perpetrators than other studies of murder-suicide. 
The methods of homicide and suicide used were generally less violent among the female perpetrators compared with male perpetrators. Fathers killed not only their children but also their wives, whereas mothers murdered only their children. These results suggest differences between murder-suicides that involve children and adult-only cases, and between cases in which the mother rather than the father is the perpetrator.", "title": "" }, { "docid": "e222cbd0d62e4a323feb7c57bc3ff7a3", "text": "Facebook and other social media have been hailed as delivering the promise of new, socially engaged educational experiences for students in undergraduate, self-directed, and other educational sectors. A theoretical and historical analysis of these media in the light of earlier media transformations, however, helps to situate and qualify this promise. Specifically, the analysis of dominant social media presented here questions whether social media platforms satisfy a crucial component of learning – fostering the capacity for debate and disagreement. By using the analytical frame of media theorist Raymond Williams, with its emphasis on the influence of advertising in the content and form of television, we weigh the conditions of dominant social networking sites as constraints for debate and therefore learning. Accordingly, we propose an update to Williams’ erudite work that is in keeping with our findings. Williams’ critique focuses on the structural characteristics of sequence, rhythm, and flow of television as a cultural form. Our critique proposes the terms information design, architecture, and above all algorithm, as structural characteristics that similarly apply to the related but contemporary cultural form of social networking services. 
Illustrating the ongoing salience of media theory and history for research in e-learning, the article updates Williams’ work while leveraging it in a critical discussion of the suitability of commercial social media for education.", "title": "" }, { "docid": "3fae9d0778c9f9df1ae51ad3b5f62a05", "text": "This paper argues for the utility of back-end driven onloading to the edge as a way to address bandwidth use and latency challenges for future device-cloud interactions. Supporting such edge functions (EFs) requires solutions that can provide (i) fast and scalable EF provisioning and (ii) strong guarantees for the integrity of the EF execution and confidentiality of the state stored at the edge. In response to these goals, we (i) present a detailed design space exploration of the current technologies that can be leveraged in the design of edge function platforms (EFPs), (ii) develop a solution to address security concerns of EFs that leverages emerging hardware support for OS agnostic trusted execution environments such as Intel SGX enclaves, and (iii) propose and evaluate AirBox, a platform for fast, scalable and secure onloading of edge functions.", "title": "" }, { "docid": "25d25da610b4b3fe54b665d55afc3323", "text": "We address the problem of vision-based navigation in busy inner-city locations, using a stereo rig mounted on a mobile platform. In this scenario semantic information becomes important: rather than modelling moving objects as arbitrary obstacles, they should be categorised and tracked in order to predict their future behaviour. To this end, we combine classical geometric world mapping with object category detection and tracking. Object-category specific detectors serve to find instances of the most important object classes (in our case pedestrians and cars). Based on these detections, multi-object tracking recovers the objects’ trajectories, thereby making it possible to predict their future locations, and to employ dynamic path planning. 
The approach is evaluated on challenging, realistic video sequences recorded at busy inner-city locations.", "title": "" } ]
scidocsrr
401343e29ba8e7ed73f7d2aaa811afdd
Resilience: The emergence of a perspective for social–ecological systems analyses
[ { "docid": "5eb526843c41d2549862b60c17110b5b", "text": "■ Abstract We explore the social dimension that enables adaptive ecosystem-based management. The review concentrates on experiences of adaptive governance of socialecological systems during periods of abrupt change (crisis) and investigates social sources of renewal and reorganization. Such governance connects individuals, organizations, agencies, and institutions at multiple organizational levels. Key persons provide leadership, trust, vision, meaning, and they help transform management organizations toward a learning environment. Adaptive governance systems often self-organize as social networks with teams and actor groups that draw on various knowledge systems and experiences for the development of a common understanding and policies. The emergence of “bridging organizations” seem to lower the costs of collaboration and conflict resolution, and enabling legislation and governmental policies can support self-organization while framing creativity for adaptive comanagement efforts. A resilient social-ecological system may make use of crisis as an opportunity to transform into a more desired state.", "title": "" } ]
[ { "docid": "832305d62b48e316d82efc62fc390359", "text": "Bluetooth Low Energy (BLE) is ideally suited to exchange information between mobile devices and Internet-of-Things (IoT) sensors. It is supported by most recent consumer mobile devices and can be integrated into sensors enabling them to exchange information in an energy-efficient manner. However, when BLE is used to access or modify sensitive sensor parameters, exchanged messages need to be suitably protected, which may not be possible with the security mechanisms defined in the BLE specification. Consequently we contribute BALSA, a set of cryptographic protocols, a BLE service and a suggested usage architecture aiming to provide a suitable level of security. In this paper we define and analyze these components and describe our proof-of-concept, which demonstrates the feasibility and benefits of BALSA.", "title": "" }, { "docid": "f466a283f1073569ca31e43ebffcda7d", "text": "Facial alignment involves finding a set of landmark points on an image with a known semantic meaning. However, this semantic meaning of landmark points is often lost in 2D approaches where landmarks are either moved to visible boundaries or ignored as the pose of the face changes. In order to extract consistent alignment points across large poses, the 3D structure of the face must be considered in the alignment step. However, extracting a 3D structure from a single 2D image usually requires alignment in the first place. We present our novel approach to simultaneously extract the 3D shape of the face and the semantically consistent 2D alignment through a 3D Spatial Transformer Network (3DSTN) to model both the camera projection matrix and the warping parameters of a 3D model. By utilizing a generic 3D model and a Thin Plate Spline (TPS) warping function, we are able to generate subject specific 3D shapes without the need for a large 3D shape basis. 
In addition, our proposed network can be trained in an end-to-end framework on entirely synthetic data from the 300W-LP dataset. Unlike other 3D methods, our approach only requires one pass through the network, resulting in faster than real-time alignment. Evaluations of our model on the Annotated Facial Landmarks in the Wild (AFLW) and AFLW2000-3D datasets show our method achieves state-of-the-art performance over other 3D approaches to alignment.", "title": "" }, { "docid": "da8e0706b5ca5b7d391a07d443edc0cf", "text": "The Web has become an excellent source for gathering consumer opinions. There are now numerous Web sources containing such opinions, e.g., product reviews, forums, discussion groups, and blogs. Techniques are now being developed to exploit these sources to help organizations and individuals to gain such important information easily and quickly. In this paper, we first discuss several aspects of the problem in the AI context, and then present some results of our existing work published in KDD-04 and WWW-05.", "title": "" }, { "docid": "2bdefbc66ae89ce8e48acf0d13041e0a", "text": "We introduce an ac transconductance dispersion method (ACGD) to profile the oxide traps in a MOSFET without needing a body contact. The method extracts the spatial distribution of oxide traps from the frequency dependence of transconductance, which is attributed to charge trapping as modulated by an ac gate voltage. The results from this method have been verified by the use of the multifrequency charge pumping (MFCP) technique. In fact, this method complements the MFCP technique in terms of the trap depth that each method is capable of probing. 
We will demonstrate the method with InP passivated InGaAs substrates, along with electrically stressed Si N-MOSFETs.", "title": "" }, { "docid": "e0d3a7e7e000c6704518763bf8dff8c8", "text": "Integration of optical communication circuits directly into high-performance microprocessor chips can enable extremely powerful computer systems. A germanium photodetector that can be monolithically integrated with silicon transistor technology is viewed as a key element in connecting chip components with infrared optical signals. Such a device should have the capability to detect very-low-power optical signals at very high speed. Although germanium avalanche photodetectors (APD) using charge amplification close to avalanche breakdown can achieve high gain and thus detect low-power optical signals, they are universally considered to suffer from an intolerably high amplification noise characteristic of germanium. High gain with low excess noise has been demonstrated using a germanium layer only for detection of light signals, with amplification taking place in a separate silicon layer. However, the relatively thick semiconductor layers that are required in such structures limit APD speeds to about 10 GHz, and require excessively high bias voltages of around 25 V (ref. 12). Here we show how nanophotonic and nanoelectronic engineering aimed at shaping optical and electrical fields on the nanometre scale within a germanium amplification layer can overcome the otherwise intrinsically poor noise characteristics, achieving a dramatic reduction of amplification noise by over 70 per cent. By generating strongly non-uniform electric fields, the region of impact ionization in germanium is reduced to just 30 nm, allowing the device to benefit from the noise reduction effects that arise at these small distances. Furthermore, the smallness of the APDs means that a bias voltage of only 1.5 V is required to achieve an avalanche gain of over 10 dB with operational speeds exceeding 30 GHz. 
Monolithic integration of such a device into computer chips might enable applications beyond computer optical interconnects—in telecommunications, secure quantum key distribution, and subthreshold ultralow-power transistors.", "title": "" }, { "docid": "e38f29a603fb23544ea2fcae04eb1b5d", "text": "Provenance refers to the entire amount of information, comprising all the elements and their relationships, that contribute to the existence of a piece of data. The knowledge of provenance data allows a great number of benefits such as verifying a product, result reproductivity, sharing and reuse of knowledge, or assessing data quality and validity. With such tangible benefits, it is no wonder that in recent years, research on provenance has grown exponentially, and has been applied to a wide range of different scientific disciplines. Some years ago, managing and recording provenance information were performed manually. Given the huge volume of information available nowadays, the manual performance of such tasks is no longer an option. The problem of systematically performing tasks such as the understanding, capture and management of provenance has gained significant attention by the research community and industry over the past decades. As a consequence, there has been a huge amount of contributions and proposed provenance systems as solutions for performing such kinds of tasks. The overall objective of this paper is to plot the landscape of published systems in the field of provenance, with two main purposes. First, we seek to evaluate the desired characteristics that provenance systems are expected to have. Second, we aim at identifying a set of representative systems (both early and recent use) to be exhaustively analyzed according to such characteristics. In particular, we have performed a systematic literature review of studies, identifying a comprehensive set of 105 relevant resources in all. 
The results show that there are common aspects or characteristics of provenance systems thoroughly renowned throughout the literature on the topic. Based on these results, we have defined a six-dimensional taxonomy of provenance characteristics attending to: general aspects, data capture, data access, subject, storage, and non-functional aspects. Additionally, the study has found that there are 25 most referenced provenance systems within the provenance context. This study exhaustively analyzes and compares such systems attending to our taxonomy and pinpoints future directions.", "title": "" }, { "docid": "cdf313ff69ebd11b360cd5e3b3942580", "text": "This paper presents, for the first time, a novel pupil detection method for near-infrared head-mounted cameras, which relies not only on image appearance to pursue the shape and gradient variation of the pupil contour, but also on structure principle to explore the mechanism of pupil projection. There are three main characteristics in the proposed method. First, in order to complement the pupil projection information, an eyeball center calibration method is proposed to build an eye model. Second, by utilizing the deformation model of pupils under head-mounted cameras and the edge gradients of a circular pattern, we find the best fitting ellipse describing the pupil boundary. Third, an eye-model-based pupil fitting algorithm with only three parameters is proposed to fine-tune the final pupil contour. Consequently, the proposed method extracts the geometry-appearance information, effectively boosting the performance of pupil detection. Experimental results show that this method outperforms the state-of-the-art ones. 
On a widely used public database (LPW), our method achieves 72.62% in terms of detection rate up to an error of five pixels, which is superior to the previous best one.", "title": "" }, { "docid": "fbce6308301306e0ef5877b192281a95", "text": "AIM\nThe aim of this paper is to distinguish the integrative review method from other review methods and to propose methodological strategies specific to the integrative review method to enhance the rigour of the process.\n\n\nBACKGROUND\nRecent evidence-based practice initiatives have increased the need for and the production of all types of reviews of the literature (integrative reviews, systematic reviews, meta-analyses, and qualitative reviews). The integrative review method is the only approach that allows for the combination of diverse methodologies (for example, experimental and non-experimental research), and has the potential to play a greater role in evidence-based practice for nursing. With respect to the integrative review method, strategies to enhance data collection and extraction have been developed; however, methods of analysis, synthesis, and conclusion drawing remain poorly formulated.\n\n\nDISCUSSION\nA modified framework for research reviews is presented to address issues specific to the integrative review method. Issues related to specifying the review purpose, searching the literature, evaluating data from primary sources, analysing data, and presenting the results are discussed. 
Data analysis methods of qualitative research are proposed as strategies that enhance the rigour of combining diverse methodologies as well as empirical and theoretical sources in an integrative review.\n\n\nCONCLUSION\nAn updated integrative review method has the potential to allow for diverse primary research methods to become a greater part of evidence-based practice initiatives.", "title": "" }, { "docid": "f6de868d9d3938feb7c33f082dddcdc0", "text": "The proliferation of wearable devices, e.g., smartwatches and activity trackers, with embedded sensors has already shown its great potential on monitoring and inferring human daily activities. This paper reveals a serious security breach of wearable devices in the context of divulging secret information (i.e., key entries) while people accessing key-based security systems. Existing methods of obtaining such secret information relies on installations of dedicated hardware (e.g., video camera or fake keypad), or training with labeled data from body sensors, which restrict use cases in practical adversary scenarios. In this work, we show that a wearable device can be exploited to discriminate mm-level distances and directions of the user's fine-grained hand movements, which enable attackers to reproduce the trajectories of the user's hand and further to recover the secret key entries. In particular, our system confirms the possibility of using embedded sensors in wearable devices, i.e., accelerometers, gyroscopes, and magnetometers, to derive the moving distance of the user's hand between consecutive key entries regardless of the pose of the hand. Our Backward PIN-Sequence Inference algorithm exploits the inherent physical constraints between key entries to infer the complete user key entry sequence. Extensive experiments are conducted with over 5000 key entry traces collected from 20 adults for key-based security systems (i.e. ATM keypads and regular keyboards) through testing on different kinds of wearables. 
Results demonstrate that such a technique can achieve 80% accuracy with only one try and more than 90% accuracy with three tries, which, to our knowledge, is the first technique that reveals personal PINs leveraging wearable devices without the need for labeled training data and contextual information.", "title": "" }, { "docid": "2d5d72944f12446a93e63f53ffce7352", "text": "Standardization of transanal total mesorectal excision requires the delineation of the principal procedural components before implementation in practice. This technique is a bottom-up approach to a proctectomy with the goal of a complete mesorectal excision for optimal outcomes of oncologic treatment. A detailed stepwise description of the approach with technical pearls is provided to optimize one's understanding of this technique and contribute to reducing the inherent risk of beginning a new procedure. Surgeons should be trained according to standardized pathways including online preparation, observational or hands-on courses as well as the potential for proctorship of early cases experiences. Furthermore, technological pearls with access to the "video-in-photo" (VIP) function allow surgeons to link some of the images in this article to operative demonstrations of certain aspects of this technique.", "title": "" }, { "docid": "7c2d0b382685ac7e85c978ece31251d7", "text": "Given an edge-weighted graph G with a set $$Q$$ of k terminals, a mimicking network is a graph with the same set of terminals that exactly preserves the size of minimum cut between any partition of the terminals. A natural question in the area of graph compression is to provide as small mimicking networks as possible for input graph G being either an arbitrary graph or coming from a specific graph class. We show an exponential lower bound for cut mimicking networks in planar graphs: there are edge-weighted planar graphs with k terminals that require $$2^{k-2}$$ edges in any mimicking network.
This nearly matches an upper bound of $$\\mathcal {O}(k 2^{2k})$$ of Krauthgamer and Rika (in: Khanna (ed) Proceedings of the twenty-fourth annual ACM-SIAM symposium on discrete algorithms, SODA 2013, New Orleans, 2013) and is in sharp contrast with the upper bounds of $$\\mathcal {O}(k^2)$$ and $$\\mathcal {O}(k^4)$$ under the assumption that all terminals lie on a single face (Goranci et al., in: Pruhs and Sohler (eds) 25th Annual European symposium on algorithms (ESA 2017), 2017, arXiv:1702.01136; Krauthgamer and Rika in Refined vertex sparsifiers of planar graphs, 2017, arXiv:1702.05951). As a side result we show a tight example for double-exponential upper bounds given by Hagerup et al. (J Comput Syst Sci 57(3):366–375, 1998), Khan and Raghavendra (Inf Process Lett 114(7):365–371, 2014), and Chambers and Eppstein (J Gr Algorithms Appl 17(3):201–220, 2013).", "title": "" }, { "docid": "ad9f3510ffaf7d0bdcf811a839401b83", "text": "The stator permanent magnet (PM) machines have simple and robust rotor structure as well as high torque density. The hybrid excitation topology can realize flux regulation and wide constant power operating capability of the stator PM machines when used in dc power systems. This paper compares and analyzes the electromagnetic performance of different hybrid excitation stator PM machines according to different combination modes of PMs, excitation winding, and iron flux bridge. Then, the control strategies for voltage regulation of dc power systems are discussed based on different critical control variables including the excitation current, the armature current, and the electromagnetic torque. Furthermore, an improved direct torque control (DTC) strategy is investigated to improve system performance.
A parallel hybrid excitation flux-switching generator employing the improved DTC which shows excellent dynamic and steady-state performance has been achieved experimentally.", "title": "" }, { "docid": "136278bd47962b54b644a77bbdaf77e3", "text": "In this paper, we consider the grayscale template-matching problem, invariant to rotation, scale, translation, brightness and contrast, without previous operations that discard grayscale information, like detection of edges, detection of interest points or segmentation/binarization of the images. The obvious “brute force” solution performs a series of conventional template matchings between the image to analyze and the template query shape rotated by every angle, translated to every position and scaled by every factor (within some specified range of scale factors). Clearly, this takes too long and thus is not practical. We propose a technique that substantially accelerates this searching, while obtaining the same result as the original brute force algorithm. In some experiments, our algorithm was 400 times faster than the brute force algorithm. Our algorithm consists of three cascaded filters. These filters successively exclude pixels that have no chance of matching the template from further processing.", "title": "" }, { "docid": "d657085072f829db812a2735d0e7f41c", "text": "Recently, increasing attention has been drawn to training semantic segmentation models using synthetic data and computer-generated annotation. However, domain gap remains a major barrier and prevents models learned from synthetic data from generalizing well to real-world applications. In this work, we take the advantage of additional geometric information from synthetic data, a powerful yet largely neglected cue, to bridge the domain gap. Such geometric information can be generated easily from synthetic data, and is proven to be closely coupled with semantic information. 
With the geometric information, we propose a model to reduce domain shift on two levels: on the input level, we augment the traditional image translation network with the additional geometric information to translate synthetic images into realistic styles; on the output level, we build a task network which simultaneously performs depth estimation and semantic segmentation on the synthetic data. Meanwhile, we encourage the network to preserve the correlation between depth and semantics by adversarial training on the output space. We then validate our method on two pairs of synthetic to real dataset: Virtual KITTI→KITTI, and SYNTHIA→Cityscapes, where we achieve a significant performance gain compared to the non-adaptive baseline and methods without using geometric information. This demonstrates the usefulness of geometric information from synthetic data for cross-domain semantic segmentation.", "title": "" }, { "docid": "c45d911aea9d06208a4ef273c9ab5ff3", "text": "A wide range of research has used face data to estimate a person's engagement, in applications from advertising to student learning. An interesting and important question not addressed in prior work is if face-based models of engagement are generalizable and context-free, or do engagement models depend on context and task. This research shows that context-sensitive face-based engagement models are more accurate, at least in the space of web-based tools for trauma recovery. Estimating engagement is important as various psychological studies indicate that engagement is a key component to measure the effectiveness of treatment and can be predictive of behavioral outcomes in many applications. In this paper, we analyze user engagement in a trauma-recovery regime during two separate modules/tasks: relaxation and triggers. The dataset comprises of 8M+ frames from multiple videos collected from 110 subjects, with engagement data coming from 800+ subject self-reports. 
We build an engagement prediction model as sequence learning from facial Action Units (AUs) using Long Short Term Memory (LSTMs). Our experiments demonstrate that engagement prediction is contextual and depends significantly on the allocated task. Models trained to predict engagement on one task are only weak predictors for another and are much less accurate than context-specific models. Further, we show the interplay of subject mood and engagement using a very short version of Profile of Mood States (POMS) to extend our LSTM model.", "title": "" }, { "docid": "aea5591e815e61ba01914a84eda8b6af", "text": "We present a bitsliced implementation of AES encryption in counter mode for 64-bit Intel processors. Running at 7.59 cycles/byte on a Core 2, it is up to 25% faster than previous implementations, while simultaneously offering protection against timing attacks. In particular, it is the only cache-timing-attack resistant implementation offering competitive speeds for stream as well as for packet encryption: for 576-byte packets, we improve performance over previous bitsliced implementations by more than a factor of 2. We also report more than 30% improved speeds for lookup-table based Galois/Counter mode authentication, achieving 10.68 cycles/byte for authenticated encryption. Furthermore, we present the first constant-time implementation of AES-GCM that has a reasonable speed of 21.99 cycles/byte, thus offering a full suite of timing-analysis resistant software for authenticated encryption.", "title": "" }, { "docid": "940e7dc630b7dcbe097ade7abb2883a4", "text": "Modern object detection methods typically rely on bounding box proposals as input. While initially popularized in the 2D case, this idea has received increasing attention for 3D bounding boxes. Nevertheless, existing 3D box proposal techniques all assume having access to depth as input, which is unfortunately not always available in practice. 
In this paper, we therefore introduce an approach to generating 3D box proposals from a single monocular RGB image. To this end, we develop an integrated, fully differentiable framework that inherently predicts a depth map, extracts a 3D volumetric scene representation and generates 3D object proposals. At the core of our approach lies a novel residual, differentiable truncated signed distance function module, which, accounting for the relatively low accuracy of the predicted depth map, extracts a 3D volumetric representation of the scene. Our experiments on the standard NYUv2 dataset demonstrate that our framework lets us generate high-quality 3D box proposals and that it outperforms the two-stage technique consisting of successively performing state-of-the-art depth prediction and depth-based 3D proposal generation.", "title": "" }, { "docid": "67974bd363f89a9da77b2e09851905d3", "text": "Traditional approaches to the task of ACE event detection primarily regard multiple events in one sentence as independent ones and recognize them separately by using sentence-level information. However, events in one sentence are usually interdependent and sentence-level information is often insufficient to resolve ambiguities for some types of events. This paper proposes a novel framework dubbed as Hierarchical and Bias Tagging Networks with Gated Multi-level Attention Mechanisms (HBTNGMA) to solve the two problems simultaneously. Firstly, we propose hierarchical and bias tagging networks to detect multiple events in one sentence collectively. Then, we devise a gated multi-level attention to automatically extract and dynamically fuse the sentence-level and document-level information.
The experimental results on the widely used ACE 2005 dataset show that our approach significantly outperforms other state-of-the-art methods.", "title": "" }, { "docid": "27ddea786e06ffe20b4f526875cdd76b", "text": "It is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized, as myth, folklore, and common sense had long understood, that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming. Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see ACTIVATION-SYNTHESIS HYPOTHESIS.) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection.
Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles, dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic (waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and although of great interest to the study of the mind-body problem, these findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment.", "title": "" }, { "docid": "5a9f6b9f6f278f5f3359d5d58b8516a8", "text": "BACKGROUND\nMusculoskeletal disorders (MSDs) that result from poor ergonomic design are one of the occupational disorders of greatest concern in the industrial sector. A key advantage in the primary design phase is to focus on a method of assessment that detects and evaluates the potential risks experienced by the operative when faced with these types of physical injuries. The method of assessment will improve the process design identifying potential ergonomic improvements from various design alternatives or activities undertaken as part of the cycle of continuous improvement throughout the differing phases of the product life cycle.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nThis paper presents a novel postural assessment method (NERPA) fit for product-process design, which was developed with the help of a digital human model together with a 3D CAD tool, which is widely used in the aeronautic and automotive industries. The power of 3D visualization and the possibility of studying the actual assembly sequence in a virtual environment can allow the functional performance of the parts to be addressed.
Such tools can also provide us with an ergonomic workstation design, together with a competitive advantage in the assembly process.\n\n\nCONCLUSIONS\nThe method developed was used in the design of six production lines, studying 240 manual assembly operations and improving 21 of them. This study demonstrated the proposed method's usefulness and found statistically significant differences in the evaluations of the proposed method and the widely used Rapid Upper Limb Assessment (RULA) method.", "title": "" } ]
scidocsrr
edc8871e7c4dd6ee0caf1ee083242a3a
BotMosaic: Collaborative Network Watermark for Botnet Detection
[ { "docid": "e77b339a245fc09111d7c9033db7a884", "text": "Botnets are now recognized as one of the most serious security threats. In contrast to previous malware, botnets have the characteristic of a command and control (C&C) channel. Botnets also often use existing common protocols, e.g., IRC, HTTP, and in protocol-conforming manners. This makes the detection of botnet C&C a challenging problem. In this paper, we propose an approach that uses network-based anomaly detection to identify botnet C&C channels in a local area network without any prior knowledge of signatures or C&C server addresses. This detection approach can identify both the C&C servers and infected hosts in the network. Our approach is based on the observation that, because of the pre-programmed activities related to C&C, bots within the same botnet will likely demonstrate spatial-temporal correlation and similarity. For example, they engage in coordinated communication, propagation, and attack and fraudulent activities. Our prototype system, BotSniffer, can capture this spatial-temporal correlation in network traffic and utilize statistical algorithms to detect botnets with theoretical bounds on the false positive and false negative rates. We evaluated BotSniffer using many real-world network traces. The results show that BotSniffer can detect real-world botnets with high accuracy and has a very low false positive rate.", "title": "" } ]
[ { "docid": "932934a4362bd671427954d0afb61459", "text": "On the basis of the similarity between spinel and rocksalt structures, it is shown that some spinel oxides (e.g., MgCo2O4, etc) can be cathode materials for Mg rechargeable batteries around 150 °C. The Mg insertion into spinel lattices occurs via \"intercalation and push-out\" process to form a rocksalt phase in the spinel mother phase. For example, by utilizing the valence change from Co(III) to Co(II) in MgCo2O4, Mg insertion occurs at a considerably high potential of about 2.9 V vs. Mg2+/Mg, and similarly it occurs around 2.3 V vs. Mg2+/Mg with the valence change from Mn(III) to Mn(II) in MgMn2O4, being comparable to the ab initio calculation. The feasibility of Mg insertion would depend on the phase stability of the counterpart rocksalt XO of MgO in Mg2X2O4 or MgX3O4 (X = Co, Fe, Mn, and Cr). In addition, the normal spinel MgMn2O4 and MgCr2O4 can be demagnesiated to some extent owing to the robust host structure of Mg1-xX2O4, where the Mg extraction/insertion potentials for MgMn2O4 and MgCr2O4 are both about 3.4 V vs. Mg2+/Mg. Especially, the former \"intercalation and push-out\" process would provide a safe and stable design of cathode materials for polyvalent cations.", "title": "" }, { "docid": "a24f958c480812feb338b651849037b2", "text": "This paper investigates the detection and classification of fighting and pre and post fighting events when viewed from a video camera. Specifically we investigate normal, pre, post and actual fighting sequences and classify them. A hierarchical AdaBoost classifier is described and results using this approach are presented. 
We show it is possible to classify pre-fighting situations using such an approach and demonstrate how it can be used in the general case of continuous sequences.", "title": "" }, { "docid": "00b85bd052a196b1f02d00f6ad532ed2", "text": "The book Build Your Own Database Driven Website Using PHP & MySQL by Kevin Yank provides a hands-on look at what's involved in building a database-driven Web site. The author does a good job of patiently teaching the reader how to install and configure PHP 5 and MySQL to organize dynamic Web pages and put together a viable content management system. At just over 350 pages, the book is rather small compared to a lot of others on the topic, but it contains all the essentials. The author employs excellent teaching techniques to set up the foundation stone by stone and then grouts everything solidly together later in the book. This book aims at intermediate and advanced Web designers looking to make the leap to server-side programming. The author assumes his readers are comfortable with simple HTML. He provides an excellent introduction to PHP and MySQL (including installation) and explains how to make them work together. The amount of material he covers guarantees that almost any reader will benefit.", "title": "" }, { "docid": "6ac996c20f036308f36c7b667babe876", "text": "Patents are a very useful source of technical information. The public availability of patents over the Internet, with for some databases (eg. Espacenet) the assurance of a constant format, allows the development of high value added products using this information source and provides an easy way to analyze patent information. 
This simple and powerful tool facilitates the use of patents in academic research, in SMEs and in developing countries, providing a way to use patents as an ideas resource, thus improving technological innovation.", "title": "" }, { "docid": "1289f47ea43ddd72fc90977b0a538d1c", "text": "This study identifies evaluative, attitudinal, and behavioral factors that enhance or reduce the likelihood of consumers aborting intended online transactions (transaction abort likelihood). Path analyses show that risk perceptions associated with e-shopping have direct influence on the transaction abort likelihood, whereas benefit perceptions do not. In addition, consumers who have favorable attitudes toward e-shopping, purchasing experiences from the Internet, and high purchasing frequencies from catalogs are less likely to abort intended transactions. The results also show that attitude toward e-shopping mediates relationships between the transaction abort likelihood and other predictors (i.e., effort saving, product offering, control in the information search, and time spent on the Internet per visit). © 2003 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "c052f693b65a0f3189fc1e9f4df11162", "text": "In this paper we present ElastiFace, a simple and versatile method for establishing correspondence between textured face models, either for the construction of a blend-shape facial rig or for the exploration of new characters by morphing between a set of input models. While there exists a wide variety of approaches for inter-surface mapping and mesh morphing, most techniques are not suitable for our application: They either require the insertion of additional vertices, are limited to topological planes or spheres, are restricted to near-isometric input meshes, and/or are algorithmically and computationally involved. In contrast, our method extends linear non-rigid registration techniques to allow for strongly varying input geometries.
It is geometrically intuitive, simple to implement, computationally efficient, and robustly handles highly non-isometric input models. In order to match the requirements of other applications, such as recent perception studies, we further extend our geometric matching to the matching of input textures and morphing of geometries and rendering styles.", "title": "" }, { "docid": "071b46c04389b6fe3830989a31991d0d", "text": "Direct slicing of CAD models to generate process planning instructions for solid freeform fabrication may overcome inherent disadvantages of using stereolithography format in terms of the process accuracy, ease of file management, and incorporation of multiple materials. This paper will present the results of our development of a direct slicing algorithm for layered freeform fabrication. The direct slicing algorithm was based on a neutral, international standard (ISO 10303) STEP-formatted non-uniform rational B-spline (NURBS) geometric representation and is intended to be independent of any commercial CAD software. The following aspects of the development effort will be presented: (1) determination of optimal build direction based upon STEP-based NURBS models; (2) adaptive subdivision of NURBS data for geometric refinement; and (3) ray-casting slice generation into sets of raster patterns. The development also provides for multi-material slicing and will provide an effective tool in heterogeneous slicing processes. © 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "d38db185d37fa96795e640d918a8dfe8", "text": "Learning behaviour of artificial agents is commonly studied in the framework of Reinforcement Learning. Reinforcement Learning gained increasing popularity in the past years. This is partially due to developments that enabled the possibility to employ complex function approximators, such as deep networks, in combination with the framework.
Two of the core challenges in Reinforcement Learning are the correct assignment of credits over long periods of time and dealing with sparse rewards. In this thesis we propose a framework based on the notions of goals to tackle these problems. This work implements several components required to obtain a form of goal-directed behaviour, similar to how it is observed in human reasoning. This includes the representation of a goal space, learning how to set goals and finally how to reach them. The framework itself is build upon the options model, which is a common approach for representing temporally extended actions in Reinforcement Learning. All components of the proposed method can be implemented as deep networks and the complete system can be learned in an end-to-end fashion using standard optimization techniques. We evaluate the approach on a set of continuous control problems of increasing difficulty. We show, that we are able to solve a difficult gathering task, which poses a challenge to state-of-the-art Reinforcement Learning algorithms. The presented approach is furthermore able to scale to complex kinematic agents of the MuJoCo benchmark.", "title": "" }, { "docid": "714c06da1a728663afd8dbb1cd2d472d", "text": "This paper proposes hybrid semiMarkov conditional random fields (SCRFs) for neural sequence labeling in natural language processing. Based on conventional conditional random fields (CRFs), SCRFs have been designed for the tasks of assigning labels to segments by extracting features from and describing transitions between segments instead of words. In this paper, we improve the existing SCRF methods by employing word-level and segment-level information simultaneously. First, word-level labels are utilized to derive the segment scores in SCRFs. Second, a CRF output layer and an SCRF output layer are integrated into an unified neural network and trained jointly. 
Experimental results on the CoNLL 2003 named entity recognition (NER) shared task show that our model achieves state-of-the-art performance when no external knowledge is used.", "title": "" }, { "docid": "2cf7921cce2b3077c59d9e4e2ab13afe", "text": "Scientists' and consumers' preferences have shifted toward natural colorants due to the negative health effects of the synthetic colorants that have been used in foods for many years. Interest in natural colorants is increasing with each passing day as a consequence of their antimicrobial and antioxidant effects. The biggest obstacle to promoting natural colorants as food pigment agents is that their production requires high investment. For this reason, related R&D studies have shifted to processes that reduce cost, directed toward pigment production from microorganisms by fermentation. Nowadays, there are pigments obtained commercially from microorganisms or plants by fermentation. These pigments can be used as both food colorants and food supplements. In this review, besides the colorant and antioxidant properties, the antimicrobial properties of natural colorants are discussed.", "title": "" }, { "docid": "83f067159913e65410a054681461ab4d", "text": "Cloud computing has revolutionized the way computing and software services are delivered to clients on demand. It offers users the ability to connect to computing resources and access IT-managed services with a previously unknown level of ease. Due to this greater level of flexibility, the cloud has become the breeding ground of a new generation of products and services. However, the flexibility of cloud-based services comes with risks to the security and privacy of users' data. Thus, security concerns among users of the cloud have become a major barrier to the widespread growth of cloud computing. One of the security concerns of cloud is data mining based privacy attacks that involve analyzing data over a long period to extract valuable information. 
In particular, in current cloud architecture a client entrusts a single cloud provider with his data. This gives the provider, and outside attackers with unauthorized access to the cloud, an opportunity to analyze client data over a long period and extract sensitive information, causing privacy violations for clients. This is a big concern for many cloud clients. In this paper, we first identify the data mining based privacy risks on cloud data and propose a distributed architecture to eliminate the risks.", "title": "" }, { "docid": "59a1088003576f2e75cdbedc24ae8bdf", "text": "Two literatures or sets of articles are complementary if, considered together, they can reveal useful information of scientific interest not apparent in either of the two sets alone. Of particular interest are complementary literatures that are also mutually isolated and noninteractive (they do not cite each other and are not co-cited). In that case, the intriguing possibility arises that the information gained by combining them is novel. During the past decade, we have identified seven examples of complementary noninteractive structures in the biomedical literature. Each structure led to a novel, plausible, and testable hypothesis that, in several cases, was subsequently corroborated by medical researchers through clinical or laboratory investigation. We have also developed, tested, and described a systematic, computer-aided approach to finding and identifying complementary noninteractive literatures. Specialization, Fragmentation, and a Connection Explosion By some obscure spontaneous process scientists have responded to the growth of science by organizing their work into specialties, thus permitting each individual to focus on a small part of the total literature. 
Specialties that grow too large tend to divide into subspecialties that have their own literatures which, by a process of repeated splitting, maintain a more or less fixed and manageable size. As the total literature grows, the number of specialties, but not in general the size of each, increases (Kochen, 1963; Swanson, 1990c). But the unintended consequence of specialization is fragmentation. By dividing up the pie, the potential relationships among its pieces tend to be neglected. Although scientific literature cannot, in the long run, grow disproportionately to the growth of the communities and resources that produce it, combinations of implicitly related segments of literature can grow much faster than the literature itself and can readily exceed the capacity of the community to identify and assimilate such relatedness (Swanson, 1993). The significance of the "information explosion" thus may lie not in an explosion of quantity per se, but in an incalculably greater combinatorial explosion of unnoticed and unintended logical connections. The Significance of Complementary Noninteractive Literatures If two literatures each of substantial size are linked by arguments that they respectively put forward (that is, are "logically" related, or complementary), one would expect to gain useful information by combining them. For example, suppose that one (biomedical) literature establishes that some environmental factor A influences certain internal physiological conditions and a second literature establishes that these same physiological changes influence the course of disease C. Presumably, then, anyone who reads both literatures could conclude that factor A might influence disease C. Under such conditions of complementarity one would also expect the two literatures to refer to each other. 
If, however, the two literatures were developed independently of one another, the logical linkage illustrated may be both unintended and unnoticed. To detect such mutual isolation, we examine the citation pattern. If two literatures are "noninteractive", that is, if they have never (or seldom) been cited together, and if neither cites the other, then it is possible that scientists have not previously considered both literatures together, and so it is possible that no one is aware of the implicit A-C connection. The two conditions, complementarity and noninteraction, describe a model structure that shows how useful information can remain undiscovered even though its components consist of public knowledge (Swanson, 1987, 1991). Public Knowledge / Private Knowledge There is, of course, no way to know in any particular case whether the possibility of an AC relationship in the above model has or has not occurred to someone, or whether or not anyone has actually considered the two literatures on A and C together, a private matter that necessarily remains conjectural. However, our argument is based only on determining whether there is any printed evidence to the contrary. (From: KDD-96 Proceedings. Copyright © 1996, AAAI, www.aaai.org. All rights reserved.) We are concerned with public rather than private knowledge, with the state of the record produced rather than the state of mind of the producers (Swanson, 1990d). The point of bringing together the AB and BC literatures, in any event, is not to "prove" an AC linkage (by considering only transitive relationships), but rather to call attention to an apparently unnoticed association that may be worth investigating. In principle any chain of scientific, including analogic, reasoning in which different links appear in noninteractive literatures may lead to the discovery of new interesting connections. 
\"What people know\" is a common u derstanding of what is meant by \"knowledge\". If taken in this subjective sense, the idea of \"knowledge discovery\" could mean merely that someone discovered something they hadn’t known before. Our focus in the present paper is on a second sense of the word \"knowledge\", a meaning associated with the products of human i tellectual activity, as encoded in the public record, rather than with the contents of the human mind. This abstract world of human-created \"objective\" knowledge is open to exploration and discovery, for it can contain territory that is subjectively unknown to anyone (Popper, 1972). Our work is directed toward the discovery of scientificallyuseful information implicit in the public record, but not previously made xplicit. The problem we address concerns structures within the scientific literature, not within the mind. The Process of Finding Complementary Noninteractive Literatures During the past ten years, we have pursued three goals: i) to show in principle how new knowledge might be gained by synthesizing logicallyrelated noninteractive literatures; ii) to demonstrate that such structures do exist, at least within the biomedical literature; and iii) to develop a systematic process for finding them. In pursuit of goal iii, we have created interactive software and database arch strategies that can facilitate the discovery of complementary st uctures in the published literature of science. The universe or searchspace under consideration is limited only by the coverage of the major scientific databases, though we have focused primarily on the biomedical field and the MEDLINE database (8 million records). In 1991, a systematic approach to finding complementary structures was outlined and became a point of departure for software development (Swanson, 1991). The system that has now taken shape is based on a 3-way interaction between computer software, bibliographic databases, and a human operator. 
The interaction generates information structures that are used heuristically to guide the search for promising complementary literatures. The user of the system begins by choosing a question or problem area of scientific interest that can be associated with a literature, C. Elsewhere we describe and evaluate experimental computer software, which we call ARROWSMITH (Swanson & Smalheiser, 1997), that performs two separate functions that can be used independently. The first function produces a list of candidates for a second literature, A, complementary to C, from which the user can select one candidate (at a time) as input, along with C, to the second function. This first function can be considered as a computer-assisted process of problem-discovery, an issue identified in the AI literature (Langley et al., 1987, pp. 304-307). Alternatively, the user may wish to identify a second literature, A, as a conjecture or hypothesis generated independently of the computer-produced list of candidates. Our approach has been based on the use of article titles as a guide to identifying complementary literatures. As indicated above, our point of departure for the second function is a tentative scientific hypothesis associated with two literatures, A and C. A title-word search of MEDLINE is used to create two local computer title-files associated with A and C, respectively. These files are used as input to the ARROWSMITH software, which then produces a list of all words common to the two sets of titles, except for words excluded by an extensive stoplist (presently about 5000 words). The resulting list of words provides the basis for identifying title-word pathways that might provide clues to the presence of complementary arguments within the literatures corresponding to A and C. 
The output of this procedure is a structured title display (plus journal citation) that serves as a heuristic aid to identifying word-linked titles and serves also as an organized guide to the literature.", "title": "" }, { "docid": "28f8be68a0fe4762af272a0e11d53f7d", "text": "In this article, we address the cross-domain (i.e., street and shop) clothing retrieval problem and investigate its real-world applications for online clothing shopping. It is a challenging problem due to the large discrepancy between street and shop domain images. We focus on learning an effective feature-embedding model to generate robust and discriminative feature representation across domains. Existing triplet embedding models achieve promising results by finding an embedding metric in which the distance between negative pairs is larger than the distance between positive pairs plus a margin. However, existing methods do not address the challenges in the cross-domain clothing retrieval scenario sufficiently. First, the intradomain and cross-domain data relationships need to be considered simultaneously. Second, the number of matched and nonmatched cross-domain pairs are unbalanced. To address these challenges, we propose a deep cross-triplet embedding algorithm together with a cross-triplet sampling strategy. The extensive experimental evaluations demonstrate the effectiveness of the proposed algorithms well. Furthermore, we investigate two novel online shopping applications, clothing trying on and accessories recommendation, based on a unified cross-domain clothing retrieval framework.", "title": "" }, { "docid": "aab83f305b6519c091f883d869a0b92c", "text": "With the development of the web of data, recent statistical, data-to-text generation approaches have focused on mapping data (e.g., database records or knowledge-base (KB) triples) to natural language. 
In contrast to previous grammar-based approaches, this more recent work systematically eschews syntax and learns a direct mapping between meaning representations and natural language. By contrast, I argue that an explicit model of syntax can help support NLG in several ways. Based on case studies drawn from KB-to-text generation, I show that syntax can be used to support supervised training with little training data; to ensure domain portability; and to improve statistical hypertagging.", "title": "" }, { "docid": "ca4aa2c6f4096bbffaa2e3e1dd06fbe8", "text": "Hybrid unmanned aircraft, that combine hover capability with a wing for fast and efficient forward flight, have attracted a lot of attention in recent years. Many different designs are proposed, but one of the most promising is the tailsitter concept. However, tailsitters are difficult to control across the entire flight envelope, which often includes stalled flight. Additionally, their wing surface makes them susceptible to wind gusts. In this paper, we propose incremental nonlinear dynamic inversion control for the attitude and position control. The result is a single, continuous controller, that is able to track the acceleration of the vehicle across the flight envelope. The proposed controller is implemented on the Cyclone hybrid UAV. Multiple outdoor experiments are performed, showing that unmodeled forces and moments are effectively compensated by the incremental control structure, and that accelerations can be tracked across the flight envelope. Finally, we provide a comprehensive procedure for the implementation of the controller on other types of hybrid UAVs.", "title": "" }, { "docid": "9058505c04c1dc7c33603fd8347312a0", "text": "Fear appeals are a polarizing issue, with proponents confident in their efficacy and opponents confident that they backfire. We present the results of a comprehensive meta-analysis investigating fear appeals' effectiveness for influencing attitudes, intentions, and behaviors. 
We tested predictions from a large number of theories, the majority of which have never been tested meta-analytically until now. Studies were included if they contained a treatment group exposed to a fear appeal, a valid comparison group, a manipulation of depicted fear, a measure of attitudes, intentions, or behaviors concerning the targeted risk or recommended solution, and adequate statistics to calculate effect sizes. The meta-analysis included 127 articles (9% unpublished) yielding 248 independent samples (NTotal = 27,372) collected from diverse populations. Results showed a positive effect of fear appeals on attitudes, intentions, and behaviors, with the average effect on a composite index being random-effects d = 0.29. Moderation analyses based on prominent fear appeal theories showed that the effectiveness of fear appeals increased when the message included efficacy statements, depicted high susceptibility and severity, recommended one-time only (vs. repeated) behaviors, and targeted audiences that included a larger percentage of female message recipients. Overall, we conclude that (a) fear appeals are effective at positively influencing attitude, intentions, and behaviors; (b) there are very few circumstances under which they are not effective; and (c) there are no identified circumstances under which they backfire and lead to undesirable outcomes.", "title": "" }, { "docid": "4a2de9235a698a3b5e517446088d2ac6", "text": "In recent years, there has been a growing interest in designing multi-robot systems (hereafter MRSs) to provide cost effective, fault-tolerant and reliable solutions to a variety of automated applications. Here, we review recent advancements in MRSs specifically designed for cooperative object transport, which requires the members of MRSs to coordinate their actions to transport objects from a starting position to a final destination. 
To achieve cooperative object transport, a wide range of transport, coordination and control strategies have been proposed. Our goal is to provide a comprehensive summary for this relatively heterogeneous and fast-growing body of scientific literature. While distilling the information, we purposefully avoid using hierarchical dichotomies, which have been traditionally used in the field of MRSs. Instead, we employ a coarse-grain approach by classifying each study based on the transport strategy used: pushing-only, grasping and caging. We identify key design constraints that may be shared among these studies despite considerable differences in their design methods. In the end, we discuss several open challenges and possible directions for future work to improve the performance of the current MRSs. Overall, we hope to increase the visibility and accessibility of the excellent studies in the field and provide a framework that helps the reader to navigate through them more effectively.", "title": "" }, { "docid": "4fcea2e99877dedc419893313c1baea4", "text": "A cardiac condition caused by irregular electrical activity of the heart is called an arrhythmia. A noninvasive method called the electrocardiogram (ECG) is used to diagnose arrhythmias or irregularities of the heart. The difficulty doctors encounter in the analysis of heartbeat irregularities is due to the non-stationarity of the ECG signal, the existence of noise and the abnormality of the heartbeat. The computer-assisted study of the ECG signal helps doctors diagnose cardiovascular diseases. The major limitations of all ECG signal analysis for arrhythmia detection are due to the non-stationary behavior of the ECG signals and unobserved information present in the ECG signals. In addition, detection based on the Extreme Learning Machine (ELM) has become a common technique in machine learning. However, it easily suffers from overfitting. 
This paper proposes a hybrid classification technique using a Bayesian and Extreme Learning Machine (B-ELM) technique for heartbeat recognition in arrhythmia detection (AD). The proposed technique is capable of detecting arrhythmia classes with a maximum accuracy of 98.09% and a low computational time of about 2.5 s.", "title": "" }, { "docid": "9e8d4b422a7ed05ee338fcd426dab723", "text": "Entity typing is an essential task for constructing a knowledge base. However, many non-English knowledge bases fail to type their entities due to the absence of a reasonable local hierarchical taxonomy. Since constructing a widely accepted taxonomy is a hard problem, we propose to type these non-English entities with some widely accepted taxonomies in English, such as DBpedia, Yago and Freebase. We define this problem as cross-lingual type inference. In this paper, we present CUTE to type Chinese entities with DBpedia types. First we exploit the cross-lingual entity linking between Chinese and English entities to construct the training data. Then we propose a multi-label hierarchical classification algorithm to type these Chinese entities. Experimental results show the effectiveness and efficiency of our method.", "title": "" }, { "docid": "68288cbb20c43b2f1911d6264cc81a6c", "text": "Folliculitis decalvans is an inflammatory presentation of cicatrizing alopecia characterized by inflammatory perifollicular papules and pustules. It generally occurs in adult males, predominantly involving the vertex and occipital areas of the scalp. The use of dermatoscopy in hair and scalp diseases improves diagnostic accuracy. Some trichoscopic findings, such as follicular tufts, perifollicular erythema, crusts and pustules, can be observed in folliculitis decalvans. More research on the pathogenesis and treatment options of this disfiguring disease is required for improving patient management.", "title": "" } ]
scidocsrr
24c70b1ee4001017b1ef9740520874dd
Compositional Vector Space Models for Knowledge Base Inference
[ { "docid": "8b46e6e341f4fdf4eb18e66f237c4000", "text": "We present a general learning-based approach for phrase-level sentiment analysis that adopts an ordinal sentiment scale and is explicitly compositional in nature. Thus, we can model the compositional effects required for accurate assignment of phrase-level sentiment. For example, combining an adverb (e.g., “very”) with a positive polar adjective (e.g., “good”) produces a phrase (“very good”) with increased polarity over the adjective alone. Inspired by recent work on distributional approaches to compositionality, we model each word as a matrix and combine words using iterated matrix multiplication, which allows for the modeling of both additive and multiplicative semantic effects. Although the multiplication-based matrix-space framework has been shown to be a theoretically elegant way to model composition (Rudolph and Giesbrecht, 2010), training such models has to be done carefully: the optimization is nonconvex and requires a good initial starting point. This paper presents the first such algorithm for learning a matrix-space model for semantic composition. In the context of the phrase-level sentiment analysis task, our experimental results show statistically significant improvements in performance over a bagof-words model.", "title": "" }, { "docid": "78cda62ca882bb09efc08f7d4ea1801e", "text": "Open Domain: There are nearly an unbounded number of classes, objects and relations Missing Data: Many useful facts are never explicitly stated No Negative Examples: Labeling positive and negative examples for all interesting relations is impractical Learning First-Order Horn Clauses from Web Text Stefan Schoenmackers Oren Etzioni Daniel S. Weld Jesse Davis Turing Center, University of Washington Katholieke Universiteit Leuven", "title": "" } ]
[ { "docid": "011ff2d5995a46a686d9edb80f33b8ca", "text": "In the era of Social Computing, the role of customer reviews and ratings can be instrumental in predicting the success and sustainability of businesses as customers and even competitors use them to judge the quality of a business. Yelp is one of the most popular websites for users to write such reviews. This rating can be subjective and biased toward user's personality. Business preferences of a user can be decrypted based on his/ her past reviews. In this paper, we deal with (i) uncovering latent topics in Yelp data based on positive and negative reviews using topic modeling to learn which topics are the most frequent among customer reviews, (ii) sentiment analysis of users' reviews to learn how these topics associate to a positive or negative rating which will help businesses improve their offers and services, and (iii) predicting unbiased ratings from user-generated review text alone, using Linear Regression model. We also perform data analysis to get some deeper insights into customer reviews.", "title": "" }, { "docid": "a7aac88bd2862bafc2b4e1e562a7b86a", "text": "Longitudinal melanonychia presents in various conditions including neoplastic and reactive disorders. It is much more frequently seen in non-Caucasians than Caucasians. While most cases of nail apparatus melanoma start as longitudinal melanonychia, melanocytic nevi of the nail apparatus also typically accompany longitudinal melanonychia. Identifying the suspicious longitudinal melanonychia is therefore an important task for dermatologists. Dermoscopy provides useful information for making this decision. The most suspicious dermoscopic feature of early nail apparatus melanoma is irregular lines on a brown background. Evaluation of the irregularity may be rather subjective, but through experience, dermatologists can improve their diagnostic skills of longitudinal melanonychia, including benign conditions showing regular lines. 
Other important dermoscopic features of early nail apparatus melanoma are micro-Hutchinson's sign, a wide pigmented band, and triangular pigmentation on the nail plate. Although there is as yet no solid evidence concerning the frequency of dermoscopic follow up, we recommend checking the suspicious longitudinal melanonychia every 6 months. Moreover, patients with longitudinal melanonychia should be asked to return to the clinic quickly if the lesion shows obvious changes. Diagnosis of amelanotic or hypomelanotic melanoma affecting the nail apparatus is also challenging, but melanoma should be highly suspected if remnants of melanin granules are detected dermoscopically.", "title": "" }, { "docid": "f7aceafa35aaacb5b2b854a8b7e275b6", "text": "In this paper, the study and implementation of a high frequency pulse LED driver with self-oscillating circuit is presented. The self-oscillating half-bridge series resonant inverter is adopted in this LED driver and the circuit characteristics of LED with high frequency pulse driving voltage is also discussed. LED module is connected with full bridge diode rectifier but without low pass filter and this LED module is driven with high frequency pulse. In additional, the self-oscillating resonant circuit with saturable core is used to achieve zero voltage switching and to control the LED current. The LED equivalent circuit of resonant circuit and the operating principle of the self-oscillating half-bridge inverter are discussed in detail. Finally, an 18 W high frequency pulse LED driver is implemented to verify the feasibility. Experimental results show that the circuit efficiency is over 86.5% when input voltage operating within AC 110 ± 10 Vrms and the maximum circuit efficiency is up to 89.2%.", "title": "" }, { "docid": "e729c06c5a4153af05740a01509ee5d5", "text": "Understanding large-scale document collections in an efficient manner is an important problem. 
Usually, document data are associated with other information (e.g., an author's gender, age, and location) and their links to other entities (e.g., co-authorship and citation networks). For the analysis of such data, we often have to reveal common as well as discriminative characteristics of documents with respect to their associated information, e.g., male- vs. female-authored documents, old vs. new documents, etc. To address such needs, this paper presents a novel topic modeling method based on joint nonnegative matrix factorization, which simultaneously discovers common as well as discriminative topics given multiple document sets. Our approach is based on a block-coordinate descent framework and is capable of utilizing only the most representative, thus meaningful, keywords in each topic through a novel pseudo-deflation approach. We perform both quantitative and qualitative evaluations using synthetic as well as real-world document data sets such as research paper collections and nonprofit micro-finance data. We show our method has a great potential for providing in-depth analyses by clearly identifying common and discriminative topics among multiple document sets.", "title": "" }, { "docid": "74a3c4dae9573325b292da736d46a78e", "text": "Machine learning is currently dominated by largely experimental work focused on improvements in a few key tasks. However, the impressive accuracy numbers of the best performing models are questionable because the same test sets have been used to select these models for multiple years now. To understand the danger of overfitting, we measure the accuracy of CIFAR-10 classifiers by creating a new test set of truly unseen images. Although we ensure that the new test set is as close to the original data distribution as possible, we find a large drop in accuracy (4% to 10%) for a broad range of deep learning models. 
Yet, more recent models with higher original accuracy show a smaller drop and better overall performance, indicating that this drop is likely not due to overfitting based on adaptivity. Instead, we view our results as evidence that current accuracy numbers are brittle and susceptible to even minute natural variations in the data distribution.", "title": "" }, { "docid": "1ec8f8e1b34ebcf8a0c99975d2fa58c4", "text": "BACKGROUND\nTo compare simultaneous recordings from an external patch system specifically designed to ensure better P-wave recordings and standard Holter monitor to determine diagnostic efficacy. Holter monitors are a mainstay of clinical practice, but are cumbersome to access and wear and P-wave signal quality is frequently inadequate.\n\n\nMETHODS\nThis study compared the diagnostic efficacy of the P-wave centric electrocardiogram (ECG) patch (Carnation Ambulatory Monitor) to standard 3-channel (leads V1, II, and V5) Holter monitor (Northeast Monitoring, Maynard, MA). Patients were referred to a hospital Holter clinic for standard clinical indications. Each patient wore both devices simultaneously and served as their own control. Holter and Patch reports were read in a blinded fashion by experienced electrophysiologists unaware of the findings in the other corresponding ECG recording. All patients, technicians, and physicians completed a questionnaire on comfort and ease of use, and potential complications.\n\n\nRESULTS\nIn all 50 patients, the P-wave centric patch recording system identified rhythms in 23 patients (46%) that altered management, compared to 6 Holter patients (12%), P<.001. The patch ECG intervals PR, QRS and QT correlated well with the Holter ECG intervals having correlation coefficients of 0.93, 0.86, and 0.94, respectively. 
Finally, 48 patients (96%) preferred wearing the patch monitor.\n\n\nCONCLUSIONS\nA single-channel ambulatory patch ECG monitor, designed specifically to ensure that the P-wave component of the ECG be visible, resulted in a significantly improved rhythm diagnosis and avoided inaccurate diagnoses made by the standard 3-channel Holter monitor.", "title": "" }, { "docid": "fa0f3d0d78040d6b89087c24d8b7c07c", "text": "Named entity recognition is a crucial component of biomedical natural language processing, enabling information extraction and ultimately reasoning over and knowledge discovery from text. Much progress has been made in the design of rule-based and supervised tools, but they are often genre and task dependent. As such, adapting them to different genres of text or identifying new types of entities requires major effort in re-annotation or rule development. In this paper, we propose an unsupervised approach to extracting named entities from biomedical text. We describe a stepwise solution to tackle the challenges of entity boundary detection and entity type classification without relying on any handcrafted rules, heuristics, or annotated data. A noun phrase chunker followed by a filter based on inverse document frequency extracts candidate entities from free text. Classification of candidate entities into categories of interest is carried out by leveraging principles from distributional semantics. Experiments show that our system, especially the entity classification step, yields competitive results on two popular biomedical datasets of clinical notes and biological literature, and outperforms a baseline dictionary match approach. Detailed error analysis provides a road map for future work.", "title": "" }, { "docid": "9d9afbd6168c884f54f72d3daea57ca7", "text": "
Computer aided diagnosis (CADx) systems for digitized mammograms solve the problem of classification between benign and malignant tissues while studies have shown that using only a subset of features generated from the mammograms can yield higher classification accuracy. To this end, we propose a mutual information-based Support Vector Machine Recursive Feature Elimination (SVM-RFE) as the classification method with feature selection in this paper. We have conducted extensive experiments on publicly available mammographic data and the obtained results indicate that the proposed method outperforms other SVM and SVM-RFE-based methods.", "title": "" }, { "docid": "cabfa3e645415d491ed4ca776b9e370a", "text": "The impact of social networks in customer buying decisions is rapidly increasing, because they are effective in shaping public opinion. This paper helps marketers analyze a social network’s members based on different characteristics as well as choose the best method for identifying influential people among them. Marketers can then use these influential people as seeds for market products/services. Considering the importance of opinion leadership in social networks, the authors provide a comprehensive overview of existing literature. Studies show that different titles (such as opinion leaders, influential people, market mavens, and key players) are used to refer to the influential group in social networks. In this paper, all the properties presented for opinion leaders in the form of different titles are classified into three general categories, including structural, relational, and personal characteristics. Furthermore, based on studying opinion leader identification methods, appropriate parameters are extracted in a comprehensive chart to evaluate and compare these methods accurately. 
based marketing, word-of-mouth marketing has more credibility (Li & Du, 2011), because there is no direct link between the sender and the merchant. As a result, information is considered independent and subjective. In recent years, many studies in word-of-mouth marketing have investigated discovering influential nodes in a social network. These influential people are called opinion leaders in the literature. Organizations interested in e-commerce need to identify opinion leaders among their customers, as well as the places (web sites) where those customers go online, since these are the places where they can market their products. (International Journal of Virtual Communities and Social Networking, 3(1), 43-59, January-March 2011. DOI: 10.4018/jvcsn.2011010105) Social Network Analysis. Regarding the importance of interpersonal relationships, studies have looked for formal methods to measure who talks to whom in a community. These methods are known as social network analysis (Scott, 1991; Wasserman & Faust, 1994; Rogers & Kincaid, 1981; Valente & Davis, 1999). Social network analysis includes the study of interpersonal relationships. It is usually more focused on the network itself, rather than on the attributes of the members (Li & Du, 2011). Valente and Rogers (1995) have described social network analysis from the point of view of interpersonal communication as “formal methods of measuring who talks to whom within a community”. Social network analysis enables researchers to identify people who are more central in the network and thus more influential. By using these central people or opinion leaders as seeds, diffusion of a new product or service can be accelerated (Katz & Lazarsfeld, 1955; Valente & Davis, 1999). 
Importance of Social Networks for Marketing. The importance of social networks as a marketing tool is increasing, and it spans diverse areas (Even-Dar & Shapira, 2011). Analysis of interdependencies between customers can improve targeted marketing as well as help organizations acquire new customers who are not detectable by traditional techniques. Owing to recent technological developments, social networks are no longer limited to face-to-face, physical relationships. Furthermore, online social networks have become a new medium for word-of-mouth marketing. Although face-to-face word-of-mouth has a greater impact on consumer purchasing decisions than printed information because of its vividness and credibility, in recent years, with the growth of the Internet and virtual communities, written word-of-mouth (word-of-mouse) has emerged in online channels (Mak, 2008). Consider a company that wants to launch a new product. This company can benefit from popular social networks like Facebook and Myspace rather than using classical advertising channels. Convincing several key persons in each network to adopt the new product can then help the company achieve an effective diffusion in the network through word-of-mouth. According to Nielsen’s survey of more than 26,000 internet users, 78% of respondents indicated that recommendations from others are the most trusted source when considering a product or service (Nielsen, 2007). Based on another study conducted by Deloitte’s Consumer Products group, almost 62% of consumers who read consumer-written product reviews online declare that their purchase decisions have been directly influenced by the user reviews (Deloitte, 2007). Empirical studies have demonstrated that new ideas and practices spread through interpersonal communication (Valente & Rogers, 1995; Valente & Davis, 1999; Valente, 1995). Hawkins et al. 
(1995) suggest that companies can use four possible courses of action, including marketing research, product sampling, retailing/personal selling, and advertising, to use their knowledge of opinion leaders to their advantage. In a similar study, the authors of this paper have reviewed the related literature on using social networks to improve marketing response. They discuss the benefits and challenges of utilizing interpersonal relationships in a network, as well as opinion leader identification; also, a three-step process showing how firms can apply social networks to their marketing activities has been proposed (Jafari Momtaz et al., 2011). While applications of opinion leadership in business and marketing have been widely studied, the work generally deals with the development of measurement scales (Burt, 1999), its importance in the social sciences (Flynn et al., 1994), and its application to various areas related to marketing, such as the health care industry, political science (Burt, 1999) and public communications (Howard et al., 2000; Locock et al., 2001). In this paper, a comprehensive review of studies in the field of opinion leadership and of employing social networks to improve marketing response is carried out. 
", "title": "" }, { "docid": "8250999ad1b7278ff123cd3c89b5d2d9", "text": "Drawing on Bronfenbrenner’s ecological theory and prior empirical research, the current study examines the way that blogging and social networking may impact feelings of connection and social support, which in turn could impact maternal well-being (e.g., marital functioning, parenting stress, and depression). One hundred and fifty-seven new mothers reported on their media use and various well-being variables. On average, mothers were 27 years old (SD = 5.15) and infants were 7.90 months old (SD = 5.21). All mothers had access to the Internet in their home. New mothers spent approximately 3 hours on the computer each day, with most of this time spent on the Internet. Findings suggested that frequency of blogging predicted feelings of connection to extended family and friends which then predicted perceptions of social support. This in turn predicted maternal well-being, as measured by marital satisfaction, couple conflict, parenting stress, and depression. In sum, blogging may improve new mothers’ well-being, as they feel more connected to the world outside their home through the Internet.", "title": "" }, { "docid": "aa5d8162801abcc81ac542f7f2a423e5", "text": "Prediction of popularity has a profound impact on social media, since it offers opportunities to reveal individual preference and public attention from evolutionary social systems. Previous research, although achieving promising results, neglects one distinctive characteristic of social data, i.e., sequentiality. For example, the popularity of online content is generated over time with sequential post streams of social media. To investigate the sequential prediction of popularity, we propose a novel prediction framework called Deep Temporal Context Networks (DTCN) that takes both temporal context and temporal attention into account. 
Our DTCN contains three main components, from embedding and learning to predicting. With a joint embedding network, we obtain a unified deep representation of multi-modal user-post data in a common embedding space. Then, based on the embedded data sequence over time, temporal context learning attempts to recurrently learn two adaptive temporal contexts for sequential popularity. Finally, a novel temporal attention is designed to predict new popularity (the popularity of a new user-post pair) with temporal coherence across multiple time-scales. Experiments on our released image dataset with about 600K Flickr photos demonstrate that DTCN outperforms state-of-the-art deep prediction algorithms, with an average of 21.51% relative performance improvement in the popularity prediction (Spearman Ranking Correlation).", "title": "" }, { "docid": "708c9b97f4a393ac49688d913b1d2cc6", "text": "Cognitive NLP systems, i.e., NLP systems that make use of behavioral data, augment traditional text-based features with cognitive features extracted from eye-movement patterns, EEG signals, brain imaging, etc. Such extraction of features is typically manual. We contend that manual extraction of features may not be the best way to tackle text subtleties that characteristically prevail in complex classification tasks like sentiment analysis and sarcasm detection, and that even the extraction and choice of features should be delegated to the learning system. We introduce a framework to automatically extract cognitive features from the eye-movement/gaze data of human readers reading the text and use them as features along with textual features for the tasks of sentiment polarity and sarcasm detection. Our proposed framework is based on a Convolutional Neural Network (CNN). The CNN learns features from both gaze and text and uses them to classify the input text. 
We test our technique on published sentiment and sarcasm labeled datasets, enriched with gaze information, to show that using a combination of automatically learned text and gaze features often yields better classification performance over (i) CNN based systems that rely on text input alone and (ii) existing systems that rely on handcrafted gaze and textual features.", "title": "" }, { "docid": "d5f905fb66ba81ecde0239a4cc3bfe3f", "text": "Bidirectional path tracing (BDPT) can render highly realistic scenes with complicated lighting scenarios. The Light Vertex Cache (LVC) based BDPT method by Davidovic et al. [Davidovič et al. 2014] provided good performance on scenes with simple materials in a progressive rendering scenario. In this paper, we propose a new bidirectional path tracing formulation based on the LVC approach that handles scenes with complex, layered materials efficiently on the GPU. We achieve coherent material evaluation while conserving GPU memory requirements using sorting. We propose a modified method for selecting light vertices using the contribution importance which improves the image quality for a given amount of work. Progressive rendering can empower artists in the production pipeline to iterate and preview their work quickly. We hope the work presented here will enable the use of GPUs in the production pipeline with complex materials and complicated lighting scenarios.", "title": "" }, { "docid": "400a56ea0b2c005ed16500f0d7818313", "text": "Real estate appraisal, which is the process of estimating the price for real estate properties, is crucial for both buyers and sellers as the basis for negotiation and transaction. Traditionally, the repeat sales model has been widely adopted to estimate real estate prices. However, it depends on the design and calculation of a complex economic-related index, which is challenging to estimate accurately. 
Today, real estate brokers provide easy access to detailed online information on real estate properties to their clients. We are interested in estimating the real estate price from these large amounts of easily accessed data. In particular, we analyze the prediction power of online house pictures, which is one of the key factors for online users to make a potential visiting decision. The development of robust computer vision algorithms makes the analysis of visual content possible. In this paper, we employ a recurrent neural network to predict real estate prices using the state-of-the-art visual features. The experimental results indicate that our model outperforms several other state-of-the-art baseline algorithms in terms of both mean absolute error and mean absolute percentage error.", "title": "" }, { "docid": "b8c59cb962a970daaf012b15bcb8413d", "text": "Joint image filters leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for suppressing noise or enhancing spatial resolution. Existing methods either rely on various explicit filter constructions or hand-designed objective functions, thereby making it difficult to understand, improve, and accelerate these filters in a coherent framework. In this paper, we propose a learning-based approach for constructing joint filters based on Convolutional Neural Networks. In contrast to existing methods that consider only the guidance image, the proposed algorithm can selectively transfer salient structures that are consistent with both guidance and target images. We show that the model trained on a certain type of data, e.g., RGB and depth images, generalizes well to other modalities, e.g., flash/non-Flash and RGB/NIR images. 
We validate the effectiveness of the proposed joint filter through extensive experimental evaluations with state-of-the-art methods.", "title": "" }, { "docid": "6db749b222a44764cf07bde527c230a3", "text": "There have been many claims that the Internet represents a new “frictionless market.” Our research empirically analyzes the characteristics of the Internet as a channel for two categories of homogeneous products — books and CDs. Using a data set of over 8,500 price observations collected over a period of 15 months, we compare pricing behavior at 41 Internet and conventional retail outlets. We find that prices on the Internet are 9-16% lower than prices in conventional outlets, depending on whether taxes, shipping and shopping costs are included in the price. Additionally, we find that Internet retailers’ price adjustments over time are up to 100 times smaller than conventional retailers’ price adjustments — presumably reflecting lower menu costs in Internet channels. We also find that levels of price dispersion depend importantly on the measures employed. When we simply compare the prices posted by different Internet retailers we find substantial dispersion. Internet retailer prices differ by an average of 33% for books and 25% for CDs. However, when we weight these prices by proxies for market share, we find dispersion is lower in Internet channels than in conventional channels, reflecting the dominance of certain heavily branded retailers. We conclude that while there is lower friction in many dimensions of Internet competition, branding, awareness, and trust remain important sources of heterogeneity among Internet retailers.", "title": "" }, { "docid": "83ed2dfe4456bc3cc8052747e7df7bfc", "text": "Dietary restriction has been shown to have several health benefits including increased insulin sensitivity, stress resistance, reduced morbidity, and increased life span. 
The mechanism remains unknown, but the need for a long-term reduction in caloric intake to achieve these benefits has been assumed. We report that when C57BL6 mice are maintained on an intermittent fasting (alternate-day fasting) dietary-restriction regimen their overall food intake is not decreased and their body weight is maintained. Nevertheless, intermittent fasting resulted in beneficial effects that met or exceeded those of caloric restriction including reduced serum glucose and insulin levels and increased resistance of neurons in the brain to excitotoxic stress. Intermittent fasting therefore has beneficial effects on glucose regulation and neuronal resistance to injury in these mice that are independent of caloric intake.", "title": "" }, { "docid": "90c6cf2fd66683843a8dd549676727d5", "text": "Despite great progress in neuroscience, there are still fundamental unanswered questions about the brain, including the origin of subjective experience and consciousness. Some answers might rely on new physical mechanisms. Given that biophotons have been discovered in the brain, it is interesting to explore if neurons use photonic communication in addition to the well-studied electro-chemical signals. Such photonic communication in the brain would require waveguides. Here we review recent work (S. Kumar, K. Boone, J. Tuszynski, P. Barclay, and C. Simon, Scientific Reports 6, 36508 (2016)) suggesting that myelinated axons could serve as photonic waveguides. The light transmission in the myelinated axon was modeled, taking into account its realistic imperfections, and experiments were proposed both in vivo and in vitro to test this hypothesis. Potential implications for quantum biology are discussed.", "title": "" }, { "docid": "21f079e590e020df08d461ba78a26d65", "text": "The aim of this study was to develop a tool to measure the knowledge of nurses on pressure ulcer prevention. 
PUKAT 2·0 is a revised and updated version of the Pressure Ulcer Knowledge Assessment Tool (PUKAT) developed in 2010 at Ghent University, Belgium. The updated version was developed using state-of-the-art techniques to establish evidence concerning validity and reliability. Face and content validity were determined through a Delphi procedure including both experts from the European Pressure Ulcer Advisory Panel (EPUAP) and the National Pressure Ulcer Advisory Panel (NPUAP) (n = 15). A subsequent psychometric evaluation of 342 nurses and nursing students evaluated the item difficulty, discriminating power and quality of the response alternatives. Furthermore, construct validity was established through a test-retest procedure and the known-groups technique. The content validity was good and the difficulty level moderate. The discernment was found to be excellent: all groups with a (theoretically expected) higher level of expertise had a significantly higher score than the groups with a (theoretically expected) lower level of expertise. The stability of the tool is sufficient (Intraclass Correlation Coefficient = 0·69). The PUKAT 2·0 demonstrated good psychometric properties and can be used and disseminated internationally to assess knowledge about pressure ulcer prevention.", "title": "" }, { "docid": "1e852e116c11a6c7fb1067313b1ffaa3", "text": "Article history: Received 20 February 2013 Received in revised form 30 July 2013 Accepted 11 September 2013 Available online 21 September 2013", "title": "" } ]
scidocsrr
ce570c9ec903f054234cc8932dc3cea3
Semantically-Guided Video Object Segmentation
[ { "docid": "775f4fd21194e18cdf303248f1cde206", "text": "Both image segmentation and dense 3D modeling from images represent an intrinsically ill-posed problem. Strong regularizers are therefore required to constrain the solutions from being 'too noisy'. Unfortunately, these priors generally yield overly smooth reconstructions and/or segmentations in certain regions whereas they fail in other areas to constrain the solution sufficiently. In this paper we argue that image segmentation and dense 3D reconstruction contribute valuable information to each other's task. As a consequence, we propose a rigorous mathematical framework to formulate and solve a joint segmentation and dense reconstruction problem. Image segmentations provide geometric cues about which surface orientations are more likely to appear at a certain location in space whereas a dense 3D reconstruction yields a suitable regularization for the segmentation problem by lifting the labeling from 2D images to 3D space. We show how appearance-based cues and 3D surface orientation priors can be learned from training data and subsequently used for class-specific regularization. Experimental results on several real data sets highlight the advantages of our joint formulation.", "title": "" }, { "docid": "522345eb9b2e53f05bb9d961c85fea23", "text": "In this work, we propose a novel approach to video segmentation that operates in bilateral space. We design a new energy on the vertices of a regularly sampled spatiotemporal bilateral grid, which can be solved efficiently using a standard graph cut label assignment. Using a bilateral formulation, the energy that we minimize implicitly approximates long-range, spatio-temporal connections between pixels while still containing only a small number of variables and only local graph edges. We compare to a number of recent methods, and show that our approach achieves state-of-the-art results on multiple benchmarks in a fraction of the runtime. 
Furthermore, our method scales linearly with image size, allowing for interactive feedback on real-world high resolution video.", "title": "" }, { "docid": "a4b123705dda7ae3ac7e9e88a50bd64a", "text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps. First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high quality proposals with sufficiently discriminative power. Finally, we determine the fore-and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.", "title": "" } ]
[ { "docid": "b811c82ff944715edc2b7dec382cb529", "text": "The mobile industry has experienced a dramatic growth; it evolves from analog to digital 2G (GSM), then to high date rate cellular wireless communication such as 3G (WCDMA), and further to packet optimized 3.5G (HSPA) and 4G (LTE and LTE advanced) systems. Today, the main design challenges of mobile phone antenna are the requirements of small size, built-in structure, and multisystems in multibands, including all cellular 2G, 3G, 4G, and other noncellular radio-frequency (RF) bands, and moreover the need for a nice appearance and meeting all standards and requirements such as specific absorption rates (SARs), hearing aid compatibility (HAC), and over the air (OTA). This paper gives an overview of some important antenna designs and progress in mobile phones in the last 15 years, and presents the recent development on new antenna technology for LTE and compact multiple-input-multiple-output (MIMO) terminals.", "title": "" }, { "docid": "3df9bacf95281fc609ee7fd2d4724e91", "text": "The deleterious effects of plastic debris on the marine environment were reviewed by bringing together most of the literature published so far on the topic. A large number of marine species is known to be harmed and/or killed by plastic debris, which could jeopardize their survival, especially since many are already endangered by other forms of anthropogenic activities. Marine animals are mostly affected through entanglement in and ingestion of plastic litter. Other less known threats include the use of plastic debris by \"invader\" species and the absorption of polychlorinated biphenyls from ingested plastics. Less conspicuous forms, such as plastic pellets and \"scrubbers\" are also hazardous. To address the problem of plastic debris in the oceans is a difficult task, and a variety of approaches are urgently required. 
Some of the ways to mitigate the problem are discussed.", "title": "" }, { "docid": "d104206fd95525192240e9a6d6aedd89", "text": "Graphical models are usually learned without regard to the cost of doing inference with them. As a result, even if a good model is learned, it may perform poorly at prediction, because it requires approximate inference. We propose an alternative: learning models with a score function that directly penalizes the cost of inference. Specifically, we learn arithmetic circuits with a penalty on the number of edges in the circuit (in which the cost of inference is linear). Our algorithm is equivalent to learning a Bayesian network with context-specific independence by greedily splitting conditional distributions, at each step scoring the candidates by compiling the resulting network into an arithmetic circuit, and using its size as the penalty. We show how this can be done efficiently, without compiling a circuit from scratch for each candidate. Experiments on several real-world domains show that our algorithm is able to learn tractable models with very large treewidth, and yields more accurate predictions than a standard context-specific Bayesian network learner, in far less time.", "title": "" }, { "docid": "13c2dea57aed95f7b937a9d329dd5af8", "text": "Understanding topic hierarchies in text streams and their evolution patterns over time is very important in many applications. In this paper, we propose an evolutionary multi-branch tree clustering method for streaming text data. We build evolutionary trees in a Bayesian online filtering framework. The tree construction is formulated as an online posterior estimation problem, which considers both the likelihood of the current tree and conditional prior given the previous tree. We also introduce a constraint model to compute the conditional prior of a tree in the multi-branch setting. 
Experiments on real world news data demonstrate that our algorithm can better incorporate historical tree information and is more efficient and effective than the traditional evolutionary hierarchical clustering algorithm.", "title": "" }, { "docid": "3f48f5be25ac5d040cc9d226588427b3", "text": "Snake robots, sometimes called hyper-redundant mechanisms, can use their many degrees of freedom to achieve a variety of locomotive capabilities. These capabilities are ideally suited for disaster response because the snake robot can thread through tightly packed volumes, accessing locations that people and conventional machinery otherwise cannot. Snake robots also have the advantage of possessing a variety of locomotion capabilities that conventional robots do not. Just like their biological counterparts, snake robots achieve these locomotion capabilities using cyclic motions called gaits. These cyclic motions directly control the snake robot’s internal degrees of freedom which, in turn, causes a net motion, say forward, lateral and rotational, for the snake robot. The gaits described in this paper fall into two categories: parameterized and scripted. The parameterized gaits, as their name suggests, can be described by a relative simple parameterized function, whereas the scripted cannot. This paper describes the functions we prescribed for gait generation and our experiences in making these robots operate in real experiments. © Koninklijke Brill NV, Leiden and The Robotics Society of Japan, 2009", "title": "" }, { "docid": "cb7dda8f4059e5a66e4a6e26fcda601e", "text": "Purpose – This UK-based research aims to build on the US-based work of Keller and Aaker, which found a significant association between “company credibility” (via a brand’s “expertise” and “trustworthiness”) and brand extension acceptance, hypothesising that brand trust, measured via two correlate dimensions, is significantly related to brand extension acceptance. 
Design/methodology/approach – Discusses brand extension and various prior, validated influences on its success. Focuses on the construct of trust and develops hypotheses about the relationship of brand trust with brand extension acceptance. The hypotheses are then tested on data collected from consumers in the UK. Findings – This paper, using 368 consumer responses to nine, real, low involvement UK product and service brands, finds support for a significant association between the variables, comparable in strength with that between media weight and brand share, and greater than that delivered by the perceived quality level of the parent brand. Originality/value – The research findings, which develop a sparse literature in this linkage area, are of significance to marketing practitioners, since brand trust, already associated with brand equity and brand loyalty, and now with brand extension, needs to be managed and monitored with care. The paper prompts further investigation of the relationship between brand trust and brand extension acceptance in other geographic markets and with other higher involvement categories.", "title": "" }, { "docid": "567445f68597ea8ff5e89719772819be", "text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. 
We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.", "title": "" }, { "docid": "30672a5e329d9ed61a65b07f24731c91", "text": "Combined star-delta windings in electrical machines result in a higher fundamental winding factor and cause a smaller spatial harmonic content. This leads to lower I2R losses in the stator and the rotor winding and thus to an increased efficiency. However, compared to an equivalent six-phase winding, additional spatial harmonics are generated due to the different magnetomotive force in the star and delta part of the winding. In this paper, a complete theory and analysis method for the analytical calculation of the efficiency of induction motors equipped with combined star-delta windings is developed. The method takes into account the additional harmonic content due to the different magnetomotive force in the star and delta part. To check the analysis' validity, an experimental test is reported both on a cage induction motor equipped with a combined star-delta winding in the stator and on a reference motor with the same core but with a classical three-phase winding.", "title": "" }, { "docid": "bd5a124345544982d485a0e036c49de8", "text": "In this paper, we explore various aspects of fusing LIDAR and color imagery for pedestrian detection in the context of convolutional neural networks (CNNs), which have recently become state-of-art for many vision problems. We incorporate LIDAR by up-sampling the point cloud to a dense depth map and then extracting three features representing different aspects of the 3D scene. We then use those features as extra image channels. Specifically, we leverage recent work on HHA [9] (horizontal disparity, height above ground, and angle) representations, adapting the code to work on up-sampled LIDAR rather than Microsoft Kinect depth maps. 
We show, for the first time, that such a representation is applicable to up-sampled LIDAR data, despite its sparsity. Since CNNs learn a deep hierarchy of feature representations, we then explore the question: At what level of representation should we fuse this additional information with the original RGB image channels? We use the KITTI pedestrian detection dataset for our exploration. We first replicate the finding that region-CNNs (R-CNNs) [8] can outperform the original proposal mechanism using only RGB images, but only if fine-tuning is employed. Then, we show that: 1) using HHA features and RGB images performs better than RGB-only, even without any fine-tuning using large RGB web data, 2) fusing RGB and HHA achieves the strongest results if done late, but, under a parameter or computational budget, is best done at the early to middle layers of the hierarchical representation, which tend to represent midlevel features rather than low (e.g. edges) or high (e.g. object class decision) level features, 3) some of the less successful methods have the most parameters, indicating that increased classification accuracy is not simply a function of increased capacity in the neural network.", "title": "" }, { "docid": "4924441de38f1b28e66330a1cb219f4b", "text": "Online marketing is one of the best practices used to establish a brand and to increase its popularity. Advertisements are used in a better way to showcase the company’s product/service and give rise to a worthy online marketing strategy. Posting an advertisement on utilitarian web pages helps to maximize brand reach and get a better feedback. Now-a-days companies are cautious of their brand image on the Internet due to the growing number of Internet users. Since there are billions of Web sites on the Internet, it becomes difficult for companies to really decide where to advertise on the Internet for brand popularity. 
What if a company advertises on a page visited by fewer users interested in a particular type of product, instead of on a page visited by more such users? This doubt and uncertainty is a core issue faced by many companies. This research paper presents a brand analysis framework and suggests some experimental practices to ensure the efficiency of the proposed framework. The framework is divided into three components: (1) a Web site network formation framework, which forms a Web site network for a specific search query from the resultant web pages of three search engines (Google, Yahoo and Bing) and their associated web pages; (2) a content scraping framework, which crawls the content of the web pages in the framework-formed Web site network; and (3) rank assignment of networked web pages, in which a text edge processing algorithm is used to find terms of interest and their occurrence associated with the search query. We have further applied sentiment analysis to validate the positive or negative impact of the sentences containing the search term and its associated terms (with reference to the search query), in order to identify the impact of each web page. Then, on the basis of both the text edge analysis and the sentiment analysis results, we assigned a rank to the networked web pages and online social network pages. In this research work, we present experiments for ‘Motorola smart phone,’ ‘LG smart phone’ and ‘Samsung smart phone’ as search queries, sampled the Web site networks of the top 20 search results of all three search engines, and examined up to 60 search results for each search engine. This work is useful for targeting the right online location for specific brand marketing. Once the brand knows which web pages/social media pages carry high brand affinity and ensures that the content of a high-affinity web page/social media page has a positive impact, it can advertise at that online location. 
Thus, targeted brand analysis framework for online marketing not only has benefits for the advertisement agencies but also for the customers.", "title": "" }, { "docid": "01bd2cdb72270a4ad36beeca29cf670b", "text": "5-Lipoxygenase (5-LO) plays a pivotal role in the progression of atherosclerosis. Therefore, this study investigated the molecular mechanisms involved in 5-LO expression on monocytes induced by LPS. Stimulation of THP-1 monocytes with LPS (0~3 µg/ml) increased 5-LO promoter activity and 5-LO protein expression in a concentration-dependent manner. LPS-induced 5-LO expression was blocked by pharmacological inhibition of the Akt pathway, but not by inhibitors of MAPK pathways including the ERK, JNK, and p38 MAPK pathways. In line with these results, LPS increased the phosphorylation of Akt, suggesting a role for the Akt pathway in LPS-induced 5-LO expression. In a promoter activity assay conducted to identify transcription factors, both Sp1 and NF-κB were found to play central roles in 5-LO expression in LPS-treated monocytes. The LPS-enhanced activities of Sp1 and NF-κB were attenuated by an Akt inhibitor. Moreover, the LPS-enhanced phosphorylation of Akt was significantly attenuated in cells pretreated with an anti-TLR4 antibody. Taken together, 5-LO expression in LPS-stimulated monocytes is regulated at the transcriptional level via TLR4/Akt-mediated activations of Sp1 and NF-κB pathways in monocytes.", "title": "" }, { "docid": "2d3adb98f6b1b4e161d84314958960e5", "text": "BACKGROUND\nBright light therapy was shown to be a promising treatment for depression during pregnancy in a recent open-label study. In an extension of this work, we report findings from a double-blind placebo-controlled pilot study.\n\n\nMETHOD\nTen pregnant women with DSM-IV major depressive disorder were randomly assigned from April 2000 to January 2002 to a 5-week clinical trial with either a 7000 lux (active) or 500 lux (placebo) light box. 
At the end of the randomized controlled trial, subjects had the option of continuing in a 5-week extension phase. The Structured Interview Guide for the Hamilton Depression Scale-Seasonal Affective Disorder Version was administered to assess changes in clinical status. Salivary melatonin was used to index circadian rhythm phase for comparison with antidepressant results.\n\n\nRESULTS\nAlthough there was a small mean group advantage of active treatment throughout the randomized controlled trial, it was not statistically significant. However, in the longer 10-week trial, the presence of active versus placebo light produced a clear treatment effect (p =.001) with an effect size (0.43) similar to that seen in antidepressant drug trials. Successful treatment with bright light was associated with phase advances of the melatonin rhythm.\n\n\nCONCLUSION\nThese findings provide additional evidence for an active effect of bright light therapy for antepartum depression and underscore the need for an expanded randomized clinical trial.", "title": "" }, { "docid": "30c6829427aaa8d23989afcd666372f7", "text": "We developed an optimizing compiler for intrusion detection rules popularized by an open-source Snort Network Intrusion Detection System (www.snort.org). While Snort and Snort-like rules are usually thought of as a list of independent patterns to be tested in a sequential order, we demonstrate that common compilation techniques are directly applicable to Snort rule sets and are able to produce high-performance matching engines. SNORTRAN combines several compilation techniques, including cost-optimized decision trees, pattern matching precompilation, and string set clustering. Although all these techniques have been used before in other domain-specific languages, we believe their synthesis in SNORTRAN is original and unique. Introduction Snort [RO99] is a popular open-source Network Intrusion Detection System (NIDS). 
Snort is controlled by a set of pattern/action rules residing in a configuration file of a specific format. Due to Snort’s popularity, Snort-like rules are accepted by several other NIDS [FSTM, HANK]. In this paper we describe an optimizing compiler for Snort rule sets called SNORTRAN that incorporates ideas of pattern matching compilation based on cost-optimized decision trees [DKP92, KS88] with setwise string search algorithms popularized by recent research in high-performance NIDS detection engines [FV01, CC01, GJMP]. The two main design goals were performance and compatibility with the original Snort rule interpreter. The primary application area for NIDS is monitoring IP traffic inside and outside of firewalls, looking for unusual activities that can be attributed to external attacks or internal misuse. Most NIDS are designed to handle T1/partial T3 traffic, but as the number of known vulnerabilities grows and more and more weight is given to internal misuse monitoring on high-throughput networks (100Mbps/1Gbps), it gets harder to keep up with the traffic without dropping so many packets that detection becomes ineffective. Throwing hardware at the problem is not always possible because of growing maintenance and support costs, let alone the fact that the problem of making a multi-unit system work in a realistic environment is as hard as the original performance problem. Bottlenecks of the detection process were identified by many researchers and practitioners [FV01, ND02, GJMP], and several approaches were proposed [FV01, CC01]. Our benchmarking supported the performance analysis made by M. Fisk and G. Varghese [FV01], adding some interesting findings on worst-case behavior of setwise string search algorithms in practice.", "title": "" }, { "docid": "01b3c9758bd68ad68a2f1d262feaa4e8", "text": "A low-voltage-swing MOSFET gate drive technique is proposed in this paper for enhancing the efficiency characteristics of high-frequency-switching dc-dc converters. The parasitic power dissipation of a dc-dc converter is reduced by lowering the voltage swing of the power transistor gate drivers. A comprehensive circuit model of the parasitic impedances of a monolithic buck converter is presented. Closed-form expressions for the total power dissipation of a low-swing buck converter are proposed. The effect of reducing the MOSFET gate voltage swings is explored with the proposed circuit model. A range of design parameters is evaluated, permitting the development of a design space for full integration of active and passive devices of a low-swing buck converter on the same die, for a target CMOS technology. The optimum gate voltage swing of a power MOSFET that maximizes efficiency is lower than a standard full voltage swing. An efficiency of 88% at a switching frequency of 102 MHz is achieved for a voltage conversion from 1.8 to 0.9 V with a low-swing dc-dc converter based on a 0.18-μm CMOS technology. 
Traditionally, NIDS are designed around a packet grabber (system-specific or libcap) getting traffic packets off the wire, combined with preprocessors, packet decoders, and a detection engine looking for a static set of signatures loaded from a rule file at system startup. Snort [SNORT] and", "title": "" }, { "docid": "01b3c9758bd68ad68a2f1d262feaa4e8", "text": "A low-voltage-swing MOSFET gate drive technique is proposed in this paper for enhancing the efficiency characteristics of high-frequency-switching dc-dc converters. The parasitic power dissipation of a dc-dc converter is reduced by lowering the voltage swing of the power transistor gate drivers. A comprehensive circuit model of the parasitic impedances of a monolithic buck converter is presented. Closed-form expressions for the total power dissipation of a low-swing buck converter are proposed. The effect of reducing the MOSFET gate voltage swings is explored with the proposed circuit model. A range of design parameters is evaluated, permitting the development of a design space for full integration of active and passive devices of a low-swing buck converter on the same die, for a target CMOS technology. The optimum gate voltage swing of a power MOSFET that maximizes efficiency is lower than a standard full voltage swing. An efficiency of 88% at a switching frequency of 102 MHz is achieved for a voltage conversion from 1.8 to 0.9 V with a low-swing dc-dc converter based on a 0.18-/spl mu/m CMOS technology. 
The power dissipation of a low-swing dc-dc converter is reduced by 27.9% as compared to a standard full-swing dc-dc converter.", "title": "" }, { "docid": "ce2d4247b1072b3c593e73fe9d67cf63", "text": "OBJECTIVE\nTo improve walking and other aspects of physical function with a progressive 6-month exercise program in patients with multiple sclerosis (MS).\n\n\nMETHODS\nMS patients with mild to moderate disability (Expanded Disability Status Scale scores 1.0 to 5.5) were randomly assigned to an exercise or control group. The intervention consisted of strength and aerobic training initiated during 3-week inpatient rehabilitation and continued for 23 weeks at home. The groups were evaluated at baseline and at 6 months. The primary outcome was walking speed, measured by 7.62 m and 500 m walk tests. Secondary outcomes included lower extremity strength, upper extremity endurance and dexterity, peak oxygen uptake, and static balance. An intention-to-treat analysis was used.\n\n\nRESULTS\nNinety-one (96%) of the 95 patients entering the study completed it. Change between groups was significant in the 7.62 m (p = 0.04) and 500 m walk tests (p = 0.01). In the 7.62 m walk test, 22% of the exercising patients showed clinically meaningful improvements. The exercise group also showed increased upper extremity endurance as compared to controls. No other noteworthy exercise-induced changes were observed. Exercise adherence varied considerably among the exercisers.\n\n\nCONCLUSIONS\nWalking speed improved in this randomized study. The results confirm that exercise is safe for multiple sclerosis patients and should be recommended for those with mild to moderate disability.", "title": "" }, { "docid": "68cff1020543f97e5e8c2710bc85c823", "text": "This paper describes modelling and testing of a digital distance relay for transmission line protection using MATLAB/SIMULINK. SIMULINK’s Power System Blockset (PSB) is used for detailed modelling of a power system network and fault simulation. 
MATLAB is used to implement programs of digital distance relaying algorithms and to serve as main software environment. The technique is an interactive simulation environment for relaying algorithm design and evaluation. The basic principles of a digital distance relay and some related filtering techniques are also described in this paper. A 345 kV, 100 km transmission line and a MHO type distance relay are selected as examples for fault simulation and relay testing. Some simulation results are given.", "title": "" }, { "docid": "21aa2df33199b6fbdc64abd1ea65341b", "text": "AIM\nBefore an attempt is made to develop any population-specific behavioural change programme, it is important to know what the factors that influence behaviours are. The aim of this study was to identify what are the perceived determinants that attribute to young people's choices to both consume and misuse alcohol.\n\n\nMETHOD\nUsing a descriptive survey design, a web-based questionnaire based on the Theory of Triadic Influence was administered to students aged 18-29 years at one university in Northern Ireland.\n\n\nRESULTS\nOut of the total respondents ( n = 595), knowledge scores on alcohol consumption and the health risks associated with heavy episodic drinking were high (92.4%, n = 550). Over half (54.1%, n = 322) cited the Internet as their main source for alcohol-related information. The three most perceived influential factors of inclination to misuse alcohol were strains/conflict within the family home ( M = 2.98, standard deviation ( SD) = 0.18, 98.7%, n = 587), risk taking/curiosity behaviour ( M = 2.97, SD = 0.27, 97.3%, n = 579) and the desire not to be socially alienated ( M = 2.94, SD = 0.33, 96%, n = 571). Females were statistically significantly more likely to be influenced by desire not to be socially alienated than males (  p = .029). 
Religion and personal reasons were the most commonly cited reasons for not drinking.\n\n\nCONCLUSION\nFuture initiatives to reduce alcohol misuse and alcohol-related harms need to focus on changing social normative beliefs and attitudes around alcohol consumption and the family and environmental factors that influence the choice of young adult's alcohol drinking behaviour. Investment in multi-component interventions may be a useful approach.", "title": "" }, { "docid": "0b86a006b1f8e3a5e940daef25fe7d58", "text": "While drug toxicity (especially hepatotoxicity) is the most frequent reason cited for withdrawal of an approved drug, no simple solution exists to adequately predict such adverse events. Simple cytotoxicity assays in HepG2 cells are relatively insensitive to human hepatotoxic drugs in a retrospective analysis of marketed pharmaceuticals. In comparison, a panel of pre-lethal mechanistic cellular assays hold the promise to deliver a more sensitive approach to detect endpoint-specific drug toxicities. The panel of assays covered by this review includes steatosis, cholestasis, phospholipidosis, reactive intermediates, mitochondria membrane function, oxidative stress, and drug interactions. In addition, the use of metabolically competent cells or the introduction of major human hepatocytes in these in vitro studies allow a more complete picture of potential drug side effect. 
Since inter-individual therapeutic index (TI) may differ from patient to patient, the rational use of one or more of these cellular assay and targeted in vivo exposure data may allow pharmaceutical scientists to select drug candidates with a higher TI potential in the drug discovery phase.", "title": "" }, { "docid": "1649b2776fcc2b8a736306128f8a2331", "text": "The paradigm of simulated annealing is applied to the problem of drawing graphs “nicely.” Our algorithm deals with general undirected graphs with straight-line edges, and employs several simple criteria for the aesthetic quality of the result. The algorithm is flexible, in that the relative weights of the criteria can be changed. For graphs of modest size it produces good results, competitive with those produced by other methods, notably, the “spring method” and its variants.", "title": "" }, { "docid": "67733befe230741c69665218dd256dc0", "text": "Model reduction of the Markov process is a basic problem in modeling statetransition systems. Motivated by the state aggregation approach rooted in control theory, we study the statistical state compression of a finite-state Markov chain from empirical trajectories. Through the lens of spectral decomposition, we study the rank and features of Markov processes, as well as properties like representability, aggregatability and lumpability. We develop a class of spectral state compression methods for three tasks: (1) estimate the transition matrix of a low-rank Markov model, (2) estimate the leading subspace spanned by Markov features, and (3) recover latent structures of the state space like state aggregation and lumpable partition. The proposed methods provide an unsupervised learning framework for identifying Markov features and clustering states. We provide upper bounds for the estimation errors and nearly matching minimax lower bounds. Numerical studies are performed on synthetic data and a dataset of New York City taxi trips. 
Anru Zhang is with the Department of Statistics, University of Wisconsin-Madison, Madison, WI 53706, E-mail: [email protected]; Mengdi Wang is with the Department of Operations Research and Financial Engineering, Princeton University, Princeton, NJ 08544, E-mail: [email protected].", "title": "" } ]
scidocsrr
850a195fc49bfcc68808dd54c19d3d97
Energy Saving Additive Neural Network
[ { "docid": "b059f6d2e9f10e20417f97c05d92c134", "text": "We present a hybrid analog/digital very large scale integration (VLSI) implementation of a spiking neural network with programmable synaptic weights. The synaptic weight values are stored in an asynchronous Static Random Access Memory (SRAM) module, which is interfaced to a fast current-mode event-driven DAC for producing synaptic currents with the appropriate amplitude values. These currents are further integrated by current-mode integrator synapses to produce biophysically realistic temporal dynamics. The synapse output currents are then integrated by compact and efficient integrate and fire silicon neuron circuits with spike-frequency adaptation and adjustable refractory period and spike-reset voltage settings. The fabricated chip comprises a total of 32 × 32 SRAM cells, 4 × 32 synapse circuits and 32 × 1 silicon neurons. It acts as a transceiver, receiving asynchronous events in input, performing neural computation with hybrid analog/digital circuits on the input spikes, and eventually producing digital asynchronous events in output. Input, output, and synaptic weight values are transmitted to/from the chip using a common communication protocol based on the Address Event Representation (AER). Using this representation it is possible to interface the device to a workstation or a micro-controller and explore the effect of different types of Spike-Timing Dependent Plasticity (STDP) learning algorithms for updating the synaptic weights values in the SRAM module. We present experimental results demonstrating the correct operation of all the circuits present on the chip.", "title": "" } ]
[ { "docid": "6bc2f0ea840e4b14e1340aa0c0bf4f07", "text": "A low-voltage low-power CMOS operational transconductance amplifier (OTA) with near rail-to-rail output swing is presented in this brief. The proposed circuit is based on the current-mirror OTA topology. In addition, several circuit techniques are adopted to enhance the voltage gain. Simulated from a 0.8-V supply voltage, the proposed OTA achieves a 62-dB dc gain and a gain–bandwidth product of 160 MHz while driving a 2-pF load. The OTA is designed in a 0.18m CMOS process. The power consumption is 0.25 mW including the common-mode feedback circuit.", "title": "" }, { "docid": "235edeee5ed3a16b88960400d13cb64f", "text": "Product service systems (PSS) can be understood as an innovation / business strategy that includes a set of products and services that are realized by an actor network. More recently, PSS that comprise System of Systems (SoS) have been of increasing interest, notably in the transportation (autonomous vehicle infrastructures, multi-modal transportation) and energy sector (smart grids). Architecting such PSS-SoS goes beyond classic SoS engineering, as they are often driven by new technology, without an a priori client and actor network, and thus, a much larger number of potential architectures. However, it seems that neither the existing PSS nor SoS literature provides solutions to how to architect such PSS. This paper presents a methodology for architecting PSS-SoS that are driven by technological innovation. The objective is to design PSS-SoS architectures together with their value proposition and business model from an initial technology impact assessment. For this purpose, we adapt approaches from the strategic management, business modeling, PSS and SoS architecting literature. 
We illustrate the methodology by applying it to the case of an automobile PSS.", "title": "" }, { "docid": "cdd3dd7a367027ebfe4b3f59eca99267", "text": "3 Computation of the shearlet transform: 3.1 Finite discrete shearlets; 3.2 A discrete shearlet frame; 3.3 Inversion of the shearlet transform; 3.4 Smooth shearlets; 3.5 Implementation details; 3.5.1 Indexing; 3.5.2 Computation of spectra; 3.6 Short documentation; 3.7 Download & Installation; 3.8 Performance; 3.9 Remarks", "title": "" }, { "docid": "a3da533f428b101c8f8cb0de04546e48", "text": "In this paper we investigate the challenging problem of cursive text recognition in natural scene images. In particular, we have focused on isolated Urdu character recognition in natural scenes that could not be handled by traditional Optical Character Recognition (OCR) techniques developed for Arabic and Urdu scanned documents. We also present a dataset of Urdu characters segmented from images of signboards, street scenes, shop scenes and advertisement banners containing Urdu text. A variety of deep learning techniques have been proposed by researchers for natural scene text detection and recognition. 
In this work, a Convolutional Neural Network (CNN) is applied as a classifier, as CNN approaches have been reported to provide high accuracy for natural scene text detection and recognition. A dataset of manually segmented characters was developed and deep learning based data augmentation techniques were applied to further increase the size of the dataset. The training is formulated using filter sizes of 3x3, 5x5 and mixed 3x3 and 5x5 with a stride value of 1 and 2. The CNN model is trained with various learning rates and state-of-the-art results are achieved.", "title": "" }, { "docid": "d81a5fd44adc6825e18e3841e4e66291", "text": "We study compression techniques for parallel in-memory graph algorithms, and show that we can achieve reduced space usage while obtaining competitive or improved performance compared to running the algorithms on uncompressed graphs. We integrate the compression techniques into Ligra, a recent shared-memory graph processing system. This system, which we call Ligra+, is able to represent graphs using about half of the space for the uncompressed graphs on average. Furthermore, Ligra+ is slightly faster than Ligra on average on a 40-core machine with hyper-threading. Our experimental study shows that Ligra+ is able to process graphs using less memory, while performing as well as or faster than Ligra.", "title": "" }, { "docid": "184402cd0ef80ae3426fd36fbb2ec998", "text": "Hundreds of hours of videos are uploaded every minute on YouTube and other video sharing sites: some will be viewed by millions of people and other will go unnoticed by all but the uploader. In this paper we propose to use visual sentiment and content features to predict the popularity of web videos. 
The proposed approach outperforms current state-of-the-art methods on two publicly available datasets.", "title": "" }, { "docid": "2da84ca7d7db508a6f9a443f2dbae7c1", "text": "This paper proposes a computationally efficient approach to detecting objects natively in 3D point clouds using convolutional neural networks (CNNs). In particular, this is achieved by leveraging a feature-centric voting scheme to implement novel convolutional layers which explicitly exploit the sparsity encountered in the input. To this end, we examine the trade-off between accuracy and speed for different architectures and additionally propose to use an L1 penalty on the filter activations to further encourage sparsity in the intermediate representations. To the best of our knowledge, this is the first work to propose sparse convolutional layers and L1 regularisation for efficient large-scale processing of 3D data. We demonstrate the efficacy of our approach on the KITTI object detection benchmark and show that VoteSDeep models with as few as three layers outperform the previous state of the art in both laser and laser-vision based approaches by margins of up to 40% while remaining highly competitive in terms of processing time.", "title": "" }, { "docid": "d864cc5603c97a8ff3c070dd385fe3a8", "text": "Nowadays, different protocols coexist in Internet that provides services to users. Unfortunately, control decisions and distributed management make it hard to control networks. These problems result in an inefficient and unpredictable network behaviour. Software Defined Networks (SDN) is a new concept of network architecture. It intends to be more flexible and to simplify the management in networks with respect to traditional architectures. Each of these aspects are possible because of the separation of control plane (controller) and data plane (switches) in network devices. OpenFlow is the most common protocol for SDN networks that provides the communication between control and data planes. 
Moreover, the advantage of decoupling control and data planes enables a quick evolution of protocols and their deployment without replacing data plane switches. In this survey, we review the SDN technology and the OpenFlow protocol and their related works. Specifically, we describe technologies such as Wireless Sensor Networks and Wireless Cellular Networks and how SDN can be included within them in order to solve their challenges. We classify the different solutions for each technology according to the problem they address.", "title": "" }, { "docid": "8674128201d80772040446f1ab6a7cd1", "text": "In this paper, we present an attribute graph grammar for image parsing on scenes with man-made objects, such as buildings, hallways, kitchens, and living rooms. We choose one class of primitives, 3D planar rectangles projected on images, and six graph grammar production rules. Each production rule not only expands a node into its components, but also includes a number of equations that constrain the attributes of a parent node and those of its children. Thus our graph grammar is context sensitive. The grammar rules are used recursively to produce a large number of objects and patterns in images and thus the whole graph grammar is a type of generative model. The inference algorithm integrates bottom-up rectangle detection which activates top-down prediction using the grammar rules. The final results are validated in a Bayesian framework. The output of the inference is a hierarchical parsing graph with objects, surfaces, rectangles, and their spatial relations. In the inference, the acceptance of a grammar rule means recognition of an object, and actions are taken to pass the attributes between a node and its parent through the constraint equations associated with this production rule. 
When an attribute is passed from a child node to a parent node, it is called bottom-up, and the opposite is called top-down.", "title": "" }, { "docid": "3755f56410365a498c3a1ff4b61e77de", "text": "Both high switching frequency and high efficiency are critical in reducing power adapter size. The active clamp flyback (ACF) topology allows zero voltage soft switching (ZVS) under all line and load conditions, eliminates leakage inductance and snubber losses, and enables high frequency and high power density power conversion. Traditional ACF ZVS operation relies on the resonance between leakage inductance and a small primary-side clamping capacitor, which leads to increased rms current and high conduction loss. This also causes oscillatory output rectifier current and impedes the implementation of synchronous rectification. This paper proposes a secondary-side resonance scheme to shape the primary current waveform in a way that significantly improves synchronous rectifier operation and reduces primary rms current. The concept is verified with a 25-W/in³ high-density 45-W adapter prototype using a monolithic gallium nitride power IC. Over 93% full-load efficiency was demonstrated at the worst-case 90-V ac input and maximum full-load efficiency was 94.5%.", "title": "" }, { "docid": "cc4548925973baa6220ad81082a93c86", "text": "Usually benefits for transportation investments are analysed within a framework of cost-benefit analysis or its related techniques such as financial analysis, cost-effectiveness analysis, life-cycle costing, economic impact analysis, and others. While these tools are valid techniques in general, their application to intermodal transportation would underestimate the overall economic impact by missing important aspects of productivity enhancement. Intermodal transportation is an example of the so-called general purpose technologies (GPTs) that are characterized by statistically significant spillover effects. 
Diffusion, secondary innovations, and increased demand for specific human capital are basic features of GPTs. Eventually these features affect major macroeconomic variables, especially productivity. Recent economic literature claims that in order to study GPTs, micro and macro evidence should be combined to establish a better understanding of the connecting mechanisms from the micro level to the overall performance of an economy or the macro level. This study analyses these issues with respect to intermodal transportation. The goal is to understand the basic micro and macro mechanisms behind intermodal transportation in order to further develop a rigorous framework for evaluation of benefits from intermodal transportation. In doing so, lessons from computer simulation of the basic features of intermodal transportation are discussed and conclusions are made regarding an agenda for work in the field. 1 Dr. Yuri V. Yevdokimov, Assistant Professor of Economics and Civil Engineering, University of New Brunswick, Canada, Tel. (506) 447-3221, Fax (506) 453-4514, E-mail: [email protected] Introduction Intermodal transportation can be thought of as a process for transporting freight and passengers by means of a system of interconnected networks, involving various combinations of modes of transportation, in which all of the components are seamlessly linked and efficiently combined. Intermodal transportation is rapidly gaining acceptance as an integral component of the systems approach of conducting business in an increasingly competitive and interdependent global economy. For example, the United States Code with respect to transportation states: "It is the policy of the United States Government to develop a National Intermodal Transportation System that is economically efficient and environmentally sound, provides the foundation for the United States to compete in the global economy and will move individuals and property in an energy efficient way. 
The National Intermodal Transportation System shall consist of all forms of transportation in a unified, interconnected manner, including the transportation systems of the future, to reduce energy consumption and air pollution while promoting economic development and supporting the United States= pre-eminent position in international commerce.@ (49 USC, Ch. 55, Sec. 5501, 1998) David Collenette (1997), the Transport Minister of Canada, noted: AWith population growth came development, and the relative advantages and disadvantages of the different modes changed as the transportation system became more advanced.... Intermodalism today is about safe, efficient transportation by the most appropriate combination of modes.@ (The Summit on North American Intermodal Transportation, 1997) These statements define intermodal transportation as a macroeconomic concept, because an effective transportation system is a vital factor in assuring the efficiency of an economic system as a whole. Moreover, intermodal transportation is an important socio-economic phenomenon which implies that the benefits of intermodal transportation have to be evaluated at the macroeconomic level, or at least at the regional level, involving all elements of the economic system that gain from having a more efficient transportation network in place. Defining Economic Benefits of Intermodal Transportation Traditionally, the benefits of a transportation investment have been primarily evaluated through reduced travel time and reduced vehicle maintenance and operation costs. However, according to Weisbrod and Treyz (1998), such methods underestimate the total benefits of transportation investment by Amissing other important aspects of productivity enhancement.@ It is so because transportation does not have an intrinsic purpose in itself and is rather intended to enable other economic activities such as production, consumption, leisure, and dissemination of knowledge to take place. 
Hence, in order to measure total economic benefits of investing in intermodal transportation, it is necessary to understand their basic relationships with different economic activities. Eventually, improvements in transportation reduce transportation costs. The immediate benefit of the reduction is the fall in total cost of production in an economic system under study which results in growth of the system=s output. This conclusion has been known in economic development literature since Tinbergen=s paper in 1957 (Tinbergen, 1957). However, the literature does not explicitly identify why transportation costs will fall. This issue is addressed in this discussion with respect to intermodal transportation. Transportation is a multiple service to multiple users. It is produced in transportation networks that provide infrastructure for economic activities. It appears that transportation networks have economies of scale. As discussed below, intermodal transportation magnifies these scale effects resulting in increasing returns to scale (IRS) of a specific nature. It implies that there are positive externalities that arise because of the scale effects, externalities that can initiate cumulative economic growth at the regional level as well as at the national level (see, for example, Brathen and Hervick, 1997, and Hussain and Westin, 1997). The phenomenon is known as a spill-over effect. Previously the effect has been evaluated through the contribution of transportation infrastructure investment to economic growth. Since Auschauer=s (1989) paper many economists have found evidence of such a contribution (see, for example, Bonaglia and Ferrara, 2000 and Khanam, 1996). Intermodal transportation as it was defined at the very beginning is more than mere improvements in transportation infrastructure. 
From a theoretical standpoint, it posseses some characteristics of the general-purpose technologies (GPT), and it seems appropriate to regard it as an example of the GPT, which is discussed below. It appears reasonable to study intermodal transportation as a two-way improvement of an economic system=s productivity. On the one hand, it improves current operational functions of the system. On the other hand, it expands those functions. Both improvements are achieved by consolidating different transportation systems into a seamless transportation network that utilizes the comparative advantages of different transportation modes. Improvements due to intermodal transportation are associated with the increased productivity of transportation services and a reduction in logistic costs. The former results in an increased volume of transportation per unit cost, while the latter directly reduces costs of commodity production. Expansion of the intermodal transportation network is associated with economies of scale and better accessibility to input and output markets. The overall impact of intermodal transportation can be divided into four elements: (i) an increase in the volume of transportation in an existing transportation network; (ii) a reduction in logistic costs of current operations; (iii) the economies of scale associated with transportation network expansion; (iv) better accessibility to input and output markets. These four elements are discussed below in a sequence. Increase in volume of transportation in the existing network An increase in volume of transportation can lead to economies of density a specific scale effect. The economies of density exist if an increase in the volume of transportation in the network does not require a proportional increase in all inputs of the network. 
Usually the phenomenon is associated with an increase in the frequency of transportation (traffic) within the existing network (see Boyer, 1998 for a formal definition, Ciccone and Hall, 1996 for general discussion of economies of density, and Fujii, Im and Mak, 1992 for examples of economies of density in transportation). In the case of intermodal transportation, economies of density are achieved through cargo containerization, cargo consolidation and computer-guiding systems at intermodal facilities. Cargo containerization and consolidation result in an increased load factor of transportation vehicles and higher capacity utilization of the transportation fixed facilities, while utilization of computer-guiding systems results in higher labour productivity. For instance, in 1994 Burlington Northern Santa Fe Railway (BNSF) introduced the Alliance Intermodal Facility at Fort Worth, Texas, into its operations between Chicago and Los Angeles. According to OmniTRAX specialists, who operates the facility, BNSF has nearly doubled its volume of throughput at the intermodal facility since 1994. First, containerization of commodities being transported plus hubbing or cargo consolidation at the intermodal facility resulted in longer trains with higher frequency. Second, all day-to-day operations at the intermodal facility are governed by the Optimization Alternatives Strategic Intermodal Scheduler (OASIS) computer system, which allowed BNSF to handle more operations with less labour. Reduction in Logistic Costs Intermodal transportation is characterized by optimal frequency of service and modal choice and increased reliability. Combined, these two features define the just-in-time delivery -a major service produced by intermodal transportation. 
Furthermore, Blackburn (1991) argues that just-in-time d", "title": "" }, { "docid": "a926341e8b663de6c412b8e3a61ee171", "text": "— Studies within the EHEA framework include the acquisition of skills such as the ability to learn autonomously, which requires students to devote much of their time to individual and group work to reinforce and further complement the knowledge acquired in the classroom. In order to consolidate the results obtained from classroom activities, lecturers must develop tools to encourage learning and facilitate the process of independent learning. The aim of this work is to present the use of virtual laboratories based on Easy Java Simulations to assist in the understanding and testing of electrical machines. with users, integrating easily into e-learning platforms. For our application we have chosen Easy Java Simulations (EJS), since it is a free software tool designed for the development of interactive virtual laboratories; it provides parameterizable visual elements", "title": "" }, { "docid": "5c935db4a010bc26d93dd436c5e2f978", "text": "A taxonomic revision of Australian Macrobrachium identified three species new to the Australian fauna – two undescribed species and one new record, viz. M. auratum sp. nov., M. koombooloomba sp. nov., and M. mammillodactylus (Thallwitz, 1892). Eight taxa previously described by Riek (1951) are recognised as new junior subjective synonyms, viz. M. adscitum adscitum, M. atactum atactum, M. atactum ischnomorphum, M. atactum sobrinum, M. australiense crassum, M. australiense cristatum, M. australiense eupharum of M. australiense Holthuis, 1950, and M. glypticum of M. handschini Roux, 1933. Apart from an erroneous type locality for a junior subjective synonym, there were no records to confirm the presence of M. australe (Guérin-Méneville, 1838) on the Australian continent. In total, 13 species of Macrobrachium are recorded from the Australian continent.
Keys to male developmental stages and Australian species are provided. A revised diagnosis is given for the genus. A list of 31 atypical species which do not appear to be based on fully developed males or which require re-evaluation of their generic status is provided. Terminology applied to spines and setae is revised.", "title": "" }, { "docid": "e2d39e2714351b04054b871fa8a7a2fa", "text": "In this letter, we propose sparsity-based coherent and noncoherent dictionaries for action recognition. First, the input data are divided into different clusters and the number of clusters depends on the number of action categories. Within each cluster, we seek data items of each action category. If the number of data items exceeds a threshold in any action category, these items are labeled as coherent. In a similar way, all coherent data items from different clusters form a coherent group of each action category, and data that are not part of the coherent group belong to the noncoherent group of each action category. These coherent and noncoherent groups are learned using K-singular value decomposition dictionary learning. Since the coherent group has more similarity among data, only a few atoms need to be learned. In the noncoherent group, there is a high variability among the data items. So, we propose an orthogonal-projection-based selection to get an optimal dictionary in order to retain maximum variance in the data. Finally, the obtained dictionary atoms of both groups in each action category are combined and then updated using the limited Broyden–Fletcher–Goldfarb–Shanno optimization algorithm.
The experiments are conducted on challenging datasets HMDB51 and UCF50 with action bank features and achieve comparable results using these state-of-the-art features.", "title": "" }, { "docid": "56e47efe6efdb7819c6a2e87e8fbb56e", "text": "Recent investigations of Field Programmable Gate Array (FPGA)-based time-to-digital converters (TDCs) have predominantly focused on improving the time resolution of the device. However, the monolithic integration of multi-channel TDCs and the achievement of high measurement throughput remain challenging issues for certain applications. In this paper, the potential of the resources provided by the Kintex-7 Xilinx FPGA is fully explored, and a new design is proposed for the implementation of a high performance multi-channel TDC system on this FPGA. Using the tapped-delay-line wave union TDC architecture, in which a negative pulse is triggered by the hit signal propagating along the carry chain, two time measurements are performed in a single carry chain within one clock cycle. The differential non-linearity and time resolution can be significantly improved by realigning the bins. The on-line calibration and on-line updating of the calibration table reduce the influence of variations of environmental conditions. The logic resources of the 6-input look-up tables in the FPGA are employed for hit signal edge detection and bubble-proof encoding, thereby allowing the TDC system to operate at the maximum allowable clock rate of the FPGA and to achieve the maximum possible measurement throughput. This resource-efficient design, in combination with a modular implementation, makes the integration of multiple channels in one FPGA practicable.
Using our design, a 128-channel TDC with a dead time of 1.47 ns, a dynamic range of 360 ns, and a root-mean-square resolution of less than 10 ps was implemented in a single Kintex-7 device.", "title": "" }, { "docid": "b06fc6126bf086cdef1d5ac289cf5ebe", "text": "Rhinophyma is a subtype of rosacea characterized by nodular thickening of the skin, sebaceous gland hyperplasia, dilated pores, and in its late stage, fibrosis. Phymatous changes in rosacea are most common on the nose but can also occur on the chin (gnatophyma), ears (otophyma), and eyelids (blepharophyma). In severe cases, phymatous changes result in the loss of normal facial contours, significant disfigurement, and social isolation. Additionally, patients with profound rhinophyma can experience nare obstruction and difficulty breathing due to the weight and bulk of their nose. Treatment options for severe advanced rhinophyma include cryosurgery, partial-thickness decortication with subsequent secondary repithelialization, carbon dioxide (CO2) or erbium-doped yttrium aluminum garnet (Er:YAG) laser ablation, full-thickness resection with graft or flap reconstruction, excision by electrocautery or radio frequency, and sculpting resection using a heated Shaw scalpel. We report a severe case of rhinophyma resulting in marked facial disfigurement and nasal obstruction treated successfully using the Shaw scalpel. Rhinophymectomy using the Shaw scalpel allows for efficient and efficacious treatment of rhinophyma without the need for multiple procedures or general anesthesia and thus should be considered in patients with nare obstruction who require intervention.", "title": "" }, { "docid": "3c29c0a3e8ec6292f05c7907436b5e9a", "text": "Emerging Wi-Fi technologies are expected to cope with large amounts of traffic in dense networks. Consequently, proposals for the future IEEE 802.11ax Wi-Fi amendment include sensing threshold and transmit power adaptation, in order to improve spatial reuse. 
However, it is not yet understood to which extent such adaptive approaches — and which variant — would achieve a better balance between spatial reuse and the level of interference, in order to improve the network performance. Moreover, it is not clear how legacy Wi-Fi devices would be affected by new-generation Wi-Fi implementing these adaptive design parameters. In this paper we present a thorough comparative study in ns-3 for four major proposed adaptation algorithms and we compare their performance against legacy non-adaptive Wi-Fi. Additionally, we consider mixed populations where both legacy non-adaptive and new-generation adaptive populations coexist. We assume a dense indoor residential deployment and different numbers of available channels in the 5 GHz band, relevant for future IEEE 802.11ax. Our results show that for the dense scenarios considered, the algorithms do not significantly improve the overall network performance compared to the legacy baseline, as they increase the throughput of some nodes, while decreasing the throughput of others. For mixed populations in dense deployments, adaptation algorithms that improve the performance of new-generation nodes degrade the performance of legacy nodes and vice versa. This suggests that to support Wi-Fi evolution for dense deployments and consistently increase the throughput throughout the network, more sophisticated algorithms are needed, e.g. considering combinations of input parameters in current variants.", "title": "" }, { "docid": "eb3eccf745937773c399334673235f57", "text": "Continuous practices, i.e., continuous integration, delivery, and deployment, are the software development industry practices that enable organizations to frequently and reliably release new features and products. 
With the increasing interest in the literature on continuous practices, it is important to systematically review and synthesize the approaches, tools, challenges, and practices reported for adopting and implementing continuous practices. This paper aimed at systematically reviewing the state of the art of continuous practices to classify approaches and tools, identify challenges and practices in this regard, and identify the gaps for future research. We used the systematic literature review method for reviewing the peer-reviewed papers on continuous practices published between 2004 and June 1, 2016. We applied the thematic analysis method for analyzing the data extracted from reviewing 69 papers selected using predefined criteria. We have identified 30 approaches and associated tools, which facilitate the implementation of continuous practices in the following ways: 1) reducing build and test time in continuous integration (CI); 2) increasing visibility and awareness on build and test results in CI; 3) supporting (semi-) automated continuous testing; 4) detecting violations, flaws, and faults in CI; 5) addressing security and scalability issues in deployment pipeline; and 6) improving dependability and reliability of deployment process. We have also determined a list of critical factors, such as testing (effort and time), team awareness and transparency, good design principles, customer, highly skilled and motivated team, application domain, and appropriate infrastructure that should be carefully considered when introducing continuous practices in a given organization. The majority of the reviewed papers were validation (34.7%) and evaluation (36.2%) research types. This paper also reveals that continuous practices have been successfully applied to both greenfield and maintenance projects. Continuous practices have become an important area of software engineering research and practice. 
While the reported approaches, tools, and practices are addressing a wide range of challenges, there are several challenges and gaps, which require future research work for improving the capturing and reporting of contextual information in the studies reporting different aspects of continuous practices; gaining a deep understanding of how software-intensive systems should be (re-) architected to support continuous practices; and addressing the lack of knowledge and tools for engineering processes of designing and running secure deployment pipelines.", "title": "" }, { "docid": "a9dbb873487081afcc2a24dd7cb74bfe", "text": "We presented the first single block collision attack on MD5 with complexity of 2 MD5 compressions and posted the challenge for another completely new one in 2010. Last year, Stevens presented a single block collision attack to our challenge, with complexity of 2 MD5 compressions. We really appreciate Stevens’s hard work. However, it is a pity that he had not found even a better solution than our original one, let alone a completely new one and the very optimal solution that we preserved and have been hoping that someone can find it, whose collision complexity is about 2 MD5 compressions. In this paper, we propose a method how to choose the optimal input difference for generating MD5 collision pairs. First, we divide the sufficient conditions into two classes: strong conditions and weak conditions, by the degree of difficulty for condition satisfaction. Second, we prove that there exist strong conditions in only 24 steps (one and a half rounds) under specific conditions, by utilizing the weaknesses of compression functions of MD5, which are difference inheriting and message expanding. Third, there should be no difference scaling after state word q25 so that it can result in the least number of strong conditions in each differential path, in such a way we deduce the distribution of strong conditions for each input difference pattern. 
Finally, we choose the input difference with the least number of strong conditions and the most number of free message words. We implement the most efficient 2-block MD5 collision attack, which needs only about 2 MD5 compressions to find a collision pair, and show a single-block collision attack with complexity 2.", "title": "" }, { "docid": "cb66a49205c9914be88a7631ecc6c52a", "text": "BACKGROUND\nMidline facial clefts are rare and challenging deformities caused by failure of fusion of the medial nasal prominences. These anomalies vary in severity, and may include microform lines or midline lip notching, incomplete or complete labial clefting, nasal bifidity, or severe craniofacial bony and soft tissue anomalies with orbital hypertelorism and frontoethmoidal encephaloceles. In this study, the authors present 4 cases, classify the spectrum of midline cleft anomalies, and review our technical approaches to the surgical correction of midline cleft lip and bifid nasal deformities. Embryology and associated anomalies are discussed.\n\n\nMETHODS\nThe authors retrospectively reviewed our experience with 4 cases of midline cleft lip with and without nasal deformities of varied complexity. In addition, a comprehensive literature search was performed, identifying studies published relating to midline cleft lip and/or bifid nose deformities. Our assessment of the anomalies in our series, in conjunction with published reports, was used to establish a 5-tiered classification system. Technical approaches and clinical reports are described.\n\n\nRESULTS\nFunctional and aesthetic anatomic correction was successfully achieved in each case without complication. A classification and treatment strategy for the treatment of midline cleft lip and bifid nose deformity is presented.\n\n\nCONCLUSIONS\nThe successful treatment of midline cleft lip and bifid nose deformities first requires the identification and classification of the wide variety of anomalies. 
With exposure of abnormal nasolabial anatomy, the excision of redundant skin and soft tissue, anatomic approximation of cartilaginous elements, orbicularis oris muscle repair, and craniofacial osteotomy and reduction as indicated, a single-stage correction of midline cleft lip and bifid nasal deformity can be safely and effectively achieved.", "title": "" } ]
scidocsrr
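The Kintex-7 TDC passage in the record above mentions on-line calibration and realignment of bins to reduce differential non-linearity. As a hedged illustration only, here is a minimal Python sketch of the standard code-density calibration idea behind such bin realignment; the function name, the numbers, and the uniform-hits assumption are our own, not taken from the paper:

```python
# Hypothetical code-density calibration for a tapped-delay-line TDC.
# Assumption (ours): hits arrive uniformly in time, so each bin's width is
# proportional to the number of hits it collects during calibration.

def calibrate_bins(hit_counts, clock_period_ps):
    """Estimate per-bin widths and centers from a code-density histogram."""
    total = sum(hit_counts)
    # A bin that collected a fraction f of all hits spans f of the clock period.
    widths = [clock_period_ps * c / total for c in hit_counts]
    centers, edge = [], 0.0
    for w in widths:
        centers.append(edge + w / 2.0)  # midpoint of the realigned bin
        edge += w                       # running edge = cumulative width
    return widths, centers

# Toy example: 4 delay taps with uneven propagation delays, 1000 ps clock.
widths, centers = calibrate_bins([100, 300, 400, 200], 1000.0)
```

Under the uniform-hits assumption, a bin that collects twice as many counts is treated as twice as wide; the cumulative widths give the realigned bin centers that replace the nominal uniform bins in the calibration table.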
b067267dc90e3f058449faafd07a9fda
A Framework for Investigating the Impact of Information Systems Capability on Strategic Information Systems Planning Outcomes
[ { "docid": "5ebd4fc7ee26a8f831f7fea2f657ccdd", "text": "1 This article was reviewed and accepted by all the senior editors, including the editor-in-chief. Articles published in future issues will be accepted by just a single senior editor, based on reviews by members of the Editorial Board. 2 Sincere thanks go to Anna Dekker and Denyse O’Leary for their assistance with this research. Funding was generously provided by the Advanced Practices Council of the Society for Information Management and by the Social Sciences and Humanities Research Council of Canada. An earlier version of this manuscript was presented at the Academy of Management Conference in Toronto, Canada, in August 2000. 3 In this article, the terms information systems (IS) and information technology (IT) are used interchangeably. 4 Regardless of whether IS services are provided internally (in a centralized, decentralized, or federal manner) or are outsourced, we assume the boundaries of the IS function can be identified. Thus, the fit between the unit(s) providing IS services and the rest of the organization can be examined. and books have been written on the subject, firms continue to demonstrate limited alignment.", "title": "" } ]
[ { "docid": "8dfa68e87eee41dbef8e137b860e19cc", "text": "We investigate regrets associated with users' posts on a popular social networking site. Our findings are based on a series of interviews, user diaries, and online surveys involving 569 American Facebook users. Their regrets revolved around sensitive topics, content with strong sentiment, lies, and secrets. Our research reveals several possible causes of why users make posts that they later regret: (1) they want to be perceived in favorable ways, (2) they do not think about their reason for posting or the consequences of their posts, (3) they misjudge the culture and norms within their social circles, (4) they are in a \"hot\" state of high emotion when posting, or under the influence of drugs or alcohol, (5) their postings are seen by an unintended audience, (6) they do not foresee how their posts could be perceived by people within their intended audience, and (7) they misunderstand or misuse the Facebook platform. Some reported incidents had serious repercussions, such as breaking up relationships or job losses. We discuss methodological considerations in studying negative experiences associated with social networking posts, as well as ways of helping users of social networking sites avoid such regrets.", "title": "" }, { "docid": "e8cf458c60dc7b4a8f71df2fabf1558d", "text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. 
We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability to localize ground-level images in environments novel relative to training, despite the challenges of significant viewpoint and appearance variations.", "title": "" }, { "docid": "fc6726bddf3d70b7cb3745137f4583c1", "text": "Maximum power point tracking (MPPT) is a very important necessity in a system of energy conversion from a renewable energy source. Many research papers have been produced with various schemes over past decades for the MPPT in photovoltaic (PV) systems. This research paper draws its motivation from the fact that the keen study of these existing techniques reveals that there is still quite a need for an absolutely generic and yet very simple MPPT controller which should have all the following traits: total independence from system's parameters, ability to reach the global maxima in minimal possible steps, the correct sense of tracking direction despite the abrupt atmospheric or parametrical changes, and finally having a very cost-effective and energy efficient hardware with the complexity no more than that of a minimal MPPT algorithm like Perturb and Observe (P&O). The MPPT controller presented in this paper is a successful attempt to fulfil all these requirements. It extends the MPPT techniques found in the recent research papers with some innovations in the control algorithm and simple hardware. The simulation results confirm that the proposed MPPT controller is very fast, very efficient, very simple and low cost as compared to the contemporary ones.", "title": "" }, { "docid": "9efd74df34775bc4c7a08230e67e990b", "text": "OBJECTIVE\nFirearm violence is a significant public health problem in the United States, and alcohol is frequently involved. 
This article reviews existing research on the relationships between alcohol misuse; ownership, access to, and use of firearms; and the commission of firearm violence, and discusses the policy implications of these findings.\n\n\nMETHOD\nNarrative review augmented by new tabulations of publicly-available data.\n\n\nRESULTS\nAcute and chronic alcohol misuse is positively associated with firearm ownership, risk behaviors involving firearms, and risk for perpetrating both interpersonal and self-directed firearm violence. In an average month, an estimated 8.9 to 11.7 million firearm owners binge drink. For men, deaths from alcohol-related firearm violence equal those from alcohol-related motor vehicle crashes. Enforceable policies restricting access to firearms for persons who misuse alcohol are uncommon. Policies that restrict access on the basis of other risk factors have been shown to reduce risk for subsequent violence.\n\n\nCONCLUSION\nThe evidence suggests that restricting access to firearms for persons with a documented history of alcohol misuse would be an effective violence prevention measure. Restrictions should rely on unambiguous definitions of alcohol misuse to facilitate enforcement and should be rigorously evaluated.", "title": "" }, { "docid": "384943f815aadbad990cc42ca9f6f9d0", "text": "The N×N queen's puzzle is the problem of placing N chess queens on an N×N chess board so that no two queens attack each other. This puzzle is a classical problem in the artificial intelligence area. A solution requires that no two queens share the same row, column or diagonal. For computer scientists, these problems offer practical solutions to many useful applications and have become an important issue. In this paper we propose a new approach for solving the n-Queens problem using a combination of depth-first search (DFS) and breadth-first search (BFS) techniques. The proposed algorithm acts by placing queens on the chess board directly. 
This is made possible by a regular placement pattern based on the movement rules of the queen. The results show that the performance and run time of this approach are better than those of backtracking and hill-climbing methods.", "title": "" }, { "docid": "a45e7855be4a99ef2d382e914650e8bc", "text": "We propose a novel type inference technique for Python programs. Type inference is difficult for Python programs due to their heavy dependence on external APIs and the dynamic language features. We observe that Python source code often contains a lot of type hints such as attribute accesses and variable names. However, such type hints are not reliable. We hence propose to use probabilistic inference to allow the beliefs of individual type hints to be propagated, aggregated, and eventually converge on probabilities of variable types. Our results show that our technique substantially outperforms a state-of-the-art Python type inference engine based on abstract interpretation.", "title": "" }, { "docid": "0289858bb9002e00d753e1ed2da8b204", "text": "This paper presents a motion planning method for mobile manipulators for which the base locomotion is less precise than the manipulator control. In such a case, it is advisable to move the base to discrete poses from which the manipulator can be deployed to cover a prescribed trajectory. The proposed method finds base poses that not only cover the trajectory but also meet constraints on a measure of manipulability. We propose a variant of the conventional manipulability measure that is suited to the trajectory control of the end effector of the mobile manipulator along an arbitrary curve in three-dimensional space. Results with implementation on a mobile manipulator are discussed.", "title": "" }, { "docid": "9d34171c2fcc8e36b2fb907fe63fc08d", "text": "A novel approach to view-based eye gaze tracking for human computer interface (HCI) is presented. 
The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.", "title": "" }, { "docid": "a8f5f7c147c1ac8cabf86d4809aa3f65", "text": "Structural gene rearrangements resulting in gene fusions are frequent events in solid tumours. The identification of certain activating fusions can aid in the diagnosis and effective treatment of patients with tumours harbouring these alterations. Advances in the techniques used to identify fusions have enabled physicians to detect these alterations in the clinic. Targeted therapies directed at constitutively activated oncogenic tyrosine kinases have proven remarkably effective against cancers with fusions involving ALK, ROS1, or PDGFB, and the efficacy of this approach continues to be explored in malignancies with RET, NTRK1/2/3, FGFR1/2/3, and BRAF/CRAF fusions. Nevertheless, prolonged treatment with such tyrosine-kinase inhibitors (TKIs) leads to the development of acquired resistance to therapy. This resistance can be mediated by mutations that alter drug binding, or by the activation of bypass pathways. 
Second-generation and third-generation TKIs have been developed to overcome resistance, and have variable levels of activity against tumours harbouring individual mutations that confer resistance to first-generation TKIs. The rational sequential administration of different inhibitors is emerging as a new treatment paradigm for patients with tumours that retain continued dependency on the downstream kinase of interest.", "title": "" }, { "docid": "2cfb782a527b1806eda302c4c7b63219", "text": "The latest version of the ISO 26262 standard from 2016 represents the state of the art for a safety-guided development of safety-critical electric/electronic vehicle systems. These vehicle systems include advanced driver assistance systems and vehicle guidance systems. The development process proposed in the ISO 26262 standard is based upon multiple V-models, and defines activities and work products for each process step. In many of these process steps, scenario-based approaches can be applied to achieve the defined work products for the development of automated driving functions. To accomplish the work products of different process steps, scenarios have to focus on various aspects, such as a human-understandable notation or a description via state variables. This leads to contradictory requirements regarding the level of detail and way of notation for the representation of scenarios. 
In this paper, the authors discuss requirements for the representation of scenarios in different process steps defined by the ISO 26262 standard, propose a consistent terminology based on prior publications for the identified levels of abstraction, and demonstrate how scenarios can be systematically evolved along the phases of the development process outlined in the ISO 26262 standard.", "title": "" }, { "docid": "54396daee78bb3ca974925159d6dec15", "text": "Classroom Salon is an on-line social collaboration tool that allows instructors to create, manage, and analyze social networks (called Salons) to enhance student learning. Students in a Salon can cooperatively create, comment on, and modify documents. Classroom Salon provides tools that allow the instructor to monitor the social networks and gauge both student participation and individual effectiveness. This paper describes Classroom Salon, provides some use cases that we have developed for introductory computer science classes, and presents some preliminary observations of using this tool in several computer science courses at Carnegie Mellon University.", "title": "" }, { "docid": "ddecb743bc098a3e31ca58bc17810cf1", "text": "The maxout network is a powerful alternative to traditional sigmoid neural networks and is showing success in speech recognition. However, maxout networks are prone to overfitting, and thus regularization methods such as dropout are often needed. In this paper, a stochastic pooling regularization method for maxout networks is proposed to control overfitting. In stochastic pooling, a distribution is produced for each pooling region by the softmax normalization of the piece values. The active piece is selected based on the distribution during training, and an effective probability weighting is conducted during testing. We apply the stochastic pooling maxout (SPM) networks within the DNN-HMM framework and evaluate its effectiveness under a low-resource speech recognition condition. 
On benchmark test sets, the SPM network yields 4.7-8.6% relative improvements over the baseline maxout network. Further evaluations show the superiority of stochastic pooling over dropout for low-resource speech recognition.", "title": "" }, { "docid": "99582c5c50f5103f15a6777af94c6584", "text": "Depth estimation in computer vision and robotics is most commonly done via stereo vision (stereopsis), in which images from two cameras are used to triangulate and estimate distances. However, there are also numerous monocular visual cues—such as texture variations and gradients, defocus, color/haze, etc.—that have heretofore been little exploited in such systems. Some of these cues apply even in regions without texture, where stereo would work poorly. In this paper, we apply a Markov Random Field (MRF) learning algorithm to capture some of these monocular cues, and incorporate them into a stereo system. We show that by adding monocular cues to stereo (triangulation) ones, we obtain significantly more accurate depth estimates than is possible using either monocular or stereo cues alone. This holds true for a large variety of environments, including both indoor environments and unstructured outdoor environments containing trees/forests, buildings, etc. Our approach is general, and applies to incorporating monocular cues together with any off-the-shelf stereo system.", "title": "" }, { "docid": "a9f8f3946dd963066006f19a251eef7c", "text": "Three-dimensional virtual worlds are an emerging medium currently being used in both traditional classrooms and for distance education. Three-dimensional (3D) virtual worlds are a combination of desktop interactive Virtual Reality within a chat environment. This analysis provides an overview of Active Worlds Educational Universe and Adobe Atmosphere and the pedagogical affordances and constraints of the inscription tools, discourse tools, experiential tools, and resource tools of each application. 
The purpose of this review is to discuss the implications of using each application for educational initiatives by exploring how the various design features of each may support and enhance the design of interactive learning environments.", "title": "" }, { "docid": "f631ceda1a738c12ea71846650a11372", "text": "An object recognition engine needs to extract discriminative features from data representing an object and accurately classify the object to be of practical use in robotics. Furthermore, the classification of the object must be rapidly performed in the presence of a voluminous stream of data. These conditions call for a distributed and scalable architecture that can utilize a cloud computing infrastructure for performing object recognition. This paper introduces a Cloud-based Object Recognition Engine (CORE) to address these needs. CORE is able to train on large-scale datasets, perform classification of 3D point cloud data, and efficiently transfer data in a robotic network.", "title": "" }, { "docid": "cdee55e977d5809b87f3e8be98acaaa3", "text": "Proximity effects caused by uneven distribution of current among the insulated wire strands of stator multi-strand windings can contribute significant bundle-level proximity losses in permanent magnet (PM) machines operating at high speeds. Three-dimensional finite element analysis is used to investigate the effects of transposition of the insulated strands in stator winding bundles on the copper losses in high-speed machines. The investigation confirms that the bundle proximity losses must be considered in the design of stator windings for high-speed machines, and the amplitude of these losses decreases monotonically as the level of transposition is increased from untransposed to fully-transposed (360°) wire bundles. 
Analytical models are introduced to estimate the currents in strands in a slot for a high-speed machine.", "title": "" }, { "docid": "4308a50a3a7cf1d426cb476545147f50", "text": "In this paper, the relationship between the numbers of stator slots, winding polarities, and rotor poles for variable reluctance resolvers is derived and verified, which makes it possible for the same stator and winding to be shared by the rotors with different poles. Based on the established relationship, a simple factor is introduced to evaluate the level of voltage harmonics as an index for choosing appropriate stator slot and rotor pole combinations. With due account for easy manufacturing, alternate windings are proposed without apparent deterioration in voltage harmonics of a resolver. In particular, alternate windings with nonoverlapping uniform coils are proved to be possible for output windings in some stator slot and rotor pole combinations, which further simplify the manufacture process. Finite element method is adopted to verify the proposed design, together with experiments on the prototypes.", "title": "" }, { "docid": "c8db1af44dccc23bf0e06dcc8c43bca6", "text": "A reconfigurable mechanism for varying the footprint of a four-wheeled omnidirectional vehicle is developed and applied to wheelchairs. The variable footprint mechanism consists of a pair of beams intersecting at a pivotal point in the middle. Two pairs of ball wheels at the diagonal positions of the vehicle chassis are mounted, respectively, on the two beams intersecting in the middle. The angle between the two beams varies actively so that the ratio of the wheel base to the tread may change. Four independent servo motors driving the four ball wheels allow the vehicle to move in an arbitrary direction from an arbitrary configuration as well as to change the angle between the two beams and thereby change the footprint. The objective of controlling the beam angle is threefold. 
One is to augment static stability by varying the footprint so that the mass centroid of the vehicle may be kept within the footprint at all times. The second is to reduce the width of the vehicle when going through a narrow doorway. The third is to apparently change the gear ratio relating the vehicle speed to individual actuator speeds. First, the concept of the varying footprint mechanism is described, and its kinematic behavior is analyzed, followed by the three control algorithms for varying the footprint. A prototype vehicle for an application as a wheelchair platform is designed, built, and tested.", "title": "" }, { "docid": "cfdee8bd0802872f4bd216df226f9c35", "text": "Single-unit recording studies in the macaque have carefully documented the modulatory effects of attention on the response properties of visual cortical neurons. Attention produces qualitatively different effects on firing rate, depending on whether a stimulus appears alone or accompanied by distracters. Studies of contrast gain control in anesthetized mammals have found parallel patterns of results when the luminance contrast of a stimulus increases. This finding suggests that attention has co-opted the circuits that mediate contrast gain control and that it operates by increasing the effective contrast of the attended stimulus. Consistent with this idea, microstimulation of the frontal eye fields, one of several areas that control the allocation of spatial attention, induces spatially local increases in sensitivity both at the behavioral level and among neurons in area V4, where endogenously generated attention increases contrast sensitivity. Studies in the slice have begun to explain how modulatory signals might cause such increases in sensitivity.", "title": "" }, { "docid": "82e6da590f8f836c9a06c26ef4440005", "text": "We introduce a new count-based optimistic exploration algorithm for reinforcement learning (RL) that is feasible in environments with high-dimensional state-action spaces. 
The success of RL algorithms in these domains depends crucially on generalisation from limited training experience. Function approximation techniques enable RL agents to generalise in order to estimate the value of unvisited states, but at present few methods enable generalisation regarding uncertainty. This has prevented the combination of scalable RL algorithms with efficient exploration strategies that drive the agent to reduce its uncertainty. We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state. Our φ-pseudocount achieves generalisation by exploiting the same feature representation of the state space that is used for value function approximation. States that have less frequently observed features are deemed more uncertain. The φ-ExplorationBonus algorithm rewards the agent for exploring in feature space rather than in the untransformed state space. The method is simpler and less computationally expensive than some previous proposals, and achieves near state-of-the-art results on high-dimensional RL benchmarks.", "title": "" } ]
scidocsrr
c0c7f6e365f2bdd184a9df5cfc5f8587
A Practical Wireless Attack on the Connected Car and Security Protocol for In-Vehicle CAN
[ { "docid": "8d041241f1a587b234c8784dea9088a4", "text": "Modern intelligent vehicles have electronic control units containing firmware that enables various functions in the vehicle. New firmware versions are constantly developed to remove bugs and improve functionality. Automobile manufacturers have traditionally performed firmware updates over cables but in the near future they are aiming at conducting firmware updates over the air, which would allow faster updates and improved safety for the driver. In this paper, we present a protocol for secure firmware updates over the air. The protocol provides data integrity, data authentication, data confidentiality, and freshness. In our protocol, a hash chain is created of the firmware, and the first packet is signed by a trusted source, thus authenticating the whole chain. Moreover, the packets are encrypted using symmetric keys. We discuss the practical considerations that exist for implementing our protocol and show that the protocol is computationally efficient, has low memory overhead, and is suitable for wireless communication. Therefore, it is well suited to the limited hardware resources in the wireless vehicle environment.", "title": "" } ]
[ { "docid": "ae83a2258907f00500792178dc65340d", "text": "In this paper, a novel method for lung nodule detection, segmentation and recognition using computed tomography (CT) images is presented. Our contribution consists of several steps. First, the lung area is segmented by active contour modeling followed by some masking techniques to transfer non-isolated nodules into isolated ones. Then, nodules are detected by the support vector machine (SVM) classifier using efficient 2D stochastic and 3D anatomical features. Contours of detected nodules are then extracted by active contour modeling. In this step, all solid and cavitary nodules are accurately segmented. Finally, lung tissues are classified into four classes, namely lung wall, parenchyma, bronchioles and nodules. This classification helps us to distinguish a nodule connected to the lung wall and/or bronchioles (attached nodule) from the one covered by parenchyma (solitary nodule). At the end, performance of our proposed method is examined and compared with other efficient methods through experiments using clinical CT images and two groups of public datasets from the Lung Image Database Consortium (LIDC) and ANODE09. Solid, non-solid and cavitary nodules are detected with an overall detection rate of 89%; the number of false positives is 7.3/scan, and the locations of all detected nodules are recognized correctly.", "title": "" }, { "docid": "ee6612fa13482f7e3bbc7241b9e22297", "text": "The MOND limit is shown to follow from a requirement of space-time scale invariance of the equations of motion for nonrelativistic, purely gravitational systems; i.e., invariance of the equations of motion under (t, r) → (λt, λr) in the limit a0 → ∞. It is suggested that this should replace the definition of the MOND limit based on the asymptotic behavior of a Newtonian-MOND interpolating function. 
In this way, the salient, deep-MOND results–asymptotically flat rotation curves, the mass-rotational-speed relation (baryonic Tully-Fisher relation), the Faber-Jackson relation, etc.–follow from a symmetry principle. For example, asymptotic flatness of rotation curves reflects the fact that radii change under scaling, while velocities do not. I then comment on the interpretation of the deep-MOND limit as one of \"zero mass\": Rest masses, whose presence obstructs scaling symmetry, become negligible compared to the \"phantom\", dynamical masses–those that some would attribute to dark matter. Unlike the former masses, the latter transform in a way that is consistent with the symmetry. Finally, I discuss the putative MOND-cosmology connection, in particular the possibility that MOND–especially the deep-MOND limit–is related to the asymptotic de Sitter geometry of our universe. I point out, in this connection, the possible relevance of a (classical) de Sitter-conformal-field-theory (dS/CFT) correspondence.", "title": "" }, { "docid": "5cd68b483657180231786dc5a3407c85", "text": "The ability of robotic systems to autonomously understand and/or navigate in uncertain environments is critically dependent on fairly accurate strategies, which are not always optimally achieved due to effectiveness, computational cost, and parameter settings. In this paper, we propose a novel and simple adaptive strategy to increase the efficiency and drastically reduce the computational effort in particle filters (PFs). The purpose of the adaptive approach (dispersion-based adaptive particle filter - DAPF) is to provide a higher number of particles during the initial searching state (when the localization presents greater uncertainty) and fewer particles during the subsequent state (when the localization exhibits less uncertainty). 
With the aim of studying the dynamical PF behavior relative to other filters and putting the proposed algorithm into practice, we designed a methodology based on different target applications and a Kinect sensor. The various experiments conducted for both color tracking and mobile robot localization problems served to demonstrate that the DAPF algorithm can be further generalized. As a result, the DAPF approach significantly improved the computational performance over two well-known filtering strategies: 1) the classical PF with fixed particle set sizes, and 2) the adaptive technique based on the Kullback-Leibler distance.", "title": "" }, { "docid": "d647fc2b5635a3dfcebf7843fef3434c", "text": "Touch is our primary non-verbal communication channel for conveying intimate emotions and as such essential for our physical and emotional wellbeing. In our digital age, human social interaction is often mediated. However, even though there is increasing evidence that mediated touch affords affective communication, current communication systems (such as videoconferencing) still do not support communication through the sense of touch. As a result, mediated communication does not provide the intense affective experience of co-located communication. The need for ICT mediated or generated touch as an intuitive way of social communication is even further emphasized by the growing interest in the use of touch-enabled agents and robots for healthcare, teaching, and telepresence applications. Here, we review the important role of social touch in our daily life and the available evidence that affective touch can be mediated reliably between humans and between humans and digital agents. We base our observations on evidence from psychology, computer science, sociology, and neuroscience with focus on the first two. 
Our review shows that mediated affective touch can modulate physiological responses, increase trust and affection, help to establish bonds between humans and avatars or robots, and initiate pro-social behavior. We argue that ICT mediated or generated social touch can (a) intensify the perceived social presence of remote communication partners and (b) enable computer systems to more effectively convey affective information. However, this research field at the crossroads of ICT and psychology is still embryonic, and we identify several topics that can help to mature the field in the following areas: establishing an overarching theoretical framework, employing better research methodologies, developing basic social touch building blocks, and solving specific ICT challenges.", "title": "" }, { "docid": "a9121a1211704006dc8de14a546e3bdc", "text": "This paper describes and compares two straightforward approaches for dependency parsing with partial annotations (PA). The first approach is based on a forest-based training objective for two CRF parsers, i.e., a biaffine neural network graph-based parser (Biaffine) and a traditional log-linear graph-based parser (LLGPar). The second approach is based on the idea of constrained decoding for three parsers, i.e., a traditional linear graph-based parser (LGPar), a globally normalized neural network transition-based parser (GN3Par) and a traditional linear transition-based parser (LTPar). For the test phase, constrained decoding is also used for completing partial trees. We conduct experiments on Penn Treebank under three different settings for simulating PA, i.e., random, most uncertain, and divergent outputs from the five parsers. The results show that LLGPar is most effective in directly learning from PA, and other parsers can achieve best performance when PAs are completed into full trees by LLGPar.", "title": "" }, { "docid": "756b25456494b3ece9b240ba3957f91c", "text": "In this paper we introduce the task of fact checking, i.e. 
the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.", "title": "" }, { "docid": "4d2fa4e81281f40626028192cf2f71ff", "text": "In this tutorial paper, we present a general architecture for digital clock and data recovery (CDR) for high-speed binary links. The architecture is based on replacing the analog loop filter and voltage-controlled oscillator (VCO) in a typical analog phase-locked loop (PLL)-based CDR with digital components. We provide a linearized analysis of the bang-bang phase detector and CDR loop including the effects of decimation and self-noise. Additionally, we provide measured results from an implementation of the digital CDR system which are directly comparable to the linearized analysis, plus measurements of the limit cycle behavior which arises in these loops when incoming jitter is small. Finally, the relative advantages of analog and digital implementations of the CDR for high-speed binary links are considered.", "title": "" }, { "docid": "ebf92a0faf6538f1d2b85fb2aa497e80", "text": "The generally accepted assumption by most multimedia researchers is that learning is inhibited when on-screen text and narration containing the same information is presented simultaneously, rather than on-screen text or narration alone. This is known as the verbal redundancy effect. 
Are there situations where the reverse is true? This research was designed to investigate the reverse redundancy effect for non-native English speakers learning English reading comprehension, where two instructional modes were used: the redundant mode and the modality mode. In the redundant mode, static pictures and audio narration were presented with synchronized redundant on-screen text. In the modality mode, only static pictures and audio were presented. In both modes, learners were allowed to control the pacing of the lessons. Participants were 209 Yemeni learners in their first year of tertiary education. Examination of text comprehension scores indicated that those learners who were exposed to the redundancy mode performed significantly better than learners in the modality mode. They were also significantly more motivated than their counterparts in the modality mode. This finding has added an important modification to the redundancy effect. That is, the reverse redundancy effect holds for multimedia learning of English as a foreign language by students to whom the textual information was foreign. In such situations, the redundant synchronized on-screen text did not impede learning; rather, it reduced the cognitive load and thereby enhanced learning.", "title": "" }, { "docid": "1a153e0afca80aaf35ffa1b457725fa3", "text": "Cloud computing can reduce mainframe management costs, so more and more users choose to build their own cloud hosting environment. In cloud computing, all commands pass through the network connection; therefore, information security is particularly important. In this paper, we explore the types of intrusion detection systems and the integration of these types, which provides effective output reports so that system administrators can quickly understand attacks and damage. 
With the popularity of cloud computing, intrusion detection system log files are also growing rapidly, and conventional analysis systems are limited and inefficient for handling them. In this paper, we use Hadoop's MapReduce algorithm to analyze intrusion detection system log files; the experimental results confirm that the calculation speed can be increased by about 89%. For the system administrator, the IDS Log Cloud Analysis System (called ICAS) can provide fast and highly reliable analysis.", "title": "" }, { "docid": "ed4d6179e2e432e752d7598c0db6ec59", "text": "In image deblurring, a fundamental problem is that the blur kernel suppresses a number of spatial frequencies that are difficult to recover reliably. In this paper, we explore the potential of a class-specific image prior for recovering spatial frequencies attenuated by the blurring process. Specifically, we devise a prior based on the class-specific subspace of image intensity responses to band-pass filters. We learn that the aggregation of these subspaces across all frequency bands serves as a good class-specific prior for the restoration of frequencies that cannot be recovered with generic image priors. In an extensive validation, our method, equipped with the above prior, yields greater image quality than many state-of-the-art methods by up to 5 dB in terms of image PSNR, across various image categories including portraits, cars, cats, pedestrians and household objects.", "title": "" }, { "docid": "705eca342fb014d0ae943a17c60a47c0", "text": "This is a critical design paper offering a possible scenario of use intended to provoke reflection about values and politics of design in persuasive computing. We describe the design of a system - Fit4Life - that encourages individuals to address the larger goal of reducing obesity in society by promoting individual healthy behaviors. 
Using the Persuasive Systems Design Model [26], this paper outlines the Fit4Life persuasion context, the technology, its use of persuasive messages, and an experimental design to test the system's efficacy. We also contribute a novel discussion of the ethical and sociocultural considerations involved in our design, an issue that has remained largely unaddressed in the existing persuasive technologies literature [29].", "title": "" }, { "docid": "8d432d8fd4a6d0f368a608ebca5d67d7", "text": "The origin and continuation of mankind is based on water. Water is one of the most abundant resources on earth, covering three-fourths of the planet’s surface. However, about 97% of the earth’s water is salt water in the oceans, and a tiny 3% is fresh water. This small percentage of the earth’s water—which supplies most of human and animal needs—exists in ground water, lakes and rivers. The only nearly inexhaustible sources of water are the oceans, which, however, are of high salinity. It would be feasible to address the water-shortage problem with seawater desalination; however, the separation of salts from seawater requires large amounts of energy which, when produced from fossil fuels, can cause harm to the environment. Therefore, there is a need to employ environmentally-friendly energy sources in order to desalinate seawater. After a historical introduction into desalination, this paper covers a large variety of systems used to convert seawater into fresh water suitable for human use. It also covers a variety of systems, which can be used to harness renewable energy sources; these include solar collectors, photovoltaics, solar ponds and geothermal energy. Both direct and indirect collection systems are included. The representative example of direct collection systems is the solar still. Indirect collection systems employ two subsystems; one for the collection of renewable energy and one for desalination. 
For this purpose, standard renewable energy and desalination systems are most often employed. Only industrially-tested desalination systems are included in this paper; they comprise the phase-change processes (multistage flash, multiple-effect boiling and vapour compression) and the membrane processes (reverse osmosis and electrodialysis). The paper also includes a review of various systems that use renewable energy sources for desalination. Finally, some general guidelines are given for the selection of desalination and renewable energy systems and the parameters that need to be considered.", "title": "" }, { "docid": "b77d257b62ee7af929b64168c62fd785", "text": "The analysis of time series data is of interest to many application domains. But this analysis is challenging due to many reasons such as missing data in the series, the unstructured nature of the data, and errors in the data collection procedure, measuring equipment, etc. The problem of missing data while matching two time series is dealt with either by predicting a value for the missing data using the already collected data, or by completely ignoring the missing values. In this paper, we present an approach where we make use of the characteristics of the Mahalanobis Distance to inherently accommodate the missing values while finding the best match between two time series. Using this approach, we have designed two algorithms which can find the best match for a given query series in a candidate series, without imputing the missing values in the candidate. The initial algorithm finds the best non-warped match between the candidate and the query time series, while the second algorithm is an extension of the initial algorithm to find the best match in the case of warped data using a Dynamic Time Warping (DTW)-like algorithm. 
Thus, with experimental results we go on to conclude that the proposed warping algorithm is a good method for matching between two time series with warping and missing data.", "title": "" }, { "docid": "e299966eded9f65f6446b3cd7ab41f49", "text": "BACKGROUND Asthma is the most common chronic pulmonary disease during pregnancy. Several previous reports have documented reversible electrocardiographic changes during severe acute asthma attacks, including tachycardia, P pulmonale, right bundle branch block, right axis deviation, and ST segment and T wave abnormalities. CASE REPORT We present the case of a pregnant patient with asthma exacerbation in which acute bronchospasm caused S1Q3T3 abnormality on an electrocardiogram (ECG). The complete workup of ECG findings of S1Q3T3 was negative and correlated with bronchospasm. The S1Q3T3 electrocardiographic abnormality can be seen in acute bronchospasm in pregnant women. The other causes like pulmonary embolism, pneumothorax, acute lung disease, cor pulmonale, and left posterior fascicular block were excluded. CONCLUSIONS Asthma exacerbations are of considerable concern during pregnancy due to their adverse effect on the fetus, and optimization of asthma treatment during pregnancy is vital for achieving good outcomes. Prompt recognition of electrocardiographic abnormality and early treatment can prevent adverse perinatal outcomes.", "title": "" }, { "docid": "f4222d776f90050c15032e802d294d1a", "text": "We study the design and optimization of polyhedral patterns, which are patterns of planar polygonal faces on freeform surfaces. Working with polyhedral patterns is desirable in architectural geometry and industrial design. However, the classical tiling patterns on the plane must take on various shapes in order to faithfully and feasibly approximate curved surfaces. 
We define and analyze the deformations these tiles must undergo to account for curvature, and discover the symmetries that remain invariant under such deformations. We propose a novel method to regularize polyhedral patterns while maintaining these symmetries, yielding a plethora of aesthetic and feasible patterns.", "title": "" }, { "docid": "c20393a25f4e53be6df2bd49abf6635f", "text": "This paper overviews the NTCIR-13 Actionable Knowledge Graph (AKG) task. The task focuses on finding possible actions related to input entities and the relevant properties of such actions. AKG is composed of two subtasks: Action Mining (AM) and Actionable Knowledge Graph Generation (AKGG). Both subtasks are focused on the English language. 9 runs have been submitted by 4 teams for the task. In this paper we describe both the subtasks, datasets, evaluation methods and the results of meta-analyses.", "title": "" }, { "docid": "d003deabc7748959e8c5cc220b243e70", "text": "INTRODUCTION In Britain today, children by the age of 10 years have regular access to an average of five different screens at home. In addition to the main family television, for example, many very young children have their own bedroom TV along with portable handheld computer game consoles (eg, Nintendo, Playstation, Xbox), a smartphone with games, internet and video, a family computer and a laptop and/or a tablet computer (eg, iPad). Children routinely engage in two or more forms of screen viewing at the same time, such as TV and laptop. Viewing is starting earlier in life. Nearly one in three American infants has a TV in their bedroom, and almost half of all infants watch TV or DVDs for nearly 2 h/day. Across the industrialised world, watching screen media is the main pastime of children. Over the course of childhood, children spend more time watching TV than they spend in school.
When including computer games, internet and DVDs, by the age of seven years, a child born today will have spent one full year of 24 h days watching screen media. By the age of 18 years, the average European child will have spent 3 years of 24 h days watching screen media; at this rate, by the age of 80 years, they will have spent 17.6 years glued to media screens. Yet, irrespective of the content or educational value of what is being viewed, the sheer amount of average daily screen time (ST) during discretionary hours after school is increasingly being considered an independent risk factor for disease, and is recognised as such by other governments and medical bodies but not, however, in Britain or in most of the EU. To date, views of the British and European medical establishments on increasingly high levels of child ST remain conspicuous by their absence. This paper will highlight the dramatic increase in the time children today spend watching screen media. It will provide a brief overview of some specific health and well-being concerns of current viewing levels, explain why screen viewing is distinct from other forms of sedentary behaviour, and point to the potential public health benefits of a reduction in ST. It is proposed that Britain and Europe’s medical establishments now offer guidance on the average number of hours per day children spend viewing screen media, and the age at which they start.", "title": "" }, { "docid": "94b85074da2eedcff74b9ad16c5b562c", "text": "The purpose of the paper is to investigate the design of rectangular patch antenna arrays fed by miscrostrip and coaxial lines at 28 GHz for future 5G applications. Our objective is to design a four element antenna array with a bandwidth higher than 1 GHz and a maximum radiation gain. 
The performance of the rectangular 4∗1 and 2∗2 patch antenna arrays designed on Rogers RT/Duroid 5880 substrate was optimized, and the simulation results reveal that the performance of the 4∗1 antenna array fed by a microstrip line is better than that of the 2∗2 antenna array fed by a coaxial cable. For the 4∗1 rectangular patch antenna array topology, we obtained bandwidths of 2.15 GHz and 1.3 GHz, respectively, with almost similar gains of the order of 13.3 dBi.", "title": "" }, { "docid": "3e6aac2e0ff6099aabeee97dc1292531", "text": "Although ordinary least-squares (OLS) regression is one of the most familiar statistical tools, far less has been written − especially in the pedagogical literature − on regression through the origin (RTO). Indeed, the subject is surprisingly controversial. The present note highlights situations in which RTO is appropriate, discusses the implementation and evaluation of such models and compares RTO functions among three popular statistical packages. Some examples gleaned from past Teaching Statistics articles are used as illustrations. For expository convenience, OLS and RTO refer here to linear regressions obtained by least-squares methods with and without a constant term, respectively.", "title": "" }, { "docid": "fd27a21d2eaf5fc5b37d4cba6bd4dbef", "text": "RICHARD M. FELDER and JONI SPURLIN North Carolina State University, Raleigh, North Carolina 27695-7905, USA. E-mail: [email protected] The Index of Learning Styles (ILS) is an instrument designed to assess preferences on the four dimensions of the Felder-Silverman learning style model. The Web-based version of the ILS is taken hundreds of thousands of times per year and has been used in a number of published studies, some of which include data reflecting on the reliability and validity of the instrument.
This paper seeks to provide the first comprehensive examination of the ILS, including answers to several questions: (1) What are the dimensions and underlying assumptions of the model upon which the ILS is based? (2) How should the ILS be used and what misuses should be avoided? (3) What research studies have been conducted using the ILS and what conclusions regarding its reliability and validity may be inferred from the data?", "title": "" } ]
scidocsrr
c04a4ff1414cfb023adb5bdbd8b4cd1e
RoadGraph - Graph based environmental modelling and function independent situation analysis for driver assistance systems
[ { "docid": "b08fe123ea0acc6b942c9069b661a9f9", "text": "The 2007 DARPA Urban Challenge afforded the golden opportunity for the Technische Universität Braunschweig to demonstrate its abilities to develop an autonomously driving vehicle to compete with the world's best competitors. After several stages of qualification, our team CarOLO qualified early for the DARPA Urban Challenge Final Event and was among only eleven teams from initially 89 competitors to compete in the final. We had the ability to work together in a large group of experts, each contributing his expertise in his discipline, and significant organisational, financial and technical support from local sponsors who helped us to become the best non-US team. In this report, we describe the 2007 DARPA Urban Challenge, our contribution ”Caroline”, the technology and algorithms along with her performance in the DARPA Urban Challenge Final Event on November 3, 2007. M. Buehler et al. (Eds.): The DARPA Urban Challenge, STAR 56, pp. 441–508. springerlink.com © Springer-Verlag Berlin Heidelberg 2009 442 F.W. Rauskolb et al. 1 Motivation and Introduction Focused research is often centered around interesting challenges and awards. The airplane industry started off with awards for the first flight over the British Channel as well as the Atlantic Ocean. The Human Genome Project, the RoboCups and the series of DARPA Grand Challenges for autonomous vehicles serve this very same purpose to foster research and development in a particular direction. The 2007 DARPA Urban Challenge is taking place to boost development of unmanned vehicles for urban areas. Although there is an obvious direct benefit for DARPA and the U.S. government, there will also be a large number of spin-offs in technologies, tools and engineering techniques, not only for autonomous vehicles but also for intelligent driver assistance.
An intelligent driver assistance function needs to be able to understand the surroundings of the car, evaluate potential risks and help the driver to behave correctly, safely and, in case it is desired, also efficiently. These topics affect not only ordinary cars, but also buses, trucks, convoys, taxis, special-purpose vehicles in factories, airports and more. It will take a number of years before we will have a mass market for cars that actively and safely protect the passenger and the surroundings, like pedestrians, from accidents in any situation. Intelligent functions in vehicles are obviously complex systems. Large issues in this project were primarily the methods, techniques and tools for the development of such a highly critical, reliable and complex system. Adapting and combining methods from different engineering disciplines was an important prerequisite for our success. For a stringent deadline-oriented development of such a system it is necessary to rely on a clear structure of the project, a dedicated development process and an efficient engineering that fits the project's needs. Thus, we concentrated not only on the single software modules of our autonomously driving vehicle named Caroline, but also on the process itself. We furthermore needed an appropriate tool suite that allowed us to run the development and in particular the testing process as efficiently as possible. This includes a simulator allowing us to simulate traffic situations and therefore achieve a sufficient coverage of test situations that would have been hard to conduct in reality. Only a good collaboration between the participating disciplines allowed us to develop Caroline in time to achieve such a good result in the 2007 DARPA Urban Challenge.
In the long term, our goal was not only to participate in a competition but also to establish a sound basis for further research on how to enhance vehicle safety by implementing new technologies to provide vehicle users with reliable and robust driver assistance systems, e.g. by giving special attention to technology for sensor data fusion and robust and reliable system architectures including new methods for simulation and testing. Therefore, the 2007 DARPA Urban Challenge provided a golden opportunity to combine expertise from several fields of science and engineering. For this purpose, the interdisciplinary team CarOLO had been founded, which drew its members from five different institutes (Caroline: An Autonomously Driving Vehicle for Urban Environments, 443). In addition, the team received support from a consortium of national and international companies. In this paper, we firstly introduce the 2007 DARPA Urban Challenge and derive the basic requirements for the car from its rules in section 2. Section 3 describes the overall architecture of the system, which is detailed in section 4 describing sensor fusion, vision, artificial intelligence and vehicle control, along with safety concepts. Section 5 describes the overall development process, discusses quality assurance and the simulator used to achieve sufficient testing coverage in detail. Section 6 finally describes the evaluation of Caroline, namely the performance during the National Qualification Event and the DARPA Urban Challenge Final Event in Victorville, California, the results we found and the conclusions to draw from our performance. 2 2007 DARPA Urban Challenge The 2007 DARPA Urban Challenge is the continuation of the well-known Grand Challenge events of 2004 and 2005, which were entitled ”Barstow to Primm” and ”Desert Classic”.
To continue the tradition of having names reflect the actual task, DARPA named the 2007 event ”Urban Challenge”, announcing with it the nature of the mission to be accomplished. The 2004 course, as shown in Fig. 1, led from Barstow, California (A) to Primm, Nevada (B) and had a total length of about 142 miles. Prior to the main event, DARPA held a qualification, inspection and demonstration for each robot. Nevertheless, none of the original fifteen vehicles managed to come even close to the goal of successfully completing the course. With 7.4 miles as the farthest distance travelled, the challenge ended very disappointingly and no one won the $1 million cash prize. Thereafter, the DARPA program managers heightened the barriers for entering the 2005 challenge significantly. They also modified the entire quality inspection process to one involving a step-by-step application process, including a video of the car in action and the holding of so-called Site Visits, which involved the visit of DARPA officials to team-chosen test sites. The rules for these Site Visits were very strict, e.g. determining exactly how the courses had to be equipped and what obstacles had to be available. From initially 195 teams, 118 were selected for site visits and 43 had finally made it into the National Qualification Event at the California Speedway in Ontario, California. The NQE consisted of several tasks to be completed and obstacles to overcome autonomously by the participating vehicles, including tank traps, a tunnel, speed bumps, stationary cars to pass and many more. On October 5, 2005, DARPA announced the 23 teams that would participate in the final event. The course started in Primm, Nevada, where the 2004 challenge should have ended. With a total distance of 131.6 miles and several natural obstacles, the course was by no means easier than the one from the year before. At the end, five teams completed it and the rest did significantly better than the teams the year before (Fig. 1: 2004 DARPA Grand Challenge Area between Barstow, CA (A) and Primm, NV (B)). The Stanford Racing Team was awarded the $2 million first prize. In 2007, DARPA wanted to increase the difficulty of the requirements, in order to meet the goal set by Congress and the Department of Defense that by 2015 a third of the Army's ground combat vehicles would operate unmanned. Having already proved the feasibility of crossing a desert and overcoming natural obstacles without human intervention, now a tougher task had to be mastered. As the United States Armed Forces are currently facing serious challenges in urban environments, the choice of such seemed logical. DARPA used the good experience and knowledge gained from the first and second Grand Challenge events to define the tasks for the autonomous vehicles. The 2007 DARPA Urban Challenge took place in Victorville, CA as depicted in Fig. 2 (2007 DARPA Grand Challenge Area in Victorville, CA). The Technische Universität Braunschweig started in June 2006 as a newcomer in the 2007 DARPA Urban Challenge. Significantly supported by industrial partners, five institutes from the faculties of computer science and mechanical and electrical engineering equipped a 2006 Volkswagen Passat station wagon named ”Caroline” to participate in the DARPA Urban Challenge as a ”Track B” competitor. Track B competitors did not receive any financial support from DARPA, in contrast to ”Track A” competitors. Track A teams had to submit technical proposals to get technology development funding awards up to $1,000,000 in fall 2006. Track B teams had to provide a 5-minute video demonstrating the vehicle's capabilities in April 2007. Using these videos, DARPA selected 53 of the initial 89 teams to advance to the next stage in the qualification process, the ”Site Visit” as already conducted in the 2005 Grand Challenge.
Team CarOLO got an invitation for a Site Visit that had to take place in the United States. Therefore, team CarOLO gratefully accepted an offer from the Southwest Research Institute in San Antonio, Texas providing a location for the Site Visit. On June 20, Caroline proved that she was ready for the National Qualification Event in fall 2007. Against great odds, she showed her abilities to the DARPA officials when a huge thunderstorm hit San Antonio during the Site Visit. The tasks to complete included the correct handling of intersection precedence, passing of vehicles, lane keeping and general safe behaviour. Afte", "title": "" } ]
[ { "docid": "8994337878d2ac35464cb4af5e32fccc", "text": "We describe an algorithm for approximate inference in graphical models based on Hölder's inequality that provides upper and lower bounds on common summation problems such as computing the partition function or probability of evidence in a graphical model. Our algorithm unifies and extends several existing approaches, including variable elimination techniques such as minibucket elimination and variational methods such as tree reweighted belief propagation and conditional entropy decomposition. We show that our method inherits benefits from each approach to provide significantly better bounds on sum-product tasks.", "title": "" }, { "docid": "b7e28e79f938b617ba2e2ed7ef1bade3", "text": "Computing in schools has gained momentum in the last two years resulting in GCSEs in Computing and teachers looking to upskill from Digital Literacy (ICT). For many students the subject of computer science concerns software code but writing code can be challenging, due to specific requirements on syntax and spelling with new ways of thinking required. Not only do many undergraduate students lack these ways of thinking, but there is a general misrepresentation of computing in education. Were computing taught as a more serious subject like science and mathematics, public understanding of the complexities of computer systems would increase, enabling those not directly involved with IT to make better-informed decisions and avoid incidents such as over-budget and underperforming systems. We present our exploration into teaching a variety of computing skills, most significantly \"computational thinking\", to secondary-school-age children through three very different engagements. First, we discuss Printcraft, in which participants learn about computer-aided design and additive manufacturing by designing and building a miniature world from scratch using the popular open-world game Minecraft and 3D printers.
Second, we look at how students can get a new perspective on familiar technology with a workshop using App Inventor, a graphical Android programming environment. Finally, we look at an ongoing after-school robotics club where participants face a number of challenges of their own making as they design and create a variety of robots using a number of common tools such as Scratch and Arduino.", "title": "" }, { "docid": "518d8e621e1239a94f50be3d5e2982f9", "text": "With a number of emerging biometric applications there is a dire need for a less expensive authentication technique which can authenticate even if the input image is of low resolution and low quality. Foot biometrics has both physiological and behavioral characteristics, yet it remains an abandoned field. The reason is that it involves the removal of shoes and socks while capturing the image, and dirty feet also make the image noisy. Cracked heels are also a source of noisy images. These physiological and behavioral characteristics make it a great alternative to computationally intensive algorithms based on the fingerprint, palm print, retina or iris scan [1], and face. On one hand, the footprint has minutiae features, which are considered unique. The uniqueness of minutiae features has already been tested in fingerprint analysis [2]. On the other hand, it has geometric features, like hand geometry, which also give satisfactory results in recognition. We can easily apply foot biometrics at places where people inherently remove their shoes: at holy places such as temples and mosques, where shoes are removed before entering as a matter of faith, and at famous monuments such as the Taj Mahal, India, where they are removed for cleanliness and preservation. These are usually places with a strong footfall and a high security risk due to chaotic crowds. Most robberies, thefts, and terrorist attacks happen at such places.
A notable example is the Akshardham attack in September 2002. Hence, we can secure these places using low-cost security algorithms based on footprint recognition.", "title": "" }, { "docid": "e4d550dbd7d2acb446c52c4906e7378e", "text": "Molecular analysis of the 16S rDNA of the intestinal microbiota of whiteleg shrimp Litopenaeus vannamei was examined to investigate the effect of a Bacillus mix (Bacillus endophyticus YC3-b, Bacillus endophyticus C2-2, Bacillus tequilensis YC5-2) and the commercial probiotic (Alibio(®)) on intestinal bacterial communities and resistance to Vibrio infection. PCR and single-strand conformation polymorphism (SSCP) analyses were then performed on DNA extracted directly from guts. Injection of shrimp with V. parahaemolyticus at 2.5 × 10(5) CFU g(-1) per shrimp followed 168 h after inoculation with Bacillus mix or the Alibio probiotic or the positive control. Diversity analyses showed that the bacterial community resulting from the Bacillus mix had the highest diversity and evenness and the bacterial community of the control had the lowest diversity. The bacterial community treated with probiotics mainly consisted of α- and γ-proteobacteria, fusobacteria, sphingobacteria, and flavobacteria, while the control mainly consisted of α-proteobacteria and flavobacteria. Differences were grouped using principal component analyses of PCR-SSCP of the microbiota, according to the time of inoculation. In Vibrio parahaemolyticus-infected shrimp, the Bacillus mix (~33 %) induced a significant increase in survival compared to Alibio (~21 %) and the control (~9 %). We conclude that administration of the Bacillus mix induced modulation of the intestinal microbiota of L. vannamei and increased its resistance to V. parahaemolyticus.", "title": "" }, { "docid": "64ed3c6997ed68894db5c30bc91e95cd", "text": "Affine moment invariant (AMI) is a kind of hand-crafted image feature, which is invariant to affine transformations.
This property is precisely what a standard convolutional neural network (CNN) finds difficult to achieve. In this letter, we present a kind of network architecture to introduce AMI into CNN, which is called AMI-Net. We achieved this by calculating AMI on the feature maps of the hidden layers. These AMIs will be concatenated with the standard CNN's FC layer to determine the network's final output. By calculating AMI on the feature maps, we can not only extend the dimension of AMIs, but also introduce affine transformation invariance into the CNN. Two network architectures and training strategies of AMI-Net are presented: one is two-stage, and the other is end-to-end. To prove the effectiveness of the AMI-Net, several experiments have been conducted on common image datasets, MNIST, MNIST-rot, affNIST, SVHN, and CIFAR-10. By comparing with the corresponding standard CNN, respectively, we verify the validity of AMI-Net.", "title": "" }, { "docid": "799573bf08fb91b1ac644c979741e7d2", "text": "This short paper reports the method and the evaluation results of Casio and Shinshu University joint team for the ISBI Challenge 2017 – Skin Lesion Analysis Towards Melanoma Detection – Part 3: Lesion Classification hosted by ISIC. Our online validation score was 0.958 with melanoma classifier AUC 0.924 and seborrheic keratosis classifier AUC 0.993.", "title": "" }, { "docid": "4e35e75d5fc074b1e02f5dded5964c19", "text": "This paper presents a new bidirectional wireless power transfer (WPT) topology using a current-fed half-bridge converter. Generally, a WPT topology with a current-fed converter uses a parallel LC resonant tank network on the transmitter side to compensate the reactive power. However, in medium-power applications this topology suffers a major drawback: the voltage stress on the inverter switches is considerably high due to high reactive power consumed by the loosely coupled coil.
In the proposed topology this is mitigated by adding a suitably designed capacitor in series with the transmitter coil. During both grid-to-vehicle and vehicle-to-grid operations, the power flow is controlled through variable switching frequency to achieve extended ZVS of the inverter switches. A detailed analysis and the converter design procedure are presented for both grid-to-vehicle and vehicle-to-grid operations. A 1.2 kW lab prototype is developed and experimental results are presented to verify the analysis.", "title": "" }, { "docid": "8955a1d5ad4fcc79bdbbd707d55da2c9", "text": "A three-way power divider with ultra-wideband behavior is presented. It has a compact size with an overall dimension of 20 mm × 30 mm. The proposed divider utilizes broadside coupling via multilayer microstrip/slot transitions of elliptical shape. The simulated and measured results show that the proposed device has 4.77 ± 1 dB insertion loss, better than 17 dB return loss, and better than 15 dB isolation across the frequency band 3.1 to 10.6 GHz.", "title": "" }, { "docid": "b0575058a6950bc17a976504145dca0e", "text": "BACKGROUND\nCitation screening is time-consuming and inefficient. We sought to evaluate the performance of Abstrackr, a semi-automated online tool for predictive title and abstract screening.\n\n\nMETHODS\nFour systematic reviews (aHUS, dietary fibre, ECHO, rituximab) were used to evaluate Abstrackr. Citations from electronic searches of biomedical databases were imported into Abstrackr, and titles and abstracts were screened and included or excluded according to the entry criteria. This process was continued until Abstrackr predicted and classified the remaining unscreened citations as relevant or irrelevant. These classification predictions were checked for accuracy against the original review decisions.
Sensitivity analyses were performed to assess the effects of including case reports in the aHUS dataset whilst screening and the effects of using larger imbalanced datasets with the ECHO dataset. The performance of Abstrackr was calculated according to the number of relevant studies missed, the workload saving, the false negative rate, and the precision of the algorithm to correctly predict relevant studies for inclusion, i.e. further full text inspection.\n\n\nRESULTS\nOf the unscreened citations, Abstrackr's prediction algorithm correctly identified all relevant citations for the rituximab and dietary fibre reviews. However, one relevant citation in both the aHUS and ECHO reviews was incorrectly predicted as not relevant. The workload saving achieved with Abstrackr varied depending on the complexity and size of the reviews (9 % rituximab, 40 % dietary fibre, 67 % aHUS, and 57 % ECHO). The proportion of citations predicted as relevant, and therefore, warranting further full text inspection (i.e. the precision of the prediction) ranged from 16 % (aHUS) to 45 % (rituximab) and was affected by the complexity of the reviews. The false negative rate ranged from 2.4 to 21.7 %. Sensitivity analysis performed on the aHUS dataset increased the precision from 16 to 25 % and increased the workload saving by 10 % but increased the number of relevant studies missed. Sensitivity analysis performed with the larger ECHO dataset increased the workload saving (80 %) but reduced the precision (6.8 %) and increased the number of missed citations.\n\n\nCONCLUSIONS\nSemi-automated title and abstract screening with Abstrackr has the potential to save time and reduce research waste.", "title": "" }, { "docid": "483c95f5f42388409dceb8cdb3792d19", "text": "The world of e-commerce is reshaping marketing strategies based on the analysis of e-commerce data. 
Huge amounts of data are being collected and can be analyzed for discoveries that may be used as guidance for people sharing the same interests but lacking experience. Indeed, recommendation systems are evolving from just a novelty into an essential business strategy tool. Many large e-commerce web sites are already encapsulating recommendation systems to provide a customer-friendly environment by helping customers in their decision-making process. A recommendation system learns from a customer's behavior patterns and recommends the most valuable of the available alternative choices. In this paper, we developed a two-stage algorithm using self-organizing map (SOM) and fuzzy k-means with an improved distance function to classify users into clusters. This places users who mostly share common interests in the same cluster. Results from the combination of SOM and fuzzy k-means revealed better accuracy in identifying user-related classes or clusters. We validated our results using various datasets to check the accuracy of the employed clustering approach. The generated groups of users form the domain for transactional datasets to find the most valuable products for customers.", "title": "" }, { "docid": "86f273bc450b9a3b6acee0e8d183b3cd", "text": "This paper aims to determine which is the best human action recognition method based on features extracted from RGB-D devices, such as the Microsoft Kinect. A review of all the papers that make reference to MSR Action3D, the most used dataset that includes depth information acquired from an RGB-D device, has been performed. We found that the validation method used by each work differs from the others. So, a direct comparison among works cannot be made. However, almost all the works present their results comparing them without taking into account this issue.
Therefore, we present different rankings according to the methodology used for the validation in order to clarify the existing confusion.", "title": "" }, { "docid": "b0766f310c4926b475bb646911a27f34", "text": "Currently, two frameworks of causal reasoning compete: Whereas dependency theories focus on dependencies between causes and effects, dispositional theories model causation as an interaction between agents and patients endowed with intrinsic dispositions. One important finding providing a bridge between these two frameworks is that failures of causes to generate their effects tend to be differentially attributed to agents and patients regardless of their location on either the cause or the effect side. To model different types of error attribution, we augmented a causal Bayes net model with separate error sources for causes and effects. In several experiments, we tested this new model using the size of Markov violations as the empirical indicator of differential assumptions about the sources of error. As predicted by the model, the size of Markov violations was influenced by the location of the agents and was moderated by the causal structure and the type of causal variables.", "title": "" }, { "docid": "a4b123705dda7ae3ac7e9e88a50bd64a", "text": "We present a novel approach to video segmentation using multiple object proposals. The problem is formulated as a minimization of a novel energy function defined over a fully connected graph of object proposals. Our model combines appearance with long-range point tracks, which is key to ensure robustness with respect to fast motion and occlusions over longer video sequences. As opposed to previous approaches based on object proposals, we do not seek the best per-frame object hypotheses to perform the segmentation. Instead, we combine multiple, potentially imperfect proposals to improve overall segmentation accuracy and ensure robustness to outliers. Overall, the basic algorithm consists of three steps.
First, we generate a very large number of object proposals for each video frame using existing techniques. Next, we perform an SVM-based pruning step to retain only high-quality proposals with sufficiently discriminative power. Finally, we determine the fore- and background classification by solving for the maximum a posteriori of a fully connected conditional random field, defined using our novel energy function. Experimental results on a well-established dataset demonstrate that our method compares favorably to several recent state-of-the-art approaches.", "title": "" }, { "docid": "99eabf05f0c1b2979aad244c322ecc64", "text": "Stochastic binary hidden units in a multi-layer perceptron (MLP) network give at least three potential benefits when compared to deterministic MLP networks. (1) They allow learning one-to-many types of mappings. (2) They can be used in structured prediction problems, where modeling the internal structure of the output is important. (3) Stochasticity has been shown to be an excellent regularizer, which makes generalization performance potentially better in general. However, training stochastic networks is considerably more difficult. We study training using M samples of hidden activations per input. We show that the case M = 1 leads to a fundamentally different behavior where the network tries to avoid stochasticity. We propose two new estimators for the training gradient and propose benchmark tests for comparing training algorithms. Our experiments confirm that training stochastic networks is difficult and show that the proposed two estimators perform favorably among all the five known estimators.", "title": "" }, { "docid": "b3cb053d44a90a2a9a9332ac920f0e90", "text": "This study develops a crowdfunding sponsor typology based on sponsors' motivations for participating in a project.
Using a two by two crowdfunding motivation framework, we analyzed six relevant funding motivations—interest, playfulness, philanthropy, reward, relationship, and recognition—and identified four types of crowdfunding sponsors: angelic backer, reward hunter, avid fan, and tasteful hermit. They are profiled in terms of the antecedents and consequences of funding motivations. Angelic backers are similar in some ways to traditional charitable donors while reward hunters are analogous to market investors; thus they differ in their approach to crowdfunding. Avid fans comprise the most passionate sponsor group, and they are similar to members of a brand community. Tasteful hermits support their projects as actively as avid fans, but they have lower extrinsic and others-oriented motivations. The results show that these sponsor types reflect the nature of crowdfunding as a new form of co-creation in the E-commerce context. 2016 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "543d5bf66aae84dab59ed9f124c23375", "text": "This paper describes a blueprint for the proper design of a Fair Internet Regulation System (IRS), i.e. a system that will be implemented in national level and will encourage the participation of Internet users in enriching and correcting its “behavior”. Such a system will be easier to be accepted by Western democracies willing to implement a fair Internet regulation policy.", "title": "" }, { "docid": "b4dd2dd381fc00419172d87ef113a422", "text": "Automatic on-line signature verification is an intriguing intellectual challenge with many practical applications. I review the context of this problem and then describe my own approach to it, which breaks with tradition by relying primarily on the detailed shape of a signature for its automatic verification, rather than relying primarily on the pen dynamics during the production of the signature. 
I propose a robust, reliable, and elastic local shape-based model for handwritten on-line curves; this model is generated by first parameterizing each on-line curve over its normalized arc-length and then representing along the length of the curve, in a moving coordinate frame, measures of the curve within a sliding window that are analogous to the position of the center of mass, the torque exerted by a force, and the moments of inertia of a mass distribution about its center of mass. Further, I suggest the weighted and biased harmonic mean as a graceful mechanism of combining errors from multiple models of which at least one model is applicable but not necessarily more than one model is applicable, recommending that each signature be represented by multiple models, these models, perhaps, local and global, shape based and dynamics based. Finally, I outline a signature-verification algorithm that I have implemented and tested successfully both on databases and in live experiments.", "title": "" }, { "docid": "11deeb206a297419b5af89d5700fc705", "text": "While offering unique performance and energy-saving advantages, the use of Field-Programmable Gate Arrays (FPGAs) for database acceleration has demanded major concessions from system designers. Either the programmable chips have been used for very basic application tasks (such as implementing a rigid class of selection predicates) or their circuit definition had to be completely recompiled at runtime—a very CPU-intensive and time-consuming effort.\n This work eliminates the need for such concessions. As part of our XLynx implementation—an FPGA-based XML filter—we present skeleton automata, which is a design principle for data-intensive hardware circuits that offers high expressiveness and quick reconfiguration at the same time. Skeleton automata provide a generic implementation for a class of finite-state automata.
They can be parameterized to any particular automaton instance in a matter of microseconds or less (as opposed to minutes or hours for complete recompilation).\n We showcase skeleton automata based on XML projection [Marian and Siméon 2003], a filtering technique that illustrates the feasibility of our strategy for a real-world and challenging task. By performing XML projection in hardware and filtering data in the network, we report on performance improvements of several factors while remaining nonintrusive to the back-end XML processor (we evaluate XLynx using the Saxon engine).", "title": "" }, { "docid": "62d1a1214113f74badcbadc83b642eca", "text": "Recent deep learning approaches to Natural Language Generation mostly rely on sequence-to-sequence models. In these approaches, the input is treated as a sequence whereas in most cases, input to generation usually is either a tree or a graph. In this paper, we describe an experiment showing how enriching a sequential input with structural information improves results and help support the generation of paraphrases.", "title": "" }, { "docid": "3412cf0349cee1c21c433477696641b4", "text": "Three experiments examined the impact of excessive violence in sport video games on aggression-related variables. Participants played either a nonviolent simulation-based sports video game (baseball or football) or a matched excessively violent sports video game. Participants then completed measures assessing aggressive cognitions (Experiment 1), aggressive affect and attitudes towards violence in sports (Experiment 2), or aggressive behavior (Experiment 3). Playing an excessively violent sports video game increased aggressive affect, aggressive cognition, aggressive behavior, and attitudes towards violence in sports. 
Because all games were competitive, these findings indicate that violent content uniquely leads to increases in several aggression-related variables, as predicted by the General Aggression Model and related social–cognitive models. 2009 Elsevier Inc. All rights reserved. In 2002, ESPN aired an investigative piece examining the impact of excessively violent sports video games on youth’s attitudes towards sports (ESPN, 2002). At the time, Midway Games produced several sports games (e.g., NFL Blitz, MLB Slugfest, and NHL Hitz) containing excessive and unrealistic violence, presumably to appeal to non-sport fan video game players. These games were officially licensed by the National Football League, Major League Baseball, and the National Hockey League, which permitted Midway to implement team logos, players’ names, and players’ likenesses into the games. Within these games, players control real-life athletes and can perform excessively violent behaviors on the electronic field. The ESPN program questioned why the athletic leagues would allow their license to be used in this manner and what effect these violent sports games had on young players. Then in December 2004, the NFL granted exclusive license rights to EA Sports (ESPN.com, 2005). In response, Midway Games began publishing a more violent, grittier football game based on a fictitious league. The new football video game, which is rated appropriate only for people seventeen and older, features fictitious players engaging in excessive violent behaviors on and off the field, drug use, sex, and gambling (IGN.com, 2005). Violence in video games has been a major social issue, not limited to violence in sports video games. Over 85% of the games on the market contain some violence (Children Now, 2001).
Approximately half of video games include serious violent actions toward other game characters (Children Now, 2001; Dietz, 1998; Dill, Gentile, Richter, & Dill, 2005). Indeed, Congressman Joe Baca of California recently introduced Federal legislation to require that violent video games contain a warning label about their link to aggression (Baca, 2009). Since 1999, the amount of daily video game usage by youth has nearly doubled (Roberts, Foehr, & Rideout, 2005). Almost 60% of American youth from ages 8 to 18 report playing video games on “any given day” and 30% report playing for more than an average of an hour a day (Roberts et al., 2005). Video game usage is high in youth regardless of sex, race, parental education, or household income (Roberts et al., 2005). Competition-only versus violent-content hypotheses Recent meta-analyses (e.g., Anderson et al., 2004, submitted for publication) have shown that violent video game exposure increases physiological arousal, aggressive affect, aggressive cognition, and aggressive behavior. Other studies link violent video game play to physiological desensitization to violence (e.g., Bartholow, Bushman, & Sestir, 2006; Carnagey, Anderson, & Bushman, 2007). Particularly interesting is the recent finding that violent video game play can increase aggression in both short and long term contexts. Besides the empirical evidence, there are strong theoretical reasons from the cognitive, social, and personality domains to expect violent video game effects on aggression-related variables. However, currently there are two competing hypotheses as to how violent video games increase aggression: the violent-content hypothesis and the competition-only hypothesis.
General Aggression Model and the violent-content hypothesis The General Aggression Model (GAM) is an integration of several prior models of aggression (e.g., social learning theory, cognitive neoassociation) and has been detailed in several publications (Anderson & Bushman, 2002; Anderson & Carnagey, 2004; Anderson, Gentile, & Buckley, 2007; Anderson & Huesmann, 2003). GAM describes a cyclical pattern of interaction between the person and the environment. Input variables, such as provocation and aggressive personality, can affect decision processes and behavior by influencing one’s present internal state in at least one of three primary ways: by influencing current cognitions, affective state, and physiological arousal. That is, a specific input variable may directly influence only one, or two, or all three aspects of a person’s internal state. For example, uncomfortably hot temperature appears to increase aggression primarily by its direct impact on affective state (Anderson, Anderson, Dorr, DeNeve, & Flanagan, 2000). Of course, because affect, arousal, and cognition tend to influence each other, even input variables that primarily influence one aspect of internal state also tend to indirectly influence the other aspects. Although GAM is a general model and not specifically a model of media violence effects, it can easily be applied to media effects. Theoretically, violent media exposure might affect all three components of present internal state. Research has shown that playing violent video games can temporarily increase aggressive thoughts (e.g., Kirsh, 1998), affect (e.g., Ballard & Weist, 1996), and arousal (e.g., Calvert & Tan, 1994). Of course, nonviolent games also can increase arousal, and for this reason much prior work has focused on testing whether violent content can increase aggressive behavior even when physiological arousal is controlled. 
This usually is accomplished by selecting nonviolent games that are equally arousing (e.g., Anderson et al., 2004). Despite GAM’s primary focus on the current social episode, it is not restricted to short-term effects. With repeated exposure to certain types of stimuli (e.g., media violence, certain parenting practices), particular knowledge structures (e.g., aggressive scripts, attitudes towards violence) become chronically accessible. Over time, the individual employs these knowledge structures and occasionally receives environmental reinforcement for their usage. With time and repeated use, these knowledge structures gain strength and connections to other stimuli and knowledge structures, and therefore are more likely to be used in later situations. This accounts for the finding that repeatedly exposing children to media violence increases later aggression, even into adulthood (Anderson, Sakamoto, Gentile, Ihori, & Shibuya, 2008; Huesmann & Miller, 1994; Huesmann, Moise-Titus, Podolski, & Eron, 2003; Möller & Krahé, 2009; Wallenius & Punamaki, 2008). Such long-term effects result from the development, automatization, and reinforcement of aggression-related knowledge structures. In essence, the creation and automatization of these aggression-related knowledge structures and concomitant emotional desensitization changes the individual’s personality. For example, long-term consumers of violent media can become more aggressive in outlook, perceptual biases, attitudes, beliefs, and behavior than they were before the repeated exposure, or would have become without such exposure (e.g., Funk, Baldacci, Pasold, & Baumgardner, 2004; Gentile, Lynch, Linder, & Walsh, 2004; Krahé & Möller, 2004; Uhlmann & Swanson, 2004).
In sum, GAM predicts that one way violent video games increase aggression is by the violent content increasing at least one of the aggression-related aspects of a person’s current internal state (short-term context), and over time increasing the chronic accessibility of aggression-related knowledge structures. This is the violent-content hypothesis. The competition hypothesis The competition hypothesis maintains that competitive situations stimulate aggressiveness. According to this hypothesis, many previous short-term (experimental) video game studies have found links between violent games and aggression not because of the violent content, but because violent video games typically involve competition, whereas nonviolent video games frequently are noncompetitive. The competitive aspect of video games might increase aggression by increasing arousal or by increasing aggressive thoughts or affect. Previous research has demonstrated that increases in physiological arousal can cause increases in aggression under some circumstances (Berkowitz, 1993). Competitive aspects of violent video games could also increase aggressive cognitions via links between aggressive and competition concepts (Anderson & Morrow, 1995; Deutsch, 1949, 1993). Thus, at a general level such competition effects are entirely consistent with GAM and with the violentcontent hypothesis. However, a strong version of the competition hypothesis states that violent content has no impact beyond its effects on competition and its sequela. This strong version, which we call the competition-only hypothesis, has not been adequately tested. Testing the competition-only hypothesis There has been little research conducted to examine the violent-content hypothesis versus the competition-only hypothesis (see Carnagey & Anderson, 2005 for one such example). To test these hypotheses against each other, one must randomly assign participants to play either violent or nonviolent video games, all of which are competitive. 
The use of sports video games meets this requirement and has other benefits. E", "title": "" } ]
scidocsrr
4457e9caa09452b094b448ff520bf0ff
Estimation of Arrival Flight Delay and Delay Propagation in a Busy Hub-Airport
[ { "docid": "feb649029daef80f2ecf33221571a0b1", "text": "The National Airspace System (NAS) is a large and complex system with thousands of interrelated components: administration, control centers, airports, airlines, aircraft, passengers, etc. The complexity of the NAS creates many difficulties in management and control. One of the most pressing problems is flight delay. Delay creates high cost to airlines, complaints from passengers, and difficulties for airport operations. As demand on the system increases, the delay problem becomes more and more prominent. For this reason, it is essential for the Federal Aviation Administration to understand the causes of delay and to find ways to reduce delay. Major contributing factors to delay are congestion at the origin airport, weather, increasing demand, and air traffic management (ATM) decisions such as the Ground Delay Programs (GDP). Delay is an inherently stochastic phenomenon. Even if all known causal factors could be accounted for, macro-level national airspace system (NAS) delays could not be predicted with certainty from micro-level aircraft information. This paper presents a stochastic model that uses Bayesian Networks (BNs) to model the relationships among different components of aircraft delay and the causal factors that affect delays. A case study on delays of departure flights from Chicago O’Hare international airport (ORD) to Hartsfield-Jackson Atlanta International Airport (ATL) reveals how local and system level environmental and human-caused factors combine to affect components of delay, and how these components contribute to the final arrival delay at the destination airport.", "title": "" } ]
[ { "docid": "736a454a8aa08edf645312cecc7925c3", "text": "This paper describes an <i>analogy ontology</i>, a formal representation of some key ideas in analogical processing, that supports the integration of analogical processing with first-principles reasoners. The ontology is based on Gentner's <i>structure-mapping</i> theory, a psychological account of analogy and similarity. The semantics of the ontology are enforced via procedural attachment, using cognitive simulations of structure-mapping to provide analogical processing services. Queries that include analogical operations can be formulated in the same way as standard logical inference, and analogical processing systems in turn can call on the services of first-principles reasoners for creating cases and validating their conjectures. We illustrate the utility of the analogy ontology by demonstrating how it has been used in three systems: A crisis management analogical reasoner that answers questions about international incidents, a course of action analogical critiquer that provides feedback about military plans, and a comparison question-answering system for knowledge capture. These systems rely on large, general-purpose knowledge bases created by other research groups, thus demonstrating the generality and utility of these ideas.", "title": "" }, { "docid": "3c29a0579a2f7d4f010b9b2f2df16e2c", "text": "In recent years research on human activity recognition using wearable sensors has enabled to achieve impressive results on real-world data. However, the most successful activity recognition algorithms require substantial amounts of labeled training data. The generation of this data is not only tedious and error prone but also limits the applicability and scalability of today's approaches. This paper explores and systematically analyzes two different techniques to significantly reduce the required amount of labeled training data. 
The first technique is based on semi-supervised learning and uses self-training and co-training. The second technique is inspired by active learning. In this approach the system actively asks which data the user should label. With both techniques, the required amount of training data can be reduced significantly while obtaining similar and sometimes even better performance than standard supervised techniques. The experiments are conducted using one of the largest and richest currently available datasets.", "title": "" }, { "docid": "c3ba6fea620b410d5b6d9b07277d431e", "text": "Nanonetworks, i.e., networks of nano-sized devices, are the enabling technology of long-awaited applications in the biological, industrial and military fields. For the time being, the size and power constraints of nano-devices limit the applicability of classical wireless communication in nanonetworks. Alternatively, nanomaterials can be used to enable electromagnetic (EM) communication among nano-devices. In this paper, a novel graphene-based nano-antenna, which exploits the behavior of Surface Plasmon Polariton (SPP) waves in semi-finite size Graphene Nanoribbons (GNRs), is proposed, modeled and analyzed. First, the conductivity of GNRs is analytically and numerically studied by starting from the Kubo formalism to capture the impact of the electron lateral confinement in GNRs. Second, the propagation of SPP waves in GNRs is analytically and numerically investigated, and the SPP wave vector and propagation length are computed. Finally, the nano-antenna is modeled as a resonant plasmonic cavity, and its frequency response is determined. The results show that, by exploiting the high mode compression factor of SPP waves in GNRs, graphene-based plasmonic nano-antennas are able to operate at much lower frequencies than their metallic counterparts, e.g., the Terahertz Band for a one-micrometer-long ten-nanometers-wide antenna. 
This result has the potential to enable EM communication in nanonetworks.", "title": "" }, { "docid": "f753712eed9e5c210810d2afd1366eb8", "text": "To improve FPGA performance for arithmetic circuits that are dominated by multi-input addition operations, an FPGA logic block is proposed that can be configured as a 6:2 or 7:2 compressor. Compressors have been used successfully in the past to realize parallel multipliers in VLSI technology; however, the peculiar structure of FPGA logic blocks, coupled with the high cost of the routing network relative to ASIC technology, renders compressors ineffective when mapped onto the general logic of an FPGA. On the other hand, current FPGA logic cells have already been enhanced with carry chains to improve arithmetic functionality, for example, to realize fast ternary carry-propagate addition. The contribution of this article is a new FPGA logic cell that is specialized to help realize efficient compressor trees on FPGAs. The new FPGA logic cell has two variants that can respectively be configured as a 6:2 or a 7:2 compressor using additional carry chains that, coupled with lookup tables, provide the necessary functionality. Experiments show that the use of these modified logic cells significantly reduces the delay of compressor trees synthesized on FPGAs compared to state-of-the-art synthesis techniques, with a moderate increase in area and power consumption.", "title": "" }, { "docid": "b15078182915859c3eab4b174115cd0f", "text": "We consider retrieving a specific temporal segment, or moment, from a video given a natural language text description. Methods designed to retrieve whole video clips with natural language determine what occurs in a video but not when. To address this issue, we propose the Moment Context Network (MCN) which effectively localizes natural language queries in videos by integrating local and global video features over time. 
A key obstacle to training our MCN model is that current video datasets do not include pairs of localized video segments and referring expressions, or text descriptions which uniquely identify a corresponding moment. Therefore, we collect the Distinct Describable Moments (DiDeMo) dataset which consists of over 10,000 unedited, personal videos in diverse visual settings with pairs of localized video segments and referring expressions. We demonstrate that MCN outperforms several baseline methods and believe that our initial results together with the release of DiDeMo will inspire further research on localizing video moments with natural language.", "title": "" }, { "docid": "bf7b3cdb178fd1969257f56c0770b30b", "text": "Relation Extraction is an important subtask of Information Extraction which has the potential of employing deep learning (DL) models with the creation of large datasets using distant supervision. In this review, we compare the contributions and pitfalls of the various DL models that have been used for the task, to help guide the path ahead.", "title": "" }, { "docid": "d3d471b6b377d8958886a2f6c89d5061", "text": "In common Web-based search interfaces, it can be difficult to formulate queries that simultaneously combine temporal, spatial, and topical data filters. We investigate how coordinated visualizations can enhance search and exploration of information on the World Wide Web by easing the formulation of these types of queries. Drawing from visual information seeking and exploratory search, we introduce VisGets - interactive query visualizations of Web-based information that operate with online information within a Web browser. VisGets provide the information seeker with visual overviews of Web resources and offer a way to visually filter the data. Our goal is to facilitate the construction of dynamic search queries that combine filters from more than one data dimension. 
We present a prototype information exploration system featuring three linked VisGets (temporal, spatial, and topical), and used it to visually explore news items from online RSS feeds.", "title": "" }, { "docid": "a0acd4870951412fa31bc7803f927413", "text": "Surprisingly little is understood about the physiologic and pathologic processes that involve intraoral sebaceous glands. Neoplasms are rare. Hyperplasia of these glands is undoubtedly more common, but criteria for the diagnosis of intraoral sebaceous hyperplasia have not been established. These lesions are too often misdiagnosed as large \"Fordyce granules\" or, when very large, as sebaceous adenomas. On the basis of a series of 31 nonneoplastic sebaceous lesions and on published data, the following definition is proposed: intraoral sebaceous hyperplasia occurs when a lesion, judged clinically to be a distinct abnormality that requires biopsy for diagnosis or confirmation of clinical impression, has histologic features of one or more well-differentiated sebaceous glands that exhibit no fewer than 15 lobules per gland. Sebaceous glands with fewer than 15 lobules that form an apparently distinct clinical lesion on the buccal mucosa are considered normal, whereas similar lesions of other intraoral sites are considered ectopic sebaceous glands. Sebaceous adenomas are less differentiated than sebaceous hyperplasia.", "title": "" }, { "docid": "5a583fe6fae9f0624bcde5043c56c566", "text": "In this paper, a microstrip dipole antenna on a flexible organic substrate is proposed. The antenna arms are tilted to make different variations of the dipole with more compact size and almost same performance. The antennas are fed using a coplanar stripline (CPS) geometry (Simons, 2001). The antennas are then conformed over cylindrical surfaces and their performances are compared to their flat counterparts. 
Good performance is achieved for both the flat and conformal antennas.", "title": "" }, { "docid": "8b519431416a4bac96a8a975d8973ef9", "text": "A recent and very promising approach for combinatorial optimization is to embed local search into the framework of evolutionary algorithms. In this paper, we present such hybrid algorithms for the graph coloring problem. These algorithms combine a new class of highly specialized crossover operators and a well-known tabu search algorithm. Experiments of such a hybrid algorithm are carried out on large DIMACS Challenge benchmark graphs. Results prove very competitive with and even better than those of state-of-the-art algorithms. Analysis of the behavior of the algorithm sheds light on ways to further improvement.", "title": "" }, { "docid": "0b64a9277c3ad2713a14f0c9ab02fd81", "text": "Insulin-like growth factor 2 (IGF2) is a 7.5 kDa mitogenic peptide hormone expressed by liver and many other tissues. It is three times more abundant in serum than IGF1, but our understanding of its physiological and pathological roles has lagged behind that of IGF1. Expression of the IGF2 gene is strictly regulated. Over-expression occurs in many cancers and is associated with a poor prognosis. Elevated serum IGF2 is also associated with increased risk of developing various cancers including colorectal, breast, prostate and lung. There is established clinical utility for IGF2 measurement in the diagnosis of non-islet cell tumour hypoglycaemia, a condition characterised by a molar IGF2:IGF1 ratio O10. Recent advances in understanding of the pathophysiology of IGF2 in cancer have suggested much novel clinical utility for its measurement. Measurement of IGF2 in blood and genetic and epigenetic tests of the IGF2 gene may help assess cancer risk and prognosis. Further studies will determine whether these tests enter clinical practice. New therapeutic approaches are being developed to target IGF2 action. 
This review provides a clinical perspective on IGF2 and an update on recent research findings.", "title": "" }, { "docid": "c16499b3945603d04cf88fec7a2c0a85", "text": "Recovering structure and motion parameters given an image pair or a sequence of images is a well studied problem in computer vision. This is often achieved by employing Structure from Motion (SfM) or Simultaneous Localization and Mapping (SLAM) algorithms based on the real-time requirements. Recently, with the advent of Convolutional Neural Networks (CNNs) researchers have explored the possibility of using machine learning techniques to reconstruct the 3D structure of a scene and jointly predict the camera pose. In this work, we present a framework that achieves state-of-the-art performance on single image depth prediction for both indoor and outdoor scenes. The depth prediction system is then extended to predict optical flow and ultimately the camera pose and trained end-to-end. Our framework outperforms previous deep-learning based motion prediction approaches, and we also demonstrate that the state-of-the-art metric depths can be further improved using the knowledge of pose.", "title": "" }, { "docid": "84d0f682d23d0f54789f83a0d68f4b0e", "text": "AIM\nTetracycline-stained tooth structure is difficult to bleach using nightguard tray methods. The possible benefits of in-office light-accelerated bleaching systems based on the photo-Fenton reaction are of interest as possible adjunctive treatments. This study was a proof of concept for possible benefits of this approach, using dentine slabs from human tooth roots stained in a reproducible manner with the tetracycline antibiotic demeclocycline hydrochloride.\n\n\nMATERIALS AND METHODS\nColor changes over time in tetracycline-stained roots from single rooted teeth treated using gel (Zoom! WhiteSpeed(®)) alone, blue LED light alone, or gel plus light in combination were tracked using standardized digital photography.
Controls received no treatment. Changes in color channel data were tracked over time for each treatment group (N = 20 per group).\n\n\nRESULTS\nDentin was lighter after bleaching, with significant improvements in the dentin color for the blue channel (yellow shade) followed by the green channel and luminosity. The greatest changes occurred with gel activated by light (p < 0.0001), which was superior to effects seen with gel alone. Use of the light alone did not significantly alter shade.\n\n\nCONCLUSION\nThis proof of concept study demonstrates that bleaching using the photo-Fenton chemistry is capable of lightening tetracycline-stained dentine. Further investigation of the use of this method for treating tetracycline-stained teeth in clinical settings appears warranted.\n\n\nCLINICAL SIGNIFICANCE\nBecause tetracycline staining may respond to bleaching treatments based on the photo-Fenton reaction, systems, such as Zoom! WhiteSpeed, may have benefits as adjuncts to home bleaching for patients with tetracycline staining.", "title": "" }, { "docid": "cdb83e9a31172d6687622dc7ac841c91", "text": "Introduction Various forms of social media are used by many mothers to maintain social ties and manage the stress associated with their parenting roles and responsibilities. ‘Mommy blogging’ as a specific type of social media usage is a common and growing phenomenon, but little is known about mothers’ blogging-related experiences and how these may contribute to their wellbeing. This exploratory study investigated the blogging-related motivations and goals of Australian mothers. Methods An online survey was emailed to members of an Australian online parenting community. The survey included open-ended questions that invited respondents to discuss their motivations and goals for blogging. A thematic analysis using a grounded approach was used to analyze the qualitative data obtained from 235 mothers.
Results Five primary motivations for blogging were identified: developing connections with others, experiencing heightened levels of mental stimulation, achieving self-validation, contributing to the welfare of others, and extending skills and abilities. Discussion These motivations are discussed in terms of their various properties and dimensions to illustrate how these mothers appear to use blogging to enhance their psychological wellbeing.", "title": "" }, { "docid": "8ccca373252c045107753081db3de051", "text": "We describe a computer system that provides a real-time musical accompaniment for a live soloist in a piece of non-improvised music for soloist and accompaniment. A Bayesian network is developed that represents the joint distribution on the times at which the solo and accompaniment notes are played, relating the two parts through a layer of hidden variables. The network is first constructed using the rhythmic information contained in the musical score. The network is then trained to capture the musical interpretations of the soloist and accompanist in an off-line rehearsal phase. During live accompaniment the learned distribution of the network is combined with a real-time analysis of the soloist's acoustic signal, performed with a hidden Markov model, to generate a musically principled accompaniment that respects all available sources of knowledge. A live demonstration will be provided.", "title": "" }, { "docid": "291ece850c1c6afcda49ac2e8a74319e", "text": "The aim of this paper is to explore how well the task of text vs. nontext distinction can be solved in online handwritten documents using only offline information. Two systems are introduced. The first system generates a document segmentation first. For this purpose, four methods originally developed for machine printed documents are compared: x-y cut, morphological closing, Voronoi segmentation, and whitespace analysis. 
A state-of-the-art classifier then distinguishes between text and non-text zones. The second system follows a bottom-up approach that classifies connected components. Experiments are performed on a new dataset of online handwritten documents containing different content types in arbitrary arrangements. The best system assigns 94.3% of the pixels to the correct class.", "title": "" },
This ambiguity in both detection and pose estimation means that an object instance can be perfectly described by several different poses and even classes. In this work we propose to explicitly deal with this uncertainty. For each object instance we predict multiple pose and class outcomes to estimate the specific pose distribution generated by symmetries and repetitive textures. The distribution collapses to a single outcome when the visual appearance uniquely identifies just one valid pose. We show the benefits of our approach which provides not only a better explanation for pose ambiguity, but also a higher accuracy in terms of pose estimation.", "title": "" }, { "docid": "ebe138de5aec0be8cb2e80adb8d59246", "text": "In recent years, online reviews have become the most important resource of customers’ opinions. These reviews are used increasingly by individuals and organizations to make purchase and business decisions. Unfortunately, driven by the desire for profit or publicity, fraudsters have produced deceptive (spam) reviews. The fraudsters’ activities mislead potential customers and organizations reshaping their businesses and prevent opinion-mining techniques from reaching accurate conclusions. The present research focuses on systematically analyzing and categorizingmodels that detect review spam. Next, the study proceeds to assess them in terms of accuracy and results. We find that studies can be categorized into three groups that focus on methods to detect spam reviews, individual spammers and group spam. Different detection techniques have different strengths and weaknesses and thus favor different detection contexts. 2014 Published by Elsevier Ltd.", "title": "" }, { "docid": "e5b2857bfe745468453ef9dabbf5c527", "text": "We assume that a high-dimensional datum, like an image, is a compositional expression of a set of properties, with a complicated non-linear relationship between the datum and its properties. 
This paper proposes a factorial mixture prior for capturing latent properties, thereby adding structured compositionality to deep generative models. The prior treats a latent vector as belonging to Cartesian product of subspaces, each of which is quantized separately with a Gaussian mixture model. Some mixture components can be set to represent properties as observed random variables whenever labeled properties are present. Through a combination of stochastic variational inference and gradient descent, a method for learning how to infer discrete properties in an unsupervised or semi-supervised way is outlined and empirically evaluated.", "title": "" } ]
scidocsrr
53cdbbf8e5d99570f01d5a6de645d932
Microstrip high-pass filter with attenuation poles using cross-coupling
[ { "docid": "7e61b5f63d325505209c3284c8a444a1", "text": "A method to design low-pass filters (LPF) having a defected ground structure (DGS) and broadened transmission-line elements is proposed. The previously presented technique for obtaining a three-stage LPF using DGS by Lim et al. is generalized to propose a method that can be applied to design N-pole LPFs for N ≤ 5. As an example, a five-pole LPF having a DGS is designed and measured. Accurate curve-fitting results and the successive design process to determine the required size of the DGS corresponding to the LPF prototype elements are described. The proposed LPF having a DGS, called a DGS-LPF, includes transmission-line elements with very low impedance instead of open stubs in realizing the required shunt capacitance. Therefore, open stubs, tee- or cross-junction elements, and high-impedance line sections are not required for the proposed LPF, while they all have been essential in conventional LPFs. Due to the widely broadened transmission-line elements, the size of the DGS-LPF is compact.", "title": "" } ]
[ { "docid": "9b8b91bbade21813b16dfa40e70c2b91", "text": "to name a few. Because of its importance to the study of emotion, a number of observer-based systems of facial expression measurement have been developed. Using FACS and viewing video-recorded facial behavior at frame rate and slow motion, coders can manually code nearly all possible facial expressions, which are decomposed into action units (AUs). Action units, with some qualifications, are the smallest visually discriminable facial movements. By comparison, other systems are less thorough (Malatesta et al., 1989), fail to differentiate between some anatomically distinct movements (Oster, Hegley, & Nagel, 1992), consider movements that are not anatomically distinct as separable (Oster et al., 1992), and often assume a one-to-one mapping between facial expression and emotion (for a review of these systems, see Cohn & Ekman, in press). Unlike systems that use emotion labels to describe expression, FACS explicitly distinguishes between facial actions and inferences about what they mean. FACS itself is descriptive and includes no emotion-specified descriptors. Hypotheses and inferences about the emotional meaning of facial actions are extrinsic to FACS. If one wishes to make emotion-based inferences from FACS codes, a variety of related resources exist. These include the FACS Investigators' Guide. These resources suggest combination rules for defining emotion-specified expressions from FACS action units, but this inferential step remains extrinsic to FACS. Because of its descriptive power, FACS is regarded by many as the standard measure for facial behavior and is used widely in diverse fields. Beyond emotion science, these include facial neuromuscular disorders", "title": "" },
The models are full parsing models in the sense that probabilities are defined for complete parses, rather than for independent events derived by decomposing the parse tree. Discriminative training is used to estimate the models, which requires incorrect parses for each sentence in the training data as well as the correct parse. The lexicalized grammar formalism used is Combinatory Categorial Grammar (CCG), and the grammar is automatically extracted from CCGbank, a CCG version of the Penn Treebank. The combination of discriminative training and an automatically extracted grammar leads to a significant memory requirement (up to 25 GB), which is satisfied using a parallel implementation of the BFGS optimization algorithm running on a Beowulf cluster. Dynamic programming over a packed chart, in combination with the parallel implementation, allows us to solve one of the largest-scale estimation problems in the statistical parsing literature in under three hours. A key component of the parsing system, for both training and testing, is a Maximum Entropy supertagger which assigns CCG lexical categories to words in a sentence. The supertagger makes the discriminative training feasible, and also leads to a highly efficient parser. Surprisingly, given CCG's spurious ambiguity, the parsing speeds are significantly higher than those reported for comparable parsers in the literature. We also extend the existing parsing techniques for CCG by developing a new model and efficient parsing algorithm which exploits all derivations, including CCG's nonstandard derivations. This model and parsing algorithm, when combined with normal-form constraints, give state-of-the-art accuracy for the recovery of predicate-argument dependencies from CCGbank. The parser is also evaluated on DepBank and compared against the RASP parser, outperforming RASP overall and on the majority of relation types. The evaluation on DepBank raises a number of issues regarding parser evaluation. 
This article provides a comprehensive blueprint for building a wide-coverage CCG parser. We demonstrate that both accurate and highly efficient parsing is possible with CCG.", "title": "" }, { "docid": "98df90734e276e0cf020acfdcaa9b4b4", "text": "High parallel framework has been proved to be very suitable for graph processing. There are various work to optimize the implementation in FPGAs, a pipeline parallel device. The key to make use of the parallel performance of FPGAs is to process graph data in pipeline model and take advantage of on-chip memory to realize necessary locality process. This paper proposes a modularize graph processing framework, which focus on the whole executing procedure with the extremely different degree of parallelism. The framework has three contributions. First, the combination of vertex-centric and edge-centric processing framework can been adjusting in the executing procedure to accommodate top-down algorithm and bottom-up algorithm. Second, owing to the pipeline parallel and finite on-chip memory accelerator, the novel edge-block, a block consist of edges vertex, achieve optimizing the way to utilize the on-chip memory to group the edges and stream the edges in a block to realize the stream pattern to pipeline parallel processing. Third, depending to the analysis of the block structure of nature graph and the executing characteristics during graph processing, we design a novel conversion dispatcher to change processing module, to match the corresponding exchange point. Our evaluation with four graph applications on five diverse scale graph shows that .", "title": "" }, { "docid": "afd32dd6a9b076ed976ecd612c1cc14f", "text": "Many digital images contain blurred regions which are caused by motion or defocus. Automatic detection and classification of blurred image regions are very important for different multimedia analyzing tasks. 
This paper presents a simple and effective automatic image blurred region detection and classification technique. In the proposed technique, blurred image regions are first detected by examining singular value information for each image pixel. The blur types (i.e. motion blur or defocus blur) are then determined based on a certain alpha channel constraint that requires neither image deblurring nor blur kernel estimation. Extensive experiments have been conducted over a dataset that consists of 200 blurred image regions and 200 image regions with no blur that are extracted from 100 digital images. Experimental results show that the proposed technique detects and classifies the two types of image blurs accurately. The proposed technique can be used in many different multimedia analysis applications such as image segmentation, depth estimation and information retrieval.", "title": "" },
We achieve quite competitive results on the PASCAL VOC 2012 and ADE20K datasets, surpassing the baseline and related works.", "title": "" },
There are multiple methods for measuring sentiments, including lexical-based approaches and supervised machine learning methods. Despite the wide use and popularity of some methods, it is unclear which method is better for identifying the polarity (i.e., positive or negative) of a message as the current literature does not provide a method of comparison among existing methods. Such a comparison is crucial for understanding the potential limitations, advantages, and disadvantages of popular methods in analyzing the content of OSNs messages. Our study aims at filling this gap by presenting comparisons of eight popular sentiment analysis methods in terms of coverage (i.e., the fraction of messages whose sentiment is identified) and agreement (i.e., the fraction of identified sentiments that are in tune with ground truth). We develop a new method that combines existing approaches, providing the best coverage results and competitive agreement. We also present a free Web service called iFeel, which provides an open API for accessing and comparing results across different sentiment methods for a given text.", "title": "" }, { "docid": "68f38ad22fe2c9c24d329b181d1761d2", "text": "Data mining approach can be used to discover knowledge by analyzing the patterns or correlations among of fields in large databases. Data mining approach was used to find the patterns of the data from Tanzania Ministry of Water. It is used to predict current and future status of water pumps in Tanzania. The data mining method proposed is XGBoost (eXtreme Gradient Boosting). XGBoost implement the concept of Gradient Tree Boosting which designed to be highly fast, accurate, efficient, flexible, and portable. In addition, Recursive Feature Elimination (RFE) is also proposed to select the important features of the data to obtain an accurate model. The best accuracy achieved with using 27 input factors selected by RFE and XGBoost as a learning model. The achieved result show 80.38% in accuracy. 
The information or knowledge discovered through the data mining approach can be used by the government to improve inspection planning and maintenance, and to identify which factors can cause damage to the water pumps, ensuring the availability of potable water in Tanzania. Using a data mining approach is cost-effective, less time-consuming, and faster than manual inspection.", "title": "" },
Estimation of head pose, on the other hand, is robust to many of these effects but can't provide as fine-grained a resolution in localizing the gaze. For the purpose of keeping the driver safe, it's sufficient to partition gaze into regions. In this effort, a proposed system extracts facial features and classifies their spatial configuration into six regions in real time. The proposed method achieves an average accuracy of 91.4 percent at an average decision rate of 11 Hz on a dataset of 50 drivers from an on-road study.", "title": "" },
Accordingly, smart tourism has emerged over the past few years as a subset of the smart city concept, aiming to provide tourists with solutions that address specific travel-related needs. Dubai is an emerging tourism destination that has implemented smart city and smart tourism platforms to engage various stakeholders. The objective of this study is to identify best practices related to Dubai’s smart city and smart tourism. In so doing, Dubai’s mission and vision along with key dimensions and pillars are identified in relation to the advancements in the literature while highlighting key resources and challenges. A Smart Tourism Dynamic Responsive System (STDRS) framework is proposed while suggesting how Dubai may be able to enhance users’ involvement and their overall experience.", "title": "" },
Additionally, we adapt our models to a new sms-chat domain and obtain a similar gain of 1.0 TER / 0.5 BLEU points.", "title": "" }, { "docid": "e649c3a48eccb6165320356e94f5ed7d", "text": "There have been several attempts to create scalable and hardware independent software architectures for Unmanned Aerial Vehicles (UAV). In this work, we propose an onboard architecture for UAVs where hardware abstraction, data storage and communication between modules are efficiently maintained. All processing and software development is done on the UAV while state and mission status of the UAV is monitored from a ground station. The architecture also allows rapid development of mission-specific third party applications on the vehicle with the help of the core module.", "title": "" }, { "docid": "f03a96d81f7eeaf8b9befa73c2b6fbd5", "text": "This research provided the first empirical investigation of how approach and avoidance motives for sacrifice in intimate relationships are associated with personal well-being and relationship quality. In Study 1, the nature of everyday sacrifices made by dating partners was examined, and a measure of approach and avoidance motives for sacrifice was developed. In Study 2, which was a 2-week daily experience study of college students in dating relationships, specific predictions from the theoretical model were tested and both longitudinal and dyadic components were included. Whereas approach motives for sacrifice were positively associated with personal well-being and relationship quality, avoidance motives for sacrifice were negatively associated with personal well-being and relationship quality. Sacrificing for avoidance motives was particularly detrimental to the maintenance of relationships over time. Perceptions of a partner's motives for sacrifice were also associated with well-being and relationship quality. 
Implications for the conceptualization of relationship maintenance processes along these 2 dimensions are discussed.", "title": "" }, { "docid": "d2aebe4f8d8d90427bee7c8b71b1361f", "text": "Automated vehicles are complex systems with a high degree of interdependencies between its components. This complexity sets increasing demands for the underlying software framework. This paper firstly analyzes the requirements for software frameworks. Afterwards an overview on existing software frameworks, that have been used for automated driving projects, is provided with an in-depth introduction into an emerging open-source software framework, the Robot Operating System (ROS). After discussing the main features, advantages and disadvantages of ROS, the communication overhead of ROS is analyzed quantitatively in various configurations showing its applicability for systems with a high data load.", "title": "" }, { "docid": "c8bd7e1e70ac2dbe613c6eb8efe3bd5f", "text": "This work aims at constructing a semiotic framework for an expanded evolutionary synthesis grounded on Peirce's universal categories and the six space/time/function relations [Taborsky, E., 2004. The nature of the sign as a WFF--a well-formed formula, SEED J. (Semiosis Evol. Energy Dev.) 4 (4), 5-14] that integrate the Lamarckian (internal/external) and Darwinian (individual/population) cuts. According to these guide lines, it is proposed an attempt to formalize developmental systems theory by using the notion of evolving developing agents (EDA) that provides an internalist model of a general transformative tendency driven by organism's need to cope with environmental uncertainty. Development and evolution are conceived as non-programmed open-ended processes of information increase where EDA reach a functional compromise between: (a) increments of phenotype's uniqueness (stability and specificity) and (b) anticipation to environmental changes. 
Accordingly, changes in mutual information content between the phenotype/environment drag subsequent changes in mutual information content between genotype/phenotype and genotype/environment at two interwoven scales: individual life cycle (ontogeny) and species time (phylogeny), respectively. Developmental terminal additions along with increment minimization of developmental steps must be positively selected.", "title": "" }, { "docid": "207d3e95d3f04cafa417478ed9133fcc", "text": "Urban growth is a worldwide phenomenon but the rate of urbanization is very fast in developing country like Egypt. It is mainly driven by unorganized expansion, increased immigration, rapidly increasing population. In this context, land use and land cover change are considered one of the central components in current strategies for managing natural resources and monitoring environmental changes. In Egypt, urban growth has brought serious losses of agricultural land and water bodies. Urban growth is responsible for a variety of urban environmental issues like decreased air quality, increased runoff and subsequent flooding, increased local temperature, deterioration of water quality, etc. Egypt possessed a number of fast growing cities. Mansoura and Talkha cities in Daqahlia governorate are expanding rapidly with varying growth rates and patterns. In this context, geospatial technologies and remote sensing methodology provide essential tools which can be applied in the analysis of land use change detection. This paper is an attempt to assess the land use change detection by using GIS in Mansoura and Talkha from 1985 to 2010. Change detection analysis shows that built-up area has been increased from 28 to 255 km by more than 30% and agricultural land reduced by 33%. Future prediction is done by using the Markov chain analysis. 
Information on urban growth and land use and land cover change is very useful to local government and urban planners in preparing future plans for the sustainable development of the city.", "title": "" },
scidocsrr
d99d907ffd9190cff50689e768857791
Disease Prediction from Electronic Health Records Using Generative Adversarial Networks
[ { "docid": "897a6d208785b144b5d59e4f346134cd", "text": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers was among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems.", "title": "" }, { "docid": "ca331150e60e24f038f9c440b8125ddc", "text": "Class imbalance is one of the challenges in the machine learning and data mining fields. Imbalanced data sets degrade the performance of data mining and machine learning techniques, as the overall accuracy and decision making become biased toward the majority class, which leads to misclassifying the minority class samples or even treating them as noise. 
This paper proposes a general survey for class imbalance problem solutions and the most significant investigations recently introduced by researchers.", "title": "" } ]
[ { "docid": "da43061319adbfd41c77483590a3c819", "text": "Sleep bruxism (SB) is reported by 8% of the adult population and is mainly associated with rhythmic masticatory muscle activity (RMMA) characterized by repetitive jaw muscle contractions (3 bursts or more at a frequency of 1 Hz). The consequences of SB may include tooth destruction, jaw pain, headaches, or the limitation of mandibular movement, as well as tooth-grinding sounds that disrupt the sleep of bed partners. SB is probably an extreme manifestation of a masticatory muscle activity occurring during the sleep of most normal subjects, since RMMA is observed in 60% of normal sleepers in the absence of grinding sounds. The pathophysiology of SB is becoming clearer, and there is an abundance of evidence outlining the neurophysiology and neurochemistry of rhythmic jaw movements (RJM) in relation to chewing, swallowing, and breathing. The sleep literature provides much evidence describing the mechanisms involved in the reduction of muscle tone, from sleep onset to the atonia that characterizes rapid eye movement (REM) sleep. Several brainstem structures (e.g., reticular pontis oralis, pontis caudalis, parvocellularis) and neurochemicals (e.g., serotonin, dopamine, gamma aminobutyric acid [GABA], noradrenaline) are involved in both the genesis of RJM and the modulation of muscle tone during sleep. It remains unknown why a high percentage of normal subjects present RMMA during sleep and why this activity is three times more frequent and higher in amplitude in SB patients. It is also unclear why RMMA during sleep is characterized by co-activation of both jaw-opening and jaw-closing muscles instead of the alternating jaw-opening and jaw-closing muscle activity pattern typical of chewing. The final section of this review proposes that RMMA during sleep has a role in lubricating the upper alimentary tract and increasing airway patency. 
The review concludes with an outline of questions for future research.", "title": "" }, { "docid": "24c1b31bac3688c901c9b56ef9a331da", "text": "Advanced Persistent Threats (APTs) are a new breed of internet based smart threats, which can go undetected with the existing state of-the-art internet traffic monitoring and protection systems. With the evolution of internet and cloud computing, a new generation of smart APT attacks has also evolved and signature based threat detection systems are proving to be futile and insufficient. One of the essential strategies in detecting APTs is to continuously monitor and analyze various features of a TCP/IP connection, such as the number of transferred packets, the total count of the bytes exchanged, the duration of the TCP/IP connections, and details of the number of packet flows. The current threat detection approaches make extensive use of machine learning algorithms that utilize statistical and behavioral knowledge of the traffic. However, the performance of these algorithms is far from satisfactory in terms of reducing false negatives and false positives simultaneously. Mostly, current algorithms focus on reducing false positives, only. This paper presents a fractal based anomaly classification mechanism, with the goal of reducing both false positives and false negatives, simultaneously. A comparison of the proposed fractal based method with a traditional Euclidean based machine learning algorithm (k-NN) shows that the proposed method significantly outperforms the traditional approach by reducing false positive and false negative rates, simultaneously, while improving the overall classification rates.", "title": "" }, { "docid": "59759e16adfbb3b08cf9a8deb8352b6e", "text": "Media images of the female body commonly represent reigning appearance ideals of the era in which they are published. To date, limited documentation of the genital appearance ideals in mainstream media exists. 
Analysis 1 sought to describe genital appearance ideals (i.e., mons pubis and labia majora visibility, labia minora size and color, and pubic hair style) and general physique ideals (i.e., hip, waist, and bust size, height, weight, and body mass index [BMI]) across time based on 647 Playboy Magazine centerfolds published between 1953 and 2007. Analysis 2 focused exclusively on the genital appearance ideals embodied by models in 185 Playboy photographs published between 2007 and 2008. Taken together, results suggest the perpetuation of a \"Barbie Doll\" ideal characterized by a low BMI, narrow hips, a prominent bust, and hairless, undefined genitalia resembling those of a prepubescent female.", "title": "" }, { "docid": "c56eac3f4ee971beb833d25d95ff2f10", "text": "Automatic Number Plate Recognition (ANPR) is a real time embedded system which automatically recognizes the license number of vehicles. In this paper, the task of recognizing number plate for Indian conditions is considered, where number plate standards are rarely followed.", "title": "" }, { "docid": "6005ebbe5848655fda5127f555f70764", "text": "The ability to record and replay program execution helps significantly in debugging non-deterministic MPI applications by reproducing message-receive orders. However, the large amount of data that traditional record-and-reply techniques record precludes its practical applicability to massively parallel applications. In this paper, we propose a new compression algorithm, Clock Delta Compression (CDC), for scalable record and replay of non-deterministic MPI applications. CDC defines a reference order of message receives based on a totally ordered relation using Lamport clocks, and only records the differences between this reference logical-clock order and an observed order. Our evaluation shows that CDC significantly reduces the record data size. 
For example, when we apply CDC to Monte Carlo particle transport Benchmark (MCB), which represents common non-deterministic communication patterns, CDC reduces the record size by approximately two orders of magnitude compared to traditional techniques and incurs between 13.1% and 25.5% of runtime overhead.", "title": "" }, { "docid": "6adbe9f2de5a070cf9c1b7f708f4a452", "text": "Prior research has provided valuable insights into how and why employees make a decision about the adoption and use of information technologies (ITs) in the workplace. From an organizational point of view, however, the more important issue is how managers make informed decisions about interventions that can lead to greater acceptance and effective utilization of IT. There is limited research in the IT implementation literature that deals with the role of interventions to aid such managerial decision making. Particularly, there is a need to understand how various interventions can influence the known determinants of IT adoption and use. To address this gap in the literature, we draw from the vast body of research on the technology acceptance model (TAM), particularly the work on the determinants of perceived usefulness and perceived ease of use, and: (i) develop a comprehensive nomological network (integrated model) of the determinants of individual level (IT) adoption and use; (ii) empirically test the proposed integrated model; and (iii) present a research agenda focused on potential preand postimplementation interventions that can enhance employees’ adoption and use of IT. Our findings and research agenda have important implications for managerial decision making on IT implementation in organizations. 
Subject Areas: Design Characteristics, Interventions, Management Support, Organizational Support, Peer Support, Technology Acceptance Model (TAM), Technology Adoption, Training, User Acceptance, User Involvement, and User Participation.", "title": "" }, { "docid": "dd1e7bb3ba33c5ea711c0d066db53fa9", "text": "This paper presents the development and test of a flexible control strategy for an 11-kW wind turbine with a back-to-back power converter capable of working in both stand-alone and grid-connection mode. The stand-alone control is featured with a complex output voltage controller capable of handling nonlinear load and excess or deficit of generated power. Grid-connection mode with current control is also enabled for the case of isolated local grid involving other dispersed power generators such as other wind turbines or diesel generators. A novel automatic mode switch method based on a phase-locked loop controller is developed in order to detect the grid failure or recovery and switch the operation mode accordingly. A flexible digital signal processor (DSP) system that allows user-friendly code development and online tuning is used to implement and test the different control strategies. The back-to-back power conversion configuration is chosen where the generator converter uses a built-in standard flux vector control to control the speed of the turbine shaft while the grid-side converter uses a standard pulse-width modulation active rectifier control strategy implemented in a DSP controller. The design of the longitudinal conversion loss filter and of the involved PI-controllers are described in detail. Test results show the proposed methods works properly.", "title": "" }, { "docid": "77a36de6a2bae1a0c2a6e2aa8b097d7b", "text": "We present a palette-based framework for color composition for visual applications. Color composition is a critical aspect of visual applications in art, design, and visualization. 
The color wheel is often used to explain pleasing color combinations in geometric terms, and, in digital design, to provide a user interface to visualize and manipulate colors. We abstract relationships between palette colors as a compact set of axes describing harmonic templates over perceptually uniform color wheels. Our framework provides a basis for a variety of color-aware image operations, such as color harmonization and color transfer, and can be applied to videos. To enable our approach, we introduce an extremely scalable and efficient yet simple palette-based image decomposition algorithm. Our approach is based on the geometry of images in RGBXY-space. This new geometric approach is orders of magnitude more efficient than previous work and requires no numerical optimization. We demonstrate a real-time layer decomposition tool. After preprocessing, our algorithm can decompose 6 MP images into layers in 20 milliseconds. We also conducted three large-scale, wide-ranging perceptual studies on the perception of harmonic colors and harmonization algorithms.", "title": "" }, { "docid": "2ffb0a4ceb5c049b480001245ba61f21", "text": "Topic models, such as latent Dirichlet allocation (LDA), can be useful tools for the statistical analysis of document collections and other discrete data. The LDA model assumes that the words of each document arise from a mixture of topics, each of which is a distribution over the vocabulary. A limitation of LDA is the inability to model topic correlation even though, for example, a document about genetics is more likely to also be about disease than X-ray astronomy. This limitation stems from the use of the Dirichlet distribution to model the variability among the topic proportions. In this paper we develop the correlated topic model (CTM), where the topic proportions exhibit correlation via the logistic normal distribution [J. Roy. Statist. Soc. Ser. B 44 (1982) 139–177]. 
We derive a fast variational inference algorithm for approximate posterior inference in this model, which is complicated by the fact that the logistic normal is not conjugate to the multinomial. We apply the CTM to the articles from Science published from 1990–1999, a data set that comprises 57M words. The CTM gives a better fit of the data than LDA, and we demonstrate its use as an exploratory tool for large document collections.", "title": "" }, { "docid": "a9d1cdfd844a7347d255838d5eb74b03", "text": "An economy based on the exchange of capital, assets and services between individuals has grown significantly, spurred by the proliferation of internet-based platforms that allow people to share underutilized resources and trade with reasonably low transaction costs. The movement toward this economy of “sharing” translates into market efficiencies that bear new products, reframe established services, have positive environmental effects, and may generate overall economic growth. This emerging paradigm, entitled the collaborative economy, is disruptive to the conventional company-driven economic paradigm as evidenced by the large number of peer-to-peer based services that have captured impressive market shares in sectors ranging from transportation and hospitality to banking and risk capital. The panel explores economic, social, and technological implications of the collaborative economy, how digital technologies enable it, and how the massive sociotechnical systems embodied in these new peer platforms may evolve in response to the market and social forces that drive this emerging ecosystem.", "title": "" }, { "docid": "d19eceb87e0ebb03284c867efe709060", "text": "Vehicular Ad hoc Networks (VANETs) are a promising approach to providing safety and other applications to drivers as well as passengers, and have become a key component of the intelligent transport system. Much work has been done in this area, but security in VANETs has received less attention. In this article, we discuss VANETs and their technical and security challenges. We also discuss some major attacks and the solutions that can be implemented against them. We compare the solutions using different parameters. Lastly, we discuss the mechanisms that are used in the solutions.", "title": "" }, { "docid": "f7d06c6f2313417fd2795ce4c4402f0e", "text": "Decades of research suggest that similarity in demographics, values, activities, and attitudes predicts higher marital satisfaction. The present study examined the relationship between similarity in Big Five personality factors and initial levels and 12-year trajectories of marital satisfaction in long-term couples, who were in their 40s and 60s at the beginning of the study. Across the entire sample, greater overall personality similarity predicted more negative slopes in marital satisfaction trajectories. In addition, spousal similarity on Conscientiousness and Extraversion more strongly predicted negative marital satisfaction outcomes among the midlife sample than among the older sample. Results are discussed in terms of the different life tasks faced by young, midlife, and older adults, and the implications of these tasks for the \"ingredients\" of marital satisfaction.", "title": "" }, { "docid": "8f3eaf1a65cd3d81e718143304e4ce81", "text": "Issue tracking systems store valuable data for testing hypotheses concerning maintenance, building statistical prediction models and, recently, investigating developers' \"affectiveness\". In particular, the Jira Issue Tracking System is a proprietary tracking system that has gained tremendous popularity in recent years and offers unique features like the project management system and the Jira agile kanban board.
This paper presents a dataset extracted from the Jira ITS of four popular open source ecosystems (as well as the tools and infrastructure used for extraction): the Apache Software Foundation, Spring, JBoss and CodeHaus communities. Our dataset hosts more than 1K projects, containing more than 700K issue reports and more than 2 million issue comments. Using this data, we have been able to deeply study the communication process among developers, and how this aspect affects the development process. Furthermore, comments posted by developers contain not only technical information, but also valuable information about sentiments and emotions. Since sentiment analysis and human aspects in software engineering have been gaining more and more importance in recent years, with this repository we would like to encourage further studies in this direction.", "title": "" }, { "docid": "532ded1b0cc25a21464996a15a976125", "text": "Folded-plate structures provide an efficient design using thin laminated veneer lumber panels. Inspired by Japanese furniture joinery, the multiple tab-and-slot joint was developed for the multi-assembly of timber panels with non-parallel edges without adhesive or metal joints. Because the global analysis of our origami structures reveals that the rotational stiffness at ridges affects the global behaviour, we propose an experimental and numerical study of this linear interlocking connection. Its geometry is governed by three angles that orient the contact faces. Nine combinations of these angles were tested and the rotational slip was measured with two different bending set-ups: closing or opening the fold formed by two panels.
The non-linear behaviour was conjointly reproduced numerically using the finite element method and continuum damage mechanics.", "title": "" }, { "docid": "c6a429e06f634e1dee995d0537777b4b", "text": "Digital image editing is usually an iterative process; users repetitively perform short sequences of operations, as well as undo and redo using history navigation tools. In our collected data, undo, redo and navigation constitute about 9 percent of the total commands and consume a significant amount of user time. Unfortunately, such activities also tend to be tedious and frustrating, especially for complex projects.\n We address this crucial issue by adaptive history, a UI mechanism that groups relevant operations together to reduce user workloads. Such grouping can occur at various history granularities. We present two that have been found to be most useful. On a fine level, we group repeating commands patterns together to facilitate smart undo. On a coarse level, we segment commands history into chunks for semantic navigation. The main advantages of our approach are that it is intuitive to use and easy to integrate into any existing tools with text-based history lists. Unlike prior methods that are predominately rule based, our approach is data driven, and thus adapts better to common editing tasks which exhibit sufficient diversity and complexity that may defy predetermined rules or procedures.\n A user study showed that our system performs quantitatively better than two other baselines, and the participants also gave positive qualitative feedbacks on the system features.", "title": "" }, { "docid": "cd7fa5de19b12bdded98f197c1d9cd22", "text": "Many event monitoring systems rely on counting known keywords in streaming text data to detect sudden spikes in frequency. But the dynamic and conversational nature of Twitter makes it hard to select known keywords for monitoring. 
Here we consider a method of automatically finding noun phrases (NPs) as keywords for event monitoring in Twitter. Finding NPs has two aspects: identifying the boundaries of the subsequence of words which represents the NP, and classifying the NP into a specific broad category such as politics, sports, etc. To classify an NP, we define the feature vector for the NP using not just the words but also the author's behavior and social activities. Our results show that we can classify many NPs by using a sample of training data from a knowledge-base.", "title": "" }, { "docid": "336d91ba4c688350f308982f8b09dd4b", "text": "Extraction–transformation–loading (ETL) tools are pieces of software responsible for the extraction of data from several sources, its cleansing, customization, reformatting, integration, and insertion into a data warehouse. Building the ETL process is potentially one of the biggest tasks of building a warehouse; it is complex, time consuming, and consumes most of a data warehouse project’s implementation efforts, costs, and resources. Building a data warehouse requires focusing closely on understanding three main areas: the source area, the destination area, and the mapping area (ETL processes). The source area has standard models such as the entity relationship diagram, and the destination area has standard models such as the star schema, but the mapping area has no standard model till now. In spite of the importance of ETL processes, little research has been done in this area due to its complexity. There is a clear lack of a standard model that can be used to represent the ETL scenarios. In this paper we will try to navigate through the efforts done to conceptualize", "title": "" }, { "docid": "fe9724a94d1aa13e4fbefa7c88ac09dd", "text": "We demonstrate a multimodal dialogue system using reinforcement learning for in-car scenarios, developed at Edinburgh University and Cambridge University for the TALK project.
This prototype is the first “Information State Update” (ISU) dialogue system to exhibit reinforcement learning of dialogue strategies, and also has a fragmentary clarification feature. This paper describes the main components and functionality of the system, as well as the purposes and future use of the system, and surveys the research issues involved in its construction. Evaluation of this system (i.e. comparing the baseline system with handcoded vs. learnt dialogue policies) is ongoing, and the demonstration will show both.", "title": "" }, { "docid": "f1e646a0627a5c61a0f73a41d35ccac7", "text": "Smart cities play an increasingly important role in the sustainable economic development of a given area. Smart cities are considered a key element for generating wealth, knowledge and diversity, both economically and socially. A Smart City is the engine to reach the sustainability of its infrastructure and facilitate the sustainable development of its industry, buildings and citizens. The first goal to reach that sustainability is to reduce the energy consumption and the levels of greenhouse gases (GHG). For that purpose, scalability, extensibility and integration of new resources are required in order to reach a higher awareness of the energy consumption, distribution and generation, which allows a suitable modeling that can enable new countermeasures and action plans to mitigate the current excessive power consumption effects. Smart Cities should offer efficient support for global communications and access to the services and information. It is required to enable a homogeneous and seamless machine to machine (M2M) communication in the different solutions and use cases. This work presents how to reach an interoperable Smart Lighting solution over emerging M2M protocols such as CoAP, built over a REST architecture.
This follows the guidelines defined by the IP for Smart Objects Alliance (IPSO Alliance) in order to implement an interoperable semantic level for the street lighting, and describes the integration of the communications and logic over the existing street lighting infrastructure.", "title": "" } ]
scidocsrr
bb69c348c491a9e51967331ec007799f
Mitosis Detection in Breast Cancer Histology Images with Deep Neural Networks
[ { "docid": "27ad413fa5833094fb2e557308fa761d", "text": "A common practice to gain invariant features in object recognition models is to aggregate multiple low-level features over a small neighborhood. However, the differences between those models makes a comparison of the properties of different aggregation functions hard. Our aim is to gain insight into different functions by directly comparing them on a fixed architecture for several common object recognition tasks. Empirical results show that a maximum pooling operation significantly outperforms subsampling operations. Despite their shift-invariant properties, overlapping pooling windows are no significant improvement over non-overlapping pooling windows. By applying this knowledge, we achieve state-of-the-art error rates of 4.57% on the NORB normalized-uniform dataset and 5.6% on the NORB jittered-cluttered dataset.", "title": "" }, { "docid": "e37b3a68c850d1fb54c9030c22b5792f", "text": "We address a central problem of neuroanatomy, namely, the automatic segmentation of neuronal structures depicted in stacks of electron microscopy (EM) images. This is necessary to efficiently map 3D brain structure and connectivity. To segment biological neuron membranes, we use a special type of deep artificial neural network as a pixel classifier. The label of each pixel (membrane or nonmembrane) is predicted from raw pixel values in a square window centered on it. The input layer maps each window pixel to a neuron. It is followed by a succession of convolutional and max-pooling layers which preserve 2D information and extract features with increasing levels of abstraction. The output layer produces a calibrated probability for each class. The classifier is trained by plain gradient descent on a 512 × 512 × 30 stack with known ground truth, and tested on a stack of the same size (ground truth unknown to the authors) by the organizers of the ISBI 2012 EM Segmentation Challenge. 
Even without problem-specific postprocessing, our approach outperforms competing techniques by a large margin in all three considered metrics, i.e. rand error, warping error and pixel error. For pixel error, our approach is the only one outperforming a second human observer.", "title": "" } ]
[ { "docid": "f9de4041343fb6c570e5cbce4cb1ff66", "text": "Do the languages that we speak affect how we experience the world? This question was taken up in a linguistic survey and two non-linguistic psychophysical experiments conducted in native speakers of English, Indonesian, Greek, and Spanish. All four of these languages use spatial metaphors to talk about time, but the particular metaphoric mappings between time and space vary across languages. A linguistic corpus study revealed that English and Indonesian tend to map duration onto linear distance (e.g., a long time), whereas Greek and Spanish preferentially map duration onto quantity (e.g., much time). Two psychophysical time estimation experiments were conducted to determine whether this cross-linguistic difference has implications for speakers’ temporal thinking. Performance on the psychophysical tasks reflected the relative frequencies of the ‘time as distance’ and ‘time as quantity’ metaphors in English, Indonesian, Greek, and Spanish. This was true despite the fact that the tasks used entirely nonlinguistic stimuli and responses. Results suggest that: (1.) The spatial metaphors in our native language may profoundly influence the way we mentally represent time. (2.) Language can shape even primitive, low-level mental processes such as estimating brief durations – an ability we share with babies and non-human animals.", "title": "" }, { "docid": "b41d8ca866268133f2af88495dad6482", "text": "Text clustering is an important area of interest in the field of Text summarization, sentiment analysis etc. There have been a lot of algorithms experimented during the past years, which have a wide range of performances. One of the most popular method used is k-means, where an initial assumption is made about k, which is the number of clusters to be generated. 
Now a new method is introduced where the number of clusters is found using a modified spectral bisection and then the output is given to a genetic algorithm where the final solution is obtained. Keywords— Cluster, Spectral Bisection, Genetic Algorithm, kmeans.", "title": "" }, { "docid": "846f8f33181c3143bb8f54ce8eb3e5cc", "text": "Story Point is a relative measure heavily used for agile estimation of size. The team decides how big a point is, and based on that size, determines how many points each work item is. In many organizations, the use of story points for similar features can vary from team to another, and successfully, based on the teams' sizes, skill set and relative use of this tool. But in a CMMI organization, this technique demands a degree of consistency across teams for a more streamlined approach to solution delivery. This generates a challenge for CMMI organizations to adopt Agile in software estimation and planning. In this paper, a process and methodology that guarantees relativity in software sizing while using agile story points is introduced. The proposed process and methodology are applied in a CMMI company level three on different projects. By that, the story point is used on the level of the organization, not the project. Then, the performance of sizing process is measured to show a significant improvement in sizing accuracy after adopting the agile story point in CMMI organizations. To complete the estimation cycle, an improvement in effort estimation dependent on story point is also introduced, and its performance effect is measured.", "title": "" }, { "docid": "55160cc3013b03704555863c710e6d21", "text": "Localization is one of the most important capabilities for autonomous mobile agents. Markov Localization (ML), applied to dense range images, has proven to be an effective technique. But its computational and storage requirements put a large burden on robot systems, and make it difficult to update the map dynamically. 
In this paper we introduce a new technique, based on correlation of a sensor scan with the map, that is several orders of magnitude more efficient than ML. CBML (correlation-based ML) permits video-rate localization using dense range scans, dynamic map updates, and a more precise error model than ML. In this paper we present the basic method of CBML, and validate its efficiency and correctness in a series of experiments on an implemented mobile robot base.", "title": "" }, { "docid": "7d117525263c970c7c23f2a8ba0357d6", "text": "Entity search is an emerging IR and NLP task that involves the retrieval of entities of a specific type in response to a query. We address the \"similar researcher search\" or the \"researcher recommendation\" problem, an instance of \"similar entity search\" for the academic domain. In response to a 'researcher name' query, the goal of a researcher recommender system is to output the list of researchers that have similar expertise as that of the queried researcher. We propose models for computing similarity between researchers based on expertise profiles extracted from their publications and academic homepages. We provide results of our models for the recommendation task on two publicly-available datasets. To the best of our knowledge, we are the first to address content-based researcher recommendation in an academic setting and demonstrate it for Computer Science via our system, ScholarSearch.", "title": "" }, { "docid": "77d94447b208e5a9fe441c4bda31dc25", "text": "The author reviewed cultural competence models and cultural competence assessment instruments developed and published by nurse researchers since 1982. Both models and instruments were examined in terms of their components, theoretical backgrounds, empirical validation, and psychometric evaluation. Most models were not empirically tested; only a few models developed model-based instruments. About half of the instruments were tested with varying levels of psychometric properties.
Other related issues were discussed, including the definition of cultural competence and its significance in model and instrument development, limitations of existing models and instruments, impact of cultural competence on health disparities, and further work in cultural competence research and practice.", "title": "" }, { "docid": "3f5627bb20164666317ba4783ed4eddb", "text": "The agriculture sector is the backbone of an economy which provides the basic ingredients to mankind and raw materials for industrialization. With the increasing number of the population over the world, the demand for agricultural products is also increased. In order to increase the production rate, irrigation technique should be more efficient. The irrigation techniques used till date are not in satisfactory level, especially in a developing country like Bangladesh. This paper has proposed a line follower robot for irrigation based application which may be considered as a cost-effective solution by minimizing water loss as well as an efficient system for irrigation purposes. This proposed system does not require an operator to accomplish its task. This gardening robot is completely portable and is equipped with a microcontroller, an on-board water reservoir, and an attached water pump. The area to be watered by the robot can be any field with plants, placed in a predefined path. It is capable of comparing movable objects and stationary plants to minimize water loss and finally watering them autonomously without any human intervention. The designed robot was tested and it performed nicely.", "title": "" }, { "docid": "d9e39f2513e74917023d508596cff6c7", "text": "In recent years the data mining is data analyzing techniques that used to analyze crime data previously stored from various sources to find patterns and trends in crimes. In additional, it can be applied to increase efficiency in solving the crimes faster and also can be applied to automatically notify the crimes. 
However, there are many data mining techniques, so in order to increase the efficiency of crime detection it is necessary to select the data mining techniques suitably. This paper reviews the literature on various data mining applications, especially applications applied to solving crimes. The survey also highlights research gaps and challenges of crime data mining. In addition, this paper provides insight into data mining for finding patterns and trends in crime, so that it can be used appropriately and can help beginners in crime data mining research.", "title": "" }, { "docid": "987c22f071f91b09b4e4e698c454f16a", "text": "There is much current debate about the existence of mirror neurons in humans. To identify mirror neurons in the inferior frontal gyrus (IFG) of humans, we used a repetition suppression paradigm while measuring neural activity with functional magnetic resonance imaging. Subjects either executed or observed a series of actions. Here we show that in the IFG, responses were suppressed both when an executed action was followed by the same rather than a different observed action and when an observed action was followed by the same rather than a different executed action. This pattern of responses is consistent with that predicted by mirror neurons and is evidence of mirror neurons in the human IFG.", "title": "" }, { "docid": "0b2f0b36bb458221b340b5e4a069fe2b", "text": "The Dendritic Cell Algorithm (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, an abstract model of DC behaviour is developed and subsequently used to form an algorithm, the DCA. 
The abstraction process was facilitated through close collaboration with laboratory-based immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population-based algorithm, with each agent in the system represented as an ‘artificial DC’. Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of port scans and botnets, where it has produced impressive results with relatively low rates of false positives.", "title": "" }, { "docid": "1dee93ec9e8de1cf365534581fb19623", "text": "The term “Business Model” started to gain momentum in the early rise of the new economy, and it is currently used both in business practice and scientific research. From a general point of view, BMs are considered a contact point among technology, organization and strategy, used to describe how an organization gets value from technology and uses it as a source of competitive advantage. Recent contributions suggest using ontologies to define a shareable conceptualization of BM. The aim of this study is to investigate the role of BM Ontologies as a conceptual tool for the cooperation of subjects interested in achieving a common goal and operating in complex and innovative environments. This is the case, for example, in contexts characterized by the deployment of e-services from multiple service providers in cross-border environments. 
Through an extensive literature review on BM we selected the most suitable conceptual tool and studied its application to the LD-CAST project during a participatory action research activity, in order to analyse the BM design process of a new organisation based on the cooperation of service providers (the Chambers of Commerce from Italy, Romania, Poland and Bulgaria) with different needs, legal constraints and cultural backgrounds.", "title": "" }, { "docid": "52b8e748a87a114f5d629f8dcd9a7dfc", "text": "Delay-tolerant networks (DTNs) rely on the mobility of nodes and their contacts to make up for the lack of continuous connectivity and, thus, enable message delivery from source to destination in a “store-carry-forward” fashion. Since message delivery consumes resources such as storage and power, some nodes may choose not to forward or carry others' messages while relying on others to deliver their locally generated messages. These kinds of selfish behaviors may hinder effective communications over DTNs. In this paper, we present an efficient incentive-compatible (IC) routing protocol (ICRP) with multiple copies for two-hop DTNs based on algorithmic game theory. It takes both the encounter probability and transmission cost into consideration to deal with the misbehaviors of selfish nodes. Moreover, we employ the optimal sequential stopping rule and Vickrey-Clarke-Groves (VCG) auction as a strategy to select optimal relay nodes to ensure that nodes that honestly report their encounter probability and transmission cost can maximize their rewards. We attempt to find the optimal stopping time threshold adaptively based on a realistic probability model and propose an algorithm to calculate the threshold. Based on this threshold, we propose a new method to select relay nodes for multicopy transmissions. 
To ensure that the selected relay nodes can receive their rewards securely, we develop a signature scheme based on a bilinear map to prevent malicious nodes from tampering. Through simulations, we demonstrate that ICRP can effectively stimulate nodes to forward/carry messages and achieve a higher packet delivery ratio with lower transmission cost.", "title": "" }, { "docid": "2ae76fff668bc448e841a27cb951f046", "text": "We present efficient algorithms for the problem of contextual bandits with i.i.d. covariates, an arbitrary sequence of rewards, and an arbitrary class of policies. Our algorithm BISTRO requires d calls to the empirical risk minimization (ERM) oracle per round, where d is the number of actions. The method uses unlabeled data to make the problem computationally simple. When the ERM problem itself is computationally hard, we extend the approach by employing multiplicative approximation algorithms for the ERM. The integrality gap of the relaxation only enters in the regret bound rather than the benchmark. Finally, we show that the adversarial version of the contextual bandit problem is learnable (and efficient) whenever the full-information supervised online learning problem has a non-trivial regret guarantee (and efficient).", "title": "" }, { "docid": "30394ae468bc521e8e00db030f19e983", "text": "Enigma is a peer-to-peer network enabling different parties to jointly store and run computations on data while keeping the data completely private. Enigma’s computational model is based on a highly optimized version of secure multi-party computation, guaranteed by a verifiable secret-sharing scheme. For storage, we use a modified distributed hash table for holding secret-shared data. An external blockchain is utilized as the controller of the network; it manages access control and identities, and serves as a tamper-proof log of events. Security deposits and fees incentivize operation, correctness and fairness of the system. 
Similar to Bitcoin, Enigma removes the need for a trusted third party, enabling autonomous control of personal data. For the first time, users are able to share their data with cryptographic guarantees regarding their privacy.", "title": "" }, { "docid": "2b8c0923372e97ca5781378b7e220021", "text": "Motivated by the requirements of Web 2.0 applications, a plethora of non-relational databases has arisen in recent years. Since it is very difficult to choose a suitable database for a specific use case, this paper evaluates the underlying techniques of NoSQL databases with respect to their applicability for certain requirements. These systems are compared by their data models, query possibilities, concurrency controls, partitioning and replication opportunities.", "title": "" }, { "docid": "6a2e5831f2a2e1625be2bfb7941b9d1b", "text": "Benefiting from cloud storage services, users can save the cost of buying expensive storage and application servers, as well as deploying and maintaining applications. Meanwhile, they lose physical control of their data, so effective methods are needed to verify the correctness of the data stored at cloud servers; these are the research issues that Provable Data Possession (PDP) addresses. The most important features of PDP are: 1) support for public verification an unlimited number of times; 2) support for dynamic data updates; 3) efficiency in storage space and computation. In mobile cloud computing, mobile end-users also need the PDP service. However, the client-side computing workload and storage burden in existing PDP schemes are too heavy to be directly borne by resource-constrained mobile devices. To solve this problem, with the integration of trusted computing technology, this paper proposes a novel public PDP scheme in which a trusted third-party agent (TPA) takes over most of the calculations from the mobile end-users. 
By using a bilinear signature and a Merkle hash tree (MHT), the scheme aggregates the verification tokens of the data file into one small signature to reduce the communication and storage burden. The MHT is also helpful for supporting dynamic data updates. In our framework, the mobile terminal devices only need to generate some secret keys and random numbers with the help of trusted platform module (TPM) chips, and the required computing workload and storage space are suitable for mobile devices. Our scheme realizes a provably secure storage service for resource-constrained mobile devices in mobile cloud computing.", "title": "" }, { "docid": "9b085f5cd0a080560d7ae17b7d4d6878", "text": "The commercial roll-type corona-electrostatic separators, which are currently employed for the recovery of metals and plastics from mm-size granular mixtures, are inappropriate for the processing of finely ground wastes. The aim of the present work is to demonstrate that a belt-type corona-electrostatic separator could be an appropriate solution for the selective sorting of conductive and non-conductive products contained in micronized wastes. The experiments are carried out on a laboratory-scale multi-functional electrostatic separator designed by the authors. The corona discharge is generated between a wire-type dual electrode and the surface of the metal belt conveyor. The distance between the wire and the belt and the applied voltage are adjusted to values that permit particle charging without an electric wind that would put the particles into motion on the surface of the belt. The separation is performed in the electric field generated between a high-voltage roll-type electrode (diameter 30 mm) and the grounded belt electrode. The study is conducted according to experimental design methodology, to enable the evaluation of the effects of the various factors that affect the efficiency of the separation: the position of the roll-type electrode and the applied high voltage. 
The conclusions of this study will serve in the optimal design of an industrial belt-type corona-electrostatic separator for the recycling of metals and plastics from waste electric and electronic equipment.", "title": "" }, { "docid": "ece8f2f4827decf0c440ca328ee272b4", "text": "We describe an algorithm for converting linear support vector machines and any other arbitrary hyperplane-based linear classifiers into a set of non-overlapping rules that, unlike the original classifier, can be easily interpreted by humans. Each iteration of the rule extraction algorithm is formulated as a constrained optimization problem that is computationally inexpensive to solve. We discuss various properties of the algorithm and provide proof of convergence for two different optimization criteria. We demonstrate the performance and the speed of the algorithm on linear classifiers learned from real-world datasets, including a medical dataset on detection of lung cancer from medical images. The ability to convert SVMs and other \"black-box\" classifiers into a set of human-understandable rules is critical not only for physician acceptance, but also for reducing the regulatory barrier for medical-decision support systems based on such classifiers.", "title": "" }, { "docid": "7f8ff45fd3006e9635c0814bd5c11a3e", "text": "On 4 August 2016, DARPA conducted the final event of the Cyber Grand Challenge (CGC). The challenge in CGC was to build an autonomous system capable of playing in a capture-the-flag hacking competition. The final event pitted the systems from seven finalists against each other, with each system attempting to defend its own network services while proving vulnerabilities in other systems’ defended services. Xandra, our automated cyber reasoning system, took second place overall in the final event. 
Xandra placed first in security (preventing exploits), second in availability (keeping services operational and efficient), and fourth in evaluation (proving vulnerabilities in competitor services). Xandra also drew the least power of any of the competitor systems. In this article, we describe the high-level strategies applied by Xandra, their realization in Xandra’s architecture, the synergistic interplay between offense and defense, and finally, lessons learned via post-mortem analysis of the final event.", "title": "" }, { "docid": "cd1cfbdae08907e27a4e1c51e0508839", "text": "High-level synthesis (HLS) is an increasingly popular approach in electronic design automation (EDA) that raises the abstraction level for designing digital circuits. With the increasing complexity of embedded systems, these tools are particularly relevant in embedded systems design. In this paper, we present our evaluation of a broad selection of recent HLS tools in terms of capabilities, usability and quality of results. Even though HLS tools are still lacking some maturity, they are constantly improving and the industry is now starting to adopt them into their design flows.", "title": "" } ]
scidocsrr
b562b88ab5da620a35fdf35bef750fc5
Practicing Safe Computing: A Multimedia Empirical Examination of Home Computer User Security Behavioral Intentions
[ { "docid": "cd811b8c1324ca0fef6a25e1ca5c4ce9", "text": "This commentary discusses why most IS academic research today lacks relevance to practice and suggests tactics, procedures, and guidelines that the IS academic community might follow in their research efforts and articles to introduce relevance to practitioners. The commentary begins by defining what is meant by relevancy in the context of academic research. It then explains why there is a lack of attention to relevance within the IS scholarly literature. Next, actions that can be taken to make relevance a more central aspect of IS research and to communicate implications of IS research more effectively to IS professionals are suggested.", "title": "" } ]
[ { "docid": "7635ad3e2ac2f8e72811bf056d29dfbb", "text": "Nowadays, many consumer videos are captured by portable devices such as iPhone. Different from constrained videos that are produced by professionals, e.g., those for broadcast, summarizing multiple handheld videos from a same scenery is a challenging task. This is because: 1) these videos have dramatic semantic and style variances, making it difficult to extract the representative key frames; 2) the handheld videos are with different degrees of shakiness, but existing summarization techniques cannot alleviate this problem adaptively; and 3) it is difficult to develop a quality model that evaluates a video summary, due to the subjectiveness of video quality assessment. To solve these problems, we propose perceptual multiattribute optimization which jointly refines multiple perceptual attributes (i.e., video aesthetics, coherence, and stability) in a multivideo summarization process. In particular, a weakly supervised learning framework is designed to discover the semantically important regions in each frame. Then, a few key frames are selected based on their contributions to cover the multivideo semantics. Thereafter, a probabilistic model is proposed to dynamically fit the key frames into an aesthetically pleasing video summary, wherein its frames are stabilized adaptively. Experiments on consumer videos taken from sceneries throughout the world demonstrate the descriptiveness, aesthetics, coherence, and stability of the generated summary.", "title": "" }, { "docid": "74fa56730057ae21f438df46054041c4", "text": "Facial fractures can lead to long-term sequelae if not repaired. Complications from surgical approaches can be equally detrimental to the patient. Periorbital approaches via the lower lid can lead to ectropion, entropion, scleral show, canthal malposition, and lid edema.1–6 Ectropion can cause epiphora, whereas entropion often causes pain and irritation due to contact between the cilia and cornea. 
Transcutaneous and transconjunctival approaches are commonly used to address fractures of the infraorbital rim and orbital floor. The transconjunctival approach is popular among otolaryngologists and ophthalmologists, whereas transcutaneous approaches are more commonly used by oral maxillofacial surgeons and plastic surgeons.7 Ridgway et al reported in their meta-analysis that lid complications are highest with the subciliary approach (19.1%) and lowest with the transconjunctival approach (2.1%).5 Raschke et al also found a lower incidence of lower lid malposition via the transconjunctival approach compared with the subciliary approach.8 Regardless of approach, complications occur and the facial trauma surgeon must know how to manage these issues. In this article, we will review the common complications of lower lid surgery and their treatment.", "title": "" }, { "docid": "c6d3f20e9d535faab83fb34cec0fdb5b", "text": "Over the past two decades several attempts have been made to address the problem of face recognition and a voluminous literature has been produced. Current face recognition systems are able to perform very well in controlled environments, e.g. frontal face recognition, where face images are acquired under frontal pose with strict constraints as defined in related face recognition standards. However, in unconstrained situations where a face may be captured in outdoor environments, under arbitrary illumination and large pose variations, these systems fail to work. With the current research focus on dealing with these problems, much attention has been devoted to the facial feature extraction stage. Facial feature extraction is the most important step in face recognition. Several studies have been made to answer questions like what features to use and how to describe them, and several feature extraction techniques have been proposed. 
While many comprehensive literature reviews exist for face recognition a complete reference for different feature extraction techniques and their advantages/disadvantages with regards to a typical face recognition task in unconstrained scenarios is much needed. In this chapter we present a comprehensive review of the most relevant feature extraction techniques used in 2D face recognition and introduce a new feature extraction technique termed as Face-GLOH-signature to be used in face recognition for the first time (Sarfraz and Hellwich, 2008), which has a number of advantages over the commonly used feature descriptions in the context of unconstrained face recognition. The goal of feature extraction is to find a specific representation of the data that can highlight relevant information. This representation can be found by maximizing a criterion or can be a pre-defined representation. Usually, a face image is represented by a high dimensional vector containing pixel values (holistic representation) or a set of vectors where each vector summarizes the underlying content of a local region by using a high level 1", "title": "" }, { "docid": "0bb73266d8e4c18503ccda4903856e44", "text": "Recent progress in advanced driver assistance systems and the race towards autonomous vehicles is mainly driven by two factors: (1) increasingly sophisticated algorithms that interpret the environment around the vehicle and react accordingly, and (2) the continuous improvements of sensor technology itself. In terms of cameras, these improvements typically include higher spatial resolution, which as a consequence requires more data to be processed. The trend to add multiple cameras to cover the entire surrounding of the vehicle is not conducive in that matter. At the same time, an increasing number of special purpose algorithms need access to the sensor input data to correctly interpret the various complex situations that can occur, particularly in urban traffic. 
By observing those trends, it becomes clear that a key challenge for vision architectures in intelligent vehicles is to share computational resources. We believe this challenge should be faced by introducing a representation of the sensory data that provides compressed and structured access to all relevant visual content of the scene. The Stixel World discussed in this paper is such a representation. It is a medium-level model of the environment that is specifically designed to compress information about obstacles by leveraging the typical layout of outdoor traffic scenes. It has proven useful for a multitude of automotive vision applications, including object detection, tracking, segmentation, and mapping. In this paper, we summarize the ideas behind the model and generalize it to take into account multiple dense input streams: the image itself, stereo depth maps, and semantic class probability maps that can be generated, e.g., by deep convolutional neural networks. Our generalization is embedded into a novel mathematical formulation for the Stixel model. We further sketch how the free parameters of the model can be learned using structured SVMs.", "title": "" }, { "docid": "116d0735ded06ba1dc9814f21236b7b1", "text": "In this paper, we propose a context-aware local binary feature learning (CA-LBFL) method for face recognition. Unlike existing learning-based local face descriptors such as discriminant face descriptor (DFD) and compact binary face descriptor (CBFD) which learn each feature code individually, our CA-LBFL exploits the contextual information of adjacent bits by constraining the number of shifts from different binary bits, so that more robust information can be exploited for face representation. 
Given a face image, we first extract pixel difference vectors (PDV) in local patches, and learn a discriminative mapping in an unsupervised manner to project each pixel difference vector into a context-aware binary vector. Then, we perform clustering on the learned binary codes to construct a codebook, and extract a histogram feature for each face image with the learned codebook as the final representation. In order to exploit local information from different scales, we propose a context-aware local binary multi-scale feature learning (CA-LBMFL) method to jointly learn multiple projection matrices for face representation. To make the proposed methods applicable for heterogeneous face recognition, we present a coupled CA-LBFL (C-CA-LBFL) method and a coupled CA-LBMFL (C-CA-LBMFL) method to reduce the modality gap of corresponding heterogeneous faces in the feature level, respectively. Extensive experimental results on four widely used face datasets clearly show that our methods outperform most state-of-the-art face descriptors.", "title": "" }, { "docid": "9e7c12fbc790314f6897f0b16d43d0af", "text": "We study in this paper the rate of convergence for learning distributions with the Generative Adversarial Networks (GAN) framework, which subsumes Wasserstein, Sobolev and MMD GANs as special cases. We study a wide range of parametric and nonparametric target distributions, under a collection of objective evaluation metrics. On the nonparametric end, we investigate the minimax optimal rates and fundamental difficulty of the density estimation under the adversarial framework. On the parametric end, we establish theory for neural network classes, that characterizes the interplay between the choice of generator and discriminator. We investigate how to improve the GAN framework with better theoretical guarantee through the lens of regularization. 
We discover and isolate a new notion of regularization, called the generator/discriminator pair regularization, that sheds light on the advantage of GAN compared to classic parametric and nonparametric approaches for density estimation.", "title": "" }, { "docid": "42b810b7ecd48590661cc5a538bec427", "text": "Most algorithms that rely on deep learning-based approaches to generate 3D point sets can only produce clouds containing fixed number of points. Furthermore, they typically require large networks parameterized by many weights, which makes them hard to train. In this paper, we propose an auto-encoder architecture that can both encode and decode clouds of arbitrary size and demonstrate its effectiveness at upsampling sparse point clouds. Interestingly, we can do so using less than half as many parameters as state-of-the-art architectures while still delivering better performance. We will make our code base fully available.", "title": "" }, { "docid": "103e6ecab7ccd8e11f010fb865091bd2", "text": "The mitogen-activated protein kinase (MAPK) network is a conserved signalling module that regulates cell fate by transducing a myriad of growth-factor signals. The ability of this network to coordinate and process a variety of inputs from different growth-factor receptors into specific biological responses is, however, still not understood. We investigated how the MAPK network brings about signal specificity in PC-12 cells, a model for neuronal differentiation. Reverse engineering by modular-response analysis uncovered topological differences in the MAPK core network dependent on whether cells were activated with epidermal or neuronal growth factor (EGF or NGF). On EGF stimulation, the network exhibited negative feedback only, whereas a positive feedback was apparent on NGF stimulation. The latter allows for bi-stable Erk activation dynamics, which were indeed observed. 
By rewiring these regulatory feedbacks, we were able to reverse the specific cell responses to EGF and NGF. These results show that growth factor context determines the topology of the MAPK signalling network and that the resulting dynamics govern cell fate.", "title": "" }, { "docid": "10d5049c354015ad93a1bff5ef346e67", "text": "We show that a neural approach to the task of non-factoid answer reranking can benefit from the inclusion of tried-and-tested handcrafted features. We present a novel neural network architecture based on a combination of recurrent neural networks that are used to encode questions and answers, and a multilayer perceptron. We show how this approach can be combined with additional features, in particular, the discourse features presented by Jansen et al. (2014). Our neural approach achieves state-of-the-art performance on a public dataset from Yahoo! Answers and its performance is further improved by incorporating the discourse features. Additionally, we present a new dataset of Ask Ubuntu questions where the hybrid approach also achieves good results.", "title": "" }, { "docid": "a0ce75c68e981d6d9a442d73f97781ad", "text": "Cancer stem cells (CSCs), or alternatively called tumor initiating cells (TICs), are a subpopulation of tumor cells, which possesses the ability to self-renew and differentiate into bulk tumor mass. An accumulating body of evidence suggests that CSCs contribute to the growth and recurrence of tumors and the resistance to chemo- and radiotherapy. CSCs achieve self-renewal through asymmetric division, in which one daughter cell retains the self-renewal ability, and the other is destined to differentiation. Recent studies revealed the mechanisms of asymmetric division in normal stem cells (NSCs) and, to a limited degree, CSCs as well. 
Asymmetric division initiates when a set of polarity-determining proteins mark the apical side of mother stem cells, which arranges the unequal alignment of mitotic spindle and centrosomes along the apical-basal polarity axis. This subsequently guides the recruitment of fate-determining proteins to the basal side of mother cells. Following cytokinesis, two daughter cells unequally inherit centrosomes, differentiation-promoting fate determinants, and other proteins involved in the maintenance of stemness. Modulation of asymmetric and symmetric division of CSCs may provide new strategies for dual targeting of CSCs and the bulk tumor mass. In this review, we discuss the current understanding of the mechanisms by which NSCs and CSCs achieve asymmetric division, including the functions of polarity- and fate-determining factors.", "title": "" }, { "docid": "b4103e5ddc58672334b66cc504dab5a6", "text": "An open source project typically maintains an open bug repository so that bug reports from all over the world can be gathered. When a new bug report is submitted to the repository, a person, called a triager, examines whether it is a duplicate of an existing bug report. If it is, the triager marks it as DUPLICATE and the bug report is removed from consideration for further work. In the literature, there are approaches exploiting only natural language information to detect duplicate bug reports. In this paper we present a new approach that further involves execution information. In our approach, when a new bug report arrives, its natural language information and execution information are compared with those of the existing bug reports. Then, a small number of existing bug reports are suggested to the triager as the most similar bug reports to the new bug report. Finally, the triager examines the suggested bug reports to determine whether the new bug report duplicates an existing bug report. 
We calibrated our approach on a subset of the Eclipse bug repository and evaluated our approach on a subset of the Firefox bug repository. The experimental results show that our approach can detect 67%-93% of duplicate bug reports in the Firefox bug repository, compared to 43%-72% using natural language information alone.", "title": "" }, { "docid": "d4d802b296b210a1957b1a214d9fd9fb", "text": "Many task domains require robots to interpret and act upon natural language commands which are given by people and which refer to the robot’s physical surroundings. Such interpretation is known variously as the symbol grounding problem (Harnad, 1990), grounded semantics (Feldman et al., 1996) and grounded language acquisition (Nenov and Dyer, 1993, 1994). This problem is challenging because people employ diverse vocabulary and grammar, and because robots have substantial uncertainty about the nature and contents of their surroundings, making it difficult to associate the constitutive language elements (principally noun phrases and spatial relations) of the command text to elements of those surroundings. Symbolic models capture linguistic structure but have not scaled successfully to handle the diverse language produced by untrained users. Existing statistical approaches can better handle diversity, but have not to date modeled complex linguistic structure, limiting achievable accuracy. Recent hybrid approaches have addressed limitations in scaling and complexity, but have not effectively associated linguistic and perceptual features. Our framework, called Generalized Grounding Graphs (G), addresses these issues by defining a probabilistic graphical model dynamically according to the linguistic parse structure of a natural language command. This approach scales effectively, handles linguistic diversity, and enables the system to associate parts of a command with the specific objects, places, and events in the external world to which they refer. 
We show that robots can learn word meanings and use those learned meanings to robustly follow natural language commands produced by untrained users. We demonstrate our approach for both mobility commands (e.g. route directions like “Go down the hallway through the door”) and mobile manipulation commands (e.g. physical directives like “Pick up the pallet on the truck”) involving a variety of semi-autonomous robotic platforms, including a wheelchair, a micro-air vehicle, a forklift, and the Willow Garage PR2. The first two authors contributed equally to this paper.", "title": "" }, { "docid": "7a72f69ad4926798e12f6fa8e598d206", "text": "In this work, we revisit atrous convolution, a powerful tool to explicitly adjust a filter’s field-of-view as well as control the resolution of feature responses computed by Deep Convolutional Neural Networks, in the application of semantic image segmentation. To handle the problem of segmenting objects at multiple scales, we design modules which employ atrous convolution in cascade or in parallel to capture multi-scale context by adopting multiple atrous rates. Furthermore, we propose to augment our previously proposed Atrous Spatial Pyramid Pooling module, which probes convolutional features at multiple scales, with image-level features encoding global context and further boost performance. We also elaborate on implementation details and share our experience on training our system. The proposed ‘DeepLabv3’ system significantly improves over our previous DeepLab versions without DenseCRF post-processing and attains comparable performance with other state-of-the-art models on the PASCAL VOC 2012 semantic image segmentation benchmark.", "title": "" }, { "docid": "f3f2184b1fd6a62540f8547df3014b44", "text": "Social Media Analytics is an emerging interdisciplinary research field that aims at combining, extending, and adapting methods for analysis of social media data. 
On the one hand it can support IS and other research disciplines to answer their research questions and on the other hand it helps to provide architectural designs as well as solution frameworks for new social media-based applications and information systems. The authors suggest that IS should contribute to this field and help to develop and process an interdisciplinary research agenda.", "title": "" }, { "docid": "ea04dad2ac1de160f78fa79b33a93b6a", "text": "OBJECTIVE\nTo construct new size charts for all fetal limb bones.\n\n\nDESIGN\nA prospective, cross sectional study.\n\n\nSETTING\nUltrasound department of a large hospital.\n\n\nSAMPLE\n663 fetuses scanned once only for the purpose of the study at gestations between 12 and 42 weeks.\n\n\nMETHODS\nCentiles were estimated by combining separate regression models fitted to the mean and standard deviation, assuming that the measurements have a normal distribution at each gestational age.\n\n\nMAIN OUTCOME MEASURES\nDetermination of fetal limb lengths from 12 to 42 weeks of gestation.\n\n\nRESULTS\nSize charts for fetal bones (radius, ulna, humerus, tibia, fibula, femur and foot) are presented and compared with previously published data.\n\n\nCONCLUSIONS\nWe present new size charts for fetal limb bones which take into consideration the increasing variability with gestational age. We have compared these charts with other published data; the differences seen may be largely due to methodological differences. As standards for fetal head and abdominal measurements have been published from the same population, we suggest that the use of the new charts may facilitate prenatal diagnosis of skeletal dysplasias.", "title": "" }, { "docid": "cc4c0a749c6a3f4ac92b9709f24f03f4", "text": "Modern GPUs with their several hundred cores and more accessible programming models are becoming attractive devices for compute-intensive applications. 
They are particularly well suited for applications, such as image processing, where the end result is intended to be displayed via the graphics card. One of the more versatile and powerful graphics techniques is ray tracing. However, tracing each ray of light in a scene is very computationally expensive and has traditionally been preprocessed on CPUs over hours, if not days. In this paper, Nvidia’s new OptiX ray tracing engine is used to show how the power of modern graphics cards, such as the Nvidia Quadro FX 5800, can be harnessed to ray trace several scenes that represent real-life applications at real-time speeds ranging from 20.63 to 67.15 fps. Near-perfect speedup is demonstrated on dual GPUs for scenes with complex geometries. The impact on ray tracing of the recently announced Nvidia Fermi processor is also discussed.", "title": "" }, { "docid": "edb9d5cbbc7b976a009e583f9947134b", "text": "An important part of image enhancement is color constancy, which aims to make image colors invariant to illumination. In this paper, the Color Dog (CD), a new learning-based global color constancy method, is proposed. Instead of providing its own estimation, it corrects the other methods’ illumination estimations by reducing their scattering in the chromaticity space using its previously learned partition. The proposed method outperforms all other methods on most high-quality benchmark datasets. The results are presented and discussed.", "title": "" }, { "docid": "6753c9ed08f6941e1d7dd5fc283cafac", "text": "This letter presents a wideband transformer balun with a center open stub. Since the interconnected line between two coupled-lines greatly deteriorates the performance of the balun in millimeter-wave designs, the proposed center open stub provides a good solution to further optimize the balance of the balun. The proposed transformer balun with a center open stub has been fabricated in 90 nm CMOS technology, with a compact chip area of 0.012 mm2. 
The balun achieves an amplitude imbalance of less than 1 dB for a frequency band ranging from 1 to 48 GHz along with a phase imbalance of less than 5 degrees for the frequency band ranging from 2 to 47 GHz.", "title": "" }, { "docid": "6b49ccb6cb443c89fd32f407cb575653", "text": "Recently, there has been a growing interest in end-to-end speech recognition that directly transcribes speech to text without any predefined alignments. In this paper, we explore the use of attention-based encoder-decoder model for Mandarin speech recognition on a voice search task. Previous attempts have shown that applying attention-based encoder-decoder to Mandarin speech recognition was quite difficult due to the logographic orthography of Mandarin, the large vocabulary and the conditional dependency of the attention model. In this paper, we use character embedding to deal with the large vocabulary. Several tricks are used for effective model training, including L2 regularization, Gaussian weight noise and frame skipping. We compare two attention mechanisms and use attention smoothing to cover long context in the attention model. Taken together, these tricks allow us to finally achieve a character error rate (CER) of 3.58% and a sentence error rate (SER) of 7.43% on the MiTV voice search dataset. While together with a trigram language model, CER and SER reach 2.81% and 5.77%, respectively.", "title": "" }, { "docid": "95db5921ba31588e962ffcd8eb6469b0", "text": "The purpose of text clustering in information retrieval is to discover groups of semantically related documents. Accurate and comprehensible cluster descriptions (labels) let the user comprehend the collection’s content faster and are essential for various document browsing interfaces. The task of creating descriptive, sensible cluster labels is difficult—typical text clustering algorithms focus on optimizing proximity between documents inside a cluster and rely on keyword representation for describing discovered clusters. 
In the approach called Description Comes First (DCF), cluster labels are as important as document groups—DCF promotes machine discovery of comprehensible candidate cluster labels later used to discover related document groups. In this paper, we describe an application of DCF to the k-Means algorithm, including results of experiments performed on the 20-newsgroups document collection. Experimental evaluation showed that DCF does not decrease the metrics used to assess the quality of document assignment and offers good cluster labels in return. The algorithm utilizes the search engine’s data structures directly to scale to large document collections. Introduction. Organizing unstructured collections of textual content into semantically related groups, from now on referred to as text clustering or clustering, provides unique ways of digesting large amounts of information. In the context of information retrieval and text mining, a general definition of clustering is the following: given a large set of documents, automatically discover diverse subsets of documents that share a similar topic. In typical applications, input documents are first transformed into a mathematical model where each document is described by certain features. The most popular representation for text is the vector space model [Salton, 1989]. In the VSM, documents are expressed as rows in a matrix, where columns represent unique terms (features) and the intersection of a column and a row indicates the importance of a given word to the document. A model such as the VSM helps in the calculation of similarity between documents (angle between document vectors) and thus facilitates the application of various known (or modified) numerical clustering algorithms. 
While this is sufficient for many applications, problems arise when one needs to construct some representation of the discovered groups of documents—a label, a symbolic description for each cluster, something to represent the information that makes documents inside a cluster similar to each other and that would convey this information to the user. Cluster labeling problems are often present in modern text and Web mining applications with document browsing interfaces. The process of returning from the mathematical model of clusters to comprehensible, explanatory labels is difficult because text representation used for clustering rarely preserves the inflection and syntax of the original text. Clustering algorithms presented in literature usually fall back to the simplest form of cluster representation—a list of cluster’s keywords (most “central” terms in the cluster). Unfortunately, keywords are stripped from syntactical information and force the user to manually find the underlying concept which is often confusing. Motivation and Related Works The user of a retrieval system judges the clustering algorithm by what he sees in the output— clusters’ descriptions, not the final model which is usually incomprehensible for humans. The experiences with the text clustering framework Carrot (www.carrot2.org) resulted in posing a slightly different research problem (aligned with clustering but not exactly the same). We shifted the emphasis of a clustering method to providing comprehensible and accurate cluster labels in addition to discovery of document groups. We call this problem descriptive clustering: discovery of diverse groups of semantically related documents associated with a meaningful, comprehensible and compact text labels. This definition obviously leaves a great deal of freedom for interpretation because terms such as meaningful or accurate are very vague. 
We narrowed the set of requirements of descriptive clustering to the following ones: — comprehensibility, understood as grammatical correctness (word order, inflection, agreement between words if applicable); — conciseness of labels. Phrases selected for a cluster label should minimize its total length (without sacrificing its comprehensibility); — transparency of the relationship between cluster label and cluster content, best explained by the ability to answer questions such as: “Why was this label selected for these documents?” and “Why is this document in a cluster labeled X?”. Little research has been done to address the requirements above. In the STC algorithm, the authors employed frequently recurring phrases as both a document similarity feature and the final cluster description [Zamir and Etzioni, 1999]. A follow-up work [Ferragina and Gulli, 2004] showed how to avoid certain STC limitations and use non-contiguous phrases (so-called approximate sentences). A different idea of ‘label-driven’ clustering appeared in the clustering with committees algorithm [Pantel and Lin, 2002], where strongly associated terms related to unambiguous concepts were evaluated using semantic relationships from WordNet. We introduced the DCF approach in our previous work [Osiński and Weiss, 2005] and showed its feasibility using an algorithm called Lingo. Lingo used singular value decomposition of the term-document matrix to select good cluster labels among candidates extracted from the text (frequent phrases). The algorithm was designed to cluster results from Web search engines (short snippets and fragmented descriptions of original documents) and proved to provide diverse meaningful cluster labels. Lingo’s weak point is its limited scalability to full or even medium-sized documents. In this", "title": "" } ]
scidocsrr
c1ed93ce1ab856b0c97cbf38270dd1bf
WHUIRGroup at TREC 2016 Clinical Decision Support Task
[ { "docid": "03b08a01be48aaa76684411b73e5396c", "text": "The goal of TREC 2015 Clinical Decision Support Track was to retrieve biomedical articles relevant for answering three kinds of generic clinical questions, namely diagnosis, test, and treatment. In order to achieve this purpose, we investigated three approaches to improve the retrieval of relevant articles: modifying queries, improving indexes, and ranking with ensembles. Our final submissions were a combination of several different configurations of these approaches. Our system mainly focused on the summary fields of medical reports. We built two different kinds of indexes – an inverted index on the free text and a second kind of indexes on the Unified Medical Language System (UMLS) concepts within the entire articles that were recognized by MetaMap. We studied the variations of including UMLS concepts at paragraph and sentence level and experimented with different thresholds of MetaMap matching scores to filter UMLS concepts. The query modification process in our system involved automatic query construction, pseudo relevance feedback, and manual inputs from domain experts. Furthermore, we trained a re-ranking sub-system based on the results of TREC 2014 Clinical Decision Support track using Indri’s Learning to Rank package, RankLib. Our experiments showed that the ensemble approach could improve the overall results by boosting the ranking of articles that are near the top of several single ranked lists.", "title": "" } ]
[ { "docid": "edbad8d3889a431c16e4a51d0c1cc19c", "text": "We propose to automatically create capsule wardrobes. Given an inventory of candidate garments and accessories, the algorithm must assemble a minimal set of items that provides maximal mix-and-match outfits. We pose the task as a subset selection problem. To permit efficient subset selection over the space of all outfit combinations, we develop submodular objective functions capturing the key ingredients of visual compatibility, versatility, and user-specific preference. Since adding garments to a capsule only expands its possible outfits, we devise an iterative approach to allow near-optimal submodular function maximization. Finally, we present an unsupervised approach to learn visual compatibility from \"in the wild\" full body outfit photos; the compatibility metric translates well to cleaner catalog photos and improves over existing methods. Our results on thousands of pieces from popular fashion websites show that automatic capsule creation has potential to mimic skilled fashionistas in assembling flexible wardrobes, while being significantly more scalable.", "title": "" }, { "docid": "e1f2647131e9194bc4edfd9c629900a8", "text": "Thomson coil actuators (also known as repulsion coil actuators) are well suited for vacuum circuit breakers when fast operation is desired such as for hybrid AC and DC circuit breaker applications. This paper presents investigations on how the actuator drive circuit configurations as well as their discharging pulse patterns affect the magnetic force and therefore the acceleration, as well as the mechanical robustness of these actuators. Comprehensive multi-physics finite-element simulations of the Thomson coil actuated fast mechanical switch are carried out to study the operation transients and how to maximize the actuation speed. 
Different drive circuits are compared: three single-switch circuits are evaluated; the pulse pattern of a typical pulse forming network circuit is studied, concerning both actuation speed and maximum stress; a two-stage drive circuit is also investigated. A 630 A, 15 kV / 1 ms prototype employing a vacuum interrupter with a 6 mm maximum open gap was developed and tested. The total moving mass accelerated by the actuator is about 1.2 kg. The measured results match well with the simulated results in the FEA study.", "title": "" }, { "docid": "bb9829b182241f70dbc1addd1452c09d", "text": "This paper presents the first complete 2.5 V, 77 GHz chipset for Doppler radar and imaging applications fabricated in 0.13 μm SiGe HBT technology. The chipset includes a voltage-controlled oscillator with -101.6 dBc/Hz phase noise at 1 MHz offset, a 25 dB gain low-noise amplifier, a novel low-voltage double-balanced Gilbert-cell mixer with two mm-wave baluns and an IF amplifier achieving a 12.8 dB noise figure and an OP1dB of +5 dBm, a 99 GHz static frequency divider consuming a record low 75 mW, and a power amplifier with 19 dB gain, +14.4 dBm saturated power, and 15.7% PAE. Monolithic spiral inductors and transformers result in the lowest reported 77 GHz receiver core area of only 0.45 mm × 0.30 mm. Simplified circuit topologies allow 77 GHz operation up to 125 °C from 2.5 V/1.8 V supplies. Technology splits of the SiGe HBTs are employed to determine the optimum HBT profile for mm-wave performance.", "title": "" }, { "docid": "45c515da4f8e9c383f6d4e0fa6e09192", "text": "In this paper, we demonstrate our Img2UML system tool. This system tool eliminates the gap between pixel-based diagrams and engineering models: it supports the extraction of the UML class model from images and produces an XMI file of the UML model. In addition to this, Img2UML offers a repository of UML class models of images that have been collected from the Internet. 
This project has both industrial and academic aims: for industry, this tool proposes a method that enables the updating of software design documentation (which typically contains UML images). For academia, this system unlocks a corpus of UML models that are publicly available, but not easily analyzable for scientific studies.", "title": "" }, { "docid": "3ba6a250322d67cd0a91b703d75b88dc", "text": "Untethered robots miniaturized to the length scale of a millimeter and below attract growing attention for the prospect of transforming many aspects of health care and bioengineering. As the robot size goes down to the order of a single cell, previously inaccessible body sites would become available for high-resolution in situ and in vivo manipulations. This unprecedented direct access would enable an extensive range of minimally invasive medical operations. Here, we provide a comprehensive review of the current advances in biomedical untethered mobile milli/microrobots. We put a special emphasis on the potential impacts of biomedical microrobots in the near future. Finally, we discuss the existing challenges and emerging concepts associated with designing such a miniaturized robot for operation inside a biological environment for biomedical applications.", "title": "" }, { "docid": "0573d09bf0fb573b5ad0bdfa7f3c2485", "text": "Social media have been adopted by many businesses. More and more companies are using social media tools such as Facebook and Twitter to provide various services and interact with customers. As a result, a large amount of user-generated content is freely available on social media sites. To increase competitive advantage and effectively assess the competitive environment of businesses, companies need to monitor and analyze not only the customer-generated content on their own social media sites, but also the textual information on their competitors’ social media sites. 
In an effort to help companies understand how to perform a social media competitive analysis and transform social media data into knowledge for decision makers and e-marketers, this paper describes an in-depth case study which applies text mining to analyze unstructured text content on Facebook and Twitter sites of the three largest pizza chains: Pizza Hut,", "title": "" }, { "docid": "e72f8ad61a7927fee8b0a32152b0aa4b", "text": "Geolocation prediction is vital to geospatial applications like localised search and local event detection. Predominantly, social media geolocation models are based on full text data, including common words with no geospatial dimension (e.g. today) and noisy strings (tmrw), potentially hampering prediction and leading to slower/more memory-intensive models. In this paper, we focus on finding location indicative words (LIWs) via feature selection, and establishing whether the reduced feature set boosts geolocation accuracy. Our results show that an information gain ratio-based approach surpasses other methods at LIW selection, outperforming state-of-the-art geolocation prediction methods by 10.6% in accuracy and reducing the mean and median of prediction error distance by 45 km and 209 km, respectively, on a public dataset. We further formulate notions of prediction confidence, and demonstrate that performance is even higher in cases where our model is more confident, striking a trade-off between accuracy and coverage. Finally, the identified LIWs reveal regional language differences, which could be potentially useful for lexicographers.", "title": "" }, { "docid": "6954c2a51c589987ba7e37bd81289ba1", "text": "This paper looks at some of the algorithms that can be used for effective detection and tracking of vehicles, in particular for statistical analysis. The main methods for tracking discussed and implemented are blob analysis, optical flow and foreground detection. 
A further analysis is also done testing two of the techniques using a number of video sequences that include different levels of difficulty.", "title": "" }, { "docid": "a64847d15292f9758a337b8481bc7814", "text": "This paper studies the use of tree edit distance for pattern matching of abstract syntax trees of images generated with tree picture grammars. This was done with a view to measuring its effectiveness in determining image similarity, when compared to current state-of-the-art similarity measures used in Content Based Image Retrieval (CBIR). Eight computer-based similarity measures were selected for their diverse methodology and effectiveness. The eight visual descriptors and tree edit distance were tested against some of the images from our corpus of thousands of syntactically generated images. The first and second sets of experiments showed that tree edit distance and Spatial Colour Distribution (SpCD) are the most suited for determining similarity of syntactically generated images. A third set of experiments was performed with tree edit distance and SpCD only. Results obtained showed that while both of them performed well in determining similarity of the generated images, the tree edit distance is better able to detect more subtle human-observable image differences than SpCD. Also, tree edit distance more closely models the generative sequence of these tree picture grammars.", "title": "" }, { "docid": "320c5bf641fa348cd1c8fb806558fe68", "text": "A CMOS low-dropout regulator (LDO) with 3.3 V output voltage and 100 mA output current for system-on-chip applications is presented. The proposed LDO is independent of an off-chip capacitor; thus, the board space and external pins are reduced. By utilizing a dynamic slew-rate enhancement (SRE) circuit and nested Miller compensation (NMC) in the LDO structure, the proposed LDO provides high stability during line and load regulation without an off-chip load capacitor. 
The overshoot voltage is limited to within 550 mV and the settling time is less than 50 μs when the load current drops from 100 mA to 1 mA. Using a 30 nA reference current, the quiescent current is 3.3 μA. The experimental results agree with the simulation results. The proposed design is implemented in a CSMC 0.5 μm mixed-signal process.", "title": "" }, { "docid": "e8bdec1a8f28631e0a61d9d1b74e4e05", "text": "As a kernel function in network routers, packet classification requires the incoming packet headers to be checked against a set of predefined rules. There are two trends for packet classification: (1) to examine a large number of packet header fields, and (2) to use software-based solutions on multi-core general-purpose processors and virtual machines. Although packet classification has been widely studied, most existing solutions on multi-core systems target the classic 5-field packet classification; it is not easy to scale up their performance with respect to the number of packet header fields. In this work, we present a decomposition-based packet classification approach; it supports large rule sets consisting of a large number of packet header fields. In our approach, range-tree and hashing are used to search the fields of the input packet header in parallel. The partial results from all the fields are represented in rule ID sets; they are merged efficiently to produce the final match result. We implement our approach and evaluate its performance with respect to overall throughput and processing latency for rule set sizes varying from 1 to 32 K. 
Experimental results on state-of-the-art 16-core platforms show that an overall throughput of 48 million packets per second and a processing latency of 2,000 ns per packet can be achieved for a 32 K rule set.", "title": "" }, { "docid": "634509a9d6484ba51d01f9c049551df5", "text": "In this paper, we propose a joint training approach to voice activity detection (VAD) to address the issue of performance degradation due to unseen noise conditions. Two key techniques are integrated into this deep neural network (DNN) based VAD framework. First, a regression DNN is trained to map the noisy to clean speech features, similar to DNN-based speech enhancement. Second, the VAD part to discriminate speech against noise backgrounds is also a DNN trained with a large amount of diversified noisy data synthesized by a wide range of additive noise types. By stacking the classification DNN on top of the enhancement DNN, this integrated DNN can be jointly trained to perform VAD. The feature mapping DNN serves as a noise normalization module aiming at explicitly generating the “clean” features which are easier to correctly recognize by the following classification DNN. Our experimental results demonstrate that the proposed noise-universal DNN-based VAD algorithm achieves a good generalization capacity to unseen noises, and the jointly trained DNNs consistently and significantly outperform the conventional classification-based DNN for all the noise types and signal-to-noise levels tested.", "title": "" }, { "docid": "0ea451a2030603899d9ad95649b73908", "text": "Distributed artificial intelligence (DAI) is a subfield of artificial intelligence that deals with interactions of intelligent agents. More precisely, DAI attempts to construct intelligent agents that make decisions that allow them to achieve their goals in a world populated by other intelligent agents with their own goals. This paper discusses major concepts used in DAI today. 
To do this, a taxonomy of DAI is presented, based on the social abilities of an individual agent, the organization of agents, and the dynamics of this organization through time. Social abilities are characterized by reasoning about other agents and the assessment of a distributed situation. Organization depends on the degree of cooperation and on the paradigm of communication. Finally, the dynamics of organization is characterized by the global coherence of the group and the coordination between agents. A reasonably representative review of recent work done in the DAI field is also supplied in order to provide a better appreciation of this vibrant AI field. The paper concludes with important issues on which further research in DAI is needed.", "title": "" }, { "docid": "2f1e059a0c178b3703c31ad31761dadc", "text": "This paper will serve as an introduction to the body of work on robust subspace recovery. Robust subspace recovery involves finding an underlying low-dimensional subspace in a data set that is possibly corrupted with outliers. While this problem is easy to state, it has been difficult to develop optimal algorithms due to its underlying nonconvexity. This work emphasizes the advantages and disadvantages of proposed approaches and unsolved problems in the area.", "title": "" }, { "docid": "3007cf623eff81d46a496e16a0d2d5bc", "text": "Grounded language learning bridges words like ‘red’ and ‘square’ with robot perception. The vast majority of existing work in this space limits robot perception to vision. In this paper, we build perceptual models that use haptic, auditory, and proprioceptive data acquired through robot exploratory behaviors to go beyond vision. Our system learns to ground natural language words describing objects using supervision from an interactive human-robot “I Spy” game. In this game, the human and robot take turns describing one object among several, then trying to guess which object the other has described. 
All supervision labels were gathered from human participants physically present to play this game with a robot. We demonstrate that our multi-modal system for grounding natural language outperforms a traditional, vision-only grounding framework by comparing the two on the “I Spy” task. We also provide a qualitative analysis of the groundings learned in the game, visualizing what words are understood better with multi-modal sensory information as well as identifying learned word meanings that correlate with physical object properties (e.g. ‘small’ negatively correlates with object weight).", "title": "" }, { "docid": "9a6f62dd4fc2e9b7f6be5b30c731367c", "text": "In this paper we present a filter algorithm for nonlinear programming and prove its global convergence to stationary points. Each iteration is composed of a feasibility phase, which reduces a measure of infeasibility, and an optimality phase, which reduces the objective function in a tangential approximation of the feasible set. These two phases are totally independent, and the only coupling between them is provided by the filter. The method is independent of the internal algorithms used in each iteration, as long as these algorithms satisfy reasonable assumptions on their efficiency. Under standard hypotheses, we show two results: for a filter with minimum size, the algorithm generates a stationary accumulation point; for a slightly larger filter, all accumulation points are stationary.", "title": "" }, { "docid": "5ac6e54d3ce35297c63ea3fd9c5ad0d9", "text": "In this paper, we intend to propose a new heuristic optimization method, called animal migration optimization algorithm. This algorithm is inspired by the animal migration behavior, which is a ubiquitous phenomenon that can be found in all major animal groups, such as birds, mammals, fish, reptiles, amphibians, insects, and crustaceans. In our algorithm, there are mainly two processes. 
In the first process, the algorithm simulates how the groups of animals move from the current position to the new position. During this process, each individual should obey three main rules. In the latter process, the algorithm simulates how some animals leave the group and some join the group during the migration. In order to verify the performance of our approach, 23 benchmark functions are employed. The proposed method has been compared with other well-known heuristic search methods. Experimental results indicate that the proposed algorithm performs better than, or at least comparably with, state-of-the-art approaches from the literature when considering the quality of the solution obtained.", "title": "" }, { "docid": "96804634aa7c691aed1eae11d3e44591", "text": "AIMS\nTo investigate the association between the ABO blood group and gestational diabetes mellitus (GDM).\n\n\nMATERIALS AND METHODS\nA retrospective case-control study was conducted using data from 5424 Japanese pregnancies. GDM screening was performed in the first trimester using a casual blood glucose test and in the second trimester using a 50-g glucose challenge test. If the screening was positive, a 75-g oral glucose tolerance test was performed for a GDM diagnosis, which was defined according to the International Association of Diabetes and Pregnancy Study Groups. Logistic regression was used to obtain the odds ratio (OR) and 95% confidence interval (CI) adjusted for traditional risk factors.\n\n\nRESULTS\nWomen with the A blood group (adjusted OR: 0.34, 95% CI: 0.19-0.63), B (adjusted OR: 0.35, 95% CI: 0.18-0.68), or O (adjusted OR: 0.39, 95% CI: 0.21-0.74) were at decreased risk of GDM compared with those with group AB. 
Women with the AB group were associated with increased risk of GDM as compared with those with A, B, or O (adjusted OR: 2.73, 95% CI: 1.64-4.57).\n\n\nCONCLUSION\nABO blood groups are associated with GDM, and group AB was a risk factor for GDM in the Japanese population.", "title": "" }, { "docid": "77df05c7e00485b66a1aacbab44847fb", "text": "Study Objective: To determine the prevalence of vulvovaginitis, predisposing factors, microbial etiology and therapy in patients treated at the Hospital del Niño DIF, Pachuca, Hidalgo, Mexico. Design: This was an observational and descriptive study from 2006 to 2009. Setting: Hospital del Niño DIF, Pachuca, Hidalgo, Mexico. Participants: Patients from 0 to 16 years, with vulvovaginitis and/or vaginal discharge were included. Interventions: None. Main Outcome Measures: Demographic data, etiology, clinical features, risk factors and therapy were analyzed. Results: Four hundred twenty-seven patients with a diagnosis of vulvovaginitis were included. The average prevalence over the 4 years of the study period was 0.19%. The age group most affected was schoolchildren (225 cases: 52.69%). The main signs and symptoms presented were leucorrhea (99.3%), vaginal hyperemia (32.6%), vulvar itching (32.1%) and erythema (28.8%). Identified risk factors were poor hygiene (15.7%), urinary tract infection (14.7%), intestinal parasites (5.6%) and obesity or overweight (3.3%). The main microorganisms found in vaginal cultures were enterobacteriaceae (Escherichia coli, Klebsiella and Enterococcus faecalis), Staphylococcus spp, and Gardnerella vaginalis. Several inconsistencies were found in the drug prescriptions of the patients. Conclusion: Vulvovaginitis prevalence in Mexican girls is low and it was caused mainly by opportunistic microorganisms. 
The initial treatment of vulvovaginitis must include hygienic measures and an antimicrobial chosen according to the clinical features and the microorganism found.", "title": "" }, { "docid": "e99d7b425ab1a2a9a2de4e10a3fbe766", "text": "In this paper, a review of the authors' work on inkjet-printed flexible antennas, fabricated on paper substrates, is given. This is presented as a system-level solution for ultra-low-cost mass production of UHF radio-frequency identification (RFID) tags and wireless sensor nodes (WSN), in an approach that could be easily extended to other microwave and wireless applications. First, we discuss the benefits of using paper as a substrate for high-frequency applications, reporting its very good electrical/dielectric performance up to at least 1 GHz. The RF characteristics of the paper-based substrate are studied by using a microstrip-ring resonator, in order to characterize the dielectric properties (dielectric constant and loss tangent). We then give details about the inkjet-printing technology, including the characterization of the conductive ink, which consists of nano-silver particles. We highlight the importance of this technology as a fast and simple fabrication technique, especially on flexible organic (e.g., LCP) or paper-based substrates. A compact inkjet-printed UHF “passive RFID” antenna, using the classic T-match approach and designed to match the IC's complex impedance, is presented as a demonstration prototype for this technology. In addition, we briefly touch upon the state-of-the-art area of fully-integrated wireless sensor modules on paper. We show the first-ever two-dimensional sensor integration with an RFID tag module on paper, as well as the possibility of a three-dimensional multilayer paper-based RF/microwave structure.", "title": "" } ]
scidocsrr
0b9b8da4a6c3e07a7dd33b61b6d44a3e
Classifying and visualizing motion capture sequences using deep neural networks
[ { "docid": "6af09f57f2fcced0117dca9051917a0d", "text": "We present a novel per-dimension learning rate method for gradient descent called ADADELTA. The method dynamically adapts over time using only first order information and has minimal computational overhead beyond vanilla stochastic gradient descent. The method requires no manual tuning of a learning rate and appears robust to noisy gradient information, different model architecture choices, various data modalities and selection of hyperparameters. We show promising results compared to other methods on the MNIST digit classification task using a single machine and on a large scale voice dataset in a distributed cluster environment.", "title": "" } ]
[ { "docid": "3b07476ebb8b1d22949ec32fc42d2d05", "text": "We provide a systematic review of the adaptive comanagement (ACM) literature to (i) investigate how the concept of governance is considered and (ii) examine what insights ACM offers with reference to six key concerns in environmental governance literature: accountability and legitimacy; actors and roles; fit, interplay, and scale; adaptiveness, flexibility, and learning; evaluation and monitoring; and, knowledge. Findings from the systematic review uncover a complicated relationship with evidence of conceptual closeness as well as relational ambiguities. The findings also reveal several specific contributions from the ACM literature to each of the six key environmental governance concerns, including applied strategies for sharing power and responsibility and value of systems approaches in understanding problems of fit. More broadly, the research suggests a dissolving or fuzzy boundary between ACM and governance, with implications for understanding emerging approaches to navigate social-ecological system change. Future research opportunities may be found at the confluence of ACM and environmental governance scholarship, such as identifying ways to build adaptive capacity and encouraging the development of more flexible governance arrangements.", "title": "" }, { "docid": "d3b24655e01cbb4f5d64006222825361", "text": "A number of leading cognitive architectures that are inspired by the human brain, at various levels of granularity, are reviewed and compared, with special attention paid to the way their internal structures and dynamics map onto neural processes. Four categories of Biologically Inspired Cognitive Architectures (BICAs) are considered, with multiple examples of each category briefly reviewed, and selected examples discussed in more depth: primarily symbolic architectures (e.g. ACT-R), emergentist architectures (e.g. DeSTIN), developmental robotics architectures (e.g. 
IM-CLEVER), and our central focus, hybrid architectures (e.g. LIDA, CLARION, 4D/RCS, DUAL, MicroPsi, and OpenCog). Given the state of the art in BICA, it is not yet possible to tell whether emulating the brain on the architectural level is going to be enough to allow rough emulation of brain function; and given the state of the art in neuroscience, it is not yet possible to connect BICAs with large-scale brain simulations in a thoroughgoing way. However, it is nonetheless possible to draw reasonably close functional connections between various components of various BICAs and various brain regions and dynamics, and as both BICAs and brain simulations mature, these connections should become richer and may extend further into the domain of internal dynamics as well as overall behavior. © 2010 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "89b8317509d27a6f13d8ba38f52f4816", "text": "The merging of optimization and simulation technologies has seen a rapid growth in recent years. A Google search on \"Simulation Optimization\" returns more than six thousand pages where this phrase appears. The content of these pages ranges from articles, conference presentations and books to software, sponsored work and consultancy. This is an area that has sparked as much interest in the academic world as in practical settings. In this paper, we first summarize some of the most relevant approaches that have been developed for the purpose of optimizing simulated systems. We then concentrate on the metaheuristic black-box approach that leads the field of practical applications and provide some relevant details of how this approach has been implemented and used in commercial software. 
Finally, we present an example of simulation optimization in the context of a simulation model developed to predict performance and measure risk in a real-world project selection problem.", "title": "" }, { "docid": "0ca703e4379b89bd79b1c33d6cc0ce3e", "text": "PET image reconstruction is challenging due to the ill-posedness of the inverse problem and the limited number of detected photons. Recently, deep neural networks have been widely and successfully used in computer vision tasks and have attracted growing interest in medical imaging. In this paper, we trained a deep residual convolutional neural network to improve PET image quality by using the existing inter-patient information. An innovative feature of the proposed method is that we embed the neural network in the iterative reconstruction framework for image representation, rather than using it as a post-processing tool. We formulate the objective function as a constrained optimization problem and solve it using the alternating direction method of multipliers algorithm. Both simulation data and hybrid real data are used to evaluate the proposed method. Quantification results show that our proposed iterative neural network method can outperform the neural network denoising and conventional penalized maximum likelihood methods.", "title": "" }, { "docid": "5d8bc7d7c3ca5f8ebef7cbdace5a5db2", "text": "The concept of knowledge management (KM) as a powerful competitive weapon has been strongly emphasized in the strategic management literature, yet the sustainability of the competitive advantage provided by KM capability is not well-explained. To fill this gap, this paper develops the concept of KM as an organizational capability and empirically examines the association between KM capabilities and competitive advantage. In order to provide a better presentation of significant relationships, the resource-based view of the firm is adopted, as it explicitly recognizes the importance of KM resources and capabilities. 
Firm-specific KM resources are classified into social KM resources and technical KM resources. Surveys collected from 177 firms were analyzed and tested. The results confirmed the impact of social KM resources on competitive advantage. Technical KM resources are negatively related to competitive advantage, and KM capability is significantly related to competitive advantage. © 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "da5c56f30c9c162eb80c418ba9dbc31a", "text": "Text detection and recognition in a natural environment are key components of many applications, ranging from business card digitization to shop indexation in a street. This competition aims at assessing the ability of state-of-the-art methods to detect Multi-Lingual Text (MLT) in scene images, such as in contents gathered from the Internet media and in modern cities where multiple cultures live and communicate together. This competition is an extension of the Robust Reading Competition (RRC) which has been held since 2003 both in ICDAR and in an online context. The proposed competition is presented as a new challenge of the RRC. The dataset built for this challenge largely extends the previous RRC editions in many aspects: the multi-lingual text, the size of the dataset, the multi-oriented text, the wide variety of scenes. The dataset is comprised of 18,000 images which contain text belonging to 9 languages. The challenge is comprised of three tasks related to text detection and script classification. We have received a total of 16 participations from the research and industrial communities. This paper presents the dataset, the tasks and the findings of this RRC-MLT challenge.", "title": "" }, { "docid": "0f7f8557ffa238a529f28f9474559cc4", "text": "Fast incipient machine fault diagnosis is becoming one of the key requirements for economical and optimal process operation management. 
Artificial neural networks have been used to detect machine faults for a number of years and have been shown to be highly successful in this application area. This paper presents a novel test technique for machine fault detection and classification in electro-mechanical machinery from vibration measurements using one-class support vector machines (SVMs). In order to evaluate one-class SVMs, this paper examines the performance of the proposed method by comparing it with that of the multilayer perceptron, one of the artificial neural network techniques, based on real benchmarking data. © 2005 Published by Elsevier Ltd.", "title": "" }, { "docid": "973426438175226bb46c39cc0a390d97", "text": "This paper proposes a methodology for the creation of specialized data sets for Textual Entailment, made of monothematic Text-Hypothesis pairs (i.e. pairs in which only one linguistic phenomenon relevant to the entailment relation is highlighted and isolated). The annotation procedure assumes that humans have knowledge about the linguistic phenomena relevant to inference, and a classification of such phenomena both into fine grained and macro categories is suggested. We experimented with the proposed methodology over a sample of pairs taken from the RTE-5 data set, and investigated critical issues arising when entailment, contradiction or unknown pairs are considered. The result is a new resource, which can be profitably used both to advance the comprehension of the linguistic phenomena relevant to entailment judgments and to make a first step towards the creation of large-scale specialized data sets.", "title": "" }, { "docid": "458e4b5196805b608e15ee9c566123c9", "text": "For the first half century of animal virology, the major problem was the lack of a simple method for quantitating infectious virus particles; the only method available at that time was some form or other of the serial-dilution end-point method in animals, all of which were both slow and expensive. 
Cloned cultured animal cells, which began to be available around 1950, provided Dulbecco with a new approach. He adapted the technique developed by Emory Ellis and Max Delbrück for assaying bacteriophage, that is, seeding serial dilutions of a given virus population onto a confluent lawn of host cells, to the measurement of Western equine encephalitis virus, and demonstrated that it also formed easily countable plaques in monolayers of chick embryo fibroblasts. The impact of this finding was enormous; animal virologists had been waiting for such a technique for decades. It was immediately found to be widely applicable to many types of cells and most viruses, gained quick acceptance, and is widely regarded as marking the beginning of molecular animal virology. Renato Dulbecco was awarded the Nobel Prize in 1975. W. K. JOKLIK", "title": "" }, { "docid": "d6d07f50778ba3d99f00938b69fe0081", "text": "The use of metal casing is attractive to achieve robustness of modern slim tablet devices. The metal casing includes the metal back cover and the metal frame around the edges thereof. For such metal-casing tablet devices, the frame antenna that uses a part of the metal frame as an antenna's radiator is promising to achieve wide bandwidths for mobile communications. In this paper, the frame antenna based on the simple half-loop antenna structure to cover the long-term evolution 746-960 and 1710-2690 MHz bands is presented. The half-loop structure for the frame antenna is easy for manufacturing and increases the robustness of the metal casing. The dual-wideband operation of the half-loop frame antenna is obtained by using an elevated feed network supported by a thin feed substrate. The measured antenna efficiencies are, respectively, 45%-69% and 60%-83% in the low and high bands. By selecting different feed circuits, the antenna's low band can also be shifted from 746-960 MHz to lower frequencies such as 698-840 MHz, with the antenna's high-band coverage very slightly varied. 
The working principle of the antenna with the elevated feed network is discussed. The antenna is also fabricated and tested, and experimental results are presented.", "title": "" }, { "docid": "c53c60ceb793e8bc986837b5d82145fd", "text": "Given the information overload on the web and the diversity of user interests, it is increasingly difficult for search engines to satisfy users' information needs. Personalized search tackles this problem by considering the user profile during the search. This paper describes a personalized search approach involving a semantic graph-based user profile derived from an ontology. The user profile refers to the user's interest in a specific search session, defined as a sequence of related queries. It is built using a score propagation that activates a set of semantically related concepts, and is maintained in the same search session using a graph-based merging scheme. We also define a session boundary recognition mechanism based on tracking changes in the dominant concepts held by the user profile relative to a new submitted query, using the Kendall rank correlation measure. Then, personalization is achieved by re-ranking the search results of related queries using the user profile. Our experimental evaluation is carried out using the HARD 2003 TREC collection and shows that our approach is effective.", "title": "" }, { "docid": "9d9665a21e5126ba98add5a832521cd1", "text": "Recently several different deep learning architectures have been proposed that take a string of characters as the raw input signal and automatically derive features for text classification. Few studies are available that compare the effectiveness of these approaches for character based text classification with each other. In this paper we perform such an empirical comparison for the important cybersecurity problem of DGA detection: classifying domain names as either benign vs. produced by malware (i.e., by a Domain Generation Algorithm). 
Training and evaluating on a dataset with 2M domain names shows that there is surprisingly little difference between various convolutional neural network (CNN) and recurrent neural network (RNN) based architectures in terms of accuracy, prompting a preference for the simpler architectures, since they are faster to train and to score, and less prone to overfitting.", "title": "" }, { "docid": "80a61f27dab6a8f71a5c27437254778b", "text": "5G will have to cope with a high degree of heterogeneity in terms of services and requirements. Among these latter, the flexible and efficient use of non-contiguous unused spectrum for different network deployment scenarios is considered a key challenge for 5G systems. To maximize spectrum efficiency, the 5G air interface technology will also need to be flexible and capable of mapping various services to the best suitable combinations of frequency and radio resources. In this work, we propose a comparison of several 5G waveform candidates (OFDM, UFMC, FBMC and GFDM) under a common framework. We assess spectral efficiency, power spectral density, peak-to-average power ratio and robustness to asynchronous multi-user uplink transmission. Moreover, we evaluate and compare the complexity of the different waveforms. In addition to the complexity analysis, in this work, we also demonstrate the suitability of FBMC for specific 5G use cases via two experimental implementations. The benefits of these new waveforms for the foreseen 5G use cases are clearly highlighted on representative criteria and experiments.", "title": "" }, { "docid": "b02bcb7e0d7669b69130604157c27c08", "text": "The success of Android phones makes them a prominent target for malicious software, in particular since the Android permission system turned out to be inadequate to protect the user against security and privacy threats. 
This work presents AppGuard, a powerful and flexible system for the enforcement of user-customizable security policies on untrusted Android applications. AppGuard does not require any changes to a smartphone’s firmware or root access. Our system offers complete mediation of security-relevant methods based on callee-site inline reference monitoring. We demonstrate the general applicability of AppGuard by several case studies, e.g., removing permissions from overly curious apps as well as defending against several recent real-world attacks on Android phones. Our technique exhibits very little space and runtime overhead. AppGuard is publicly available, has been invited to the Samsung Apps market, and has had more than 500,000 downloads so far.", "title": "" }, { "docid": "77be4363f9080eb8a3b73c9237becca4", "text": "Aim: The purpose of this paper is to present findings of an integrative literature review related to employees’ motivational practices in organizations. Method: A broad search of computerized databases focusing on articles published in English during 1999–2010 was completed. Extensive screening sought to determine current literature themes and empirical research evidence focused specifically on employee motivation in organizations. Results: 40 articles are included in this integrative literature review. The literature focuses on how job characteristics, employee characteristics, management practices and broader environmental factors influence employees’ motivation. Research on employees’ motivation is based on both qualitative and quantitative studies. Conclusion: This literature reveals widespread support of motivation concepts in organizations. Theoretical and editorial literature confirms that motivation concepts are central to employees. 
Job characteristics, management practices, employee characteristics and broader environmental factors are the key variables that influence employees’ motivation in organizations.", "title": "" }, { "docid": "47eef1318d313e2f89bb700f8cd34472", "text": "This paper sets out to detect controversial news reports using online discussions as a source of information. We define controversy as a public discussion that divides society and demonstrate that a content and stylometric analysis of these debates yields useful signals for extracting disputed news items. Moreover, we argue that a debate-based approach could produce more generic models, since the discussion architectures we exploit to measure controversy occur on many different platforms.", "title": "" }, { "docid": "5ffe358766049379b0910ac1181100af", "text": "A novel one-section bandstop filter (BSF), which possesses the characteristics of compact size, wide bandwidth, and low insertion loss, is proposed and fabricated. This bandstop filter was constructed by using a single quarter-wavelength resonator with one section of anti-coupled lines with short circuits at one end. The attenuation-pole characteristics of this type of bandstop filter are investigated through a TEM transmission-line model. Design procedures are clearly presented. The 3-dB bandwidth of the first stopband and the insertion loss of the first passband of this BSF are from 2.3 GHz to 9.5 GHz and below 0.3 dB, respectively. There is good agreement between the simulated and experimental results.", "title": "" }, { "docid": "7c7beabf8bcaa2af706b6c1fd92ee8dd", "text": "In this paper, two main contributions are presented to manage the power flow between a wind turbine and a solar power system. The first one is to use the fuzzy logic controller with the objective of finding the maximum power point tracking, applied to a hybrid wind-solar system, at fixed atmospheric conditions. 
The second one is to respond to real-time control system constraints and to improve the generating system performance. For this, a hardware implementation of the proposed algorithm is performed using the Xilinx System Generator. The experimental results show that the suggested system presents high accuracy and acceptable execution-time performance. The proposed model and its control strategy offer a proper tool for optimizing the hybrid power system performance, which we can use in smart house applications.", "title": "" }, { "docid": "2da1279270b3e8925100f281447bfb6b", "text": "Consideration of confounding is fundamental to the design and analysis of studies of causal effects. Yet, apart from confounding in experimental designs, the topic is given little or no discussion in most statistics texts. We here provide an overview of confounding and related concepts based on a counterfactual model for causation. Special attention is given to definitions of confounding, problems in control of confounding, the relation of confounding to exchangeability and collapsibility, and the importance of distinguishing confounding from noncollapsibility.", "title": "" }, { "docid": "13cb793ca9cdf926da86bb6fc630800a", "text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.", "title": "" } ]
scidocsrr
c9fb85c377ccc1eb4212759698900753
Very High Frame Rate Volumetric Integration of Depth Images on Mobile Devices
[ { "docid": "0e12ea5492b911c8879cc5e79463c9fa", "text": "In this paper, we propose a complete on-device 3D reconstruction pipeline for mobile monocular hand-held devices, which generates dense 3D models with absolute scale on-site while simultaneously supplying the user with real-time interactive feedback. The method fills a gap in current cloud-based mobile reconstruction services as it ensures at capture time that the acquired image set fulfills desired quality and completeness criteria. In contrast to existing systems, the developed framework offers multiple innovative solutions. In particular, we investigate the usability of the available on-device inertial sensors to make the tracking and mapping process more resilient to rapid motions and to estimate the metric scale of the captured scene. Moreover, we propose an efficient and accurate scheme for dense stereo matching which allows to reduce the processing time to interactive speed. We demonstrate the performance of the reconstruction pipeline on multiple challenging indoor and outdoor scenes of different size and depth variability.", "title": "" }, { "docid": "3f6382ed8f0e89be1a752689d54f0d06", "text": "MonoFusion allows a user to build dense 3D reconstructions of their environment in real-time, utilizing only a single, off-the-shelf web camera as the input sensor. The camera could be one already available in a tablet, phone, or a standalone device. No additional input hardware is required. This removes the need for power intensive active sensors that do not work robustly in natural outdoor lighting. Using the input stream of the camera we first estimate the 6DoF camera pose using a sparse tracking method. These poses are then used for efficient dense stereo matching between the input frame and a key frame (extracted previously). The resulting dense depth maps are directly fused into a voxel-based implicit model (using a computationally inexpensive method) and surfaces are extracted per frame. 
The system is able to recover from tracking failures as well as filter out geometrically inconsistent noise from the 3D reconstruction. Our method is both simple to implement and efficient, making such systems even more accessible. This paper details the algorithmic components that make up our system and a GPU implementation of our approach. Qualitative results demonstrate high quality reconstructions even visually comparable to active depth sensor-based systems such as KinectFusion.", "title": "" }, { "docid": "c8e5257c2ed0023dc10786a3071c6e6a", "text": "Online 3D reconstruction is gaining newfound interest due to the availability of real-time consumer depth cameras. The basic problem takes live overlapping depth maps as input and incrementally fuses these into a single 3D model. This is challenging particularly when real-time performance is desired without trading quality or scale. We contribute an online system for large and fine scale volumetric reconstruction based on a memory and speed efficient data structure. Our system uses a simple spatial hashing scheme that compresses space, and allows for real-time access and updates of implicit surface data, without the need for a regular or hierarchical grid data structure. Surface data is only stored densely where measurements are observed. Additionally, data can be streamed efficiently in or out of the hash table, allowing for further scalability during sensor motion. We show interactive reconstructions of a variety of scenes, reconstructing both fine-grained details and large scale environments. We illustrate how all parts of our pipeline from depth map pre-processing, camera pose estimation, depth map fusion, and surface rendering are performed at real-time rates on commodity graphics hardware. 
We conclude with a comparison to current state-of-the-art online systems, illustrating improved performance and reconstruction quality.", "title": "" }, { "docid": "bfd97b5576873345b0474a645ccda1d6", "text": "We present a direct monocular visual odometry system which runs in real-time on a smartphone. Being a direct method, it tracks and maps on the images themselves instead of extracted features such as keypoints. New images are tracked using direct image alignment, while geometry is represented in the form of a semi-dense depth map. Depth is estimated by filtering over many small-baseline, pixel-wise stereo comparisons. This leads to significantly less outliers and allows to map and use all image regions with sufficient gradient, including edges. We show how a simple world model for AR applications can be derived from semi-dense depth maps, and demonstrate the practical applicability in the context of an AR application in which simulated objects can collide with real geometry.", "title": "" } ]
[ { "docid": "a0ca7d86ae79c263644c8cd5ae4c0aed", "text": "Research in texture recognition often concentrates on the problem of material recognition in uncluttered conditions, an assumption rarely met by applications. In this work we conduct a first study of material and describable texture attributes recognition in clutter, using a new dataset derived from the OpenSurface texture repository. Motivated by the challenge posed by this problem, we propose a new texture descriptor, FV-CNN, obtained by Fisher Vector pooling of a Convolutional Neural Network (CNN) filter bank. FV-CNN substantially improves the state-of-the-art in texture, material and scene recognition. Our approach achieves 79.8% accuracy on Flickr material dataset and 81% accuracy on MIT indoor scenes, providing absolute gains of more than 10% over existing approaches. FV-CNN easily transfers across domains without requiring feature adaptation as for methods that build on the fully-connected layers of CNNs. Furthermore, FV-CNN can seamlessly incorporate multi-scale information and describe regions of arbitrary shapes and sizes. Our approach is particularly suited at localizing “stuff” categories and obtains state-of-the-art results on MSRC segmentation dataset, as well as promising results on recognizing materials and surface attributes in clutter on the OpenSurfaces dataset.", "title": "" }, { "docid": "9e3d3783aa566b50a0e56c71703da32b", "text": "Heterogeneous networks are widely used to model real-world semi-structured data. The key challenge of learning over such networks is the modeling of node similarity under both network structures and contents. To deal with network structures, most existing works assume a given or enumerable set of meta-paths and then leverage them for the computation of meta-path-based proximities or network embeddings. 
However, expert knowledge for given meta-paths is not always available, and as the length of considered meta-paths increases, the number of possible paths grows exponentially, which makes the path searching process very costly. On the other hand, while there are often rich contents around network nodes, they have hardly been leveraged to further improve similarity modeling. In this work, to properly model node similarity in content-rich heterogeneous networks, we propose to automatically discover useful paths for pairs of nodes under both structural and content information. To this end, we combine continuous reinforcement learning and deep content embedding into a novel semi-supervised joint learning framework. Specifically, the supervised reinforcement learning component explores useful paths between a small set of example similar pairs of nodes, while the unsupervised deep embedding component captures node contents and enables inductive learning on the whole network. The two components are jointly trained in a closed loop to mutually enhance each other. Extensive experiments on three real-world heterogeneous networks demonstrate the supreme advantages of our algorithm.", "title": "" }, { "docid": "4261e44dad03e8db3c0520126b9c7c4d", "text": "One of the major drawbacks of magnetic resonance imaging (MRI) has been the lack of a standard and quantifiable interpretation of image intensities. Unlike in other modalities, such as X-ray computerized tomography, MR images taken for the same patient on the same scanner at different times may appear different from each other due to a variety of scanner-dependent variations and, therefore, the absolute intensity values do not have a fixed meaning. 
The authors have devised a two-step method wherein all images (independent of patients and the specific brand of the MR scanner used) can be transformed in such a way that for the same protocol and body region, in the transformed images similar intensities will have similar tissue meaning. Standardized images can be displayed with fixed windows without the need of per-case adjustment. More importantly, extraction of quantitative information about healthy organs or about abnormalities can be considerably simplified. This paper introduces and compares new variants of this standardizing method that can help to overcome some of the problems with the original method.", "title": "" }, { "docid": "143a4fcc0f2949e797e6f51899e811e2", "text": "A long-standing problem at the interface of artificial intelligence and applied mathematics is to devise an algorithm capable of achieving human level or even superhuman proficiency in transforming observed data into predictive mathematical models of the physical world. In the current era of abundance of data and advanced machine learning capabilities, the natural question arises: How can we automatically uncover the underlying laws of physics from high-dimensional data generated from experiments? In this work, we put forth a deep learning approach for discovering nonlinear partial differential equations from scattered and potentially noisy observations in space and time. Specifically, we approximate the unknown solution as well as the nonlinear dynamics by two deep neural networks. The first network acts as a prior on the unknown solution and essentially enables us to avoid numerical differentiations which are inherently ill-conditioned and unstable. The second network represents the nonlinear dynamics and helps us distill the mechanisms that govern the evolution of a given spatiotemporal data-set. 
We test the effectiveness of our approach for several benchmark problems spanning a number of scientific domains and demonstrate how the proposed framework can help us accurately learn the underlying dynamics and forecast future states of the system. In particular, we study the Burgers’, Korteweg-de Vries (KdV), Kuramoto-Sivashinsky, nonlinear Schrödinger, and Navier-Stokes equations.", "title": "" }, { "docid": "27d7f7935c235a3631fba6e3df08f623", "text": "We investigate the task of Named Entity Recognition (NER) in the domain of biomedical text. There is little published work employing modern neural network techniques in this domain, probably due to the small sizes of human-labeled data sets, as non-trivial neural models would have great difficulty avoiding overfitting. In this work we follow a semi-supervised learning approach: We first train state-of-the-art (deep) neural networks on a large corpus of noisy machine-labeled data, then “transfer” and fine-tune the learned model on two higher-quality human-labeled data sets. This approach yields higher performance than the current best published systems for the class DISEASE. It trails but is not far from the currently best systems for the class CHEM.", "title": "" }, { "docid": "3e605aff5b2ceae91ee0cef42dd36528", "text": "A new super-concentrated aqueous electrolyte is proposed by introducing a second lithium salt. The resultant ultra-high concentration of 28 m led to more effective formation of a protective interphase on the anode along with further suppression of water activities at both anode and cathode surfaces. The improved electrochemical stability allows the use of TiO2 as the anode material, and a 2.5 V aqueous Li-ion cell based on LiMn2O4 and carbon-coated TiO2 delivered the unprecedented energy density of 100 Wh kg(-1) for rechargeable aqueous Li-ion cells, along with excellent cycling stability and high coulombic efficiency. 
It has been demonstrated that the introduction of a second salt into the \"water-in-salt\" electrolyte further pushed the energy densities of aqueous Li-ion cells closer to those of the state-of-the-art Li-ion batteries.", "title": "" }, { "docid": "406a8143edfeab7f97d451d0af9b7058", "text": "One of the core questions when designing modern Natural Language Processing (NLP) systems is how to model input textual data such that the learning algorithm is provided with enough information to estimate accurate decision functions. The mainstream approach is to represent input objects as feature vectors where each value encodes some of their aspects, e.g., syntax, semantics, etc. Feature-based methods have demonstrated state-of-the-art results on various NLP tasks. However, designing good features is a highly empirically-driven process; it greatly depends on the task, requiring a significant amount of domain expertise. Moreover, extracting features for complex NLP tasks often requires expensive pre-processing steps running a large number of linguistic tools while relying on external knowledge sources that are often not available or hard to get. Hence, this process is not cheap and often constitutes one of the major challenges when attempting a new task or adapting to a different language or domain. The problem of modelling input objects is even more acute in cases when the input examples are not just single objects but pairs of objects, such as in various learning to rank problems in Information Retrieval and Natural Language processing. An alternative to feature-based methods is using kernels which are essentially nonlinear functions mapping input examples into some high dimensional space thus allowing for learning decision functions with higher discriminative power. Kernels implicitly generate a very large number of features computing similarity between input examples in that implicit space. 
A well-designed kernel function can greatly reduce the effort to design a large set of manually designed features, often leading to superior results. However, in recent years, the use of kernel methods in NLP has been greatly underestimated primarily due to the following reasons: (i) learning with kernels is slow as it requires carrying out optimization in the dual space leading to quadratic complexity; (ii) applying kernels to the input objects encoded with vanilla structures, e.g., generated by syntactic parsers, often yields minor improvements over carefully designed feature-based methods. In this thesis, we adopt the kernel learning approach for solving complex NLP tasks and primarily focus on solutions to the aforementioned problems posed by the use of kernels. In particular, we design novel learning algorithms for training Support Vector Machines with structural kernels, e.g., tree kernels, considerably speeding up the training over the conventional SVM training methods. We show that using the training algorithms developed in this thesis allows for training tree kernel models on large-scale datasets containing millions of instances, which was not possible before. Next, we focus on the problem of designing input structures that are fed to tree kernel functions to automatically generate a large set of tree-fragment features. We demonstrate that previously used plain structures generated by syntactic parsers, e.g., syntactic or dependency trees, are often a poor choice thus compromising the expressivity offered by a tree kernel learning framework. We propose several effective design patterns of the input tree structures for various NLP tasks ranging from sentiment analysis to answer passage reranking. The central idea is to inject additional semantic information relevant for the task directly into the tree nodes and let the expressive kernels generate rich feature spaces. 
For the opinion mining tasks, the additional semantic information injected into tree nodes can be word polarity labels, while for more complex tasks of modelling text pairs the relational information about overlapping words in a pair appears to significantly improve the accuracy of the resulting models. Finally, we observe that both feature-based and kernel methods typically treat words as atomic units where matching different yet semantically similar words is problematic. Conversely, the idea of distributional approaches to model words as vectors is much more effective in establishing a semantic match between words and phrases. While tree kernel functions do allow for a more flexible matching between phrases and sentences through matching their syntactic contexts, their representation can not be tuned on the training set as it is possible with distributional approaches. Recently, deep learning approaches have been applied to generalize the distributional word matching problem to matching sentences taking it one step further by learning the optimal sentence representations for a given task. Deep neural networks have already claimed state-of-the-art performance in many computer vision, speech recognition, and natural language tasks. Following this trend, this thesis also explores the virtue of deep learning architectures for modelling input texts and text pairs where we build on some of the ideas to model input objects proposed within the tree kernel learning framework. In particular, we explore the idea of relational linking (proposed in the preceding chapters to encode text pairs using linguistic tree structures) to design a state-of-the-art deep learning architecture for modelling text pairs. 
We compare the proposed deep learning models, which require even less manual intervention in the feature design process than the previously described tree kernel methods that already offer a very good trade-off between the feature-engineering effort and the expressivity of the resulting representation. Our deep learning models demonstrate state-of-the-art performance on a recent benchmark for Twitter Sentiment Analysis, Answer Sentence Selection and Microblog retrieval.", "title": "" }, { "docid": "f5bea5413ad33191278d7630a7e18e39", "text": "Speech activity detection (SAD) on channel transmissions is a critical preprocessing task for speech, speaker and language recognition or for further human analysis. This paper presents a feature combination approach to improve SAD on highly channel degraded speech as part of the Defense Advanced Research Projects Agency’s (DARPA) Robust Automatic Transcription of Speech (RATS) program. The key contribution is the feature combination exploration of different novel SAD features based on pitch and spectro-temporal processing and the standard Mel Frequency Cepstral Coefficients (MFCC) acoustic feature. The SAD features are: (1) a GABOR feature representation, followed by a multilayer perceptron (MLP); (2) a feature that combines multiple voicing features and spectral flux measures (Combo); (3) a feature based on subband autocorrelation (SAcC) and MLP postprocessing and (4) a multiband comb-filter F0 (MBCombF0) voicing measure. We present single, pairwise and all feature combinations, show high error reductions from pairwise feature level combination over the MFCC baseline and show that the best performance is achieved by the combination of all features.", "title": "" }, { "docid": "a299b0f58aaba6efff9361ff2b5a1e69", "text": "The continuing growth of the World Wide Web and on-line text collections makes a large volume of information available to users. Automatic text summarization allows users to quickly understand documents. 
In this paper, we propose an automated technique for single document summarization which combines content-based and graph-based approaches and introduce the Hopfield network algorithm as a technique for ranking text segments. A series of experiments are performed using the DUC collection and a Thai-document collection. The results show the superiority of the proposed technique over reference systems; in addition, the Hopfield network algorithm on an undirected graph is shown to be the best text segment ranking algorithm in the study.", "title": "" }, { "docid": "41c3505d1341247972d99319cba3e7ba", "text": "A 32-year-old pregnant woman in the 25th week of pregnancy underwent oral glucose tolerance screening at the diabetologist's. Later that day, she was found dead in her apartment possibly poisoned with Chlumsky disinfectant solution (solutio phenoli camphorata). An autopsy revealed chemical burns in the digestive system. The lungs and the brain showed signs of severe edema. The blood of the woman and fetus was analyzed using gas chromatography with mass spectrometry and revealed phenol, its metabolites (phenyl glucuronide and phenyl sulfate) and camphor. No ethanol was found in the blood samples. Both phenol and camphor are contained in Chlumsky disinfectant solution, which is used for disinfecting surgical equipment in healthcare facilities. Further investigation revealed that the deceased woman had been accidentally administered a disinfectant instead of a glucose solution by the nurse, which resulted in acute intoxication followed by the death of the pregnant woman and the fetus.", "title": "" }, { "docid": "00ed940459b92d92981e4132a2b5e9c0", "text": "Variants of Hirschsprung disease are conditions that clinically resemble Hirschsprung disease, despite the presence of ganglion cells in rectal suction biopsies. 
The characterization and differentiation of various entities are mainly based on histologic, immunohistochemical, and electron microscopy findings of biopsies from patients with functional intestinal obstruction. Intestinal neuronal dysplasia is histologically characterized by hyperganglionosis, giant ganglia, and ectopic ganglion cells. In most intestinal neuronal dysplasia cases, conservative treatments such as laxatives and enema are sufficient. Some patients may require internal sphincter myectomy. Patients with the diagnosis of isolated hypoganglionosis show decreased numbers of nerve cells, decreased plexus area, as well as increased distance between ganglia in rectal biopsies, and resection of the affected segment has been the treatment of choice. The diagnosis of internal anal sphincter achalasia is based on abnormal rectal manometry findings, whereas rectal suction biopsies display the presence of ganglion cells as well as normal acetylcholinesterase activity. Internal anal sphincter achalasia is either treated by internal sphincter myectomy or botulinum toxin injection. Megacystis microcolon intestinal hypoperistalsis is a rare condition, and the most severe form of functional intestinal obstruction in the newborn. Megacystis microcolon intestinal hypoperistalsis is characterized by massive abdominal distension caused by a largely dilated nonobstructed bladder, microcolon, and decreased or absent intestinal peristalsis. Although the outcome has improved in recent years, survivors either have to be maintained on total parenteral nutrition or have undergone multivisceral transplantation. This review article summarizes the current knowledge of the aforementioned entities of variant HD.", "title": "" }, { "docid": "aa4bad972cb53de2e60fd998df08d774", "text": "170 undergraduate students completed the Boredom Proneness Scale by Farmer and Sundberg and the Multiple Affect Adjective Checklist by Zuckerman and Lubin. 
Significant negative relationships were found between boredom proneness and negative affect scores (i.e., Depression, Hostility, Anxiety). Significant positive correlations were also obtained between boredom proneness and positive affect (i.e., Positive Affect, Sensation Seeking). The correlations between boredom proneness \"subscales\" and positive and negative affect were congruent with those obtained using total boredom proneness scores. Implications for counseling are discussed.", "title": "" }, { "docid": "7a54331811a4a93df69365b6756e1d5f", "text": "With object storage services becoming increasingly accepted as replacements for traditional file or block systems, it is important to effectively measure the performance of these services. Thus people can compare different solutions or tune their systems for better performance. However, little has been reported on this specific topic as yet. To address this problem, we present COSBench (Cloud Object Storage Benchmark), a benchmark tool that we are currently working on in Intel for cloud object storage services. In addition, in this paper, we also share the results of the experiments we have performed so far.", "title": "" }, { "docid": "1aef8b7e5b4e3237b3d6703c15baa990", "text": "This paper demonstrates six-metal-layer antenna-to-receiver signal transitions on panel-scale processed ultra-thin glass-based 5G module substrates with 50-Ω transmission lines and micro-via transitions in re-distribution layers. The glass modules consist of low-loss dielectric thin-films laminated on 100-μm glass cores. Modeling, design, fabrication, and characterization of the multilayered signal interconnects were performed at 28-GHz band. The surface planarity and dimensional stability of glass substrates enabled the fabrication of highly-controlled signal traces with tolerances of 2% inside the re-distribution layers on low-loss dielectric build-up thin-films. 
The fabricated transmission lines showed 0.435 dB loss with 4.19 mm length, while microvias in low-loss dielectric thin-films showed 0.034 dB/microvia. The superiority of glass substrates enables a low-loss link budget with high precision from chip to antenna for 5G communications.", "title": "" }, { "docid": "a1623a10e06537a038ce3eaa1cfbeed7", "text": "We present a simple zero-knowledge proof of knowledge protocol of which many protocols in the literature are instantiations. These include Schnorr’s protocol for proving knowledge of a discrete logarithm, the Fiat-Shamir and Guillou-Quisquater protocols for proving knowledge of a modular root, protocols for proving knowledge of representations (like Okamoto’s protocol), protocols for proving equality of secret values, a protocol for proving the correctness of a Diffie-Hellman key, protocols for proving the multiplicative relation of three commitments (as required in secure multi-party computation), and protocols used in credential systems. This shows that a single simple treatment (and proof), at a high level of abstraction, can replace the individual previous treatments. Moreover, one can devise new instantiations of the protocol.", "title": "" }, { "docid": "db8cbcc8a7d233d404a18a54cb9fedae", "text": "Edge preserving filters preserve the edges and their information while blurring an image. In other words they are used to smooth an image, while reducing the edge blurring effects across the edge like halos, phantom etc. They are nonlinear in nature. Examples are bilateral filter, anisotropic diffusion filter, guided filter, trilateral filter etc. Hence this family of filters is very useful in reducing the noise in an image, making them in high demand in computer vision and computational photography applications like denoising, video abstraction, demosaicing, optical-flow estimation, stereo matching, tone mapping, style transfer, relighting etc. 
This paper provides a concrete introduction to edge preserving filters, starting from the early heat diffusion equation and continuing to recent approaches, an overview of their numerous applications, as well as mathematical analysis, various efficient and optimized ways of implementation and their interrelationships, keeping focus on preserving the boundaries, spikes and canyons in the presence of noise. Furthermore, it provides a realistic notion for efficient implementation with a research scope for hardware realization for further acceleration.", "title": "" }, { "docid": "a4a56e0647849c22b48e7e5dc3f3049b", "text": "The paper describes a 2D sound source mapping system for a mobile robot. We developed a multiple sound source localization method for a mobile robot with a 32 channel concentric microphone array. The system can separate multiple moving sound sources using direction localization. Directional localization and separation of different pressure sound sources is achieved using the delay and sum beam forming (DSBF) and the frequency band selection (FBS) algorithm. Sound sources were mapped by using a wheeled robot equipped with the microphone array. The robot localizes sound directions on the move and estimates sound source positions using triangulation. Assuming the movement of sound sources, the system sets a time limit and uses only the last few seconds of data. By using the random sample consensus (RANSAC) algorithm for position estimation, we achieved 2D multiple sound source mapping from time limited data with high accuracy. Also, moving sound source separation is experimentally demonstrated with segments of the DSBF enhanced signal derived from the localization process.", "title": "" }, { "docid": "1a7cfc19e7e3f9baf15e4a7450338c33", "text": "The degree to which perceptual awareness of threat stimuli and bodily states of arousal modulates neural activity associated with fear conditioning is unknown. 
We used functional magnetic resonance imaging (fMRI) to study healthy subjects and patients with peripheral autonomic denervation to examine how the expression of conditioning-related activity is modulated by stimulus awareness and autonomic arousal. In controls, enhanced amygdala activity was evident during conditioning to both \"seen\" (unmasked) and \"unseen\" (backward masked) stimuli, whereas insula activity was modulated by perceptual awareness of a threat stimulus. Absent peripheral autonomic arousal, in patients with autonomic denervation, was associated with decreased conditioning-related activity in insula and amygdala. The findings indicate that the expression of conditioning-related neural activity is modulated by both awareness and representations of bodily states of autonomic arousal.", "title": "" }, { "docid": "58d7e76a4b960e33fc7b541d04825dc9", "text": "The Internet of Things (IoT) is intended for ubiquitous connectivity among different entities or “things”. While its purpose is to provide effective and efficient solutions, security of the devices and network is a challenging issue. The number of devices connected along with the ad-hoc nature of the system further exacerbates the situation. Therefore, security and privacy have emerged as a significant challenge for the IoT. In this paper, we aim to provide a thorough survey related to the privacy and security challenges of the IoT. This document addresses these challenges from the perspective of technologies and architecture used. This work also focuses on IoT intrinsic vulnerabilities as well as the security challenges of various layers based on the security principles of data confidentiality, integrity and availability. 
This survey analyzes recent articles published on the IoT and relates them to the current security situation of the field and its projection into the future.", "title": "" }, { "docid": "86e16c911d9a381ca46225c65222177d", "text": "Steep, soil-mantled hillslopes evolve through the downslope movement of soil, driven largely by slope-dependent transport processes. Most landscape evolution models represent hillslope transport by linear diffusion, in which rates of sediment transport are proportional to slope, such that equilibrium hillslopes should have constant curvature between divides and channels. On many soil-mantled hillslopes, however, curvature appears to vary systematically, such that slopes are typically convex near the divide and become increasingly planar downslope. This suggests that linear diffusion is not an adequate model to describe the entire morphology of soil-mantled hillslopes. Here we show that the interaction between local disturbances (such as rainsplash and biogenic activity) and frictional and gravitational forces results in a diffusive transport law that depends nonlinearly on hillslope gradient. Our proposed transport law (1) approximates linear diffusion at low gradients and (2) indicates that sediment flux increases rapidly as gradient approaches a critical value. We calibrated and tested this transport law using high-resolution topographic data from the Oregon Coast Range. These data, obtained by airborne laser altimetry, allow us to characterize hillslope morphology at ~2 m scale. At five small basins in our study area, hillslope curvature approaches zero with increasing gradient, consistent with our proposed nonlinear diffusive transport law. Hillslope gradients tend to cluster near values for which sediment flux increases rapidly with slope, such that large changes in erosion rate will correspond to small changes in gradient. 
Therefore, average hillslope gradient is unlikely to be a reliable indicator of rates of tectonic forcing or base-level lowering. Where hillslope erosion is dominated by nonlinear diffusion, rates of tectonic forcing will be more reliably reflected in hillslope curvature near the divide rather than average hillslope gradient.", "title": "" } ]
scidocsrr
241d93d13aff6824ccc7b6221b6bf765
Imaging human EEG dynamics using independent component analysis
[ { "docid": "3d5fb6eff6d0d63c17ef69c8130d7a77", "text": "A new measure of event-related brain dynamics, the event-related spectral perturbation (ERSP), is introduced to study event-related dynamics of the EEG spectrum induced by, but not phase-locked to, the onset of the auditory stimuli. The ERSP reveals aspects of event-related brain dynamics not contained in the ERP average of the same response epochs. Twenty-eight subjects participated in daily auditory evoked response experiments during a 4 day study of the effects of 24 h free-field exposure to intermittent trains of 89 dB low frequency tones. During evoked response testing, the same tones were presented through headphones in random order at 5 sec intervals. No significant changes in behavioral thresholds occurred during or after free-field exposure. ERSPs induced by target pips presented in some inter-tone intervals were larger than, but shared common features with, ERSPs induced by the tones, most prominently a ridge of augmented EEG amplitude from 11 to 18 Hz, peaking 1-1.5 sec after stimulus onset. Following 3-11 h of free-field exposure, this feature was significantly smaller in tone-induced ERSPs; target-induced ERSPs were not similarly affected. These results, therefore, document systematic effects of exposure to intermittent tones on EEG brain dynamics even in the absence of changes in auditory thresholds.", "title": "" } ]
[ { "docid": "55f95c7b59f17fb210ebae97dbd96d72", "text": "Clustering is a widely studied data mining problem in the text domains. The problem finds numerous applications in customer segmentation, classification, collaborative filtering, visualization, document organization, and indexing. In this chapter, we will provide a detailed survey of the problem of text clustering. We will study the key challenges of the clustering problem, as it applies to the text domain. We will discuss the key methods used for text clustering, and their relative advantages. We will also discuss a number of recent advances in the area in the context of social network and linked data.", "title": "" }, { "docid": "c063474634eb427cf0215b4500182f8c", "text": "Factorization Machines offer good performance and useful embeddings of data. However, they are costly to scale to large amounts of data and large numbers of features. In this paper we describe DiFacto, which uses a refined Factorization Machine model with sparse memory adaptive constraints and frequency adaptive regularization. We show how to distribute DiFacto over multiple machines using the Parameter Server framework by computing distributed subgradients on minibatches asynchronously. We analyze its convergence and demonstrate its efficiency in computational advertising datasets with billions of examples and features.", "title": "" }, { "docid": "8b760eff727b1119ff73ec1ba234a675", "text": "Substrate integrated waveguide (SIW) is a new high Q, low loss, low cost, easy processing and integrating planar waveguide structure, which can be widely used in microwave and millimeter-wave integrated circuits. 
A five-element resonant slot array antenna at 35 GHz has been designed in this paper with a bandwidth of 500 MHz (S11 < -15 dB), gain of 11.5 dB and sidelobe level (SLL) of -23.5 dB (using Taylor weighting), which has a small size, low cost and is easy to integrate, etc.", "title": "" }, { "docid": "d35c44a54eaa294a60379b00dd0ce270", "text": "Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machine (SVM) and k-nearest neighbors (KNN). 
Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.", "title": "" }, { "docid": "5ebefc9d5889cb9c7e3f83a8b38c4cb4", "text": "As organizations start to use data-intensive cluster computing systems like Hadoop and Dryad for more applications, there is a growing need to share clusters between users. However, there is a conflict between fairness in scheduling and data locality (placing tasks on nodes that contain their input data). We illustrate this problem through our experience designing a fair scheduler for a 600-node Hadoop cluster at Facebook. To address the conflict between locality and fairness, we propose a simple algorithm called delay scheduling: when the job that should be scheduled next according to fairness cannot launch a local task, it waits for a small amount of time, letting other jobs launch tasks instead. We find that delay scheduling achieves nearly optimal data locality in a variety of workloads and can increase throughput by up to 2x while preserving fairness. In addition, the simplicity of delay scheduling makes it applicable under a wide variety of scheduling policies beyond fair sharing.", "title": "" }, { "docid": "f0245dca8cc1d3c418c0d915c7982484", "text": "The injection of a high-frequency signal in the stator via inverter has been shown to be a viable option to estimate the magnet temperature in permanent-magnet synchronous machines (PMSMs). The variation of the magnet resistance with temperature is reflected in the stator high-frequency resistance, which can be measured from the resulting current when a high-frequency voltage is injected. However, this method is sensitive to d- and q-axis inductance (Ld and Lq) variations, as well as to the machine speed. In addition, it is only suitable for surface PMSMs (SPMSMs) and inadequate for interior PMSMs (IPMSMs). 
In this paper, the use of a pulsating high-frequency current injection in the d-axis of the machine for temperature estimation purposes is proposed. The proposed method will be shown to be insensitive to the speed, Lq, and Ld variations. Furthermore, it can be used with both SPMSMs and IPMSMs.", "title": "" }, { "docid": "c927ca7a74732032dd7a0b8ea907640b", "text": "We propose a Bayesian optimization algorithm for objective functions that are sums or integrals of expensive-to-evaluate functions, allowing noisy evaluations. These objective functions arise in multi-task Bayesian optimization for tuning machine learning hyperparameters, optimization via simulation, and sequential design of experiments with random environmental conditions. Our method is average-case optimal by construction when a single evaluation of the integrand remains within our evaluation budget. Achieving this one-step optimality requires solving a challenging value of information optimization problem, for which we provide a novel efficient discretization-free computational method. We also provide consistency proofs for our method in both continuum and discrete finite domains for objective functions that are sums. In numerical experiments comparing against previous state-of-the-art methods, including those that also leverage sum or integral structure, our method performs as well or better across a wide range of problems and offers significant improvements when evaluations are noisy or the integrand varies smoothly in the integrated variables.", "title": "" }, { "docid": "7cbe504e03ab802389c48109ed1f1802", "text": "Despite recent breakthroughs in the applications of deep neural networks, one setting that presents a persistent challenge is that of “one-shot learning.” Traditional gradient-based networks require a lot of data to learn, often through extensive iterative training. 
When new data is encountered, the models must inefficiently relearn their parameters to adequately incorporate the new information without catastrophic interference. Architectures with augmented memory capacities, such as Neural Turing Machines (NTMs), offer the ability to quickly encode and retrieve new information, and hence can potentially obviate the downsides of conventional models. Here, we demonstrate the ability of a memory-augmented neural network to rapidly assimilate new data, and leverage this data to make accurate predictions after only a few samples. We also introduce a new method for accessing an external memory that focuses on memory content, unlike previous methods that additionally use memory location-based focusing mechanisms.", "title": "" }, { "docid": "4c82a4e51633b87f2f6b2619ca238686", "text": "Allocentric space is mapped by a widespread brain circuit of functionally specialized cell types located in interconnected subregions of the hippocampal-parahippocampal cortices. Little is known about the neural architectures required to express this variety of firing patterns. In rats, we found that one of the cell types, the grid cell, was abundant not only in medial entorhinal cortex (MEC), where it was first reported, but also in pre- and parasubiculum. The proportion of grid cells in pre- and parasubiculum was comparable to deep layers of MEC. The symmetry of the grid pattern and its relationship to the theta rhythm were weaker, especially in presubiculum. Pre- and parasubicular grid cells intermingled with head-direction cells and border cells, as in deep MEC layers. 
The characterization of a common pool of space-responsive cells in architecturally diverse subdivisions of parahippocampal cortex constrains the range of mechanisms that might give rise to their unique functional discharge phenotypes.", "title": "" }, { "docid": "0f9a33f8ef5c9c415cf47814c9ef896d", "text": "BACKGROUND\nNeuropathic pain is one of the most devastating kinds of chronic pain. Neuroinflammation has been shown to contribute to the development of neuropathic pain. We have previously demonstrated that lumbar spinal cord-infiltrating CD4+ T lymphocytes contribute to the maintenance of mechanical hypersensitivity in spinal nerve L5 transection (L5Tx), a murine model of neuropathic pain. Here, we further examined the phenotype of the CD4+ T lymphocytes involved in the maintenance of neuropathic pain-like behavior via intracellular flow cytometric analysis and explored potential interactions between infiltrating CD4+ T lymphocytes and spinal cord glial cells.\n\n\nRESULTS\nWe consistently observed significantly higher numbers of T-Bet+, IFN-γ+, TNF-α+, and GM-CSF+, but not GATA3+ or IL-4+, lumbar spinal cord-infiltrating CD4+ T lymphocytes in the L5Tx group compared to the sham group at day 7 post-L5Tx. This suggests that the infiltrating CD4+ T lymphocytes expressed a pro-inflammatory type 1 phenotype (Th1). Despite the observation of CD4+ CD40 ligand (CD154)+ T lymphocytes in the lumbar spinal cord post-L5Tx, CD154 knockout (KO) mice did not display significant changes in L5Tx-induced mechanical hypersensitivity, indicating that T lymphocyte-microglial interaction through the CD154-CD40 pathway is not necessary for L5Tx-induced hypersensitivity. 
In addition, spinal cord astrocytic activation, represented by glial fibrillary acidic protein (GFAP) expression, was significantly lower in CD4 KO mice compared to wild type (WT) mice at day 14 post-L5Tx, suggesting the involvement of astrocytes in the pronociceptive effects mediated by infiltrating CD4+ T lymphocytes.\n\n\nCONCLUSIONS\nIn all, these data indicate that the maintenance of L5Tx-induced neuropathic pain is mostly mediated by Th1 cells in a CD154-independent manner via a mechanism that could involve multiple Th1 cytokines and astrocytic activation.", "title": "" }, { "docid": "0182e6dcf7c8ec981886dfa2586a0d5d", "text": "MOTIVATION\nMetabolomics is a post genomic technology which seeks to provide a comprehensive profile of all the metabolites present in a biological sample. This complements the mRNA profiles provided by microarrays, and the protein profiles provided by proteomics. To test the power of metabolome analysis we selected the problem of discriminating between related genotypes of Arabidopsis. Specifically, the problem tackled was to discriminate between two background genotypes (Col0 and C24) and, more significantly, the offspring produced by the crossbreeding of these two lines, the progeny (whose genotypes would differ only in their maternally inherited mitochondria and chloroplasts).\n\n\nOVERVIEW\nA gas chromatography--mass spectrometry (GCMS) profiling protocol was used to identify 433 metabolites in the samples. The metabolomic profiles were compared using descriptive statistics which indicated that key primary metabolites vary more than other metabolites. We then applied neural networks to discriminate between the genotypes. This showed clearly that the two background lines can be discriminated from each other and their progeny, and indicated that the two progeny lines can also be discriminated. We applied Euclidean hierarchical and Principal Component Analysis (PCA) to help understand the basis of genotype discrimination. 
PCA indicated that malic acid and citrate are the two most important metabolites for discriminating between the background lines, and glucose and fructose are the two most important metabolites for discriminating between the crosses. These results are consistent with genotype differences in mitochondria and chloroplasts.", "title": "" }, { "docid": "9524269df0e8fbae27ee4e63d47b327b", "text": "The quantum of power that a given EHVAC transmission line can safely carry depends on various limits. These limits can be categorized into two types viz. thermal and stability/SIL limits. In case of long lines the capacity is limited by its SIL level only which is much below its thermal capacity due to large inductance. Decrease in line inductance and surge impedance shall increase the SIL and transmission capacity. This paper presents a mathematical model of increasing the SIL level towards the thermal limit. Sensitivity of SIL on various configurations of sub-conductors in a bundle, bundle spacing, tower structure, spacing of phase conductors etc. is analyzed and presented. Various issues that need attention for application of high surge impedance loading (HSIL) lines are also deliberated.", "title": "" }, { "docid": "d6aba23081e11b61d146276e77b3d3cd", "text": "This paper presents a quantitative performance analysis of a conventional passive cell balancing method and a proposed active cell balancing method for automotive batteries. The proposed active cell balancing method was designed to perform continuous cell balancing during charge and discharge with high balancing current. An experimentally validated model was used to simulate the balancing process of both balancing circuits for a high capacity battery module. The results suggest that the proposed method can reduce the power loss and extend the discharge time of a battery module. 
Hence, a higher energy output can be yielded.", "title": "" }, { "docid": "69519dd7e60899acd8b81c141321b052", "text": "In this paper we address the question of how closely everyday human teachers match a theoretically optimal teacher. We present two experiments in which subjects teach a concept to our robot in a supervised fashion. In the first experiment we give subjects no instructions on teaching and observe how they teach naturally as compared to an optimal strategy. We find that people are suboptimal in several dimensions. In the second experiment we try to elicit the optimal teaching strategy. People can teach much faster using the optimal teaching strategy, however certain parts of the strategy are more intuitive than others.", "title": "" }, { "docid": "1e6c2319e7c9e51cd4e31107d56bce91", "text": "Marketing has been criticised from all spheres today since the real worth of all the marketing efforts can hardly be precisely determined. Today consumers are better informed and also misinformed at times due to the bombardment of various pieces of information through a new type of interactive media, i.e., social media (SM). In SM, communication is through dialogue channels wherein consumers pay more attention to SM buzz rather than promotions of marketers. The various forms of SM create a complex set of online social networks (OSN), through which word-of-mouth (WOM) propagates and influence consumer decisions. With the growth of OSN and user generated contents (UGC), WOM metamorphoses to electronic word-of-mouth (eWOM), which spreads in astronomical proportions. Previous works study the effect of external and internal influences in affecting consumer behaviour. However, today the need is to resort to multidisciplinary approaches to find out how SM influence consumers with eWOM and online reviews. This paper reviews the emerging trend of how multiple disciplines viz. Statistics, Data Mining techniques, Network Analysis, etc. 
are being integrated by marketers today to analyse eWOM and derive actionable intelligence.", "title": "" }, { "docid": "3c315e5cbf13ffca10f4199d094d2f34", "text": "Object tracking under complex circumstances is a challenging task because of background interference, obstacle occlusion, object deformation, etc. Given such conditions, robustly detecting, locating, and analyzing a target through single-feature representation are difficult tasks. Global features, such as color, are widely used in tracking, but may cause the object to drift under complex circumstances. Local features, such as HOG and SIFT, can precisely represent rigid targets, but these features lack the robustness of an object in motion. An effective method is adaptive fusion of multiple features in representing targets. The process of adaptively fusing different features is the key to robust object tracking. This study uses a multi-feature joint descriptor (MFJD) and the distance between joint histograms to measure the similarity between a target and its candidate patches. Color and HOG features are fused as the tracked object of the joint representation. This study also proposes a self-adaptive multi-feature fusion strategy that can adaptively adjust the joint weight of the fused features based on their stability and contrast measure scores. The mean shift process is adopted as the object tracking framework with multi-feature representation. The experimental results demonstrate that the proposed MFJD tracking method effectively handles background clutter, partial occlusion by obstacles, scale changes, and deformations. The novel method performs better than several state-of-the-art methods in real surveillance scenarios.", "title": "" }, { "docid": "0b33249df17737a826dcaa197adccb74", "text": "In the competitive electricity structure, demand response programs enable customers to react dynamically to changes in electricity prices. 
The implementation of such programs may reduce energy costs and increase reliability. To fully harness such benefits, existing load controllers and appliances need around-the-clock price information. Advances in the development and deployment of advanced meter infrastructures (AMIs), building automation systems (BASs), and various dedicated embedded control systems provide the capability to effectively address this requirement. In this paper we introduce a meter gateway architecture (MGA) to serve as a foundation for integrated control of loads by energy aggregators, facility hubs, and intelligent appliances. We discuss the requirements that motivate the architecture, describe its design, and illustrate its application to a small system with an intelligent appliance and a legacy appliance using a prototype implementation of an intelligent hub for the MGA and ZigBee wireless communications.", "title": "" }, { "docid": "573f12acd3193045104c7d95bbc89f78", "text": "Automatic Face Recognition is one of the most emphasized dilemmas in diverse fields of potential relevance, like in different surveillance systems, security systems, authentication or verification of individuals like criminals etc. Adjoining of dynamic expression in face causes a broad range of discrepancies in recognition systems. Facial Expression not only exposes the sensation or passion of any person but can also be used to judge his/her mental views and psychosomatic aspects. This paper is based on a complete survey of face recognition conducted under varying facial expressions. In order to analyze different techniques, motion-based, model-based and muscles-based approaches have been used in order to handle the facial expression and recognition catastrophe. The analysis has been completed by evaluating various existing algorithms while comparing their results in general. 
It also expands the scope for other researchers to answer the question of how to deal effectively with such problems.", "title": "" }, { "docid": "a47d001dc8305885e42a44171c9a94b2", "text": "Community detection in complex networks has become a vital step to understand the structure and dynamics of networks in various fields. However, traditional node clustering and relatively newly proposed link clustering methods have inherent drawbacks in discovering overlapping communities. Node clustering is inadequate to capture the pervasive overlaps, while link clustering is often criticized due to the high computational cost and ambiguous definition of communities. So, overlapping community detection is still a formidable challenge. In this work, we propose a new overlapping community detection algorithm based on network decomposition, called NDOCD. Specifically, NDOCD iteratively splits the network by removing all links in derived link communities, which are identified by utilizing a node clustering technique. The network decomposition contributes to reducing the computation time and noise link elimination conduces to improving the quality of obtained communities. Besides, we employ a node clustering technique rather than a link similarity measure to discover link communities, thus NDOCD avoids an ambiguous definition of community and becomes less time-consuming. We test our approach on both synthetic and real-world networks. Results demonstrate the superior performance of our approach both in computation time and accuracy compared to state-of-the-art algorithms.", "title": "" }, { "docid": "5687ab1eadd481b6008835817a5dbe0b", "text": "Due to the importance of PM synchronous machine in many categories like in industrial, mechatronics, automotive, energy storage flywheel, centrifugal compressor, vacuum pump and robotic applications, moreover in smart power grid applications, this paper is presented. 
It reviews research on improving the performance of permanent magnet synchronous machines. This is done by drawing on many researchers' papers as samples covering aspects such as modelling, control, optimization and design, to present a satisfactory literature review.", "title": "" } ]
scidocsrr
b47ad52c6259a7678a2215e570b97c72
Stability of cyberbullying victimization among adolescents: Prevalence and association with bully-victim status and psychosocial adjustment
[ { "docid": "7875910ad044232b4631ecacfec65656", "text": "In this study, a questionnaire (Cyberbullying Questionnaire, CBQ) was developed to assess the prevalence of numerous modalities of cyberbullying (CB) in adolescents. The association of CB with the use of other forms of violence, exposure to violence, acceptance and rejection by peers was also examined. In the study, participants were 1431 adolescents, aged between 12 and 17 years (726 girls and 682 boys). The adolescents responded to the CBQ, measures of reactive and proactive aggression, exposure to violence, justification of the use of violence, and perceived social support of peers. Sociometric measures were also used to assess the use of direct and relational aggression and the degree of acceptance and rejection by peers. The results revealed excellent psychometric properties for the CBQ. Of the adolescents, 44.1% responded affirmatively to at least one act of CB. Boys used CB to a greater extent than girls. Lastly, CB was significantly associated with the use of proactive aggression, justification of violence, exposure to violence, and less perceived social support of friends. 2010 Elsevier Ltd. All rights reserved.", "title": "" } ]
[ { "docid": "31ec7ef4e68950919054b59942d4dbfa", "text": "A promising approach to learn to play board games is to use reinforcement learning algorithms that can learn a game position evaluation function. In this paper we examine and compare three different methods for generating training games: (1) Learning by self-play, (2) Learning by playing against an expert program, and (3) Learning from viewing experts play against themselves. Although the third possibility generates high-quality games from the start compared to initial random games generated by self-play, the drawback is that the learning program is never allowed to test moves which it prefers. We compared these three methods using temporal difference methods to learn the game of backgammon. For particular games such as draughts and chess, learning from a large database containing games played by human experts has the large advantage that during the generation of (useful) training games, no expensive lookahead planning is necessary for move selection. Experimental results in this paper show how useful this method is for learning to play chess and draughts.", "title": "" }, { "docid": "c9f48010cdf39b4d024818f1bbb21307", "text": "This paper proposes to use probabilistic model checking to synthesize optimal robot policies in multi-tasking autonomous systems that are subject to human-robot interaction. Given the convincing empirical evidence that human behavior can be related to reinforcement models, we take as input a well-studied Q-table model of the human behavior for flexible scenarios. We first describe an automated procedure to distill a Markov decision process (MDP) for the human in an arbitrary but fixed scenario. The distinctive issue is that – in contrast to existing models – under-specification of the human behavior is included. Probabilistic model checking is used to predict the human’s behavior. Finally, the MDP model is extended with a robot model. 
Optimal robot policies are synthesized by analyzing the resulting two-player stochastic game. Experimental results with a prototypical implementation using PRISM show promising results.", "title": "" }, { "docid": "c5b2f22f1cc160b19fa689120c35c693", "text": "Commodity depth cameras have created many interesting new applications in the research community recently. These applications often require the calibration information between the color and the depth cameras. Traditional checkerboard based calibration schemes fail to work well for the depth camera, since its corner features cannot be reliably detected in the depth image. In this paper, we present a maximum likelihood solution for the joint depth and color calibration based on two principles. First, in the depth image, points on the checker-board shall be co-planar, and the plane is known from color camera calibration. Second, additional point correspondences between the depth and color images may be manually specified or automatically established to help improve calibration accuracy. Uncertainty in depth values has been taken into account systematically. 
The proposed algorithm is reliable and accurate, as demonstrated by extensive experimental results on simulated and real-world examples.", "title": "" }, { "docid": "3f8f835605b34d27802f6f2f0a363ae2", "text": "*Correspondence: Enrico Di Minin, Finnish Centre of Excellence in Metapopulation Biology, Department of Biosciences, Biocenter 3, University of Helsinki, PO Box 65 (Viikinkaari 1), 00014 Helsinki, Finland; School of Life Sciences, Westville Campus, University of KwaZulu-Natal, PO Box 54001 (University Road), Durban 4000, South Africa [email protected]; Tuuli Toivonen, Finnish Centre of Excellence in Metapopulation Biology, Department of Biosciences, Biocenter 3, University of Helsinki, PO Box 65 (Viikinkaari 1), 00014 Helsinki, Finland; Department of Geosciences and Geography, University of Helsinki, PO Box 64 (Gustaf Hällströminkatu 2a), 00014 Helsinki, Finland [email protected] These authors have contributed equally to this work.", "title": "" }, { "docid": "6087e066b04b9c3ac874f3c58979f89a", "text": "What does it mean for a machine learning model to be ‘fair’, in terms which can be operationalised? Should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we aim instead to minimise the harms to the least advantaged? Can the relevant ideal be determined by reference to some alternative state of affairs in which a particular social pattern of discrimination does not exist? Various definitions proposed in recent literature make different assumptions about what terms like discrimination and fairness mean and how they can be defined in mathematical terms. Questions of discrimination, egalitarianism and justice are of significant interest to moral and political philosophers, who have expended significant efforts in formalising and defending these central concepts. It is therefore unsurprising that attempts to formalise ‘fairness’ in machine learning contain echoes of these old philosophical debates. 
This paper draws on existing work in moral and political philosophy in order to elucidate emerging debates about fair machine learning.", "title": "" }, { "docid": "cb4518f95b82e553b698ae136362bd59", "text": "Optimal control theory is a mature mathematical discipline with numerous applications in both science and engineering. It is emerging as the computational framework of choice for studying the neural control of movement, in much the same way that probabilistic inference is emerging as the computational framework of choice for studying sensory information processing. Despite the growing popularity of optimal control models, however, the elaborate mathematical machinery behind them is rarely exposed and the big picture is hard to grasp without reading a few technical books on the subject. While this chapter cannot replace such books, it aims to provide a self-contained mathematical introduction to optimal control theory that is su¢ ciently broad and yet su¢ ciently detailed when it comes to key concepts. The text is not tailored to the …eld of motor control (apart from the last section, and the overall emphasis on systems with continuous state) so it will hopefully be of interest to a wider audience. Of special interest in the context of this book is the material on the duality of optimal control and probabilistic inference; such duality suggests that neural information processing in sensory and motor areas may be more similar than currently thought. The chapter is organized in the following sections:", "title": "" }, { "docid": "85016bc639027363932f9adf7012d7a7", "text": "The output voltage ripple is one of the most significant system parameters in switch-mode power supplies. This ripple degrades the performance of application specific integrated circuits (ASICs). The most common way to reduce it is to use additional integrated low drop-out regulators (LDO) on the ASIC. 
This technique usually degrades the high system efficiency required for portable electronic systems. It also increases the design challenges of on-chip power management circuits and the area required for the LDOs. This work presents a low-power fully integrated 0.97mm2 DC-DC Buck converter with a tuned series LDO with 1mV voltage ripple in a 0.25μm BiCMOS process. The converter provides a power supply rejection ratio of more than 60 dB from 1 to 6MHz and a load current range of 0...400 mA. A peak efficiency of 93.7% has been measured. For high light load efficiency, automatic mode operation is implemented. To decrease the form factor and costs, the external components count has been reduced to a single inductor of 1 μH and two external capacitors of 2 μF each.", "title": "" }, { "docid": "1014a33211c9ca3448fa02cf734a5775", "text": "We propose a general method called truncated gradient to induce sparsity in the weights of online learning algorithms with convex loss functions. This method has several essential properties: 1. The degree of sparsity is continuous: a parameter controls the rate of sparsification from no sparsification to total sparsification. 2. The approach is theoretically motivated, and an instance of it can be regarded as an online counterpart of the popular L1-regularization method in the batch setting. We prove that small rates of sparsification result in only small additional regret with respect to typical online learning guarantees. 3. The approach works well empirically. We apply the approach to several datasets and find that for datasets with large numbers of features, substantial sparsity is discoverable.", "title": "" }, { "docid": "98d23862436d8ff4d033cfd48692c84d", "text": "Memory corruption vulnerabilities are the root cause of many modern attacks. Existing defense mechanisms are inadequate; in general, the software-based approaches are not efficient and the hardware-based approaches are not flexible. 
In this paper, we present hardware-assisted data-flow isolation, or HDFI, a new fine-grained data isolation mechanism that is broadly applicable and very efficient. HDFI enforces isolation at the machine word granularity by virtually extending each memory unit with an additional tag that is defined by dataflow. This capability allows HDFI to enforce a variety of security models such as the Biba Integrity Model and the Bell--LaPadula Model. We implemented HDFI by extending the RISC-V instruction set architecture (ISA) and instantiating it on the Xilinx Zynq ZC706 evaluation board. We ran several benchmarks including the SPEC CINT 2000 benchmark suite. Evaluation results show that the performance overhead caused by our modification to the hardware is low (< 2%). We also developed or ported several security mechanisms to leverage HDFI, including stack protection, standard library enhancement, virtual function table protection, code pointer protection, kernel data protection, and information leak prevention. Our results show that HDFI is easy to use, imposes low performance overhead, and allows us to create more elegant and more secure solutions.", "title": "" }, { "docid": "6384a691d3b50e252ab76a61e28f012e", "text": "We study the algorithmics of information structure design --- a.k.a. persuasion or signaling --- in a fundamental special case introduced by Arieli and Babichenko: multiple agents, binary actions, and no inter-agent externalities. Unlike prior work on this model, we allow many states of nature. We assume that the principal's objective is a monotone set function, and study the problem both in the public signal and private signal models, drawing a sharp contrast between the two in terms of both efficacy and computational complexity.\n When private signals are allowed, our results are largely positive and quite general. 
First, we use linear programming duality and the equivalence of separation and optimization to show polynomial-time equivalence between (exactly) optimal signaling and the problem of maximizing the objective function plus an additive function. This yields an efficient implementation of the optimal scheme when the objective is supermodular or anonymous. Second, we exhibit a (1-1/e)-approximation of the optimal private signaling scheme, modulo an additive loss of ε, when the objective function is submodular. These two results simplify, unify, and generalize results of [Arieli and Babichenko, 2016] and [Babichenko and Barman, 2016], extending them from a binary state of nature to many states (modulo the additive loss in the latter result). Third, we consider the binary-state case with a submodular objective, and simplify and slightly strengthen the result of [Babichenko and Barman, 2016] to obtain a (1-1/e)-approximation via a scheme which (i) signals independently to each receiver and (ii) is \"oblivious\" in that it does not depend on the objective function so long as it is monotone submodular.\n When only a public signal is allowed, our results are negative. First, we show that it is NP-hard to approximate the optimal public scheme, within any constant factor, even when the objective is additive. Second, we show that the optimal private scheme can outperform the optimal public scheme, in terms of maximizing the sender's objective, by a polynomial factor.", "title": "" }, { "docid": "5b021c0223ee25535508eb1d6f63ff55", "text": "A 32-KB standard CMOS antifuse one-time programmable (OTP) ROM embedded in a 16-bit microcontroller as its program memory is designed and implemented in 0.18-mum standard CMOS technology. The proposed 32-KB OTP ROM cell array consists of 4.2 mum2 three-transistor (3T) OTP cells where each cell utilizes a thin gate-oxide antifuse, a high-voltage blocking transistor, and an access transistor, which are all compatible with standard CMOS process. 
For high-density implementation, the size of the 3T cell has been reduced by 80% in comparison to previous work. The fabricated total chip size, including the 32-KB OTP ROM, which can be programmed via an external I2C master device such as a universal I2C serial EEPROM programmer, a 16-bit microcontroller with 16-KB program SRAM and 8-KB data SRAM, peripheral circuits to interface other system building blocks, and bonding pads, is 9.9 mm2. This paper describes the cell, design, and implementation of a high-density CMOS OTP ROM, and shows its promising possibilities in embedded applications.", "title": "" }, { "docid": "ac6430e097fb5a7dc1f7864f283dcf47", "text": "In the task of Object Recognition, there exists a dichotomy between the categorization of objects and estimating object pose, where the former necessitates a view-invariant representation, while the latter requires a representation capable of capturing pose information over different categories of objects. With the rise of deep architectures, the prime focus has been on object category recognition. Deep learning methods have achieved wide success in this task. In contrast, object pose regression using these approaches has received relatively much less attention. In this paper we show how deep architectures, specifically Convolutional Neural Networks (CNN), can be adapted to the task of simultaneous categorization and pose estimation of objects. We investigate and analyze the layers of various CNN models and extensively compare between them with the goal of discovering how the layers of distributed representations of CNNs represent object pose information and how this contradicts object category representations. We extensively experiment on two recent large and challenging multi-view datasets. 
Our models achieve better than state-of-the-art performance on both datasets.", "title": "" }, { "docid": "a4f0b524f79db389c72abd27d36f8944", "text": "In order to summarize the status of rescue robotics, this chapter will cover the basic characteristics of disasters and their impact on robotic design, describe the robots actually used in disasters to date, promising robot designs (e.g., snakes, legged locomotion) and concepts (e.g., robot teams or swarms, sensor networks), methods of evaluation in benchmarks for rescue robotics, and conclude with a discussion of the fundamental problems and open issues facing rescue robotics, and their evolution from an interesting idea to widespread adoption. The Chapter will concentrate on the rescue phase, not recovery, with the understanding that capabilities for rescue can be applied to, and extended for, the recovery phase. The use of robots in the prevention and preparedness phases of disaster management are outside the scope of this chapter.", "title": "" }, { "docid": "5a9113dc952bb51faf40d242e91db09c", "text": "This study highlights the changes in lycopene and β-carotene retention in tomato juice subjected to combined pressure-temperature (P-T) treatments ((high-pressure processing (HPP; 500-700 MPa, 30 °C), pressure-assisted thermal processing (PATP; 500-700 MPa, 100 °C), and thermal processing (TP; 0.1 MPa, 100 °C)) for up to 10 min. Processing treatments utilized raw (untreated) and hot break (∼93 °C, 60 s) tomato juice as controls. Changes in bioaccessibility of these carotenoids as a result of processing were also studied. Microscopy was applied to better understand processing-induced microscopic changes. TP did not alter the lycopene content of the tomato juice. HPP and PATP treatments resulted in up to 12% increases in lycopene extractability. all-trans-β-Carotene showed significant degradation (p < 0.05) as a function of pressure, temperature, and time. 
Its retention in processed samples varied between 60 and 95% of levels originally present in the control. Regardless of the processing conditions used, <0.5% lycopene appeared in the form of micelles (<0.5% bioaccessibility). Electron microscopy images showed more prominent lycopene crystals in HPP and PATP processed juice than in thermally processed juice. However, lycopene crystals did appear to be enveloped regardless of the processing conditions used. The processed juice (HPP, PATP, TP) showed significantly higher (p < 0.05) all-trans-β-carotene micellarization as compared to the raw unprocessed juice (control). Interestingly, hot break juice subjected to combined P-T treatments showed 15-30% more all-trans-β-carotene micellarization than the raw juice subjected to combined P-T treatments. This study demonstrates that combined pressure-heat treatments increase lycopene extractability. However, the in vitro bioaccessibility of carotenoids was not significantly different among the treatments (TP, PATP, HPP) investigated.", "title": "" }, { "docid": "47afea1e95f86bb44a1cf11e020828fc", "text": "Document clustering is generally the first step for topic identification. Since many clustering methods operate on the similarities between documents, it is important to build representations of these documents which keep their semantics as much as possible and are also suitable for efficient similarity calculation. As we describe in Koopman et al. (Proceedings of ISSI 2015 Istanbul: 15th International Society of Scientometrics and Informetrics Conference, Istanbul, Turkey, 29 June to 3 July, 2015. Bogaziçi University Printhouse. http://www.issi2015.org/files/downloads/all-papers/1042.pdf , 2015), the metadata of articles in the Astro dataset contribute to a semantic matrix, which uses a vector space to capture the semantics of entities derived from these articles and consequently supports the contextual exploration of these entities in LittleAriadne. 
However, this semantic matrix does not allow calculating similarities between articles directly. In this paper, we will describe in detail how we build a semantic representation for an article from the entities that are associated with it. Based on such semantic representations of articles, we apply two standard clustering methods, K-Means and the Louvain community detection algorithm, which leads to our two clustering solutions labelled as OCLC-31 (standing for K-Means) and OCLC-Louvain (standing for Louvain). In this paper, we will give the implementation details and a basic comparison with other clustering solutions that are reported in this special issue.", "title": "" }, { "docid": "45a45087a6829486d46eda0adcff978f", "text": "Container technology has the potential to considerably simplify the management of the software stack of High Performance Computing (HPC) clusters. However, poor integration with established HPC technologies is still preventing users and administrators from reaping the benefits of containers. Message Passing Interface (MPI) is a pervasive technology used to run scientific software, often written in Fortran and C/C++, that presents challenges for effective integration with containers. This work shows how an existing MPI implementation can be extended to improve this integration.", "title": "" }, { "docid": "e5ce1ddd50a728fab41043324938a554", "text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). 
Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.", "title": "" }, { "docid": "f10294ed332670587cf9c100f2d75428", "text": "In ancient times, people exchanged their goods and services to obtain what they needed (such as clothes and tools) from other people. This system of bartering compensated for the lack of currency. People offered goods/services and received in kind other goods/services. Now, despite the existence of multiple currencies and the progress of humanity from the Stone Age to the Byte Age, people still barter but in a different way. Mainly, people use money to pay for the goods they purchase and the services they obtain.", "title": "" }, { "docid": "bf3450649fdf5d5bb4ee89fbaf7ec0ff", "text": "In this study, we propose a research model to assess the effect of a mobile health (mHealth) app on exercise motivation and physical activity of individuals based on the design and self-determination theory. The research model is formulated from the perspective of motivation affordance and gamification. We will discuss how the use of specific gamified features of the mHealth app can trigger/afford corresponding users’ exercise motivations, which further enhance users’ participation in physical activity. We propose two hypotheses to test the research model using a field experiment. We adopt a 3-phase longitudinal approach to collect data in three different time zones, in consistence with approach commonly adopted in psychology and physical activity research, so as to reduce the common method bias in testing the two hypotheses.", "title": "" } ]
scidocsrr
939b1c9c5b746e18175e27596c62d788
A Pinch of Humor for Short-Text Conversation: An Information Retrieval Approach
[ { "docid": "3ea104489fb5ac5b3e671659f8498530", "text": "In this paper, we present our work of humor recognition on Twitter, which will facilitate affect and sentimental analysis in the social network. The central question of what makes a tweet (Twitter post) humorous drives us to design humor-related features, which are derived from influential humor theories, linguistic norms, and affective dimensions. Using machine learning techniques, we are able to recognize humorous tweets with high accuracy and F-measure. More importantly, we single out features that contribute to distinguishing non-humorous tweets from humorous tweets, and humorous tweets from other short humorous texts (non-tweets). This proves that humorous tweets possess discernible characteristics that are neither found in plain tweets nor in humorous non-tweets. We believe our novel findings will inform and inspire the burgeoning field of computational humor research in the social media.", "title": "" }, { "docid": "7577dac903003b812c63ea20d09183c8", "text": "Humor is one of the most interesting and puzzling aspects of human behavior. Despite the attention it has received in fields such as philosophy, linguistics, and psychology, there have been only few attempts to create computational models for humor recognition or generation. In this article, we bring empirical evidence that computational approaches can be successfully applied to the task of humor recognition. Through experiments performed on very large data sets, we show that automatic classification techniques can be effectively used to distinguish between humorous and non-humorous texts, with significant improvements observed over a priori known baselines.", "title": "" } ]
[ { "docid": "1d9f683409c3d6f19b9b6738a1a76c4a", "text": "The empirical fact that classifiers, trained on given data collections, perform poorly when tested on data acquired in different settings is theoretically explained in domain adaptation through a shift among distributions of the source and target domains. Alleviating the domain shift problem, especially in the challenging setting where no labeled data are available for the target domain, is paramount for having visual recognition systems working in the wild. As the problem stems from a shift among distributions, intuitively one should try to align them. In the literature, this has resulted in a stream of works attempting to align the feature representations learned from the source and target domains. Here we take a different route. Rather than introducing regularization terms aiming to promote the alignment of the two representations, we act at the distribution level through the introduction of DomaIn Alignment Layers (DIAL), able to match the observed source and target data distributions to a reference one. Thorough experiments on three different public benchmarks we confirm the power of our approach. ∗This work was partially supported by the ERC grant 637076 RoboExNovo (B.C.), and the CHIST-ERA project ALOOF (B.C, F. M. C.).", "title": "" }, { "docid": "8f174607776cd7dc8c69739183121fcc", "text": "We empirically evaluate several state-of-the-art methods for constructing ensembles of heterogeneous classifiers with stacking and show that they perform (at best) comparably to selecting the best classifier from the ensemble by cross validation. Among state-of-the-art stacking methods, stacking with probability distributions and multi-response linear regression performs best. We propose two extensions of this method, one using an extended set of meta-level features and the other using multi-response model trees to learn at the meta-level. 
We show that the latter extension performs better than existing stacking approaches and better than selecting the best classifier by cross validation.", "title": "" }, { "docid": "51066d24144efe6456f8169f8e60a561", "text": "Face biometric systems are vulnerable to spoofing attacks. Such attacks can be performed in many ways, including presenting a falsified image, video or 3D mask of a valid user. A widely used approach for differentiating genuine faces from fake ones has been to capture their inherent differences in (2D or 3D) texture using local descriptors. One limitation of these methods is that they may fail if an unseen attack type, e.g. a highly realistic 3D mask which resembles real skin texture, is used in spoofing. Here we propose a robust anti-spoofing method by detecting pulse from face videos. Based on the fact that a pulse signal exists in a real living face but not in any mask or print material, the method could be a generalized solution for face liveness detection. The proposed method is evaluated first on a 3D mask spoofing database 3DMAD to demonstrate its effectiveness in detecting 3D mask attacks. More importantly, our cross-database experiment with high quality REAL-F masks shows that the pulse based method is able to detect even the previously unseen mask type whereas texture based methods fail to generalize beyond the development data. Finally, we propose a robust cascade system combining two complementary attack-specific spoof detectors, i.e. utilize pulse detection against print attacks and color texture analysis against video attacks.", "title": "" }, { "docid": "85edcb9c02a0153c94ae62852188a830", "text": "Calcaneonavicular coalition is a congenital anomaly characterized by a connection between the calcaneus and the navicular. Surgery is required in case of chronic pain and after failure of conservative treatment. 
The authors present here the surgical technique and results of a two-portal endoscopic resection of a calcaneonavicular synostosis. Both visualization and working portals must be identified with accuracy around the tarsal coalition under fluoroscopic control and according to the localization of the superficial peroneal nerve, to avoid neurologic damage during the resection. The endoscopic procedure provides a better visualization of the whole resection area, making it possible to achieve a complete resection and avoid a residual plantar bone bar. The other important advantage of the endoscopic technique is the possibility to assess and treat, in the same procedure, associated pathologies such as degenerative changes in the lateral side of the talar head with debridement and resection.", "title": "" }, { "docid": "53d5bfb8654783bae8a09de651b63dd7", "text": "This paper introduces a new image thresholding method based on minimizing the measures of fuzziness of an input image. The membership function in the thresholding method is used to denote the characteristic relationship between a pixel and its belonging region (the object or the background). In addition, based on the measure of fuzziness, a fuzzy range is defined to find the adequate threshold value within this range. The principle of the method is easy to understand and it can be directly extended to multilevel thresholding. The effectiveness of the new method is illustrated by using test images having various types of histograms. The experimental results indicate that the proposed method has demonstrated good performance in bilevel and trilevel thresholding. Keywords: Image thresholding; Measure of fuzziness; Fuzzy membership function. I. INTRODUCTION Image thresholding, which extracts the object from the background in an input image, is one of the most common applications in image analysis. 
For example, in automatic recognition of machine printed or handwritten texts, in shape recognition of objects, and in image enhancement, thresholding is a necessary step for image preprocessing. Among the image thresholding methods, bilevel thresholding separates the pixels of an image into two regions (i.e. the object and the background); one region contains pixels with gray values smaller than the threshold value and the other contains pixels with gray values larger than the threshold value. Further, if the pixels of an image are divided into more than two regions, this is called multilevel thresholding. In general, the threshold is located at the obvious and deep valley of the histogram. However, when the valley is not so obvious, it is very difficult to determine the threshold. During the past decade, many research studies have been devoted to the problem of selecting the appropriate threshold value. The survey of these papers can be seen in the literature.(1-3) Fuzzy set theory has been applied to image thresholding to partition the image space into meaningful regions by minimizing the measure of fuzziness of the image. The measurement can be expressed by terms such as entropy,(4) index of fuzziness,(5) and index of nonfuzziness.(6) The \"entropy\" involves using Shannon's function to measure the fuzziness of an image so that the threshold can be determined by minimizing the entropy measure. It is very different from the classical entropy measure which measures probabilistic information. 
In addition, Pal and Rosenfeld(7) developed an algorithm based on minimizing the compactness of fuzziness to obtain the fuzzy and nonfuzzy versions of an ill-defined image such that the appropriate nonfuzzy threshold can be chosen. They used some fuzzy geometric properties, i.e. the area and the perimeter of a fuzzy image, to obtain the measure of compactness. The effectiveness of the method has been illustrated by using two input images with bimodal and unimodal histograms. Another measurement, called the index of area coverage (IOAC),(8) has been applied to select the threshold by finding the local minima of the IOAC. Since both the measure of compactness and the IOAC involve the spatial information of an image, they need a long time to compute the perimeter of the fuzzy plane. In this paper, based on the concept of fuzzy sets, an effective thresholding method is proposed. Given a certain threshold value, the membership function of a pixel is defined by the absolute difference between the gray level and the average gray level of its belonging region (i.e. the object or the background). The larger the absolute difference is, the smaller the membership value becomes. It is expected that the membership value of each pixel in the input image is as large as possible. In addition, two measures of fuzziness are proposed to indicate the fuzziness of an image. The optimal threshold can then be effectively determined by minimizing the measure of fuzziness of an image. The performance of the proposed approach is compared", "title": "" }, { "docid": "8fd43b39e748d47c02b66ee0d8eecc65", "text": "One standing problem in the area of web-based e-learning is how to support instructional designers in effectively and efficiently retrieving learning materials appropriate for their educational purposes. 
Learning materials can be retrieved from structured repositories, such as repositories of Learning Objects and Massive Open Online Courses; they could also come from unstructured sources, such as web hypertext pages. Platforms for distance education often implement algorithms for recommending specific educational resources and personalized learning paths to students. But choosing and sequencing the adequate learning materials to build adaptive courses may prove to be quite a challenging task. In particular, establishing the prerequisite relationships among learning objects, in terms of prior requirements needed to understand and complete before making use of the subsequent contents, is a crucial step for faculty, instructional designers or automated systems whose goal is to adapt existing learning objects for delivery in new distance courses. Nevertheless, this information is often missing. In this paper, an innovative machine learning-based approach for the identification of prerequisites between text-based resources is proposed. A feature selection methodology allows us to consider the attributes that are most relevant to the predictive modeling problem. These features are extracted from both the input material and weak taxonomies available on the web. Input data undergoes natural language processing, which makes it easier for the automated analysis to find patterns of interest. Finally, the prerequisite identification is cast as a binary statistical classification task. The accuracy of the approach is validated by means of experimental evaluations on real online courses covering different subjects.", "title": "" }, { "docid": "ec1120018899c6c9fe16240b8e35efac", "text": "Redundant collagen deposition at sites of healing dermal wounds results in hypertrophic scars. Adipose-derived stem cells (ADSCs) exhibit promise in a variety of anti-fibrosis applications by attenuating collagen deposition. 
The objective of this study was to explore the influence of an intralesional injection of ADSCs on hypertrophic scar formation by using an established rabbit ear model. Twelve New Zealand albino rabbits were equally divided into three groups, and six identical punch defects were made on each ear. On postoperative day 14 when all wounds were completely re-epithelialized, the first group received an intralesional injection of ADSCs on their right ears and Dulbecco’s modified Eagle’s medium (DMEM) on their left ears as an internal control. Rabbits in the second group were injected with conditioned medium of the ADSCs (ADSCs-CM) on their right ears and DMEM on their left ears as an internal control. Right ears of the third group remained untreated, and left ears received DMEM. We quantified scar hypertrophy by measuring the scar elevation index (SEI) on postoperative days 14, 21, 28, and 35 with ultrasonography. Wounds were harvested 35 days later for histomorphometric and gene expression analysis. Intralesional injections of ADSCs or ADSCs-CM both led to scars with a far more normal appearance and significantly decreased SEI (44.04 % and 32.48 %, respectively, both P <0.01) in the rabbit ears compared with their internal controls. Furthermore, we confirmed that collagen was organized more regularly and that there was a decreased expression of alpha-smooth muscle actin (α-SMA) and collagen type Ι in the ADSC- and ADSCs-CM-injected scars according to histomorphometric and real-time quantitative polymerase chain reaction analysis. There was no difference between DMEM-injected and untreated scars. 
An intralesional injection of ADSCs reduces the formation of rabbit ear hypertrophic scars by decreasing the α-SMA and collagen type Ι gene expression and ameliorating collagen deposition and this may result in an effective and innovative anti-scarring therapy.", "title": "" }, { "docid": "0250d6bb0bcf11ca8af6c2661c1f7f57", "text": "Chemoreception is a biological process essential for the survival of animals, as it allows the recognition of important volatile cues for the detection of food, egg-laying substrates, mates, or predators, among other purposes. Furthermore, its role in pheromone detection may contribute to evolutionary processes, such as reproductive isolation and speciation. This key role in several vital biological processes makes chemoreception a particularly interesting system for studying the role of natural selection in molecular adaptation. Two major gene families are involved in the perireceptor events of the chemosensory system: the odorant-binding protein (OBP) and chemosensory protein (CSP) families. Here, we have conducted an exhaustive comparative genomic analysis of these gene families in 20 Arthropoda species. We show that the evolution of the OBP and CSP gene families is highly dynamic, with a high number of gains and losses of genes, pseudogenes, and independent origins of subfamilies. Taken together, our data clearly support the birth-and-death model for the evolution of these gene families with an overall high gene turnover rate. Moreover, we show that the genome organization of the two families is significantly more clustered than expected by chance and, more important, that this pattern appears to be actively maintained across the Drosophila phylogeny. 
Finally, we suggest the homologous nature of the OBP and CSP gene families, dating back their most recent common ancestor after the terrestrialization of Arthropoda (380--450 Ma) and we propose a scenario for the origin and diversification of these families.", "title": "" }, { "docid": "3f90af944ed7603fa7bbe8780239116a", "text": "Display advertising has been a significant source of revenue for publishers and ad networks in online advertising ecosystem. One important business model in online display advertising is Ad Exchange marketplace, also called non-guaranteed delivery (NGD), in which advertisers buy targeted page views and audiences on a spot market through real-time auction. In this paper, we describe a bid landscape forecasting system in NGD marketplace for any advertiser campaign specified by a variety of targeting attributes. In the system, the impressions that satisfy the campaign targeting attributes are partitioned into multiple mutually exclusive samples. Each sample is one unique combination of quantified attribute values. We develop a divide-and-conquer approach that breaks down the campaign-level forecasting problem. First, utilizing a novel star-tree data structure, we forecast the bid for each sample using non-linear regression by gradient boosting decision trees. Then we employ a mixture-of-log-normal model to generate campaign-level bid distribution based on the sample-level forecasted distributions. The experiment results of a system developed with our approach show that it can accurately forecast the bid distributions for various campaigns running on the world's largest NGD advertising exchange system, outperforming two baseline methods in term of forecasting errors.", "title": "" }, { "docid": "1ee33813e4d8710a620c4bd47817f774", "text": "This research work concerns the perceptual evaluation of the performance of information systems (IS) and more particularly, the construct of user satisfaction. 
Faced with the difficulty of obtaining objective measures for the success of IS, user satisfaction appeared as a substitutive measure of IS performance (DeLone & McLean, 1992). Some researchers have indeed shown that the evaluation of an IS cannot happen without an analysis of the feelings and perceptions of the individuals who make use of it. Consequently, the concept of satisfaction has been considered as a guarantee of the performance of an IS. It has therefore become necessary to ponder the drivers of user satisfaction. The analysis of models and measurement tools for satisfaction, as well as the adoption of a contingency perspective, has allowed the description of the principal dimensions that have a more or less direct impact on user perceptions.\n The case study of a large French group, carried out through an interpretativist approach conducted by way of 41 semi-structured interviews, allowed the conceptualization of the problem of perceptual evaluation of IS in a particular field study. This study led us to confirm the impact of certain factors (such as perceived usefulness, participation, the quality of relations with the IS Function and its resources, and also the fit of IS with user needs). On the contrary, other dimensions regarded as fundamental receive no consideration or see their influence nuanced in the case studied (the properties of IS, the ease of use, the quality of information). Lastly, this study has allowed for the identification of the influence of certain contingency and contextual variables on user satisfaction and, above all, for the description of the importance of interactions between the IS Function and the users", "title": "" }, { "docid": "732aa9623301d4d3cc6fc9d15c6836fe", "text": "Growing network traffic puts huge pressure on server clusters. Using load balancing technology in server clusters has become the choice of most enterprises. 
Because of many limitations, the development of traditional load balancing technology has encountered bottlenecks. This has forced companies to find new load balancing methods. Software Defined Networking (SDN) provides a good way to solve the load balancing problem. In this paper, we implemented two load balancing algorithms based on the latest SDN network architecture. The first is a static scheduling algorithm and the second is a dynamic scheduling algorithm. Our experiments show that the performance of the dynamic algorithm is better than that of the static algorithm.", "title": "" }, { "docid": "e5ab552986fc1ef93ea898ffc85ce0f9", "text": "As Cloud computing is reforming the infrastructure of IT industries, it has become one of the critical security concerns of the defensive mechanisms applied to secure Cloud environments. Even if there are tremendous advancements in defense systems regarding confidentiality, authentication and access control, there is still a challenge in protecting the availability of associated resources. Denial-of-service (DoS) attacks and distributed denial-of-service (DDoS) attacks can primarily compromise the availability of system services and can be easily started by using various tools, leading to financial damage or affecting the reputation. These attacks are very difficult to detect and filter, since packets that cause the attack are very much similar to legitimate traffic. The DoS attack is considered the biggest threat to the IT industry, and the intensity, size and frequency of such attacks are observed to be increasing every year. Therefore, there is a need for a stronger and more universal method to impede these attacks. In this paper, we present an overview of DoS and distributed DoS attacks that can be carried out in Cloud environments and possible defensive mechanisms, tools and devices. In addition, we discuss many open issues and challenges in defending Cloud environments against DoS attacks. 
This provides better understanding of the DDoS attack problem in Cloud computing environment, current solution space, and future research scope to deal with such attacks efficiently.", "title": "" }, { "docid": "8b86b1a60595bc9557d796a3bf22772f", "text": "Orchid plants are the members of Orchidaceae consisting of more than 25,000 species, which are distributed almost all over the world but more abundantly in the tropics. There are 177 genera, 1,125 species of orchids that originated in Thailand. Orchid plant collected from different nurseries showing Chlorotic and mosaic symptoms were observed on Vanda plants and it was suspected to infect with virus. So the symptomatic plants were tested for Cymbidium Mosaic Virus (CYMV), Odontoglossum ring spot virus (ORSV), Poty virus and Tomato Spotted Wilt Virus (TSWV) with Direct Antigen CoatingEnzyme Linked Immunosorbent Assay (DAC-ELISA) and further confirmed by Transmission Electron Microscopy (TEM). With the two methods CYMV and ORSV were detected positively from the suspected imported samples and low positive results were observed for Potex, Poty virus and Tomato Spotted Wilt Virus (TSWV).", "title": "" }, { "docid": "acb569b267eae92a6e33b52725f28833", "text": "A multi-objective design procedure is applied to the design of a close-coupled inductor for a three-phase interleaved 140kW DC-DC converter. For the multi-objective optimization, a genetic algorithm is used in combination with a detailed physical model of the inductive component. From the solution of the optimization, important conclusions about the advantages and disadvantages of using close-coupled inductors compared to separate inductors can be drawn.", "title": "" }, { "docid": "f4df305ad32ebdd1006eefdec6ee7ca3", "text": "In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. 
Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions not actually experienced. Their simulation is so successful that they can deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive facial signals from genuine facial signals by identifying the subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance, and after training human observers, we improved accuracy to a modest 55%. However, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of neural control systems involved in emotional signaling.", "title": "" }, { "docid": "b2d3ce62b38ac8d7bd0a7b7a2ff7d663", "text": "It is proposed that using Ethernet in the fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages, from use of lower-cost equipment, shared use of infrastructure with fixed access networks, to obtaining statistical multiplexing and optimized performance through probe-based monitoring and software-defined networking. 
However, a number of challenges exist: ultra-high-bit-rate requirements from the transport of increased bandwidth radio streams for multiple antennas in future mobile networks, and low latency and jitter to meet delay requirements and the demands of joint processing. A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transport of baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains. Delay and synchronization issues remain to be solved.", "title": "" }, { "docid": "217742ed285e8de40d68188566475126", "text": "It has been proposed that D-amino acid oxidase (DAO) plays an essential role in degrading D-serine, an endogenous coagonist of N-methyl-D-aspartate (NMDA) glutamate receptors. DAO shows genetic association with amyotrophic lateral sclerosis (ALS) and schizophrenia, in whose pathophysiology aberrant metabolism of D-serine is implicated. Although the pathology of both essentially involves the forebrain, in rodents, enzymatic activity of DAO is hindbrain-shifted and absent in the region. Here, we show activity-based distribution of DAO in the central nervous system (CNS) of humans compared with that of mice. DAO activity in humans was generally higher than that in mice. In the human forebrain, DAO activity was distributed in the subcortical white matter and the posterior limb of internal capsule, while it was almost undetectable in those areas in mice. In the lower brain centers, DAO activity was detected in the gray and white matters in a coordinated fashion in both humans and mice. In humans, DAO activity was prominent along the corticospinal tract, rubrospinal tract, nigrostriatal system, ponto-/olivo-cerebellar fibers, and in the anterolateral system. In contrast, in mice, the reticulospinal tract and ponto-/olivo-cerebellar fibers were the major pathways showing strong DAO activity. 
In the human corticospinal tract, activity-based staining of DAO did not merge with a motoneuronal marker, but colocalized mostly with excitatory amino acid transporter 2 and in part with GFAP, suggesting that DAO activity-positive cells are astrocytes seen mainly in the motor pathway. These findings establish the distribution of DAO activity in cerebral white matter and the motor system in humans, providing evidence to support the involvement of DAO in schizophrenia and ALS. Our results raise further questions about the regulation of D-serine in DAO-rich regions as well as the physiological/pathological roles of DAO in white matter astrocytes.", "title": "" }, { "docid": "834bc1349d6da53c277ddd7eba95dc6a", "text": "Lymphedema is a common condition frequently seen in cancer patients who have had lymph node dissection +/- radiation treatment. Traditional management is mainly non-surgical and unsatisfactory. Surgical treatment has relied on excisional techniques in the past. Physiologic operations have more recently been devised to help improve this condition. Assessing patients and deciding which of the available operations to offer them can be challenging. MRI is an extremely useful tool in patient assessment and treatment planning. J. Surg. Oncol. 2017;115:18-22. © 2016 Wiley Periodicals, Inc.", "title": "" }, { "docid": "63e45222ea9627ce22e9e90fc1ca4ea1", "text": "A soft switching three-transistor push-pull(TTPP)converter is proposed in this paper. The 3rd transistor is inserted in the primary side of a traditional push-pull converter. Two primitive transistors can achieve zero-voltage-switching (ZVS) easily under a wide load range, the 3rd transistor can also realize zero-voltage-switching assisted by leakage inductance. The rated voltage of the 3rd transistor is half of that of the main transistors. The operation theory is explained in detail. The soft-switching realization conditions are derived. 
An 800 W prototype with an 83.3 kHz switching frequency has been built. Experimental results are provided to verify the analysis.", "title": "" } ]
scidocsrr
f13aed0918913cda0bc7bd425da0422e
CAML: Fast Context Adaptation via Meta-Learning
[ { "docid": "e28ab50c2d03402686cc9a465e1231e7", "text": "Few-shot learning is challenging for learning algorithms that learn each task in isolation and from scratch. In contrast, meta-learning learns from many related tasks a meta-learner that can learn a new task more accurately and faster with fewer examples, where the choice of meta-learners is crucial. In this paper, we develop Meta-SGD, an SGD-like, easily trainable meta-learner that can initialize and adapt any differentiable learner in just one step, on both supervised learning and reinforcement learning. Compared to the popular meta-learner LSTM, Meta-SGD is conceptually simpler, easier to implement, and can be learned more efficiently. Compared to the latest meta-learner MAML, Meta-SGD has a much higher capacity by learning to learn not just the learner initialization, but also the learner update direction and learning rate, all in a single meta-learning process. Meta-SGD shows highly competitive performance for few-shot learning on regression, classification, and reinforcement learning.", "title": "" } ]
[ { "docid": "e8cf458c60dc7b4a8f71df2fabf1558d", "text": "We propose a vision-based method that localizes a ground vehicle using publicly available satellite imagery as the only prior knowledge of the environment. Our approach takes as input a sequence of ground-level images acquired by the vehicle as it navigates, and outputs an estimate of the vehicle's pose relative to a georeferenced satellite image. We overcome the significant viewpoint and appearance variations between the images through a neural multi-view model that learns location-discriminative embeddings in which ground-level images are matched with their corresponding satellite view of the scene. We use this learned function as an observation model in a filtering framework to maintain a distribution over the vehicle's pose. We evaluate our method on different benchmark datasets and demonstrate its ability to localize ground-level images in environments unseen during training, despite the challenges of significant viewpoint and appearance variations.", "title": "" }, { "docid": "577e5f82a0a195b092d7a15df110bd96", "text": "We propose a powerful new tool for conducting research on computational intelligence and games. 'PyVGDL' is a simple, high-level description language for 2D video games, and the accompanying software library permits parsing and instantly playing those games. The streamlined design of the language is based on defining locations and dynamics for simple building blocks, and the interaction effects when such objects collide, all of which are provided in a rich ontology. It can be used to quickly design games, without needing to deal with control structures, and the concise language is also accessible to generative approaches. We show how the dynamics of many classical games can be generated from a few lines of PyVGDL. 
The main objective of these generated games is to serve as diverse benchmark problems for learning and planning algorithms, so we provide a collection of interfaces for different types of learning agents, with visual or abstract observations, from a global or first-person viewpoint. To demonstrate the library's usefulness in a broad range of learning scenarios, we show how to learn competent behaviors when a model of the game dynamics is available or when it is not, when full state information is given to the agent or just subjective observations, when learning is interactive or in batch-mode, and for a number of different learning algorithms, including reinforcement learning and evolutionary search.", "title": "" }, { "docid": "39d6a07bc7065499eb4cb0d8adb8338a", "text": "This paper proposes a DNS Name Autoconfiguration scheme (called DNSNA) for not only the global DNS names but also the local DNS names of Internet of Things (IoT) devices. Since there exist so many devices in IoT environments, it is inefficient to manually configure the Domain Name System (DNS) names of such IoT devices. By this scheme, the DNS names of IoT devices can be autoconfigured with the device's category and model in IPv6-based IoT environments. This DNS name lets users easily identify each IoT device for monitoring and remote control in IoT environments. In the procedure to generate and register an IoT device's DNS name, the standard protocols of the Internet Engineering Task Force (IETF) are used. Since the proposed scheme resolves an IoT device's DNS name into an IPv6 address in unicast through an authoritative DNS server, it generates less traffic than Multicast DNS (mDNS), which is a legacy DNS application for the DNS name service in IoT environments. Thus, the proposed scheme is more appropriate in global IoT networks than mDNS. This paper explains the design of the proposed scheme and its service scenarios, such as smart roads and smart homes. 
The simulation results show that our proposal outperforms the legacy scheme in terms of energy consumption.", "title": "" }, { "docid": "2e89bc59f85b14cf40a868399a3ce351", "text": "CONTEXT\nYouth worldwide play violent video games many hours per week. Previous research suggests that such exposure can increase physical aggression.\n\n\nOBJECTIVE\nWe tested whether high exposure to violent video games increases physical aggression over time in both high- (United States) and low- (Japan) violence cultures. We hypothesized that the amount of exposure to violent video games early in a school year would predict changes in physical aggressiveness assessed later in the school year, even after statistically controlling for gender and previous physical aggressiveness.\n\n\nDESIGN\nIn 3 independent samples, participants' video game habits and physically aggressive behavior tendencies were assessed at 2 points in time, separated by 3 to 6 months.\n\n\nPARTICIPANTS\nOne sample consisted of 181 Japanese junior high students ranging in age from 12 to 15 years. A second Japanese sample consisted of 1050 students ranging in age from 13 to 18 years. The third sample consisted of 364 United States 3rd-, 4th-, and 5th-graders ranging in age from 9 to 12 years.\n\n\nRESULTS\nHabitual violent video game play early in the school year predicted later aggression, even after controlling for gender and previous aggressiveness in each sample. Those who played a lot of violent video games became relatively more physically aggressive. 
Multisample structural equation modeling revealed that this longitudinal effect was of a similar magnitude in the United States and Japan for similar-aged youth and was smaller (but still significant) in the sample that included older youth.\n\n\nCONCLUSIONS\nThese longitudinal results confirm earlier experimental and cross-sectional studies that had suggested that playing violent video games is a significant risk factor for later physically aggressive behavior and that this violent video game effect on youth generalizes across very different cultures. As a whole, the research strongly suggests reducing the exposure of youth to this risk factor.", "title": "" }, { "docid": "6981598efd4a70f669b5abdca47b7ea1", "text": "The in-flight alignment is a critical stage for airborne inertial navigation system/Global Positioning System (INS/GPS) applications. The alignment task is usually carried out by the Kalman filtering technique, which necessitates a good initial attitude to obtain satisfactory performance. Due to the airborne dynamics, the in-flight alignment is much more difficult than the alignment on the ground. An optimization-based coarse alignment approach that uses GPS position/velocity as input, founded on the newly-derived velocity/position integration formulae, is proposed. Simulation and flight test results show that, with the GPS lever arm well handled, it is potentially able to yield the initial heading up to 1 deg accuracy in 10 s. It can serve as a coarse in-flight alignment without any prior attitude information for the subsequent fine Kalman alignment. The approach can also be applied to other applications that require aligning the INS on the run.", "title": "" }, { "docid": "05b4df16c35a89ee2a5b9ac482e0a297", "text": "Intensity-based classification of MR images has proven problematic, even when advanced techniques are used. Intrascan and interscan intensity inhomogeneities are a common source of difficulty. 
While reported methods have had some success in correcting intrascan inhomogeneities, such methods require supervision for the individual scan. This paper describes a new method called adaptive segmentation that uses knowledge of tissue intensity properties and intensity inhomogeneities to correct and segment MR images. Use of the expectation-maximization (EM) algorithm leads to a method that allows for more accurate segmentation of tissue types as well as better visualization of magnetic resonance imaging (MRI) data, which has proven to be effective in a study that includes more than 1000 brain scans. Implementation and results are described for segmenting the brain in the following types of images: axial (dual-echo spin-echo), coronal [three dimensional Fourier transform (3-DFT) gradient-echo T1-weighted] all using a conventional head coil, and a sagittal section acquired using a surface coil. The accuracy of adaptive segmentation was found to be comparable with manual segmentation, and closer to manual segmentation than supervised multivariate classification while segmenting gray and white matter.", "title": "" }, { "docid": "83224037f402a44cf7f819acbb91d69f", "text": "Chinese word segmentation (CWS) is an important task for Chinese NLP. Recently, many neural network based methods have been proposed for CWS. However, these methods require a large number of labeled sentences for model training, and usually cannot utilize the useful information in a Chinese dictionary. In this paper, we propose two methods to exploit the dictionary information for CWS. The first one is based on pseudo labeled data generation, and the second one is based on multi-task learning. 
The experimental results on two benchmark datasets validate that our approach can effectively improve the performance of Chinese word segmentation, especially when training data is insufficient.", "title": "" }, { "docid": "7fdc12cbaa29b1f59d2a850a348317b7", "text": "Arhinia is a rare condition characterised by the congenital absence of nasal structures, with different patterns of presentation, and often associated with other craniofacial or somatic anomalies. To date, about 30 surviving cases have been reported. We report the case of a female patient aged 6 years, who underwent internal and external nose reconstruction using a staged procedure: a nasal airway was obtained through maxillary osteotomy and ostectomy, and lined with a local skin flap and split-thickness skin grafts; then the external nose was reconstructed with an expanded frontal flap, armed with an autogenous rib framework.", "title": "" }, { "docid": "c10829be320a9be6ecbc9ca751e8b56e", "text": "This article analyzes two decades of research regarding the mass media's role in shaping, perpetuating, and reducing the stigma of mental illness. It concentrates on three broad areas common in media inquiry: production, representation, and audiences. The analysis reveals that descriptions of mental illness and the mentally ill are distorted due to inaccuracies, exaggerations, or misinformation. The ill are presented not only as peculiar and different, but also as dangerous. Thus, the media perpetuate misconceptions and stigma. Especially prominent is the absence of agreed-upon definitions of \"mental illness,\" as well as the lack of research on the inter-relationships in audience studies between portrayals in the media and social perceptions. 
The analysis concludes with suggestions for further research on mass media's inter-relationships with mental illness.", "title": "" }, { "docid": "6ef52ad99498d944e9479252d22be9c8", "text": "The problem of detecting rectangular structures in images arises in many applications, from building extraction in aerial images to particle detection in cryo-electron microscopy. This paper proposes a new technique for rectangle detection using a windowed Hough transform. Every pixel of the image is scanned, and a sliding window is used to compute the Hough transform of small regions of the image. Peaks of the Hough image (which correspond to line segments) are then extracted, and a rectangle is detected when four extracted peaks satisfy certain geometric conditions. Experimental results indicate that the proposed technique produced promising results for both synthetic and natural images.", "title": "" }, { "docid": "0c891acac99279cff995a7471ea9aaff", "text": "The mainstay of diagnosis for Treponema pallidum infections is based on nontreponemal and treponemal serologic tests. Many new diagnostic methods for syphilis have been developed, using specific treponemal antigens and novel formats, including rapid point-of-care tests, enzyme immunoassays, and chemiluminescence assays. Although most of these newer tests are not yet cleared for use in the United States by the Food and Drug Administration, their performance and ease of automation have promoted their application for syphilis screening. Both sensitive and specific, new screening tests detect antitreponemal IgM and IgG antibodies by use of wild-type or recombinant T. pallidum antigens. However, these tests cannot distinguish between recent and remote or treated versus untreated infections. In addition, the screening tests require confirmation with nontreponemal tests. This use of treponemal tests for screening and nontreponemal serologic tests as confirmatory tests is a reversal of long-held practice. 
Clinicians need to understand the science behind these tests to use them properly in syphilis management.", "title": "" }, { "docid": "34a21bf5241d8cc3a7a83e78f8e37c96", "text": "A current-biased voltage-programmed (CBVP) pixel circuit for active-matrix organic light-emitting diode (AMOLED) displays is proposed. The pixel circuit can not only ensure an accurate and fast compensation for the threshold voltage variation and degeneration of the driving TFT and the OLED, but also provide the OLED with a negative bias during the programming period. The negative bias prevents the OLED from emitting light during the programming period and potentially suppresses the degradation of the OLED.", "title": "" }, { "docid": "a0ff157e543d7944a4a83c95dd0da7b3", "text": "This paper provides a review of some of the significant research work done on abstractive text summarization. The process of generating a summary from one or more text corpora, while keeping the key points of the corpus, is called text summarization. The most prominent techniques in text summarization are the abstractive and extractive methods. Extractive summarization is purely algorithmic: it copies the most relevant sentences/words from the input text corpus to create the summary. An abstractive method generates new sentences/words that may or may not be in the input corpus. This paper focuses on abstractive text summarization. It explains the overview of the various processes in abstractive text summarization, including data processing, word embedding, basic model architecture, and the training and validation process, and narrates the current research in this field. This includes different types of architectures, attention mechanisms, supervised and reinforcement learning, and the pros and cons of different architectures. 
Systematic comparison of different text summarization models will provide the future direction of text summarization.", "title": "" }, { "docid": "4318041c3cf82ce72da5983f20c6d6c4", "text": "In line with cloud computing emergence as the dominant enterprise computing paradigm, our conceptualization of the cloud computing reference architecture and service construction has also evolved. For example, to address the need for cost reduction and rapid provisioning, virtualization has moved beyond hardware to containers. More recently, serverless computing or Function-as-a-Service has been presented as a means to introduce further cost-efficiencies, reduce configuration and management overheads, and rapidly increase an application's ability to speed up, scale up and scale down in the cloud. The potential of this new computation model is reflected in the introduction of serverless computing platforms by the main hyperscale cloud service providers. This paper provides an overview and multi-level feature analysis of seven enterprise serverless computing platforms. It reviews extant research on these platforms and identifies the emergence of AWS Lambda as a de facto base platform for research on enterprise serverless cloud computing. The paper concludes with a summary of avenues for further research.", "title": "" }, { "docid": "5691ca09e609aea46b9fd5e7a83d165a", "text": "View-based 3-D object retrieval and recognition has become popular in practice, e.g., in computer aided design. It is difficult to precisely estimate the distance between two objects represented by multiple views. Thus, current view-based 3-D object retrieval and recognition methods may not perform well. In this paper, we propose a hypergraph analysis approach to address this problem by avoiding the estimation of the distance between objects. In particular, we construct multiple hypergraphs for a set of 3-D objects based on their 2-D views. 
In these hypergraphs, each vertex is an object, and each edge is a cluster of views. Therefore, an edge connects multiple vertices. We define the weight of each edge based on the similarities between any two views within the cluster. Retrieval and recognition are performed based on the hypergraphs. Therefore, our method can explore the higher order relationship among objects and does not use the distance between objects. We conduct experiments on the National Taiwan University 3-D model dataset and the ETH 3-D object collection. Experimental results demonstrate the effectiveness of the proposed method by comparing with the state-of-the-art methods.", "title": "" }, { "docid": "370b416dd51cfc08dc9b97f87c500eba", "text": "Apollonian circle packings arise by repeatedly filling the interstices between mutually tangent circles with further tangent circles. It is possible for every circle in such a packing to have integer radius of curvature, and we call such a packing an integral Apollonian circle packing. This paper studies number-theoretic properties of the set of integer curvatures appearing in such packings. Each Descartes quadruple of four tangent circles in the packing gives an integer solution to the Descartes equation, which relates the radii of curvature of four mutually tangent circles: x þ y þ z þ w 1⁄4 1 2 ðx þ y þ z þ wÞ: Each integral Apollonian circle packing is classified by a certain root quadruple of integers that satisfies the Descartes equation, and that corresponds to a particular quadruple of circles appearing in the packing. We express the number of root quadruples with fixed minimal element n as a class number, and give an exact formula for it. We study which integers occur in a given integer packing, and determine congruence restrictions which sometimes apply. 
We present evidence suggesting that the set of integer radii of curvatures that appear in an integral Apollonian circle packing has positive density, and in fact represents all sufficiently large integers not excluded by congruence conditions. Finally, we discuss asymptotic properties of the set of curvatures obtained as the packing is recursively constructed from a root quadruple.", "title": "" }, { "docid": "5988ef7f9c5b8dd125c78c39f26d5a70", "text": "Diagnosis Related Group (DRG) upcoding is an anomaly in healthcare data that costs hundreds of millions of dollars in many developed countries. DRG upcoding is typically detected through resource intensive auditing. As supervised modeling of DRG upcoding is severely constrained by scope and timeliness of past audit data, we propose in this paper an unsupervised algorithm to filter data for potential identification of DRG upcoding. The algorithm has been applied to a hip replacement/revision dataset and a heart-attack dataset. The results are consistent with the assumptions held by domain experts.", "title": "" }, { "docid": "e4b02298a2ff6361c0a914250f956911", "text": "This paper studies efficient means in dealing with intracategory diversity in object detection. 
Strategies for occlusion and orientation handling are explored by learning an ensemble of detection models from visual and geometrical clusters of object instances. An AdaBoost detection scheme is employed with pixel lookup features for fast detection. The analysis provides insight into the design of a robust vehicle detection system, showing promise in terms of detection performance and orientation estimation accuracy.", "title": "" }, { "docid": "16eff9f2b7626f53baa95463f18d518a", "text": "The need for fine-grained power management in digital ICs has led to the design and implementation of compact, scalable low-drop out regulators (LDOs) embedded deep within logic blocks. While analog LDOs have traditionally been used in digital ICs, the need for digitally implementable LDOs embedded in digital functional units for ultrafine grained power management is paramount. This paper presents a fully-digital, phase locked LDO implemented in 32 nm CMOS. The control model of the proposed design has been provided and limits of stability have been shown. Measurement results with a resistive load as well as a digital load exhibit peak current efficiency of 98%.", "title": "" }, { "docid": "0e8efa2e84888547a1a4502883316a7a", "text": "Conservation and sustainable management of wetlands requires participation of local stakeholders, including communities. The Bigodi Wetland is unusual because it is situated in a common property landscape but the local community has been running a successful community-based natural resource management programme (CBNRM) for the wetland for over a decade. Whilst external visitors to the wetland provide ecotourism revenues we sought to quantify community benefits through the use of wetland goods such as firewood, plant fibres, and the like, and costs associated with wild animals damaging farming activities. 
We interviewed 68 households living close to the wetland and valued their cash and non-cash incomes from farming and collection of non-timber forest products (NTFPs) and water. The majority of households collected a wide variety of plant and fish resources and water from the wetland for household use and livestock. Overall, 53% of total household cash and non-cash income was from collected products, mostly the wetland, 28% from arable agriculture, 12% from livestock and 7% from employment and cash transfers. Female-headed households had lower incomes than male-headed ones, and with a greater reliance on NTFPs. Annual losses due to wildlife damage were estimated at 4.2% of total gross income. Most respondents felt that the wetland was important for their livelihoods, with more than 80% identifying health, education, craft materials and firewood as key benefits. Ninety-five percent felt that the wetland was in a good condition and that most residents observed the agreed CBNRM rules regarding use of the wetland. This study confirms the success of the locally run CBNRM processes underlying the significant role that the wetland plays in local livelihoods.", "title": "" } ]
scidocsrr
2413e26c33d8fed722ae62b2ee35d170
Application of stochastic recurrent reinforcement learning to index trading
[ { "docid": "05ddc7e7819e5f9ac777f80e578f63ef", "text": "This paper introduces adaptive reinforcement learning (ARL) as the basis for a fully automated trading system application. The system is designed to trade foreign exchange (FX) markets and relies on a layered structure consisting of a machine learning algorithm, a risk management overlay and a dynamic utility optimization layer. An existing machine-learning method called recurrent reinforcement learning (RRL) was chosen as the underlying algorithm for ARL. One of the strengths of our approach is that the dynamic optimization layer makes a fixed choice of model tuning parameters unnecessary. It also allows for a risk-return trade-off to be made by the user within the system. The trading system is able to make consistent gains out-of-sample while avoiding large draw-downs. q 2005 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "4b8f59d1b416d4869ae38dbca0eaca41", "text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.", "title": "" } ]
[ { "docid": "04d66f58cea190d7d7ec8654b6c81d3b", "text": "Lymphedema is a chronic, progressive condition caused by an imbalance of lymphatic flow. Upper extremity lymphedema has been reported in 16-40% of breast cancer patients following axillary lymph node dissection. Furthermore, lymphedema following sentinel lymph node biopsy alone has been reported in 3.5% of patients. While the disease process is not new, there has been significant progress in the surgical care of lymphedema that can offer alternatives and improvements in management. The purpose of this review is to provide a comprehensive update and overview of the current advances and surgical treatment options for upper extremity lymphedema.", "title": "" }, { "docid": "c504800ce08654fb5bf49356d2f7fce3", "text": "Memristive synapses, the most promising passive devices for synaptic interconnections in artificial neural networks, are the driving force behind recent research on hardware neural networks. Despite significant efforts to utilize memristive synapses, progress to date has only shown the possibility of building a neural network system that can classify simple image patterns. In this article, we report a high-density cross-point memristive synapse array with improved synaptic characteristics. The proposed PCMO-based memristive synapse exhibits the necessary gradual and symmetrical conductance changes, and has been successfully adapted to a neural network system. The system learns, and later recognizes, the human thought pattern corresponding to three vowels, i.e. /a/, /i/, and /u/, using electroencephalography signals generated while a subject imagines speaking vowels. 
Our successful demonstration of a neural network system for EEG pattern recognition is likely to intrigue many researchers and stimulate a new research direction.", "title": "" }, { "docid": "157b5612644d4d7e1818932108d9119b", "text": "This paper proposes a joint multi-task learning algorithm to better predict attributes in images using deep convolutional neural networks (CNN). We consider learning binary semantic attributes through a multi-task CNN model, where each CNN will predict one binary attribute. The multi-task learning allows CNN models to simultaneously share visual knowledge among different attribute categories. Each CNN will generate attribute-specific feature representations, and then we apply multi-task learning on the features to predict their attributes. In our multi-task framework, we propose a method to decompose the overall model's parameters into a latent task matrix and combination matrix. Furthermore, under-sampled classifiers can leverage shared statistics from other classifiers to improve their performance. Natural grouping of attributes is applied such that attributes in the same group are encouraged to share more knowledge. Meanwhile, attributes in different groups will generally compete with each other, and consequently share less knowledge. We show the effectiveness of our method on two popular attribute datasets.", "title": "" }, { "docid": "fb26c619b72d05815b5b4ddf3100a8e6", "text": "Knowledge graphs are large graph-structured databases of facts, which typically suffer from incompleteness. Link prediction is the task of inferring missing relations (links) between entities (nodes) in a knowledge graph. We approach this task using a hypernetwork architecture to generate convolutional layer filters specific to each relation and apply those filters to the subject entity embeddings. This architecture enables a trade-off between non-linear expressiveness and the number of parameters to learn. 
Our model simplifies the entity and relation embedding interactions introduced by the predecessor convolutional model, while outperforming all previous approaches to link prediction across all standard link prediction datasets.", "title": "" }, { "docid": "88602ba9bcb297af04e58ed478664ee5", "text": "Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment (MCI)), has attracted more and more attention recently. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI, i.e., structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers for diagnosis of AD and MCI, although recent studies have shown that different biomarkers may provide complementary information for the diagnosis of AD and MCI. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF biomarkers, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51 AD patients, 99 MCI patients (including 43 MCI converters who had converted to AD within 18 months and 56 MCI non-converters who had not converted to AD within 18 months), and 52 healthy controls are used for development and validation of our proposed multimodal classification method. In particular, for each MR or FDG-PET image, 93 volumetric features are extracted from the 93 regions of interest (ROIs), automatically labeled by an atlas warping algorithm. For CSF biomarkers, their original values are directly used as features. Then, a linear support vector machine (SVM) is adopted to evaluate the classification accuracy, using a 10-fold cross-validation. 
As a result, for classifying AD from healthy controls, we achieve a classification accuracy of 93.2% (with a sensitivity of 93% and a specificity of 93.3%) when combining all three modalities of biomarkers, and only 86.5% when using even the best individual modality of biomarkers. Similarly, for classifying MCI from healthy controls, we achieve a classification accuracy of 76.4% (with a sensitivity of 81.8% and a specificity of 66%) for our combined method, and only 72% even using the best individual modality of biomarkers. Further analysis on MCI sensitivity of our combined method indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, we also evaluate the classification performance when employing a feature selection method to select the most discriminative MR and FDG-PET features. Again, our combined method shows considerably better performance, compared to the case of using an individual modality of biomarkers.", "title": "" }, { "docid": "aea4eb371579b66c75c4cc4d51201253", "text": "Fog computing based radio access network is a promising paradigm for the fifth generation wireless communication system to provide high spectral and energy efficiency. With the help of the new designed fog computing based access points (F-APs), the user-centric objectives can be achieved through the adaptive technique and will relieve the load of fronthaul and alleviate the burden of base band unit pool. In this paper, we derive the coverage probability and ergodic rate for both F-AP users and device-to-device users by taking into account the different nodes locations, cache sizes as well as user access modes. Particularly, the stochastic geometry tool is used to derive expressions for above performance metrics. 
Simulation results validate the accuracy of our analysis and we obtain interesting tradeoffs that depend on the effect of the cache size, user node density, and the quality-of-service constraints on the different performance metrics.", "title": "" }, { "docid": "707c5c55c11aac05c783929239f953dd", "text": "Social networks are of significant analytical interest. This is because their data are generated in great quantity and intermittently; moreover, the data come in a wide variety and are widely available to users. Through such data, it is desired to extract knowledge or information that can be used in decision-making activities. In this context, we have identified the lack of methods that apply data mining techniques to the task of analyzing the professional profile of employees. The aim of such analyses is to detect competencies that are of greater interest by being more required and also, to identify their associative relations. Thus, this work introduces the MineraSkill methodology that deals with methods to infer the desired profile of a candidate for a job vacancy. In order to do so, we use keyword detection via natural language processing techniques, and relate the detected keywords to one another by inferring association rules. The results are presented in the form of a case study, which analyzed data from LinkedIn, demonstrating the potential of the methodology in indicating trending competencies that are required together.", "title": "" }, { "docid": "bdbb97522eea6cb9f8e11f07c2e83282", "text": "Middle ear surgery is strongly influenced by anatomical and functional characteristics of the middle ear. The complex anatomy poses a challenge for the otosurgeon who moves between preservation or improvement of highly important functions (hearing, balance, facial motion) and eradication of diseases. Of these, perforations of the tympanic membrane, chronic otitis media, tympanosclerosis and cholesteatoma are encountered most often in clinical practice. 
Modern techniques for reconstruction of the ossicular chain aim for best possible hearing improvement using delicate alloplastic titanium prostheses, but a number of prosthesis-unrelated factors work against this intent. Surgery is always individualized to the case and there is no one-fits-all strategy. Above all, both middle ear diseases and surgery can be associated with a number of complications; the most important ones being hearing deterioration or deafness, dizziness, facial palsy and life-threatening intracranial complications. To minimize risks, a solid knowledge of and respect for neurootologic structures is essential for an otosurgeon who must train him- or herself intensively on temporal bones before performing surgery on a patient.", "title": "" }, { "docid": "c2f8de8ed9c796c351a96eedb072ddaf", "text": "We partially replicate and extend Shepard, Hovland, and Jenkins's (1961) classic study of task difficulty for learning six fundamental types of rule-based categorization problems. Our main results mirrored those of Shepard et al., with the ordering of task difficulty being the same as in the original study. A much richer data set was collected, however, which enabled the generation of block-by-block learning curves suitable for quantitative fitting. Four current computational models of classification learning were fitted to the learning data: ALCOVE (Kruschke, 1992), the rational model (Anderson, 1991), the configural-cue model (Gluck & Bower, 1988b), and an extended version of the configural-cue model with dimensionalized, adaptive learning rate mechanisms. Although all of the models captured important qualitative aspects of the learning data, ALCOVE provided the best overall quantitative fit. 
The results suggest the need to incorporate some form of selective attention to dimensions in category-learning models based on stimulus generalization and cue conditioning.", "title": "" }, { "docid": "6a993cdfbb701b43bb1cf287380e5b2e", "text": "There is a growing need for real-time human pose estimation from monocular RGB images in applications such as human computer interaction, assisted living, video surveillance, people tracking, activity recognition and motion capture. For the task, depth sensors and multi-camera systems are usually more expensive and difficult to set up than conventional RGB video cameras. Recent advances in convolutional neural network research have made it possible to replace traditional methods with more efficient convolutional neural network based methods in many computer vision tasks. This thesis presents a method for real-time multi-person human pose estimation from video by utilizing convolutional neural networks. The method is aimed at use-case-specific applications, where good accuracy is essential and variation of the background and poses is limited. This enables the use of a generic network architecture, which is both accurate and fast. The problem is divided into two phases: (1) pretraining and (2) fine-tuning. In pretraining, the network is trained with highly diverse input data from publicly available datasets, while in fine-tuning it is trained with application specific data recorded with Kinect. The method considers the whole system, including person detector, pose estimator and an automatic way to record application specific training material for fine-tuning. The method can also be thought of as a replacement for Kinect, and it can be used for higher level tasks such as gesture control, games, person tracking and action recognition.", "title": "" }, { "docid": "4a1c76e617fe05b7253cb508ce5119bc", "text": "Interdental cleaning is an important part of a patient's personal oral care regimen. 
Water flossers, also known as oral irrigators or dental water jets, can play a vital, effective role in interdental hygiene. Evidence has shown a significant reduction in plaque biofilm from tooth surfaces and the reduction of subgingival pathogenic bacteria from pockets as deep as 6 mm with the use of water flossing. In addition, water flossers have been shown to reduce gingivitis, bleeding, probing pocket depth, host inflammatory mediators, and calculus. Educating patients on the use of a water flosser as part of their oral hygiene routine can be a valuable tool in maintaining oral health.", "title": "" }, { "docid": "38438e6a0bd03ad5f076daa1f248d001", "text": "In recent years, research on reading-comprehension question and answering has drawn intense attention in Natural Language Processing. However, it is still a key issue to obtain the high-level semantic vector representations of the question and paragraph. Drawing inspiration from DrQA [1], which is a question and answering system proposed by Facebook, this paper proposes an attention-based question and answering model which adds the binary representation of the paragraph, the paragraph's attention to the question, and the question's attention to the paragraph. Meanwhile, a self-attention calculation method is proposed to enhance the question semantic vector representation. Besides, it uses multi-layer bidirectional Long Short-Term Memory (BiLSTM) networks to calculate the high-level semantic vector representations of paragraphs and questions. Finally, bilinear functions are used to calculate the probability of the answer's position in the paragraph. The experimental results on the Stanford Question Answering Dataset (SQuAD) development set show that the F1 score is 80.1% and the EM score is 71.4%, which demonstrates that the performance of the model is better than that of DrQA, since they increase by 2% and 1.3% respectively.", "title": "" }, { "docid": "f72267cde1287bc3d0a235043c4dc5f5", "text": "End-to-end congestion control mechanisms have been critical to the robustness and stability of the Internet. 
Most of today’s Internet traffic is TCP, and we expect this to remain so in the future. Thus, having “TCP-friendly” behavior is crucial for new applications. However, the emergence of non-congestion-controlled realtime applications threatens unfairness to competing TCP traffic and possible congestion collapse. We present an end-to-end TCP-friendly Rate Adaptation Protocol (RAP), which employs an additive-increase, multiplicative-decrease (AIMD) algorithm. It is well suited for unicast playback of realtime streams and other semi-reliable rate-based applications. Its primary goal is to be fair and TCP-friendly while separating network congestion control from application-level reliability. We evaluate RAP through extensive simulation, and conclude that bandwidth is usually evenly shared between TCP and RAP traffic. Unfairness to TCP traffic is directly determined by how TCP diverges from the AIMD algorithm. Basic RAP behaves in a TCP-friendly fashion in a wide range of likely conditions, but we also devised a fine-grain rate adaptation mechanism to extend this range further. Finally, we show that deploying RED queue management can result in an ideal fairness between TCP and RAP traffic.", "title": "" }, { "docid": "77ac3a28ffa420a1e4f1366d36b4c188", "text": " Call-Exner bodies are present in ovarian follicles of a range of species including human and rabbit, and in a range of human ovarian tumors. We have also found structures resembling Call-Exner bodies in bovine preantral and small antral follicles. Hematoxylin and eosin staining of single sections of bovine ovaries has shown that 30% of preantral follicles with more than one layer of granulosa cells and 45% of small (less than 650 μm) antral follicles have at least one Call-Exner body composed of a spherical eosinophilic region surrounded by a rosette of granulosa cells. Alcian blue stains the spherical eosinophilic region of the Call-Exner bodies. 
Electron microscopy has demonstrated that some Call-Exner bodies contain large aggregates of convoluted basal lamina, whereas others also contain regions of unassembled basal-lamina-like material. Individual chains of the basal lamina components type IV collagen (α1 to α5) and laminin (α1, β2 and δ1) have been immunolocalized to Call-Exner bodies in sections of fresh-frozen ovaries. Bovine Call-Exner bodies are presumably analogous to Call-Exner bodies in other species but are predominantly found in preantral and small antral follicles, rather than large antral follicles. With follicular development, the basal laminae of Call-Exner bodies change in their apparent ratio of type IV collagen to laminin, similar to changes observed in the follicular basal lamina, suggesting that these structures have a common cellular origin.", "title": "" }, { "docid": "83f5af68f54f9db0608d8173432188f9", "text": "JaTeCS is an open source Java library that supports research on automatic text categorization and other related problems, such as ordinal regression and quantification, which are of special interest in opinion mining applications. It covers all the steps of an experimental activity, from reading the corpus to the evaluation of the experimental results. As JaTeCS is focused on text as the main input data, it provides the user with many text-dedicated tools, e.g.: data readers for many formats, including the most commonly used text corpora and lexical resources, natural language processing tools, multi-language support, methods for feature selection and weighting, the implementation of many machine learning algorithms as well as wrappers for well-known external software (e.g., SVMlight) which enable their full control from code. JaTeCS supports its expansion by abstracting through interfaces many of the typical tools and procedures used in text processing tasks. 
The library also provides a number of “template” implementations of typical experimental setups (e.g., train-test, k-fold validation, grid-search optimization, randomized runs) which enable fast realization of experiments just by connecting the templates with data readers, learning algorithms and evaluation measures.", "title": "" }, { "docid": "2eed40550c0d011af91a2998ed16f501", "text": "Context: Client-side JavaScript is widely used in web applications to improve user-interactivity and minimize client-server communications. Unfortunately, web applications are prone to JavaScript faults. While prior studies have demonstrated the prevalence of these faults, no attempts have been made to determine their root causes and consequences. Objective: The goal of our study is to understand the root causes and impact of JavaScript faults and how the results can impact JavaScript programmers, testers and tool developers. Method: We perform an empirical study of 317 bug reports from 12 bug repositories. The bug reports are thoroughly examined to classify and extract information about the fault's cause (the error) and consequence (the failure and impact). Result: The majority (65%) of JavaScript faults are DOM-related, meaning they are caused by faulty interactions of the JavaScript code with the Document Object Model (DOM). Further, 80% of the highest impact JavaScript faults are DOM-related. Finally, most JavaScript faults originate from programmer mistakes committed in the JavaScript code itself, as opposed to other web application components such as the server-side or HTML code. Conclusion: Given the prevalence of DOM-related faults, JavaScript programmers need development tools that can help them reason about the DOM. Also, testers should prioritize detection of DOM-related faults as most high impact faults belong to this category. 
Finally, developers can use the error patterns we found to design more powerful static analysis tools for JavaScript.", "title": "" }, { "docid": "705efc15f0c07c3028c691d5098fe921", "text": "Antisocial behavior is a socially maladaptive and harmful trait to possess. This can be especially injurious for a child who is raised by a parent with this personality structure. The pathology of antisocial behavior implies traits such as deceitfulness, irresponsibility, unreliability, and an incapability to feel guilt, remorse, or even love. This is damaging to a child’s emotional, cognitive, and social development. Parents with this personality makeup can leave a child traumatized, empty, and incapable of forming meaningful personal relationships. Both genetic and environmental factors influence the development of antisocial behavior. Moreover, the child with a genetic predisposition to antisocial behavior who is raised with a parental style that triggers the genetic liability is at high risk for developing the same personality structure. Antisocial individuals are impulsive, irritable, and often have no concerns over their purported responsibilities. As a parent, this can lead to erratic discipline, neglectful parenting, and can undermine effective care giving. This paper will focus on the implications of parents with antisocial behavior and the impact that this behavior has on attachment as well as on the development of antisocial traits in children.", "title": "" }, { "docid": "c7237823182b47cc03c70937bbbb0be0", "text": "To discover patterns in historical data, climate scientists have applied various clustering methods with the goal of identifying regions that share some common climatological behavior. However, past approaches are limited by the fact that they either consider only a single time period (snapshot) of multivariate data, or they consider only a single variable by using the time series data as multi-dimensional feature vector. 
In both cases, potentially useful information may be lost. Moreover, clusters in high-dimensional data space can be difficult to interpret, prompting the need for a more effective data representation. We address both of these issues by employing a complex network (graph) to represent climate data, a more intuitive model that can be used for analysis while also having a direct mapping to the physical world for interpretation. A cross correlation function is used to weight network edges, thus respecting the temporal nature of the data, and a community detection algorithm identifies multivariate clusters. Examining networks for consecutive periods allows us to study structural changes over time. We show that communities have a climatological interpretation and that disturbances in structure can be an indicator of climate events (or lack thereof). Finally, we discuss how this model can be applied for the discovery of more complex concepts such as unknown teleconnections or the development of multivariate climate indices and predictive insights.", "title": "" }, { "docid": "b60850caccf9be627b15c7c83fb3938e", "text": "Research and development of hip stem implants started centuries ago. However, there is still no yet an optimum design that fulfills all the requirements of the patient. New manufacturing technologies have opened up new possibilities for complicated theoretical designs to become tangible reality. Current trends in the development of hip stems focus on applying porous structures to improve osseointegration and reduce stem stiffness in order to approach the stiffness of the natural human bone. In this field, modern additive manufacturing machines offer unique flexibility in manufacturing parts combining variable density mesh structures with solid and porous metal in a single manufacturing process. Furthermore, additive manufacturing machines became powerful competitors in the economical mass production of hip implants. 
This is due to their ability to manufacture several parts with different geometries in a single setup and with minimum material consumption. This paper reviews the application of additive manufacturing (AM) techniques in the production of innovative porous femoral hip stem design.", "title": "" } ]
scidocsrr
51921151c2e3c4b4fa039456a32f955f
A task-driven approach to time scale detection in dynamic networks
[ { "docid": "b89a3bc8aa519ba1ccc818fe2a54b4ff", "text": "We present the design, implementation, and deployment of a wearable computing platform for measuring and analyzing human behavior in organizational settings. We propose the use of wearable electronic badges capable of automatically measuring the amount of face-to-face interaction, conversational time, physical proximity to other people, and physical activity levels in order to capture individual and collective patterns of behavior. Our goal is to be able to understand how patterns of behavior shape individuals and organizations. By using on-body sensors in large groups of people for extended periods of time in naturalistic settings, we have been able to identify, measure, and quantify social interactions, group behavior, and organizational dynamics. We deployed this wearable computing platform in a group of 22 employees working in a real organization over a period of one month. Using these automatic measurements, we were able to predict employees' self-assessments of job satisfaction and their own perceptions of group interaction quality by combining data collected with our platform and e-mail communication data. In particular, the total amount of communication was predictive of both of these assessments, and betweenness in the social network exhibited a high negative correlation with group interaction satisfaction. We also found that physical proximity and e-mail exchange had a negative correlation of r = -0.55 (p < 0.01), which has far-reaching implications for past and future research on social networks.", "title": "" }, { "docid": "e4890b63e9a51029484354535765801c", "text": "Many different machine learning algorithms exist; taking into account each algorithm's hyperparameters, there is a staggeringly large number of possible alternatives overall. 
We consider the problem of simultaneously selecting a learning algorithm and setting its hyperparameters, going beyond previous work that attacks these issues separately. We show that this problem can be addressed by a fully automated approach, leveraging recent innovations in Bayesian optimization. Specifically, we consider a wide range of feature selection techniques (combining 3 search and 8 evaluator methods) and all classification approaches implemented in WEKA's standard distribution, spanning 2 ensemble methods, 10 meta-methods, 27 base classifiers, and hyperparameter settings for each classifier. On each of 21 popular datasets from the UCI repository, the KDD Cup 09, variants of the MNIST dataset and CIFAR-10, we show classification performance often much better than using standard selection and hyperparameter optimization methods. We hope that our approach will help non-expert users to more effectively identify machine learning algorithms and hyperparameter settings appropriate to their applications, and hence to achieve improved performance.", "title": "" } ]
[ { "docid": "d02e87a00aaf29a86cf94ad0c539fd0d", "text": "Future advanced driver assistance systems will contain multiple sensors that are used for several applications, such as highly automated driving on freeways. The problem is that the sensors are usually asynchronous and their data possibly out-of-sequence, making fusion of the sensor data non-trivial. This paper presents a novel approach to track-to-track fusion for automotive applications with asynchronous and out-of-sequence sensors using information matrix fusion. This approach solves the problem of correlation between sensor data due to the common process noise and common track history, which eliminates the need to replace the global track estimate with the fused local estimate at each fusion cycle. The information matrix fusion approach is evaluated in simulation and its performance demonstrated using real sensor data on a test vehicle designed for highly automated driving on freeways.", "title": "" }, { "docid": "8972e89b0b06bf25e72f8cb82b6d629a", "text": "Community detection is an important task for mining the structure and function of complex networks. Generally, there are several different kinds of nodes in a network which are cluster nodes densely connected within communities, as well as some special nodes like hubs bridging multiple communities and outliers marginally connected with a community. In addition, it has been shown that there is a hierarchical structure in complex networks with communities embedded within other communities. Therefore, a good algorithm is desirable to be able to not only detect hierarchical communities, but also identify hubs and outliers. In this paper, we propose a parameter-free hierarchical network clustering algorithm SHRINK by combining the advantages of density-based clustering and modularity optimization methods. 
Based on the structural connectivity information, the proposed algorithm can effectively reveal the embedded hierarchical community structure with multiresolution in large-scale weighted undirected networks, and identify hubs and outliers as well. Moreover, it overcomes the sensitive threshold problem of density-based clustering algorithms and the resolution limit possessed by other modularity-based methods. To illustrate our methodology, we conduct experiments with both real-world and synthetic datasets for community detection, and compare with many other baseline methods. Experimental results demonstrate that SHRINK achieves the best performance with consistent improvements.", "title": "" }, { "docid": "5c32b7bea7470a50a900a62e1a3dffc3", "text": "Recommender systems (RSs) have been the most important technology for increasing the business in Taobao, the largest online consumer-to-consumer (C2C) platform in China. There are three major challenges facing RS in Taobao: scalability, sparsity and cold start. In this paper, we present our technical solutions to address these three challenges. The methods are based on a well-known graph embedding framework. We first construct an item graph from users' behavior history, and learn the embeddings of all items in the graph. The item embeddings are employed to compute pairwise similarities between all items, which are then used in the recommendation process. To alleviate the sparsity and cold start problems, side information is incorporated into the graph embedding framework. We propose two aggregation methods to integrate the embeddings of items and the corresponding side information. Experimental results from offline experiments show that methods incorporating side information are superior to those that do not. Further, we describe the platform upon which the embedding methods are deployed and the workflow to process the billion-scale data in Taobao. 
Using A/B test, we show that the online Click-Through-Rates (CTRs) are improved comparing to the previous collaborative filtering based methods widely used in Taobao, further demonstrating the effectiveness and feasibility of our proposed methods in Taobao's live production environment.", "title": "" }, { "docid": "e8c6cdc70be62c6da150b48ba69c0541", "text": "Stress granules and processing bodies are related mRNA-containing granules implicated in controlling mRNA translation and decay. A genomic screen identifies numerous factors affecting granule formation, including proteins involved in O-GlcNAc modifications. These results highlight the importance of post-translational modifications in translational control and mRNP granule formation.", "title": "" }, { "docid": "8c0a8816028e8c50ebccbd812ee3a4e5", "text": "Songs are representation of audio signal and musical instruments. An audio signal separation system should be able to identify different audio signals such as speech, background noise and music. In a song the singing voice provides useful information regarding pitch range, music content, music tempo and rhythm. An automatic singing voice separation system is used for attenuating or removing the music accompaniment. The paper presents survey of the various algorithm and method for separating singing voice from musical background. From the survey it is observed that most of researchers used Robust Principal Component Analysis method for separation of singing voice from music background, by taking into account the rank of music accompaniment and the sparsity of singing voices.", "title": "" }, { "docid": "8f1d27581e7a83e378129e4287c64bd9", "text": "Online social media plays an increasingly significant role in shaping the political discourse during elections worldwide. In the 2016 U.S. presidential election, political campaigns strategically designed candidacy announcements on Twitter to produce a significant increase in online social media attention. 
We use large-scale online social media communications to study the factors of party, personality, and policy in the Twitter discourse following six major presidential campaign announcements for the 2016 U.S. presidential election. We observe that all campaign announcements result in an instant bump in attention, with up to several orders of magnitude increase in tweets. However, we find that Twitter discourse as a result of this bump in attention has overwhelmingly negative sentiment. The bruising criticism, driven by crosstalk from Twitter users of opposite party affiliations, is organized by hashtags such as #NoMoreBushes and #WhyImNotVotingForHillary. We analyze how people take to Twitter to criticize specific personality traits and policy positions of presidential candidates.", "title": "" }, { "docid": "76d260180b588f881f1009a420a35b3b", "text": "Appearance changes due to weather and seasonal conditions represent a strong impediment to the robust implementation of machine learning systems in outdoor robotics. While supervised learning optimises a model for the training domain, it will deliver degraded performance in application domains that underlie distributional shifts caused by these changes. Traditionally, this problem has been addressed via the collection of labelled data in multiple domains or by imposing priors on the type of shift between both domains. We frame the problem in the context of unsupervised domain adaptation and develop a framework for applying adversarial techniques to adapt popular, state-of-the-art network architectures with the additional objective to align features across domains. Moreover, as adversarial training is notoriously unstable, we first perform an extensive ablation study, adapting many techniques known to stabilise generative adversarial networks, and evaluate on a surrogate classification task with the same appearance change. 
The distilled insights are applied to the problem of free-space segmentation for motion planning in autonomous driving.", "title": "" }, { "docid": "49b0cf976357d0c943ff003526ffff1f", "text": "Transcranial direct current stimulation (tDCS) is a promising tool for neurocognitive enhancement. Several studies have shown that just a single session of tDCS over the left dorsolateral pFC (lDLPFC) can improve the core cognitive function of working memory (WM) in healthy adults. Yet, recent studies combining multiple sessions of anodal tDCS over lDLPFC with verbal WM training did not observe additional benefits of tDCS in subsequent stimulation sessions nor transfer of benefits to novel WM tasks posttraining. Using an enhanced stimulation protocol as well as a design that included a baseline measure each day, the current study aimed to further investigate the effects of multiple sessions of tDCS on WM. Specifically, we investigated the effects of three subsequent days of stimulation with anodal (20 min, 1 mA) versus sham tDCS (1 min, 1 mA) over lDLPFC (with a right supraorbital reference) paired with a challenging verbal WM task. WM performance was measured with a verbal WM updating task (the letter n-back) in the stimulation sessions and several WM transfer tasks (different letter set n-back, spatial n-back, operation span) before and 2 days after stimulation. Anodal tDCS over lDLPFC enhanced WM performance in the first stimulation session, an effect that remained visible 24 hr later. However, no further gains of anodal tDCS were observed in the second and third stimulation sessions, nor did benefits transfer to other WM tasks at the group level. Yet, interestingly, post hoc individual difference analyses revealed that in the anodal stimulation group the extent of change in WM performance on the first day of stimulation predicted pre to post changes on both the verbal and the spatial transfer task. Notably, this relationship was not observed in the sham group. 
Performance of two individuals worsened during anodal stimulation and on the transfer tasks. Together, these findings suggest that repeated anodal tDCS over lDLPFC combined with a challenging WM task may be an effective method to enhance domain-independent WM functioning in some individuals, but not others, or can even impair WM. They thus call for a thorough investigation into individual differences in tDCS respondence as well as further research into the design of multisession tDCS protocols that may be optimal for boosting cognition across a wide range of individuals.", "title": "" }, { "docid": "300485eefc3020135cdaa31ad36f7462", "text": "The number of cyber threats is constantly increasing. In 2013, 200,000 malicious tools were identified each day by antivirus vendors. This figure rose to 800,000 per day in 2014 and then to 1.8 million per day in 2016! The bar of 3 million per day will be crossed in 2017. Traditional security tools (mainly signature-based) show their limits and are less and less effective to detect these new cyber threats. Detecting never-seen-before or zero-day malware, including ransomware, efficiently requires a new approach in cyber security management. This requires a move from signature-based detection to behavior-based detection. We have developed a data breach detection system named CDS using Machine Learning techniques which is able to identify zero-day malware by analyzing the network traffic. In this paper, we present the capability of the CDS to detect zero-day ransomware, particularly WannaCry.", "title": "" }, { "docid": "ad4c9b26e0273ada7236068fb8ac4729", "text": "Understanding user participation is fundamental in anticipating the popularity of online content. In this paper, we explore how the number of users' comments during a short observation period after publication can be used to predict the expected popularity of articles published by a countrywide online newspaper. 
We evaluate a simple linear prediction model on a real dataset of hundreds of thousands of articles and several millions of comments collected over a period of four years. Analyzing the accuracy of our proposed model for different values of its basic parameters we provide valuable insights on the potentials and limitations for predicting content popularity based on early user activity.", "title": "" }, { "docid": "f55e380c158ae01812f009fd81642d7f", "text": "In this paper, we proposed a system to effectively create music mashups – a kind of re-created music that is made by mixing parts of multiple existing music pieces. Unlike previous studies which merely generate mashups by overlaying music segments on one single base track, the proposed system creates mashups with multiple background (e.g. instrumental) and lead (e.g. vocal) track segments. So, besides the suitability between the vertically overlaid tracks (i.e. vertical mashability) used in previous studies, we proposed to further consider the suitability between the horizontally connected consecutive music segments (i.e. horizontal mashability) when searching for proper music segments to be combined. On the vertical side, two new factors: “harmonic change balance” and “volume weight” have been considered. On the horizontal side, the methods used in the studies of medley creation are incorporated. Combining vertical and horizontal mashabilities together, we defined four levels of mashability that may be encountered and found the proper solution to each of them. Subjective evaluations showed that the proposed four levels of mashability can appropriately reflect the degrees of listening enjoyment. 
Besides, by taking the newly proposed vertical mashability measurement into account, the improvement in user satisfaction is statistically significant.", "title": "" }, { "docid": "6c149f1f6e9dc859bf823679df175afb", "text": "Neurofeedback is attracting renewed interest as a method to self-regulate one's own brain activity to directly alter the underlying neural mechanisms of cognition and behavior. It not only promises new avenues as a method for cognitive enhancement in healthy subjects, but also as a therapeutic tool. In the current article, we present a review tutorial discussing key aspects relevant to the development of electroencephalography (EEG) neurofeedback studies. In addition, the putative mechanisms underlying neurofeedback learning are considered. We highlight both aspects relevant for the practical application of neurofeedback as well as rather theoretical considerations related to the development of new generation protocols. Important characteristics regarding the set-up of a neurofeedback protocol are outlined in a step-by-step way. All these practical and theoretical considerations are illustrated based on a protocol and results of a frontal-midline theta up-regulation training for the improvement of executive functions. Not least, assessment criteria for the validation of neurofeedback studies as well as general guidelines for the evaluation of training efficacy are discussed.", "title": "" }, { "docid": "6982c79b6fa2cda4f0323421f8e3b4be", "text": "We propose split-brain autoencoders, a straightforward modification of the traditional autoencoder architecture, for unsupervised representation learning. The method adds a split to the network, resulting in two disjoint sub-networks. Each sub-network is trained to perform a difficult task &#x2013; predicting one subset of the data channels from another. Together, the sub-networks extract features from the entire input signal. 
By forcing the network to solve cross-channel prediction tasks, we induce a representation within the network which transfers well to other, unseen tasks. This method achieves state-of-the-art performance on several large-scale transfer learning benchmarks.", "title": "" }, { "docid": "f7a1eaa86a81b104a9ae62dc87c495aa", "text": "In the Internet of Things, the extreme heterogeneity of sensors, actuators and user devices calls for new tools and design models able to translate the user's needs in machine-understandable scenarios. The scientific community has proposed different solution for such issue, e.g., the MQTT (MQ Telemetry Transport) protocol introduced the topic concept as “the key that identifies the information channel to which payload data is published”. This study extends the topic approach by proposing the Web of Topics (WoX), a conceptual model for the IoT. A WoX Topic is identified by two coordinates: (i) a discrete semantic feature of interest (e.g. temperature, humidity), and (ii) a URI-based location. An IoT entity defines its role within a Topic by specifying its technological and collaborative dimensions. By this approach, it is easier to define an IoT entity as a set of couples Topic-Role. In order to prove the effectiveness of the WoX approach, we developed the WoX APIs on top of an EPCglobal implementation. Then, 10 developers were asked to build a WoX-based application supporting a physics lab scenario at school. They also filled out an ex-ante and an ex-post questionnaire. A set of qualitative and quantitative metrics allowed measuring the model's outcome.", "title": "" }, { "docid": "645f49ff21d31bb99cce9f05449df0d7", "text": "The growing popularity of the JSON format has fueled increased interest in loading and processing JSON data within analytical data processing systems. However, in many applications, JSON parsing dominates performance and cost. 
In this paper, we present a new JSON parser called Mison that is particularly tailored to this class of applications, by pushing down both projection and filter operators of analytical queries into the parser. To achieve these features, we propose to deviate from the traditional approach of building parsers using finite state machines (FSMs). Instead, we follow a two-level approach that enables the parser to jump directly to the correct position of a queried field without having to perform expensive tokenizing steps to find the field. At the upper level, Mison speculatively predicts the logical locations of queried fields based on previously seen patterns in a dataset. At the lower level, Mison builds structural indices on JSON data to map logical locations to physical locations. Unlike all existing FSM-based parsers, building structural indices converts control flow into data flow, thereby largely eliminating inherently unpredictable branches in the program and exploiting the parallelism available in modern processors. We experimentally evaluate Mison using representative real-world JSON datasets and the TPC-H benchmark, and show that Mison produces significant performance benefits over the best existing JSON parsers; in some cases, the performance improvement is over one order of magnitude.", "title": "" }, { "docid": "9dac75a40e421163c4e05cfd5d36361f", "text": "In recent years, many data mining methods have been proposed for finding useful and structured information from market basket data. The association rule model was recently proposed in order to discover useful patterns and dependencies in such data. This paper discusses a method for indexing market basket data efficiently for similarity search. The technique is likely to be very useful in applications which utilize the similarity in customer buying behavior in order to make peer recommendations. 
We propose an index called the signature table, which is very flexible in supporting a wide range of similarity functions. The construction of the index structure is independent of the similarity function, which can be specified at query time. The resulting similarity search algorithm shows excellent scalability with increasing memory availability and database size.", "title": "" }, { "docid": "29ac2afc399bbf61927c4821d3a6e0a0", "text": "A well used approach for echo cancellation is the two-path method, where two adaptive filters in parallel are utilized. Typically, one filter is continuously updated, and when this filter is considered better adjusted to the echo-path than the other filter, the coefficients of the better adjusted filter is transferred to the other filter. When this transfer should occur is controlled by the transfer logic. This paper proposes transfer logic that is both more robust and more simple to tune, owing to fewer parameters, than the conventional approach. Extensive simulations show the advantages of the proposed method.", "title": "" }, { "docid": "510439267c11c53b31dcf0b1c40e331b", "text": "Spatial multicriteria decision problems are decision problems where one needs to take multiple conflicting criteria as well as geographical knowledge into account. In such a context, exploratory spatial analysis is known to provide tools to visualize as much data as possible on maps but does not integrate multicriteria aspects. Also, none of the tools provided by multicriteria analysis were initially destined to be used in a geographical context.In this paper, we propose an application of the PROMETHEE and GAIA ranking methods to Geographical Information Systems (GIS). The aim is to help decision makers obtain rankings of geographical entities and understand why such rankings have been obtained. To do that, we make use of the visual approach of the GAIA method and adapt it to display the results on geographical maps. 
This approach is then extended to cover several weaknesses of the adaptation. Finally, it is applied to a study of the region of Brussels as well as an evaluation of the Human Development Index (HDI) in Europe.", "title": "" }, { "docid": "283708fe3c950ac08bf932d68feb6d56", "text": "Diabetic wounds are unlike typical wounds in that they are slower to heal, making treatment with conventional topical medications an uphill process. Among several different alternative therapies, honey is an effective choice because it provides comparatively rapid wound healing. Although honey has been used as an alternative medicine for wound healing since ancient times, the application of honey to diabetic wounds has only recently been revived. Because honey has some unique natural features as a wound healer, it works even more effectively on diabetic wounds than on normal wounds. In addition, honey is known as an \"all in one\" remedy for diabetic wound healing because it can combat many microorganisms that are involved in the wound process and because it possesses antioxidant activity and controls inflammation. In this review, the potential role of honey's antibacterial activity on diabetic wound-related microorganisms and honey's clinical effectiveness in treating diabetic wounds based on the most recent studies is described. Additionally, ways in which honey can be used as a safer, faster, and effective healing agent for diabetic wounds in comparison with other synthetic medications in terms of microbial resistance and treatment costs are also described to support its traditional claims.", "title": "" }, { "docid": "df6e410fddeb22c7856f5362b7abc1de", "text": "With the increasing prevalence of Web 2.0 and cloud computing, password-based logins play an increasingly important role on user-end systems. We use passwords to authenticate ourselves to countless applications and services. However, login credentials can be easily stolen by attackers. 
In this paper, we present a framework, TrustLogin, to secure password-based logins on commodity operating systems. TrustLogin leverages System Management Mode to protect the login credentials from malware even when OS is compromised. TrustLogin does not modify any system software in either client or server and is transparent to users, applications, and servers. We conduct two study cases of the framework on legacy and secure applications, and the experimental results demonstrate that TrustLogin is able to protect login credentials from real-world keyloggers on Windows and Linux platforms. TrustLogin is robust against spoofing attacks. Moreover, the experimental results also show TrustLogin introduces a low overhead with the tested applications.", "title": "" } ]
scidocsrr
f044fe45667845e23a37450a4166419f
An effective voting method for circle detection
[ { "docid": "40eaf943d6fa760b064a329254adc5db", "text": "We introduce the Adaptive Hough Transform, AHT, as an efficient way of implementing the Hough Transform, HT, method for the detection of 2-D shapes. The AHT uses a small accumulator array and the idea of a flexible iterative \"coarse to fine\" accumulation and search strategy to identify significant peaks in the Hough parameter spaces. The method is substantially superior to the standard HT implementation in both storage and computational requirements. In this correspondence we illustrate the ideas of the AHT by tackling the problem of identifying linear and circular segments in images by searching for clusters of evidence in 2-D parameter spaces. We show that the method is robust to the addition of extraneous noise and can be used to analyze complex images containing more than one shape.", "title": "" } ]
[ { "docid": "f7e779114a0eb67fd9e3dfbacf5110c9", "text": "Online game is an increasingly popular source of entertainment for all ages, with relatively prevalent negative consequences. Addiction is a problem that has received much attention. This research aims to develop a measure of online game addiction for Indonesian children and adolescents. The Indonesian Online Game Addiction Questionnaire draws from earlier theories and research on the internet and game addiction. Its construction is further enriched by including findings from qualitative interviews and field observation to ensure appropriate expression of the items. The measure consists of 7 items with a 5-point Likert Scale. It is validated by testing 1,477 Indonesian junior and senior high school students from several schools in Manado, Medan, Pontianak, and Yogyakarta. The validation evidence is shown by item-total correlation and criterion validity. The Indonesian Online Game Addiction Questionnaire has good item-total correlation (ranging from 0.29 to 0.55) and acceptable reliability (α = 0.73). It is also moderately correlated with the participant's longest time record to play online games (r = 0.39; p<0.01), average days per week in playing online games (ρ = 0.43; p<0.01), average hours per days in playing online games (ρ = 0.41; p<0.01), and monthly expenditure for online games (ρ = 0.30; p<0.01). Furthermore, we created a clinical cut-off estimate by combining criteria and population norm. The clinical cut-off estimate showed that the score of 14 to 21 may indicate mild online game addiction, and the score of 22 and above may indicate online game addiction. 
Overall, the result shows that the Indonesian Online Game Addiction Questionnaire has sufficient psychometric properties for research use, as well as limited clinical application.", "title": "" }, { "docid": "ea048488791219be809072862a061444", "text": "Our object oriented programming approach has great ability to improve programming behavior for modern system and software engineering, but it does not properly capture interaction with the real world. In the real world, programming requires powerful interlinking among the properties and characteristics of various objects. Basically, this approach of programming gives a better representation of objects in the real world and provides better relationships among the objects. I have explained the new concept of my neuro object oriented approach. This approach contains many new features, such as originty, a new concept of inheritance, a new concept of encapsulation, object relations with dimensions, originty relations with dimensions and time, categories of NOOPA such as high order thinking objects and low order thinking objects, a differentiation model for achieving the various requirements from the user, and a rotational model.", "title": "" }, { "docid": "628c8b906e3db854ea92c021bb274a61", "text": "Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing popularity of taxi requesting services such as Uber and Didi Chuxing (in China), we are able to collect large-scale taxi demand data continuously. How to utilize such big data to improve the demand prediction is an interesting and critical real-world problem. Traditional demand prediction methods mostly rely on time series forecasting techniques, which fail to model the complex non-linear spatial and temporal relations. 
Recent advances in deep learning have shown superior performance on traditionally challenging tasks such as image classification by learning the complex features and correlations from large-scale data. This breakthrough has inspired researchers to explore deep learning techniques on traffic prediction problems. However, existing methods on traffic prediction have only considered spatial relation (e.g., using CNN) or temporal relation (e.g., using LSTM) independently. We propose a Deep Multi-View Spatial-Temporal Network (DMVST-Net) framework to model both spatial and temporal relations. Specifically, our proposed model consists of three views: temporal view (modeling correlations between future demand values with near time points via LSTM), spatial view (modeling local spatial correlation via local CNN), and semantic view (modeling correlations among regions sharing similar temporal patterns). Experiments on large-scale real taxi demand data demonstrate effectiveness of our approach over state-of-the-art methods.", "title": "" }, { "docid": "4f44b685adc7e63f18a40d0f3fc25585", "text": "Computational Thinking (CT) has become popular in recent years and has been recognised as an essential skill for all, as members of the digital age. Many researchers have tried to define CT and have conducted studies about this topic. However, CT literature is at an early stage of maturity, and is far from either explaining what CT is, or how to teach and assess this skill. In the light of this state of affairs, the purpose of this study is to examine the purpose, target population, theoretical basis, definition, scope, type and employed research design of selected papers in the literature that have focused on computational thinking, and to provide a framework about the notion, scope and elements of CT. 
In order to reveal the literature and create the framework for computational thinking, an inductive qualitative content analysis was conducted on 125 papers about CT, selected according to pre-defined criteria from six different databases and digital libraries. According to the results, the main topics covered in the papers composed of activities (computerised or unplugged) that promote CT in the curriculum. The targeted population of the papers was mainly K-12. Gamed-based learning and constructivism were the main theories covered as the basis for CT papers. Most of the papers were written for academic conferences and mainly composed of personal views about CT. The study also identified the most commonly used words in the definitions and scope of CT, which in turn formed the framework of CT. The findings obtained in this study may not only be useful in the exploration of research topics in CT and the identification of CT in the literature, but also support those who need guidance for developing tasks or programs about computational thinking and informatics.", "title": "" }, { "docid": "14fac379b3d4fdfc0024883eba8431b3", "text": "PURPOSE\nTo summarize the literature addressing subthreshold or nondamaging retinal laser therapy (NRT) for central serous chorioretinopathy (CSCR) and to discuss results and trends that provoke further investigation.\n\n\nMETHODS\nAnalysis of current literature evaluating NRT with micropulse or continuous wave lasers for CSCR.\n\n\nRESULTS\nSixteen studies including 398 patients consisted of retrospective case series, prospective nonrandomized interventional case series, and prospective randomized clinical trials. All studies but one evaluated chronic CSCR, and laser parameters varied greatly between studies. Mean central macular thickness decreased, on average, by ∼80 μm by 3 months. Mean best-corrected visual acuity increased, on average, by about 9 letters by 3 months, and no study reported a decrease in acuity below presentation. 
No retinal complications were observed with the various forms of NRT used, but six patients in two studies with micropulse laser experienced pigmentary changes in the retinal pigment epithelium attributed to excessive laser settings.\n\n\nCONCLUSION\nBased on the current evidence, NRT demonstrates efficacy and safety in 12-month follow-up in patients with chronic and possibly acute CSCR. The NRT would benefit from better standardization of the laser settings and understanding of mechanisms of action, as well as further prospective randomized clinical trials.", "title": "" }, { "docid": "dfa1269878b384b24c7ba6aea6a11373", "text": "Transfer printing represents a set of techniques for deterministic assembly of micro-and nanomaterials into spatially organized, functional arrangements with two and three-dimensional layouts. Such processes provide versatile routes not only to test structures and vehicles for scientific studies but also to high-performance, heterogeneously integrated functional systems, including those in flexible electronics, three-dimensional and/or curvilinear optoelectronics, and bio-integrated sensing and therapeutic devices. This article summarizes recent advances in a variety of transfer printing techniques, ranging from the mechanics and materials aspects that govern their operation to engineering features of their use in systems with varying levels of complexity. A concluding section presents perspectives on opportunities for basic and applied research, and on emerging use of these methods in high throughput, industrial-scale manufacturing.", "title": "" }, { "docid": "8fc8f7e62cf9e9f89957b33c6e45063c", "text": "A controller for a quadratic buck converter is given using average current-mode control. The converter has two filters; thus, it will exhibit fourth-order characteristic dynamics. The proposed scheme employs an inner loop that uses the current of the first inductor. 
This current can also be used for overload protection; therefore, the full benefits of current-mode control are maintained. For the outer loop, a conventional controller which provides good regulation characteristics is used. The design-oriented analytic results allow the designer to easily pinpoint the control circuit parameters that optimize the converter's performance. Experimental results are given for a 28 W switching regulator where current-mode control and voltage-mode control are compared.", "title": "" }, { "docid": "ac2f02b46a885cf662c41a16f976819e", "text": "This paper presents a conceptual framework for security engineering, with a strong focus on security requirements elicitation and analysis. This conceptual framework establishes a clear-cut vocabulary and makes explicit the interrelations between the different concepts and notions used in security engineering. Further, we apply our conceptual framework to compare and evaluate current security requirements engineering approaches, such as the Common Criteria, Secure Tropos, SREP, MSRA, as well as methods based on UML and problem frames. We review these methods and assess them according to different criteria, such as the general approach and scope of the method, its validation, and quality assurance capabilities. Finally, we discuss how these methods are related to the conceptual framework and to one another.", "title": "" }, { "docid": "69d94a7beb7ed35cc9fdd9ea824a0096", "text": "We introduce an interactive method to assess cataracts in the human eye by crafting an optical solution that measures the perceptual impact of forward scattering on the foveal region. Current solutions rely on highly-trained clinicians to check the back scattering in the crystallin lens and test their predictions on visual acuity tests. Close-range parallax barriers create collimated beams of light to scan through sub-apertures, scattering light as it strikes a cataract. 
User feedback generates maps for opacity, attenuation, contrast and sub-aperture point-spread functions. The goal is to allow a general audience to operate a portable high-contrast light-field display to gain a meaningful understanding of their own visual conditions. User evaluations and validation with modified camera optics are performed. Compiled data is used to reconstruct the individual's cataract-affected view, offering a novel approach for capturing information for screening, diagnostic, and clinical analysis.", "title": "" }, { "docid": "d026ebfc24e3e48d0ddb373f71d63162", "text": "The claustrum has been proposed as a possible neural candidate for the coordination of conscious experience due to its extensive ‘connectome’. Herein we propose that the claustrum contributes to consciousness by supporting the temporal integration of cortical oscillations in response to multisensory input. A close link between conscious awareness and interval timing is suggested by models of consciousness and conjunctive changes in meta-awareness and timing in multiple contexts and conditions. Using the striatal beatfrequency model of interval timing as a framework, we propose that the claustrum integrates varying frequencies of neural oscillations in different sensory cortices into a coherent pattern that binds different and overlapping temporal percepts into a unitary conscious representation. The proposed coordination of the striatum and claustrum allows for time-based dimensions of multisensory integration and decision-making to be incorporated into consciousness.", "title": "" }, { "docid": "9b519ba8a3b32d7b5b8a117b2d4d06ca", "text": "This article reviews the most current practice guidelines in the diagnosis and management of patients born with cleft lip and/or palate. Such patients frequently have multiple medical and social issues that benefit greatly from a team approach. 
Common challenges include feeding difficulty, nutritional deficiency, speech disorders, hearing problems, ear disease, dental anomalies, and both social and developmental delays, among others. Interdisciplinary evaluation and collaboration throughout a patient's development are essential.", "title": "" }, { "docid": "67a8a8ef9111edd9c1fa88e7c59b6063", "text": "The process of obtaining intravenous (IV) access, Venipuncture, is an everyday invasive procedure in medical settings and there are more than one billion venipuncture related procedures like blood draws, peripheral catheter insertions, intravenous therapies, etc. performed per year [3]. Excessive venipunctures are both time and resource consuming events causing anxiety, pain and distress in patients, or can lead to severe harmful injuries [8]. The major problem faced by the doctors today is difficulty in accessing veins for intra-venous drug delivery & other medical situations [3]. There is a need to develop vein detection devices which can clearly show veins. This project deals with the design development of non-invasive subcutaneous vein detection system and is implemented based on near infrared imaging and interfaced to a laptop to make it portable. A customized CCD camera is used for capturing the vein images and Computer Software modules (MATLAB & LabVIEW) is used for the processing [3].", "title": "" }, { "docid": "3a4da0cf9f4fdcc1356d25ea1ca38ca4", "text": "Almost all of the existing work on Named Entity Recognition (NER) consists of the following pipeline stages – part-of-speech tagging, segmentation, and named entity type classification. The requirement of hand-labeled training data on these stages makes it very expensive to extend to different domains and entity classes. Even with a large amount of hand-labeled data, existing techniques for NER on informal text, such as social media, perform poorly due to a lack of reliable capitalization, irregular sentence structure and a wide range of vocabulary. 
In this paper, we address the lack of hand-labeled training data by taking advantage of weak supervision signals. We present our approach in two parts. First, we propose a novel generative model that combines the ideas from Hidden Markov Model (HMM) and n-gram language models into what we call an N-gram Language Markov Model (NLMM). Second, we utilize large-scale weak supervision signals from sources such as Wikipedia titles and the corresponding click counts to estimate parameters in NLMM. Our model is simple and can be implemented without the use of Expectation Maximization or other expensive iterative training techniques. Even with this simple model, our approach to NER on informal text outperforms existing systems trained on formal English and matches state-of-the-art NER systems trained on hand-labeled Twitter messages. Because our model does not require hand-labeled data, we can adapt our system to other domains and named entity classes very easily. We demonstrate the flexibility of our approach by successfully applying it to the different domain of extracting food dishes from restaurant reviews with very little extra work.", "title": "" }, { "docid": "704c62beaf6b9b09265c0daacde69abc", "text": "This paper investigates discrimination capabilities in the texture of fundus images to differentiate between pathological and healthy images. For this purpose, the performance of local binary patterns (LBP) as a texture descriptor for retinal images has been explored and compared with other descriptors such as LBP filtering and local phase quantization. The goal is to distinguish between diabetic retinopathy (DR), age-related macular degeneration (AMD), and normal fundus images analyzing the texture of the retina background and avoiding a previous lesion segmentation stage. 
Five experiments (separating DR from normal, AMD from normal, pathological from normal, DR from AMD, and the three different classes) were designed and validated with the proposed procedure obtaining promising results. For each experiment, several classifiers were tested. An average sensitivity and specificity higher than 0.86 in all the cases, and almost 1 and 0.99, respectively, for AMD detection, were achieved. These results suggest that the method presented in this paper is a robust algorithm for describing retina texture and can be useful in a diagnosis aid system for retinal disease screening.", "title": "" }, { "docid": "c39ab37765fbafdbc2dd3bf70c801d27", "text": "This paper presents the advantages in extending Classical Tensor Algebra (CTA), also known as Kronecker Algebra, to allow the definition of functions, i.e., functional dependencies among its operands. Such extended tensor algebra has been called Generalized Tensor Algebra (GTA). Stochastic Automata Networks (SAN) and Superposed Generalized Stochastic Petri Nets (SGSPN) formalisms use such Kronecker representations. We show that SAN, which uses GTA, has the same application scope as SGSPN, which uses CTA. We also show that any SAN model with functions has at least one equivalent representation without functions. In fact, the use of functions, and consequently the GTA, is not really a “need” since there is an equivalence of formalisms, but in some cases it represents, from a computational cost point of view, some irrefutable “advantages”.
Some modeling examples are presented in order to draw comparisons between the memory needs and CPU time for the generation and solution of the presented models.", "title": "" }, { "docid": "41b87466db128bee207dd157a9fef761", "text": "Systems that enforce memory safety for today’s operating system kernels and other system software do not account for the behavior of low-level software/hardware interactions such as memory-mapped I/O, MMU configuration, and context switching. Bugs in such low-level interactions can lead to violations of the memory safety guarantees provided by a safe execution environment and can lead to exploitable vulnerabilities in system software. In this work, we present a set of program analysis and run-time instrumentation techniques that ensure that errors in these low-level operations do not violate the assumptions made by a safety checking system. Our design introduces a small set of abstractions and interfaces for manipulating processor state, kernel stacks, memory-mapped I/O objects, MMU mappings, and self-modifying code to achieve this goal, without moving resource allocation and management decisions out of the kernel. We have added these techniques to a compiler-based virtual machine called Secure Virtual Architecture (SVA), to which the standard Linux kernel has been ported previously. Our design changes to SVA required only an additional 100 lines of code to be changed in this kernel. Our experimental results show that our techniques prevent reported memory safety violations due to low-level Linux operations and that these violations are not prevented by SVA without our techniques. Moreover, the new techniques in this paper introduce very little overhead over and above the existing overheads of SVA.
Taken together, these results indicate that it is clearly worthwhile to add these techniques to an existing memory safety system.", "title": "" }, { "docid": "d7242f26b3d7c0f71c09cc2e3914b728", "text": "In this paper, a new offline actor-critic learning algorithm is introduced: Sampled Policy Gradient (SPG). SPG samples in the action space to calculate an approximated policy gradient by using the critic to evaluate the samples. This sampling allows SPG to search the action-Q-value space more globally than deterministic policy gradient (DPG), enabling it to theoretically avoid more local optima. SPG is compared to Q-learning and the actor-critic algorithms CACLA and DPG in a pellet collection task and a self play environment in the game Agar.io. The online game Agar.io has become massively popular on the internet due to intuitive game design and the ability to instantly compete against players around the world. From the point of view of artificial intelligence this game is also very intriguing: The game has a continuous input and action space and allows to have diverse agents with complex strategies compete against each other. The experimental results show that Q-Learning and CACLA outperform a pre-programmed greedy bot in the pellet collection task, but all algorithms fail to outperform this bot in a fighting scenario. The SPG algorithm is analyzed to have great extendability through offline exploration and it matches DPG in performance even in its basic form without extensive sampling.", "title": "" }, { "docid": "4be28b696296ff779c7391b2f8d3b0c4", "text": "The rise of Digital B2B Marketing has presented us with new opportunities and challenges as compared to traditional e-commerce. B2B setup is different from B2C setup in many ways. Along with the contrasting buying entity (company vs. individual), there are dissimilarities in order size (few dollars in e-commerce vs. up to several thousands of dollars in B2B), buying cycle (few days in B2C vs. 
6–18 months in B2B) and most importantly a presence of multiple decision makers (individual or family vs. an entire company). Due to easy availability of the data and bargained complexities, most of the existing literature has been set in the B2C framework and there are not many examples in the B2B context. We present a unique approach to model next likely action of B2B customers by observing a sequence of digital actions. In this paper, we propose a unique two-step approach to model next likely action using a novel ensemble method that aims to predict the best digital asset to target customers as a next action. The paper provides a unique approach to translate the propensity model at an email address level into a segment that can target a group of email addresses. In the first step, we identify the high propensity customers for a given asset using traditional and advanced multinomial classification techniques and use non-negative least squares to stack rank different assets based on the output for ensemble model. In the second step, we perform a penalized regression to reduce the number of coefficients and obtain the satisfactory segment variables. Using real world digital marketing campaign data, we further show that the proposed method outperforms the traditional classification methods.", "title": "" }, { "docid": "af3a87d82c1f11a8a111ed4276020161", "text": "In many cases, neurons process information carried by the precise timings of spikes. Here we show how neurons can learn to generate specific temporally precise output spikes in response to input patterns of spikes having precise timings, thus processing and memorizing information that is entirely temporally coded, both as input and as output. We introduce two new supervised learning rules for spiking neurons with temporal coding of information (chronotrons), one that provides high memory capacity (E-learning), and one that has a higher biological plausibility (I-learning). 
With I-learning, the neuron learns to fire the target spike trains through synaptic changes that are proportional to the synaptic currents at the timings of real and target output spikes. We study these learning rules in computer simulations where we train integrate-and-fire neurons. Both learning rules allow neurons to fire at the desired timings, with sub-millisecond precision. We show how chronotrons can learn to classify their inputs, by firing identical, temporally precise spike trains for different inputs belonging to the same class. When the input is noisy, the classification also leads to noise reduction. We compute lower bounds for the memory capacity of chronotrons and explore the influence of various parameters on chronotrons' performance. The chronotrons can model neurons that encode information in the time of the first spike relative to the onset of salient stimuli or neurons in oscillatory networks that encode information in the phases of spikes relative to the background oscillation. Our results show that firing one spike per cycle optimizes memory capacity in neurons encoding information in the phase of firing relative to a background rhythm.", "title": "" } ]
scidocsrr
02ec7baec5a9136c14dd1e1aa8dde635
Congestion Avoidance with Incremental Filter Aggregation in Content-Based Routing Networks
[ { "docid": "7f7e7f7ddcbb4d98270c0ba50a3f7a25", "text": "Workflow management systems are traditionally centralized, creating a single point of failure and a scalability bottleneck. In collaboration with Cybermation, Inc., we have developed a content-based publish/subscribe platform, called PADRES, which is a distributed middleware platform with features inspired by the requirements of workflow management and business process execution. These features constitute original additions to publish/subscribe systems and include an expressive subscription language, composite subscription processing, a rule-based matching and routing mechanism, historic, query-based data access, and support for the decentralized execution of business processes specified in XML. PADRES constitutes the basis for the next generation of enterprise management systems developed by Cybermation, Inc., including business process automation, monitoring, and execution applications.", "title": "" } ]
[ { "docid": "840c42456a69d20deead9f8574f6ee14", "text": "Millimeter wave (mmWave) is a promising approach for the fifth generation cellular networks. It has a large available bandwidth and high gain antennas, which can offer interference isolation and overcome high frequency-dependent path loss. In this paper, we study the non-uniform heterogeneous mmWave network. Non-uniform heterogeneous networks are more realistic in practical scenarios than traditional independent homogeneous Poisson point process (PPP) models. We derive the signal-to-noise-plus-interference ratio (SINR) and rate coverage probabilities for a two-tier non-uniform millimeter-wave heterogeneous cellular network, where the macrocell base stations (MBSs) are deployed as a homogeneous PPP and the picocell base stations (PBSs) are modeled as a Poisson hole process (PHP), dependent on the MBSs. Using tools from stochastic geometry, we derive the analytical results for the SINR and rate coverage probabilities. The simulation results validate the analytical expressions. Furthermore, we find that there exists an optimum density of the PBS that achieves the best coverage probability and the change rule with different radii of the exclusion region. Finally, we show that, as expected, mmWave outperforms microwave cellular networks in terms of rate coverage probability for this system.", "title": "" }, { "docid": "d08c24228e43089824357342e0fa0843", "text": "This paper presents a new register assignment heuristic for procedures in SSA Form, whose interference graphs are chordal; the heuristic is called optimistic chordal coloring (OCC). Previous register assignment heuristics eliminate copy instructions via coalescing, in other words, merging nodes in the interference graph. Node merging, however, cannot preserve the chordal graph property, making it unappealing for SSA-based register allocation.
OCC is based on graph coloring, but does not employ coalescing, and, consequently, preserves graph chordality, and does not increase its chromatic number; in this sense, OCC is conservative as well as optimistic. OCC is observed to eliminate at least as many dynamically executed copy instructions as iterated register coalescing (IRC) for a set of chordal interference graphs generated from several Mediabench and MiBench applications. In many cases, OCC and IRC were able to find optimal or near-optimal solutions for these graphs. OCC ran 1.89x faster than IRC, on average.", "title": "" }, { "docid": "7df3fe3ffffaac2fb6137fdc440eb9f4", "text": "The amount of information in medical publications continues to increase at a tremendous rate. Systematic reviews help to process this growing body of information. They are fundamental tools for evidence-based medicine. In this paper, we show that automatic text classification can be useful in building systematic reviews for medical topics to speed up the reviewing process. We propose a per-question classification method that uses an ensemble of classifiers that exploit the particular protocol of a systematic review. We also show that when integrating the classifier in the human workflow of building a review, the per-question method is superior to the global method. We test several evaluation measures on a real dataset.", "title": "" }, { "docid": "c2f807e336be1b8d918d716c07668ae1", "text": "The present article proposes and describes a new ZCS non-isolated bidirectional buck-boost DC-DC converter for energy storage applications in electric vehicles. Usually, the conventional converters are adapted with an auxiliary resonant cell to provide the zero current switching turn-on/turn-off condition for the main switching devices. The advantages of the proposed converter are reduced switching losses, reduced component count and improved efficiency. The proposed converter operates either in boost or buck mode.
This paper mainly deals with the operating principles, analysis and design simulations of the proposed converter, in order to demonstrate its better soft-switching capability, reduced switching losses and improved efficiency compared with the conventional converter.", "title": "" }, { "docid": "cbc4fc5d233c55fcc065fcc64b0404d8", "text": "PURPOSE\nTo determine if noise damage in the organ of Corti is different in the low- and high-frequency regions of the cochlea.\n\n\nMATERIALS AND METHODS\nChinchillas were exposed for 2 to 432 days to a 0.5 (low-frequency) or 4 kHz (high-frequency) octave band of noise at 47 to 95 dB sound pressure level. Auditory thresholds were determined before, during, and after the noise exposure. The cochleas were examined microscopically as plastic-embedded flat preparations. Missing cells were counted, and the sequence of degeneration was determined as a function of recovery time (0-30 days).\n\n\nRESULTS\nWith high-frequency noise, primary damage began as small focal losses of outer hair cells in the 4-8 kHz region. With continued exposure, damage progressed to involve loss of an entire segment of the organ of Corti, along with adjacent myelinated nerve fibers. Much of the latter loss is secondary to the intermixing of cochlear fluids through the damaged reticular lamina. With low-frequency noise, primary damage appeared as outer hair cell loss scattered over a broad area in the apex. With continued exposure, additional apical outer hair cells degenerated, while supporting cells, inner hair cells, and nerve fibers remained intact. Continued exposure to low-frequency noise also resulted in focal lesions in the basal cochlea that were indistinguishable from those resulting from exposure to high-frequency noise.\n\n\nCONCLUSIONS\nThe patterns of cochlear damage and their relation to functional measures of hearing in noise-exposed chinchillas are similar to those seen in noise-exposed humans.
Thus, the chinchilla is an excellent model for studying noise effects, with the long-term goal of identifying ways to limit noise-induced hearing loss in humans.", "title": "" }, { "docid": "1dbaa72cd95c32d1894750357e300529", "text": "In recognizing the importance of educating aspiring scientists in the responsible conduct of research (RCR), the Office of Research Integrity (ORI) began sponsoring the creation of instructional resources to address this pressing need in 2002. The present guide on avoiding plagiarism and other inappropriate writing practices was created to help students, as well as professionals, identify and prevent such malpractices and to develop an awareness of ethical writing and authorship. This guide is one of the many products stemming from ORI’s effort to promote the RCR.", "title": "" }, { "docid": "e7a86eeb576d4aca3b5e98dc53fcb52d", "text": "Dictionary methods for cross-language information retrieval give performance below that for mono-lingual retrieval. Failure to translate multi-term phrases has been shown to be one of the factors responsible for the errors associated with dictionary methods. First, we study the importance of phrasal translation for this approach. Second, we explore the role of phrases in query expansion via local context analysis and local feedback and show how they can be used to significantly reduce the error associated with automatic dictionary translation.", "title": "" }, { "docid": "224cb33193938d5bfb8d604a86d3641a", "text": "We show how machine vision, learning, and planning can be combined to solve hierarchical consensus tasks. Hierarchical consensus tasks seek correct answers to a hierarchy of subtasks, where branching depends on answers at preceding levels of the hierarchy. We construct a set of hierarchical classification models that aggregate machine and human effort on different subtasks and use these inferences in planning.
Optimal solution of hierarchical tasks is intractable due to the branching of task hierarchy and the long horizon of these tasks. We study Monte Carlo planning procedures that can exploit task structure to constrain the policy space for tractability. We evaluate the procedures on data collected from Galaxy Zoo II in allocating human effort and show that significant gains can be achieved.", "title": "" }, { "docid": "21be75a852ab69d391d8d6f4ed911f46", "text": "We have been developing an exoskeleton robot (ExoRob) for assisting daily upper limb movements (i.e., shoulder, elbow and wrist). In this paper we have focused on the development of a 2DOF ExoRob to rehabilitate elbow joint flexion/extension and shoulder joint internal/external rotation, as a step toward the development of a complete (i.e., 3DOF) shoulder motion assisted exoskeleton robot. The proposed ExoRob is designed to be worn on the lateral side of the upper arm in order to provide naturalistic movements at the level of elbow (flexion/extension) and shoulder joint internal/external rotation. This paper also focuses on the modeling and control of the proposed ExoRob. A kinematic model of ExoRob has been developed based on modified Denavit-Hartenberg notations. In dynamic simulations of the proposed ExoRob, a novel nonlinear sliding mode control technique with exponential reaching law and computed torque control technique is employed, where trajectory tracking that corresponds to typical rehab (passive) exercises has been carried out to evaluate the effectiveness of the developed model and controller. Simulated results show that the controller is able to drive the ExoRob efficiently to track the desired trajectories, which in this case consisted in passive arm movements. Such movements are used in rehabilitation and could be performed very efficiently with the developed ExoRob and the controller. 
Experiments were carried out to validate the simulated results as well as to evaluate the performance of the controller.", "title": "" }, { "docid": "abf845c459ed415ac77ba91615d7b674", "text": "We study the online market for peer-to-peer (P2P) lending, in which individuals bid on unsecured microloans sought by other individual borrowers. Using a large sample of consummated and failed listings from the largest online P2P lending marketplace Prosper.com, we test whether social networks lead to better lending outcomes, focusing on the distinction between the structural and relational aspects of networks. While the structural aspects have limited to no significance, the relational aspects are consistently significant predictors of lending outcomes, with a striking gradation based on the verifiability and visibility of a borrower’s social capital. Stronger and more verifiable relational network measures are associated with a higher likelihood of a loan being funded, a lower risk of default, and lower interest rates. We discuss the implications of our findings for financial disintermediation and the design of decentralized electronic lending markets. This version: October 2009. Judging Borrowers By The Company They Keep: Social Networks and Adverse Selection in Online Peer-to-Peer Lending", "title": "" }, { "docid": "c20da8ccf60fbb753815d006627fa673", "text": "This paper presents LiteOS, a multi-threaded operating system that provides Unix-like abstractions for wireless sensor networks. Aiming to be an easy-to-use platform, LiteOS offers a number of novel features, including: (1) a hierarchical file system and a wireless shell interface for user interaction using UNIX-like commands; (2) kernel support for dynamic loading and native execution of multithreaded applications; and (3) online debugging, dynamic memory, and file system assisted communication stacks. LiteOS also supports software updates through a separation between the kernel and user applications, which are bridged through a suite of system calls. Besides the features that have been implemented, we also describe our perspective on LiteOS as an enabling platform.
We evaluate the platform experimentally by measuring the performance of common tasks, and demonstrate its programmability through twenty-one example applications.", "title": "" }, { "docid": "16d52c166a96c5d0d40479530cf52d2b", "text": "The dorsolateral prefrontal cortex (DLPFC) plays a crucial role in working memory. Notably, persistent activity in the DLPFC is often observed during the retention interval of delayed response tasks. The code carried by the persistent activity remains unclear, however. We critically evaluate how well recent findings from functional magnetic resonance imaging studies are compatible with current models of the role of the DLFPC in working memory. These new findings suggest that the DLPFC aids in the maintenance of information by directing attention to internal representations of sensory stimuli and motor plans that are stored in more posterior regions.", "title": "" }, { "docid": "1a13a0d13e0925e327c9b151b3e5b32d", "text": "The topic of this thesis is fraud detection in mobile communications networks by means of user profiling and classification techniques. The goal is to first identify relevant user groups based on call data and then to assign a user to a relevant group. Fraud may be defined as a dishonest or illegal use of services, with the intention to avoid service charges. Fraud detection is an important application, since network operators lose a relevant portion of their revenue to fraud. Whereas the intentions of the mobile phone users cannot be observed, it is assumed that the intentions are reflected in the call data. The call data is subsequently used in describing behavioral patterns of users. Neural networks and probabilistic models are employed in learning these usage patterns from call data. These models are used either to detect abrupt changes in established usage patterns or to recognize typical usage patterns of fraud. 
The methods are shown to be effective in detecting fraudulent behavior by empirically testing the methods with data from real mobile communications networks.", "title": "" }, { "docid": "24e0fb7247644ba6324de9c86fdfeb12", "text": "There has recently been a surge of work in explanatory artificial intelligence (XAI). This research area tackles the important problem that complex machines and algorithms often cannot provide insights into their behavior and thought processes. XAI allows users and parts of the internal system to be more transparent, providing explanations of their decisions in some level of detail. These explanations are important to ensure algorithmic fairness, identify potential bias/problems in the training data, and to ensure that the algorithms perform as expected. However, explanations produced by these systems are neither standardized nor systematically assessed. In an effort to create best practices and identify open challenges, we provide our definition of explainability and show how it can be used to classify existing literature. We discuss why current approaches to explanatory methods, especially for deep neural networks, are insufficient. Finally, based on our survey, we conclude with suggested future research directions for explanatory artificial intelligence.", "title": "" }, { "docid": "bfae60b46b97cf2491d6b1136c60f6a6", "text": "Educational data mining is concerned with developing methods for discovering knowledge from data that come from the educational domain. In this paper we used educational data mining to improve graduate students’ performance, and overcome the problem of low grades of graduate students.
In our case study we try to extract useful knowledge from graduate students’ data collected from the College of Science and Technology – Khanyounis. The data cover a fifteen-year period [1993-2007]. After preprocessing the data, we applied data mining techniques to discover association, classification, clustering and outlier detection rules. In each of these four tasks, we present the extracted knowledge and describe its importance in the educational domain.", "title": "" }, { "docid": "2e6af4ea3a375f67ce5df110a31aeb85", "text": "Controlled power system separation, which separates the transmission system into islands in a controlled manner, is considered the final resort against a blackout under severe disturbances, e.g., cascading events. Three critical problems of controlled separation are where and when to separate and what to do after separation, which are rarely studied together. They are addressed in this paper by a proposed unified controlled separation scheme based on synchrophasors. The scheme decouples the three problems by partitioning them into sub-problems handled strategically in three time stages: the Offline Analysis stage determines elementary generator groups, optimizes potential separation points in between, and designs post-separation control strategies; the Online Monitoring stage predicts separation boundaries by modal analysis on synchrophasor data; the Real-time Control stage calculates a synchrophasor-based separation risk index for each boundary to predict the time to perform separation.
The proposed scheme is demonstrated on a 179-bus power system by case studies.", "title": "" }, { "docid": "499e2c0a0170d5b447548f85d4a9f402", "text": "OBJECTIVE\nTo discuss the role of proprioception in motor control and in activation of the dynamic restraints for functional joint stability.\n\n\nDATA SOURCES\nInformation was drawn from an extensive MEDLINE search of the scientific literature conducted in the areas of proprioception, motor control, neuromuscular control, and mechanisms of functional joint stability for the years 1970-1999.\n\n\nDATA SYNTHESIS\nProprioception is conveyed to all levels of the central nervous system. It serves fundamental roles for optimal motor control and sensorimotor control over the dynamic restraints.\n\n\nCONCLUSIONS/APPLICATIONS\nAlthough controversy remains over the precise contributions of specific mechanoreceptors, proprioception as a whole is an essential component to controlling activation of the dynamic restraints and motor control. Enhanced muscle stiffness, of which muscle spindles are a crucial element, is argued to be an important characteristic for dynamic joint stability. Articular mechanoreceptors are attributed instrumental influence over gamma motor neuron activation, and therefore, serve to indirectly influence muscle stiffness. In addition, articular mechanoreceptors appear to influence higher motor center control over the dynamic restraints. Further research conducted in these areas will continue to assist in providing a scientific basis to the selection and development of clinical procedures.", "title": "" }, { "docid": "3024c0cd172eb2a3ec33e0383ac8ba18", "text": "The Android packaging model offers ample opportunities for malware writers to piggyback malicious code in popular apps, which can then be easily spread to a large user base. Although recent research has produced approaches and tools to identify piggybacked apps, the literature lacks a comprehensive investigation into such a phenomenon.
We fill this gap by: 1) systematically building a large set of piggybacked and benign app pairs, which we release to the community; 2) empirically studying the characteristics of malicious piggybacked apps in comparison with their benign counterparts; and 3) providing insights on piggybacking processes. Among several findings that provide insights which analysis techniques should build upon to improve the overall detection and classification accuracy of piggybacked apps, we show that piggybacking operations not only concern app code, but also extensively manipulate app resource files, largely contradicting common beliefs. We also find that piggybacking is done with little sophistication, in many cases automatically, and often via library code.", "title": "" }, { "docid": "b853f492667d4275295c0228566f4479", "text": "This study reports spore germination, early gametophyte development and change in the reproductive phase of Drynaria fortunei, a medicinal fern, in response to changes in pH and light spectra. Germination of D. fortunei spores occurred on a wide range of pH from 3.7 to 9.7. The highest germination (63.3%) occurred on ½ strength Murashige and Skoog basal medium supplemented with 2% sucrose at pH 7.7 under white light conditions. Among the different light spectra tested, red, far-red, blue, and white light resulted in 71.3, 42.3, 52.7, and 71.0% spore germination, respectively. There were no morphological differences among gametophytes grown under white and blue light. Elongated or filamentous but multiseriate gametophytes developed under red light, whereas under far-red light gametophytes grew as uniseriate filaments consisting of mostly elongated cells. Different light spectra influenced development of antheridia and archegonia in the gametophytes. Gametophytes gave rise to new gametophytes and developed antheridia and archegonia after they were transferred to culture flasks.
After these gametophytes were transferred to plastic tray cells with a potting mix of tree fern trunk fiber (TFTF mix) and peatmoss, the highest number of sporophytes was found. Sporophytes grown in pots developed rhizomes.", "title": "" }, { "docid": "f3cfd3e026c368146102185c31761fd2", "text": "In this paper, we summarize human emotion recognition using different sets of electroencephalogram (EEG) channels using discrete wavelet transform. An audio-visual induction based protocol has been designed with more dynamic emotional content for inducing discrete emotions (disgust, happy, surprise, fear and neutral). EEG signals are collected using 64 electrodes from 20 subjects and are placed over the entire scalp using the International 10-10 system. The raw EEG signals are preprocessed using the Surface Laplacian (SL) filtering method and decomposed into three different frequency bands (alpha, beta and gamma) using the Discrete Wavelet Transform (DWT). We have used the “db4” wavelet function for deriving a set of conventional and modified energy based features from the EEG signals for classifying emotions. Two simple pattern classification methods, K Nearest Neighbor (KNN) and Linear Discriminant Analysis (LDA), are used and their performances are compared for emotional state classification. The experimental results indicate that one of the proposed features (ALREE) gives the maximum average classification rate of 83.26% using KNN and 75.21% using LDA compared to those of conventional features. Finally, we present the average classification rate and the subsets of emotions classification rate of these two classifiers for justifying the performance of our emotion recognition system.", "title": "" } ]
scidocsrr
708a0d082f133d01b236fd86ff4c9732
CBCD: Cloned buggy code detector
[ { "docid": "3cd67f617b3a68844e9766d6f670f6ef", "text": "Software security vulnerabilities are discovered on an almost daily basis and have caused substantial damage. Aiming at supporting early detection and resolution for them, we have conducted an empirical study on thousands of vulnerabilities and found that many of them are recurring due to software reuse. Based on the knowledge gained from the study, we developed SecureSync, an automatic tool to detect recurring software vulnerabilities on the systems that reuse source code or libraries. The core of SecureSync includes two techniques to represent and compute the similarity of vulnerable code across different systems. The evaluation for 60 vulnerabilities on 176 releases of 119 open-source software systems shows that SecureSync is able to detect recurring vulnerabilities with high accuracy and to identify 90 releases having potentially vulnerable code that are not reported or fixed yet, even in mature systems. A couple of cases were actually confirmed by their developers.", "title": "" } ]
[ { "docid": "a0fa0ea42201d552e9d7c750d9e3450d", "text": "With the proliferation of computing and information technologies, we have an opportunity to envision a fully participatory democracy in the country through a fully digitized voting platform. However, the growing interconnectivity of systems and people across the globe, and the proliferation of cybersecurity issues pose a significant bottleneck towards achieving such a vision. In this paper, we discuss a vision to modernize our voting processes and discuss the challenges for creating a national e-voting framework that incorporates policies, standards and technological infrastructure that is secure, privacy-preserving, resilient and transparent. Through partnerships among private industry, academia, and State and Federal Government, technology must be the catalyst to develop a national platform for American voters. Along with integrating biometrics to authenticate each registered voter for transparency and accountability, the platform provides depth in the e-voting infrastructure with emerging blockchain technologies. We outline the way the voting process runs today and the challenges states face, from funding to software development concerns. Additionally, we highlight attacks from malware infiltrations in off-the-shelf products made in factories in countries such as China. This paper illustrates a strategic level of voting challenges and modernizing processes that will enhance the voter’s trust in American democracy.", "title": "" }, { "docid": "104cf54cfa4bc540b17176593cdb77d8", "text": "Nonlinear manifold learning from unorganized data points is a very challenging unsupervised learning and data visualization problem with a great variety of applications. In this paper we present a new algorithm for manifold learning and nonlinear dimension reduction.
Based on a set of unorganized data points sampled with noise from the manifold, we represent the local geometry of the manifold using tangent spaces learned by fitting an affine subspace in a neighborhood of each data point. Those tangent spaces are aligned to give the internal global coordinates of the data points with respect to the underlying manifold by way of a partial eigendecomposition of the neighborhood connection matrix. We present a careful error analysis of our algorithm and show that the reconstruction errors are of second-order accuracy. We illustrate our algorithm using curves and surfaces both in 2D/3D and higher dimensional Euclidean spaces, and 64-by-64 pixel face images with various pose and lighting conditions. We also address several theoretical and algorithmic issues for further research and improvements.", "title": "" }, { "docid": "9af928f8d620630cfd2938905adeb930", "text": "This paper describes the application of a pedagogical model called \\learning as a research activity\" [D. Gil-P erez and J. Carrascosa-Alis, Science Education 78 (1994) 301{315] to the design and implementation of a two-semester course on compiler design for Computer Engineering students. In the new model, the classical pattern of classroom activity based mainly on one-way knowledge transmission/reception of pre-elaborated concepts is replaced by an active working environment that resembles that of a group of novice researchers under the supervision of an expert. The new model, rooted in the now commonly-accepted constructivist postulates, strives for meaningful acquisition of fundamental concepts through problem solving |in close parallelism to the construction of scienti c knowledge through history.", "title": "" }, { "docid": "9180fe4fc7020bee9a52aa13de3adf54", "text": "A new Depth Image Layers Separation (DILS) algorithm for synthesizing inter-view images based on disparity depth map layers representation is presented. 
The approach is to separate the depth map into several layers identified through histogram-based clustering. Each layer is extracted using inter-view interpolation to create objects based on location and depth. DILS is a new paradigm not only in selecting interesting image locations based on depth, but also in producing new image representations that allow objects or parts of an image to be described without the need of segmentation and identification. The image view synthesis can reduce the configuration complexity of multi-camera arrays in 3D imagery and free-viewpoint applications. The simulation results show that depth layer separation is able to create inter-view images that may be integrated with other techniques such as occlusion handling processes. The DILS algorithm can be implemented using both simple and sophisticated stereo matching methods to synthesize inter-view images.", "title": "" }, { "docid": "8cfb150c71b310cf89bb5ded86ec7684", "text": "This article argues that technological innovation is transforming the flow of information, the fluidity of social action, and is giving birth to new forms of bottom up innovation that are capable of expanding and exploding old theories of reproduction and resistance because 'smart mobs', 'street knowledge', and 'social movements' cannot be neutralized by powerful structural forces in the same old ways. The purpose of this article is to develop the concept of YPAR 2.0 in which new technologies enable young people to visualize, validate, and transform social inequalities by using local knowledge in innovative ways that deepen civic engagement, democratize data, expand educational opportunity, inform policy, and mobilize community assets.
Specifically this article documents how digital technology (including a mobile, mapping and SMS platform called Streetwyze and paper-mapping tool Local Ground) - coupled with 'ground-truthing' - an approach in which community members work with researchers to collect and verify 'public' data - sparked a food revolution in East Oakland that led to an increase in young people's self-esteem, environmental stewardship, academic engagement, and positioned urban youth to become community leaders and community builders who are connected and committed to health and well-being of their neighborhoods. This article provides an overview of how the YPAR 2.0 Model was developed along with recommendations and implications for future research and collaborations between youth, teachers, neighborhood leaders, and youth serving organizations.", "title": "" }, { "docid": "5f8a5ea87859bf80cb630b0f3734d4cb", "text": "Existing Natural Language Generation (nlg) systems are weak AI systems and exhibit limited capabilities when language generation tasks demand higher levels of creativity, originality and brevity. Effective solutions or, at least evaluations of modern nlg paradigms for such creative tasks have been elusive, unfortunately. This paper introduces and addresses the task of coherent story generation from independent descriptions, describing a scene or an event. Towards this, we explore along two popular text-generation paradigms – (1) Statistical Machine Translation (smt), posing story generation as a translation problem and (2) Deep Learning, posing story generation as a sequence to sequence learning problem. In SMT, we chose two popular methods such as phrase based SMT (pb-SMT) and syntax based SMT (syntax-SMT) to ‘translate’ the incoherent input text into stories.
We then implement a deep recurrent neural network (rnn) architecture that encodes sequences of variable-length input descriptions to corresponding latent representations and decodes them to produce well-formed, comprehensive, story-like summaries. The efficacy of the suggested approaches is demonstrated on a publicly available dataset with the help of popular machine translation and summarization evaluation metrics. We believe a system like ours has different interesting applications, for example, creating news articles from phrases of event information.", "title": "" }
The rectenna is designed with the use of analytical models and closed-form analytical expressions. This allows for a fast design of the rectenna system. To acquire a small-area rectenna, a layered design is proposed. Measurements indicate the validity range of the analytical models.", "title": "" }, { "docid": "01fac1331705dcda8ce14b0145854294", "text": "This meta-analysis evaluated predictors of both objective and subjective sales performance. Biodata measures and sales ability inventories were good predictors of the ratings criterion, with corrected rs of .52 and .45, respectively. Potency (a subdimension of the Big 5 personality dimension Extraversion) predicted supervisor ratings of performance (r = .28) and objective measures of sales (r — .26). Achievement (a component of the Conscientiousness dimension) predicted ratings (r = .25) and objective sales (r = .41). General cognitive ability showed a correlation of .40 with ratings but only .04 with objective sales. Similarly, age predicted ratings (r = .26) but not objective sales (r = —.06). On the basis of a small number of studies, interest appears to be a promising predictor of sales success.", "title": "" }, { "docid": "cc3f821bd9617d31a8b303c4982e605f", "text": "Body composition in older adults can be assessed using simple, convenient but less precise anthropometric methods to assess (regional) body fat and skeletal muscle, or more elaborate, precise and costly methods such as computed tomography and magnetic resonance imaging. Body weight and body fat percentage generally increase with aging due to an accumulation of body fat and a decline in skeletal muscle mass. Body weight and fatness plateau at age 75–80 years, followed by a gradual decline. However, individual weight patterns may differ and the periods of weight loss and weight (re)gain common in old age may affect body composition. 
Body fat redistributes with aging, with decreasing subcutaneous and appendicular fat and increasing visceral and ectopic fat. Skeletal muscle mass declines with aging, a process called sarcopenia. Obesity in old age is associated with a higher risk of mobility limitations, disability and mortality. A higher waist circumference and more visceral fat increase these risks, independent of overall body fatness, as do involuntary weight loss and weight cycling. The role of low skeletal muscle mass in the development of mobility limitations and disability remains controversial, but it is much smaller than the role of high body fat. Low muscle mass does not seem to increase mortality risk in older adults.", "title": "" }, { "docid": "f21850cde63b844e95db5b9916db1c30", "text": "Foreign Exchange (Forex) market is a complex and challenging task for prediction due to uncertainty movement of exchange rate. However, these movements over timeframe also known as historical Forex data that offered a generic repeated trend patterns. This paper uses the features extracted from trend patterns to model and predict the next day trend. Hidden Markov Models (HMMs) is applied to learn the historical trend patterns, and use to predict the next day movement trends. We use the 2011 Forex historical data of Australian Dollar (AUS) and European Union Dollar (EUD) against the United State Dollar (USD) for modeling, and the 2012 and 2013 Forex historical data for validating the proposed model. The experimental results show outperforms prediction result for both years.", "title": "" }, { "docid": "cd31be485b4b914508a5a9e7c5445459", "text": "Deep learning has become increasingly popular in both academic and industrial areas in the past years. Various domains including pattern recognition, computer vision, and natural language processing have witnessed the great power of deep networks. 
However, current studies on deep learning mainly focus on data sets with balanced class labels, while its performance on imbalanced data is not well examined. Imbalanced data sets exist widely in the real world and they have been providing great challenges for classification tasks. In this paper, we focus on the problem of classification using deep networks on imbalanced data sets. Specifically, a novel loss function called mean false error together with its improved version mean squared false error are proposed for the training of deep networks on imbalanced data sets. The proposed method can effectively capture classification errors from both the majority class and the minority class equally. Experiments and comparisons demonstrate the superiority of the proposed approach compared with conventional methods in classifying imbalanced data sets on deep neural networks.", "title": "" }
From a technical viewpoint, we propose a marginalized graph convolutional network to corrupt network node content, allowing node content to interact with network features, and marginalize the corrupted features in a graph autoencoder context to learn graph feature representations. The learned features are fed into the spectral clustering algorithm for graph clustering. Experimental results on benchmark datasets demonstrate the superior performance of MGAE, compared to numerous baselines.", "title": "" }
Using the gestational diabetes drug glyburide as a model compound, it is shown that the microphysiological system is capable of reconstituting efflux transporter-mediated active transport function of the human placental barrier to limit fetal exposure to maternally administered drugs. The data provide evidence that the placenta-on-a-chip may serve as a new screening platform to enable more accurate prediction of drug transport in the human placenta.", "title": "" }, { "docid": "17a11a48d3ee024b8a606caf2c028986", "text": "For evaluating or training different kinds of vision algorithms, a large amount of precise and reliable data is needed. In this paper we present a system to create extended synthetic sequences of traffic environment scenarios, associated with several types of ground truth data. By integrating vehicle dynamics in a configuration tool, and by using path-tracing in an external rendering engine to render the scenes, a system is created that allows ongoing and flexible creation of highly realistic traffic images. For all images, ground truth data is provided for depth, optical flow, surface normals and semantic scene labeling. Sequences that are produced with this system are more varied and closer to natural images than other synthetic datasets before.", "title": "" }, { "docid": "db849661cd9f748b05183cb39e36383e", "text": "Generative adversarial networks (GANs) implicitly learn the probability distribution of a dataset and can draw samples from the distribution. This paper presents, Tabular GAN (TGAN), a generative adversarial network which can generate tabular data like medical or educational records. Using the power of deep neural networks, TGAN generates high-quality and fully synthetic tables while simultaneously generating discrete and continuous variables. 
When we evaluate our model on three datasets, we find that TGAN outperforms conventional statistical generative models in both capturing the correlation between columns and scaling up for large datasets.", "title": "" }, { "docid": "587f58f291732bfb8954e34564ba76fd", "text": "Blood pressure oscillometric waveforms behave as amplitude modulated nonlinear signals with frequency fluctuations. Their oscillating nature can be better analyzed by the digital Taylor-Fourier transform (DTFT), recently proposed for phasor estimation in oscillating power systems. Based on a relaxed signal model that includes Taylor components greater than zero, the DTFT is able to estimate not only the oscillation itself, as does the digital Fourier transform (DFT), but also its derivatives included in the signal model. In this paper, an oscillometric waveform is analyzed with the DTFT, and its zeroth and first oscillating harmonics are illustrated. The results show that the breathing activity can be separated from the cardiac one through the critical points of the first component, determined by the zero crossings of the amplitude derivatives estimated from the third Taylor order model. On the other hand, phase derivative estimates provide the fluctuations of the cardiac frequency and its derivative, new parameters that could improve the precision of the systolic and diastolic blood pressure assignment. The DTFT envelope estimates uniformly converge from K=3, substantially improving the harmonic separation of the DFT.", "title": "" }, { "docid": "a52d2a2c8fdff0bef64edc1a97b89c63", "text": "This paper provides a review of recent developments in speech recognition research. The concept of sources of knowledge is introduced and the use of knowledge to generate and verify hypotheses is discussed. The difficulties that arise in the construction of different types of speech recognition systems are discussed and the structure and performance of several such systems is presented. 
Aspects of component subsystems at the acoustic, phonetic, syntactic, and semantic levels are presented. System organizations that are required for effective interaction and use of various component subsystems in the presence of error and ambiguity are discussed.", "title": "" } ]
scidocsrr
b22ae6719a5f4426add3827a12eeef7b
Shallow and Deep Convolutional Networks for Saliency Prediction
[ { "docid": "a77eddf9436652d68093946fbe1d2ed0", "text": "The Pascal Visual Object Classes (VOC) challenge consists of two components: (i) a publicly available dataset of images together with ground truth annotation and standardised evaluation software; and (ii) an annual competition and workshop. There are five challenges: classification, detection, segmentation, action classification, and person layout. In this paper we provide a review of the challenge from 2008–2012. The paper is intended for two audiences: algorithm designers, researchers who want to see what the state of the art is, as measured by performance on the VOC datasets, along with the limitations and weak points of the current generation of algorithms; and, challenge designers, who want to see what we as organisers have learnt from the process and our recommendations for the organisation of future challenges. To analyse the performance of submitted algorithms on the VOC datasets we introduce a number of novel evaluation methods: a bootstrapping method for determining whether differences in the performance of two algorithms are significant or not; a normalised average precision so that performance can be compared across classes with different proportions of positive instances; a clustering method for visualising the performance across multiple algorithms so that the hard and easy images can be identified; and the use of a joint classifier over the submitted algorithms in order to measure their complementarity and combined performance. We also analyse the community’s progress through time using the methods of Hoiem et al. (Proceedings of European Conference on Computer Vision, 2012) to identify the types of occurring errors. 
We conclude the paper with an appraisal of the aspects of the challenge that worked well, and those that could be improved in future challenges.", "title": "" }, { "docid": "925d0a4b4b061816c540f2408ea593d1", "text": "It is believed that eye movements in free-viewing of natural scenes are directed by both bottom-up visual saliency and top-down visual factors. In this paper, we propose a novel computational framework to simultaneously learn these two types of visual features from raw image data using a multiresolution convolutional neural network (Mr-CNN) for predicting eye fixations. The Mr-CNN is directly trained from image regions centered on fixation and non-fixation locations over multiple resolutions, using raw image pixels as inputs and eye fixation attributes as labels. Diverse top-down visual features can be learned in higher layers. Meanwhile bottom-up visual saliency can also be inferred via combining information over multiple resolutions. Finally, optimal integration of bottom-up and top-down cues can be learned in the last logistic regression layer to predict eye fixations. The proposed approach achieves state-of-the-art results over four publically available benchmark datasets, demonstrating the superiority of our work.", "title": "" } ]
[ { "docid": "eca2bfe1b96489e155e19d02f65559d6", "text": "• Oracle experiment: to understand how well these attributes, when used together, can explain persuasiveness, we train 3 linear SVM regressors, one for each component type, to score an argument’s persuasiveness using gold attributes as features • Two human annotators who were both native speakers of English were first familiarized with the rubrics and definitions and then trained on five essays • 30 essays were doubly annotated for computing inter-annotator agreement • Each of the remaining essays was annotated by one of the annotators • Score/Class distributions by component type: Give me More Feedback: Annotating Argument Persuasiveness and Related Attributes in Student Essays", "title": "" }
We demonstrate the effectiveness of confidence calibration on a variety of tasks with significant normalized cross entropy increase and equal error rate reduction.", "title": "" }, { "docid": "5ea42460dc2bdd2ebc2037e35e01dca9", "text": "Mobile edge clouds (MECs) are small cloud-like infrastructures deployed in close proximity to users, allowing users to have seamless and low-latency access to cloud services. When users move across different locations, their service applications often need to be migrated to follow the user so that the benefit of MEC is maintained. In this paper, we propose a layered framework for migrating running applications that are encapsulated either in virtual machines (VMs) or containers. We evaluate the migration performance of various real applications under the proposed framework.", "title": "" }, { "docid": "ad8762ae878b7e731b11ab6d67f9867d", "text": "We describe a posterolateral transfibular neck approach to the proximal tibia. This approach was developed as an alternative to the anterolateral approach to the tibial plateau for the treatment of two fracture subtypes: depressed and split depressed fractures in which the comminution and depression are located in the posterior half of the lateral tibial condyle. These fractures have proved particularly difficult to reduce and adequately internally fix through an anterior or anterolateral approach. The approach described in this article exposes the posterolateral aspect of the tibial plateau between the posterior margin of the iliotibial band and the posterior cruciate ligament. The approach allows lateral buttressing of the lateral tibial plateau and may be combined with a simultaneous posteromedial and/or anteromedial approach to the tibial plateau. Critically, the proximal tibial soft tissue envelope and its blood supply are preserved. 
To date, we have used this approach either alone or in combination with a posteromedial approach for the successful reduction of tibial plateau fractures in eight patients. No complications related to this approach were documented, including no symptoms related to the common peroneal nerve, and all fractures and fibular neck osteotomies healed uneventfully.", "title": "" }, { "docid": "c940cfa3a74cce2aed59640975b4b80d", "text": "A novel ultra-wideband bandpass filter (BPF) is presented using a back-to-back microstrip-to-coplanar waveguide (CPW) transition employed as the broadband balun structure in this letter. The proposed BPF is based on the electromagnetic coupling between open-circuited microstrip line and short-circuited CPW. The equivalent circuit of half of the filter is used to calculate the input impedance. The broadband microstip-to-CPW transition is designed at the center frequency of 6.85 GHz. The simulated and measured results are shown in this letter.", "title": "" }, { "docid": "cd3fbe507e685b3f62ebd5e5243ddb0b", "text": "Changes in the background EEG activity occurring at the same time as visual and auditory evoked potentials, as well as during the interstimulus interval in a CNV paradigm were analysed in human subjects, using serial power measurements of overlapping EEG segments. The analysis was focused on the power of the rhythmic activity within the alpha band (RAAB power). A decrease in RAAB power occurring during these event-related phenomena was indicative of desynchronization. Phasic, i.e. short lasting, localised desynchronization was present during sensory stimulation, and also preceding the imperative signal and motor response (motor preactivation) in the CNV paradigm.", "title": "" }, { "docid": "614cc9968370bffb32cf70f44c8f8688", "text": "The abundance of event data in today’s information systems makes it possible to “confront” process models with the actual observed behavior. 
Process mining techniques use event logs to discover process models that describe the observed behavior, and to check conformance of process models by diagnosing deviations between models and reality. In many situations, it is desirable to mediate between a preexisting model and observed behavior. Hence, we would like to repair the model while improving the correspondence between model and log as much as possible. The approach presented in this article assigns predefined costs to repair actions (allowing inserting or skipping of activities). Given a maximum degree of change, we search for models that are optimal in terms of fitness—that is, the fraction of behavior in the log not possible according to the model is minimized. To compute fitness, we need to align the model and log, which can be time consuming. Hence, finding an optimal repair may be intractable. We propose different alternative approaches to speed up repair. The number of alignment computations can be reduced dramatically while still returning near-optimal repairs. The different approaches have been implemented using the process mining framework ProM and evaluated using real-life logs.", "title": "" }, { "docid": "0d0f6e946bd9125f87a78d8cf137ba97", "text": "Acute renal failure increases risk of death after cardiac surgery. However, it is not known whether more subtle changes in renal function might have an impact on outcome. Thus, the association between small serum creatinine changes after surgery and mortality, independent of other established perioperative risk indicators, was analyzed. In a prospective cohort study in 4118 patients who underwent cardiac and thoracic aortic surgery, the effect of changes in serum creatinine within 48 h postoperatively on 30-d mortality was analyzed. Cox regression was used to correct for various established demographic preoperative risk indicators, intraoperative parameters, and postoperative complications. 
In the 2441 patients in whom serum creatinine decreased, early mortality was 2.6% in contrast to 8.9% in patients with increased postoperative serum creatinine values. Patients with large decreases (ΔCrea < -0.3 mg/dl) showed a progressively increasing 30-d mortality (16 of 199 [8%]). Mortality was lowest (47 of 2195 [2.1%]) in patients in whom serum creatinine decreased to a maximum of -0.3 mg/dl; mortality increased to 6% in patients in whom serum creatinine remained unchanged or increased up to 0.5 mg/dl. Mortality (65 of 200 [32.5%]) was highest in patients in whom creatinine increased ≥0.5 mg/dl. For all groups, increases in mortality remained significant in multivariate analyses, including postoperative renal replacement therapy. After cardiac and thoracic aortic surgery, 30-d mortality was lowest in patients with a slight postoperative decrease in serum creatinine. Any even minimal increase or profound decrease of serum creatinine was associated with a substantial decrease in survival.", "title": "" }, { "docid": "717d1c31ac6766fcebb4ee04ca8aa40f", "text": "We present an incremental maintenance algorithm for leapfrog triejoin. The algorithm maintains rules in time proportional (modulo log factors) to the edit distance between leapfrog triejoin traces.", "title": "" }, { "docid": "cc57f21666ece3c6ba7c9a28228a44c1", "text": "The past few years have seen rapid advances in communication and information technology (C&IT), and the pervasion of the worldwide web into everyday life has important implications for education. Most medical schools provide extensive computer networks for their students, and these are increasingly becoming a central component of the learning and teaching environment. Such advances bring new opportunities and challenges to medical education, and are having an impact on the way that we teach and on the way that students learn, and on the very design and delivery of the curriculum.
The plethora of information available on the web is overwhelming, and both students and staff need to be taught how to manage it effectively. Medical schools must develop clear strategies to address the issues raised by these technologies. We describe how medical schools are rising to this challenge, look at some of the ways in which communication and information technology can be used to enhance the learning and teaching environment, and discuss the potential impact of future developments on medical education.", "title": "" }, { "docid": "15dc2cd497f782d16311cd0e658e2e90", "text": "We present a novel view of the structuring of distributed systems, and a few examples of its utilization in an object-oriented context. In a distributed system, the structure of a service or subsystem may be complex, being implemented as a set of communicating server objects; however, this complexity of structure should not be apparent to the client. In our proposal, a client must first acquire a local object, called a proxy, in order to use such a service. The proxy represents the whole set of servers. The client directs all its communication to the proxy. The proxy, and all the objects it represents, collectively form one distributed object, which is not decomposable by the client. Any higher-level communication protocols are internal to this distributed object. Such a view provides a powerful structuring framework for distributed systems; it can be implemented cheaply without sacrificing much flexibility. It subsumes many previous proposals, but encourages better information-hiding and encapsulation.", "title": "" }, { "docid": "2bd53f469a81d2c1ef17c239761a5758", "text": "This paper addresses the stability problem of a class of delayed neural networks with time-varying impulses. One important feature of the time-varying impulses is that both the stabilizing and destabilizing impulses are considered simultaneously.
Based on the comparison principle, the stability of delayed neural networks with time-varying impulses is investigated. Finally, the simulation results demonstrate the effectiveness of the results.", "title": "" }, { "docid": "01eadabcfbe9274c47d9ebcd45ea2332", "text": "The classical uncertainty principle provides a fundamental tradeoff in the localization of a signal in the time and frequency domains. In this paper we describe a similar tradeoff for signals defined on graphs. We describe the notions of “spread” in the graph and spectral domains, using the eigenvectors of the graph Laplacian as a surrogate Fourier basis. We then describe how to find signals that, among all signals with the same spectral spread, have the smallest graph spread about a given vertex. For every possible spectral spread, the desired signal is the solution to an eigenvalue problem. Since localization in graph and spectral domains is a desirable property of the elements of wavelet frames on graphs, we compare the performance of some existing wavelet transforms to the obtained bound.", "title": "" }, { "docid": "ebc57f065fa7f3206564ff14539b0707", "text": "Following the Daubert ruling in 1993, forensic evidence based on fingerprints was first challenged in the 1999 case of the U.S. versus Byron C. Mitchell and, subsequently, in 20 other cases involving fingerprint evidence. The main concern with the admissibility of fingerprint evidence is the problem of individualization, namely, that the fundamental premise for asserting the uniqueness of fingerprints has not been objectively tested and matching error rates are unknown. In order to assess the error rates, we require quantifying the variability of fingerprint features, namely, minutiae in the target population. 
A family of finite mixture models has been developed in this paper to represent the distribution of minutiae in fingerprint images, including minutiae clustering tendencies and dependencies in different regions of the fingerprint image domain. A mathematical model that computes the probability of a random correspondence (PRC) is derived based on the mixture models. A PRC of 2.25 × 10^-6 corresponding to 12 minutiae matches was computed for the NIST4 Special Database, when the numbers of query and template minutiae both equal 46. This is also the estimate of the PRC for a target population with a similar composition as that of NIST4.", "title": "" }, { "docid": "147719cdac405333d8f8c2b7558be472", "text": "OBJECTIVES\nBiliary injuries are frequently accompanied by vascular injuries, which may worsen the bile duct injury and cause liver ischemia. We performed an analytical review with the aim of defining vasculobiliary injury and setting out the important issues in this area.\n\n\nMETHODS\nA literature search of relevant terms was performed using OvidSP. Bibliographies of papers were also searched to obtain older literature.\n\n\nRESULTS\n Vasculobiliary injury was defined as: an injury to both a bile duct and a hepatic artery and/or portal vein; the bile duct injury may be caused by operative trauma, be ischaemic in origin or both, and may or may not be accompanied by various degrees of hepatic ischaemia. Right hepatic artery (RHA) vasculobiliary injury (VBI) is the most common variant. Injury to the RHA likely extends the biliary injury to a higher level than the gross observed mechanical injury. VBI results in slow hepatic infarction in about 10% of patients. Repair of the artery is rarely possible and the overall benefit unclear.
Injuries involving the portal vein or common or proper hepatic arteries are much less common, but have more serious effects including rapid infarction of the liver.\n\n\nCONCLUSIONS\nRoutine arteriography is recommended in patients with a biliary injury if early repair is contemplated. Consideration should be given to delaying repair of a biliary injury in patients with occlusion of the RHA. Patients with injuries to the portal vein or proper or common hepatic arteries should be emergently referred to tertiary care centers.", "title": "" }, { "docid": "c7fd5a26da59fab4e66e0cb3e93530d6", "text": "Switching audio amplifiers are widely used in H-bridge topology thanks to their high efficiency; however, poor audio performance in single-ended power stage topology is a strong weakness that has prevented their use in headset applications. This paper explains the importance of efficient error correction in single-ended Class-D audio amplifiers. A hysteresis control for Class-D amplifiers with a variable window is also presented. The analyses are verified by simulations and measurements. The proposed solution was fabricated in 0.13 µm CMOS technology with an active area of 0.2 mm2. It could be used in a single-ended output configuration fully compatible with common headset connectors. The proposed Class-D amplifier achieves a harmonic distortion of 0.01% and a power supply rejection of 70 dB with a quite low static current consumption.", "title": "" }, { "docid": "526707cbd0083267c4d84808aa206d8a", "text": "The research of probiotics for aquatic animals is increasing with the demand for environment-friendly aquaculture. The probiotics were defined as live microbial feed supplements that improve the health of man and terrestrial livestock. The gastrointestinal microbiota of fish and shellfish are peculiarly dependent on the external environment, due to the water flow passing through the digestive tract.
Most bacterial cells are transient in the gut, with continuous intrusion of microbes coming from water and food. Some commercial products are referred to as probiotics, though they were designed to treat the rearing medium, not to supplement the diet. This extension of the probiotic concept is pertinent when the administered microbes survive in the gastrointestinal tract. Otherwise, more general terms are suggested, like biocontrol when the treatment is antagonistic to pathogens, or bioremediation when water quality is improved. However, the first probiotics tested in fish were commercial preparations devised for land animals. Though some effects were observed with such preparations, the survival of these bacteria was uncertain in aquatic environment. Most attempts to propose probiotics have been undertaken by isolating and selecting strains from aquatic environment. These microbes were Vibrionaceae, pseudomonads, lactic acid bacteria, Bacillus spp. and yeasts. Three main characteristics have been searched in microbes as candidates to improve the health of their host. (1) The antagonism to pathogens was shown in vitro in most cases. (2) The colonization potential of some candidate probionts was also studied. (3) Challenge tests confirmed that some strains could increase the resistance to disease of their host. Many other beneficial effects may be expected from probiotics, e.g., competition with pathogens for nutrients or for adhesion sites, and stimulation of the immune system. The most promising prospects are sketched out, but considerable efforts of research will be necessary to develop the applications to aquaculture. © 1999 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "8d0c5de2054b7c6b4ef97a211febf1d0", "text": "This paper revisits the problem of optimal learning and decision-making when different misclassification errors incur different penalties.
We characterize precisely but intuitively when a cost matrix is reasonable, and we show how to avoid the mistake of defining a cost matrix that is economically incoherent. For the two-class case, we prove a theorem that shows how to change the proportion of negative examples in a training set in order to make optimal cost-sensitive classification decisions using a classifier learned by a standard non-cost-sensitive learning method. However, we then argue that changing the balance of negative and positive training examples has little effect on the classifiers produced by standard Bayesian and decision tree learning methods. Accordingly, the recommended way of applying one of these methods in a domain with differing misclassification costs is to learn a classifier from the training set as given, and then to compute optimal decisions explicitly using the probability estimates given by the classifier. 1. Making decisions based on a cost matrix. Given a specification of costs for correct and incorrect predictions, an example should be predicted to have the class that leads to the lowest expected cost, where the expectation is computed using the conditional probability of each class given the example. Mathematically, let the (i, j) entry in a cost matrix C be the cost of predicting class i when the true class is j. If i = j then the prediction is correct, while if i ≠ j the prediction is incorrect. The optimal prediction for an example x is the class i that minimizes L(x, i) = Σ_j P(j|x) C(i, j). (1) Costs are not necessarily monetary. A cost can also be a waste of time, or the severity of an illness, for example. For each i, L(x, i) is a sum over the alternative possibilities for the true class of x. In this framework, the role of a learning algorithm is to produce a classifier that for any example x can estimate the probability P(j|x) of each class j being the true class of x. For an example x, making the prediction i means acting as if i is the true class of x.
The essence of cost-sensitive decision-making is that it can be optimal to act as if one class is true even when some other class is more probable. For example, it can be rational not to approve a large credit card transaction even if the transaction is most likely legitimate. 1.1. Cost matrix properties. A cost matrix C always has the following structure when there are only two classes: C(0, 0) = c00 is the cost of predicting negative when the actual class is negative, C(0, 1) = c01 the cost of predicting negative when the actual class is positive, C(1, 0) = c10 the cost of predicting positive when the actual class is negative, and C(1, 1) = c11 the cost of predicting positive when the actual class is positive. Recent papers have followed the convention that cost matrix rows correspond to alternative predicted classes, while columns correspond to actual classes, i.e. row/column = i/j = predicted/actual. In our notation, the cost of a false positive is c10 while the cost of a false negative is c01. Conceptually, the cost of labeling an example incorrectly should always be greater than the cost of labeling it correctly. Mathematically, it should always be the case that c10 > c00 and c01 > c11. We call these conditions the “reasonableness” conditions. Suppose that the first reasonableness condition is violated, so c00 ≥ c10 but still c01 > c11. In this case the optimal policy is to label all examples positive. Similarly, if c10 > c00 but c11 ≥ c01 then it is optimal to label all examples negative. We leave the case where both reasonableness conditions are violated for the reader to analyze. Margineantu [2000] has pointed out that for some cost matrices, some class labels are never predicted by the optimal policy as given by Equation (1). We can state a simple, intuitive criterion for when this happens. Say that row m dominates row n in a cost matrix C if for all j, C(m, j) ≥ C(n, j). In this case the cost of predicting n is no greater than the cost of predicting m, regardless of what the true class j is. So it is optimal never to predict m. As a special case, the optimal prediction is always n if row n is dominated by all other rows in a cost matrix.
The two reasonableness conditions for a two-class cost matrix imply that neither row in the matrix dominates the other. Given a cost matrix, the decisions that are optimal are unchanged if each entry in the matrix is multiplied by a positive constant. This scaling corresponds to changing the unit of account for costs. Similarly, the decisions that are optimal are unchanged if a constant is added to each entry in the matrix. This shifting corresponds to changing the baseline away from which costs are measured. By scaling and shifting entries, any two-class cost matrix that satisfies the reasonableness conditions can be transformed into a simpler matrix that always leads to the same decisions:", "title": "" }, { "docid": "52e29410e4115f411407bcbd96a17ad0", "text": "Empirical methods in geoparsing have thus far lacked a standard evaluation framework describing the task, data and metrics used to establish state-of-the-art systems. Evaluation is further made inconsistent, even unrepresentative of real world usage, by the lack of distinction between the different types of toponyms, which necessitates new guidelines, a consolidation of metrics and a detailed toponym taxonomy with implications for Named Entity Recognition (NER). To address these deficiencies, our manuscript introduces such a framework in three parts. Part 1) Task Definition: clarified via corpus linguistic analysis proposing a fine-grained Pragmatic Taxonomy of Toponyms with new guidelines. Part 2) Evaluation Data: shared via a dataset called GeoWebNews to provide test/train data to enable immediate use of our contributions. In addition to fine-grained Geotagging and Toponym Resolution (Geocoding), this dataset is also suitable for prototyping machine learning NLP models. Part 3) Metrics: discussed and reviewed for a rigorous evaluation with appropriate recommendations for NER/Geoparsing practitioners.
We gratefully acknowledge the funding support of the Natural Environment Research Council (NERC) PhD Studentship (Milan Gritta NE/M009009/1), EPSRC (Nigel Collier EP/M005089/1) and MRC (Mohammad Taher Pilehvar MR/M025160/1 for PheneBank). We also acknowledge Cambridge University linguists Mina Frost and Qianchu (Flora) Liu for providing expertise and verification (IAA) during dataset construction/annotation. Milan Gritta E-mail: [email protected] Mohammad Taher Pilehvar E-mail: [email protected] Nigel Collier E-mail: [email protected] Language Technology Lab (LTL) Department of Theoretical and Applied Linguistics (DTAL) University of Cambridge, 9 West Road, Cambridge CB3 9DP", "title": "" }, { "docid": "98a820c806b392e18b38d075b91a4fe9", "text": "This paper presents a scalable method to efficiently search for the most likely state trajectory leading to an event given only a simulator of a system. Our approach uses a reinforcement learning formulation and solves it using Monte Carlo Tree Search (MCTS). The approach places very few requirements on the underlying system, requiring only that the simulator provide some basic controls, the ability to evaluate certain conditions, and a mechanism to control the stochasticity in the system. Access to the system state is not required, allowing the method to support systems with hidden state. The method is applied to stress test a prototype aircraft collision avoidance system to identify trajectories that are likely to lead to near mid-air collisions. We present results for both single and multi-threat encounters and discuss their relevance. Compared with direct Monte Carlo search, this MCTS method performs significantly better both in finding events and in maximizing their likelihood.", "title": "" } ]
scidocsrr
b0bf55e123a1d0efe1fd44d5b3c1e4e9
Oruta: Privacy-Preserving Public Auditing for Shared Data in the Cloud
[ { "docid": "70cc8c058105b905eebdf941ca2d3f2e", "text": "Cloud computing is an emerging computing paradigm in which resources of the computing infrastructure are provided as services over the Internet. As promising as it is, this paradigm also brings forth many new challenges for data security and access control when users outsource sensitive data for sharing on cloud servers, which are not within the same trusted domain as data owners. To keep sensitive user data confidential against untrusted servers, existing solutions usually apply cryptographic methods by disclosing data decryption keys only to authorized users. However, in doing so, these solutions inevitably introduce a heavy computation overhead on the data owner for key distribution and data management when fine-grained data access control is desired, and thus do not scale well. The problem of simultaneously achieving fine-grainedness, scalability, and data confidentiality of access control actually still remains unresolved. This paper addresses this challenging open issue by, on one hand, defining and enforcing access policies based on data attributes, and, on the other hand, allowing the data owner to delegate most of the computation tasks involved in fine-grained data access control to untrusted cloud servers without disclosing the underlying data contents. We achieve this goal by exploiting and uniquely combining techniques of attribute-based encryption (ABE), proxy re-encryption, and lazy re-encryption. Our proposed scheme also has salient properties of user access privilege confidentiality and user secret key accountability. Extensive analysis shows that our proposed scheme is highly efficient and provably secure under existing security models.", "title": "" } ]
[ { "docid": "8f78f2efdd2fecaf32fbc7f5ffa79218", "text": "Evolutionary population dynamics (EPD) deal with the removal of poor individuals in nature. It has been proven that this operator is able to improve the median fitness of the whole population, a very effective and cheap method for improving the performance of meta-heuristics. This paper proposes the use of EPD in the grey wolf optimizer (GWO). In fact, EPD removes the poor search agents of GWO and repositions them around alpha, beta, or delta wolves to enhance exploitation. The GWO is also required to randomly reinitialize its worst search agents around the search space by EPD to promote exploration. The proposed GWO–EPD algorithm is benchmarked on six unimodal and seven multi-modal test functions. The results are compared to the original GWO algorithm for verification. It is demonstrated that the proposed operator is able to significantly improve the performance of the GWO algorithm in terms of exploration, local optima avoidance, exploitation, local search, and convergence rate.", "title": "" }, { "docid": "8905bd760b0c72fbfe4fbabd778ff408", "text": "Boredom and low levels of task engagement while driving can pose road safety risks, e.g., inattention during low traffic, routine trips, or semi-automated driving. Digital technology interventions that increase task engagement, e.g., through performance feedback, increased challenge, and incentives (often referred to as ‘gamification’), could therefore offer safety benefits. To explore the impact of such interventions, we conducted experiments in a highfidelity driving simulator with thirty-two participants. In two counterbalanced conditions (control and intervention), we compared driving behaviour, physiological arousal, and subjective experience. Results indicate that the gamified boredom intervention reduced unsafe coping mechanisms such as speeding while promoting anticipatory driving. 
We can further infer that the intervention not only increased one’s attention and arousal during the intermittent gamification challenges, but that these intermittent stimuli may also help sustain one’s attention and arousal in between challenges and throughout a drive. At the same time, the gamified condition led to slower hazard reactions and short off-road glances. Our contributions deepen our understanding of driver boredom and pave the way for engaging interventions for safety critical tasks.", "title": "" }, { "docid": "d5d96493b34cfbdf135776e930ec5979", "text": "We propose an approach for the static analysis of probabilistic programs that sense, manipulate, and control based on uncertain data. Examples include programs used in risk analysis, medical decision making and cyber-physical systems. Correctness properties of such programs take the form of queries that seek the probabilities of assertions over program variables. We present a static analysis approach that provides guaranteed interval bounds on the values (assertion probabilities) of such queries. First, we observe that for probabilistic programs, it is possible to conclude facts about the behavior of the entire program by choosing a finite, adequate set of its paths. We provide strategies for choosing such a set of paths and verifying its adequacy. The queries are evaluated over each path by a combination of symbolic execution and probabilistic volume-bound computations. Each path yields interval bounds that can be summed up with a \"coverage\" bound to yield an interval that encloses the probability of assertion for the program as a whole. 
We demonstrate promising results on a suite of benchmarks from many different sources including robotic manipulators and medical decision making programs.", "title": "" }, { "docid": "90c3543eca7a689188725e610e106ce9", "text": "Lithium-based battery technology offers performance advantages over traditional battery technologies at the cost of increased monitoring and controls overhead. Multiple-cell Lead-Acid battery packs can be equalized by a controlled overcharge, eliminating the need to periodically adjust individual cells to match the rest of the pack. Lithium-based based batteries cannot be equalized by an overcharge, so alternative methods are required. This paper discusses several cell-balancing methodologies. Active cell balancing methods remove charge from one or more high cells and deliver the charge to one or more low cells. Dissipative techniques find the high cells in the pack, and remove excess energy through a resistive element until their charges match the low cells. This paper presents the theory of charge balancing techniques and the advantages and disadvantages of the presented methods. INTRODUCTION Lithium Ion and Lithium Polymer battery chemistries cannot be overcharged without damaging active materials [1-5]. The electrolyte breakdown voltage is precariously close to the fully charged terminal voltage, typically in the range of 4.1 to 4.3 volts/cell. Therefore, careful monitoring and controls must be implemented to avoid any single cell from experiencing an overvoltage due to excessive charging. Single lithium-based cells require monitoring so that cell voltage does not exceed predefined limits of the chemistry. Series connected lithium cells pose a more complex problem: each cell in the string must be monitored and controlled. Even though the pack voltage may appear to be within acceptable limits, one cell of the series string may be experiencing damaging voltage due to cell-to-cell imbalances. 
Traditionally, cell-to-cell imbalances in lead-acid batteries have been solved by controlled overcharging [6,7]. Lead-acid batteries can be brought into overcharge conditions without permanent cell damage, as the excess energy is released by gassing. This gassing mechanism is the natural method for balancing a series string of lead acid battery cells. Other chemistries, such as NiMH, exhibit similar natural cell-to-cell balancing mechanisms [8]. Because a Lithium battery cannot be overcharged, there is no natural mechanism for cell equalization. Therefore, an alternative method must be employed. This paper discusses three categories of cell balancing methodologies: charging methods, active methods, and passive methods. Cell balancing is necessary for highly transient lithium battery applications, especially those applications where charging occurs frequently, such as regenerative braking in electric vehicle (EV) or hybrid electric vehicle (HEV) applications. Regenerative braking can cause problems for Lithium Ion batteries because the instantaneous regenerative braking current inrush can cause battery voltage to increase suddenly, possibly over the electrolyte breakdown threshold voltage. Deviations in cell behaviors generally occur because of two phenomena: changes in internal impedance or cell capacity reduction due to aging. In either case, if one cell in a battery pack experiences deviant cell behavior, that cell becomes a likely candidate to overvoltage during high power charging events. Cells with reduced capacity or high internal impedance tend to have large voltage swings when charging and discharging. For HEV applications, it is necessary to cell balance lithium chemistry because of this overvoltage potential. For EV applications, cell balancing is desirable to obtain maximum usable capacity from the battery pack.
During charging, an out-of-balance cell may prematurely approach the end-of-charge voltage (typically 4.1 to 4.3 volts/cell) and trigger the charger to turn off. Cell balancing is useful to control the higher voltage cells until the rest of the cells can catch up. In this way, the charger is not turned off until the cells simultaneously reach the end-of-charge voltage. END-OF-CHARGE CELL BALANCING METHODS Typically, cell-balancing methods employed during and at end-of-charging are useful only for electric vehicle purposes. This is because electric vehicle batteries are generally fully charged between each use cycle. Hybrid electric vehicle batteries may or may not be maintained fully charged, resulting in unpredictable end-of-charge conditions to enact the balancing mechanism. Hybrid vehicle batteries also require both high power charge (regenerative braking) and discharge (launch assist or boost) capabilities. For this reason, their batteries are usually maintained at a SOC that can discharge the required power but still have enough headroom to accept the necessary regenerative power. To fully charge the HEV battery for cell balancing would diminish charge acceptance capability (regenerative braking). CHARGE SHUNTING The charge-shunting cell balancing method selectively shunts the charging current around each cell as they become fully charged (Figure 1). This method is most efficiently employed on systems with known charge rates. The shunt resistor R is sized to shunt exactly the charging current I when the fully charged cell voltage V is reached. If the charging current decreases, resistor R will discharge the shunted cell. To avoid extremely large power dissipations due to R, this method is best used with stepped-current chargers with a small end-of-charge current.", "title": "" }, { "docid": "e49f9ad79d3d4d31003c0cda7d7d49c5", "text": "Greater trochanter pain syndrome due to tendinopathy or bursitis is a common cause of hip pain. 
The previously reported magnetic resonance (MR) findings of trochanteric tendinopathy and bursitis are peritrochanteric fluid and abductor tendon abnormality. We have often noted peritrochanteric high T2 signal in patients without trochanteric symptoms. The purpose of this study was to determine whether the MR findings of peritrochanteric fluid or hip abductor tendon pathology correlate with trochanteric pain. We retrospectively reviewed 131 consecutive MR examinations of the pelvis (256 hips) for T2 peritrochanteric signal and abductor tendon abnormalities without knowledge of the clinical symptoms. Any T2 peritrochanteric abnormality was characterized by size as tiny, small, medium, or large; by morphology as feathery, crescentic, or round; and by location as bursal or intratendinous. The clinical symptoms of hip pain and trochanteric pain were compared to the MR findings on coronal, sagittal, and axial T2 sequences using chi-square or Fisher’s exact test with significance assigned as p < 0.05. Clinical symptoms of trochanteric pain syndrome were present in only 16 of the 256 hips. All 16 hips with trochanteric pain and 212 (88%) of 240 without trochanteric pain had peritrochanteric abnormalities (p = 0.15). Eighty-eight percent of hips with trochanteric symptoms had gluteus tendinopathy while 50% of those without symptoms had such findings (p = 0.004). Other than tendinopathy, there was no statistically significant difference between hips with or without trochanteric symptoms and the presence of peritrochanteric T2 abnormality, its size or shape, and the presence of gluteus medius or minimus partial thickness tears. Patients with trochanteric pain syndrome always have peritrochanteric T2 abnormalities and are significantly more likely to have abductor tendinopathy on magnetic resonance imaging (MRI). 
However, although the absence of peritrochanteric T2 MR abnormalities makes trochanteric pain syndrome unlikely, detection of these abnormalities on MRI is a poor predictor of trochanteric pain syndrome as these findings are present in a high percentage of patients without trochanteric pain.", "title": "" }, { "docid": "8aa305f217314d60ed6c9f66d20a7abf", "text": "The circadian timing system drives daily rhythmic changes in drug metabolism and controls rhythmic events in cell cycle, DNA repair, apoptosis, and angiogenesis in both normal tissue and cancer. Rodent and human studies have shown that the toxicity and anticancer activity of common cancer drugs can be significantly modified by the time of administration. Altered sleep/activity rhythms are common in cancer patients and can be disrupted even more when anticancer drugs are administered at their most toxic time. Disruption of the sleep/activity rhythm accelerates cancer growth. The complex circadian time-dependent connection between host, cancer and therapy is further impacted by other factors including gender, inter-individual differences and clock gene polymorphism and/or down regulation. It is important to take circadian timing into account at all stages of new drug development in an effort to optimize the therapeutic index for new cancer drugs. Better measures of the individual differences in circadian biology of host and cancer are required to further optimize the potential benefit of chronotherapy for each individual patient.", "title": "" }, { "docid": "9164dab8c4c55882f8caecc587c32eb1", "text": "We suggest an approach to exploratory analysis of diverse types of spatiotemporal data with the use of clustering and interactive visual displays. We can apply the same generic clustering algorithm to different types of data owing to the separation of the process of grouping objects from the process of computing distances between the objects. 
In particular, we apply the density-based clustering algorithm OPTICS to events (i.e. objects having spatial and temporal positions), trajectories of moving entities, and spatial distributions of events or moving entities in different time intervals. Distances are computed in a specific way for each type of objects; moreover, it may be useful to have several different distance functions for the same type of objects. Thus, multiple distance functions available for trajectories support different analysis tasks. We demonstrate the use of our approach by example of two datasets from the VAST Challenge 2008: evacuation traces (trajectories of moving entities) and landings and interdictions of migrant boats (events).", "title": "" }
The protection of information shared on the Internet, such as images and any other confidential information, is very significant. Nowadays, the objective of forensic image investigation tools and techniques is to reveal tampering strategies and restore confidence in the reliability of digital media. This paper investigates the challenges of detecting steganography in computer forensics. Open source tools were used to analyze these challenges. The experimental investigation focuses on using steganography applications that use the same algorithms to hide information exclusively within an image. The research findings denote that if a certain steganography tool A is used to hide some information within a picture, then tool B, which uses the same procedure, would not be able to recover the embedded image.", "title": "" }
Especially in robotic applications, environmental factors such as illumination variation may cause an FER system to extract features inaccurately. In this paper, we propose a robust facial feature point extraction method to recognize facial expressions in various lighting conditions. Before extracting facial features, a face is localized and segmented from a digitized image frame. The face preprocessing stage consists of face normalization and feature region localization steps to extract facial features efficiently. As regions of interest corresponding to relevant features are determined, Gabor jets are applied based on the Gabor wavelet transformation to extract the facial points. Gabor jets are more invariant and reliable than gray-level values, which suffer from ambiguity as well as illumination variation while representing local features. Each feature point can be matched by a phase-sensitive similarity function in the relevant regions of interest. Finally, the feature values are evaluated from the geometric displacement of facial points. After being tested using the AR face database and the database built in our lab, average facial expression recognition rates of 84.1% and 81.3% are obtained, respectively.", "title": "" }
Based on the predicted magnetic field distribution, the eddy-currents induced in the PMs are analytically obtained and the PM eddy-current losses considering eddy-current reaction field are calculated. The analytical expressions can be used for slotless PMSMs with any number of phases and any form of current and overlapping winding distribution. The effects of stator slotting are neglected and the current density distribution is modeled by equivalent current sheets located on the slot opening. To evaluate the efficacy of the proposed technique, the 2-D PM eddy-current losses for two slotless PMSMs are analytically calculated and compared with those obtained by 2-D finite-element analysis (FEA). The effects of the rotor rotational speed and the initial rotor mechanical angular position are investigated. The analytical results are in good agreement with those obtained by the 2-D FEA.", "title": "" }, { "docid": "136ed8dc00926ceec6d67b9ab35e8444", "text": "This paper addresses the property requirements of repair materials for high durability performance for concrete structure repair. It is proposed that the high tensile strain capacity of High Performance Fiber Reinforced Cementitious Composites (HPFRCC) makes such materials particularly suitable for repair applications, provided that the fresh properties are also adaptable to those required in placement techniques in typical repair applications. A specific version of HPFRCC, known as Engineered Cementitious Composites (ECC), is described. It is demonstrated that the fresh and hardened properties of ECC meet many of the requirements for durable repair performance. Recent experience in the use of this material in a bridge deck patch repair is highlighted. The origin of this article is a summary of a keynote lecture with the same title given at the Conference on Fiber Composites, High-Performance Concretes and Smart Materials, Chennai, India, Jan., 2004. 
It is only slightly updated here.", "title": "" }, { "docid": "d7eb92756c8c3fb0ab49d7b101d96343", "text": "Pretraining with language modeling and related unsupervised tasks has recently been shown to be a very effective enabling technology for the development of neural network models for language understanding tasks. In this work, we show that although language model-style pretraining is extremely effective at teaching models about language, it does not yield an ideal starting point for efficient transfer learning. By supplementing language model-style pretraining with further training on data-rich supervised tasks, we are able to achieve substantial additional performance improvements across the nine target tasks in the GLUE benchmark. We obtain an overall score of 76.9 on GLUE—a 2.3 point improvement over our baseline system adapted from Radford et al. (2018) and a 4.1 point improvement over Radford et al.’s reported score. We further use training data downsampling to show that the benefits of this supplementary training are even more pronounced in data-constrained regimes.", "title": "" }, { "docid": "0bf150f6cd566c31ec840a57d8d2fa55", "text": "Within the past few years, organizations in diverse industries have adopted MapReduce-based systems for large-scale data processing. Along with these new users, important new workloads have emerged which feature many small, short, and increasingly interactive jobs in addition to the large, long-running batch jobs for which MapReduce was originally designed. As interactive, large-scale query processing is a strength of the RDBMS community, it is important that lessons from that field be carried over and applied where possible in this new domain. However, these new workloads have not yet been described in the literature. 
We fill this gap with an empirical analysis of MapReduce traces from six separate business-critical deployments inside Facebook and at Cloudera customers in e-commerce, telecommunications, media, and retail. Our key contribution is a characterization of new MapReduce workloads which are driven in part by interactive analysis, and which make heavy use of query-like programming frameworks on top of MapReduce. These workloads display diverse behaviors which invalidate prior assumptions about MapReduce such as uniform data access, regular diurnal patterns, and prevalence of large jobs. A secondary contribution is a first step towards creating a TPC-like data processing benchmark for MapReduce.", "title": "" }
While SVR-based frameworks yield better results, LSTM-based frameworks are more natural for describing an action and can be used for improvement feedback.", "title": "" }
By forming an ensemble of modern deep CNNs, we obtain a FER2013 test accuracy of 75.2%, outperforming previous works without requiring auxiliary training data or face registration.", "title": "" }, { "docid": "a31692667282fe92f2eefc63cd562c9e", "text": "Multivariate data sets including hundreds of variables are increasingly common in many application areas. Most multivariate visualization techniques are unable to display such data effectively, and a common approach is to employ dimensionality reduction prior to visualization. Most existing dimensionality reduction systems focus on preserving one or a few significant structures in data. For many analysis tasks, however, several types of structures can be of high significance and the importance of a certain structure compared to the importance of another is often task-dependent. This paper introduces a system for dimensionality reduction by combining user-defined quality metrics using weight functions to preserve as many important structures as possible. The system aims at effective visualization and exploration of structures within large multivariate data sets and provides enhancement of diverse structures by supplying a range of automatic variable orderings. Furthermore it enables a quality-guided reduction of variables through an interactive display facilitating investigation of trade-offs between loss of structure and the number of variables to keep. The generality and interactivity of the system is demonstrated through a case scenario.", "title": "" } ]
scidocsrr
83ff51ddc5d8764e9fc199434ce90fa4
UTOPIAN: User-Driven Topic Modeling Based on Interactive Nonnegative Matrix Factorization
[ { "docid": "fce925493fc9f7cbbe4c202e5e625605", "text": "Topic models are a useful and ubiquitous tool for understanding large corpora. However, topic models are not perfect, and for many users in computational social science, digital humanities, and information studies—who are not machine learning experts—existing models and frameworks are often a “take it or leave it” proposition. This paper presents a mechanism for giving users a voice by encoding users’ feedback to topic models as correlations between words into a topic model. This framework, interactive topic modeling (itm), allows untrained users to encode their feedback easily and iteratively into the topic models. Because latency in interactive systems is crucial, we develop more efficient inference algorithms for tree-based topic models. We validate the framework both with simulated and real users.", "title": "" }, { "docid": "9e0f3f1ec7b54c5475a0448da45e4463", "text": "Significant effort has been devoted to designing clustering algorithms that are responsive to user feedback or that incorporate prior domain knowledge in the form of constraints. However, users desire more expressive forms of interaction to influence clustering outcomes. In our experiences working with diverse application scientists, we have identified an interaction style scatter/gather clustering that helps users iteratively restructure clustering results to meet their expectations. As the names indicate, scatter and gather are dual primitives that describe whether clusters in a current segmentation should be broken up further or, alternatively, brought back together. By combining scatter and gather operations in a single step, we support very expressive dynamic restructurings of data. Scatter/gather clustering is implemented using a nonlinear optimization framework that achieves both locality of clusters and satisfaction of user-supplied constraints. 
We illustrate the use of our scatter/gather clustering approach in a visual analytic application to study baffle shapes in the bat biosonar (ears and nose) system. We demonstrate how domain experts are adept at supplying scatter/gather constraints, and how our framework incorporates these constraints effectively without requiring numerous instance-level constraints.", "title": "" } ]
[ { "docid": "bdb4aba2b34731ffdf3989d6d1186270", "text": "In order to push the performance on realistic computer vision tasks, the number of classes in modern benchmark datasets has significantly increased in recent years. This increase in the number of classes comes along with increased ambiguity between the class labels, raising the question if top-1 error is the right performance measure. In this paper, we provide an extensive comparison and evaluation of established multiclass methods comparing their top-k performance both from a practical as well as from a theoretical perspective. Moreover, we introduce novel top-k loss functions as modifications of the softmax and the multiclass SVM losses and provide efficient optimization schemes for them. In the experiments, we compare on various datasets all of the proposed and established methods for top-k error optimization. An interesting insight of this paper is that the softmax loss yields competitive top-k performance for all k simultaneously. For a specific top-k error, our new top-k losses lead typically to further improvements while being faster to train than the softmax.", "title": "" }, { "docid": "a81b4f234f126589165994bb1b2d844f", "text": "Most social media commentary in the Arabic language space is made using unstructured non-grammatical slang Arabic language, presenting complex challenges for sentiment analysis and opinion extraction of online commentary and micro blogging data in this important domain. This paper provides a comprehensive analysis of the important research works in the field of Arabic sentiment analysis. An in-depth qualitative analysis of the various features of the research works is carried out and a summary of objective findings is presented. 
We used smoothness analysis to evaluate the percentage error in the performance scores reported in the studies from their linearly-projected values (smoothness), which is an estimate of the influence of the different approaches used by the authors on the performance scores obtained. To solve a bounding issue with the data as it was reported, we modified an existing logarithmic smoothing technique and applied it to pre-process the performance scores before the analysis. Our results from the analysis have been reported and interpreted for the various performance parameters: accuracy, precision, recall and F-score. Keywords—Arabic Sentiment Analysis; Qualitative Analysis; Quantitative Analysis; Smoothness Analysis", "title": "" }
Although surgical intervention is the preferred first-line treatment, postsurgical wound healing disturbances are frequently reported due to infection or other complications. Different treatment options for pilonidal cysts have been discussed in the literature; however, no standardised guideline for the postsurgical wound treatment is available. After surgery, a commonly recommended treatment for patients is rinsing the wound with clean water and dressing with a sterile compress. We present a case series of seven patients with wounds healing by secondary intention after surgical intervention of a pilonidal cyst. The average age of the patients was 40 years. Of the seven patients, three had developed a wound healing disturbance, one wound had started to develop a fibrin coating and three were in a good condition. The applied wound care regimens comprised appropriate mechanical or autolytic debridement, rinsing with an antimicrobial solution, haemoglobin application, and primary and secondary dressings. In all seven cases a complete wound closure was achieved within an average of 76 days, with six out of seven wounds achieving wound closure within 23-98 days. Aesthetic appearance was deemed excellent in five out of seven cases and acceptable in one. Treatment of one case with a sustained healing disturbance did result in wound closure but with a poor aesthetic outcome and an extensive cicatrisation of the new tissue. Based on these results, to avoid healing disturbances of wounds healing by secondary intention after surgical pilonidal cyst intervention, we recommend an adequate wound care regime comprising appropriate wound debridement, rinsing, topically applied haemoglobin and adequate wound dressing as early as possible after surgery.", "title": "" }
It successfully incorporates AOP with object-oriented programming as well as generic programming naturally in the framework of standard C++. It innovatively makes use of C++ templates to express pointcut expressions and match join points at compile time. It innovatively creates a full-fledged aspect weaver by using template metaprogramming techniques to perform aspect weaving. It is notable that AOP++ itself is written completely in standard C++, and requires no language extensions. With the help of AOP++, C++ programmers can facilitate AOP with only a little effort.", "title": "" }, { "docid": "9902a306ff4c633f30f6d9e56aa8335c", "text": "The bank director was pretty upset noticing Joe, the system administrator, spending his spare time playing Mastermind, an old useless game of the 70ies. He had fought the instinct of telling him how to better spend his life, just limiting to look at him in disgust long enough to be certain to be noticed. No wonder when the next day the director fell on his chair astonished while reading, on the newspaper, about a huge digital fraud on the ATMs of his bank, with millions of Euros stolen by a team of hackers all around the world. The article mentioned how the hackers had ‘played with the bank computers just like playing Mastermind’, being able to disclose thousands of user PINs during the one-hour lunch break. 
At that precise moment, a second before falling senseless, he understood the subtle smile on Joe’s face the day before, while training at his preferred game, Mastermind.", "title": "" }
The generated path is made up of at most five segments: at most two maximal-curvature cubic spiral segments with zero curvature at both ends in connection with up to three straight line segments. A numerically efficient process is presented to generate a Cartesian shortest path among the family of paths considered for a given pair of start and destination configurations. Our approach resorts to minimization via linear programming over the sum of the lengths of the path segments, for paths synthesized based on minimal locomotion cubic spirals linking start and destination orientations through a selected intermediate orientation. The potential intermediate configurations are not necessarily selected from the symmetric mean circle for non-parallel start and destination orientations. The novelty of the presented path generation method based on cubic spirals is: (i) Practical: the implementation is straightforward so that the generation of feasible paths in an environment free of obstacles is efficient in a few milliseconds; (ii) Flexible: it lends itself to various generalizations: readily applicable to mobile robots capable of forward and backward motion and Dubins’ car (i.e. car with only forward driving capability); well adapted to the incorporation of other constraints like wall-collision avoidance encountered in robot soccer games; straightforward extension to planning a path connecting an ordered sequence of target configurations in a simple obstructed environment. © 2005 Elsevier B.V. All rights reserved.", "title": "" }
Loneliness was assessed by means of the three-item UCLA Loneliness Scale. Social network characteristics were measured using the Berkman–Syme Social Network Index. Major depression in the previous 12 months was assessed with the Composite International Diagnostic Interview (CIDI). Logistic regression models were used to analyze the survey data. Feelings of loneliness were more prevalent in women, those who were younger (50–65), single, separated, divorced or widowed, living in a rural setting, with a lower frequency of social interactions and smaller social network, and with major depression. Among people feeling lonely, those with depression were more frequently married and had a small social network. Among those not feeling lonely, depression was associated with being previously married. In depressed people, feelings of loneliness were associated with having a small social network; while among those without depression, feelings of loneliness were associated with being married. The type and size of social networks have a role in the relationship between loneliness and depression. Increasing social interaction may be more beneficial than strategies based on improving maladaptive social cognition in loneliness to reduce the prevalence of depression among Spanish older adults.", "title": "" }, { "docid": "d84bd9aecd5e5a5b744bbdbffddfd65f", "text": "Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver. However, the index construction of these variables could result in their strong correlation, thus preventing rated characters from being plotted accurately. Phase 1 of this study tested the indices of the Godspeed questionnaire as measures of humanlike characters. The results indicate significant and strong correlations among the relevant indices (Bartneck, Kulić, Croft, & Zoghbi, 2009). 
Phase 2 of this study developed alternative indices with nonsignificant correlations (p > .05) between the proposed y-axis eeriness and x-axis perceived humanness (r = .02). The new humanness and eeriness indices facilitate plotting relations among rated characters of varying human likeness. 2010 Elsevier Ltd. All rights reserved. 1. Plotting emotional responses to humanlike characters Mori (1970) proposed a hypothetical graph describing a nonlinear relation between a character’s degree of human likeness and the emotional response of the human perceiver (Fig. 1). The graph predicts that more human-looking characters will be perceived as more agreeable up to a point at which they become so human people find their nonhuman imperfections unsettling (MacDorman, Green, Ho, & Koch, 2009; MacDorman & Ishiguro, 2006; Mori, 1970). This dip in appraisal marks the start of the uncanny valley (bukimi no tani in Japanese). As characters near complete human likeness, they rise out of the valley, and people once again feel at ease with them. In essence, a character’s imperfections expose a mismatch between the human qualities that are expected and the nonhuman qualities that instead follow, or vice versa. As an example of things that lie in the uncanny valley, Mori (1970) cites corpses, zombies, mannequins coming to life, and lifelike prosthetic hands. Assuming the uncanny valley exists, what dependent variable is appropriate to represent Mori’s graph? Mori referred to the y-axis as shinwakan, a neologism even in Japanese, which has been variously translated as familiarity, rapport, and comfort level. Bartneck, Kanda, Ishiguro, and Hagita (2009) have proposed using likeability to represent shinwakan, and they applied a likeability index to the evaluation of interactions with Ishiguro’s android double, the Geminoid HI-1. 
Likeability is virtually synonymous with interpersonal warmth (Asch, 1946; Fiske, Cuddy, & Glick, 2007; Rosenberg, Nelson, & Vivekananthan, 1968), which is also strongly correlated with other important measures, such as comfortability, communality, sociability, and positive (vs. negative) affect (Abele & Wojciszke, 2007; MacDorman, Ough, & Ho, 2007; Mehrabian & Russell, 1974; Sproull, Subramani, Kiesler, Walker, & Waters, 1996; Wojciszke, Abele, & Baryla, 2009). Warmth is the primary dimension of human social perception, accounting for 53% of the variance in perceptions of everyday social behaviors (Fiske, Cuddy, Glick, & Xu, 2002; Fiske et al., 2007; Wojciszke, Bazinska, & Jaworski, 1998). Despite the importance of warmth, this concept misses the essence of the uncanny valley. Mori (1970) refers to negative shinwakan as bukimi, which translates as eeriness. However, eeriness is not the negative anchor of warmth. A person can be cold and disagreeable without being eerie—at least not eerie in the way that an artificial human being is eerie. In addition, the set of negative emotions that predict eeriness (e.g., fear, anxiety, and disgust) are more specific than coldness (Ho, MacDorman, & Pramono, 2008). Thus, shinwakan and bukimi appear to constitute distinct dimensions. Although much has been written on potential benchmarks for anthropomorphic robots (for reviews see Kahn et al., 2007; MacDorman & Cowley, 2006; MacDorman & Kahn, 2007), no indices have been developed and empirically validated for measuring shinwakan or related concepts across a range of humanlike stimuli, such as computer-animated human characters and humanoid robots. The Godspeed questionnaire, compiled by Bartneck, Kulić, Croft, and Zoghbi (2009), includes at least two concepts, anthropomorphism and likeability, that could potentially serve as the xand y-axes of Mori’s graph (Bartneck, Kanda, et al., 2009). Although the 0747-5632/$ see front matter 2010 Elsevier Ltd. All rights reserved. 
doi:10.1016/j.chb.2010.05.015. Computers in Human Behavior 26 (2010) 1508–1518", "title": "" }, { "docid": "cbb6c80bc986b8b1e1ed3e70abb86a79", "text": "CD44 is a cell surface adhesion receptor that is highly expressed in many cancers and regulates metastasis via recruitment of CD44 to the cell surface. Its interaction with appropriate extracellular matrix ligands promotes the migration and invasion processes involved in metastases. It was originally identified as a receptor for hyaluronan or hyaluronic acid and later linked to several other ligands, including osteopontin (OPN), collagens, and matrix metalloproteinases. CD44 has also been identified as a marker for stem cells of several types. Besides standard CD44 (sCD44), variant (vCD44) isoforms of CD44 have been shown to be created by alternate splicing of the mRNA in several cancers. Addition of new exons into the extracellular domain near the transmembrane domain of sCD44 increases the tendency for expressing larger vCD44 isoforms. Expression of certain vCD44 isoforms was linked with progression and metastasis of cancer cells as well as patient prognosis. The expression of CD44 isoforms can be correlated with tumor subtypes and be a marker of cancer stem cells. CD44 cleavage, shedding, and elevated levels of soluble CD44 in the serum of patients are markers of tumor burden and metastasis in several cancers, including colon and gastric cancer. Recent observations have shown that the CD44 intracellular domain (CD44-ICD) is related to the metastatic potential of breast cancer cells. 
However, the underlying mechanisms need further elucidation.", "title": "" }, { "docid": "10b8223c9005bd5bdd2836d17541bbb1", "text": "This study explores the stability of attachment security and representations from infancy to early adulthood in a sample chosen originally for poverty and high risk for poor developmental outcomes. Participants for this study were 57 young adults who are part of an ongoing prospective study of development and adaptation in a high-risk sample. Attachment was assessed during infancy by using the Ainsworth Strange Situation (Ainsworth & Wittig) and at age 19 by using the Berkeley Adult Attachment Interview (George, Kaplan, & Main). Possible correlates of continuity and discontinuity in attachment were drawn from assessments of the participants and their mothers over the course of the study. Results provided no evidence for significant continuity between infant and adult attachment in this sample, with many participants transitioning to insecurity. The evidence, however, indicated that there might be lawful discontinuity. Analyses of correlates of continuity and discontinuity in attachment classification from infancy to adulthood indicated that the continuous and discontinuous groups were differentiated on the basis of child maltreatment, maternal depression, and family functioning in early adolescence. These results provide evidence that although attachment has been found to be stable over time in other samples, attachment representations are vulnerable to difficult and chaotic life experiences.", "title": "" }, { "docid": "f83228e2130f464b8c5b1837d338d7e1", "text": "This article is focused on examining the factors and relationships that influence the browsing and buying behavior of individuals when they shop online. Specifically, we are interested in individual buyers using business-to-consumer sites. 
We are also interested in examining shopping preferences based on various demographic categories that might exhibit distinct purchasing attitudes and behaviors for certain categories of products and services. We examine these behaviors in the context of both products and services. After a period of decline in recent months, online shopping is on the rise again. By some estimates, total U.S. spending on online sales increased to $5.7 billion in December 2001 from $3.2 billion in June of 2001 [3, 5]. By these same estimates, the number of households shopping online increased to 18.7 million in December 2001 from 13.1 million in June 2001. Consumers spent an average of $304 per person in December 2001, compared with $247 in June 2001. According to an analyst at Forrester: “The fact that online retail remained stable during ... such social and economic instability speaks volumes about how well eCommerce is positioned to stand up to a poor economy” [4]. What do consumers utilize the Internet for? Nie and Erbring suggest that 52% of the consumers use the Internet for product information, 42% for travel information, and 24% for buying [9]. Recent online consumer behavior-related research refers to any Internet-related activity associated with the consumption of goods, services, and information [6]. In the definition of Internet consumption, Goldsmith and Bridges include “gathering information passively via exposure to advertising; shopping, which includes both browsing and deliberate information search, and the selection and buying of specific goods, services, and information” [7]. For the purposes of this study, we focus on all aspects of this consumption. We include all of them because information gathering aspects of e-commerce serve to educate the consumer, which is ulti-", "title": "" }, { "docid": "b3d42332cd9572813bc08efc670d34d7", "text": "Context: The use of Systematic Literature Review (SLR) requires expertise and poses many challenges for novice researchers. 
The experiences of those who have used this research methodology can benefit novice researchers in effectively dealing with these challenges. Objective: The aim of this study is to record the reported experiences of conducting Systematic Literature Reviews, for the benefit of new researchers. Such a review will greatly benefit researchers wanting to conduct an SLR for the very first time. Method: We conducted a tertiary study to gather the experiences published by researchers. Studies that have used the SLR research methodology in software engineering and have implicitly or explicitly reported their experiences are included in this review. Results: Our research has revealed 116 studies relevant to the theme. The data has been extracted by two researchers working independently, with conflicts resolved after discussion with a third researcher. Findings from these studies highlight Search Strategy, Online Databases, Planning and Data Extraction as the most challenging phases of SLR. Lack of standard terminology in software engineering papers, poor quality of abstracts and problems with search engines are some of the most cited challenges. Conclusion: Further research and guidelines are required to facilitate novice researchers in conducting these phases properly.", "title": "" }, { "docid": "06b86a3d7f324fba7d95c358e0c38a8f", "text": "Anomaly detection in streaming data is of high interest in numerous application domains. In this paper, we propose a novel one-class semi-supervised algorithm to detect anomalies in streaming data. Underlying the algorithm is a fast and accurate density estimator implemented by multiple fully randomized space trees (RS-Trees), named RS-Forest. The piecewise constant density estimate of each RS-tree is defined on the tree node into which an instance falls. Each incoming instance in a data stream is scored by the density estimates averaged over all trees in the forest. 
Two strategies, statistical attribute range estimation of high probability guarantee and dual node profiles for rapid model update, are seamlessly integrated into RS-Forest to systematically address the ever-evolving nature of data streams. We derive the theoretical upper bound for the proposed algorithm and analyze its asymptotic properties via bias-variance decomposition. Empirical comparisons to the state-of-the-art methods on multiple benchmark datasets demonstrate that the proposed method features high detection rate, fast response, and insensitivity to most of the parameter settings. Algorithm implementations and datasets are available upon request.", "title": "" }, { "docid": "4a9b82729bc4658bf2e54c90f74ea1c8", "text": "To operate reliably in real-world traffic, an autonomous car must evaluate the consequences of its potential actions by anticipating the uncertain intentions of other traffic participants. This paper presents an integrated behavioral inference and decision-making approach that models vehicle behavior for both our vehicle and nearby vehicles as a discrete set of closed-loop policies that react to the actions of other agents. Each policy captures a distinct high-level behavior and intention, such as driving along a lane or turning at an intersection. We first employ Bayesian changepoint detection on the observed history of states of nearby cars to estimate the distribution over potential policies that each nearby car might be executing. We then sample policies from these distributions to obtain high-likelihood actions for each participating vehicle. Through closed-loop forward simulation of these samples, we can evaluate the outcomes of the interaction of our vehicle with other participants (e.g., a merging vehicle accelerates and we slow down to make room for it, or the vehicle in front of ours suddenly slows down and we decide to pass it). Based on those samples, our vehicle then executes the policy with the maximum expected reward value. 
Thus, our system is able to make decisions based on coupled interactions between cars in a tractable manner. This work extends our previous multipolicy system [11] by incorporating behavioral anticipation into decision-making to evaluate sampled potential vehicle interactions. We evaluate our approach using real-world traffic-tracking data from our autonomous vehicle platform, and present decision-making results in simulation involving highway traffic scenarios.", "title": "" }, { "docid": "328c1c6ed9e38a851c6e4fd3ab71c0f8", "text": "We present the MSP-IMPROV corpus, a multimodal emotional database, where the goal is to have control over lexical content and emotion while also promoting naturalness in the recordings. Studies on emotion perception often require stimuli with fixed lexical content, but that convey different emotions. These stimuli can also serve as an instrument to understand how emotion modulates speech at the phoneme level, in a manner that controls for coarticulation. Such audiovisual data are not easily available from natural recordings. A common solution is to record actors reading sentences that portray different emotions, which may not produce natural behaviors. We propose an alternative approach in which we define hypothetical scenarios for each sentence that are carefully designed to elicit a particular emotion. Two actors improvise these emotion-specific situations, leading them to utter contextualized, non-read renditions of sentences that have fixed lexical content and convey different emotions. We describe the context in which this corpus was recorded, the key features of the corpus, the areas in which this corpus can be useful, and the emotional content of the recordings. The paper also provides the performance for speech and facial emotion classifiers. 
The analysis brings novel classification evaluations where we study the performance in terms of inter-evaluator agreement and naturalness perception, leveraging the large size of the audiovisual database.", "title": "" }, { "docid": "e84ff3f37e049bd649a327366a4605f9", "text": "Once thought of as a technology restricted primarily to the scientific community, High-performance Computing (HPC) has now been established as an important value creation tool for the enterprises. Predominantly, the enterprise HPC is fueled by the needs for high-performance data analytics (HPDA) and large-scale machine learning – trades instrumental to business growth in today’s competitive markets. Cloud computing, characterized by the paradigm of on-demand network access to computational resources, has great potential of bringing HPC capabilities to a broader audience. Clouds employing traditional lossy network technologies, however, at large, have not proved to be sufficient for HPC applications. Both the traditional HPC workloads and HPDA require high predictability, large bandwidths, and low latencies, features which combined are not readily available using best-effort cloud networks. On the other hand, lossless interconnection networks commonly deployed in HPC systems, lack the flexibility needed for dynamic cloud environments. In this thesis, we identify and address research challenges that hinder the realization of an efficient HPC cloud computing platform, utilizing the InfiniBand interconnect as a demonstration technology. In particular, we address challenges related to efficient routing, load-balancing, low-overhead virtualization, performance isolation, and fast network reconfiguration, all to improve the utilization and flexibility of the underlying interconnect of an HPC cloud. 
In addition, we provide a framework to realize a self-adaptive network architecture for HPC clouds, offering dynamic and autonomic adaptation of the underlying interconnect according to varying traffic patterns, resource availability, workload distribution, and also in accordance with service provider defined policies. The work presented in this thesis helps bridge the performance gap between the cloud and traditional HPC infrastructures; the thesis provides practical solutions to enable an efficient, flexible, multi-tenant HPC network suitable for high-performance cloud computing.", "title": "" }, { "docid": "e043f20a60df6399c2f93d064d61e648", "text": "Recent research in recommender systems has shown that collaborative filtering algorithms are highly susceptible to attacks that insert biased profile data. Theoretical analyses and empirical experiments have shown that certain attacks can have a significant impact on the recommendations a system provides. 
These analyses have generally not taken into account the cost of mounting an attack or the degree of prerequisite knowledge for doing so. For example, effective attacks often require knowledge about the distribution of user ratings: the more such knowledge is required, the more expensive the attack to be mounted. In our research, we are examining a variety of attack models, aiming to establish the likely practical risks to collaborative systems. In this paper, we examine user-based collaborative filtering and some attack models that are successful against it, including a limited knowledge \"bandwagon\" attack that requires only that the attacker identify a small number of very popular items and a user-focused \"favorite item\" attack that is also effective against item-based algorithms.", "title": "" } ]
scidocsrr
7aa6ca63560cbb00fb545ad439475c9b
CAAD: Computer Architecture for Autonomous Driving
[ { "docid": "368a37e8247d8a6f446b31f1dc0f635e", "text": "In order to achieve autonomous operation of a vehicle in urban situations with unpredictable traffic, several realtime systems must interoperate, including environment perception, localization, planning, and control. In addition, a robust vehicle platform with appropriate sensors, computational hardware, networking, and software infrastructure is essential.", "title": "" }, { "docid": "ed9d6571634f30797fb338a928cc8361", "text": "In this paper, we study the challenging problem of tracking the trajectory of a moving object in a video with possibly very complex background. In contrast to most existing trackers which only learn the appearance of the tracked object online, we take a different approach, inspired by recent advances in deep learning architectures, by putting more emphasis on the (unsupervised) feature learning problem. Specifically, by using auxiliary natural images, we train a stacked denoising autoencoder offline to learn generic image features that are more robust against variations. This is then followed by knowledge transfer from offline training to the online tracking process. Online tracking involves a classification neural network which is constructed from the encoder part of the trained autoencoder as a feature extractor and an additional classification layer. Both the feature extractor and the classifier can be further tuned to adapt to appearance changes of the moving object. Comparison with the state-of-the-art trackers on some challenging benchmark video sequences shows that our deep learning tracker is more accurate while maintaining low computational cost with real-time performance when our MATLAB implementation of the tracker is used with a modest graphics processing unit (GPU).", "title": "" } ]
[ { "docid": "35da724255bbceb859d01ccaa0dec3b1", "text": "A linear differential equation with rational function coefficients has a Bessel type solution when it is solvable in terms of B_v(f), B_{v+1}(f). For second order equations, with rational function coefficients, f must be a rational function or the square root of a rational function. An algorithm was given by Debeerst, van Hoeij, and Koepf, that can compute Bessel type solutions if and only if f is a rational function. In this paper we extend this work to the square root case, resulting in a complete algorithm to find all Bessel type solutions.", "title": "" }, { "docid": "6195cf6b266d070cce5ff705daa84db7", "text": "The geographical properties of words have recently begun to be exploited for geolocating documents based solely on their text, often in the context of social media and online content. One common approach for geolocating texts is rooted in information retrieval. Given training documents labeled with latitude/longitude coordinates, a grid is overlaid on the Earth and pseudo-documents constructed by concatenating the documents within a given grid cell; then a location for a test document is chosen based on the most similar pseudo-document. Uniform grids are normally used, but they are sensitive to the dispersion of documents over the earth. We define an alternative grid construction using k-d trees that more robustly adapts to data, especially with larger training sets. We also provide a better way of choosing the locations for pseudo-documents. We evaluate these strategies on existing Wikipedia and Twitter corpora, as well as a new, larger Twitter corpus. The adaptive grid achieves competitive results with a uniform grid on small training sets and outperforms it on the large Twitter corpus. 
The two grid constructions can also be combined to produce consistently strong results across all training sets.", "title": "" }, { "docid": "133b2f033245dad2a2f35ff621741b2f", "text": "In wireless sensor networks (WSNs), long lifetime requirement of different applications and limited energy storage capability of sensor nodes has led us to find out new horizons for reducing power consumption upon nodes. To increase sensor node's lifetime, circuit and protocols have to be energy efficient so that they can make a priori reactions by estimating and predicting energy consumption. The goal of this study is to present and discuss several strategies such as power-aware protocols, cross-layer optimization, and harvesting technologies used to alleviate power consumption constraint in WSNs.", "title": "" }, { "docid": "e34ef27660f2e084d22863060b1c6ab1", "text": "Plants are widely used in many indigenous systems of medicine for therapeutic purposes and are increasingly becoming popular in modern society as alternatives to synthetic medicines. Bioactive principles are derived from the products of plant primary metabolites, which are associated with the process of photosynthesis. The present review highlighted the chemical diversity and medicinal potentials of bioactive principles as well inherent toxicity concerns associated with the use of these plant products, which are of relevance to the clinician, pharmacist or toxicologist. Plant materials are composed of vast array of bioactive principles of which their isolation, identification and characterization for analytical evaluation requires expertise with cutting edge analytical protocols and instrumentations. Bioactive principles are responsible for the therapeutic activities of medicinal plants and provide unlimited opportunities for new drug leads because of their unmatched availability and chemical diversity. 
For the most part, the beneficial or toxic outcomes of standardized plant extracts depend on the chemical peculiarities of the containing bioactive principles.", "title": "" }, { "docid": "8948409bbfe3e4d7a9384ef85383679e", "text": "The security of today's Web rests in part on the set of X.509 certificate authorities trusted by each user's browser. Users generally do not themselves configure their browser's root store but instead rely upon decisions made by the suppliers of either the browsers or the devices upon which they run. In this work we explore the nature and implications of these trust decisions for Android users. Drawing upon datasets collected by Netalyzr for Android and ICSI's Certificate Notary, we characterize the certificate root store population present in mobile devices in the wild. Motivated by concerns that bloated root stores increase the attack surface of mobile users, we report on the interplay of certificate sets deployed by the device manufacturers, mobile operators, and the Android OS. We identify certificates installed exclusively by apps on rooted devices, thus breaking the audited and supervised root store model, and also discover use of TLS interception via HTTPS proxies employed by a market research company.", "title": "" }, { "docid": "07f1caa5f4c0550e3223e587239c0a14", "text": "Due to the unavailable GPS signals in indoor environments, indoor localization has become an increasingly heated research topic in recent years. Researchers in robotics community have tried many approaches, but this is still an unsolved problem considering the balance between performance and cost. The widely deployed low-cost WiFi infrastructure provides a great opportunity for indoor localization. In this paper, we develop a system for WiFi signal strength-based indoor localization and implement two approaches. 
The first is an improved KNN algorithm-based fingerprint matching method, and the other is the Gaussian Process Regression (GPR) with Bayes Filter approach. We conduct experiments to compare the improved KNN algorithm with the classical KNN algorithm and evaluate the localization performance of the GPR with Bayes Filter approach. The experiment results show that the improved KNN algorithm enhances the fingerprint matching method compared with the classical KNN algorithm. In addition, the GPR with Bayes Filter approach can provide about 2m localization accuracy for our test environment.", "title": "" }, { "docid": "f9b6662dc19c47892bb7b95c5b7dc181", "text": "The ability to update firmware is a feature that is found in nearly all modern embedded systems. We demonstrate how this feature can be exploited to allow attackers to inject malicious firmware modifications into vulnerable embedded devices. We discuss techniques for exploiting such vulnerable functionality and the implementation of a proof of concept printer malware capable of network reconnaissance, data exfiltration and propagation to general purpose computers and other embedded device types. We present a case study of the HP-RFU (Remote Firmware Update) LaserJet printer firmware modification vulnerability, which allows arbitrary injection of malware into the printer’s firmware via standard printed documents. We show vulnerable population data gathered by continuously tracking all publicly accessible printers discovered through an exhaustive scan of IPv4 space. To show that firmware update signing is not the panacea of embedded defense, we present an analysis of known vulnerabilities found in third-party libraries in 373 LaserJet firmware images. Prior research has shown that the design flaws and vulnerabilities presented in this paper are found in other modern embedded systems. Thus, the exploitation techniques presented in this paper can be generalized to compromise other embedded systems. 
Keywords: Embedded system exploitation; Firmware modification attack; Embedded system rootkit; HP-RFU vulnerability.", "title": "" }, { "docid": "b9b0b6974353d4cad948b0681d8bf23b", "text": "We describe a novel approach to modeling idiosyncratic prosodic behavior for automatic speaker recognition. The approach computes various duration, pitch, and energy features for each estimated syllable in speech recognition output, quantizes the features, forms N-grams of the quantized values, and models normalized counts for each feature N-gram using support vector machines (SVMs). We refer to these features as “SNERF-grams” (N-grams of Syllable-based Nonuniform Extraction Region Features). Evaluation of SNERF-gram performance is conducted on two-party spontaneous English conversational telephone data from the Fisher corpus, using one conversation side in both training and testing. Results show that SNERF-grams provide significant performance gains when combined with a state-of-the-art baseline system, as well as with two highly successful long-range feature systems that capture word usage and lexically constrained duration patterns. Further experiments examine the relative contributions of features by quantization resolution, N-gram length, and feature type. Results show that the optimal number of bins depends on both feature type and N-gram length, but is roughly in the range of 5 to 10 bins. We find that longer N-grams are better than shorter ones, and that pitch features are most useful, followed by duration and energy features. The most important pitch features are those capturing pitch level, whereas the most important energy features reflect patterns of rising and falling. For duration features, nucleus duration is more important for speaker recognition than are durations from the onset or coda of a syllable. Overall, we find that SVM modeling of prosodic feature sequences yields valuable information for automatic speaker recognition. 
It also offers rich new opportunities for exploring how speakers differ from each other in voluntary but habitual ways.", "title": "" }, { "docid": "e2459b9991cfda1e81119e27927140c5", "text": "This research demo describes the implementation of a mobile AR-supported educational course application, AR Circuit, which is designed to promote the effectiveness of remote collaborative learning for physics. The application employs the TCP/IP protocol, enabling multiplayer functionality in a mobile AR environment. One phone acts as the server and the other acts as the client. The server phone will capture the video frames, process each frame, and send the current frame and the markers’ transformation matrices to the client phone.", "title": "" }, { "docid": "1349ee751afaddd06f81da2b92198537", "text": "Rapid changes in mobile cloud computing tremendously affect the telecommunication, education and healthcare industries, as well as business perspectives. Nowadays, advanced information and communication technology has enhanced the healthcare sector with improved medical services at reduced cost. However, issues related to security, privacy, quality of service, mobility and viability need to be solved before mobile cloud computing can be adopted in the healthcare industry. Mobile healthcare (mHealthcare) is one of the latest technologies in the healthcare industry, enabling industry players to collaborate with each other, especially in sharing patients’ medical reports and histories. MHealthcare offers real-time monitoring and provides rapid diagnosis of health conditions. A user’s context, such as location and identity, collected by active sensors, is an important element in mHealthcare. 
This paper conducts a study pertaining to mobile cloud healthcare, mobile healthcare and comparisons between the variety of applications and architecture developed/proposed by researchers.", "title": "" }, { "docid": "37913e0bfe44ab63c0c229c20b53c779", "text": "The authors present several versions of a general model, titled the E-Z Reader model, of eye movement control in reading. The major goal of the modeling is to relate cognitive processing (specifically aspects of lexical access) to eye movements in reading. The earliest and simplest versions of the model (E-Z Readers 1 and 2) merely attempt to explain the total time spent on a word before moving forward (the gaze duration) and the probability of fixating a word; later versions (E-Z Readers 3-5) also attempt to explain the durations of individual fixations on individual words and the number of fixations on individual words. The final version (E-Z Reader 5) appears to be psychologically plausible and gives a good account of many phenomena in reading. It is also a good tool for analyzing eye movement data in reading. Limitations of the model and directions for future research are also discussed.", "title": "" }, { "docid": "ebc7f54b969eb491afb7032f6c2a46b6", "text": "The Wi-Fi fingerprinting (WF) technique normally suffers from the RSS (Received Signal Strength) variance problem caused by environmental changes that are inherent in both the training and localization phases. Several calibration algorithms have been proposed but they only focus on the hardware variance problem. Moreover, smartphones were not evaluated and these are now widely used in WF systems. In this paper, we analyze various aspect of the RSS variance problem when using smartphones for WF: device type, device placement, user direction, and environmental changes over time. To overcome the RSS variance problem, we also propose a smartphone-based, indoor pedestrian-tracking system. 
The scheme uses the location where the maximum RSS is observed, which is preserved even though RSS varies significantly. We experimentally validate that the proposed system is robust to the RSS variance problem.", "title": "" }, { "docid": "a90802bd8cb132334999e6376053d5ef", "text": "We use single-agent and multi-agent Reinforcement Learning (RL) for learning dialogue policies in a resource allocation negotiation scenario. Two agents learn concurrently by interacting with each other without any need for simulated users (SUs) to train against or corpora to learn from. In particular, we compare the Qlearning, Policy Hill-Climbing (PHC) and Win or Learn Fast Policy Hill-Climbing (PHC-WoLF) algorithms, varying the scenario complexity (state space size), the number of training episodes, the learning rate, and the exploration rate. Our results show that generally Q-learning fails to converge whereas PHC and PHC-WoLF always converge and perform similarly. We also show that very high gradually decreasing exploration rates are required for convergence. We conclude that multiagent RL of dialogue policies is a promising alternative to using single-agent RL and SUs or learning directly from corpora.", "title": "" }, { "docid": "8f9bf08bb52e5c192512f7b43ed50ba7", "text": "Finding the sparse solution of an underdetermined system of linear equations (the so called sparse recovery problem) has been extensively studied in the last decade because of its applications in many different areas. So, there are now many sparse recovery algorithms (and program codes) available. However, most of these algorithms have been developed for real-valued systems. This paper discusses an approach for using available real-valued algorithms (or program codes) to solve complex-valued problems, too. The basic idea is to convert the complex-valued problem to an equivalent real-valued problem and solve this new real-valued problem using any real-valued sparse recovery algorithm. 
Theoretical guarantees for the success of this approach will be discussed, too. On the other hand, a widely used sparse recovery idea is finding the minimum ℓ1 norm solution. For real-valued systems, this idea requires solving a linear programming (LP) problem, but for complex-valued systems it requires solving a second-order cone programming (SOCP) problem, which imposes a higher computational load. However, based on the approach of this paper, the complex case can also be solved by linear programming, although the theoretical guarantee for finding the sparse solution is more limited.", "title": "" }, { "docid": "b50498964a73a59f54b3a213f2626935", "text": "To reduce the significant redundancy in deep Convolutional Neural Networks (CNNs), most existing methods prune neurons by only considering the statistics of an individual layer or two consecutive layers (e.g., prune one layer to minimize the reconstruction error of the next layer), ignoring the effect of error propagation in deep networks. In contrast, we argue that for a pruned network to retain its predictive power, it is essential to prune neurons in the entire neural network jointly based on a unified goal: minimizing the reconstruction error of important responses in the "final response layer" (FRL), which is the second-to-last layer before classification. Specifically, we apply feature ranking techniques to measure the importance of each neuron in the FRL, formulate network pruning as a binary integer optimization problem, and derive a closed-form solution to it for pruning neurons in earlier layers. Based on our theoretical analysis, we propose the Neuron Importance Score Propagation (NISP) algorithm to propagate the importance scores of final responses to every neuron in the network. The CNN is pruned by removing the neurons with the least importance, and it is then fine-tuned to recover its predictive power. 
NISP is evaluated on several datasets with multiple CNN models and demonstrated to achieve significant acceleration and compression with negligible accuracy loss.", "title": "" }, { "docid": "81f5905805f6faea108995cbe74a8435", "text": "In simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) studies, average reference (AR) and digitally linked mastoid (LM) are popular re-referencing techniques in event-related potential (ERP) analyses. However, they may introduce their own physiological signals and alter the EEG/ERP outcome. A reference electrode standardization technique (REST) that calculates a reference point at infinity was proposed to solve this problem. To confirm the advantage of REST in ERP analyses of synchronous EEG-fMRI studies, we compared the reference effect of AR, LM, and REST on task-related ERP results of a working memory task during an fMRI scan. As we hypothesized, we found that the adopted reference did not change the topography map of ERP components (N1 and P300 in the present study), but it did alter the task-related effect on ERP components. LM decreased or eliminated the visual working memory (VWM) load effect on P300, and the AR distorted the distribution of the VWM location-related effect at left posterior electrodes as shown in the statistical parametric scalp mapping (SPSM) of N1. ERP cortical source estimates, which are independent of the EEG reference choice, were used as the gold standard to infer the relative utility of different references on the ERP task-related effect. By comparison, the REST reference provided a more integrated and reasonable result. These results were further confirmed by the fMRI activations and a corresponding EEG-only study. 
Thus, we recommend the REST, especially with a realistic head model, as the optimal reference method for ERP data analysis in simultaneous EEG-fMRI studies.", "title": "" }, { "docid": "d1e43c347f708547aefa07b3c83ee428", "text": "Studies using Nomura et al.’s “Negative Attitude toward Robots Scale” (NARS) [1] as an attitudinal measure have featured robots that were perceived to be autonomous, independent agents. State-of-the-art telepresence robots require an explicit human-in-the-loop to drive the robot around. In this paper, we investigate whether NARS can be used with telepresence robots. To this end, we conducted three studies in which people watched videos of telepresence robots (n=70), operated telepresence robots (n=38), and interacted with telepresence robots (n=12). Overall, the results from our three studies indicated that NARS may be applied to telepresence robots, and culture, gender, and prior robot experience can be influential factors on the NARS score.", "title": "" }, { "docid": "6726479c1b8e5502552dfb8e4fdccb0d", "text": "Cluster ensembles generate a large number of different clustering solutions and combine them into a more robust and accurate consensus clustering. In forming the ensembles, the literature has suggested that higher diversity among ensemble members produces higher performance gain. In contrast, some studies also indicated that medium diversity leads to the best performing ensembles. Such contradictory observations suggest that different data, with varying characteristics, may require different treatments. We empirically investigate this issue by examining the behavior of cluster ensembles on benchmark data sets. This leads to a novel framework that selects ensemble members for each data set based on its own characteristics. Our framework first generates a diverse set of solutions and combines them into a consensus partition P*. 
Based on the diversity between the ensemble members and P*, a subset of ensemble members is selected and combined to obtain the final output. We evaluate the proposed method on benchmark data sets and the results show that the proposed method can significantly improve the clustering performance, often by a substantial margin. In some cases, we were able to produce final solutions that significantly outperform even the best ensemble members.", "title": "" }, { "docid": "99880fca88bef760741f48166a51ca6f", "text": "This paper describes the first results of using the Unified Medical Language System (UMLS) for distantly supervised relation extraction. UMLS is a large knowledge base which contains information about millions of medical concepts and relations between them. Our approach is evaluated using existing relation extraction data sets that contain relations that are similar to some of those in UMLS.", "title": "" }, { "docid": "0df1a15c02c29d9462356641fbe78b43", "text": "Localization is an essential research issue in wireless sensor networks (WSNs). Most localization schemes focus on static sensor networks. However, mobile sensors are required in some applications so that the sensed area can be enlarged. As such, a localization scheme designed for mobile sensor networks is necessary. In this paper, we propose a localization scheme to improve the localization accuracy of previous work. In this proposed scheme, the normal nodes without location information can estimate their own locations by gathering the positions of location-aware nodes (anchor nodes) and the one-hop normal nodes whose locations are estimated from the anchor nodes. In addition, we propose a scheme that predicts the moving direction of sensor nodes to increase localization accuracy. Simulation results show that the localization error of our proposed scheme is lower than that of previous schemes across various mobility models and moving speeds.", "title": "" } ] 
scidocsrr
0ac611db7f902244fabd8b175abad757
Deep Learning Strong Parts for Pedestrian Detection
[ { "docid": "ca20d27b1e6bfd1f827f967473d8bbdd", "text": "We propose a simple yet effective detector for pedestrian detection. The basic idea is to incorporate common sense and everyday knowledge into the design of simple and computationally efficient features. As pedestrians usually appear up-right in image or video data, the problem of pedestrian detection is considerably simpler than general purpose people detection. We therefore employ a statistical model of the up-right human body where the head, the upper body, and the lower body are treated as three distinct components. Our main contribution is to systematically design a pool of rectangular templates that are tailored to this shape model. As we incorporate different kinds of low-level measurements, the resulting multi-modal & multi-channel Haar-like features represent characteristic differences between parts of the human body yet are robust against variations in clothing or environmental settings. Our approach avoids exhaustive searches over all possible configurations of rectangle features and does not rely on random sampling. It thus marks a middle ground among recently published techniques and yields efficient low-dimensional yet highly discriminative features. Experimental results on the INRIA and Caltech pedestrian datasets show that our detector reaches state-of-the-art performance at low computational costs and that our features are robust against occlusions.", "title": "" }, { "docid": "6c7156d5613e1478daeb08eecb17c1e2", "text": "The idea behind the experiments in section 4.1 of the main paper is to demonstrate that, within a single framework, varying the features can replicate the jump in detection performance over a ten-year span (2004 to 2014), i.e. the jump in performance between VJ and the current state-of-the-art. See figure 1 for results on INRIA and Caltech-USA of the following methods (all based on SquaresChnFtrs, described in section 4 of the paper):", "title": "" } ]
[ { "docid": "9c61ac11d2804323ba44ed91d05a0e46", "text": "Nostalgia fulfills pivotal functions for individuals, but lacks an empirically derived and comprehensive definition. We examined lay conceptions of nostalgia using a prototype approach. In Study 1, participants generated open-ended features of nostalgia, which were coded into categories. In Study 2, participants rated the centrality of these categories, which were subsequently classified as central (e.g., memories, relationships, happiness) or peripheral (e.g., daydreaming, regret, loneliness). Central (as compared with peripheral) features were more often recalled and falsely recognized (Study 3), were classified more quickly (Study 4), were judged to reflect more nostalgia in a vignette (Study 5), better characterized participants' own nostalgic (vs. ordinary) experiences (Study 6), and prompted higher levels of actual nostalgia and its intrapersonal benefits when used to trigger a personal memory, regardless of age (Study 7). These findings highlight that lay people view nostalgia as a self-relevant and social blended emotional and cognitive state, featuring a mixture of happiness and loss. The findings also aid understanding of nostalgia's functions and identify new methods for future research.", "title": "" }, { "docid": "381ce2a247bfef93c67a3c3937a29b5a", "text": "Product reviews are now widely used by individuals and organizations for decision making (Litvin et al., 2008; Jansen, 2010). And because of the profits at stake, people have been known to try to game the system by writing fake reviews to promote target products. As a result, the task of deceptive review detection has been gaining increasing attention. In this paper, we propose a generative LDA-based topic modeling approach for fake review detection. 
Our model can aptly detect the subtle differences between deceptive reviews and truthful ones and achieves about 95% accuracy on review spam datasets, outperforming existing baselines by a large margin.", "title": "" }, { "docid": "69f853b90b837211e24155a2f55b9a95", "text": "We introduce a light-weight, power-efficient, and general-purpose convolutional neural network, ESPNetv2, for modeling visual and sequential data. Our network uses group point-wise and depth-wise dilated separable convolutions to learn representations from a large effective receptive field with fewer FLOPs and parameters. The performance of our network is evaluated on three different tasks: (1) object classification, (2) semantic segmentation, and (3) language modeling. Experiments on these tasks, including image classification on ImageNet and language modeling on the Penn Treebank dataset, demonstrate the superior performance of our method over the state-of-the-art methods. Our network has better generalization properties than ShuffleNetv2 when tested on the MSCOCO multi-object classification task and the Cityscapes urban scene semantic segmentation task. Our experiments show that ESPNetv2 is much more power efficient than existing state-of-the-art efficient methods including ShuffleNets and MobileNets. Our code is open-source and available at https://github.com/sacmehta/ESPNetv2.", "title": "" }, { "docid": "6ff681e22778abaf3b79f054fa5a1f30", "text": "Computer-generated battlefield agents need to be able to explain the rationales for their actions. Such explanations make it easier to validate agent behavior, and can enhance the effectiveness of the agents as training devices. This paper describes an explanation capability called Debrief that enables agents implemented in Soar to describe and justify their decisions. Debrief determines the motivation for decisions by recalling the context in which decisions were made, and determining what factors were critical to those decisions. 
In the process, Debrief learns to recognize similar situations where the same decision would be made for the same reasons. Debrief is currently being used by the TacAir-Soar tactical air agent to explain its actions, and is being evaluated for incorporation into other reactive planning agents.", "title": "" }, { "docid": "64a98c3bc9aebfc470ad689b66b6d86b", "text": "In his famous thought experiments on synthetic vehicles, Valentino Braitenberg stipulated that simple stimulus-response reactions in an organism could evoke the appearance of complex behavior, which, to the unsuspecting human observer, may even appear to be driven by emotions such as fear, aggression, and even love (Braitenberg, Vehikel. Experimente mit künstlichen Wesen, Lit Verlag, 2004). In fact, humans appear to have a strong propensity to anthropomorphize, driven by our inherent desire for predictability that will quickly lead us to discern patterns, cause-and-effect relationships, and yes, emotions, in animated entities, be they natural or artificial. But might there be reasons that we should intentionally “implement” emotions into artificial entities, such as robots? How would we proceed in creating robot emotions? And what, if any, are the ethical implications of creating “emotional” robots? The following article aims to shed some light on these questions with a multi-disciplinary review of recent empirical investigations into the various facets of emotions in robot psychology.", "title": "" }, { "docid": "78d33d767f9eb15ef79a6d016ffcfb3a", "text": "Healthcare scientific applications, such as body area networks, require deploying hundreds of interconnected sensors to monitor the health status of a host. One of the biggest challenges is the streaming data collected by all those sensors, which needs to be processed in real time. Follow-up data analysis would normally involve moving the collected big data to a cloud data center for status reporting and record tracking purposes. 
Therefore, an efficient cloud platform with very elastic scaling capacity is needed to support such real-time streaming data applications. Current cloud platforms either lack such a module to process streaming data, or scale only at the level of coarse-grained compute nodes. In this paper, we propose a task-level adaptive MapReduce framework. This framework extends the generic MapReduce architecture by designing each Map and Reduce task as a consistent running loop daemon. The beauty of this new framework is that the scaling capability is designed at the Map and Reduce task level, rather than at the compute-node level. This strategy is capable of not only scaling up and down in real time, but also of making effective use of compute resources in the cloud data center. As a first step towards implementing this framework in a real cloud, we developed a simulator that captures workload strength, and provisions just the number of Map and Reduce tasks needed, in real time. To further enhance the framework, we applied two streaming data workload prediction methods, smoothing and Kalman filtering, to estimate the unknown workload characteristics. We see a 63.1% performance improvement by using the Kalman filter method to predict the workload. We also use a real streaming data workload trace to test the framework. Experimental results show that this framework schedules the Map and Reduce tasks very efficiently as the streaming data changes its arrival rate.", "title": "" }, { "docid": "268087f94c1d5183fe8bdf6360280fab", "text": "Big Data is a new term used to identify datasets that we cannot manage with current methodologies or data mining software tools due to their large size and complexity. 
Big Data mining is the capability of extracting useful information from these large datasets or streams of data. New mining techniques are necessary due to the volume, variability, and velocity of such data. MOA is a software framework with classification, regression, and frequent pattern methods, and the new APACHE SAMOA is distributed streaming software for mining data streams.", "title": "" }, { "docid": "c72a2e504934580f9542a62b7037cdd4", "text": "Software defect prediction is one of the most active research areas in software engineering. We can build a prediction model with defect data collected from a software project and predict defects in the same project, i.e. within-project defect prediction (WPDP). Researchers also proposed cross-project defect prediction (CPDP) to predict defects for new projects lacking defect data by using prediction models built from other projects. In recent studies, CPDP has been shown to be feasible. However, CPDP requires projects that have the same metric set, meaning the metric sets should be identical between projects. As a result, current techniques for CPDP are difficult to apply across projects with heterogeneous metric sets. To address this limitation, we propose heterogeneous defect prediction (HDP) to predict defects across projects with heterogeneous metric sets. Our HDP approach conducts metric selection and metric matching to build a prediction model between projects with heterogeneous metric sets. Our empirical study on 28 subjects shows that about 68% of predictions using our approach outperform or are comparable to WPDP with statistical significance.", "title": "" }, { "docid": "90aeccd6d6f94c668ed6cf5d3cc11298", "text": "We develop a computational model for binocular stereopsis, attempting to explain the process by which the information detailing the 3-D geometry of object surfaces is encoded in a pair of stereo images. 
We design our model within a Bayesian framework, making explicit all of our assumptions about the nature of image coding and the structure of the world. We start by deriving our model for image formation, introducing a definition of half-occluded regions and deriving simple equations relating these regions to the disparity function. We show that the disparity function alone contains enough information to determine the half-occluded regions. We use these relations to derive a model for image formation in which the half-occluded regions are explicitly represented and computed. Next, we present our prior model in a series of three stages, or “worlds,” where each world adds a complication to the prior. We eventually argue that the prior model must be constructed from all of the local quantities in the scene geometry, i.e., depth, surface orientation, object boundaries, and surface creases. In addition, we present a new dynamic programming strategy for estimating these quantities. Throughout the article, we provide motivation for the development of our model by psychophysical examinations of the human visual system.", "title": "" }, { "docid": "8dbe7ed9d801c7c39d583de6ebef9908", "text": "We propose a novel approach for content-based color image classification using a Support Vector Machine (SVM). Traditional classification approaches perform poorly on content-based image classification tasks, one reason being the high dimensionality of the feature space. In this paper, color image classification is done on features extracted from histograms of color components. The benefits of using color image histograms are better efficiency and insensitivity to small changes in camera view-point, i.e., translation and rotation. 
As a case study for validation purposes, experimental trials on a database of about 500 images divided into four different classes are reported, comparing histogram features for the RGB, CMYK, Lab, YUV, YCbCr, HSV, HVC, and YIQ color spaces. Results of the proposed approach are encouraging in terms of color image classification accuracy.", "title": "" }, { "docid": "6a68cf6f5503c5253b6035a11888ca15", "text": "A method is developed that processes Global Navigation Satellite System (GNSS) beat carrier phase measurements from a single moving antenna in order to determine whether the GNSS signals are being spoofed. This technique allows a specially equipped GNSS receiver to detect sophisticated spoofing that cannot be detected using receiver autonomous integrity monitoring techniques. It works for both encrypted military signals and for unencrypted civilian signals. It does not require changes to the signal structure of unencrypted civilian GNSS signals. The method uses a short segment of beat carrier-phase time histories that are collected while the receiver's single antenna is undergoing a known, high-frequency motion profile, typically one pre-programmed into an antenna articulation system. The antenna can also be moving in an unknown way at lower frequencies, as might be the case if it were mounted on a ground vehicle, a ship, an airplane, or a spacecraft. The spoofing detection algorithm correlates high-pass-filtered versions of the known motion component with high-pass-filtered versions of the carrier phase variations. True signals produce a specific correlation pattern, and spoofed signals produce a recognizably different correlation pattern if the spoofer transmits its false signals from a single antenna. The most pronounced difference is that non-spoofed signals display variations between the beat carrier phase responses of multiple signals, but all signals' responses are identical in the spoofed case. 
These differing correlation characteristics are used to develop a hypothesis test in order to detect a spoofing attack or the lack thereof. For moving-base receivers, there is no need for prior knowledge of the vehicle's attitude. Instead, the detection calculations also provide a rough attitude measurement. Several versions of this spoofing detection system have been designed and tested. Some have been tested only with truth-model data, but one has been tested with actual live-signal data from the Global Positioning System (GPS) C/A code on the L1 frequency. The live-data tests correctly identified spoofing attacks in the 4 of 8 trials that had actual attacks. These detections used worst-case false-alarm probabilities of 10, and their worst-case probabilities of missed detection were no greater than 1.6x10. The ranges of antenna motion used to detect spoofing in these trials were between 4 and 6 cm, i.e., on the order of a quarter-cycle of the GPS L1 carrier wavelength.", "title": "" }, { "docid": "2df35b05a40a646ba6f826503955601a", "text": "This paper describes a new prototype system for detecting the demeanor of patients in emergency situations using the Intel RealSense camera system [1]. It describes how machine learning, a support vector machine (SVM), and the RealSense facial detection system can be used to track patient demeanor for pain monitoring. In a lab setting, the application has been trained to detect four different intensities of pain and to provide demeanor information about the patient's eyes, mouth, and agitation state. Its utility as a basis for evaluating the condition of patients in situations using video, machine learning and 5G technology is discussed.", "title": "" }, { "docid": "b191b9829aac1c1e74022c33e2488bbd", "text": "We investigated the normal and parallel ground reaction forces during downhill and uphill running. Our rationale was that these force data would aid in the understanding of hill running injuries and energetics. 
Based on a simple spring-mass model, we hypothesized that the normal force peaks, both impact and active, would increase during downhill running and decrease during uphill running. We anticipated that the parallel braking force peaks would increase during downhill running and the parallel propulsive force peaks would increase during uphill running. However, we could not predict the magnitude of these changes. Five male and five female subjects ran at 3 m/s on a force treadmill mounted on the level and on 3, 6, and 9 degree wedges. During downhill running, normal impact force peaks and parallel braking force peaks were larger compared to the level. At -9 degrees, the normal impact force peaks increased by 54%, and the parallel braking force peaks increased by 73%. During uphill running, normal impact force peaks were smaller and parallel propulsive force peaks were larger compared to the level. At +9 degrees, normal impact force peaks were absent, and parallel propulsive peaks increased by 75%. Neither downhill nor uphill running affected normal active force peaks. Combined with previous biomechanics studies, our normal impact force data suggest that downhill running substantially increases the probability of overuse running injury. Our parallel force data provide insight into past energetic studies, which show that the metabolic cost increases during downhill running at steep angles.", "title": "" }, { "docid": "477769b83e70f1d46062518b1d692664", "text": "Deep Neural Networks (DNNs) have been demonstrated to perform exceptionally well on most recognition tasks such as image classification and segmentation. However, they have also been shown to be vulnerable to adversarial examples. 
This phenomenon has recently attracted a lot of attention, but it has not been extensively studied on multiple, large-scale datasets and complex tasks such as semantic segmentation, which often require more specialised networks with additional components such as CRFs, dilated convolutions, skip-connections and multiscale processing. In this paper, we present what to our knowledge is the first rigorous evaluation of adversarial attacks on modern semantic segmentation models, using two large-scale datasets. We analyse the effect of different network architectures, model capacity and multiscale processing, and show that many observations made on the task of classification do not always transfer to this more complex task. Furthermore, we show how mean-field inference in deep structured models and multiscale processing naturally implement recently proposed adversarial defenses. Our observations will aid future efforts in understanding and defending against adversarial examples. Moreover, in the shorter term, we show which segmentation models should currently be preferred in safety-critical applications due to their inherent robustness.", "title": "" }, { "docid": "cb1952a4931955856c6479d7054c57e7", "text": "This paper presents a static race detection analysis for multithreaded Java programs. Our analysis is based on a formal type system that is capable of capturing many common synchronization patterns. These patterns include classes with internal synchronization, classes that require client-side synchronization, and thread-local classes. Experience checking over 40,000 lines of Java code with the type system demonstrates that it is an effective approach for eliminating race conditions. 
On large examples, fewer than 20 additional type annotations per 1000 lines of code were required by the type checker, and we found a number of races in the standard Java libraries and other test programs.", "title": "" }, { "docid": "d59e64c1865193db3aaecc202f688690", "text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). As MI tasks are subject-specific, selection of subject-specific discriminative frequency components plays a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract subject-specific FB for MI classification. The proposed method enhances the classification accuracy in BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.", "title": "" }, { "docid": "6932912b1b880014b8eb2d1b796d7a91", "text": "The ability to identify authors of computer programs based on their coding style is a direct threat to the privacy and anonymity of programmers. While recent work found that source code can be attributed to authors with high accuracy, attribution of executable binaries appears to be much more difficult. Many distinguishing features present in source code, e.g. variable names, are removed in the compilation process, and compiler optimization may alter the structure of a program, further obscuring features that are known to be useful in determining authorship. We examine programmer de-anonymization from the standpoint of machine learning, using a novel set of features that include ones obtained by decompiling the executable binary to source code. 
We adapt a powerful set of techniques from the domain of source code authorship attribution along with stylistic representations embedded in assembly, resulting in successful de-anonymization of a large set of programmers. We evaluate our approach on data from the Google Code Jam, obtaining attribution accuracy of up to 96% with 100 candidate programmers and 83% with 600 candidate programmers. We present, for the first time, an executable binary authorship attribution approach that is robust to basic obfuscations, a range of compiler optimization settings, and binaries that have been stripped of their symbol tables. We perform programmer de-anonymization using both obfuscated binaries and real-world code found “in the wild” in single-author GitHub repositories and the recently leaked Nulled.IO hacker forum. We show that programmers who would like to remain anonymous need to take extreme countermeasures to protect their privacy.", "title": "" }, { "docid": "3415fb5e9b994d6015a17327fc0fe4f4", "text": "A human stress monitoring patch integrates three sensors of skin temperature, skin conductance, and pulse wave in the size of a stamp (25 mm × 15 mm × 72 μm) in order to enhance wearing comfort with a small skin contact area and high flexibility. The skin contact area is minimized through the invention of an integrated multi-layer structure and the associated microfabrication process; it is thus reduced to 1/125 of that of conventional single-layer multiple sensors. The patch flexibility is increased mainly by the development of a flexible pulse-wave sensor, made of a flexible piezoelectric membrane supported by a perforated polyimide membrane. In the human physiological range, the fabricated stress patch measures skin temperature with a sensitivity of 0.31 Ω/°C, skin conductance with a sensitivity of 0.28 μV/0.02 μS, and pulse wave with a response time of 70 msec. 
The skin-attachable stress patch, capable of detecting multimodal bio-signals, shows potential for application to wearable emotion monitoring.", "title": "" }, { "docid": "1389e232bef9499c301fa4f4bbcb3e56", "text": "PURPOSE\nTo review studies of healing touch and its implications for practice and research.\n\n\nDESIGN\nA review of the literature from published works, abstracts from conference proceedings, theses, and dissertations was conducted to synthesize information on healing touch. Works available until June 2003 were referenced.\n\n\nMETHODS\nThe studies were categorized by target of interventions and outcomes were evaluated.\n\n\nFINDINGS AND CONCLUSIONS\nOver 30 studies have been conducted with healing touch as the independent variable. Although no generalizable results were found, a foundation exists for further research to test its benefits.", "title": "" } ]
scidocsrr
b5a2a8306f9669a92d6e618327d63bf0
Adversarial Distillation of Bayesian Neural Network Posteriors
[ { "docid": "3bad6f7bf3680d33eca19f924fa9084a", "text": "Deep Learning models are vulnerable to adversarial examples, i.e. images obtained via deliberate imperceptible perturbations, such that the model misclassifies them with high confidence. However, class confidence by itself is an incomplete picture of uncertainty. We therefore use principled Bayesian methods to capture model uncertainty in prediction for observing adversarial misclassification. We provide an extensive study with different Bayesian neural networks attacked in both white-box and black-box setups. The behaviour of the networks for noise, attacks and clean test data is compared. We observe that Bayesian neural networks are uncertain in their predictions for adversarial perturbations, a behaviour similar to the one observed for random Gaussian perturbations. Thus, we conclude that Bayesian neural networks can be considered for detecting adversarial examples.", "title": "" }, { "docid": "d5c67b93732fbf1f572b9b35a58d425e", "text": "We consider the two related problems of detecting if an example is misclassified or out-of-distribution. We present a simple baseline that utilizes probabilities from softmax distributions. Correctly classified examples tend to have greater maximum softmax probabilities than erroneously classified and out-of-distribution examples, allowing for their detection. We assess performance by defining several tasks in computer vision, natural language processing, and automatic speech recognition, showing the effectiveness of this baseline across all. We then show the baseline can sometimes be surpassed, demonstrating the room for future research on these underexplored detection tasks.", "title": "" }, { "docid": "b12bae586bc49a12cebf11cca49c0386", "text": "Deep neural networks (DNNs) are powerful nonlinear architectures that are known to be robust to random perturbations of the input. 
However, these models are vulnerable to adversarial perturbations—small input changes crafted explicitly to fool the model. In this paper, we ask whether a DNN can distinguish adversarial samples from their normal and noisy counterparts. We investigate model confidence on adversarial samples by looking at Bayesian uncertainty estimates, available in dropout neural networks, and by performing density estimation in the subspace of deep features learned by the model. The result is a method for implicit adversarial detection that is oblivious to the attack algorithm. We evaluate this method on a variety of standard datasets including MNIST and CIFAR-10 and show that it generalizes well across different architectures and attacks. Our findings report that 85-93% ROC-AUC can be achieved on a number of standard classification tasks with a negative class that consists of both normal and noisy samples.", "title": "" }, { "docid": "11a69c06f21e505b3e05384536108325", "text": "Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it. In many cases, these modifications can be so subtle that a human observer does not even notice the modification at all, yet the classifier still makes a mistake. Adversarial examples pose security concerns because they could be used to perform an attack on machine learning systems, even if the adversary has no access to the underlying model. Up to now, all previous work have assumed a threat model in which the adversary can feed data directly into the machine learning classifier. This is not always the case for systems operating in the physical world, for example those which are using signals from cameras and other sensors as an input. 
This paper shows that even in such physical world scenarios, machine learning systems are vulnerable to adversarial examples. We demonstrate this by feeding adversarial images obtained from cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. We find that a large fraction of adversarial examples are classified incorrectly even when perceived through the camera.", "title": "" } ]
[ { "docid": "69bb52e45db91f142b8c5297abd21282", "text": "IP-based solutions to accommodate mobile hosts within existing internetworks do not address the distinctive features of wireless mobile computing. IP-based transport protocols thus suffer from poor performance when a mobile host communicates with a host on the fixed network. This is caused by frequent disruptions in network layer connectivity due to — i) mobility and ii) unreliable nature of the wireless link. We describe the design and implementation of I-TCP, which is an indirect transport layer protocol for mobile hosts. I-TCP utilizes the resources of Mobility Support Routers (MSRs) to provide transport layer communication between mobile hosts and hosts on the fixed network. With I-TCP, the problems related to mobility and the unreliability of wireless link are handled entirely within the wireless link; the TCP/IP software on the fixed hosts is not modified. Using I-TCP on our testbed, the throughput between a fixed host and a mobile host improved substantially in comparison to regular TCP.", "title": "" }, { "docid": "e202a32d88a315419eba627ed336a881", "text": "Innovation is defined as the development and implementation of new ideas by people who over time engage in transactions with others within an institutional order. This definition focuses on four basic factors (new ideas, people, transactions, and institutional context). An understanding of how these factors are related leads to four basic problems confronting most general managers: (1) a human problem of managing attention, (2) a process problem in managing new ideas into good currency, (3) a structural problem of managing part-whole relationships, and (4) a strategic problem of institutional leadership. This paper discusses these four basic problems and concludes by suggesting how they fit together into an overall framework to guide longitudinal study of the management of innovation.
(ORGANIZATIONAL EFFECTIVENESS; INNOVATION)", "title": "" }, { "docid": "c91196dcb309b9c706a1de8b2a879d0f", "text": "The goal of process design is the construction of a process model that is a priori optimal w.r.t. the goal(s) of the business owning the process. Process design is therefore a major factor in determining the process performance and ultimately the success of a business. Despite this importance, the designed process is often less than optimal. This is due to two major challenges: First, since the design is an a priori ability, no actual execution data is available to provide the foundations for design decisions. Second, since modeling decision support is typically basic at best, the quality of the design largely depends on the ability of business analysts to make the ”right” design choices. To address these challenges, we present in this paper our deep Business Optimization Platform that enables (semi-) automated process optimization during process design based on actual execution data. Our platform achieves this task by matching new processes to existing processes stored in a repository based on similarity metrics and by using a set of formalized best-practice process optimization patterns.", "title": "" }, { "docid": "ec37a20ce084cf471838dc9e2fa55c9f", "text": "Recently, deep learning has gained prominence due to the potential it portends for machine learning. For this reason, deep learning techniques have been applied in many fields, such as recognizing some kinds of patterns or classification. Intrusion detection analyzes data obtained from monitoring security events to assess the situation of the network. Many traditional machine learning methods have been put forward for intrusion detection, but it is necessary to improve the detection performance and accuracy. This paper discusses different methods which were used to classify network traffic.
We decided to use different methods on an open data set and experimented with them to find the best way to perform intrusion detection.", "title": "" }, { "docid": "e237320556387e6b83affc1ae091f154", "text": "Considering the difficult technical and sociological issues affecting the regulation of artificial intelligence research and applications.", "title": "" }, { "docid": "59dd112faf8b485e91f70b713d1eee29", "text": "Background. Imperforate hymen is usually treated with hymenotomy, and the management after its spontaneous rupture is not very well known. Case. In this paper, we present spontaneous rupture of the imperforate hymen in a 13-year-old adolescent girl with hematocolpometra just before a planned hymenotomy operation. The patient was managed conservatively with a satisfactory outcome. Conclusion. Hymenotomy may not be needed in cases with spontaneous rupture of the imperforate hymen if adequate opening for menstrual discharge is warranted.", "title": "" }, { "docid": "3f1a2efdff6be4df064f3f5b978febee", "text": "D-galactose injection has been shown to induce many changes in mice that represent accelerated aging. This mouse model has been widely used for pharmacological studies of anti-aging agents. The underlying mechanism of D-galactose induced aging remains unclear, however, it appears to relate to glucose and lipid metabolic disorders. Currently, there has yet to be a study that focuses on investigating gene expression changes in D-galactose aging mice. In this study, integrated analysis of gas chromatography/mass spectrometry-based metabonomics and gene expression profiles was used to investigate the changes in transcriptional and metabolic profiles in mimetic aging mice injected with D-galactose. Our findings demonstrated that 48 mRNAs were differentially expressed between control and D-galactose mice, and 51 potential biomarkers were identified at the metabolic level.
The effects of D-galactose on aging could be attributed to glucose and lipid metabolic disorders, oxidative damage, accumulation of advanced glycation end products (AGEs), reduction in abnormal substance elimination, cell apoptosis, and insulin resistance.", "title": "" }, { "docid": "6d7188bd9d7a9a6c80c573d6184d467d", "text": "Background: Feedback of the weak areas of knowledge in RPD using continuous competency or other test forms is very essential to develop the student knowledge and the syllabus as well. This act should be a regular practice. Aim: To use the outcome of competency test and the objectives structured clinical examination of removable partial denture as a reliable measure to provide a continuous feedback to the teaching system. Method: This sectional study was performed on sixty eight, fifth year students for the period from 2009 to 2010. The experiment was divided into two parts: continuous assessment and the final examination. In the first essay, some basic removable partial denture knowledge, surveying technique, and designing of the metal framework were used to estimate the learning outcome. While in the second essay, some components of the objectives structured clinical examination were compared to the competency test to see the difference in learning outcome. Results: The students’ performance was improved in the final assessment just in some aspects of removable partial denture. However, for the surveying, the students faced some problems. Conclusion: The continuous and final tests can provide a simple tool to advise the teachers for more effective teaching of the RPD, so that the weakness in specific aspects of the RPD syllabus can be detected and corrected continuously from the beginning, during and at the end of the course.", "title": "" }, { "docid": "08d9b5af2c9d8095bf6a6b3453c89f40", "text": "Alzheimer's disease (AD) is a neurodegenerative disorder associated with loss of memory and cognitive abilities.
Previous evidence suggested that exercise ameliorates learning and memory deficits by increasing brain derived neurotrophic factor (BDNF) and activating downstream pathways in AD animal models. However, upstream pathways related to increase BDNF induced by exercise in AD animal models are not well known. We investigated the effects of moderate treadmill exercise on Aβ-induced learning and memory impairment as well as the upstream pathway responsible for increasing hippocampal BDNF in an animal model of AD. Animals were divided into five groups: Intact, Sham, Aβ1-42, Sham-exercise (Sham-exe) and Aβ1-42-exercise (Aβ-exe). Aβ was microinjected into the CA1 area of the hippocampus and then animals in the exercise groups were subjected to moderate treadmill exercise (for 4 weeks with 5 sessions per week) 7 days after microinjection. In the present study the Morris water maze (MWM) test was used to assess spatial learning and memory. Hippocampal mRNA levels of BDNF, peroxisome proliferator-activated receptor gamma co-activator 1 alpha (PGC-1α), fibronectin type III domain-containing 5 (FNDC5) as well as protein levels of AMPK-activated protein kinase (AMPK), PGC-1α, BDNF, phosphorylation of AMPK were measured. Our results showed that intra-hippocampal injection of Aβ1-42 impaired spatial learning and memory which was accompanied by reduced AMPK activity (p-AMPK/total-AMPK ratio) and suppression of the PGC-1α/FNDC5/BDNF pathway in the hippocampus of rats. In contrast, moderate treadmill exercise ameliorated the Aβ1-42-induced spatial learning and memory deficit, which was accompanied by restored AMPK activity and PGC-1α/FNDC5/BDNF levels. 
Our results suggest that the increased AMPK activity and up-regulation of the PGC-1α/FNDC5/BDNF pathway by exercise are likely involved in mediating the beneficial effects of exercise on Aβ-induced learning and memory impairment.", "title": "" }, { "docid": "957e103d533b3013e24aebd3617edd87", "text": "The recent success of deep neural networks relies on massive amounts of labeled data. For a target task where labeled data is unavailable, domain adaptation can transfer a learner from a different source domain. In this paper, we propose a new approach to domain adaptation in deep networks that can jointly learn adaptive classifiers and transferable features from labeled data in the source domain and unlabeled data in the target domain. We relax a shared-classifier assumption made by previous methods and assume that the source classifier and target classifier differ by a residual function. We enable classifier adaptation by plugging several layers into deep network to explicitly learn the residual function with reference to the target classifier. We fuse features of multiple layers with tensor product and embed them into reproducing kernel Hilbert spaces to match distributions for feature adaptation. The adaptation can be achieved in most feed-forward models by extending them with new residual layers and loss functions, which can be trained efficiently via back-propagation. Empirical evidence shows that the new approach outperforms state of the art methods on standard domain adaptation benchmarks.", "title": "" }, { "docid": "6a59641369fefcb7c7a917718f1d067c", "text": "This paper presents an adaptive fuzzy sliding-mode dynamic controller (AFSMDC) of the car-like mobile robot (CLMR) for the trajectory tracking issue. First, a kinematics model of the nonholonomic CLMR is introduced. Then, according to the Lagrange formula, a dynamic model of the CLMR is created. 
For a real time trajectory tracking problem, an optimal controller capable of effectively driving the CLMR to track the desired trajectory is necessary. Therefore, an AFSMDC is proposed to accomplish the tracking task and to reduce the effect of the external disturbances and system uncertainties of the CLMR. The proposed controller could reduce the tracking errors between the output of the velocity controller and the real velocity of the CLMR. Therefore, the CLMR could track the desired trajectory without posture and orientation errors. Additionally, the stability of the proposed controller is proven by utilizing the Lyapunov stability theory. Finally, the simulation results validate the effectiveness of the proposed AFSMDC.", "title": "" }, { "docid": "967f1e68847111ecf96d964422bea913", "text": "Text preprocessing is an essential stage in text categorization (TC) particularly and text mining generally. Morphological tools can be used in text preprocessing to reduce multiple forms of the word to one form. There has been a debate among researchers about the benefits of using morphological tools in TC. Studies in the English language illustrated that performing stemming during the preprocessing stage degrades the performance slightly. However, they have a great impact on reducing the memory requirement and storage resources needed. The effect of the preprocessing tools on Arabic text categorization is an area of research. This work provides an evaluation study of several morphological tools for Arabic Text Categorization. The study includes using the raw text, the stemmed text, and the root text. The stemmed and root text are obtained using two different preprocessing tools. 
The results illustrated that using light stemmer combined with a good performing feature selection method enhances the performance of Arabic Text Categorization especially for small threshold values.", "title": "" }, { "docid": "3613dd18a4c930a28ed520192f7ac23f", "text": "OBJECTIVES\nIn this paper we present a contemporary understanding of \"nursing informatics\" and relate it to applications in three specific contexts, hospitals, community health, and home dwelling, to illustrate achievements that contribute to the overall schema of health informatics.\n\n\nMETHODS\nWe identified literature through database searches in MEDLINE, EMBASE, CINAHL, and the Cochrane Library. Database searching was complemented by one author search and hand searches in six relevant journals. The literature review helped in conceptual clarification and elaborate on use that are supported by applications in different settings.\n\n\nRESULTS\nConceptual clarification of nursing data, information and knowledge has been expanded to include wisdom. Information systems and support for nursing practice benefits from conceptual clarification of nursing data, information, knowledge, and wisdom. We introduce three examples of information systems and point out core issues for information integration and practice development.\n\n\nCONCLUSIONS\nExploring interplays of data, information, knowledge, and wisdom, nursing informatics takes a practice turn, accommodating to processes of application design and deployment for purposeful use by nurses in different settings. Collaborative efforts will be key to further achievements that support task shifting, mobility, and ubiquitous health care.", "title": "" }, { "docid": "457ea53f0a303e8eba8847422ef61e5a", "text": "Tele-operated hydraulic underwater manipulators are commonly used to perform remote underwater intervention tasks such as weld inspection or mating of connectors. 
Automation of these tasks to use tele-assistance requires a suitable hybrid position/force control scheme, to specify simultaneously the robot motion and contact forces. Classical linear control does not allow for the highly non-linear and time varying robot dynamics in this situation. Adequate control performance requires more advanced controllers. This paper presents and compares two different advanced hybrid control algorithms. The first is based on a modified Variable Structure Control (VSC-HF) with a virtual environment, and the second uses a multivariable self-tuning adaptive controller. A direct comparison of the two proposed control schemes is performed in simulation, using a model of the dynamics of a hydraulic underwater manipulator (a Slingsby TA9) in contact with a surface. These comparisons look at the performance of the controllers under a wide variety of operating conditions, including different environment stiffnesses, positions of the robot and", "title": "" }, { "docid": "6046c04b170c68476affb306841c5043", "text": "Innovative ship design projects often require an extensive concept design phase to allow a wide range of potential solutions to be investigated, identifying which best suits the requirements. In these situations, the majority of ship design tools do not provide the best solution, limiting quick reconfiguration by focusing on detailed definition only. Parametric design, including generation of the hull surface, can model topology as well as geometry offering advantages often not exploited. Paramarine is an integrated ship design environment that is based on an objectorientated framework which allows the parametric connection of all aspects of both the product model and analysis together. Design configuration is managed to ensure that relationships within the model are topologically correct and kept up to date. 
While this offers great flexibility, concept investigation is streamlined by the Early Stage Design module, based on the (University College London) Functional Building Block methodology, collating design requirements, product model definition and analysis together to establish the form, function and layout of the design. By bringing this information together, the complete design requirements for the hull surface itself are established and provide the opportunity for parametric hull form generation techniques to have a fully integrated role in the concept design process. This paper explores several different hull form generation techniques which have been combined with the Early Stage Design module to demonstrate the capability of this design partnership.", "title": "" }, { "docid": "d15ce9f62f88a07db6fa427fae61f26c", "text": "This paper introduced a detailed ElGamal digital signature scheme and analyzed the existing problems of the ElGamal digital signature scheme. It then improved the scheme according to these problems and proposed an implicit ElGamal-type digital signature scheme with the function of message recovery. To address the problem that message recovery is not allowed by the ElGamal signature scheme, this article proposed a method to recover the message, giving the ElGamal signature scheme the function of message recovery. On this basis, since part of the signature was used in most attacks on the ElGamal signature scheme, a new implicit signature scheme with the function of message recovery was formed by hiding part of the signature message and refining the forthcoming implicit-type signature scheme.
The safety of the refined scheme was analyzed, and its results indicated that the new scheme was better than the old one.", "title": "" }, { "docid": "15195baf3ec186887e4c5ee5d041a5a6", "text": "We show that generating English Wikipedia articles can be approached as a multi-document summarization of source documents. We use extractive summarization to coarsely identify salient information and a neural abstractive model to generate the article. For the abstractive model, we introduce a decoder-only architecture that can scalably attend to very long sequences, much longer than typical encoder-decoder architectures used in sequence transduction. We show that this model can generate fluent, coherent multi-sentence paragraphs and even whole Wikipedia articles. When given reference documents, we show it can extract relevant factual information as reflected in perplexity, ROUGE scores and human evaluations.", "title": "" }, { "docid": "e4347c1b3df0bf821f552ef86a17a8c8", "text": "Volumetric lesion segmentation via medical imaging is a powerful means to precisely assess multiple time-point lesion/tumor changes. Because manual 3D segmentation is prohibitively time consuming and requires radiological experience, current practices rely on an imprecise surrogate called response evaluation criteria in solid tumors (RECIST). Despite their coarseness, RECIST marks are commonly found in current hospital picture and archiving systems (PACS), meaning they can provide a potentially powerful, yet extraordinarily challenging, source of weak supervision for full 3D segmentation. Toward this end, we introduce a convolutional neural network based weakly supervised self-paced segmentation (WSSS) method to 1) generate the initial lesion segmentation on the axial RECIST-slice; 2) learn the data distribution on RECIST-slices; 3) adapt to segment the whole volume slice by slice to finally obtain a volumetric segmentation.
In addition, we explore how super-resolution images (2 ∼ 5 times beyond the physical CT imaging), generated from a proposed stacked generative adversarial network, can aid the WSSS performance. We employ the DeepLesion dataset, a comprehensive CT image lesion dataset of 32,735 PACS-bookmarked findings, which include lesions, tumors, and lymph nodes of varying sizes, categories, body regions and surrounding contexts. These are drawn from 10,594 studies of 4,459 patients. We also validate on a lymph-node dataset, where 3D ground truth masks are available for all images. For the DeepLesion dataset, we report mean Dice coefficients of 93% on RECIST-slices and 76% in 3D lesion volumes. We further validate using a subjective user study, where an experienced ∗Indicates equal contribution. †This work is done during Jinzheng Cai’s internship at National Institutes of Health. Le Lu is now with Nvidia Corp ([email protected]).", "title": "" } ]
scidocsrr
841ead8607dd8724013c08b638834473
Scalable and Lightweight CTF Infrastructures Using Application Containers
[ { "docid": "bc6a13cc44a77d29360d04a2bc96bd61", "text": "Security competitions have become a popular way to foster security education by creating a competitive environment in which participants go beyond the effort usually required in traditional security courses. Live security competitions (also called “Capture The Flag,” or CTF competitions) are particularly well-suited to support handson experience, as they usually have both an attack and a defense component. Unfortunately, because these competitions put several (possibly many) teams against one another, they are difficult to design, implement, and run. This paper presents a framework that is based on the lessons learned in running, for more than 10 years, the largest educational CTF in the world, called iCTF. The framework’s goal is to provide educational institutions and other organizations with the ability to run customizable CTF competitions. The framework is open and leverages the security community for the creation of a corpus of educational security challenges.", "title": "" } ]
[ { "docid": "db72513dd3d75f63d351a93fcb53cc46", "text": "The occurrence of visually induced motion sickness has been frequently linked to the sensation of illusory self-motion (vection), however, the precise nature of this relationship is still not fully understood. To date, it is still a matter of debate as to whether vection is a necessary prerequisite for visually induced motion sickness (VIMS). That is, can there be VIMS without any sensation of self-motion? In this paper, we will describe the possible nature of this relationship, review the literature that addresses this relationship (including theoretical accounts of vection and VIMS), and offer suggestions with respect to operationally defining and reporting these phenomena in future.", "title": "" }, { "docid": "2e9786cfe8e7a759ed1e1481d59624ba", "text": "Global path planning for mobile robot using genetic algorithm and A* algorithm is investigated in this paper. The proposed algorithm includes three steps: the MAKLINK graph theory is adopted to establish the free space model of mobile robots firstly, then Dijkstra algorithm is utilized for finding a feasible collision-free path, finally the global optimal path of mobile robots is obtained based on the hybrid algorithm of A* algorithm and genetic algorithm. Experimental results indicate that the proposed algorithm has better performance than Dijkstra algorithm in term of both solution quality and computational time, and thus it is a viable approach to mobile robot global path planning.", "title": "" }, { "docid": "b7a04d56d6d06a0d89f6113c3ab639a8", "text": "Poker is an interesting test-bed for artificial intelligence research. It is a game of imperfect knowledge, where multiple competing agents must deal with risk management, agent modeling, unreliable information and deception, much like decision-making applications in the real world. 
Agent modeling is one of the most difficult problems in decision-making applications and in poker it is essential to achieving high performance. This paper describes and evaluates Loki, a poker program capable of observing its opponents, constructing opponent models and dynamically adapting its play to best exploit patterns in the opponents’ play.", "title": "" }, { "docid": "370c728b64c8cf6c63815729f4f9b03e", "text": "Previous researchers studying baseball pitching have compared kinematic and kinetic parameters among different types of pitches, focusing on the trunk, shoulder, and elbow. The lack of data on the wrist and forearm limits the understanding of clinicians, coaches, and researchers regarding the mechanics of baseball pitching and the differences among types of pitches. The purpose of this study was to expand existing knowledge of baseball pitching by quantifying and comparing kinematic data of the wrist and forearm for the fastball (FA), curveball (CU) and change-up (CH) pitches. Kinematic and temporal parameters were determined from 8 collegiate pitchers recorded with a four-camera system (200 Hz). Although significant differences were observed for all pitch comparisons, the least number of differences occurred between the FA and CH. During arm cocking, peak wrist extension for the FA and CH pitches was greater than for the CU, while forearm supination was greater for the CU. In contrast to the current study, previous comparisons of kinematic data for trunk, shoulder, and elbow revealed similarities between the FA and CU pitches and differences between the FA and CH pitches. 
Kinematic differences among pitches depend on the segment of the body studied.", "title": "" }, { "docid": "d1e96872bb61cc16597827ec11f6bb4f", "text": "Audit regulators and the auditing profession have responded to this expectation by issuing a number of standards outlining auditors’ responsibilities to detect fraud (e.g., PCAOB 2010; IAASB 2009, PCAOB 2002; AICPA 2002; AICPA 1997; AICPA 1988). These standards indicate that auditors are responsible for providing reasonable assurance that audited financial statements are free of material misstatements due to fraud. Nonetheless, prior research indicates that auditors detect relatively few significant frauds (Dyck et al. 2010, KPMG 2009). This finding raises the obvious question: Why do auditors rarely detect fraud?", "title": "" }, { "docid": "56c41892216823b592bcafbe00508a67", "text": "Nowadays, universities offer most of their services using corporate website. In higher education services including admission services, a university needs to always provide excellent service to ensure student candidate satisfaction. To obtain student candidate satisfaction apart from the quality of education must also be accompanied by providing consultation services and information to them. This paper proposes the development of Chatbot which acts as a conversation agent that can play a role of as student candidate service. This Chatbot is called Dinus Intelligent Assistance (DINA). DINA uses knowledge based as a center for machine learning approach. The pattern extracted from the knowledge based can be used to provide responses to the user. The source of knowledge based is taken from Universitas Dian Nuswantoro (UDINUS) guest book. It contains of questions and answers about UDINUS admission services. Testing of this system is done by entering questions. From 166 intents, the author tested it using ten random sample questions. Among them, it got eight tested questions answered correctly. 
Therefore, building on this study, further intelligent Chatbots can be developed to help student candidates find the information they need without waiting for an answer from the admission staff.", "title": "" },

{ "docid": "e8f28a4e17650041350e535c1ac792ff", "text": "A compact multiple-input-multiple-output (MIMO) antenna with a small size of 26×40 mm2 is proposed for portable ultrawideband (UWB) applications. The antenna consists of two planar-monopole (PM) antenna elements with microstrip feeds printed on one side of the substrate and placed perpendicularly to each other to achieve good isolation. To enhance isolation and increase impedance bandwidth, two long protruding ground stubs are added to the ground plane on the other side and a short ground strip is used to connect the ground planes of the two PMs together to form a common ground. Simulation and measurement are used to study the antenna performance in terms of reflection coefficients at the two input ports, coupling between the two input ports, radiation pattern, realized peak gain, efficiency and envelope correlation coefficient for pattern diversity. Results show that the MIMO antenna has an impedance bandwidth of larger than 3.1-10.6 GHz, low mutual coupling of less than -15 dB, and a low envelope correlation coefficient of less than 0.2 across the frequency band, making it a good candidate for portable UWB applications.", "title": "" },

{ "docid": "dae567414224b24dbb7bc06b9b9ea57f", "text": "With the increasing computational power of computers, software design systems are progressing from being tools enabling architects and designers to express their ideas, to tools capable of creating designs under human guidance. One of the main limitations for these computer-automated design systems is the representation with which they encode designs. If the representation cannot encode a certain design, then the design system cannot produce it.
To be able to produce new types of designs, and not just optimize pre-defined parameterizations, evolutionary design systems must use generative representations. Generative representations are assembly procedures, or algorithms, for constructing a design, thereby allowing for truly novel design solutions to be encoded. In addition, by enabling modularity, regularity and hierarchy, the level of sophistication that can be evolved is increased. We demonstrate the advantages of generative representations on two different design domains: the evolution of spacecraft antennas and the evolution of 3D solid objects.", "title": "" },

{ "docid": "4e8365fbc07d7d8bc55b18d52abec38a", "text": "Depression's influence on mother-infant interactions at 2 months postpartum was studied in 24 depressed and 22 nondepressed mother-infant dyads. Depression was diagnosed using the SADS-L and RDC. In the subjects' homes, structured interactions of 3 min duration were videotaped and later coded using behavioral descriptors and a 1-s time base. Unstructured interactions were described using rating scales. During structured interactions, depressed mothers were more negative and their babies were less positive than were nondepressed dyads. The reduced positivity of depressed dyads was achieved through contingent responsiveness. Ratings from unstructured interactions were consistent with these findings. Results support the hypothesis that depression negatively influences mother-infant behavior, but indicate that influence may vary with development, chronicity, and presence of other risk factors.", "title": "" },

{ "docid": "06a3bf091404fc51bb3ee0a9f1d8a759", "text": "A compact design of a circularly-polarized microstrip antenna that achieves dual-band behavior for Radio Frequency Identification (RFID) applications is presented; the defected ground structure (DGS) technique is used to miniaturize the antenna and obtain the dual-band behavior, and the entire size is 38×40×1.58 mm3.
This antenna was designed to cover both the ultra-high frequency band (740 MHz ~ 1 GHz) and a higher band (2.35 GHz ~ 2.51 GHz) with a return loss < -10 dB; the 3-dB axial ratio bandwidth is about 110 MHz at the lower band (900 MHz).", "title": "" },

{ "docid": "d103d7793a9ff39c43dce47d45742905", "text": "This paper proposes an architecture for an open-domain conversational system and evaluates an implemented system. The proposed architecture is fully composed of modules based on natural language processing techniques. Experimental results using human subjects show that our architecture achieves significantly better naturalness than a retrieval-based baseline and that its naturalness is close to that of a rule-based system using 149K hand-crafted rules.", "title": "" },

{ "docid": "cdad4ee7017fb232425aceff8b50dca4", "text": "At the core of interpretable machine learning is the question of whether humans are able to make accurate predictions about a model’s behavior. Assumed in this question are three properties of the interpretable output: coverage, precision, and effort. Coverage refers to how often humans think they can predict the model’s behavior, precision to how accurate humans are in those predictions, and effort is either the up-front effort required in interpreting the model, or the effort required to make predictions about a model’s behavior.", "title": "" },

{ "docid": "6c92ed5a38cc4ba5b7fe644cd086ca48", "text": "BACKGROUND\nOsteoarthritis (OA), a chronic degenerative disease of synovial joints, is characterised by pain and stiffness. The aim of treatment is pain relief.
Complementary and alternative medicine (CAM) refers to practices which are not an integral part of orthodox medicine.\n\n\nAIMS AND OBJECTIVES\nTo determine the pattern of usage of CAM among OA patients in Nigeria.\n\n\nPATIENTS AND METHODS\nConsecutive patients with OA attending the orthopaedic clinic of Havana Specialist Hospital, Lagos, Nigeria were interviewed over the 6-month period of 1st May to 31st October 2007 on usage of CAM. Structured and open-ended questions were used. Demographic data, duration of OA and treatment as well as compliance to orthodox medications were documented.\n\n\nRESULTS\nOne hundred and sixty four patients were studied. 120 (73.25%) were females and 44 (26.89%) were males. Respondents' ages ranged between 35 and 74 years. 66 (40.2%) patients used CAM. 35 (53.0%) had done so before presenting to the hospital. The most commonly used CAM were herbal products, used by 50 (75.8%) of CAM users. Among herbal product users, 74.0% used non-specific local products, 30.0% used ginger, 36.0% used garlic and 28.0% used Aloe Vera. Among CAM users, 35 (53.0%) used local embrocation and massage, and 10 (15.2%) used spiritual methods. There was no significant difference in demographics, clinical characteristics and pain control among CAM users and non-users.\n\n\nCONCLUSION\nMany OA patients receiving orthodox therapy also use CAM. Medical doctors need to keep a wary eye on CAM usage among patients and enquire about this health-seeking behaviour in order to educate them on possible drug interactions, adverse effects and long-term complications.", "title": "" },

{ "docid": "c4ecb79dc2185fe0f7f422a092bc1334", "text": "The set of minutia points is considered to be the most distinctive feature for fingerprint representation and is widely used in fingerprint matching. It was believed that the minutiae set does not contain sufficient information to reconstruct the original fingerprint image from which minutiae were extracted.
However, recent studies have shown that it is indeed possible to reconstruct fingerprint images from their minutiae representations. Reconstruction techniques demonstrate the need for securing fingerprint templates, improving the template interoperability, and improving fingerprint synthesis. But, there is still a large gap between the matching performance obtained from original fingerprint images and their corresponding reconstructed fingerprint images. In this paper, the prior knowledge about fingerprint ridge structures is encoded in terms of orientation patch and continuous phase patch dictionaries to improve the fingerprint reconstruction. The orientation patch dictionary is used to reconstruct the orientation field from minutiae, while the continuous phase patch dictionary is used to reconstruct the ridge pattern. Experimental results on three public domain databases (FVC2002 DB1_A, FVC2002 DB2_A, and NIST SD4) demonstrate that the proposed reconstruction algorithm outperforms the state-of-the-art reconstruction algorithms in terms of both: 1) spurious minutiae and 2) matching performance with respect to type-I attack (matching the reconstructed fingerprint against the same impression from which minutiae set was extracted) and type-II attack (matching the reconstructed fingerprint against a different impression of the same finger).", "title": "" }, { "docid": "f2f95f70783be5d5ee1260a3c5b9d892", "text": "Information Extraction is the process of automatically obtaining knowledge from plain text. Because of the ambiguity of written natural language, Information Extraction is a difficult task. Ontology-based Information Extraction (OBIE) reduces this complexity by including contextual information in the form of a domain ontology. The ontology provides guidance to the extraction process by providing concepts and relationships about the domain. However, OBIE systems have not been widely adopted because of the difficulties in deployment and maintenance. 
The Ontology-based Components for Information Extraction (OBCIE) architecture has been proposed as a form to encourage the adoption of OBIE by promoting reusability through modularity. In this paper, we propose two orthogonal extensions to OBCIE that allow the construction of hybrid OBIE systems with higher extraction accuracy and a new functionality. The first extension utilizes OBCIE modularity to integrate different types of implementation into one extraction system, producing a more accurate extraction. For each concept or relationship in the ontology, we can select the best implementation for extraction, or we can combine both implementations under an ensemble learning schema. The second extension is a novel ontology-based error detection mechanism. Following a heuristic approach, we can identify sentences that are logically inconsistent with the domain ontology. Because the implementation strategy for the extraction of a concept is independent of the functionality of the extraction, we can design a hybrid OBIE system with concepts utilizing different implementation strategies for extracting correct or incorrect sentences. Our evaluation shows that, in the implementation extension, our proposed method is more accurate in terms of correctness and completeness of the extraction. Moreover, our error detection method can identify incorrect statements with a high accuracy.", "title": "" }, { "docid": "c699ce2a06276f722bf91806378b11eb", "text": "The success of deep neural networks (DNNs) is heavily dependent on the availability of labeled data. However, obtaining labeled data is a big challenge in many real-world problems. In such scenarios, a DNN model can leverage labeled and unlabeled data from a related domain, but it has to deal with the shift in data distributions between the source and the target domains. In this paper, we study the problem of classifying social media posts during a crisis event (e.g., Earthquake). 
For that, we use labeled and unlabeled data from past similar events (e.g., Flood) and unlabeled data for the current event. We propose a novel model that performs adversarial learning based domain adaptation to deal with distribution drifts and graph based semi-supervised learning to leverage unlabeled data within a single unified deep learning framework. Our experiments with two real-world crisis datasets collected from Twitter demonstrate significant improvements over several baselines.", "title": "" }, { "docid": "c51acd24cb864b050432a055fef2de9a", "text": "Electric motor and power electronics-based inverter are the major components in industrial and automotive electric drives. In this paper, we present a model-based fault diagnostics system developed using a machine learning technology for detecting and locating multiple classes of faults in an electric drive. Power electronics inverter can be considered to be the weakest link in such a system from hardware failure point of view; hence, this work is focused on detecting faults and finding which switches in the inverter cause the faults. A simulation model has been developed based on the theoretical foundations of electric drives to simulate the normal condition, all single-switch and post-short-circuit faults. A machine learning algorithm has been developed to automatically select a set of representative operating points in the (torque, speed) domain, which in turn is sent to the simulated electric drive model to generate signals for the training of a diagnostic neural network, fault diagnostic neural network (FDNN). We validated the capability of the FDNN on data generated by an experimental bench setup. 
Our research demonstrates that with a robust machine learning approach, a diagnostic system can be trained based on a simulated electric drive model, which can lead to a correct classification of faults over a wide operating domain.", "title": "" }, { "docid": "c30f721224317a41c1e316c158549d81", "text": "The oxysterol receptor LXR is a key transcriptional regulator of lipid metabolism. LXR increases expression of SREBP-1, which in turn regulates at least 32 genes involved in lipid synthesis and transport. We recently identified 25-hydroxycholesterol-3-sulfate (25HC3S) as an important regulatory molecule in the liver. We have now studied the effects of 25HC3S and its precursor, 25-hydroxycholesterol (25HC), on lipid metabolism as mediated by the LXR/SREBP-1 signaling in macrophages. Addition of 25HC3S to human THP-1-derived macrophages markedly decreased nuclear LXR protein levels. 25HC3S administration was followed by dose- and time-dependent decreases in SREBP-1 mature protein and mRNA levels. 25HC3S decreased the expression of SREBP-1-responsive genes, acetyl-CoA carboxylase-1, and fatty acid synthase (FAS) as well as HMGR and LDLR, which are key proteins involved in lipid metabolism. Subsequently, 25HC3S decreased intracellular lipids and increased cell proliferation. In contrast to 25HC3S, 25HC acted as an LXR ligand, increasing ABCA1, ABCG1, SREBP-1, and FAS mRNA levels. In the presence of 25HC3S, 25HC, and LXR agonist T0901317, stimulation of LXR targeting gene expression was repressed. We conclude that 25HC3S acts in macrophages as a cholesterol satiety signal, downregulating cholesterol and fatty acid synthetic pathways via inhibition of LXR/SREBP signaling. 
A possible role of oxysterol sulfation is proposed.", "title": "" }, { "docid": "841a4a9f1a43b06064ccb769f29c2fe4", "text": "A simple way to mitigate the potential negative side-effects associated with chemical lysis of a blood clot is to tear its fibrin network via mechanical rubbing using a helical robot. Here, we achieve mechanical rubbing of blood clots under ultrasound guidance and using external magnetic actuation. Position of the helical robot is determined using ultrasound feedback and used to control its motion toward the clot, whereas the volume of the clots is estimated simultaneously using visual feedback. We characterize the shear modulus and ultimate shear strength of the blood clots to predict their removal rate during rubbing. Our <italic>in vitro</italic> experiments show the ability to move the helical robot controllably toward clots using ultrasound feedback with average and maximum errors of <inline-formula> <tex-math notation=\"LaTeX\">${\\text{0.84}\\pm \\text{0.41}}$</tex-math></inline-formula> and 2.15 mm, respectively, and achieve removal rate of <inline-formula><tex-math notation=\"LaTeX\">$-\\text{0.614} \\pm \\text{0.303}$</tex-math> </inline-formula> mm<inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math></inline-formula>/min at room temperature (<inline-formula><tex-math notation=\"LaTeX\">${\\text{25}}^{\\circ }$</tex-math></inline-formula>C) and <inline-formula><tex-math notation=\"LaTeX\">$-\\text{0.482} \\pm \\text{0.23}$</tex-math></inline-formula> mm <inline-formula><tex-math notation=\"LaTeX\">$^{3}$</tex-math></inline-formula>/min at body temperature (37 <inline-formula><tex-math notation=\"LaTeX\">$^{\\circ}$</tex-math></inline-formula>C), under the influence of two rotating dipole fields at frequency of 35 Hz. We also validate the effectiveness of mechanical rubbing by measuring the number of red blood cells and platelets past the clot. 
Our measurements show that rubbing achieves cell count of <inline-formula><tex-math notation=\"LaTeX\">$(\\text{46} \\pm \\text{10.9}) \\times \\text{10}^{4}$</tex-math> </inline-formula> cell/ml, whereas the count in the absence of rubbing is <inline-formula><tex-math notation=\"LaTeX\"> $(\\text{2} \\pm \\text{1.41}) \\times \\text{10}^{4}$</tex-math></inline-formula> cell/ml, after 40 min.", "title": "" }, { "docid": "790a310f599ff9475cc5a66c0e1ca291", "text": "In the past 20 years, there has been a great advancement in knowledge pertaining to compliance with amblyopia treatments. The occlusion dose monitor introduced quantitative monitoring methods in patching, which sparked our initial understanding of the dose-response relationship for patching amblyopia treatment. This review focuses on current compliance knowledge and the impact it has on patching and atropine amblyopia treatment.", "title": "" } ]
scidocsrr
976d92080eeeba1720e4a263f7f45c66
Power grid's Intelligent Stability Analysis based on big data technology
[ { "docid": "56785d7f01cb2e1ab8754cbb931a9d0b", "text": "This paper describes an online dynamic security assessment scheme for large-scale interconnected power systems using phasor measurements and decision trees. The scheme builds and periodically updates decision trees offline to decide critical attributes as security indicators. Decision trees provide online security assessment and preventive control guidelines based on real-time measurements of the indicators from phasor measurement units. The scheme uses a new classification method involving each whole path of a decision tree instead of only classification results at terminal nodes to provide more reliable security assessment results for changes in system conditions. The approaches developed are tested on a 2100-bus, 2600-line, 240-generator operational model of the Entergy system. The test results demonstrate that the proposed scheme is able to identify key security indicators and give reliable and accurate online dynamic security predictions.", "title": "" } ]
[ { "docid": "848fbbcf6e679191fd4160db5650ef65", "text": "The capturing of angular and spatial information of the scene using a single camera is made possible by a new emerging technology referred to as the plenoptic camera. Both angular and spatial information enable various post-processing applications, e.g. refocusing, synthetic aperture, super-resolution, and 3D scene reconstruction. In the past, multiple traditional cameras were used to capture the angular and spatial information of the scene. However, recently with the advancement in optical technology, plenoptic cameras have been introduced to capture the scene information. In a plenoptic camera, a lenslet array is placed between the main lens and the image sensor that allows multiplexing of the spatial and angular information onto a single image, also referred to as a plenoptic image. The placement of the lenslet array relative to the main lens and the image sensor results in two different optical designs of a plenoptic camera, also referred to as plenoptic 1.0 and plenoptic 2.0. In this work, we present a novel dataset captured with plenoptic 1.0 (Lytro Illum) and plenoptic 2.0 (Raytrix R29) cameras for the same scenes under the same conditions. The dataset provides the benchmark contents for various research and development activities for plenoptic images.", "title": "" },

{ "docid": "487c011cb0701b4b909dedca2d128fe6", "text": "It is necessary and essential to discover protein function from novel primary sequences. Wet lab experimental procedures are not only time-consuming, but also costly, so predicting protein structure and function reliably based only on amino acid sequence has significant value. TATA-binding protein (TBP) is a kind of DNA-binding protein, which plays a key role in transcription regulation. Our study proposed an automatic approach for identifying TATA-binding proteins efficiently, accurately, and conveniently.
This method can provide guidance for the identification of special proteins with computational intelligence strategies. Firstly, we proposed novel fingerprint features for TBP based on pseudo amino acid composition, physicochemical properties, and secondary structure. Secondly, hierarchical feature dimensionality reduction strategies were employed to further improve the performance. Currently, Pretata achieves 92.92% TATA-binding protein prediction accuracy, which is better than all other existing methods. The experiments demonstrate that our method can greatly improve prediction accuracy and speed, thus allowing large-scale NGS data prediction to be practical. A web server has been developed to assist other researchers; it can be accessed at http://server.malab.cn/preTata/ .", "title": "" },

{ "docid": "0669dc3c9867752cf88e6b46ce73e07d", "text": "In this paper, we focus on the problem of preserving the privacy of sensitive relationships in graph data. We refer to the problem of inferring sensitive relationships from anonymized graph data as link re-identification. We propose five different privacy preservation strategies, which vary in terms of the amount of data removed (and hence their utility) and the amount of privacy preserved. We assume the adversary has an accurate predictive model for links, and we show experimentally the success of different link re-identification strategies under varying structural characteristics of the data.", "title": "" },

{ "docid": "6745e91294ae763f1f7ad7790bc9ccb4", "text": "In this paper we propose an asymmetric semantic similarity among instances within an ontology. We aim to define a measurement of semantic similarity that exploits as much as possible of the knowledge stored in the ontology, taking into account different hints hidden in the ontology definition. The proposed similarity measurement considers different existing similarities, which we have combined and extended.
Moreover, the similarity assessment is explicitly parameterised according to the criteria induced by the context. The parameterisation aims to assist the user in the decision making pertaining to similarity evaluation, as the criteria can be refined according to user needs. Experiments and an evaluation of the similarity assessment are presented showing the efficiency of the method.", "title": "" },

{ "docid": "fa1025c86ce9fce67ee148b7a37975da", "text": "Context-aware Web services are emerging as a promising technology for electronic businesses in mobile and pervasive environments. Unfortunately, complex context-aware services are still hard to build. In this paper, we present a modeling language for the model-driven development of context-aware Web services based on the Unified Modeling Language (UML). Specifically, we show how UML can be used to specify information related to the design of context-aware services. We present the abstract syntax and notation of the language and illustrate its usage using an example service. Our language offers significant design flexibility that considerably simplifies the development of context-aware Web services.", "title": "" },

{ "docid": "372f137098bd5817896d82ed0cb0c771", "text": "Under today's bursty web traffic, the fine-grained per-container control promises more efficient resource provisioning for web services and better resource utilization in cloud datacenters. In this paper, we present the Two-stage Stochastic Programming Resource Allocator (2SPRA). It optimizes resource provisioning for containerized n-tier web services in accordance with fluctuations of incoming workload to accommodate predefined SLOs on response latency. In particular, 2SPRA is capable of minimizing resource over-provisioning by addressing dynamics of web traffic as workload uncertainty in a native stochastic optimization model.
Using the special-purpose OpenOpt optimization framework, we fully implement 2SPRA in Python and evaluate it against three other existing allocation schemes, in Docker-based CoreOS Linux VMs on Amazon EC2. We generate workloads based on four real-world web traces of various traffic variations: AOL, WorldCup98, ClarkNet, and NASA. Our experimental results demonstrate that 2SPRA achieves the minimum resource over-provisioning, outperforming the other schemes. In particular, 2SPRA allocates only 6.16 percent more than the application's actual demand on average and at most 7.75 percent in the worst case. It achieves a 3x further reduction in total resources provisioned compared to the other schemes, delivering overall cost-savings of 53.6 percent on average and up to 66.8 percent. Furthermore, 2SPRA demonstrates consistency in its provisioning decisions and robust responsiveness against workload fluctuations.", "title": "" },

{ "docid": "78b453d487294121a14e71e639906c36", "text": "Modern mobile devices provide several functionalities and new ones are being added at a breakneck pace. Unfortunately, browsing the menu and accessing the functions of a mobile phone is not a trivial task for visually impaired users. Low vision people typically rely on screen readers and voice commands. However, depending on the situation, screen readers are not ideal because blind people may need their hearing for safety, and automatic recognition of voice commands is challenging in noisy environments. Novel smart watch technologies provide an interesting opportunity to design new forms of user interaction with mobile phones. We present our first work towards the realization of a system, based on the combination of a mobile phone and a smart watch for gesture control, for assisting low vision people during daily life activities.
More specifically we propose a novel approach for gesture recognition which is based on global alignment kernels and is shown to be effective in the challenging scenario of user independent recognition. This method is used to build a gesture-based user interaction module and is embedded into a system targeted to visually impaired which will also integrate several other modules. We present two of them: one for identifying wet floor signs, the other for automatic recognition of predefined logos.", "title": "" }, { "docid": "565b07fee5a5812d04818fa132c0da4c", "text": "PHP is the most popular scripting language for web applications. Because no native solution to compile or protect PHP scripts exists, PHP applications are usually shipped as plain source code which is easily understood or copied by an adversary. In order to prevent such attacks, commercial products such as ionCube, Zend Guard, and Source Guardian promise a source code protection. In this paper, we analyze the inner working and security of these tools and propose a method to recover the source code by leveraging static and dynamic analysis techniques. We introduce a generic approach for decompilation of obfuscated bytecode and show that it is possible to automatically recover the original source code of protected software. As a result, we discovered previously unknown vulnerabilities and backdoors in 1 million lines of recovered source code of 10 protected applications.", "title": "" }, { "docid": "6286480f676c75e1cac4af9329227258", "text": "Understanding physical phenomena is a key competence that enables humans and animals to act and interact under uncertain perception in previously unseen environments containing novel object and their configurations. Developmental psychology has shown that such skills are acquired by infants from observations at a very early stage. 
In this paper, we contrast a more traditional approach of taking a model-based route with explicit 3D representations and physical simulation against an end-to-end approach that directly predicts stability and related quantities from appearance. We ask if, and to what extent and quality, such a skill can be acquired directly in a data-driven way, bypassing the need for an explicit simulation. We present a learning-based approach based on simulated data that predicts stability of towers comprised of wooden blocks under different conditions and quantities related to the potential fall of the towers. The evaluation is carried out on synthetic data and compared to human judgments on the same stimuli.", "title": "" },

{ "docid": "0674479836883d572b05af6481f27a0d", "text": "Contents Preface vii Chapter 1. Graph Theory in the Information Age 1 1.1. Introduction 1 1.2. Basic definitions 3 1.3. Degree sequences and the power law 6 1.4. History of the power law 8 1.5. Examples of power law graphs 10 1.6. An outline of the book 17 Chapter 2. Old and New Concentration Inequalities 21 2.1. The binomial distribution and its asymptotic behavior 21 2.2. General Chernoff inequalities 25 2.3. More concentration inequalities 30 2.4. A concentration inequality with a large error estimate 33 2.5. Martingales and Azuma's inequality 35 2.6. General martingale inequalities 38 2.7. Supermartingales and Submartingales 41 2.8. The decision tree and relaxed concentration inequalities 46 Chapter 3. A Generative Model — the Preferential Attachment Scheme 55 3.1. Basic steps of the preferential attachment scheme 55 3.2. Analyzing the preferential attachment model 56 3.3. A useful lemma for rigorous proofs 59 3.4. The peril of heuristics via an example of balls-and-bins 60 3.5. Scale-free networks 62 3.6. The sharp concentration of preferential attachment scheme 64 3.7. Models for directed graphs 70 Chapter 4. Duplication Models for Biological Networks 75 4.1. Biological networks 75 4.2.
The duplication model 76 4.3. Expected degrees of a random graph in the duplication model 77 4.4. The convergence of the expected degrees 79 4.5. The generating functions for the expected degrees 83 4.6. Two concentration results for the duplication model 84 4.7. Power law distribution of generalized duplication models 89 Chapter 5. Random Graphs with Given Expected Degrees 91 5.1. The Erdős-Rényi model 91 5.2. The diameter of G(n,p) 95 5.3. A general random graph model 97 5.4. Size, volume and higher order volumes 97 5.5. Basic properties of G(w) 100 5.6. Neighborhood expansion in random graphs 103 5.7. A random power law graph model 107 5.8. Actual versus expected degree sequence 109 Chapter 6. The Rise of the Giant Component 113 6.1. No giant component if w < 1? 114 6.2. Is there a giant component if w̃ > 1? 115 6.3. No giant component if w̃ < 1? 116 6.4. Existence and uniqueness of the giant component 117 6.5. A lemma on neighborhood growth 126 6.6. The volume of the giant component 129 6.7. Proving the volume estimate of the giant component 131 6.8. Lower bounds for the volume of the giant component 136 6.9. The complement of the giant component and its size 138 6.10. …", "title": "" },

{ "docid": "97e2de6bfce73c9a5fa0a474ded5b37a", "text": "OBJECTIVE\nThis study was undertaken to determine the effects of rectovaginal fascia reattachment on symptoms and vaginal topography.\n\n\nSTUDY DESIGN\nStandardized preoperative and postoperative assessments of vaginal topography (the Pelvic Organ Prolapse staging system of the International Continence Society, American Urogynecologic Society, and Society of Gynecologic Surgeons) and 5 symptoms commonly attributed to rectocele were used to evaluate 66 women who underwent rectovaginal fascia reattachment for rectocele repair. All patients had abnormal fluoroscopic results with objective rectocele formation.\n\n\nRESULTS\nSeventy percent (n = 46) of the women were objectively assessed at 1 year.
Preoperative symptoms included the following: protrusion, 85% (n = 39); difficult defecation, 52% (n = 24); constipation, 46% (n = 21); dyspareunia, 26% (n = 12); and manual evacuation, 24% (n = 11). Posterior vaginal topography was considered abnormal in all patients with a mean Ap point (a point located in the midline of the posterior vaginal wall 3 cm proximal to the hymen) value of -0.5 cm (range, -2 to 3 cm). Postoperative symptom resolution was as follows: protrusion, 90% (35/39; P <.0005); difficult defecation, 54% (14/24; P <.0005); constipation, 43% (9/21; P =.02); dyspareunia, 92% (11/12; P =.01); and manual evacuation, 36% (4/11; P =.125). Vaginal topography at 1 year was improved, with a mean Ap point value of -2 cm (range, -3 to 2 cm).\n\n\nCONCLUSION\nThis technique of rectocele repair improves vaginal topography and alleviates 3 symptoms commonly attributed to rectoceles. It is relatively ineffective for relief of manual evacuation, and constipation is variably decreased.", "title": "" }, { "docid": "dc5c78f8f8e07e8b6b38e13bffeb3197", "text": "A penetrating head injury belongs to the most severe traumatic brain injuries, in which communication can arise between the intracranial cavity and surrounding environment. The authors present a literature review and typical case reports of a penetrating head injury in children. The list of patients treated at the neurosurgical department in the last 5 years for penetrating TBI is briefly referred. Rapid transfer to the specialized center with subsequent urgent surgical treatment is the important point in the treatment algorithm. It is essential to clean the wound very properly with all the foreign material during the surgery and to close the dura with a water-tight suture. Wide-spectrum antibiotics are of great use. In case of large-extent brain damage, the use of anticonvulsants is recommended. 
The prognosis of such severe trauma can be influenced very positively by well-organized medical care; obviously, the extent of brain tissue laceration is the limiting factor.", "title": "" }, { "docid": "c6de5f33ca775fb42db4667b0dcc74bf", "text": "Robotic-assisted laparoscopic prostatectomy is a surgical procedure performed to eradicate prostate cancer. Use of robotic assistance technology allows smaller incisions than the traditional laparoscopic approach and results in better patient outcomes, such as less blood loss, less pain, shorter hospital stays, and better postoperative potency and continence rates. This surgical approach creates unique challenges in patient positioning for the perioperative team because the patient is placed in the lithotomy position with steep Trendelenburg. Incorrect positioning can lead to nerve damage, pressure ulcers, and other complications. Using a special beanbag positioning device made specifically for use with this severe position helps prevent these complications.", "title": "" }, { "docid": "9b702c679d7bbbba2ac29b3a0c2f6d3b", "text": "Mobile-edge computing (MEC) has recently emerged as a prominent technology to liberate mobile devices from computationally intensive workloads, by offloading them to the proximate MEC server. To make offloading effective, the radio and computational resources need to be dynamically managed, to cope with the time-varying computation demands and wireless fading channels. In this paper, we develop an online joint radio and computational resource management algorithm for multi-user MEC systems, with the objective of minimizing the long-term average weighted sum power consumption of the mobile devices and the MEC server, subject to a task buffer stability constraint. 
Specifically, at each time slot, the optimal CPU-cycle frequencies of the mobile devices are obtained in closed forms, and the optimal transmit power and bandwidth allocation for computation offloading are determined with the Gauss-Seidel method; while for the MEC server, both the optimal frequencies of the CPU cores and the optimal MEC server scheduling decision are derived in closed forms. Besides, a delay-improved mechanism is proposed to reduce the execution delay. Rigorous performance analysis is conducted for the proposed algorithm and its delay-improved version, indicating that the weighted sum power consumption and execution delay obey an $\\left [{O\\left ({1 / V}\\right), O\\left ({V}\\right) }\\right ]$ tradeoff with $V$ as a control parameter. Simulation results are provided to validate the theoretical analysis and demonstrate the impacts of various parameters.", "title": "" }, { "docid": "6f1d7e2faff928c80898bfbf05ac0669", "text": "This study examined level of engagement with Disney Princess media/products as it relates to gender-stereotypical behavior, body esteem (i.e. body image), and prosocial behavior during early childhood. Participants consisted of 198 children (Mage  = 58 months), who were tested at two time points (approximately 1 year apart). Data consisted of parent and teacher reports, and child observations in a toy preference task. Longitudinal results revealed that Disney Princess engagement was associated with more female gender-stereotypical behavior 1 year later, even after controlling for initial levels of gender-stereotypical behavior. 
Parental mediation strengthened associations between princess engagement and adherence to female gender-stereotypical behavior for both girls and boys, and for body esteem and prosocial behavior for boys only.", "title": "" }, { "docid": "ec49f419b86fc4276ceba06fd0208749", "text": "In order to organize the large number of products listed in e-commerce sites, each product is usually assigned to one of the multi-level categories in the taxonomy tree. It is a time-consuming and difficult task for merchants to select proper categories within thousands of options for the products they sell. In this work, we propose an automatic classification tool to predict the matching category for a given product title and description. We used a combination of two different neural models, i.e., deep belief nets and deep autoencoders, for both titles and descriptions. We implemented a selective reconstruction approach for the input layer during the training of the deep neural networks, in order to scale-out for large-sized sparse feature vectors. GPUs are utilized in order to train neural networks in a reasonable time. We have trained our models for around 150 million products with a taxonomy tree with at most 5 levels that contains 28,338 leaf categories. Tests with millions of products show that our first prediction matches 81% of merchants’ assignments, when “others” categories are excluded.", "title": "" }, { "docid": "c92892ac05025e7ce4dddf1669b43df6", "text": "Joint torque sensing represents one of the foundations and vital components of modern robotic systems that aim to closely match the physical interaction performance of biological systems through the realization of torque controlled actuators. However, despite decades of studies on the development of different torque sensors, the design of accurate and reliable torque sensors still remains challenging for the majority of the robotics community, preventing the use of the technology. 
This letter proposes and evaluates two joint torque sensing elements based on strain gauge and deflection-encoder principles. The two designs are elaborated and their performance from different perspectives and practical factors are evaluated including resolution, nonaxial moments load crosstalk, torque ripple rejection, bandwidth, noise/residual offset level, and thermal/time dependent signal drift. The letter reveals the practical details and the pros and cons of each sensor principle providing valuable contributions into the field toward the realization of higher fidelity joint torque sensing performance.", "title": "" }, { "docid": "52504a4825bf773ced200a675d291dde", "text": "Natural Language Generation (NLG) is defined as the systematic approach for producing human understandable natural language text based on nontextual data or from meaning representations. This is a significant area which empowers human-computer interaction. It has also given rise to a variety of theoretical as well as empirical approaches. This paper intends to provide a detailed overview and a classification of the state-of-the-art approaches in Natural Language Generation. The paper explores NLG architectures and tasks classed under document planning, micro-planning and surface realization modules. Additionally, this paper also identifies the gaps existing in the NLG research which require further work in order to make NLG a widely usable technology.", "title": "" }, { "docid": "e8f09d0b156f890839d18074eac1cc01", "text": "This paper addresses the problems that must be considered if computers are going to treat their users as individuals with distinct personalities, goals, and so forth. It first outlines the issues, and then proposes stereotypes as a useful mechanism for building models of individual users on the basis of a small amount of information about them. In order to build user models quickly, a large amount of uncertain knowledge must be incorporated into the models. 
The issue of how to resolve the conflicts that will arise among such inferences is discussed. A system, Grundy, is described that builds models of its users, with the aid of stereotypes, and then exploits those models to guide it in its task, suggesting novels that people may find interesting. If stereotypes are to be useful to Grundy, they must accurately characterize the users of the system. Some techniques to modify stereotypes on the basis of experience are discussed. An analysis of Grundy's performance shows that its user models are effective in guiding its performance.", "title": "" } ]
scidocsrr
b0772812a9182f6354e8b447ff0558a0
Maximum Power Point Tracking for PV system under partial shading condition via particle swarm optimization
[ { "docid": "470093535d4128efa9839905ab2904a5", "text": "Photovoltaic systems normally use a maximum power point tracking (MPPT) technique to continuously deliver the highest possible power to the load when variations in the insolation and temperature occur. It overcomes the problem of mismatch between the solar arrays and the given load. A simple method of tracking the maximum power points (MPP’s) and forcing the system to operate close to these points is presented. The principle of energy conservation is used to derive the large- and small-signal model and transfer function. By using the proposed model, the drawbacks of the state-space-averaging method can be overcome. The TI320C25 digital signal processor (DSP) was used to implement the proposed MPPT controller, which controls the dc/dc converter in the photovoltaic system. Simulations and experimental results show excellent performance.", "title": "" } ]
[ { "docid": "e4132ac9af863c2c17489817898dbd1c", "text": "This paper presents automatic parallel parking for a car-like vehicle, with highlights on a path planning algorithm for an arbitrary initial angle using two tangential arcs of different radii. The algorithm is divided into three parts. Firstly, a simple kinematic model of the vehicle is established based on Ackerman steering geometry; secondly, not only is a minimal size of the parking space analyzed based on the size and performance of the vehicle, but an appropriate target point is also chosen based on the size of the parking space and the vehicle; finally, a path is generated based on two tangential arcs of different radii. The simulation results show the feasibility of the proposed algorithm.", "title": "" }, { "docid": "26095dbc82b68c32881ad9316256bc42", "text": "BACKGROUND\nSchizophrenia causes great suffering for patients and families. Today, patients are treated with medications, but unfortunately many still have persistent symptoms and an impaired quality of life. During the last 20 years of research in cognitive behavioral therapy (CBT) for schizophrenia, evidence has been found that the treatment is good for patients but it is not satisfactory enough, and more studies are being carried out in the hope of achieving further improvement.\n\n\nPURPOSE\nClinical trials and meta-analyses are being used to try to prove the efficacy of CBT. In this article, we summarize recent research using the cognitive model for people with schizophrenia.\n\n\nMETHODS\nA systematic search was carried out in PubMed (Medline). Relevant articles were selected if they contained a description of cognitive models for schizophrenia or psychotic disorders.\n\n\nRESULTS\nThere is now evidence that positive and negative symptoms exist in a continuum, from normality (mild form and few symptoms) to fully developed disease (intensive form with many symptoms). 
Delusional patients have reasoning biases such as jumping to conclusions, and those with hallucinations have impaired self-monitoring and experience their own thoughts as voices. Patients with negative symptoms have negative beliefs such as low expectations regarding pleasure and success. In the entire patient group, it is common to have low self-esteem.\n\n\nCONCLUSIONS\nThe cognitive model integrates very well with the aberrant salience model. It takes into account neurobiology, cognitive, emotional and social processes. The therapist uses this knowledge when he or she chooses techniques for treatment of patients.", "title": "" }, { "docid": "49ff105e4bd35d88e2cbf988e22a7a3a", "text": "Personality testing is a popular method that used to be commonly employed in selection decisions in organizational settings. However, it is also a controversial practice, according to a number of researchers who claim that explicit measures of personality in particular may be prone to the negative effects of faking and response distortion. The first aim of the present paper is to summarize Morgeson, Campion, Dipboye, Hollenbeck, Murphy and Schmitt's paper that discussed the limitations of personality testing for performance ratings in relation to its basic conclusions about faking and response distortion. Secondly, the results of Rosse, Stecher, Miller and Levin's study that investigated the effects of faking in personality testing on selection decisions will be discussed in detail. 
Finally, recent research findings on implicit personality measures will be introduced, along with examples of results concerning the implications of those measures for response distortion in personality research and suggestions for future research.", "title": "" }, { "docid": "1d1f14cb78693e56d014c89eacfcc3ef", "text": "We undertook a meta-analysis of six Crohn's disease genome-wide association studies (GWAS) comprising 6,333 affected individuals (cases) and 15,056 controls and followed up the top association signals in 15,694 cases, 14,026 controls and 414 parent-offspring trios. We identified 30 new susceptibility loci meeting genome-wide significance (P < 5 × 10^−8). A series of in silico analyses highlighted particular genes within these loci and, together with manual curation, implicated functionally interesting candidate genes including SMAD3, ERAP2, IL10, IL2RA, TYK2, FUT2, DNMT3A, DENND1B, BACH2 and TAGAP. Combined with previously confirmed loci, these results identify 71 distinct loci with genome-wide significant evidence for association with Crohn's disease.", "title": "" }, { "docid": "9a7016a02eda7fcae628197b0625832b", "text": "We present a vertical-silicon-nanowire-based p-type tunneling field-effect transistor (TFET) using a CMOS-compatible process flow. Following our recently reported n-TFET, a low-temperature dopant segregation technique was employed on the source side to achieve a steep dopant gradient, leading to excellent tunneling performance. The fabricated p-TFET devices demonstrate a subthreshold swing (SS) of 30 mV/decade averaged over a decade of drain current and an Ion/Ioff ratio of > 10^5. Moreover, an SS of 50 mV/decade is maintained for three orders of drain current. 
This demonstration completes the complementary pair of TFETs to implement CMOS-like circuits.", "title": "" }, { "docid": "c4fe9fd7e506e18f1a38bc71b7434b99", "text": "We introduce Evenly Cascaded convolutional Network (ECN), a neural network taking inspiration from the cascade algorithm of wavelet analysis. ECN employs two feature streams - a low-level and a high-level stream. At each layer these streams interact, such that low-level features are modulated using advanced perspectives from the high-level stream. ECN is evenly structured through resizing feature map dimensions by a consistent ratio, which removes the burden of ad-hoc specification of feature map dimensions. ECN produces easily interpretable feature maps, a result whose intuition can be understood in the context of scale-space theory. We demonstrate that ECN’s design facilitates the training process through providing easily trainable shortcuts. We report new state-of-the-art results for small networks, without the need for additional treatment such as pruning or compression - a consequence of ECN’s simple structure and direct training. A 6-layered ECN design with under 500k parameters achieves 95.24% and 78.99% accuracy on CIFAR-10 and CIFAR-100 datasets, respectively, outperforming the current state-of-the-art on small parameter networks, and a 3 million parameter ECN produces results competitive with the state-of-the-art.", "title": "" }, { "docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0", "text": "Advanced Driver Assistance Systems (ADAS) based on video cameras are becoming widespread in today's automobiles. However, while most of these systems perform well in good weather conditions, they perform very poorly in adverse weather, particularly rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing techniques to highlight raindrops. 
Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.", "title": "" }, { "docid": "65a990303d1d6efd3aea5307e7db9248", "text": "The presentation of news articles to meet research needs has traditionally been a document-centric process. Yet users often want to monitor developing news stories based on an event, rather than by examining an exhaustive list of retrieved documents. In this work, we illustrate a news retrieval system, eventNews, and an underlying algorithm which is event-centric. Through this system, news articles are clustered around a single news event or an event and its sub-events. The algorithm presented can leverage the creation of new Reuters stories and their compact labels as seed documents for the clustering process. The system is configured to generate top-level clusters for news events based on an editorially supplied topical label, known as a ‘slugline,’ and to generate sub-topic-focused clusters based on the algorithm. The system uses an agglomerative clustering algorithm to gather and structure documents into distinct result sets. Decisions on whether to merge related documents or clusters are made according to the similarity of evidence derived from two distinct sources, one relying on a digital signature based on the unstructured text in the document, the other based on the presence of named entity tags that have been assigned to the document by a named entity tagger, in this case Thomson Reuters’ Calais engine. Copyright © 2016 for the individual papers by the paper’s authors. Copying permitted for private and academic purposes. This volume is published and copyrighted by its editors. In: M. Martinez, U. Kruschwitz, G. Kazai, D. Corney, F. Hopfgartner, R. Campos and D. 
Albakour (eds.): Proceedings of the NewsIR’16 Workshop at ECIR, Padua, Italy, 20-March-2016, published at http://ceur-ws.org", "title": "" }, { "docid": "6e8cf6a53e1a9d571d5e5d1644c56e57", "text": "Previous research on relation classification has verified the effectiveness of using dependency shortest paths or subtrees. In this paper, we further explore how to make full use of the combination of these dependency information. We first propose a new structure, termed augmented dependency path (ADP), which is composed of the shortest dependency path between two entities and the subtrees attached to the shortest path. To exploit the semantic representation behind the ADP structure, we develop dependency-based neural networks (DepNN): a recursive neural network designed to model the subtrees, and a convolutional neural network to capture the most important features on the shortest path. Experiments on the SemEval-2010 dataset show that our proposed method achieves state-of-art results.", "title": "" }, { "docid": "9814af3a2c855717806ad7496d21f40e", "text": "This chapter gives an extended introduction to the lightweight profiles OWL EL, OWL QL, and OWL RL of the Web Ontology Language OWL. The three ontology language standards are sublanguages of OWL DL that are restricted in ways that significantly simplify ontological reasoning. Compared to OWL DL as a whole, reasoning algorithms for the OWL profiles show higher performance, are easier to implement, and can scale to larger amounts of data. Since ontological reasoning is of great importance for designing and deploying OWL ontologies, the profiles are highly attractive for many applications. These advantages come at a price: various modelling features of OWL are not available in all or some of the OWL profiles. Moreover, the profiles are mutually incomparable in the sense that each of them offers a combination of features that is available in none of the others. 
This chapter provides an overview of these differences and explains why some of them are essential to retain the desired properties. To this end, we recall the relationship between OWL and description logics (DLs), and show how each of the profiles is typically treated in reasoning algorithms.", "title": "" }, { "docid": "1f93c117c048be827d0261f419c9cce3", "text": "Due to the increasing number of internet users, the popularity of broadband Internet is also increasing. Connection costs should therefore decrease, thanks to Wi-Fi connectivity and built-in sensors in devices, and the maximum number of devices should be connectable through a common medium. To meet all these requirements, the technology called the Internet of Things evolved. The Internet of Things (IoT) can be considered a connection of computing devices such as smart phones, coffee makers, washing machines, and wearable devices to the internet. IoT creates networks and connects "things" and people together by creating relationships that are either people-people, people-things, or things-things. As the number of device connections increases, so does the security risk. Security is the biggest IoT issue for companies across the globe. Furthermore, privacy and data sharing can again be considered security concerns for IoT. Companies that use IoT techniques need to find a way to store, track, analyze, and make sense of the large amounts of data that will be generated. A few security techniques for IoT are necessary to protect confidential and important data, as well as the devices themselves, from internet security threats.", "title": "" }, { "docid": "e62e09ce3f4f135b12df4d643df02de6", "text": "Septic arthritis/tenosynovitis in the horse can have life-threatening consequences. The purpose of this cross-sectional retrospective study was to describe ultrasound characteristics of septic arthritis/tenosynovitis in a group of horses. 
Diagnosis of septic arthritis/tenosynovitis was based on historical and clinical findings as well as the results of the synovial fluid analysis and/or positive synovial culture. Ultrasonographic findings recorded were degree of joint/sheath effusion, degree of synovial membrane thickening, echogenicity of the synovial fluid, and presence of hyperechogenic spots and fibrinous loculations. Ultrasonographic findings were tested for dependence on the cause of sepsis, time between admission and beginning of clinical signs, and the white blood cell counts in the synovial fluid. Thirty-eight horses with confirmed septic arthritis/tenosynovitis of 43 joints/sheaths were included. Degree of effusion was marked in 81.4% of cases, mild in 16.3%, and absent in 2.3%. Synovial thickening was mild in 30.9% of cases and moderate/severe in 69.1%. Synovial fluid was anechogenic in 45.2% of cases and echogenic in 54.8%. Hyperechogenic spots were identified in 32.5% of structures and fibrinous loculations in 64.3%. Relationships between the degree of synovial effusion, degree of the synovial thickening, presence of fibrinous loculations, and the time between admission and beginning of clinical signs were identified, as well as between the presence of fibrinous loculations and the cause of sepsis (P ≤ 0.05). Findings indicated that ultrasonographic findings of septic arthritis/tenosynovitis may vary in horses, and may be influenced by time between admission and beginning of clinical signs.", "title": "" }, { "docid": "41d97d98a524e5f1e45ae724017819d9", "text": "Dynamically changing (reconfiguring) the membership of a replicated distributed system while preserving data consistency and system availability is a challenging problem. In this paper, we show that reconfiguration can be simplified by taking advantage of certain properties commonly provided by Primary/Backup systems. We describe a new reconfiguration protocol, recently implemented in Apache Zookeeper. 
It fully automates configuration changes and minimizes any interruption in service to clients while maintaining data consistency. By leveraging the properties already provided by Zookeeper, our protocol is considerably simpler than the state of the art.", "title": "" }, { "docid": "9d75520f138bcf7c529488f29d01efbb", "text": "High utilization of cargo volume is an essential factor in the success of modern enterprises in the market. Although mathematical models have been presented for container loading problems in the literature, there is still a lack of studies that consider practical constraints. In this paper, a Mixed Integer Linear Programming model is developed for the problem of packing a subset of rectangular boxes inside a container such that the total value of the packed boxes is maximized while some realistic constraints, such as vertical stability, are considered. The packing is orthogonal, and the boxes can be freely rotated into any of the six orientations. Moreover, a sequence triple-based solution methodology is proposed, simulated annealing is used as the modeling technique, and the situation where some boxes are preplaced in the container is investigated. These preplaced boxes represent potential obstacles. Numerical experiments are conducted for containers with and without obstacles. The results show that the simulated annealing approach is successful and can handle a large number of packing instances.", "title": "" }, { "docid": "d5907911dfa7340b786f85618702ac12", "text": "In recent years many popular data visualizations have emerged that are created largely by designers whose main area of expertise is not computer science. Designers generate these visualizations using a handful of design tools and environments. To better inform the development of tools intended for designers working with data, we set out to understand designers' challenges and perspectives. 
We interviewed professional designers, conducted observations of designers working with data in the lab, and observed designers working with data in team settings in the wild. A set of patterns emerged from these observations from which we extract a number of themes that provide a new perspective on design considerations for visualization tool creators, as well as on known engineering problems.", "title": "" }, { "docid": "baad4c23994bafbdfba2a3d566c83558", "text": "Memories today expose an all-or-nothing correctness model that incurs significant costs in performance, energy, area, and design complexity. But not all applications need high-precision storage for all of their data structures all of the time. This article proposes mechanisms that enable applications to store data approximately and shows that doing so can improve the performance, lifetime, or density of solid-state memories. We propose two mechanisms. The first allows errors in multilevel cells by reducing the number of programming pulses used to write them. The second mechanism mitigates wear-out failures and extends memory endurance by mapping approximate data onto blocks that have exhausted their hardware error correction resources. Simulations show that reduced-precision writes in multilevel phase-change memory cells can be 1.7 × faster on average and using failed blocks can improve array lifetime by 23% on average with quality loss under 10%.", "title": "" }, { "docid": "a31652c0236fb5da569ffbf326eb29e5", "text": "Since 2012, citizens in Alaska, Colorado, Oregon, and Washington have voted to legalize the recreational use of marijuana by adults. 
Advocates of legalization have argued that prohibition wastes scarce law enforcement resources by selectively arresting minority users of a drug that has fewer adverse health effects than alcohol.1,2 It would be better, they argue, to legalize, regulate, and tax marijuana, like alcohol.3 Opponents of legalization argue that it will increase marijuana use among youth because it will make marijuana more available at a cheaper price and reduce the perceived risks of its use.4 Cerdá et al5 have assessed these concerns by examining the effects of marijuana legalization in Colorado and Washington on attitudes toward marijuana and reported marijuana use among young people. They used surveys from Monitoring the Future between 2010 and 2015 to examine changes in the perceived risks of occasional marijuana use and self-reported marijuana use in the last 30 days among students in eighth, 10th, and 12th grades in Colorado and Washington before and after legalization. They compared these changes with changes among students in states in the contiguous United States that had not legalized marijuana (excluding Oregon, which legalized in 2014). The perceived risks of using marijuana declined in all states, but there was a larger decline in perceived risks and a larger increase in marijuana use in the past 30 days among eighth and 10th graders from Washington than among students from other states. They did not find any such differences between students in Colorado and students in other US states that had not legalized, nor did they find any of these changes in 12th graders in Colorado or Washington. If the changes observed in Washington are attributable to legalization, why were there no changes found in Colorado? The authors suggest that this may have been because Colorado’s medical marijuana laws were much more liberal before legalization than those in Washington. 
After 2009, Colorado permitted medical marijuana to be supplied through for-profit dispensaries and allowed advertising of medical marijuana products. This hypothesis is supported by other evidence that the perceived risks of marijuana use decreased and marijuana use increased among young people in Colorado after these changes in 2009.6", "title": "" }, { "docid": "42d3f666325c3c9e2d61fcbad3c6659a", "text": "Supernumerary or accessory nostrils are a very rare type of congenital nasal anomaly, with only a few cases reported in the literature. They can be associated with such malformations as facial clefts and they can be unilateral or bilateral, with most cases reported being unilateral. The accessory nostril may or may not communicate with the ipsilateral nasal cavity, probably depending on the degree of embryological progression of the anomaly. A case of a simple supernumerary left nostril with no nasal cavity communication and with a normally developed nose is presented. The surgical treatment is described and the different speculative theories related to the embryogenesis of supernumerary nostrils are also reviewed.", "title": "" }, { "docid": "468cdc4decf3871314ce04d6e49f6fad", "text": "Documents come naturally with structure: a section contains paragraphs, which themselves contain sentences; a blog page contains a sequence of comments and links to related blogs. Structure, of course, implies something about shared topics. In this paper we take the simplest form of structure, a document consisting of multiple segments, as the basis for a new form of topic model. To make this computationally feasible, and to allow the form of collapsed Gibbs sampling that has worked well to date with topic models, we use the marginalized posterior of a two-parameter Poisson-Dirichlet process (or Pitman-Yor process) to handle the hierarchical modelling. 
Experiments using either paragraphs or sentences as segments show the method significantly outperforms standard topic models on either whole document or segment, and previous segmented models, based on the held-out perplexity measure.", "title": "" }, { "docid": "578130d8ef9d18041c84ed226af8c84a", "text": "Ranking and scoring are ubiquitous. We consider the setting in which an institution, called a ranker, evaluates a set of individuals based on demographic, behavioral or other characteristics. The final output is a ranking that represents the relative quality of the individuals. While automatic and therefore seemingly objective, rankers can, and often do, discriminate against individuals and systematically disadvantage members of protected groups. This warrants a careful study of the fairness of a ranking scheme, to enable data science for social good applications, among others.\n In this paper we propose fairness measures for ranked outputs. We develop a data generation procedure that allows us to systematically control the degree of unfairness in the output, and study the behavior of our measures on these datasets. We then apply our proposed measures to several real datasets, and detect cases of bias. Finally, we show preliminary results of incorporating our ranked fairness measures into an optimization framework, and show potential for improving fairness of ranked outputs while maintaining accuracy.\n The code implementing all parts of this work is publicly available at https://github.com/DataResponsibly/FairRank.", "title": "" } ]
scidocsrr
0ae27fdbbbfcd6caa4c720afb631f538
Privacy-Preserving Deep Inference for Rich User Data on The Cloud
[ { "docid": "0a968f1dcba70ab1a42c25b1a6ec2a5c", "text": "In recent years, privacy-preserving data mining has been studied extensively, because of the wide proliferation of sensitive information on the internet. A number of algorithmic techniques have been designed for privacy-preserving data mining. In this paper, we provide a review of the state-of-the-art methods for privacy. We discuss methods for randomization, k-anonymization, and distributed privacy-preserving data mining. We also discuss cases in which the output of data mining applications needs to be sanitized for privacy-preservation purposes. We discuss the computational and theoretical limits associated with privacy-preservation over high dimensional data sets.", "title": "" } ]
[ { "docid": "9e3de4720dade2bb73d78502d7cccc8b", "text": "Skeletonization is a way to reduce dimensionality of digital objects. Here, we present an algorithm that computes the curve skeleton of a surface-like object in a 3D image, i.e., an object that in one of the three dimensions is at most two voxels thick. A surface-like object consists of surfaces and curves crossing each other. Its curve skeleton is a 1D set centred within the surface-like object and with preserved topological properties. It can be useful to achieve a qualitative shape representation of the object with reduced dimensionality. The basic idea behind our algorithm is to detect the curves and the junctions between different surfaces and prevent their removal as they retain the most significant shape representation.", "title": "" }, { "docid": "7ef40f6fb743ba331a9878ca8019bb7e", "text": "Modern neural networks are often augmented with an attention mechanism, which tells the network where to focus within the input. We propose in this paper a new framework for sparse and structured attention, building upon a smoothed max operator. We show that the gradient of this operator defines a mapping from real values to probabilities, suitable as an attention mechanism. Our framework includes softmax and a slight generalization of the recently-proposed sparsemax as special cases. However, we also show how our framework can incorporate modern structured penalties, resulting in more interpretable attention mechanisms, that focus on entire segments or groups of an input. We derive efficient algorithms to compute the forward and backward passes of our attention mechanisms, enabling their use in a neural network trained with backpropagation. To showcase their potential as a drop-in replacement for existing ones, we evaluate our attention mechanisms on three large-scale tasks: textual entailment, machine translation, and sentence summarization. 
Our attention mechanisms improve interpretability without sacrificing performance; notably, on textual entailment and summarization, we outperform the standard attention mechanisms based on softmax and sparsemax.", "title": "" }, { "docid": "d166f4cd01d22d7143487b691138023c", "text": "Although Bitcoin is often perceived to be an anonymous currency, research has shown that a user’s Bitcoin transactions can be linked to compromise the user’s anonymity. We present solutions to the anonymity problem for both transactions on Bitcoin’s blockchain and off the blockchain (in so called micropayment channel networks). We use an untrusted third party to issue anonymous vouchers which users redeem for Bitcoin. Blind signatures and Bitcoin transaction contracts (aka smart contracts) ensure the anonymity and fairness during the bitcoin ↔ voucher exchange. Our schemes are practical, secure and anonymous.", "title": "" }, { "docid": "98c3e7dd0c383e7cc934efa6113384ca", "text": "In the event of nuclear disasters or biochemical terror attacks, there is a strong need for robots that can move around and collect information at the disaster site. Such a robot should be tough and highly mobile among stairs and obstacles. In this study, we propose a brand new type of mobile base, named “crank-wheel”, suitable for such use. The crank-wheel consists of wheels and a connecting coupler link named the crank-leg. The crank-wheel produces simple and smooth wheeled motion on flat ground and automatically transforms to walking motion on rugged terrain, as the crank-legs start to contact the surface of the rugged terrain and act as legs. This mechanism features a simple structure that is easy to make water- and dust-proof, and a limited danger of biting rubble in the driving mechanism, as in the case of tracked vehicles. 
The effectiveness of the Crank-wheel is confirmed by several driving experiments on debris, sand and bog.", "title": "" }, { "docid": "f472c6ee8382cfb508fbca29b1caade6", "text": "Modern digital systems are severely constrained by both battery life and operating temperatures, resulting in strict limits on total power consumption and power density. To continue to scale digital throughput at constant power density, there is a need for increasing parallelism and dynamic voltage/bias scaling. This work presents an architecture and power converter implementation providing efficient power-delivery for microprocessors and other high-performance digital circuits stacked in vertical voltage domains. A multi-level DC-DC converter interfaces between a fixed DC voltage and multiple 0.7 V to 1.4 V voltage domains stacked in series. The converter implements dynamic voltage scaling (DVS) with multi-objective digital control implemented in an on-board (embedded) digital control system. We present measured results demonstrating functional multi-core DVS and performance with moderate load current steps. The converter demonstrates the use of a two-phase interleaved powertrain with coupled inductors to achieve voltage and current ripple reduction for the stacked ladder-converter architecture.", "title": "" }, { "docid": "52a6319c28c6c889101d9b2b6d4a76d3", "text": "A method is developed for imputing missing values when the probability of response depends upon the variable being imputed. The missing data problem is viewed as one of parameter estimation in a regression model with stochastic censoring of the dependent variable. The prediction approach to imputation is used to solve this estimation problem. Wages and salaries are imputed to nonrespondents in the Current Population Survey and the results are compared to the nonrespondents' IRS wage and salary data. 
The stochastic censoring approach gives improved results relative to a prediction approach that ignores the response mechanism.", "title": "" }, { "docid": "6aaabe17947bc455d940047745ed7962", "text": "In this paper, we want to study how natural and engineered systems could perform complex optimizations with limited computational and communication capabilities. We adopt a continuous-time dynamical system view rooted in early work on optimization and more recently in network protocol design, and merge it with the dynamic view of distributed averaging systems. We obtain a general approach, based on the control system viewpoint, that allows one to analyze and design (distributed) optimization systems converging to the solution of given convex optimization problems. The control system viewpoint provides many insights and new directions of research. We apply the framework to a distributed optimal location problem and demonstrate the natural tracking and adaptation capabilities of the system to changing constraints.", "title": "" }, { "docid": "d2b27ab3eb0aa572fdf8f8e3de6ae952", "text": "Both industry and academia have extensively investigated hardware accelerations. To address the demands in increasing computational capability and memory requirement, in this work, we propose the structured weight matrices (SWM)-based compression technique for both Field Programmable Gate Array (FPGA) and application-specific integrated circuit (ASIC) implementations. In the algorithm part, the SWM-based framework adopts block-circulant matrices to achieve a fine-grained tradeoff between accuracy and compression ratio. The SWM-based technique can reduce computational complexity from O(n^2) to O(n log n) and storage complexity from O(n^2) to O(n) for each layer and both training and inference phases. 
For FPGA implementations on deep convolutional neural networks (DCNNs), we achieve at least 152X and 72X improvement in performance and energy efficiency, respectively, using the SWM-based framework, compared with the baseline of the IBM TrueNorth processor under the same accuracy constraints on the MNIST, SVHN, and CIFAR-10 data sets. For FPGA implementations on long short term memory (LSTM) networks, the proposed SWM-based LSTM can achieve up to 21X enhancement in performance and 33.5X gains in energy efficiency compared with the ESE accelerator. For ASIC implementations, the proposed SWM-based ASIC design exhibits impressive advantages in terms of power, throughput, and energy efficiency. Experimental results indicate that this method is highly suitable for applying DNNs to both FPGAs and mobile/IoT devices.", "title": "" }, { "docid": "db02af0f6c2994e4348c1f7c4f3191ce", "text": "American students rank well below international peers in the disciplines of science, technology, engineering, and mathematics (STEM). Early exposure to STEM-related concepts is critical to later academic achievement. Given the rise of tablet-computer use in early childhood education settings, interactive technology might be one particularly fruitful way of supplementing early STEM education. Using a between-subjects experimental design, we sought to determine whether preschoolers could learn a fundamental math concept (i.e., measurement with non-standard units) from educational technology, and whether interactivity is a crucial component of learning from that technology. Participants who either played an interactive tablet-based game or viewed a non-interactive video demonstrated greater transfer of knowledge than those assigned to a control condition. Interestingly, interactivity contributed to better performance on near transfer tasks, while participants in the non-interactive condition performed better on far transfer tasks. 
Our findings suggest that, while preschool-aged children can learn early STEM skills from educational technology, interactivity may only further support learning in certain", "title": "" }, { "docid": "9db779a5a77ac483bb1991060dca7c28", "text": "An Ambient Intelligence (AmI) environment is primary developed using intelligent agents and wireless sensor networks. The intelligent agents could automatically obtain contextual information in real time using Near Field Communication (NFC) technique and wireless ad-hoc networks. In this research, we propose a stock trading and recommendation system with mobile devices (Android platform) interface in the over-the-counter market (OTC) environments. The proposed system could obtain the real-time financial information of stock price through a multi-agent architecture with plenty of useful features. In addition, NFC is used to achieve a context-aware environment allowing for automatic acquisition and transmission of useful trading recommendations and relevant stock information for investors. Finally, AmI techniques are applied to successfully create smart investment spaces, providing investors with useful monitoring tools and investment recommendation.", "title": "" }, { "docid": "c159f32bda951cf15a886ff27b4aef8c", "text": "We consider the problem of using image queries to retrieve videos from a database. Our focus is on large-scale applications, where it is infeasible to index each database video frame independently. Our main contribution is a framework based on Bloom filters, which can be used to index long video segments, enabling efficient image-to-video comparisons. Using this framework, we investigate several retrieval architectures, by considering different types of aggregation and different functions to encode visual information – these play a crucial role in achieving high performance. 
Extensive experiments show that the proposed technique improves mean average precision by 24% on a public dataset, while being 4× faster, compared to the previous state-of-the-art.", "title": "" }, { "docid": "bf7cd2303c325968879da72966054427", "text": "Object detection methods fall into two categories, i.e., two-stage and single-stage detectors. The former is characterized by high detection accuracy while the latter usually has considerable inference speed. Hence, it is imperative to fuse their metrics for a better accuracy vs. speed trade-off. To this end, we propose a dual refinement network (DRN) to boost the performance of the single-stage detector. Inheriting from the advantages of two-stage approaches (i.e., two-step regression and accurate features for detection), anchor refinement and feature offset refinement are conducted in anchor-offset detection, where the detection head is composed of deformable convolutions. Moreover, to leverage contextual information for describing objects, we design a multi-deformable head, in which multiple detection paths with different receptive field sizes devote themselves to detecting objects. Extensive experiments on PASCAL VOC and ImageNet VID datasets are conducted, and we achieve the state-of-the-art results and a better accuracy vs. speed trade-off, i.e., 81.4% mAP vs. 42.3 FPS on the VOC2007 test set. Codes will be publicly available.", "title": "" }, { "docid": "7095bf529a060dd0cd7eeb2910998cf8", "text": "The proliferation of the internet, along with the attractiveness of the web in recent years, has made web mining a research area of great magnitude. Web mining has many advantages which make this technology attractive to researchers. The analysis of web users’ navigational patterns within a web site can provide useful information for applications like server performance enhancement, restructuring a web site, direct marketing in e-commerce, etc. 
The navigation paths may be explored based on some similarity criteria, in order to draw useful inferences about web usage. The objective of this paper is to propose an effective clustering technique to group users’ sessions by modifying the K-means algorithm, and to suggest a method to compute the distance between sessions based on the similarity of their web access paths, which takes care of the issue of user sessions being of variable length.", "title": "" }, { "docid": "47d7ba349d6b1d2f1024e8eed003b40b", "text": "Although motion blur and rolling shutter deformations are closely coupled artifacts in images taken with CMOS image sensors, the two phenomena have so far mostly been treated separately, with deblurring algorithms being unable to handle rolling shutter wobble, and rolling shutter algorithms being incapable of dealing with motion blur. We propose an approach that delivers sharp and undistorted output given a single rolling shutter motion blurred image. The key to achieving this is a global modeling of the camera motion trajectory, which enables each scanline of the image to be deblurred with the corresponding motion segment. We show the results of the proposed framework through experiments on synthetic and real data.", "title": "" }, { "docid": "b94d146408340ce2a89b95f1b47e91f6", "text": "In order to improve the quality of life of amputees, many researchers have considered providing prosthetic hands with manipulation abilities approximating those of the human hand. In this study, a biomechanical model of the index finger of the human hand is developed based on the human anatomy. Since the activation of finger bones is carried out by tendons, a tendon configuration of the index finger is introduced and used in the model to imitate the human hand characteristics and functionality. Then, fuzzy sliding mode control, where the slope of the sliding surface is tuned by a fuzzy logic unit, is proposed and applied to make the finger model follow a certain trajectory. 
The trajectory of the finger model, which mimics the motion characteristics of the human hand, is pre-determined from the camera images of a real hand during closing and opening motion. Also, in order to check the robust behaviour of the controller, an unexpected joint friction is induced on the prosthetic finger on its way. Finally, the resultant prosthetic finger motion and the tendon forces produced are given and the results are discussed.", "title": "" }, { "docid": "d1d14d5f16b4a32576e9a6c43e75138f", "text": "and cost of the product. Not all materials can be scaled-up with the same mixing process. Frequently, scaling-up the mixing process from small research batches to large quantities, necessary for production, can lead to unexpected problems. This reference book is intended to help the reader both identify and solve mixing problems. It is a comprehensive handbook that provides excellent coverage on the fundamentals, design, and applications of current mixing technology in general. Although this book includes many technology areas, one of the main areas of interest to our readers would be the polymer processing area. This would include the first eight chapters in the book and a specific application chapter on polymer processing. These cover the fundamentals of mixing technology, important to polymer processing, including residence time distributions and laminar mixing techniques. In the experimental section of the book, some of the relevant tools and techniques cover flow visualization technologies, lab scale mixing, flow and torque measurements, CFD coding, and numerical methods. There is a good overview of various types of mixers used for polymer processing in a dedicated applications chapter on mixing high viscosity materials such as polymers. There are many details given on the differences between the mixing blades in various types of high viscosity mixers and suggestions for choosing the proper mixer for high viscosity applications. 
The majority of the book does, however, focus on the chemical, petroleum, and pharmaceutical industries that generally process materials with much lower viscosity than polymers. The reader interested in learning about the fundamentals of mixing in general as well as some specifics on polymer processing would find this book to be a useful reference.", "title": "" }, { "docid": "e9676faf7e8d03c64fdcf6aa5e09b008", "text": "In this paper, a novel subspace method called diagonal principal component analysis (DiaPCA) is proposed for face recognition. In contrast to standard PCA, DiaPCA directly seeks the optimal projective vectors from diagonal face images without image-to-vector transformation. In contrast to 2DPCA, DiaPCA preserves the correlations between variations of rows and those of columns of the images. Experiments show that DiaPCA is much more accurate than both PCA and 2DPCA. Furthermore, it is shown that the accuracy can be further improved by combining DiaPCA with 2DPCA.", "title": "" }, { "docid": "f3e38f283156ce65d8cfa937a55f9d0f", "text": "A novel multi-objective evolutionary algorithm (MOEA) is developed based on the Imperialist Competitive Algorithm (ICA), a newly introduced evolutionary algorithm (EA). Fast non-dominated sorting and the Sigma method are employed for ranking the solutions. The algorithm is tested on six well-known test functions, each of which incorporates a particular feature that may cause difficulty for MOEAs. The numerical results indicate that MOICA shows significantly higher efficiency in terms of accuracy and maintaining a diverse population of solutions when compared to the existing salient MOEAs, namely the fast elitism non-dominated sorting genetic algorithm (NSGA-II) and multi-objective particle swarm optimization (MOPSO). Considering the computational time, the proposed algorithm is slightly faster than MOPSO and significantly outperforms NSGA-II. 
Keywords: Multi-objective Imperialist Competitive Algorithm, multi-objective optimization, Pareto front.", "title": "" }, { "docid": "fee50f8ab87f2b97b83ca4ef92f57410", "text": "Ontologies now play an important role for many knowledge-intensive applications for which they provide a source of precisely defined terms. However, with their wide-spread usage there come problems concerning their proliferation. Ontology engineers or users frequently have a core ontology that they use, e.g., for browsing or querying data, but they need to extend it with, adapt it to, or compare it with the large set of other ontologies. For the task of detecting and retrieving relevant ontologies, one needs means for measuring the similarity between ontologies. We present a set of ontology similarity measures and a multiple-phase empirical evaluation.", "title": "" } ]
scidocsrr
98463290f3e6afe821617921e80fba92
A Systematic Review of the Use of Blockchain in Healthcare
[ { "docid": "d01339e077c9d8300b4616e7c713f48e", "text": "Blockchains as a technology emerged to facilitate money exchange transactions and eliminate the need for a trusted third party to notarize and verify such transactions as well as protect data security and privacy. New structures of Blockchains have been designed to accommodate the need for this technology in other fields such as e-health, tourism and energy. This paper is concerned with the use of Blockchains in managing and sharing electronic health and medical records to allow patients, hospitals, clinics, and other medical stakeholders to share data amongst themselves, and increase interoperability. The selection of the Blockchain architecture used depends on the entities participating in the constructed chain network. Although the use of Blockchains may reduce redundancy and provide caregivers with consistent records about their patients, it still comes with a few challenges which could infringe on patients' privacy, or potentially compromise the whole network of stakeholders. In this paper, we investigate different Blockchain structures, look at existing challenges and provide possible solutions. We focus on challenges that may expose patients' privacy and the resiliency of Blockchains to possible attacks.", "title": "" }, { "docid": "91c4a82cfcf69c75352d569a883ea0d3", "text": "Permissionless blockchain-based cryptocurrencies commonly use proof-of-work (PoW) or proof-of-stake (PoS) to ensure their security, e.g. to prevent double spending attacks. However, both approaches have disadvantages: PoW leads to massive amounts of wasted electricity and re-centralization, whereas major stakeholders in PoS might be able to create a monopoly. In this work, we propose proof-of-personhood (PoP), a mechanism that binds physical entities to virtual identities in a way that enables accountability while preserving anonymity. 
Afterwards we introduce PoPCoin, a new cryptocurrency, whose consensus mechanism leverages PoP to eliminate the disadvantages of PoW and PoS while ensuring security. PoPCoin leads to a continuously fair and democratic wealth creation process which paves the way for an experimental basic income infrastructure.", "title": "" } ]
[ { "docid": "22255906a7f1d30c9600728a6dc9ad9f", "text": "The next major step in the evolution of LTE targets the rapidly increasing demand for mobile broadband services and traffic volumes. One of the key technologies is a new carrier type, referred to in this article as a Lean Carrier, an LTE carrier with minimized control channel overhead and cell-specific reference signals. The Lean Carrier can enhance spectral efficiency, increase spectrum flexibility, and reduce energy consumption. This article provides an overview of the motivations and main use cases of the Lean Carrier. Technical challenges are highlighted, and design options are discussed; finally, a performance evaluation quantifies the benefits of the Lean Carrier.", "title": "" }, { "docid": "8dee3ada764a40fce6b5676287496ccd", "text": "We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video. While its image counterpart, the image-to-image translation problem, is a popular topic, the video-to-video synthesis problem is less explored in the literature. Without modeling temporal dynamics, directly applying existing image synthesis approaches to an input video often results in temporally incoherent videos of low visual quality. In this paper, we propose a video-to-video synthesis approach under the generative adversarial learning framework. Through carefully-designed generators and discriminators, coupled with a spatio-temporal adversarial objective, we achieve high-resolution, photorealistic, temporally coherent video results on a diverse set of input formats including segmentation masks, sketches, and poses. Experiments on multiple benchmarks show the advantage of our method compared to strong baselines. 
In particular, our model is capable of synthesizing 2K resolution videos of street scenes up to 30 seconds long, which significantly advances the state-of-the-art of video synthesis. Finally, we apply our method to future video prediction, outperforming several competing systems. Code, models, and more results are available at our website.", "title": "" }, { "docid": "44cf5669d05a759ab21b3ebc1f6c340d", "text": "Linear variable differential transformer (LVDT) sensors are widely used in hydraulic and pneumatic mechatronic systems for measuring physical quantities like displacement, force or pressure. The LVDT sensor consists of two magnetically coupled coils with a common core, and this sensor converts the displacement of the core into a reluctance variation of the magnetic circuit. LVDT sensors combine good accuracy (0.1 % error) with low cost, but they require relatively complex electronics. Standard electronics for LVDT sensor conditioning is analog - the coupled coils constitute an inductive half-bridge supplied with 5 kHz sinus excitation from a quadrature oscillator. The output phase span is amplified and synchronously demodulated. This analog technology works well but has its drawbacks - hard to adjust, many components and packages, no connection to computer systems. To eliminate all these disadvantages, our team from \"Politehnica\" University of Bucharest has developed an LVDT signal conditioner using the system on chip microcontroller MSP430F149 from Texas Instruments. This device integrates all peripherals required for LVDT signal conditioning (pulse width modulation modules, analog to digital converter, timers, enough memory resources and processing power) and also offers excellent low-power options. The resulting electronic module is a one-chip solution made entirely in SMD technology and its small dimensions allow its integration into the sensor's body. 
The present paper focuses on specific issues of this digital solution for LVDT conditioning and compares it with the classic analog solution from different points of view: error curve, power consumption, communication options, dimensions and production cost. Microcontroller software (firmware) and digital signal conditioning techniques for the LVDT are also analyzed. The use of system on chip devices for signal conditioning allows the realization of low-cost, compact transducers with the same or better performance than their analog counterparts, but with extra options like serial communication channels, self-calibration, local storage of measured values and fault detection.", "title": "" }, { "docid": "467ff4b60acb874c0430ae4c20d62137", "text": "The purpose of this paper is twofold. First, we give a survey of the known methods of constructing lattices in complex hyperbolic space. Secondly, we discuss some of the lattices constructed by Deligne and Mostow and by Thurston in detail. In particular, we give a unified treatment of the constructions of fundamental domains and we relate this to other properties of these lattices.", "title": "" }, { "docid": "b8bcd83f033587533d7502c54a2b67da", "text": "The development of structural health monitoring (SHM) technology has evolved for over fifteen years in Hong Kong since the implementation of the “Wind And Structural Health Monitoring System (WASHMS)” on the suspension Tsing Ma Bridge in 1997. Five cable-supported bridges in Hong Kong, namely the Tsing Ma (suspension) Bridge, the Kap Shui Mun (cable-stayed) Bridge, the Ting Kau (cable-stayed) Bridge, the Western Corridor (cable-stayed) Bridge, and the Stonecutters (cable-stayed) Bridge, have been instrumented with sophisticated long-term SHM systems. These SHM systems mainly focus on the tracing of structural behavior and condition of the long-span bridges over their lifetime. 
Recently, a structural health monitoring and maintenance management system (SHM&MMS) has been designed and will be implemented on twenty-one sea-crossing viaduct bridges with a total length of 9.283 km in the Hong Kong Link Road (HKLR) of the Hong Kong – Zhuhai – Macao Bridge of which the construction commenced in mid-2012. The SHM&MMS gives more emphasis on durability monitoring of the reinforced concrete viaduct bridges in marine environment and integration of the SHM system and bridge maintenance management system. It is targeted to realize the transition from traditional corrective and preventive maintenance to condition-based maintenance (CBM) of in-service bridges. The CBM uses real-time and continuous monitoring data and monitoring-derived information on the condition of bridges (including structural performance and deterioration mechanisms) to identify when the actual maintenance is necessary and how cost-effective maintenance can be conducted. This paper outlines how to incorporate SHM technology into bridge maintenance strategy to realize CBM management of bridges.", "title": "" }, { "docid": "3b1b829e6d017d574562e901f4963bc4", "text": "Many problems in AI are simplified by clever representations of sensory or symbolic input. How to discover such representations automatically, from large amounts of unlabeled data, remains a fundamental challenge. The goal of statistical methods for dimensionality reduction is to detect and discover low dimensional structure in high dimensional data. In this paper, we review a recently proposed algorithm—maximum variance unfolding—for learning faithful low dimensional representations of high dimensional data. 
The algorithm relies on modern tools in convex optimization that are proving increasingly useful in many areas of machine learning.", "title": "" }, { "docid": "aab5b2bb3061abc2405700a1001a464d", "text": "Although social skills group interventions for children with autism are common in outpatient clinic settings, little research has been conducted to determine the efficacy of such treatments. This study examined the effectiveness of an outpatient clinic-based social skills group intervention with four high-functioning elementary-aged children with autism. The group was designed to teach specific social skills, including greeting, conversation, and play skills in a brief therapy format (eight sessions total). At the end of each skills-training session, children with autism were observed in play sessions with typical peers. Typical peers received peer education about ways to interact with children with autism. Results indicate that a social skills group implemented in an outpatient clinic setting was effective in improving greeting and play skills, with less clear improvements noted in conversation skills. In addition, children with autism reported increased feelings of social support from classmates at school following participation in the group. However, parent report data of greeting, conversation, and play skills outside of the clinic setting indicated significant improvements in only greeting skills. Thus, although the clinic-based intervention led to improvements in social skills, fewer changes were noted in the generalization to nonclinic settings.", "title": "" }, { "docid": "1df3f59834420b108677e0a40e4cac63", "text": "We extend classic review mining work by building a binary classifier that predicts whether a review of a documentary film was written by an expert or a layman with 90.70% accuracy (F1 score), and compare the characteristics of the predicted classes. A variety of standard lexical and syntactic features was used for this supervised learning task. 
Our results suggest that experts write comparatively lengthier and more detailed reviews that feature more complex grammar and a higher diversity in their vocabulary. Layman reviews are more subjective and contextualized in people’s everyday lives. Our error analysis shows that laymen are about twice as likely to be mistaken for experts as vice versa. We argue that the type of author might be a useful new feature for improving the accuracy of predicting the rating, helpfulness, and authenticity of reviews. Finally, the outcomes of this work might help researchers and practitioners in the field of impact assessment to gain a more fine-grained understanding of the perception of different types of media consumers and reviewers of a topic, genre, or information product.", "title": "" }, { "docid": "1df3f59834420b108677e0a40e4cac63", "text": "Cancer and other chronic diseases have constituted (and will do so at an increasing pace) a significant portion of healthcare costs in the United States in recent years. Although prior research has shown that diagnostic and treatment recommendations might be altered based on the severity of comorbidities, chronic diseases are still being investigated in isolation from one another in most cases. To illustrate the significance of concurrent chronic diseases in the course of treatment, this study uses SEER’s cancer data to create two comorbid data sets: one for breast and female genital cancers and another for prostate and urinary cancers. Several popular machine learning techniques are then applied to the resultant data sets to build predictive models. Comparison of the results shows that having more information about comorbid conditions of patients can improve models’ predictive power, which, in turn, can help practitioners make better diagnostic and treatment decisions.
Therefore, proper identification, recording, and use of patients’ comorbidity status can potentially lower treatment costs and ease healthcare-related economic challenges.", "title": "" }, { "docid": "2c5ba4f458b3d185f8b73d091a9b696c", "text": "Community structure is one of the key properties of real-world complex networks. It plays a crucial role in their behaviors and topology. While important work has been done on the issue of community detection, very little attention has been devoted to the analysis of the community structure. In this paper, we present an extensive investigation of the overlapping community network deduced from a large-scale co-authorship network. The nodes of the overlapping community network represent the functional communities of the co-authorship network, and the links account for the fact that communities share some nodes in the co-authorship network. The comparative evaluation of the topological properties of these two networks shows that they share similar topological properties. These results are very interesting. Indeed, the network of communities seems to be a good representative of the original co-authorship network. With its smaller size, it may be more practical for carrying out various analyses that cannot be performed easily in large-scale real-world networks.", "title": "" }, { "docid": "15fb8b92428ce4f2c06d926fd323e9ef", "text": "The Convolutional Neural Network (CNN) is one of the most effective neural network models for many classification tasks, such as voice recognition, computer vision, and biological information processing. Unfortunately, computation of CNNs is both memory-intensive and computation-intensive, which brings a huge challenge to the design of hardware accelerators. A large number of hardware accelerators for CNN inference have been designed by industry and academia.
Most of these engines are based on 32-bit floating-point matrix multiplication, where the data precision is over-provisioned for the inference job and the hardware cost is too high. In this paper, an 8-bit fixed-point LeNet inference engine (Laius) is designed and implemented on FPGA. In order to reduce the consumption of FPGA resources, we propose a methodology to find the optimal bit-length for weights and biases in LeNet, which results in using 8-bit fixed point for most of the computation and 16-bit fixed point for the rest. A PE (Processing Element) design is proposed, and pipelining and PE tiling techniques are used to improve the performance of the inference engine. Through theoretical analysis, we conclude that the DSP resources in an FPGA are the most critical and should be used carefully during the design process. We implement the inference engine on a Xilinx 485t FPGA. Experimental results show that the designed LeNet inference engine can achieve 44.9 Gops throughput with 8-bit fixed-point operation after pipelining. Moreover, with only 1% loss of accuracy, the 8-bit fixed-point engine achieves a 31.43% reduction in latency, 87.01% in LUT consumption, 66.50% in BRAM consumption, 65.11% in DSP consumption, and 47.95% in power compared to a 32-bit fixed-point inference engine with the same structure.", "title": "" }, { "docid": "ba41dfe1382ae0bc45d82d197b124382", "text": "Business Intelligence (BI) deals with integrated approaches to management support. Currently, there are constraints to BI adoption in this new era of analytic data management for business intelligence: the integrated infrastructures that support BI have become complex, costly, and inflexible; substantial effort is required to consolidate and cleanse enterprise data; and there is a performance impact on existing, often inadequate, IT infrastructure. So, in this paper, cloud computing is considered as a possible remedy for these issues.
We present a new environment for business intelligence that provides the ability to shorten BI implementation windows, reduce the cost of BI programs compared with traditional on-premise BI software, add environments for testing, proofs-of-concept, and upgrades, and offer users the potential for faster deployments and increased flexibility. Also, cloud computing enables organizations to analyze terabytes of data faster and more economically than ever before. Business intelligence (BI) in the cloud can be like a big puzzle: users can jump in and put together small pieces of the puzzle, but until the whole thing is complete they will lack an overall view of the big picture. In this paper, reading each section fills in a piece of the puzzle.", "title": "" }, { "docid": "122ed18a623510052664996c7ef4b4bb", "text": "A number of sensor applications in recent years collect data which can be directly associated with human interactions. Some examples of such applications include GPS applications on mobile devices, accelerometers, or location sensors designed to track human and vehicular traffic. Such data lends itself to a variety of rich applications in which one can use the sensor data in order to model the underlying relationships and interactions. This requires the development of trajectory mining techniques, which can mine the GPS data for interesting social patterns. It also leads to a number of challenges, since such data may often be private, and it is important to be able to perform the mining process without violating the privacy of the users. Given the open nature of the information contributed by users in social sensing applications, this also leads to issues of trust in making inferences from the underlying data. In this chapter, we provide a broad survey of the work in this important and rapidly emerging field.
We also discuss the key problems which arise in the context of this important field and the corresponding", "title": "" }, { "docid": "bb5dccb965c71fcbb8c4f2f924e65316", "text": "BACKGROUND AND OBJECTIVES\nBecause skin cancer affects millions of people worldwide, computational methods for the segmentation of pigmented skin lesions in images have been developed in order to assist dermatologists in their diagnosis. This paper aims to present a review of the current methods, and outline a comparative analysis with regard to several of the fundamental steps of image processing, such as image acquisition, pre-processing and segmentation.\n\n\nMETHODS\nTechniques that have been proposed to achieve these tasks were identified and reviewed. As to the image segmentation task, the techniques were classified according to their principle.\n\n\nRESULTS\nThe techniques employed in each step are explained, and their strengths and weaknesses are identified. In addition, several of the reviewed techniques are applied to macroscopic and dermoscopy images in order to exemplify their results.\n\n\nCONCLUSIONS\nThe image segmentation of skin lesions has been addressed successfully in many studies; however, there is a demand for new methodologies in order to improve the efficiency.", "title": "" }, { "docid": "3f6c3f979255b0d8a3f78ecd579a1cca", "text": "Botnets are among the most widespread and common elements of today's cyber attacks, resulting in serious threats to our network assets and organizations' properties. Botnets are collections of compromised computers (Bots) which are remotely controlled by their originator (BotMaster) under a common Command-and-Control (C & C) infrastructure. They are used to distribute commands to the Bots for malicious activities such as distributed denial-of-service (DDoS) attacks, sending large amounts of spam, and other nefarious purposes. Understanding the Botnet C & C channels is a critical component of precisely identifying, detecting, and mitigating Botnet threats.
Therefore, in this paper we provide a classification of Botnet C & C channels and evaluate well-known protocols (e.g. IRC, HTTP, and P2P) which are being used in each of them.", "title": "" }, { "docid": "a0124ccd8586bd082ef4510389269d5d", "text": "We present a convolutional-neural-network-based system that faithfully colorizes black and white photographic images without direct human assistance. We explore various network architectures, objectives, color spaces, and problem formulations. The final classification-based model we build generates colorized images that are significantly more aesthetically-pleasing than those created by the baseline regression-based model, demonstrating the viability of our methodology and revealing promising avenues for future work.", "title": "" }, { "docid": "0102748c7f9969fb53a3b5ee76b6eefe", "text": "Face verification is the task of deciding, by analyzing face images, whether a person is who he/she claims to be. This is very challenging due to image variations in lighting, pose, facial expression, and age. The task boils down to computing the distance between two face vectors. As such, appropriate distance metrics are essential for face verification accuracy. In this paper we propose a new method, named Cosine Similarity Metric Learning (CSML), for learning a distance metric for face verification. The use of cosine similarity in our method leads to an effective learning algorithm which can improve the generalization ability of any given metric. Our method is tested on the state-of-the-art dataset, the Labeled Faces in the Wild (LFW), and has achieved the highest accuracy in the literature. Face verification has been extensively researched for decades. The reason for its popularity is the non-intrusiveness and wide range of practical applications, such as access control, video surveillance, and telecommunication.
The biggest challenge in face verification comes from the numerous variations of a face image, due to changes in lighting, pose, facial expression, and age. It is a very difficult problem, especially using images captured in totally uncontrolled environments, for instance, images from surveillance cameras, or from the Web. Over the years, many public face datasets have been created for researchers to advance the state of the art and make their methods comparable. This practice has proved to be extremely useful. FERET [1] is the first popular face dataset freely available to researchers. It was created in 1993 and since then research in face recognition has advanced considerably. Researchers have come very close to fully recognizing all the frontal images in FERET [2,3,4,5,6]. However, these methods are not robust enough to deal with non-frontal face images. Recently a new face dataset named the Labeled Faces in the Wild (LFW) [7] was created. LFW is a full protocol for evaluating face verification algorithms. Unlike FERET, LFW is designed for unconstrained face verification. Faces in LFW can vary in all possible ways due to pose, lighting, expression, age, scale, and misalignment (Figure 1, From FERET to LFW; Hieu V. Nguyen and Li Bai). Methods for frontal images cannot cope with these variations, and as such many researchers have turned to machine learning to develop learning-based face verification methods [8,9]. One of these approaches is to learn a transformation matrix from the data so that the Euclidean distance can perform better in the new subspace. Learning such a transformation matrix is equivalent to learning a Mahalanobis metric in the original space [10]. Xing et al. [11] used semidefinite programming to learn a Mahalanobis distance metric for clustering. Their algorithm aims to minimize the sum of squared distances between similarly labeled inputs, while maintaining a lower bound on the sum of distances between differently labeled inputs. Goldberger et al.
[10] proposed Neighbourhood Component Analysis (NCA), a distance metric learning algorithm especially designed to improve kNN classification. The algorithm learns a Mahalanobis distance by minimizing the leave-one-out cross validation error of the kNN classifier on a training set. Because it uses a softmax activation function to convert distance to probability, the gradient computation step is expensive. Weinberger et al. [12] proposed a method that learns a matrix designed to improve the performance of kNN classification. The objective function is composed of two terms. The first term minimizes the distance between target neighbours. The second term is a hinge-loss that encourages target neighbours to be at least one distance unit closer than points from other classes. It requires information about the class of each sample. As a result, their method is not applicable for the restricted setting in LFW (see section 2.1). Recently, Davis et al. [13] have taken an information theoretic approach to learn a Mahalanobis metric under a wide range of possible constraints and prior knowledge on the Mahalanobis distance. Their method regularizes the learned matrix to make it as close as possible to a known prior matrix. The closeness is measured as a Kullback-Leibler divergence between two Gaussian distributions corresponding to the two matrices. In this paper, we propose a new method named Cosine Similarity Metric Learning (CSML). There are two main contributions. The first contribution is that we have shown cosine similarity to be an effective alternative to Euclidean distance in the metric learning problem. The second contribution is that CSML can improve the generalization ability of an existing metric significantly in most cases. Our method is different from all the above methods in terms of distance measures.
All of the other methods use Euclidean distance to measure the dissimilarities between samples in the transformed space, whilst our method uses cosine similarity, which leads to a simple and effective metric learning method. The rest of this paper is structured as follows. Section 2 presents the CSML method in detail. Section 3 presents how CSML can be applied to face verification. Experimental results are presented in section 4. Finally, the conclusion is given in section 5. 2 Cosine Similarity Metric Learning The general idea is to learn a transformation matrix from training data so that cosine similarity performs well in the transformed subspace. The performance is measured by cross validation error (cve). 2.1 Cosine similarity Cosine similarity (CS) between two vectors x and y is defined as: CS(x, y) = (x^T y) / (‖x‖ ‖y‖). Cosine similarity has a special property that makes it suitable for metric learning: the resulting similarity measure is always within the range of −1 and +1. As shown in section 2.3, this property allows the objective function to be simple and effective. 2.2 Metric learning formulation Let {(x_i, y_i, l_i)}_{i=1}^{s} denote a training set of s labeled samples with pairs of input vectors x_i, y_i ∈ R^m and binary class labels l_i ∈ {1, 0} which indicate whether x_i and y_i match or not. The goal is to learn a linear transformation A : R^m → R^d (d ≤ m), which we will use to compute cosine similarities in the transformed subspace as: CS(x, y, A) = ((Ax)^T (Ay)) / (‖Ax‖ ‖Ay‖) = (x^T A^T A y) / (√(x^T A^T A x) √(y^T A^T A y)). Specifically, we want to learn the linear transformation that minimizes the cross validation error when similarities are measured in this way. We begin by defining the objective function. 2.3 Objective function First, we define positive and negative sample index sets Pos and Neg as:", "title": "" }, { "docid": "3e77ca4aa346bfe6cf6aacbffdcf344d", "text": "This paper introduces a shape descriptor, the soft shape context, motivated by the shape context method.
Unlike the original shape context method, where each image point was hard-assigned to a single histogram bin, we instead allow each image point to contribute to multiple bins, making the descriptor more robust to distortions. The soft shape context can easily be integrated into the iterative closest point (ICP) method as an auxiliary feature vector, enriching the representation of an image point from spatial information only to spatial and shape information. This yields a registration method more robust than the original ICP method. The method is general for 2D shapes. It does not calculate derivatives and hence is able to handle shapes with junctions and discontinuities. We present experimental results to demonstrate the robustness compared with the standard ICP method.", "title": "" }, { "docid": "a935c84adaeeb6f691d65b03dd749c95", "text": "The use of wearable devices during running has become commonplace. Although there is ongoing research on interaction techniques for use while running, the effects of the resulting interactions on the natural movement patterns have received little attention so far. While previous studies on pedestrians reported increased task load and reduced walking speed while interacting, running movement further restricts interaction and requires minimizing interferences, e.g. to avoid injuries and maximize comfort. In this paper, we aim to shed light on how interacting with wearable devices affects running movement. We present results from a motion-tracking study (N=12) evaluating changes in movement and task load when users interact with a smartphone, a smartwatch, or a pair of smartglasses while running. In our study, smartwatches required less effort than smartglasses when using swipe input, resulted in less interference with the running movement and were preferred overall.
From our results, we infer a number of guidelines regarding interaction design targeting runners.", "title": "" }, { "docid": "33e7dea74a2506bce40b8e7f48073c9e", "text": "Linker for activation of B cells (LAB, also called NTAL; a product of the wbscr5 gene) is a newly identified transmembrane adaptor protein that is expressed in B cells, NK cells, and mast cells. Upon BCR activation, LAB is phosphorylated and interacts with Grb2. LAB is capable of rescuing thymocyte development in LAT-deficient mice. To study the in vivo function of LAB, LAB-deficient mice were generated. Although disruption of the Lab gene did not affect lymphocyte development, it caused mast cells to be hyperresponsive to stimulation via the FcepsilonRI, evidenced by enhanced Erk activation, calcium mobilization, degranulation, and cytokine production. These data suggested that LAB negatively regulates mast cell function. However, mast cells that lacked both linker for activation of T cells (LAT) and LAB proteins had a more severe block in FcepsilonRI-mediated signaling than LAT(-/-) mast cells, demonstrating that LAB also shares a redundant function with LAT to play a positive role in FcepsilonRI-mediated signaling.", "title": "" } ]
scidocsrr
aa5e9d637561714872ee658816d8e0aa
Discourse Marker Augmented Network with Reinforcement Learning for Natural Language Inference
[ { "docid": "d3997f030d5d7287a4c6557681dc7a46", "text": "This paper presents the first use of a computational model of natural logic—a system of logical inference which operates over natural language—for textual inference. Most current approaches to the PASCAL RTE textual inference task achieve robustness by sacrificing semantic precision; while broadly effective, they are easily confounded by ubiquitous inferences involving monotonicity. At the other extreme, systems which rely on first-order logic and theorem proving are precise, but excessively brittle. This work aims at a middle way. Our system finds a low-cost edit sequence which transforms the premise into the hypothesis; learns to classify entailment relations across atomic edits; and composes atomic entailments into a top-level entailment judgment. We provide the first reported results for any system on the FraCaS test suite. We also evaluate on RTE3 data, and show that hybridizing an existing RTE system with our natural logic system yields significant performance gains.", "title": "" }, { "docid": "d8eee79312660f4da03a29372fc87d7e", "text": "Previous work combines word-level and character-level representations using concatenation or scalar weighting, which is suboptimal for high-level tasks like reading comprehension. We present a fine-grained gating mechanism to dynamically combine word-level and character-level representations based on properties of the words. We also extend the idea of fine-grained gating to modeling the interaction between questions and paragraphs for reading comprehension. Experiments show that our approach can improve the performance on reading comprehension tasks, achieving new state-of-the-art results on the Children’s Book Test and Who Did What datasets. 
To demonstrate the generality of our gating mechanism, we also show improved results on a social media tag prediction task.", "title": "" }, { "docid": "a986826041730d953dfbf9fbc1b115a6", "text": "This article presents a general class of associative reinforcement learning algorithms for connectionist networks containing stochastic units. These algorithms, called REINFORCE algorithms, are shown to make weight adjustments in a direction that lies along the gradient of expected reinforcement in both immediate-reinforcement tasks and certain limited forms of delayed-reinforcement tasks, and they do this without explicitly computing gradient estimates or even storing information from which such estimates could be computed. Specific examples of such algorithms are presented, some of which bear a close relationship to certain existing algorithms while others are novel but potentially interesting in their own right. Also given are results that show how such algorithms can be naturally integrated with backpropagation. We close with a brief discussion of a number of additional issues surrounding the use of such algorithms, including what is known about their limiting behaviors as well as further considerations that might be used to help develop similar but potentially more powerful reinforcement learning algorithms.", "title": "" } ]
[ { "docid": "fd2450f5b02a2599be29b90a599ad31d", "text": "Male genital injuries demand prompt management to prevent long-term sexual and psychological damage. We report our experience in diagnosing and managing a case of a foreign body in the scrotum following a boat engine blast accident. This case report highlights the need for a good history and thorough general examination to establish the mechanism of injury in order to distinguish between an embedded penetrating projectile injury and an injury with an exit wound. Prompt surgical exploration with hematoma evacuation limits complications.", "title": "" }, { "docid": "11434fe02e1e810a85dd8b27747b0af6", "text": "A model-free auto-tuning algorithm is developed using simultaneous perturbation stochastic approximation (SPSA). For such a method, plant models are not required. A set of closed-loop experiments is conducted to generate data for an online optimization procedure. The optimum of the parameters of the restricted-structure controllers is found via the SPSA algorithm. Compared to conventional gradient approximation methods, SPSA needs only a small number of measurements of the cost function. This is beneficial for applications with high-dimensional parameters. In the paper, a cost function is formulated to directly reflect the control performance measures widely used in industry, such as overshoot, settling time, and integral of absolute error. Therefore, the proposed auto-tuning method naturally leads to the desired closed-loop performance. A case study of auto-tuning of spool position control in a twin-spool two-stage valve is conducted.
Both simulation and an experimental study on a TI C2000 target demonstrate the effectiveness of the algorithm.", "title": "" }, { "docid": "0b51889817aca2afd7c1c754aa47f7de", "text": "OBJECTIVE\nThis study aims to compare how national guidelines approach the management of obesity in reproductive age women.\n\n\nSTUDY DESIGN\nWe conducted a search for national guidelines in the English language on the topic of obesity surrounding the time of a pregnancy. We identified six primary source documents and several secondary source documents from five countries. Each document was then reviewed to identify: (1) statements acknowledging increased health risks related to obesity and reproductive outcomes, (2) recommendations for the management of obesity before, during, or after pregnancy.\n\n\nRESULTS\nAll guidelines cited an increased risk for miscarriage, birth defects, gestational diabetes, hypertension, fetal growth abnormalities, cesarean sections, difficulty with anesthesia, postpartum hemorrhage, and obesity in offspring. Counseling on the risks of obesity and weight loss before pregnancy were universal recommendations. There were substantial differences in the recommendations pertaining to gestational weight gain goals, nutrient and vitamin supplements, screening for gestational diabetes, and thromboprophylaxis among the guidelines.\n\n\nCONCLUSION\nStronger evidence from randomized trials is needed to devise consistent recommendations for obese reproductive age women.
This research may also assist clinicians in overcoming one of the many obstacles they encounter when providing care to obese women.", "title": "" }, { "docid": "607977a85696ecc91816cd9f2cf04bbf", "text": "The paper presents a model integrating theories from collaboration research (i.e., social presence theory, channel expansion theory, and the task closure model) with a recent theory from technology adoption research (i.e., unified theory of acceptance and use of technology, abbreviated to UTAUT) to explain the adoption and use of collaboration technology. We theorize that collaboration technology characteristics, individual and group characteristics, task characteristics, and situational characteristics are predictors of performance expectancy, effort expectancy, social influence, and facilitating conditions in UTAUT. We further theorize that the UTAUT constructs, in concert with gender, age, and experience, predict intention to use a collaboration technology, which in turn predicts use. We conducted two field studies in Finland among (1) 349 short message service (SMS) users and (2) 447 employees who were potential users of a new collaboration technology in an organization. Our model was supported in both studies. The current work contributes to research by developing and testing a technology-specific model of adoption in the collaboration context. Key words and phrases: channel expansion theory, collaboration technologies, social presence theory, task closure model, technology acceptance, technology adoption, unified theory of acceptance and use of technology. Technology adoption is one of the most mature streams in information systems (IS) research (see [65, 76, 77]). The benefit of such maturity is the availability of frameworks and models that can be applied to the study of interesting problems.
While practical contributions are certain to accrue from such investigations, a key challenge for researchers is to ensure that studies yield meaningful scientific contributions. There have been several models explaining technology adoption and use, particularly since the late 1980s [76]. In addition to noting the maturity of this stream of research, Venkatesh et al. identified several important directions for future research and suggested that “one of the most important directions for future research is to tie this mature stream [technology adoption] of research into other established streams of work” [76, p. 470] (see also [70]). In research on technology adoption, the technology acceptance model (TAM) [17] is the most widely employed theoretical model [76]. TAM has been applied to a range of technologies and has been very predictive of individual technology adoption and use. The unified theory of acceptance and use of technology (UTAUT) [76] integrated eight distinct models of technology adoption and use, including TAM. UTAUT extends TAM by incorporating social influence and facilitating conditions. [Predicting Collaboration Technology Use] UTAUT is based in the rich tradition of TAM and provides a foundation for future research in technology adoption. UTAUT also incorporates four different moderators of key relationships. Although UTAUT is more integrative, like TAM, it still suffers from the limitation of being predictive but not particularly useful in providing explanations that can be used to design interventions that foster adoption (e.g., [72, 73]). There has been some research on general antecedents of perceived usefulness and perceived ease of use that are technology independent (e.g., [69, 73]). But far less attention has been paid to technology-specific antecedents that may provide significantly stronger guidance for the successful design and implementation of specific types of systems.
Developing theory that is more focused and context specific—here, technology specific—is considered an important frontier for advances in IS research [53, 70]. Building on utaut to develop a model that will be more helpful will require a better understanding of how the utaut factors play out with different technologies [7, 76]. as a first step, it is important to extend utaut to a specific class of technologies [70, 76]. a model focused on a specific class of technology will be more explanatory compared to a general model that attempts to address many classes of technologies [70]. Such a focused model will also provide designers and managers with levers to augment adoption and use. One example is collaboration technology [20], a technology designed to assist two or more people to work together at the same place and time or at different places or different times [25, 26]. technologies that facilitate collaboration via electronic means have become an important component of day-to-day life (both in and out of the workplace). thus, it is not surprising that collaboration technologies have received considerable research attention over the past decades [24, 26, 77]. Several studies have examined the adoption of collaboration technologies, such as voice mail, e-mail, and group support systems (e.g., [3, 4, 44, 56, 63]). these studies focused on organizational factors leading to adoption (e.g., size, centralization) or on testing the boundary conditions of taM (e.g., could taM be applied to collaboration technologies). Given that adoption of collaboration technologies is not progressing as fast or as broadly as expected [20, 54], it seems a different approach is needed. It is possible that these two streams could inform each other to develop a more complete understanding of collaboration technology use, one in which we can begin to understand how collaboration factors influence adoption and use. 
a model that integrates knowledge from technology adoption and collaboration technology research is lacking, a void that this paper seeks to address. In doing so, we answer the call for research by Venkatesh et al. [76] to integrate the technology adoption stream with another dominant research stream, which in turn will move us toward a more cumulative and expansive nomological network (see [41, 70]). we also build on the work of wixom and todd [80] by examining the important role of technology characteristics leading to use. the current study will help us take a step toward alleviating one of the criticisms of IS research discussed by Benbasat and Zmud, especially in the context of technology adoption research: “we should neither focus our research on variables outside the nomological net nor exclusively on intermediate-level variables, such as ease of use, usefulness or behavioral intentions, without clarifying 12 BrOwN, DENNIS, aND VENkatESh the IS nuances involved” [6, p. 193]. Specifically, our work accomplishes the goal of “developing conceptualizations and theories of It [information technology] artifacts; and incorporating such conceptualizations and theories of It artifacts” [53, p. 130] by extending utaut to incorporate the specific artifact of collaboration technology and its related characteristics. In addition to the scientific value, such a model will provide greater value to practitioners who are attempting to foster successful use of a specific technology. Given this background, the primary objective of this paper is to develop and test a model to understand collaboration technology adoption that integrates utaut with key constructs from theories about collaboration technologies. 
we identify specific antecedents to utaut constructs by drawing from social presence theory [64], channel expansion theory [11] (a descendant of media richness theory [16]), and the task closure model [66], as well as a broad range of prior collaboration technology research. we test our model in two different studies conducted in Finland: the use of short message service (SMS) among working professionals and the use of a collaboration technology in an organization.", "title": "" }, { "docid": "c12fb39060ec4dd2c7bb447352ea4e8a", "text": "Lots of data from different domains is published as Linked Open Data (LOD). While there are quite a few browsers for such data, as well as intelligent tools for particular purposes, a versatile tool for deriving additional knowledge by mining the Web of Linked Data is still missing. In this system paper, we introduce the RapidMiner Linked Open Data extension. The extension hooks into the powerful data mining and analysis platform RapidMiner, and offers operators for accessing Linked Open Data in RapidMiner, allowing for using it in sophisticated data analysis workflows without the need for expert knowledge in SPARQL or RDF. The extension allows for autonomously exploring the Web of Data by following links, thereby discovering relevant datasets on the fly, as well as for integrating overlapping data found in different datasets. As an example, we show how statistical data from the World Bank on scientific publications, published as an RDF data cube, can be automatically linked to further datasets and analyzed using additional background knowledge from ten different LOD datasets.", "title": "" }, { "docid": "b62da3e709d2bd2c7605f3d0463eff2f", "text": "This study examines the economic effect of information security breaches reported in newspapers on publicly traded US corporations. We find limited evidence of an overall negative stock market reaction to public announcements of information security breaches. 
However, further investigation reveals that the nature of the breach affects this result. We find a highly significant negative market reaction for information security breaches involving unauthorized access to confidential data, but no significant reaction when the breach does not involve confidential information. Thus, stock market participants appear to discriminate across types of breaches when assessing their economic impact on affected firms. These findings are consistent with the argument that the economic consequences of information security breaches vary according to the nature of the underlying assets affected by the breach.", "title": "" }, { "docid": "a79c9ee27a13b35c1d6710cf9a1ee9cf", "text": "We present a new end-to-end network architecture for facial expression recognition with an attention model. It focuses attention in the human face and uses a Gaussian space representation for expression recognition. We devise this architecture based on two fundamental complementary components: (1) facial image correction and attention and (2) facial expression representation and classification. The first component uses an encoder-decoder style network and a convolutional feature extractor that are pixel-wise multiplied to obtain a feature attention map. The second component is responsible for obtaining an embedded representation and classification of the facial expression. We propose a loss function that creates a Gaussian structure on the representation space. To demonstrate the proposed method, we create two larger and more comprehensive synthetic datasets using the traditional BU3DFE and CK+ facial datasets. We compared results with the PreActResNet18 baseline. 
Our experiments on these datasets have shown the superiority of our approach in recognizing facial expressions.", "title": "" }, { "docid": "8de09be7888299dc5dd30bbeb5578c35", "text": "Scene text detection is challenging as the input may have different orientations, sizes, font styles, lighting conditions, perspective distortions and languages. This paper addresses the problem by designing a Rotational Region CNN (R2CNN). R2CNN includes a Text Region Proposal Network (Text-RPN) to estimate approximate text regions and a multitask refinement network to get the precise inclined box. Our work has the following features. First, we use a novel multi-task regression method to support arbitrarily-oriented scene text detection. Second, we introduce multiple ROIPoolings to address the scene text detection problem for the first time. Third, we use an inclined Non-Maximum Suppression (NMS) to post-process the detection candidates. Experiments show that our method outperforms the state-of-the-art on standard benchmarks: ICDAR 2013, ICDAR 2015, COCO-Text and MSRA-TD500.", "title": "" }, { "docid": "fbe0c6e8cbaf6c419990c1a7093fe2a9", "text": "Deep learning is quickly becoming the leading methodology for medical image analysis. Given a large medical archive, where each image is associated with a diagnosis, efficient pathology detectors or classifiers can be trained with virtually no expert knowledge about the target pathologies. However, deep learning algorithms, including the popular ConvNets, are black boxes: little is known about the local patterns analyzed by ConvNets to make a decision at the image level. A solution is proposed in this paper to create heatmaps showing which pixels in images play a role in the image-level predictions. In other words, a ConvNet trained for image-level classification can be used to detect lesions as well. A generalization of the backpropagation method is proposed in order to train ConvNets that produce high-quality heatmaps. 
The proposed solution is applied to diabetic retinopathy (DR) screening in a dataset of almost 90,000 fundus photographs from the 2015 Kaggle Diabetic Retinopathy competition and a private dataset of almost 110,000 photographs (e-ophtha). For the task of detecting referable DR, very good detection performance was achieved: Az=0.954 in Kaggle's dataset and Az=0.949 in e-ophtha. Performance was also evaluated at the image level and at the lesion level in the DiaretDB1 dataset, where four types of lesions are manually segmented: microaneurysms, hemorrhages, exudates and cotton-wool spots. For the task of detecting images containing these four lesion types, the proposed detector, which was trained to detect referable DR, outperforms recent algorithms trained to detect those lesions specifically, with pixel-level supervision. At the lesion level, the proposed detector outperforms heatmap generation algorithms for ConvNets. This detector is part of the Messidor® system for mobile eye pathology screening. Because it does not rely on expert knowledge or manual segmentation for detecting relevant patterns, the proposed solution is a promising image mining tool, which has the potential to discover new biomarkers in images.", "title": "" }, { "docid": "d2e3b893e257d04da0cccbd4b1def9f7", "text": "Augmented reality (AR) is currently considered as having potential for pedagogical applications. However, in science education, research regarding AR-aided learning is in its infancy. To understand how AR could help science learning, this review paper firstly has identified two major approaches of utilizing AR technology in science education, which are named as image-based AR and locationbased AR. These approaches may result in different affordances for science learning. It is then found that students’ spatial ability, practical skills, and conceptual understanding are often afforded by image-based AR and location-based AR usually supports inquiry-based scientific activities. 
After examining what has been done in science learning with AR supports, several suggestions for future research are proposed. For example, more research is required to explore learning experience (e.g., motivation or cognitive load) and learner characteristics (e.g., spatial ability or perceived presence) involved in AR. Mixed methods of investigating learning process (e.g., a content analysis and a sequential analysis) and in-depth examination of user experience beyond usability (e.g., affective variables of esthetic pleasure or emotional fulfillment) should be considered. Combining image-based and location-based AR technology may bring new possibility for supporting science learning. Theories including mental models, spatial cognition, situated cognition, and social constructivist learning are suggested for the profitable uses of future AR research in science education.", "title": "" }, { "docid": "eaa37c0420dbc804eaf480d1167ad201", "text": "This paper focuses on the problem of object detection when the annotation at training time is restricted to presence or absence of object instances at image level. We present a method based on features extracted from a Convolutional Neural Network and latent SVM that can represent and exploit the presence of multiple object instances in an image. Moreover, the detection of the object instances in the image is improved by incorporating in the learning procedure additional constraints that represent domain-specific knowledge such as symmetry and mutual exclusion. We show that the proposed method outperforms the state-of-the-art in weakly-supervised object detection and object classification on the Pascal VOC 2007 dataset.", "title": "" }, { "docid": "eecc4c73eb7f784b7f03923f14d50224", "text": "Gated-Attention (GA) Reader has been effective for reading comprehension. 
GA Reader makes two assumptions: (1) a uni-directional attention that uses an input query to gate token encodings of a document; (2) the encoding at the cloze position of an input query is considered for answer prediction. In this paper, we propose Collaborative Gating (CG) and Self-Belief Aggregation (SBA) to address the above assumptions respectively. In CG, we first use an input document to gate token encodings of an input query so that the influence of irrelevant query tokens may be reduced. Then the filtered query is used to gate token encodings of a document in a collaborative fashion. In SBA, we conjecture that query tokens other than the cloze token may be informative for answer prediction. We apply self-attention to link the cloze token with other tokens in a query so that the importance of query tokens with respect to the cloze position is weighted. Then their evidence is weighted, propagated and aggregated for better reading comprehension. Experiments show that our approaches advance the state-of-the-art results in the CNN, Daily Mail, and Who Did What public test sets.", "title": "" }, { "docid": "99e1ae882a1b74ffcbe5e021eb577e49", "text": "This paper studies the problem of recognizing gender from full body images. This problem has not been addressed before, partly because of the variant nature of human bodies and clothing that can bring tough difficulties. However, gender recognition has high application potentials, e.g. security surveillance and customer statistics collection in restaurants, supermarkets, and even building entrances. In this paper, we build a system of recognizing gender from full body images, taken from frontal or back views. Our contributions are three-fold. First, to handle the variety of human body characteristics, we represent each image by a collection of patch features, which model different body parts and provide a set of clues for gender recognition.
To combine the clues, we build an ensemble learning algorithm from those body parts to recognize gender from fixed-view body images (frontal or back). Second, we relax the fixed-view constraint and show the possibility of training a flexible classifier for mixed-view images with almost the same accuracy as the fixed-view case. Finally, our approach is shown to be robust to small alignment errors, which is preferred in many applications.", "title": "" }, { "docid": "0cd1400bce31ea35b3f142339737dc28", "text": "The LLC resonant converter is a nonlinear system, limiting the use of typical linear control methods. This paper proposes a new nonlinear control strategy, using load feedback linearization for an LLC resonant converter. Compared with conventional PI controllers, the proposed feedback linearized control strategy can achieve better performance with elimination of the nonlinear characteristics. The LLC resonant converter's dynamic model is built based on fundamental harmonic approximation using the extended describing function. By assuming the dynamics of the resonant network are much faster than those of the output voltage and controller, the LLC resonant converter's model is simplified from seventh-order state equations to second-order ones. Then, the feedback linearized control strategy is presented. A double-loop PI controller is designed to regulate the modulation voltage. The switching frequency can be calculated as a function of the load, input voltage, and modulation voltage. Finally, a 200 W laboratory prototype is built to verify the proposed control scheme. The settling time of the LLC resonant converter is reduced from 38.8 to 20.4 ms under the positive load step using the proposed controller.
Experimental results prove the superiority of the proposed feedback linearized controller over the conventional PI controller.", "title": "" }, { "docid": "3ae5e7ac5433f2449cd893e49f1b2553", "text": "We propose a category-independent method to produce a bag of regions and rank them, such that top-ranked regions are likely to be good segmentations of different objects. Our key objectives are completeness and diversity: Every object should have at least one good proposed region, and a diverse set should be top-ranked. Our approach is to generate a set of segmentations by performing graph cuts based on a seed region and a learned affinity function. Then, the regions are ranked using structured learning based on various cues. Our experiments on the Berkeley Segmentation Data Set and Pascal VOC 2011 demonstrate our ability to find most objects within a small bag of proposed regions.", "title": "" }, { "docid": "c796bc689e9b3e2b8d03525e5cd5908c", "text": "As they grapple with increasingly large data sets, biologists and computer scientists uncork new bottlenecks. Biologists are joining the big-data club. With the advent of high-throughput genomics, life scientists are starting to grapple with massive data sets, encountering challenges with handling, processing and moving information that were once the domain of astronomers and high-energy physicists [1]. With every passing year, they turn more often to big data to probe everything from the regulation of genes and the evolution of genomes to why coastal algae bloom, what microbes dwell where in human body cavities and how the genetic make-up of different cancers influences how cancer patients fare [2]. The European Bioinformatics Institute (EBI) in Hinxton, UK, part of the European Molecular Biology Laboratory and one of the world's largest biology-data repositories, currently stores 20 petabytes (1 petabyte is 10^15 bytes) of data and backups about genes, proteins and small molecules.
Genomic data account for 2 petabytes of that, a number that more than doubles every year [3] (see 'Data explosion'). This data pile is just one-tenth the size of the data store at CERN, Europe's particle-physics laboratory near Geneva, Switzerland. Every year, particle-collision events in CERN's Large Hadron Collider generate around 15 petabytes of data — the equivalent of about 4 million high-definition feature-length films. But the EBI and institutes like it face similar data-wrangling challenges to those at CERN, says Ewan Birney, associate director of the EBI. He and his colleagues now regularly meet with organizations such as CERN and the European Space Agency (ESA) in Paris to swap lessons about data storage, analysis and sharing. All labs need to manipulate data to yield research answers. As prices drop for high-throughput instruments such as automated […] Extremely powerful computers are needed to help biologists to handle big-data traffic jams.
Deep-networks-based cross-modal hashing methods are appealing as they can integrate feature learning and hash coding into end-to-end trainable frameworks. However, it is still challenging to find content similarities between different modalities of data due to the heterogeneity gap. To further address this problem, we propose an adversarial hashing network with an attention mechanism to enhance the measurement of content similarities by selectively focusing on informative parts of multi-modal data. The proposed new adversarial network, HashGAN, consists of three building blocks: 1) the feature learning module to obtain feature representations, 2) the generative attention module to generate an attention mask, which is used to obtain the attended (foreground) and the unattended (background) feature representations, 3) the discriminative hash coding module to learn hash functions that preserve the similarities between different modalities. In our framework, the generative module and the discriminative module are trained in an adversarial way: the generator is trained so that the discriminator cannot preserve the similarities of multi-modal data w.r.t. the background feature representations, while the discriminator aims to preserve the similarities of multi-modal data w.r.t. both the foreground and the background feature representations. Extensive evaluations on several benchmark datasets demonstrate that the proposed HashGAN brings substantial improvements over other state-of-the-art cross-modal hashing methods.
However, to enable large deployments of UWSNs, networking solutions toward efficient and reliable underwater data collection need to be investigated and proposed. In this context, the use of topology control algorithms for a suitable, autonomous, and on-the-fly organization of the UWSN topology might mitigate the undesired effects of underwater wireless communications and consequently improve the performance of networking services and protocols designed for UWSNs. This article presents and discusses the intrinsic properties, potentials, and current research challenges of topology control in underwater sensor networks. We propose to classify topology control algorithms based on the principal methodology used to change the network topology. They can be categorized in three major groups: power control, wireless interface mode management, and mobility assisted–based techniques. Using the proposed classification, we survey the current state of the art and present an in-depth discussion of topology control solutions designed for UWSNs.", "title": "" }, { "docid": "9916cbe61d57121030ee718bc03e0c17", "text": "We propose a novel approach for constructing effective treatment policies when the observed data is biased and lacks counterfactual information. Learning in settings where the observed data does not contain all possible outcomes for all treatments is difficult since the observed data is typically biased due to existing clinical guidelines. This is an important problem in the medical domain as collecting unbiased data is expensive and so learning from the wealth of existing biased data is a worthwhile task. Our approach separates the problem into two stages: first we reduce the bias by learning a representation map using a novel auto-encoder network – this allows us to control the trade-off between the bias-reduction and the information loss – and then we construct effective treatment policies on the transformed data using a novel feedforward network. 
Separation of the problem into these two stages creates an algorithm that can be adapted to the problem at hand – the bias-reduction step can be performed as a preprocessing step for other algorithms. We compare our algorithm against state-of-art algorithms on two semi-synthetic datasets and demonstrate that our algorithm achieves a significant improvement in performance.", "title": "" } ]
scidocsrr
86d43e0b4ae9c634e85aeec789baad8c
A Brief Review of Network Embedding
[ { "docid": "9fe198a6184a549ff63364e9782593d8", "text": "Node embedding techniques have gained prominence since they produce continuous and low-dimensional features, which are effective for various tasks. Most existing approaches learn node embeddings by exploring the structure of networks and are mainly focused on static non-attributed graphs. However, many real-world applications, such as stock markets and public review websites, involve bipartite graphs with dynamic and attributed edges, called attributed interaction graphs. Different from conventional graph data, attributed interaction graphs involve two kinds of entities (e.g. investors/stocks and users/businesses) and edges of temporal interactions with attributes (e.g. transactions and reviews). In this paper, we study the problem of node embedding in attributed interaction graphs. Learning embeddings in interaction graphs is highly challenging due to the dynamics and heterogeneous attributes of edges. Different from conventional static graphs, in attributed interaction graphs, each edge can have totally different meanings when the interaction is at different times or associated with different attributes. We propose a deep node embedding method called IGE (Interaction Graph Embedding). IGE is composed of three neural networks: an encoding network is proposed to transform attributes into a fixed-length vector to deal with the heterogeneity of attributes; then encoded attribute vectors interact with nodes multiplicatively in two coupled prediction networks that investigate the temporal dependency by treating incident edges of a node as the analogy of a sentence in word embedding methods. The encoding network can be specifically designed for different datasets as long as it is differentiable, in which case it can be trained together with prediction networks by back-propagation. We evaluate our proposed method and various comparing methods on four real-world datasets. 
The experimental results prove the effectiveness of the learned embeddings by IGE on both node clustering and classification tasks.", "title": "" }, { "docid": "6b8329ef59c6811705688e48bf6c0c08", "text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks' Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.
[ { "docid": "5b3ca1cc607d2e8f0394371f30d9e83a", "text": "We present a machine learning algorithm that takes as input a 2D RGB image and synthesizes a 4D RGBD light field (color and depth of the scene in each ray direction). For training, we introduce the largest public light field dataset, consisting of over 3300 plenoptic camera light fields of scenes containing flowers and plants. Our synthesis pipeline consists of a convolutional neural network (CNN) that estimates scene geometry, a stage that renders a Lambertian light field using that geometry, and a second CNN that predicts occluded rays and non-Lambertian effects. Our algorithm builds on recent view synthesis methods, but is unique in predicting RGBD for each light field ray and improving unsupervised single image depth estimation by enforcing consistency of ray depths that should intersect the same scene point.", "title": "" }, { "docid": "4a572df21f3a8ebe3437204471a1fd10", "text": "Whilst studies on emotion recognition show that genderdependent analysis can improve emotion classification performance, the potential differences in the manifestation of depression between male and female speech have yet to be fully explored. This paper presents a qualitative analysis of phonetically aligned acoustic features to highlight differences in the manifestation of depression. Gender-dependent analysis with phonetically aligned gender-dependent features are used for speech-based depression recognition. The presented experimental study reveals gender differences in the effect of depression on vowel-level features. Considering the experimental study, we also show that a small set of knowledge-driven gender-dependent vowel-level features can outperform state-of-the-art turn-level acoustic features when performing a binary depressed speech recognition task. 
A combination of these preselected gender-dependent vowel-level features with turn-level standardised openSMILE features results in additional improvement for depression recognition.", "title": "" }, { "docid": "b9aaab241bab9c11ac38d6e9188b7680", "text": "Find loads of the research methods in the social sciences book catalogues in this site as the choice of you visiting this page. You can also join to the website book library that will show you numerous books from any types. Literature, science, politics, and many more catalogues are presented to offer you the best book to find. The book that really makes you feels satisfied. Or that's the book that will save you from your job deadline.", "title": "" }, { "docid": "bc06b540765ddf762dc8cb72cae7ad41", "text": "We present a method to produce free, enormous corpora to train taggers for Named Entity Recognition (NER), the task of identifying and classifying names in text, often solved by statistical learning systems. Our approach utilises the text of Wikipedia, a free online encyclopedia, transforming links between Wikipedia articles into entity annotations. Having derived a baseline corpus, we found that altering Wikipedia’s links and identifying classes of capitalised non-entity terms would enable the corpus to conform more closely to gold-standard annotations, increasing performance by up to 32% F score. The evaluation of our method is novel since the training corpus is not usually a variable in NER experimentation. We therefore develop a number of methods for analysing and comparing training corpora. Gold-standard training corpora for NER perform poorly (F score up to 32% lower) when evaluated on test data from a different gold-standard corpus. Our Wikipedia-derived data can outperform manually-annotated corpora on this cross-corpus evaluation task by up to 7% on held-out test data. 
These experimental results show that Wikipedia is viable as a source of automatically-annotated training corpora, which have wide domain coverage applicable to a broad range of NLP applications.", "title": "" }, { "docid": "157f8adc236a9d2079ea424c5cf40dcb", "text": "As humans we are a highly social species: in order to coordinate our joint actions and assure successful communication, we use language skills to explicitly convey information to each other, and social abilities such as empathy or perspective taking to infer another person's emotions and mental state. The human cognitive capacity to draw inferences about other peoples' beliefs, intentions and thoughts has been termed mentalizing, theory of mind or cognitive perspective taking. This capacity makes it possible, for instance, to understand that people may have views that differ from our own. Conversely, the capacity to share the feelings of others is called empathy. Empathy makes it possible to resonate with others' positive and negative feelings alike--we can thus feel happy when we vicariously share the joy of others and we can share the experience of suffering when we empathize with someone in pain. Importantly, in empathy one feels with someone, but one does not confuse oneself with the other; that is, one still knows that the emotion one resonates with is the emotion of another. If this self-other distinction is not present, we speak of emotion contagion, a precursor of empathy that is already present in babies.", "title": "" }, { "docid": "2b40c6f6a9fc488524c23e11cd57a00b", "text": "An overview of the basics of metaphorical thought and language from the perspective of Neurocognition, the integrated interdisciplinary study of how conceptual thought and language work in the brain. 
The paper outlines a theory of metaphor circuitry and discusses how everyday reason makes use of embodied metaphor circuitry.", "title": "" }, { "docid": "9441113599194d172b6f618058b2ba88", "text": "Vegetable quality is frequently assessed in terms of size, shape, mass, firmness, color and bruises, from which fruits can be classified and sorted. However, implementing such quality-assessment technology is unfeasible for small and mid-sized producers, due to the high cost of software and equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate a new open software that enables the classification system by recognizing fruit shape, volume, color and possibly bruises at a unique glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.", "title": "" }, { "docid": "022a63e994a74d3d0e7b04680c1cb77e", "text": "Practitioners in Europe and the U.S. recently have proposed two distinct approaches to address what they believe are shortcomings of traditional budgeting practices. One approach advocates improving the budgeting process and primarily focuses on the planning problems with budgeting. The other advocates abandoning the budget and primarily focuses on the performance evaluation problems with budgeting. This paper provides an overview and research perspective on these two recent developments. We discuss why practitioners have become dissatisfied with budgets, describe the two distinct approaches, place them in a research context, suggest insights that may aid the practitioners, and use the practitioner perspectives to identify fruitful areas for research. 
INTRODUCTION Budgeting is the cornerstone of the management control process in nearly all organizations, but despite its widespread use, it is far from perfect. Practitioners express concerns about using budgets for planning and performance evaluation. The practitioners argue that budgets impede the allocation of organizational resources to their best uses and encourage myopic decision making and other dysfunctional budget games. They attribute these problems, in part, to traditional budgeting’s financial, top-down, command-and-control orientation as embedded in annual budget planning and performance evaluation processes (e.g., Schmidt 1992; Bunce et al. 1995; Hope and Fraser 1997, 2000, 2003; Wallander 1999; Ekholm and Wallin 2000; Marcino 2000; Jensen 2001). We demonstrate practitioners’ concerns with budgets by describing two practice-led developments: one advocating improving the budgeting process, the other abandoning it. These developments illustrate two points. First, they show practitioners’ concerns with budgeting problems that the scholarly literature has largely ignored while focusing instead on more traditional issues like participative budgeting. [Footnote 1: For example, Comshare (2000) surveyed financial executives about their current experience with their organizations’ budgeting processes. One hundred thirty of the 154 participants (84 percent) identified 332 frustrations with their organizations’ budgeting processes, an average of 2.6 frustrations per person. We acknowledge the many helpful suggestions by the reviewers, Bjorn Jorgensen, Murray Lindsay, Ken Merchant, and Mark Young.] [Hansen, Otley, and Van der Stede, Journal of Management Accounting Research, 2003] Second, the two conflicting developments illustrate that firms face a critical decision regarding budgeting: maintain it, improve it, or abandon it? Our discussion has two objectives. 
First, we demonstrate the level of concern with budgeting in practice, suggesting its potential for continued scholarly research. Second, we wish to raise academics’ awareness of apparent disconnects between budgeting practice and research. We identify areas where prior research may aid the practitioners and, conversely, use the practitioners’ insights to suggest areas for research. In the second section, we review some of the most common criticisms of budgets in practice. The third section describes and analyzes the main thrust of two recent practice-led developments in budgeting. In the fourth section, we place these two practice developments in a research context and suggest research that may be relevant to the practitioners. The fifth section turns the tables by using the practitioner insights to offer new perspectives for research. In the sixth section, we conclude. PROBLEMS WITH BUDGETING IN PRACTICE The ubiquitous use of budgetary control is largely due to its ability to weave together all the disparate threads of an organization into a comprehensive plan that serves many different purposes, particularly performance planning and ex post evaluation of actual performance vis-à-vis the plan. Despite performing this integrative function and laying the basis for performance evaluation, budgetary control has many limitations, such as its long-established and oft-researched susceptibility to induce budget games or dysfunctional behaviors (Hofstede 1967; Onsi 1973; Merchant 1985b; Lukka 1988). A recent report by Neely et al. (2001), drawn primarily from the practitioner literature, lists the 12 most cited weaknesses of budgetary control as: 1. Budgets are time-consuming to put together; 2. Budgets constrain responsiveness and are often a barrier to change; 3. Budgets are rarely strategically focused and often contradictory; 4. Budgets add little value, especially given the time required to prepare them; 5. 
Budgets concentrate on cost reduction and not value creation; 6. Budgets strengthen vertical command-and-control; 7. Budgets do not reflect the emerging network structures that organizations are adopting; 8. Budgets encourage gaming and perverse behaviors; 9. Budgets are developed and updated too infrequently, usually annually; 10. Budgets are based on unsupported assumptions and guesswork; 11. Budgets reinforce departmental barriers rather than encourage knowledge sharing; and 12. Budgets make people feel undervalued. [Footnote 2: For example, in their review of nearly 2,000 research and professional articles in management accounting in the 1996–2000 period, Selto and Widener (2001) document several areas of “fit” and “misfit” between practice and research. They document that more research than practice exists in the area of participative budgeting and state that “[this] topic appears to be of little current, practical interest, but continues to attract research efforts, perhaps because of the interesting theoretical issues it presents.” Selto and Widener (2001) also document virtually no research on activity-based budgeting (one of the practice-led developments we discuss in this paper) and planning and forecasting, although these areas have grown in practice coverage each year during the 1996–2000 period.] While not all would agree with these criticisms, other recent critiques (e.g., Schmidt 1992; Hope and Fraser 1997, 2000, 2003; Ekholm and Wallin 2000; Marcino 2000; Jensen 2001) also support the perception of widespread dissatisfaction with budgeting in practice. We synthesize the sources of dissatisfaction as follows. Claims 1, 4, 9, and 10 relate to the recurring criticism that by the time budgets are used, their assumptions are typically outdated, reducing the value of the budgeting process. 
A more radical version of this criticism is that conventional budgets can never be valid because they cannot capture the uncertainty involved in rapidly changing environments (Wallander 1999). In more conceptual terms, the operation of a useful budgetary control system requires two related elements. First, there must be a high degree of operational stability so that the budget provides a valid plan for a reasonable period of time (typically the next year). Second, managers must have good predictive models so that the budget provides a reasonable performance standard against which to hold managers accountable (Berry and Otley 1980). Where these criteria hold, budgetary control is a useful control mechanism, but for organizations that operate in more turbulent environments, it becomes less useful (Samuelson 2000). Claims 2, 3, 5, 6, and 8 relate to another common criticism that budgetary controls impose a vertical command-and-control structure, centralize decision making, stifle initiative, and focus on cost reductions rather than value creation. As such, budgetary controls often impede the pursuit of strategic goals by supporting such mechanical practices as last-year-plus budget setting and across-the-board cuts. Moreover, the budget’s exclusive focus on annual financial performance causes a mismatch with operational and strategic decisions that emphasize nonfinancial goals and cut across the annual planning cycle, leading to budget games involving skillful timing of revenues, expenditures, and investments (Merchant 1985a). Finally, claims 7, 11, and 12 reflect organizational and people-related budgeting issues. The critics argue that vertical, command-and-control, responsibility center-focused budgetary controls are incompatible with flat, network, or value chain-based organizational designs and impede empowered employees from making the best decisions (Hope and Fraser 2003). 
Given such a long list of problems and many calls for improvement, it seems odd that the vast majority of U.S. firms retain a formal budgeting process (97 percent of the respondents in Umapathy [1987]). One reason that budgets may be retained in most firms is because they are so deeply ingrained in an organization’s fabric (Scapens and Roberts 1993). “They remain a centrally coordinated activity (often the only one) within the business” (Neely et al. 2001, 9) and constitute “the only process that covers all areas of organizational activity” (Otley 1999). However, a more recent survey of Finnish firms found that although 25 percent are retaining their traditional budgeting system, 61 percent are actively upgrading their system, and 14 percent are either abandoning budgets or at least considering it (Ekholm and Wallin 2000). We discuss two practice-led developments that illustrate proposals to improve budgeting or to abandon it. Although the two developments reach different conclusions, both originated in the same organization, the Consortium for Advanced Manufacturing-International (CAM-I); one in the U [Footnote 3: We note that there are several factors that inevitably contribute to the seemingly negative evaluation of budgetary controls. First, given information asymmetries, budgets operate under second-best conditions in most organizations. Second, information is costly. Finally, unlike the costs, the benefits of budgeting are indirect, and thus, less salient.]", "title": "" }, { "docid": "97f54d4b04e54ddae85d2e0c9a0a6476", "text": "We propose a novel and robust hashing paradigm that uses iterative geometric techniques and relies on observations that main geometric features within an image would approximately stay invariant under small perturbations. 
A key goal of this algorithm is to produce sufficiently randomized outputs which are unpredictable, thereby yielding properties akin to cryptographic MACs. This is a key component for robust multimedia identification and watermarking (for synchronization as well as content dependent key generation). Our algorithm withstands standard benchmark (e.g., Stirmark) attacks provided they do not cause severe perceptually significant distortions. As verified by our detailed experiments, the approach is relatively media independent and works for", "title": "" }, { "docid": "1459f6bf9ebf153277f49a0791e2cf6d", "text": "Content popularity prediction finds application in many areas, including media advertising, content caching, movie revenue estimation, traffic management and macro-economic trends forecasting, to name a few. However, predicting this popularity is difficult due to, among others, the effects of external phenomena, the influence of context such as locality and relevance to users, and the difficulty of forecasting information cascades.\n In this paper we identify patterns of temporal evolution that are generalisable to distinct types of data, and show that we can (1) accurately classify content based on the evolution of its popularity over time and (2) predict the value of the content's future popularity. We verify the generality of our method by testing it on YouTube, Digg and Vimeo data sets and find our results to outperform the K-Means baseline when classifying the behaviour of content and the linear regression baseline when predicting its popularity.", "title": "" }, { "docid": "f3f70e5ba87399e9d44bda293a231399", "text": "During natural disasters or crises, users on social media tend to readily believe the content of postings related to such events, and retweet them in the hope that they will reach many other users. 
Unfortunately, there are malicious users who understand this tendency and post misinformation such as spam and fake messages, expecting wider propagation. To resolve the problem, in this paper we conduct a case study of the 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages, even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.", "title": "" }, { "docid": "f6c3124f3824bcc836db7eae1b926d65", "text": "Cloud balancing provides an organization with the ability to distribute application requests across any number of application deployments located in different data centers and through Cloud-computing providers. In this paper, we propose a load balancing method Minsd (Minimize standard deviation of Cloud load method) and apply it on three levels of control: PEs (Processing Elements), Hosts and Data Centers. Simulations on CloudSim are used to check its performance and its influence on makespan, communication overhead and throughput. A real cluster log is also used to test our method. Results indicate that our method not only gives good Cloud balancing but also ensures reducing makespan and communication overhead and enhancing throughput of the whole system.", "title": "" }, { "docid": "92e955705aa333923bb7b14af946fc2f", "text": "This study examines the role of online daters’ physical attractiveness in their profile self-presentation and, in particular, their use of deception. Sixty-nine online daters identified the deceptions in their online dating profiles and had their photograph taken in the lab. Independent judges rated the online daters’ physical attractiveness. 
Results show that the lower online daters’ attractiveness, the more likely they were to enhance their profile photographs and lie about their physical descriptors (height, weight, age). The association between attractiveness and deception did not extend to profile elements unrelated to their physical appearance (e.g., income, occupation), suggesting that their deceptions were limited and strategic. Results are discussed in terms of (a) evolutionary theories about the importance of physical attractiveness in the dating realm and (b) the technological affordances that allow online daters to engage in selective self-presentation.", "title": "" }, { "docid": "082630a33c0cc0de0e60a549fc57d8e8", "text": "Agricultural monitoring, especially in developing countries, can help prevent famine and support humanitarian efforts. A central challenge is yield estimation, i.e., predicting crop yields before harvest. We introduce a scalable, accurate, and inexpensive method to predict crop yields using publicly available remote sensing data. Our approach improves existing techniques in three ways. First, we forego hand-crafted features traditionally used in the remote sensing community and propose an approach based on modern representation learning ideas. We also introduce a novel dimensionality reduction technique that allows us to train a Convolutional Neural Network or Long Short-Term Memory network and automatically learn useful features even when labeled training data are scarce. Finally, we incorporate a Gaussian Process component to explicitly model the spatio-temporal structure of the data and further improve accuracy. We evaluate our approach on county-level soybean yield prediction in the U.S. and show that it outperforms competing techniques.", "title": "" }, { "docid": "b4d7fccccd7a80631f1190320cfeab9e", "text": "BACKGROUND\nPatients on surveillance for clinical stage I (CSI) testicular cancer are counseled regarding their baseline risk of relapse. 
The conditional risk of relapse (cRR), which provides prognostic information on patients who have survived for a period of time without relapse, have not been determined for CSI testicular cancer.\n\n\nOBJECTIVE\nTo determine cRR in CSI testicular cancer.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nWe reviewed 1239 patients with CSI testicular cancer managed with surveillance at a tertiary academic centre between 1980 and 2014. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: cRR estimates were calculated using the Kaplan-Meier method. We stratified patients according to validated risk factors for relapse. We used linear regression to determine cRR trends over time.\n\n\nRESULTS AND LIMITATIONS\nAt orchiectomy, the risk of relapse within 5 yr was 42.4%, 17.3%, 20.3%, and 12.2% among patients with high-risk nonseminomatous germ cell tumor (NSGCT), low-risk NSGCT, seminoma with tumor size ≥3cm, and seminoma with tumor size <3cm, respectively. However, for patients without relapse within the first 2 yr of follow-up, the corresponding risk of relapse within the next 5 yr in the groups was 0.0%, 1.0% (95% confidence interval [CI] 0.3-1.7%), 5.6% (95% CI 3.1-8.2%), and 3.9% (95% CI 1.4-6.4%). Over time, cRR decreased (p≤0.021) in all models. Limitations include changes to surveillance protocols over time and few late relapses.\n\n\nCONCLUSIONS\nAfter 2 yr, the risk of relapse on surveillance for CSI testicular cancer is very low. Consideration should be given to adapting surveillance protocols to individualized risk of relapse based on cRR as opposed to static protocols based on baseline factors. 
This strategy could reduce the intensity of follow-up for the majority of patients.\n\n\nPATIENT SUMMARY\nOur study is the first to provide data on the future risk of relapse during surveillance for clinical stage I testicular cancer, given a patient has been without relapse for a specified period of time.", "title": "" }, { "docid": "76f4d1051bcb75156f4fcf402b1ebf27", "text": "Slowly but surely, Alzheimer's disease (AD) patients lose their memory and their cognitive abilities, and even their personalities may change dramatically. These changes are due to the progressive dysfunction and death of nerve cells that are responsible for the storage and processing of information. Although drugs can temporarily improve memory, at present there are no treatments that can stop or reverse the inexorable neurodegenerative process. But rapid progress towards understanding the cellular and molecular alterations that are responsible for the neuron's demise may soon help in developing effective preventative and therapeutic strategies.", "title": "" }, { "docid": "187bbc30046f17b2030c9dbe3c800074", "text": "To present a summary of current scientific evidence about the cannabinoid, cannabidiol (CBD) with regard to its relevance to epilepsy and other selected neuropsychiatric disorders. We summarize the presentations from a conference in which invited participants reviewed relevant aspects of the physiology, mechanisms of action, pharmacology, and data from studies with animal models and human subjects. Cannabis has been used to treat disease since ancient times. Δ(9) -Tetrahydrocannabinol (Δ(9) -THC) is the major psychoactive ingredient and CBD is the major nonpsychoactive ingredient in cannabis. Cannabis and Δ(9) -THC are anticonvulsant in most animal models but can be proconvulsant in some healthy animals. The psychotropic effects of Δ(9) -THC limit tolerability. CBD is anticonvulsant in many acute animal models, but there are limited data in chronic models. 
The antiepileptic mechanisms of CBD are not known, but may include effects on the equilibrative nucleoside transporter; the orphan G-protein-coupled receptor GPR55; the transient receptor potential of vanilloid type-1 channel; the 5-HT1a receptor; and the α3 and α1 glycine receptors. CBD has neuroprotective and antiinflammatory effects, and it appears to be well tolerated in humans, but small and methodologically limited studies of CBD in human epilepsy have been inconclusive. More recent anecdotal reports of high-ratio CBD:Δ(9) -THC medical marijuana have claimed efficacy, but studies were not controlled. CBD bears investigation in epilepsy and other neuropsychiatric disorders, including anxiety, schizophrenia, addiction, and neonatal hypoxic-ischemic encephalopathy. However, we lack data from well-powered double-blind randomized, controlled studies on the efficacy of pure CBD for any disorder. Initial dose-tolerability and double-blind randomized, controlled studies focusing on target intractable epilepsy populations such as patients with Dravet and Lennox-Gastaut syndromes are being planned. Trials in other treatment-resistant epilepsies may also be warranted. A PowerPoint slide summarizing this article is available for download in the Supporting Information section here.", "title": "" }, { "docid": "6e8a9c37672ec575821da5c9c3145500", "text": "As video games become increasingly popular pastimes, it becomes more important to understand how different individuals behave when they play these games. Previous research has focused mainly on behavior in massively multiplayer online role-playing games; therefore, in the current study we sought to extend on this research by examining the connections between personality traits and behaviors in video games more generally. Two hundred and nineteen university students completed measures of personality traits, psychopathic traits, and a questionnaire regarding frequency of different behaviors during video game play. 
A principal components analysis of the video game behavior questionnaire revealed four factors: Aggressing, Winning, Creating, and Helping. Each behavior subscale was significantly correlated with at least one personality trait. Men reported significantly more Aggressing, Winning, and Helping behavior than women. Controlling for participant sex, Aggressing was negatively correlated with Honesty–Humility, Helping was positively correlated with Agreeableness, and Creating was negatively correlated with Conscientiousness. Aggressing was also positively correlated with all psychopathic traits, while Winning and Creating were correlated with one psychopathic trait each. Frequency of playing video games online was positively correlated with the Aggressing, Winning, and Helping scales, but not with the Creating scale. The results of the current study provide support for previous research on personality and behavior in massively multiplayer online role-playing games.", "title": "" }, { "docid": "8c80129507b138d1254e39acfa9300fc", "text": "Motivation\nText mining has become an important tool for biomedical research. The most fundamental text-mining task is the recognition of biomedical named entities (NER), such as genes, chemicals and diseases. Current NER methods rely on pre-defined features which try to capture the specific surface properties of entity types, properties of the typical local context, background knowledge, and linguistic information. State-of-the-art tools are entity-specific, as dictionaries and empirically optimal feature sets differ between entity types, which makes their development costly. 
Furthermore, features are often optimized for a specific gold standard corpus, which makes extrapolation of quality measures difficult.\n\n\nResults\nWe show that a completely generic method based on deep learning and statistical word embeddings [called long short-term memory network-conditional random field (LSTM-CRF)] outperforms state-of-the-art entity-specific NER tools, and often by a large margin. To this end, we compared the performance of LSTM-CRF on 33 data sets covering five different entity classes with that of best-of-class NER tools and an entity-agnostic CRF implementation. On average, F1-score of LSTM-CRF is 5% above that of the baselines, mostly due to a sharp increase in recall.\n\n\nAvailability and implementation\nThe source code for LSTM-CRF is available at https://github.com/glample/tagger and the links to the corpora are available at https://corposaurus.github.io/corpora/ .\n\n\nContact\[email protected].", "title": "" }, { "docid": "eaf7b6b0cc18453538087cc90254dbd8", "text": "We present a real-time system that renders antialiased hard shadows using irregular z-buffers (IZBs). For subpixel accuracy, we use 32 samples per pixel at roughly twice the cost of a single sample. Our system remains interactive on a variety of game assets and CAD models while running at 1080p and 2160p and imposes no constraints on light, camera or geometry, allowing fully dynamic scenes without precomputation. Unlike shadow maps we introduce no spatial or temporal aliasing, smoothly animating even subpixel shadows from grass or wires.\n Prior irregular z-buffer work relies heavily on GPU compute. Instead we leverage the graphics pipeline, including hardware conservative raster and early-z culling. We observe a duality between irregular z-buffer performance and shadow map quality; this allows common shadow map algorithms to reduce our cost. 
Compared to state-of-the-art ray tracers, we spawn similar numbers of triangle intersections per pixel yet completely rebuild our data structure in under 2 ms per frame.", "title": "" } ]
scidocsrr
e0301c813aa0aeaac7d4039bc9b5e5ae
The roles of brand community and community engagement in building brand trust on social media
[ { "docid": "64e0a1345e5a181191c54f6f9524c96d", "text": "Social media based brand communities are communities initiated on the platform of social media. In this article, we explore whether brand communities based on social media (a special type of online brand communities) have positive effects on the main community elements and value creation practices in the communities as well as on brand trust and brand loyalty. A survey based empirical study with 441 respondents was conducted. The results of structural equation modeling show that brand communities established on social media have positive effects on community markers (i.e., shared consciousness, shared rituals and traditions, and obligations to society), which have positive effects on value creation practices (i.e., social networking, community engagement, impression management, and brand use). Such communities could enhance brand loyalty through brand use and impression management practices. We show that brand trust has a full mediating role in converting value creation practices into brand loyalty. Implications for practice and future research opportunities are discussed.", "title": "" } ]
[ { "docid": "89652309022bc00c7fd76c4fe1c5d644", "text": "In first encounters people quickly form impressions of each other’s personality and interpersonal attitude. We conducted a study to investigate how this transfers to first encounters between humans and virtual agents. In the study, subjects’ avatars approached greeting agents in a virtual museum rendered in both first and third person perspective. Each agent exclusively exhibited nonverbal immediacy cues (smile, gaze and proximity) during the approach. Afterwards subjects judged its personality (extraversion) and interpersonal attitude (hostility/friendliness). We found that within only 12.5 seconds of interaction subjects formed impressions of the agents based on observed behavior. In particular, proximity had impact on judgments of extraversion whereas smile and gaze on friendliness. These results held for the different camera perspectives. Insights on how the interpretations might change according to the user’s own personality are also provided.", "title": "" }, { "docid": "c1906bcb735d0c77057441f13ea282fc", "text": "It has long been known that storage of information in working memory suffers as a function of proactive interference. Here we review the results of experiments using approaches from cognitive neuroscience to reveal a pattern of brain activity that is a signature of proactive interference. Many of these results derive from a single paradigm that requires one to resolve interference from a previous experimental trial. The importance of activation in left inferior frontal cortex is shown repeatedly using this task and other tasks. We review a number of models that might account for the behavioral and imaging findings about proactive interference, raising questions about the adequacy of these models.", "title": "" }, { "docid": "c4ecf2d867a84a94ad34a1d4943071df", "text": "This paper introduces our submission to the 2nd Facial Landmark Localisation Competition. 
We present a deep architecture to directly detect facial landmarks without using face detection as an initialization. The architecture consists of two stages, a Basic Landmark Prediction Stage and a Whole Landmark Regression Stage. At the former stage, given an input image, the basic landmarks of all faces are detected by a sub-network of landmark heatmap and affinity field prediction. At the latter stage, the coarse canonical face and the pose can be generated by a Pose Splitting Layer based on the visible basic landmarks. According to its pose, each canonical state is distributed to the corresponding branch of the shape regression sub-networks for the whole landmark detection. Experimental results show that our method obtains promising results on the 300-W dataset, and achieves superior performances over the baselines of the semi-frontal and the profile categories in this competition.", "title": "" }, { "docid": "c6d2371a165acc46029eb4ad42df3270", "text": "Video game playing is a popular activity and its enjoyment among frequent players has been associated with absorption and immersion experiences. This paper examines how immersion in the video game environment can influence the player during the game and afterwards (including fantasies, thoughts, and actions). This is what is described as Game Transfer Phenomena (GTP). GTP occurs when video game elements are associated with real life elements triggering subsequent thoughts, sensations and/or player actions. To investigate this further, a total of 42 frequent video game players aged between 15 and 21 years old were interviewed. Thematic analysis showed that many players experienced GTP, where players appeared to integrate elements of video game playing into their real lives. These GTP were then classified as either intentional or automatic experiences. 
Results also showed that players used video games for interacting with others as a form of amusement, modeling or mimicking video game content, and daydreaming about video games. Furthermore, the findings demonstrate how video games triggered intrusive thoughts, sensations, impulses, reflexes, visual illusions, and dissociations. DOI: 10.4018/ijcbpl.2011070102 International Journal of Cyber Behavior, Psychology and Learning, 1(3), 15-33, July-September 2011 Copyright © 2011, IGI Global. 24/7 activity (e.g., Ng & Weimer-Hastings, 2005; Chappell, Eatough, Davies, & Griffiths, 2006; Grüsser, Thalemann, & Griffiths, 2007). Today’s video games have evolved due to technological advance, resulting in high levels of realism and emotional design that include diversity, experimentation, and (perhaps in some cases) sensory overload. Furthermore, video games have been considered as fantasy triggers because they offer ‘what if’ scenarios (Baranowski, Buday, Thompson, & Baranowski, 2008). What if the player could become someone else? What if the player could inhabit an improbable world? What if the player could interact with fantasy characters or situations (Woolley, 1995)? Entertainment media content can be very effective in capturing the minds and eliciting emotions in the individual. Research about novels, films, fairy tales and television programs has shown that entertainment can generate emotions such as joy, awe, compassion, fear and anger (Oatley, 1999; Tan 1996; Valkenburg Cantor & Peeters, 2000, cited in Jansz et al., 2005). Video games also have the capacity to generate such emotions and have the capacity for players to become both immersed in, and dissociated from, the video game. 
Dissociation and Immersion It is clear that dissociation is a somewhat “fuzzy” concept as there is no clear accepted definition of what it actually constitutes (Griffiths, Wood, Parke, & Parke, 2006). Most would agree that dissociation is a form of altered state of consciousness. However, dissociative behaviours lie on a continuum and range from individuals losing track of time, feeling like they are someone else, blacking out, not recalling how they got somewhere or what they did, and being in a trance like state (Griffiths et al., 2006). Studies have found that dissociation is related to an extensive involvement in fantasizing, and daydreaming (Giesbrecht, Geraerts, & Merckelbach, 2007). Dissociative phenomena of the non-pathological type include absorption and imaginative involvement (Griffith et al., 2006) and are psychological phenomena that can occur during video game playing. Anyone can, to some degree, experience dissociative states in their daily lives (Giesbrecht et al., 2007). Furthermore, these states can happen episodically and can be situationally triggered (Griffiths et al., 2006). When people become engaged in games they may experience psychological absorption. More commonly known as ‘immersion’, this refers to when individual logical integration of thoughts, feelings and experiences is suspended (Funk, Chan, Brouwer, & Curtiss, 2006; Wood, Griffiths, & Parke, 2007). This can incur an altered state of consciousness such as altered time perception and change in degree of control over cognitive functioning (Griffiths et al., 2006). Video game enjoyment has been associated with absorption and immersion experiences (IJsselsteijn, Kort, de Poels, Jurgelionis, & Belotti, 2007). How an individual can get immersed in video games has been explained by the phenomenon of ‘flow’ (Csikszentmihalyi, 1988). 
Flow refers to the optimum experience a person achieves when performing an activity (e.g., video game playing) and may be induced, in part, by the structural characteristics of the activity itself. Structural characteristics of video games (i.e., the game elements that are incorporated into the game by the games designers) are usually based on a balance between skill and challenge (Wood et al., 2004; King, Delfabbro, & Griffiths, 2010), and help make playing video games an intrinsically rewarding activity (Csikszentmihalyi, 1988; King, et al. 2010). Studying Video Game Playing Studying the effects of video game playing requires taking in consideration four independent dimensions suggested by Gentile and Stone (2005); amount, content, form, and mechanism. The amount is understood as the time spent playing and gaming habits. Content refers to the message and topic delivered by the video game. Form focuses on the types of activity necessary to perform in the video game. The mechanism refers to the input-output devices used.
", "title": "" }, { "docid": "2390d3d6c51c4a6857c517eb2c2cb3c0", "text": "It is common for organizations to maintain multiple variants of a given business process, such as multiple sales processes for different products or multiple bookkeeping processes for different countries. Conventional business process modeling languages do not explicitly support the representation of such families of process variants. This gap triggered significant research efforts over the past decade, leading to an array of approaches to business process variability modeling. In general, each of these approaches extends a conventional process modeling language with constructs to capture customizable process models. A customizable process model represents a family of process variants in a way that a model of each variant can be derived by adding or deleting fragments according to customization options or according to a domain model. This survey draws up a systematic inventory of approaches to customizable process modeling and provides a comparative evaluation with the aim of identifying common and differentiating modeling features, providing criteria for selecting among multiple approaches, and identifying gaps in the state of the art. The survey puts into evidence an abundance of customizable process-modeling languages, which contrasts with a relative scarcity of available tool support and empirical comparative evaluations.", "title": "" }, { "docid": "9676c561df01b794aba095dc66b684f8", "text": "The differentiation of B lymphocytes in the bone marrow is guided by the surrounding microenvironment determined by cytokines, adhesion molecules, and the extracellular matrix. These microenvironmental factors are mainly provided by stromal cells. In this paper, we report the identification of a VCAM-1-positive stromal cell population by flow cytometry. 
This population showed the expression of cell surface markers known to be present on stromal cells (CD10, CD13, CD90, CD105) and had a fibroblastoid phenotype in vitro. Single cell RT-PCR analysis of its cytokine expression pattern revealed transcripts for haematopoietic cytokines important for either the early B lymphopoiesis like flt3L or the survival of long-lived plasma cells like BAFF or both processes like SDF-1. Whereas SDF-1 transcripts were detectable in all VCAM-1-positive cells, flt3L and BAFF were only expressed by some cells suggesting the putative existence of different subpopulations with distinct functional properties. In summary, the VCAM-1-positive cell population seems to be a candidate stromal cell population supporting either developing B cells and/or long-lived plasma cells in human bone marrow.", "title": "" }, { "docid": "9c28badf1e53e69452c1d7aad2a87fab", "text": "While an al dente character of 5G is yet to emerge, network densification, miscellany of node types, split of control and data plane, network virtualization, heavy and localized cache, infrastructure sharing, concurrent operation at multiple frequency bands, simultaneous use of different medium access control and physical layers, and flexible spectrum allocations can be envisioned as some of the potential ingredients of 5G. It is not difficult to prognosticate that with such a conglomeration of technologies, the complexity of operation and OPEX can become the biggest challenge in 5G. To cope with similar challenges in the context of 3G and 4G networks, recently, self-organizing networks, or SONs, have been researched extensively. However, the ambitious quality of experience requirements and emerging multifarious vision of 5G, and the associated scale of complexity and cost, demand a significantly different, if not totally new, approach toward SONs in order to make 5G technically as well as financially feasible. 
In this article we first identify what challenges hinder the current self-optimizing networking paradigm from meeting the requirements of 5G. We then propose a comprehensive framework for empowering SONs with big data to address the requirements of 5G. Under this framework we first characterize big data in the context of future mobile networks, identifying its sources and future utilities. We then explicate the specific machine learning and data analytics tools that can be exploited to transform big data into the right data that provides a readily useable knowledge base to create end-to-end intelligence of the network. We then explain how a SON engine can build on the dynamic models extractable from the right data. The resultant dynamicity of a big data empowered SON (BSON) makes it more agile and can essentially transform the SON from being a reactive to proactive paradigm and hence act as a key enabler for 5G's extremely low latency requirements. Finally, we demonstrate the key concepts of our proposed BSON framework through a case study of a problem that the classic 3G/4G SON fails to solve.", "title": "" }, { "docid": "12af7a639f885a173950304cf44b5a42", "text": "Objective:To compare fracture rates in four diet groups (meat eaters, fish eaters, vegetarians and vegans) in the Oxford cohort of the European Prospective Investigation into Cancer and Nutrition (EPIC-Oxford).Design:Prospective cohort study of self-reported fracture risk at follow-up.Setting:The United Kingdom.Subjects:A total of 7947 men and 26 749 women aged 20–89 years, including 19 249 meat eaters, 4901 fish eaters, 9420 vegetarians and 1126 vegans, recruited by postal methods and through general practice surgeries.Methods:Cox regression.Results:Over an average of 5.2 years of follow-up, 343 men and 1555 women reported one or more fractures. 
Compared with meat eaters, fracture incidence rate ratios in men and women combined adjusted for sex, age and non-dietary factors were 1.01 (95% CI 0.88–1.17) for fish eaters, 1.00 (0.89–1.13) for vegetarians and 1.30 (1.02–1.66) for vegans. After further adjustment for dietary energy and calcium intake the incidence rate ratio among vegans compared with meat eaters was 1.15 (0.89–1.49). Among subjects consuming at least 525 mg/day calcium the corresponding incidence rate ratios were 1.05 (0.90–1.21) for fish eaters, 1.02 (0.90–1.15) for vegetarians and 1.00 (0.69–1.44) for vegans.Conclusions:In this population, fracture risk was similar for meat eaters, fish eaters and vegetarians. The higher fracture risk in the vegans appeared to be a consequence of their considerably lower mean calcium intake. An adequate calcium intake is essential for bone health, irrespective of dietary preferences.Sponsorship:The EPIC-Oxford study is supported by The Medical Research Council and Cancer Research UK.", "title": "" }, { "docid": "b1e039673d60defd9b8699074235cf1b", "text": "Sentiment classification has undergone significant development in recent years. However, most existing studies assume the balance between negative and positive samples, which may not be true in reality. In this paper, we investigate imbalanced sentiment classification instead. In particular, a novel clustering-based stratified under-sampling framework and a centroid-directed smoothing strategy are proposed to address the imbalanced class and feature distribution problems respectively. 
Evaluation across different datasets shows the effectiveness of both the under-sampling framework and the smoothing strategy in handling the imbalanced problems in real sentiment classification applications.", "title": "" }, { "docid": "8aacdb790ddec13f396a0591c0cd227a", "text": "This paper reports on a qualitative study of journal entries written by students in six health professions participating in the Interprofessional Health Mentors program at the University of British Columbia, Canada. The study examined (1) what health professions students learn about professional language and communication when given the opportunity, in an interprofessional group with a patient or client, to explore the uses, meanings, and effects of common health care terms, and (2) how health professional students write about their experience of discussing common health care terms, and what this reveals about how students see their development of professional discourse and participation in a professional discourse community. Using qualitative thematic analysis to address the first question, the study found that discussion of these health care terms provoked learning and reflection on how words commonly used in one health profession can be understood quite differently in other health professions, as well as on how health professionals' language choices may be perceived by patients and clients. Using discourse analysis to address the second question, the study further found that many of the students emphasized accuracy and certainty in language through clear definitions and intersubjective agreement. 
However, when prompted by the discussion they were willing to consider other functions and effects of language.", "title": "" }, { "docid": "26feac05cc1827728cbcb6be3b4bf6d1", "text": "This paper presents a Linux kernel module, DigSig, which helps system administrators control Executable and Linkable Format (ELF) binary execution and library loading based on the presence of a valid digital signature. By preventing attackers from replacing libraries and sensitive, privileged system daemons with malicious code, DigSig increases the difficulty of hiding illicit activities such as access to compromised systems. DigSig provides system administrators with an efficient tool which mitigates the risk of running malicious code at run time. This tool adds extra functionality previously unavailable for the Linux operating system: kernel level RSA signature verification with caching and revocation of signatures.", "title": "" }, { "docid": "a134fe9ffdf7d99593ad9cdfd109b89d", "text": "A hybrid particle swarm optimization (PSO) for the job shop problem (JSP) is proposed in this paper. In previous research, PSO particles search solutions in a continuous solution space. Since the solution space of the JSP is discrete, we modified the particle position representation, particle movement, and particle velocity to better suit PSO for the JSP. We modified the particle position based on preference list-based representation, particle movement based on swap operator, and particle velocity based on the tabu list concept in our algorithm. Giffler and Thompson’s heuristic is used to decode a particle position into a schedule. Furthermore, we applied tabu search to improve the solution quality. The computational results show that the modified PSO performs better than the original design, and that the hybrid PSO is better than other traditional metaheuristics. 2006 Elsevier Ltd. 
All rights reserved.", "title": "" }, { "docid": "76f033087b24fdb7494dd7271adbb346", "text": "Navigation research is attracting renewed interest with the advent of learning-based methods. However, this new line of work is largely disconnected from well-established classic navigation approaches. In this paper, we take a step towards coordinating these two directions of research. We set up classic and learning-based navigation systems in common simulated environments and thoroughly evaluate them in indoor spaces of varying complexity, with access to different sensory modalities. Additionally, we measure human performance in the same environments. We find that a classic pipeline, when properly tuned, can perform very well in complex cluttered environments. On the other hand, learned systems can operate more robustly with a limited sensor suite. Both approaches are still far from human-level performance.", "title": "" }, { "docid": "21d84bd9ea7896892a3e69a707b03a6a", "text": "Tahoe is a system for secure, distributed storage. It uses capabilities for access control, cryptography for confidentiality and integrity, and erasure coding for fault-tolerance. It has been deployed in a commercial backup service and is currently operational. The implementation is Open Source.", "title": "" }, { "docid": "3230fba68358a08ab9112887bdd73bb9", "text": "The local field potential (LFP) reflects activity of many neurons in the vicinity of the recording electrode and is therefore useful for studying local network dynamics. Much of the nature of the LFP is, however, still unknown. There are, for instance, contradicting reports on the spatial extent of the region generating the LFP. Here, we use a detailed biophysical modeling approach to investigate the size of the contributing region by simulating the LFP from a large number of neurons around the electrode. 
We find that the size of the generating region depends on the neuron morphology, the synapse distribution, and the correlation in synaptic activity. For uncorrelated activity, the LFP represents cells in a small region (within a radius of a few hundred micrometers). If the LFP contributions from different cells are correlated, the size of the generating region is determined by the spatial extent of the correlated activity.", "title": "" }, { "docid": "e00295dc86476d1d350d11068439fe87", "text": "A 10-bit LCD column driver, consisting of piecewise linear digital to analog converters (DACs), is proposed. Piecewise linear compensation is utilized to reduce the die area and to increase the effective color depth. The data conversion is carried out by a resistor string type DAC (R-DAC) and a charge sharing DAC, which are used for the most significant bit and least significant bit data conversions, respectively. Gamma correction voltages are applied to the R-DAC to lit the inverse of the liquid crystal trans-mittance-voltage characteristic. The gamma correction can also be digitally fine-tuned in the timing controller or column drivers. A prototype 10-bit LCD column driver implemented in a 0.35-mum CMOS technology demonstrates that the settling time is within 3 mus and the average die size per channel is 0.063 mm2, smaller than those of column drivers based exclusively on R-DACs.", "title": "" }, { "docid": "4c261e2b54a12270f158299733942a5f", "text": "Applying Data Mining (DM) in education is an emerging interdisciplinary research field also known as Educational Data Mining (EDM). Ensemble techniques have been successfully applied in the context of supervised learning to increase the accuracy and stability of prediction. In this paper, we present a hybrid procedure based on ensemble classification and clustering that enables academicians to firstly predict students’ academic performance and then place each student in a well-defined cluster for further advising. 
Additionally, it endows instructors an anticipated estimation of their students’ capabilities during team forming and in-class participation. For ensemble classification, we use multiple classifiers (Decision Trees-J48, Naïve Bayes and Random Forest) to improve the quality of student data by eliminating noisy instances, and hence improving predictive accuracy. We then use the approach of bootstrap (sampling with replacement) averaging, which consists of running k-means clustering algorithm to convergence of the training data and averaging similar cluster centroids to obtain a single model. We empirically compare our technique with other ensemble techniques on real world education datasets.", "title": "" }, { "docid": "2a7b7d9fab496be18f6bf50add2f7b1e", "text": "BACKROUND\nSuperior Mesenteric Artery Syndrome (SMAS) is a rare disorder caused by compression of the third portion of the duodenum by the SMA. Once a conservative approach fails, usual surgical strategies include Duodenojejunostomy and Strong's procedure. The latter avoids potential anastomotic risks and complications. Robotic Strong's procedure (RSP) combines both the benefits of a minimal invasive approach and also enchased robotic accuracy and efficacy.\n\n\nMETHODS\nFor a young girl who was unsuccessfully treated conservatively, the paper describes the RSP surgical technique. To the authors' knowledge, this is the first report in the literature.\n\n\nRESULTS\nMinimal blood loss, short operative time, short hospital stay and early recovery were the short-term benefits. Significant weight gain was achieved three months after the surgery.\n\n\nCONCLUSION\nBased on primary experience, it is suggested that RSP is a very effective alternative in treating SMAS.", "title": "" }, { "docid": "d18c53be23600c9b0ae2efa215c7c4af", "text": "The problem of large-scale image search has been traditionally addressed with the bag-of-visual-words (BOV). 
In this article, we propose to use as an alternative the Fisher kernel framework. We first show why the Fisher representation is well-suited to the retrieval problem: it describes an image by what makes it different from other images. One drawback of the Fisher vector is that it is high-dimensional and, as opposed to the BOV, it is dense. The resulting memory and computational costs do not make Fisher vectors directly amenable to large-scale retrieval. Therefore, we compress Fisher vectors to reduce their memory footprint and speed-up the retrieval. We compare three binarization approaches: a simple approach devised for this representation and two standard compression techniques. We show on two publicly available datasets that compressed Fisher vectors perform very well using as little as a few hundreds of bits per image, and significantly better than a very recent compressed BOV approach.", "title": "" }, { "docid": "c32c1c16aec9bc6dcfb5fa8fb4f25140", "text": "Logo detection is a challenging task with many practical applications in our daily life and intellectual property protection. The two main obstacles here are lack of public logo datasets and effective design of logo detection structure. In this paper, we first manually collected and annotated 6,400 images and mix them with FlickrLogo-32 dataset, forming a larger dataset. Secondly, we constructed Faster R-CNN frameworks with several widely used classification models for logo detection. Furthermore, the transfer learning method was introduced in the training process. Finally, clustering was used to guarantee suitable hyper-parameters and more precise anchors of RPN. Experimental results show that the proposed framework outperforms the state-of-the-art methods with a noticeable margin.", "title": "" } ]
scidocsrr
1b8ab416e44c8d94d782589e19c50540
What Is the Evidence to Support the Use of Therapeutic Gardens for the Elderly?
[ { "docid": "a86114aeee4c0bc1d6c9a761b50217d4", "text": "OBJECTIVE\nThe purpose of this study was to investigate the effect of antidepressant treatment on hippocampal volumes in patients with major depression.\n\n\nMETHOD\nFor 38 female outpatients, the total time each had been in a depressive episode was divided into days during which the patient was receiving antidepressant medication and days during which no antidepressant treatment was received. Hippocampal gray matter volumes were determined by high resolution magnetic resonance imaging and unbiased stereological measurement.\n\n\nRESULTS\nLonger durations during which depressive episodes went untreated with antidepressant medication were associated with reductions in hippocampal volume. There was no significant relationship between hippocampal volume loss and time depressed while taking antidepressant medication or with lifetime exposure to antidepressants.\n\n\nCONCLUSIONS\nAntidepressants may have a neuroprotective effect during depression.", "title": "" } ]
[ { "docid": "53a033a068a51cfa0b025c2cae508702", "text": "In a grid connected photovoltaic system, the main aim is to design an efficient solar inverter with higher efficiency and which also controls the power that the inverter injects into the grid. The effectiveness of the general PV system anticipate on the productivity by which the direct current of the solar module is changed over into alternating current. The fundamental requirement to interface the solar module to the grid with increased productivity includes: Low THD of current injected to the grid, maximum power point, and high power factor. In this paper, a two stage topology without galvanic isolation is been carried out for a single phase grid connected photovoltaic inverter. The output from the PV panel is given to the DC/DC boost converter, maximum power point tracking (MPPT) control technique is being used to control the gate pulse of the IGBT of boost converter. The boosted output is fed to the highly efficient and reliable inverter concept (HERIC) inverter in order to convert DC into AC with higher efficiency.", "title": "" }, { "docid": "d3ac465b3271e81f735086a2359fca9b", "text": "Computing a curve to approximate data points is a problem encountered frequently in many applications in computer graphics, computer vision, CAD/CAM, and image processing. We present a novel and efficient method, called squared distance minimization (SDM), for computing a planar B-spline curve, closed or open, to approximate a target shape defined by a point cloud, that is, a set of unorganized, possibly noisy data points. We show that SDM significantly outperforms other optimization methods used currently in common practice of curve fitting. In SDM, a B-spline curve starts from some properly specified initial shape and converges towards the target shape through iterative quadratic minimization of the fitting error. 
Our contribution is the introduction of a new fitting error term, called the squared distance (SD) error term, defined by a curvature-based quadratic approximant of squared distances from data points to a fitting curve. The SD error term faithfully measures the geometric distance between a fitting curve and a target shape, thus leading to faster and more stable convergence than the point distance (PD) error term, which is commonly used in computer graphics and CAGD, and the tangent distance (TD) error term, which is often adopted in the computer vision community. To provide a theoretical explanation of the superior performance of SDM, we formulate the B-spline curve fitting problem as a nonlinear least squares problem and conclude that SDM is a quasi-Newton method which employs a curvature-based positive definite approximant to the true Hessian of the objective function. Furthermore, we show that the method based on the TD error term is a Gauss-Newton iteration, which is unstable for target shapes with high curvature variations, whereas optimization based on the PD error term is the alternating method that is known to have linear convergence.", "title": "" }, { "docid": "3a2168e93c1f8025e93de1a7594e17d5", "text": "Multisensor Data Fusion for Next Generation Distributed Intrusion Detection Systems Tim Bass ERIM International & Silk Road Ann Arbor, MI 48113 Abstract: Next generation cyberspace intrusion detection systems will fuse data from heterogeneous distributed network sensors to create cyberspace situational awareness. This paper provides a few first steps toward developing the engineering requirements using the art and science of multisensor data fusion as the underlying model. Current generation internet-based intrusion detection systems and basic multisensor data fusion constructs are summarized. The TCP/IP model is used to develop framework sensor and database models. 
The SNMP ASN.1 MIB construct is recommended for the representation of context-dependent threat & vulnerabilities databases.", "title": "" }, { "docid": "bd0e01675a12193752588e6bc730edd5", "text": "Online safety is everyone's responsibility---a concept much easier to preach than to practice.", "title": "" }, { "docid": "bf707a96f7059b4c4f62d38255bb8333", "text": "We present a system to detect passenger cars in aerial images along the road directions where cars appear as small objects. We pose this as a 3D object recognition problem to account for the variation in viewpoint and the shadow. We started from psychological tests to find important features for human detection of cars. Based on these observations, we selected the boundary of the car body, the boundary of the front windshield, and the shadow as the features. Some of these features are affected by the intensity of the car and whether or not there is a shadow along it. This information is represented in the structure of the Bayesian network that we use to integrate all features. Experiments show very promising results even on some very challenging images.", "title": "" }, { "docid": "71cf493e0026fe057b1100c5ad1118ad", "text": "We explore story generation: creative systems that can build coherent and fluent passages of text about a topic. We collect a large dataset of 300K human-written stories paired with writing prompts from an online forum. Our dataset enables hierarchical story generation, where the model first generates a premise, and then transforms it into a passage of text. We gain further improvements with a novel form of model fusion that improves the relevance of the story to the prompt, and adding a new gated multi-scale self-attention mechanism to model long-range context. Experiments show large improvements over strong baselines on both automated and human evaluations. 
Human judges prefer stories generated by our approach to those from a strong non-hierarchical model by a factor of two to one.", "title": "" }, { "docid": "beedf5250dccbb0cf021618532dd98f6", "text": "This paper deals with the problem of gender classification using fingerprint images. Our attempt to gender identification follows the use of machine learning to determine the differences between fingerprint images. Each image in the database was represented by a feature vector consisting of ridge thickness to valley thickness ratio (RTVTR) and the ridge density values. By using a support vector machine trained on a set of 150 male and 125 female images, we obtain a robust classifying function for male and female feature vector patterns.", "title": "" }, { "docid": "b891bf4de3d1060b723c7a2e443acd10", "text": "For a dynamic network based large vocabulary continuous speech recognizer, this paper proposes a fast language model (LM) look-ahead method using extended N-gram model. The extended N-gram model unifies the representations and score computations of the LM and the LM look-ahead tree, and thus greatly simplifies the decoder implementation and improves the LM look-ahead speed significantly, which makes higher-order LM look-ahead possible. The extended N-gram model is generated off-line before decoding starts. The generation procedure makes use of sparseness of backing-off N-gram models for efficient look-ahead score computation, and uses word-end node pushing and score quantitation to compact the model's storage space. 
Experiments showed that, at the same character error rate, the proposed method sped up overall recognition by a factor of 5∼9 compared with the traditional dynamic programming method, which computes LM look-ahead scores on-line during the decoding process, and that using a higher-order LM look-ahead algorithm achieves a faster decoding speed and better accuracy than using lower-order look-ahead ones.", "title": "" },
They suggest a pre-meditated approach that is not wholly congruent with the principles of qualitative research.", "title": "" }, { "docid": "913ea886485fae9b567146532ca458ac", "text": "This article presents a new method to illustrate the feasibility of 3D topology creation. We base the 3D construction process on testing real cases of implementation of 3D parcels construction in a 3D cadastral system. With the utilization and development of dense urban space, true 3D geometric volume primitives are needed to represent 3D parcels with the adjacency and incidence relationship. We present an effective straightforward approach to identifying and constructing the valid volumetric cadastral object from the given faces, and build the topological relationships among 3D cadastral objects on-thefly, based on input consisting of loose boundary 3D faces made by surveyors. This is drastically different from most existing methods, which focus on the validation of single volumetric objects after the assumption of the object’s creation. Existing methods do not support the needed types of geometry/ topology (e.g. non 2-manifold, singularities) and how to create and maintain valid 3D parcels is still a challenge in practice. We will show that the method does not change the faces themselves and faces in a given input are independently specified. Various volumetric objects, including non-manifold 3D cadastral objects (legal spaces), can be constructed correctly by this method, as will be shown from the", "title": "" }, { "docid": "988b56fdbfd0fbb33bb715adb173c63c", "text": "This paper presents a new sensing system for home-based rehabilitation based on optical linear encoder (OLE), in which the motion of an optical encoder on a code strip is converted to the limb joints' goniometric data. A body sensing module was designed, integrating the OLE and an accelerometer. A sensor network of three sensing modules was established via controller area network bus to capture human arm motion. 
Experiments were carried out to compare the performance of the OLE module with that of commercial motion capture systems such as electrogoniometers and fiber-optic sensors. The results show that the inexpensive and simple-design OLE's performance is comparable to that of expensive systems. Moreover, a statistical study was conducted to confirm the repeatability and reliability of the sensing system. The OLE-based system has strong potential as an inexpensive tool for motion capture and arm-function evaluation for short-term as well as long-term home-based monitoring.", "title": "" }, { "docid": "45d60590eeb7983c5f449719e51dd628", "text": "Directly adding the knowledge triples obtained from open information extraction systems into a knowledge base is often impractical due to a vocabulary gap between natural language (NL) expressions and knowledge base (KB) representation. This paper aims at learning to map relational phrases in triples from natural-language-like statement to knowledge base predicate format. We train a word representation model on a vector space and link each NL relational pattern to the semantically equivalent KB predicate. 
Our mapping results show not only high quality but also promising coverage of relational phrases compared to previous research.", "title": "" },
Moreover, osteocytes produce factors that influence osteoblast and osteoclast activities, whereas osteocyte apoptosis is followed by osteoclastic bone resorption. The increasing knowledge about the structure and functions of bone cells contributed to a better understanding of bone biology. It has been suggested that there is a complex communication between bone cells and other organs, indicating the dynamic nature of bone tissue. In this review, we discuss the current data about the structure and functions of bone cells and the factors that influence bone remodeling.", "title": "" }, { "docid": "8f916f7be3048ae2a367096f4f82207d", "text": "Existing methods for single document keyphrase extraction usually make use of only the information contained in the specified document. This paper proposes to use a small number of nearest neighbor documents to provide more knowledge to improve single document keyphrase extraction. A specified document is expanded to a small document set by adding a few neighbor documents close to the document, and the graph-based ranking algorithm is then applied on the expanded document set to make use of both the local information in the specified document and the global information in the neighbor documents. Experimental results demonstrate the good effectiveness and robustness of our proposed approach.", "title": "" }, { "docid": "6fb868748f5c2ed6d8ae34721bc445eb", "text": "Handling imbalanced datasets is a challenging problem that if not treated correctly results in reduced classification performance. Imbalanced datasets are commonly handled using minority oversampling, whereas the SMOTE algorithm is a successful oversampling algorithm with numerous extensions. SMOTE extensions do not have a theoretical guarantee during training to work better than SMOTE and in many instances their performance is data dependent. 
In this paper we propose a novel extension to the SMOTE algorithm with a theoretical guarantee of improved classification performance. The proposed approach considers the classification performance of both the majority and minority classes. In the proposed approach, CGMOS (Certainty Guided Minority OverSampling), new data points are added by considering certainty changes in the dataset. The paper provides a proof that the proposed algorithm is guaranteed to work better than SMOTE for training data. Further, experimental results on 30 real-world datasets show that CGMOS works better than existing algorithms when using 6 different classifiers.", "title": "" },
In this context, we also examine the impact of dykes on floodwater emergence and assess the relationship between retrieved flood occurrence patterns and land use. In addition, the advantages and shortcomings of ENVISAT ASAR-WSM based flood mapping are discussed. The results contribute to a comprehensive understanding of Mekong Delta flood dynamics in an environment where the flow regime is influenced by the Mekong River, overland water-flow, anthropogenic floodwater control, as well as the tides.", "title": "" },
However, the TF-IDF formulation does not utilize the class information available in supervised learning. For classification problems, if it is possible to identify terms that can strongly distinguish among classes, then more weight can be given to those terms during the feature construction phase. This may result in improved classifier performance through the incorporation of extra class-label-related information. We propose a supervised feature construction method to classify tweets posted during different disaster scenarios based on the actionable information they might contain. Improved classifier performance for such classification tasks can be helpful in rescue and relief operations. We used three benchmark datasets containing tweets posted during the Nepal and Italy earthquakes in 2015 and 2016, respectively. Experimental results show that the proposed method obtains better classification performance on these benchmark datasets.", "title": "" },
scidocsrr
981b8ee24864cf71e9ad34c9967065ff
Integrating 3D structure into traffic scene understanding with RGB-D data
[ { "docid": "5691ca09e609aea46b9fd5e7a83d165a", "text": "View-based 3-D object retrieval and recognition has become popular in practice, e.g., in computer aided design. It is difficult to precisely estimate the distance between two objects represented by multiple views. Thus, current view-based 3-D object retrieval and recognition methods may not perform well. In this paper, we propose a hypergraph analysis approach to address this problem by avoiding the estimation of the distance between objects. In particular, we construct multiple hypergraphs for a set of 3-D objects based on their 2-D views. In these hypergraphs, each vertex is an object, and each edge is a cluster of views. Therefore, an edge connects multiple vertices. We define the weight of each edge based on the similarities between any two views within the cluster. Retrieval and recognition are performed based on the hypergraphs. Therefore, our method can explore the higher order relationship among objects and does not use the distance between objects. We conduct experiments on the National Taiwan University 3-D model dataset and the ETH 3-D object collection. Experimental results demonstrate the effectiveness of the proposed method by comparing with the state-of-the-art methods.", "title": "" } ]
[ { "docid": "c460179cbdb40b9d89b3cc02276d54e1", "text": "In recent years the sport of climbing has seen consistent increase in popularity. Climbing requires a complex skill set for successful and safe exercising. While elite climbers receive intensive expert coaching to refine this skill set, this progression approach is not viable for the amateur population. We have developed ClimbAX - a climbing performance analysis system that aims for replicating expert assessments and thus represents a first step towards an automatic coaching system for climbing enthusiasts. Through an accelerometer based wearable sensing platform, climber's movements are captured. An automatic analysis procedure detects climbing sessions and moves, which form the basis for subsequent performance assessment. The assessment parameters are derived from sports science literature and include: power, control, stability, speed. ClimbAX was evaluated in a large case study with 53 climbers under competition settings. We report a strong correlation between predicted scores and official competition results, which demonstrate the effectiveness of our automatic skill assessment system.", "title": "" }, { "docid": "179e5b887f15b4ecf4ba92031a828316", "text": "High efficiency power supply solutions for data centers are gaining more attention, in order to minimize the fast growing power demands of such loads, the 48V Voltage Regulator Module (VRM) for powering CPU is a promising solution replacing the legacy 12V VRM by which the bus distribution loss, cost and size can be dramatically minimized. In this paper, a two-stage 48V/12V/1.8V–250W VRM is proposed, the first stage is a high efficiency, high power density isolated — unregulated DC/DC converter (DCX) based on LLC resonant converter stepping the input voltage from 48V to 12V. 
The matrix transformer concept was utilized in designing the high-frequency transformer of the first stage; an enhanced termination loop for the synchronous rectifiers and a non-uniform winding structure are proposed, resulting in a significant increase in both the power density and efficiency of the first-stage converter. The second stage is a 4-phase buck converter stepping the voltage from 12V down to 1.8V for the CPU. Since the CPU runs in sleep mode most of the time, a light-load efficiency improvement method that changes the bus voltage from 12V to 6V during light-load operation is proposed, showing more than 8% light-load efficiency enhancement over a fixed bus voltage. Experimental results demonstrate the high efficiency of the proposed solution, reaching a peak of 91% with a significant light-load efficiency improvement.", "title": "" },
These memories are erased when an action corresponding to the represented goal takes place. By contrast, if the action is incompletely or not executed, the whole system remains activated, and the content of the representation is rehearsed. This mechanism would be the substrate for conscious access to this content during motor imagery and mental training.", "title": "" }, { "docid": "e054c2d3b52441eaf801e7d2dd54dce9", "text": "The concept of centrality is often invoked in social network analysis, and diverse indices have been proposed to measure it. This paper develops a unified framework for the measurement of centrality. All measures of centrality assess a node’s involvement in the walk structure of a network. Measures vary along four key dimensions: type of nodal involvement assessed, type of walk considered, property of walk assessed, and choice of summary measure. If we cross-classify measures by type of nodal involvement (radial versus medial) and property of walk assessed (volume versus length), we obtain a four-fold polychotomization with one cell empty which mirrors Freeman’s 1979 categorization. At a more substantive level, measures of centrality summarize a node’s involvement in or contribution to the cohesiveness of the network. Radial measures in particular are reductions of pair-wise proximities/cohesion to attributes of nodes or actors. The usefulness and interpretability of radial measures depend on the fit of the cohesion matrix to the onedimensional model. In network terms, a network that is fit by a one-dimensional model has a core-periphery structure in which all nodes revolve more or less closely around a single core. This in turn implies that the network does not contain distinct cohesive subgroups. Thus, centrality is shown to be intimately connected with the cohesive subgroup structure of a network. © 2005 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "bd1fdbfcc0116dcdc5114065f32a883e", "text": "Thousands of operations are annually guided with computer assisted surgery (CAS) technologies. As the use of these devices is rapidly increasing, the reliability of the devices becomes ever more critical. The problem of accuracy assessment of the devices has thus become relevant. During the past five years, over 200 hazardous situations have been documented in the MAUDE database during operations using these devices in the field of neurosurgery alone. Had the accuracy of these devices been periodically assessed pre-operatively, many of them might have been prevented. The technical accuracy of a commercial navigator enabling the use of both optical (OTS) and electromagnetic (EMTS) tracking systems was assessed in the hospital setting using accuracy assessment tools and methods developed by the authors of this paper. The technical accuracy was obtained by comparing the positions of the navigated tool tip with the phantom accuracy assessment points. Each assessment contained a total of 51 points and a region of surgical interest (ROSI) volume of 120x120x100 mm roughly mimicking the size of the human head. The error analysis provided a comprehensive understanding of the trend of accuracy of the surgical navigator modalities. This study showed that the technical accuracies of OTS and EMTS over the pre-determined ROSI were nearly equal. However, the placement of the particular modality hardware needs to be optimized for the surgical procedure. New applications of EMTS, which does not require rigid immobilization of the surgical area, are suggested.", "title": "" }, { "docid": "48a45f03f31d8fc0daede6603f3b693a", "text": "This paper presents GelClust, a new software that is designed for processing gel electrophoresis images and generating the corresponding phylogenetic trees. 
Unlike most commercial and non-commercial related software, we found that GelClust is very user-friendly and guides the user from image to dendrogram through seven simple steps. Furthermore, the software, which is implemented in the C# programming language under the Windows operating system, is more accurate than similar software at image processing and is the only software able to detect and correct gel 'smile' effects fully automatically. These claims are supported by experiments.", "title": "" },
Many factors can contribute to undesirable idle time, including locking problems, excessive system-level activities like garbage collection, various resource constraints, and problems driving load.\n We present the design and methodology for WAIT, a tool to diagnose the root cause of idle time in server applications. Given lightweight samples of Java activity on a single tier, the tool can often pinpoint the primary bottleneck on a multi-tier system. The methodology centers on an informative abstraction of the states of idleness observed in a running program. This abstraction allows the tool to distinguish, for example, between hold-ups on a database machine, insufficient load, lock contention in application code, and a conventional bottleneck due to a hot method. To compute the abstraction, we present a simple expert system based on an extensible set of declarative rules.\n WAIT can be deployed on the fly, without modifying or even restarting the application. Many groups in IBM have applied the tool to diagnose performance problems in commercial systems, and we present a number of examples as case studies.", "title": "" },
Training and testing data were obtained from the Defense Advanced Research Projects Agency (DARPA) intrusion detection evaluation data set.", "title": "" }, { "docid": "920748fbdcaf91346a40e3bf5ae53d42", "text": "This sketch presents an improved formalization of automatic caricature that extends a standard approach to account for the population variance of facial features. Caricature is generally considered a rendering that emphasizes the distinctive features of a particular face. A formalization of this idea, which we term “Exaggerating the Difference from the Mean” (EDFM), is widely accepted among caricaturists [Redman 1984] and was first implemented in a groundbreaking computer program by [Brennan 1985]. Brennan’s “Caricature generator” program produced caricatures by manually defining a polyline drawing with topology corresponding to a frontal, mean, face-shape drawing, and then displacing the vertices by a constant factor away from the mean shape. Many psychological studies have applied the “Caricature Generator” or EDFM idea to investigate caricaturerelated issues in face perception [Rhodes 1997].", "title": "" }, { "docid": "e7eb22e4ac65696e3bb2a2611a28e809", "text": "Cuckoo search (CS) is an efficient swarm-intelligence-based algorithm and significant developments have been made since its introduction in 2009. CS has many advantages due to its simplicity and efficiency in solving highly non-linear optimisation problems with real-world engineering applications. 
This paper provides a timely review of all the state-of-the-art developments in the last five years, including the discussions of theoretical background and research directions for future development of this powerful algorithm.", "title": "" }, { "docid": "65cae0002bcff888d6514aa2d375da40", "text": "We study the problem of finding efficiently computable non-degenerate multilinear maps from G1 to G2, where G1 and G2 are groups of the same prime order, and where computing discrete logarithms in G1 is hard. We present several applications to cryptography, explore directions for building such maps, and give some reasons to believe that finding examples with n > 2", "title": "" }, { "docid": "a2fb1ee73713544852292721dce21611", "text": "Large scale implementation of active RFID tag technology has been restricted by the need for battery replacement. Prolonging battery lifespan may potentially promote active RFID tags which offer obvious advantages over passive RFID systems. This paper explores some opportunities to simulate and develop a prototype RF energy harvester for 2.4 GHz band specifically designed for low power active RFID tag application. This system employs a rectenna architecture which is a receiving antenna attached to a rectifying circuit that efficiently converts RF energy to DC current. Initial ADS simulation results show that 2 V output voltage can be achieved using a 7 stage Cockroft-Walton rectifying circuitry with -4.881 dBm (0.325 mW) output power under -4 dBm (0.398 mW) input RF signal. These results lend support to the idea that RF energy harvesting is indeed promising.", "title": "" }, { "docid": "08c97484fe3784e2f1fd42606b915f83", "text": "In the present study we manipulated the importance of performing two event-based prospective memory tasks. In Experiment 1, the event-based task was assumed to rely on relatively automatic processes, whereas in Experiment 2 the event-based task was assumed to rely on a more demanding monitoring process. 
In contrast to the first experiment, the second experiment showed that importance had a positive effect on prospective memory performance. In addition, the occurrence of an importance effect on prospective memory performance seemed to be mainly due to the features of the prospective memory task itself, and not to the characteristics of the ongoing tasks that only influenced the size of the importance effect. The results suggest that importance instructions may improve prospective memory if the prospective task requires the strategic allocation of attentional monitoring resources.", "title": "" }, { "docid": "0e61015f3372ba177acdfcddbd0ffdfb", "text": "INTRODUCTION\nThere are many challenges to the drug discovery process, including the complexity of the target, its interactions, and how these factors play a role in causing the disease. Traditionally, biophysics has been used for hit validation and chemical lead optimization. With its increased throughput and sensitivity, biophysics is now being applied earlier in this process to empower target characterization and hit finding. Areas covered: In this article, the authors provide an overview of how biophysics can be utilized to assess the quality of the reagents used in screening assays, to validate potential tool compounds, to test the integrity of screening assays, and to create follow-up strategies for compound characterization. They also briefly discuss the utilization of different biophysical methods in hit validation to help avoid the resource consuming pitfalls caused by the lack of hit overlap between biophysical methods. Expert opinion: The use of biophysics early on in the drug discovery process has proven crucial to identifying and characterizing targets of complex nature. It also has enabled the identification and classification of small molecules which interact in an allosteric or covalent manner with the target. 
By applying biophysics in this manner and at the early stages of this process, the chances of finding chemical leads with novel mechanisms of action are increased. In the future, focused screens with biophysics as a primary readout will become increasingly common.", "title": "" }, { "docid": "51df36570be2707556a8958e16682612", "text": "Through co-design of Augmented Reality (AR) based teaching material, this research aims to enhance collaborative learning experience in primary school education. It will introduce an interactive AR Book based on primary school textbook using tablets as the real time interface. The development of this AR Book employs co-design methods to involve children, teachers, educators and HCI experts from the early stages of the design process. Research insights from the co-design phase will be implemented in the AR Book design. The final outcome of the AR Book will be evaluated in the classroom to explore its effect on the collaborative experience of primary school students. The research aims to answer the question - Can Augmented Books be designed for primary school students in order to support collaboration? This main research question is divided into two sub-questions as follows - How can co-design methods be applied in designing Augmented Book with and for primary school children? And what is the effect of the proposed Augmented Book on primary school students' collaboration? This research will not only present a practical application of co-designing AR Book for and with primary school children, it will also clarify the benefit of AR for education in terms of collaborative experience.", "title": "" }, { "docid": "d59e64c1865193db3aaecc202f688690", "text": "Event-related desynchronization/synchronization patterns during right/left motor imagery (MI) are effective features for an electroencephalogram-based brain-computer interface (BCI). 
As MI tasks are subject-specific, selection of subject-specific discriminative frequency components plays a vital role in distinguishing these patterns. This paper proposes a new discriminative filter bank (FB) common spatial pattern algorithm to extract a subject-specific FB for MI classification. The proposed method enhances the classification accuracy on BCI competition III dataset IVa and competition IV dataset IIb. Compared to the performance offered by the existing FB-based method, the proposed algorithm offers error rate reductions of 17.42% and 8.9% for BCI competition datasets III and IV, respectively.", "title": "" }, { "docid": "e748162d1e0de342983f7028156b3cf6", "text": "Modeling cloth with fiber-level geometry can produce highly realistic details. However, rendering fiber-level cloth models not only has a high memory cost but also a high computation cost, even for offline rendering applications. In this paper we present a real-time fiber-level cloth rendering method for current GPUs. Our method procedurally generates fiber-level geometric details on-the-fly using yarn-level control points, minimizing the data transfer to the GPU. We also reduce the rasterization operations by collectively representing the fibers near the center of each ply that forms the yarn structure. Moreover, we employ a level-of-detail strategy to minimize or completely eliminate the generation of fiber-level geometry that would have little or no impact on the final rendered image. Furthermore, we introduce a simple self-shadow computation method that allows lighting with self-shadows using relatively low-resolution shadow maps. We also provide a simple distance-based ambient occlusion approximation as well as an ambient illumination precomputation approach, both of which account for fiber-level self-occlusion of yarn.
Finally, we discuss how to use a physically based shading model with our fiber-level cloth rendering method and how to handle cloth animations with temporal coherency. We demonstrate the effectiveness of our approach by comparing our simplified fiber geometry to procedurally generated references and display knitwear containing more than a hundred million individual fiber curves at real-time frame rates with shadows and ambient occlusion.", "title": "" }, { "docid": "5028d250c60a70c0ed6954581ab6cfa7", "text": "Social commerce, a result of the advancement of social networking sites and Web 2.0, is growing as a new model of online shopping. With techniques for improving websites such as AJAX, Adobe Flash, XML, and RSS, the social media era has changed internet user behavior to be more communicative and active on the internet; users love to share information and recommendations among communities. Social commerce is also changing the way people shop online. The new challenge is that businesses have to provide interactive yet interesting websites for internet users; the website should give an experience that satisfies their needs. The purpose of this research is to analyze the impact of website quality (System Quality, Information Quality, and Service Quality) as well as an interaction feature (communication feature) on social commerce websites and customers' purchase intention. Data from 134 customers of a social commerce website were used to test the model. Multiple linear regression is used to calculate the statistical results, while confirmatory factor analysis was also conducted to test the validity of each variable.
The results show that website quality and the communication feature are important aspects of customers' purchase intention when purchasing on a social commerce website.", "title": "" }, { "docid": "5fbd1f14c8f4e8dc82bc86ad8b27c115", "text": "Computer-animated characters are common in popular culture and have begun to be used as experimental tools in social cognitive neurosciences. Here we investigated how the appearance of these characters influences perception of their actions. Subjects were presented with different characters animated either with motion data captured from human actors or by interpolating between poses (keyframes) designed by an animator, and were asked to categorize the motion as biological or artificial. The response bias towards 'biological', derived from Signal Detection Theory, decreases with characters' anthropomorphism, while sensitivity is only affected by the simplest rendering style, point-light displays. fMRI showed that the response bias correlates positively with activity in the mentalizing network including left temporoparietal junction and anterior cingulate cortex, and negatively with regions sustaining motor resonance. The absence of a significant effect of the characters on the brain activity suggests individual differences in the neural responses to unfamiliar artificial agents. While computer-animated characters are invaluable tools to investigate the neural bases of social cognition, further research is required to better understand how factors such as anthropomorphism affect their perception, in order to optimize their appearance for entertainment, research or therapeutic purposes.", "title": "" } ]
scidocsrr
0e6d0376110dc8b335378bf8b498dfca
Measuring the Effect of Conversational Aspects on Machine Translation Quality
[ { "docid": "355d040cf7dd706f08ef4ce33d53a333", "text": "Conversational participants tend to immediately and unconsciously adapt to each other’s language styles: a speaker will even adjust the number of articles and other function words in their next utterance in response to the number in their partner’s immediately preceding utterance. This striking level of coordination is thought to have arisen as a way to achieve social goals, such as gaining approval or emphasizing difference in status. But has the adaptation mechanism become so deeply embedded in the language-generation process as to become a reflex? We argue that fictional dialogs offer a way to study this question, since authors create the conversations but don’t receive the social benefits (rather, the imagined characters do). Indeed, we find significant coordination across many families of function words in our large movie-script corpus. We also report suggestive preliminary findings on the effects of gender and other features; e.g., surprisingly, for articles, on average, characters adapt more to females than to males.", "title": "" }, { "docid": "e8f431676ed0a85cb09a6462303a3ec7", "text": "This paper describes Champollion, a lexicon-based sentence aligner designed for robust alignment of potentially noisy parallel text. Champollion increases the robustness of the alignment by assigning greater weights to less frequent translated words. Experiments on a manually aligned Chinese–English parallel corpus show that Champollion achieves high precision and recall on noisy data. Champollion can be easily ported to new language pairs. It’s freely available to the public.", "title": "" } ]
[ { "docid": "dcacbed90f45b76e9d40c427e16e89d6", "text": "High torque density and low torque ripple are crucial for traction applications, which allow electrified powertrains to perform properly during start-up, acceleration, and cruising. High-quality anisotropic magnetic materials such as cold-rolled grain-oriented electrical steels can be used for achieving higher efficiency, torque density, and compactness in synchronous reluctance motors equipped with transverse laminated rotors. However, the rotor cylindrical geometry makes utilization of these materials with pole numbers higher than two more difficult. From a reduced torque ripple viewpoint, particular attention to the rotor slot pitch angle design can lead to improvements. This paper presents an innovative rotor lamination design and assembly using cold-rolled grain-oriented electrical steel to achieve higher torque density along with an algorithm for rotor slot pitch angle design for reduced torque ripple. The design methods and prototyping process are discussed, finite-element analyses and experimental examinations are carried out, and the results are compared to verify and validate the proposed methods.", "title": "" }, { "docid": "991c5610152acf37b9a5e90b4f89bab8", "text": "The BioTac® is a biomimetic tactile sensor for grip control and object characterization. It has three sensing modalities: thermal flux, microvibration and force. In this paper, we discuss feature extraction and interpretation of the force modality data. The data produced by this force sensing modality during sensor-object interaction are monotonic but non-linear. Algorithms and machine learning techniques were developed and validated for extracting the radius of curvature (ROC), point of application of force (PAF) and force vector (FV). These features have varying degrees of usefulness in extracting object properties using only cutaneous information; most robots can also provide the equivalent of proprioceptive sensing. 
For example, PAF and ROC are useful for extracting contact points for grasping and object shape as the finger depresses and moves along an object; the magnitude of the FV is useful in evaluating compliance from reaction forces when a finger is pushed into an object at a given velocity, while its direction is important for maintaining a stable grip.", "title": "" }, { "docid": "054b3f9068c92545e9c2c39e0728ad17", "text": "Data aggregation is an important topic and a suitable technique for reducing the energy consumption of sensor nodes in wireless sensor networks (WSNs), affording secure and efficient big data aggregation. Wireless sensor networks have been broadly applied, for example in target tracking and remote environment monitoring. However, data can easily be compromised by a variety of attacks, such as data interception and data tampering. For data integrity protection, an identity-based aggregate signature scheme with a designated verifier is proposed for wireless sensor networks. The aggregate signature scheme preserves data integrity and can reduce bandwidth and storage cost. Furthermore, the security of the scheme is proven based on the computational Diffie-Hellman assumption in the random oracle model.", "title": "" }, { "docid": "1389323613225897330d250e9349867b", "text": "Description: The field of data mining lies at the confluence of predictive analytics, statistical analysis, and business intelligence. Due to the ever-increasing complexity and size of data sets and the wide range of applications in computer science, business, and health care, the process of discovering knowledge in data is more relevant than ever before. This book provides the tools needed to thrive in today's big data world. The author demonstrates how to leverage a company's existing databases to increase profits and market share, and carefully explains the most current data science methods and techniques. The reader will learn data mining by doing data mining.
By adding chapters on data modelling preparation, imputation of missing data, and multivariate statistical analysis, Discovering Knowledge in Data, Second Edition remains the eminent reference on data mining.", "title": "" }, { "docid": "b842d759b124e1da0240f977d95a8b9a", "text": "In this paper we argue for a broader view of ontology patterns and therefore present different use-cases where drawbacks of the current declarative pattern languages can be seen. We also discuss use-cases where a declarative pattern approach can replace procedurally coded ontology patterns. With previous work on an ontology pattern language in mind, we argue for a general pattern language.", "title": "" }, { "docid": "556dbae297d06aaaeb0fd78016bd573f", "text": "This paper presents a learning and scoring framework based on neural networks for speaker verification. The framework employs an autoencoder as its primary structure, while three factors are jointly considered in the objective function for speaker discrimination. The first, relating to the sample reconstruction error, makes the structure essentially a generative model, which helps it learn the most salient and useful properties of the data. Functioning in the middlemost hidden layer, the other two attempt to ensure that utterances spoken by the same speaker are mapped into similar identity codes in the speaker-discriminative subspace, where the dispersion of all identity codes is maximized to some extent so as to avoid the effect of over-concentration. Finally, the decision score of each utterance pair is simply computed as the cosine similarity of their identity codes.
For utterances represented by i-vectors, the results of experiments conducted on the male portion of the core task in the NIST 2010 Speaker Recognition Evaluation (SRE) clearly demonstrate the merits of our approach over the conventional PLDA method.", "title": "" }, { "docid": "734ca5ac095cc8339056fede2a642909", "text": "The value of depth-first search or \"backtracking\" as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and an algorithm for finding the biconnected components of an undirected graph are presented. The space and time requirements of both algorithms are bounded by k1V + k2E + k3 for some constants k1, k2, and k3, where V is the number of vertices and E is the number of edges of the graph being examined.", "title": "" }, { "docid": "352bcf1c407568871880ad059053e1ec", "text": "In this paper we present a novel system for sketching the motion of a character. The process begins by sketching a character to be animated. An animated motion is then created for the character by drawing a continuous sequence of lines, arcs, and loops. These are parsed and mapped to a parameterized set of output motions that further reflect the location and timing of the input sketch. The current system supports a repertoire of 18 different types of motions in 2D and a subset of these in 3D. The system is unique in its use of a cursive motion specification, its ability to allow for fast experimentation, and its ease of use for non-experts.", "title": "" }, { "docid": "4019beb9fa6ec59b4b19c790fe8ff832", "text": "R. Cropanzano, D. E. Rupp, and Z. S. Byrne (2003) found that emotional exhaustion (i.e., 1 dimension of burnout) negatively affects organizational citizenship behavior (OCB).
The authors extended this research by investigating relationships among 3 dimensions of burnout (emotional exhaustion, depersonalization, and diminished personal accomplishment) and OCB. They also affirmed the mediating effect of job involvement on these relationships. Data were collected from 296 paired samples of service employees and their supervisors from 12 hotels and restaurants in Taiwan. Findings demonstrated that emotional exhaustion and diminished personal accomplishment were related negatively to OCB, whereas depersonalization had no independent effect on OCB. Job involvement mediated the relationships among emotional exhaustion, diminished personal accomplishment, and OCB.", "title": "" }, { "docid": "ad401a35f367fabf31b35586bc1d10c4", "text": "This paper describes a small-size buck-type dc–dc converter for cellular phones. Output power MOSFETs and control circuitry are monolithically integrated. The newly developed pulse frequency modulation control integrated circuit, mounted on a planar inductor within the converter package, has a low quiescent current below 10 μA and a small chip size of 1.4 mm × 1.1 mm in a 0.35-μm CMOS process. The converter achieves a maximum efficiency of 90% and a power density above 100 W/cm³.", "title": "" }, { "docid": "c3112126fa386710fb478dcfe978630e", "text": "In recent years, distributed intelligent microelectromechanical systems (DiMEMSs) have appeared as a new form of distributed embedded systems. DiMEMSs contain thousands or millions of removable autonomous devices, which will collaborate with each other to achieve the final target of the whole system. Programming such systems is becoming an extremely difficult problem.
The difficulty is due not only to their inherent nature of distributed collaboration, mobility, large scale, and limited resources of their devices (e.g., in terms of energy, memory, communication, and computation) but also to the requirements of real-time control and tolerance for uncertainties such as inaccurate actuation and unreliable communications. As a result, existing programming languages for traditional distributed and embedded systems are not suitable for DiMEMSs. In this article, we first introduce the origin and characteristics of DiMEMSs and then survey typical implementations of DiMEMSs and related research hotspots. Finally, we propose a real-time programming framework that can be used to design new real-time programming languages for DiMEMSs. The framework is composed of three layers: a real-time programming model layer, a compilation layer, and a runtime system layer. The design challenges and requirements of these layers are investigated. The framework is then discussed in further detail and suggestions for future research are given.", "title": "" }, { "docid": "2f4a4c223c13c4a779ddb546b3e3518c", "text": "Machine learning systems trained on user-provided data are susceptible to data poisoning attacks, whereby malicious users inject false training data with the aim of corrupting the learned model. While recent work has proposed a number of attacks and defenses, little is understood about the worst-case loss of a defense in the face of a determined attacker. We address this by constructing approximate upper bounds on the loss across a broad family of attacks, for defenders that first perform outlier removal followed by empirical risk minimization. Our approximation relies on two assumptions: (1) that the dataset is large enough for statistical concentration between train and test error to hold, and (2) that outliers within the clean (nonpoisoned) data do not have a strong effect on the model. 
Our bound comes paired with a candidate attack that often nearly matches the upper bound, giving us a powerful tool for quickly assessing defenses on a given dataset. Empirically, we find that even under a simple defense, the MNIST-1-7 and Dogfish datasets are resilient to attack, while in contrast the IMDB sentiment dataset can be driven from 12% to 23% test error by adding only 3% poisoned data.", "title": "" }, { "docid": "74373dd009fc6285b8f43516d8e8bf2c", "text": "Computational speech reconstruction algorithms have the ultimate aim of returning natural sounding speech to aphonic and dysphonic patients as well as those who can only whisper. In particular, individuals who have lost glottis function due to disease or surgery, retain the power of vocal tract modulation to some degree but they are unable to speak anything more than hoarse whispers without prosthetic aid. While whispering can be seen as a natural and secondary aspect of speech communications for most people, it becomes the primary mechanism of communications for those who have impaired voice production mechanisms, such as laryngectomees. In this paper, by considering the current limitations of speech reconstruction methods, a novel algorithm for converting whispers to normal speech is proposed and the efficiency of the algorithm is explored. The algorithm relies upon cascading mapping models and makes use of artificially generated whispers (called whisperised speech) to regenerate natural phonated speech from whispers. Using a training-based approach, the mapping models exploit whisperised speech to overcome frame to frame time alignment problems that are inherent in the speech reconstruction process. This algorithm effectively regenerates missing information in the conventional frameworks of phonated speech reconstruction, and is able to outperform the current state-of-the-art regeneration methods using both subjective and objective criteria.", "title": "" }, { "docid": "bab429bf74fe4ce3f387a716964a867f", "text": "We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input. Virtual adversarial loss is defined as the robustness of the conditional label distribution around each input data point against local perturbation. Unlike adversarial training, our method defines the adversarial direction without label information and is hence applicable to semi-supervised learning. Because the directions in which we smooth the model are only \"virtually\" adversarial, we call our method virtual adversarial training (VAT). The computational cost of VAT is relatively low. For neural networks, the approximated gradient of virtual adversarial loss can be computed with no more than two pairs of forward- and back-propagations. In our experiments, we applied VAT to supervised and semi-supervised learning tasks on multiple benchmark datasets. With a simple enhancement of the algorithm based on the entropy minimization principle, our VAT achieves state-of-the-art performance for semi-supervised learning tasks on SVHN and CIFAR-10.", "title": "" }, { "docid": "715fda02bad1633be9097cc0a0e68c8d", "text": "Data uncertainty is common in real-world applications due to various causes, including imprecise measurement, network latency, outdated sources and sampling errors. These kinds of uncertainty have to be handled cautiously, or else the mining results could be unreliable or even wrong. In this paper, we propose a new rule-based classification and prediction algorithm called uRule for classifying uncertain data. This algorithm introduces new measures for generating, pruning and optimizing rules.
These new measures are computed considering the uncertain data interval and probability distribution function. Based on the new measures, the optimal splitting attribute and splitting value can be identified and used for classification and prediction. The proposed uRule algorithm can process uncertainty in both numerical and categorical data. Our experimental results show that uRule has excellent performance even when data is highly uncertain.", "title": "" }, { "docid": "7f82ff12310f74b17ba01cac60762a8c", "text": "For worst case parameter mismatch, modest levels of unbalance are predicted through the use of minimum gate decoupling, dynamic load lines with high Q values, common source inductance or high yield screening. Each technique is evaluated in terms of current unbalance, transition energy, peak turn-off voltage and parasitic oscillations, as appropriate, for various pulse duty cycles and frequency ranges.", "title": "" }, { "docid": "edcf1cb4d09e0da19c917eab9eab3b23", "text": "The paper describes a computerized process of myocardial perfusion diagnosis from cardiac single photon emission computed tomography (SPECT) images using a data mining and knowledge discovery approach. We use a six-step knowledge discovery process. A database consisting of 267 cleaned patient SPECT images (about 3000 2D images), accompanied by clinical information and physician interpretation, was created first. Then, a new user-friendly algorithm for computerizing the diagnostic process was designed and implemented. SPECT images were processed to extract a set of features, and then explicit rules were generated, using inductive machine learning and heuristic approaches to mimic a cardiologist's diagnosis. The system is able to provide a set of computer diagnoses for cardiac SPECT studies, and can be used as a diagnostic tool by a cardiologist.
The achieved results are encouraging because of the high correctness of the diagnoses.", "title": "" }, { "docid": "6a2562987d10cdc499aca15da5526ebf", "text": "Underwater images usually suffer from non-uniform lighting, low contrast, blur and diminished colors. In this paper, we propose an image-based preprocessing technique to enhance the quality of underwater images. The proposed technique comprises a combination of four filters: homomorphic filtering, wavelet denoising, bilateral filtering and contrast equalization. These filters are applied sequentially to degraded underwater images. The literature survey reveals that image-based preprocessing algorithms use standard filter techniques in various combinations. For smoothing the image, existing image-based preprocessing algorithms use the anisotropic filter. The main drawback of the anisotropic filter is that it is iterative in nature and its computation time is high compared to the bilateral filter. In the proposed technique, in addition to the other three filters, we employ a bilateral filter for smoothing the image. The experimentation is carried out in two stages. In the first stage, we conducted various experiments on captured images and estimated optimal parameters for the bilateral filter. Similarly, an optimal filter bank and optimal wavelet shrinkage function are estimated for wavelet denoising. In the second stage, we conducted the experiments using the estimated optimal parameters, optimal filter bank and optimal wavelet shrinkage function to evaluate the proposed technique. We evaluated the technique using quantitative criteria such as a gradient magnitude histogram and Peak Signal to Noise Ratio (PSNR). Further, the results are qualitatively evaluated based on edge detection results.
The proposed technique enhances the quality of underwater images and can be employed prior to applying computer vision techniques.", "title": "" }, { "docid": "b09c438933e0c9300e19f035eb0e9305", "text": "A Reverse Conducting IGBT (RC-IGBT) is a promising device for reducing the size and cost of the power module thanks to the integration of the IGBT and FWD into a single chip. However, it is difficult to achieve well-balanced performance between the IGBT and FWD. Indeed, the total inverter loss of the conventional RC-IGBT was not as small as that of the individual IGBT and FWD pair. To minimize the loss, the most important key is the improvement of the reverse recovery characteristics of the FWD. We carefully extracted five effective parameters to improve the FWD characteristics, and investigated the impact of these parameters using simulation and experiments. Finally, optimizing these parameters, we succeeded in fabricating a second-generation 600V class RC-IGBT with a smaller FWD loss than the first-generation RC-IGBT.", "title": "" }, { "docid": "3b05b099ee7e043c43270e92ba5290bd", "text": "In connection with a study of various aspects of the modifiability of behavior in the dancing mouse a need for definite knowledge concerning the relation of strength of stimulus to rate of learning arose. It was for the purpose of obtaining this knowledge that we planned and executed the experiments which are now to be described. Our work was greatly facilitated by the advice and assistance of Doctor E. G. MARTIN, Professor G. W. PIERCE, and Professor A. E. KENNELLY, and we desire to express here both our indebtedness and our thanks for their generous services.", "title": "" } ]
scidocsrr
d3bca3025b5f26f3428a448435e5eab1
Upsampling range data in dynamic environments
[ { "docid": "67e16f36bb6d83c5d6eae959a7223b77", "text": "Neighborhood filters are nonlocal image and movie filters which reduce the noise by averaging similar pixels. The first object of the paper is to present a unified theory of these filters and reliable criteria to compare them to other filter classes. A CCD noise model will be presented justifying the involvement of neighborhood filters. A classification of neighborhood filters will be proposed, including classical image and movie denoising methods and discussing further a recently introduced neighborhood filter, NL-means. In order to compare denoising methods three principles will be discussed. The first principle, “method noise”, specifies that only noise must be removed from an image. A second principle will be introduced, “noise to noise”, according to which a denoising method must transform a white noise into a white noise. Contrarily to “method noise”, this principle, which characterizes artifact-free methods, eliminates any subjectivity and can be checked by mathematical arguments and Fourier analysis. “Noise to noise” will be proven to rule out most denoising methods, with the exception of neighborhood filters. This is why a third and new comparison principle, the “statistical optimality”, is needed and will be introduced to compare the performance of all neighborhood filters. The three principles will be applied to compare ten different image and movie denoising methods. It will be first shown that only wavelet thresholding methods and NL-means give an acceptable method noise. Second, that neighborhood filters are the only ones to satisfy the “noise to noise” principle. Third, that among them NL-means is closest to statistical optimality. A particular attention will be paid to the application of the statistical optimality criterion for movie denoising methods. It will be pointed out that current movie denoising methods are motion compensated neighborhood filters. 
This amounts to saying that they are neighborhood filters and that the ideal neighborhood of a pixel is its trajectory. Unfortunately, the aperture problem makes it impossible to estimate the true trajectories. It will be demonstrated that computing trajectories and restricting the neighborhood to them is harmful for denoising purposes, and that space-time NL-means preserves more movie details.", "title": "" } ]
[ { "docid": "ce0b0543238a81c3f02c43e63a285605", "text": "Hatebusters is a web application for actively reporting YouTube hate speech, aiming to establish an online community of volunteer citizens. Hatebusters searches YouTube for videos with potentially hateful comments, scores their comments with a classifier trained on human-annotated data and presents users with those comments with the highest probability of being hate speech. It also employs gamification elements, such as achievements and leaderboards, to drive user engagement.", "title": "" }, { "docid": "8dd540b33035904f63c67b57d4c97aa3", "text": "Wireless local area networks (WLANs) based on the IEEE 802.11 standards are one of today’s fastest growing technologies in businesses, schools, and homes, for good reasons. As WLAN deployments increase, so does the challenge to provide these networks with security. Security risks can originate either from technical lapses in the security mechanisms or from defects in software implementations. Standards bodies and researchers have mainly used UML state machines to address the implementation issues. In this paper we propose the use of the GSE methodology to analyse the incompleteness and uncertainties in specifications. The IEEE 802.11i security protocol is used as an example to compare the effectiveness of the GSE and UML models. The GSE methodology was found to be more effective in identifying ambiguities in specifications and inconsistencies between the specification and the state machines. Resolving all issues, we represent the robust security network (RSN) proposed in the IEEE 802.11i standard using different GSE models.", "title": "" }, { "docid": "e3b1e52066d20e7c92e936cdb72cc32b", "text": "This paper presents a new approach to power system automation, based on distributed intelligence rather than traditional centralized control.
The paper investigates the interplay between two international standards, IEC 61850 and IEC 61499, and proposes a way of combining the application functions of IEC 61850-compliant devices with IEC 61499-compliant “glue logic,” using the communication services of IEC 61850-7-2. The resulting ability to customize control and automation logic will greatly enhance the flexibility and adaptability of automation systems, speeding progress toward the realization of the smart grid concept.", "title": "" }, { "docid": "756b25456494b3ece9b240ba3957f91c", "text": "In this paper we introduce the task of fact checking, i.e. the assessment of the truthfulness of a claim. The task is commonly performed manually by journalists verifying the claims made by public figures. Furthermore, ordinary citizens need to assess the truthfulness of the increasing volume of statements they consume. Thus, developing fact checking systems is likely to be of use to various members of society. We first define the task and detail the construction of a publicly available dataset using statements fact-checked by journalists available online. Then, we discuss baseline approaches for the task and the challenges that need to be addressed. Finally, we discuss how fact checking relates to mainstream natural language processing tasks and can stimulate further research.", "title": "" }, { "docid": "2639f5d735abed38ed4f7ebf11072087", "text": "The rising popularity of intelligent mobile devices and the daunting computational cost of deep learning-based models call for efficient and accurate on-device inference schemes. We propose a quantization scheme that allows inference to be carried out using integer-only arithmetic, which can be implemented more efficiently than floating point inference on commonly available integer-only hardware. We also co-design a training procedure to preserve end-to-end model accuracy post quantization.
As a result, the proposed quantization scheme improves the tradeoff between accuracy and on-device latency. The improvements are significant even on MobileNets, a model family known for run-time efficiency, and are demonstrated in ImageNet classification and COCO detection on popular CPUs.", "title": "" }, { "docid": "9f005054e640c2db97995c7540fe2034", "text": "Attack detection is usually approached as a classification problem. However, standard classification tools often perform poorly, because an adaptive attacker can shape his attacks in response to the algorithm. This has led to the recent interest in developing methods for adversarial classification, but to the best of our knowledge, there have been very few prior studies that take into account the attacker’s tradeoff between adapting to the classifier being used against him and his desire to maintain the efficacy of his attack. Including this effect is key to deriving solutions that perform well in practice. In this investigation, we model the interaction as a game between a defender who chooses a classifier to distinguish between attacks and normal behavior based on a set of observed features and an attacker who chooses his attack features (class 1 data). Normal behavior (class 0 data) is random and exogenous. The attacker’s objective balances the benefit from attacks and the cost of being detected while the defender’s objective balances the benefit of a correct attack detection and the cost of false alarm. We provide an efficient algorithm to compute all Nash equilibria and a compact characterization of the possible forms of a Nash equilibrium that reveals intuitive messages on how to perform classification in the presence of an attacker. 
We also explore qualitatively and quantitatively the impact of the non-attacker and underlying parameters on the equilibrium strategies.", "title": "" }, { "docid": "f0d55892fb927c5c5324cfb7b8380bda", "text": "The paper presents application of data mining methods for recognizing the most significant genes and gene sequences (treated as features) stored in a dataset of gene expression microarray. The investigations are performed for autism data. Few chosen methods of feature selection have been applied and their results integrated in the final outcome. In this way we find the contents of small set of the most important genes associated with autism. They have been applied in the classification procedure aimed on recognition of autism from reference group members. The results of numerical experiments concerning selection of the most important genes and classification of the cases on the basis of the selected genes will be discussed. The main contribution of the paper is in developing the fusion system of the results of many selection approaches into the final set, most closely associated with autism. We have also proposed special procedure of estimating the number of highest rank genes used in classification procedure. 2014 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "b3bb84322c28a9d0493d9b8a626666e4", "text": "Underwater images often suffer from color distortion and low contrast, because light is scattered and absorbed when traveling through water. Such images with different color tones can be shot in various lighting conditions, making restoration and enhancement difficult. We propose a depth estimation method for underwater scenes based on image blurriness and light absorption, which can be used in the image formation model (IFM) to restore and enhance underwater images. Previous IFM-based image restoration methods estimate scene depth based on the dark channel prior or the maximum intensity prior. 
These are frequently invalidated by the lighting conditions in underwater images, leading to poor restoration results. The proposed method estimates underwater scene depth more accurately. Experimental results on restoring real and synthesized underwater images demonstrate that the proposed method outperforms other IFM-based underwater image restoration methods.", "title": "" }, { "docid": "37d4b01b77e548aa6226774be627471c", "text": "A fully integrated 8-channel phased-array receiver at 24 GHz is demonstrated. Each channel achieves a gain of 43 dB, noise figure of 8 dB, and an IIP3 of -11 dBm, consuming 29 mA of current from a 2.5 V supply. The 8-channel array has a beam-forming resolution of 22.5°, a peak-to-null ratio of 20 dB (4 bits), a total array gain of 61 dB, and improves the signal-to-noise ratio by 9 dB.", "title": "" }, { "docid": "0a2e59ab99b9666d8cf3fb31be9fa40c", "text": "Behavioral targeting (BT) is a widely used technique for online advertising. It leverages information collected on an individual's web-browsing behavior, such as page views, search queries and ad clicks, to select the ads most relevant to the user to display. With the proliferation of social networks, it is possible to relate the behavior of individuals and their social connections. Although the similarity among connected individuals is well established (i.e., homophily), it is still not clear whether and how we can leverage the activities of one's friends for behavioral targeting; whether forecasts derived from such social information are more accurate than standard behavioral targeting models. In this paper, we strive to answer these questions by evaluating the predictive power of social data across 60 consumer domains on a large online network of over 180 million users in a period of two and a half months. To the best of our knowledge, this is the most comprehensive study of social data in the context of behavioral targeting on such an unprecedented scale. 
Our analysis offers interesting insights into the value of social data for developing the next generation of targeting services.", "title": "" }, { "docid": "461ee7b6a61a6d375a3ea268081f80f5", "text": "In this paper, we review the background and state-of-the-art of big data. We first introduce the general background of big data and review related technologies, such as cloud computing, Internet of Things, data centers, and Hadoop. We then focus on the four phases of the value chain of big data, i.e., data generation, data acquisition, data storage, and data analysis. For each phase, we introduce the general background, discuss the technical challenges, and review the latest advances. We finally examine several representative applications of big data, including enterprise management, Internet of Things, online social networks, medical applications, collective intelligence, and smart grid. These discussions aim to provide a comprehensive overview and big picture to readers of this exciting area. This survey is concluded with a discussion of open problems and future directions.", "title": "" }, { "docid": "e63a8b6595e1526a537b0881bc270542", "text": "The CTD, which stands for “Conductivity-Temperature-Depth,” is one of the most widely used instruments for oceanographic measurements. MEMS-based CTD sensor components consist of a conductivity sensor (C), a temperature sensor (T) and a piezoresistive pressure sensor (D). CTDs are found in every marine-related institute and navy throughout the world, as they are used to produce the salinity profile for the area of the ocean under investigation and are also used to determine different oceanic parameters. This research paper provides the design, fabrication and initial test results of a prototype CTD sensor.", "title": "" }, { "docid": "9cc23cd9bfb3e422e2b4ace1fe816855", "text": "Evaluating surgeon skill has predominantly been a subjective task. 
Development of objective methods for surgical skill assessment is of increasing interest. Recently, with technological advances such as robotic-assisted minimally invasive surgery (RMIS), new opportunities for objective and automated assessment frameworks have arisen. In this paper, we applied machine learning methods to automatically evaluate performance of the surgeon in RMIS. Six important movement features were used in the evaluation including completion time, path length, depth perception, speed, smoothness and curvature. Different classification methods were applied to discriminate expert and novice surgeons. We test our method on real surgical data for a suturing task and compare the classification result with the ground truth data (obtained by manual labeling). The experimental results show that the proposed framework can classify surgical skill level with a relatively high accuracy of 85.7%. This study demonstrates the ability of machine learning methods to automatically classify expert and novice surgeons using movement features for different RMIS tasks. Due to the simplicity and generalizability of the introduced classification method, it is easy to implement in existing trainers.", "title": "" }, { "docid": "ef706ea7a6dcd5b71602ea4c28eb9bd3", "text": "\"Stealth Dicing (SD)\" was developed to solve such inherent problems of the dicing process as debris contaminants and unnecessary thermal damage on the work wafer. In SD, laser beam power of a transmissible wavelength is absorbed only around the focal point in the wafer by utilizing the temperature dependence of the absorption coefficient of the wafer. This absorbed power forms a modified layer in the wafer, which functions as the origin of separation in the subsequent separation process. Since only the limited interior region of a wafer is processed by laser beam irradiation, damages and debris contaminants can be avoided in SD. Besides, device characteristics will not be affected. 
The completely dry process of SD is another big advantage over other dicing methods.", "title": "" }, { "docid": "ba3f3ca8a34e1ea6e54fe9dde673b51f", "text": "This paper proposes a high-efficiency dual-band on-chip rectifying antenna (rectenna) at 35 and 94 GHz for wireless power transmission. The rectenna is designed in slotline (SL) and finite-width ground coplanar waveguide (FGCPW) transmission lines in a CMOS 0.13-μm process. The rectenna comprises a high gain linear tapered slot antenna (LTSA), an FGCPW to SL transition, a bandpass filter, and a full-wave rectifier. The LTSA achieves a VSWR=2 fractional bandwidth of 82% and 41%, and a gain of 7.4 and 6.5 dBi at the frequencies of 35 and 94 GHz. The measured power conversion efficiencies are 53% and 37% in free space at 35 and 94 GHz, while the incident radiation power density is 30 mW/cm2. The fabricated rectenna occupies a compact size of 2.9 mm2.", "title": "" }, { "docid": "4783e35e54d0c7f555015427cbdc011d", "text": "The language of the deaf and dumb which uses body parts to convey the message is known as sign language. Here, we are doing a study to convert speech into sign language used for conversation. In this area we have many developed methods to recognize alphabets and numerals of ISL (Indian sign language). There are various approaches for recognition of ISL and we have done a comparative study between them [1].", "title": "" }, { "docid": "3d319572361f55dd4b91881dac2c9ace", "text": "In this paper, a modular interleaved boost converter is first proposed by integrating a forward energy-delivering circuit with a voltage-doubler to achieve high step-up ratio and high efficiency for dc-microgrid applications. Then, steady-state analyses are made to show the merits of the proposed converter module. For closed-loop control design, the corresponding small-signal model is also derived. 
It is seen that, for higher power applications, more modules can be paralleled to increase the power rating and improve the dynamic performance. As an illustration, closed-loop control of a 450-W converter consisting of two paralleled modules with 24-V input and 200-V output is implemented for demonstration. Experimental results show that the modular high step-up boost converter can achieve an efficiency of approximately 95.8%.", "title": "" }, { "docid": "5a62c276e7cce7c7a10109f3c3b1e401", "text": "A miniature coplanar antenna on a perovskite substrate is analyzed and designed using a short-circuit technique. The overall dimensions are minimized to 0.09 λ × 0.09 λ. The antenna geometry, the design concept, as well as the simulated and the measured results are discussed in this paper.", "title": "" }, { "docid": "d9aadb86785057ae5445dc894b1ef7a7", "text": "This paper presents Circe, an environment for the analysis of natural language requirements. Circe is first presented in terms of its architecture, based on a transformational paradigm. Details are then given for the various transformation steps, including (i) a novel technique for parsing natural language requirements, and (ii) an expert system based on modular agents, embodying intensional knowledge about software systems in general. The result of all the transformations is a set of models for the requirements document, for the system described by the requirements, and for the requirements writing process. These models can be inspected, measured, and validated against a given set of criteria. Some of the features of the environment are shown by means of an example. Various stages of requirements analysis are covered, from initial sketches to pseudo-code and UML models.", "title": "" }, { "docid": "b58c1e18a792974f57e9f676c1495826", "text": "The influence of bilingualism on cognitive test performance in older adults has received limited attention in the neuropsychology literature. 
The aim of this study was to examine the impact of bilingualism on verbal fluency and repetition tests in older Hispanic bilinguals. Eighty-two right-handed participants (28 men and 54 women) with a mean age of 61.76 years (SD = 9.30; range = 50-84) and a mean educational level of 14.8 years (SD = 3.6; range 2-23) were selected. Forty-five of the participants were English monolinguals, 18 were Spanish monolinguals, and 19 were Spanish-English bilinguals. Verbal fluency was tested by eliciting a verbal description of a picture and by asking participants to generate words within phonemic and semantic categories. Repetition was tested using a sentence-repetition test. The bilinguals' test scores were compared to English monolinguals' and Spanish monolinguals' test scores. Results demonstrated equal performance of bilingual and monolingual participants in all tests except that of semantic verbal fluency. Bilinguals who learned English before age 12 performed significantly better on the English repetition test and produced a higher number of words in the description of a picture than the bilinguals who learned English after age 12. Variables such as task demands, language interference, linguistic mode, and level of bilingualism are addressed in the Discussion section.", "title": "" } ]
scidocsrr
63d340f89dd18d1873c3bdaf4de2f732
DSGAN: Generative Adversarial Training for Distant Supervision Relation Extraction
[ { "docid": "3ca057959a24245764953a6aa1b2ed84", "text": "Distant supervision for relation extraction is an efficient method to scale relation extraction to very large corpora which contain thousands of relations. However, the existing approaches have flaws in selecting valid instances and lack background knowledge about the entities. In this paper, we propose a sentence-level attention model to select the valid instances, which makes full use of the supervision information from knowledge bases. In addition, we extract entity descriptions from Freebase and Wikipedia pages to supplement background knowledge for our task. The background knowledge not only provides more information for predicting relations, but also brings better entity representations for the attention module. We conduct three experiments on a widely used dataset and the experimental results show that our approach outperforms all the baseline systems significantly.", "title": "" }, { "docid": "3388d2e88fdc2db9967da4ddb452d9f1", "text": "Entity pairs provide essential information for identifying relation type. Aiming at this characteristic, Position Feature is widely used in current relation classification systems to highlight the words close to them. However, semantic knowledge involved in entity pairs has not been fully utilized. To overcome this issue, we propose an Entity-pair-based Attention Mechanism, which is specially designed for relation classification. Recently, attention mechanisms have significantly promoted the development of deep learning in NLP. Inspired by this, for a specific instance (entity pair, sentence), the corresponding entity pair information is incorporated as prior knowledge to adaptively compute attention weights for generating sentence representation. 
Experimental results on the SemEval-2010 Task 8 dataset show that our method outperforms most of the state-of-the-art models, without external linguistic features.", "title": "" }, { "docid": "c1943f443b0e7be72091250b34262a8f", "text": "We survey recent approaches to noise reduction in distant supervision learning for relation extraction. We group them according to the principles they are based on: at-least-one constraints, topic-based models, or pattern correlations. Besides describing them, we illustrate the fundamental differences and attempt to give an outlook on potentially fruitful further research. In addition, we identify related work in sentiment analysis which could profit from approaches to noise reduction.", "title": "" }, { "docid": "9c44aba7a9802f1fe95fbeb712c23759", "text": "In relation extraction, distant supervision seeks to extract relations between entities from text by using a knowledge base, such as Freebase, as a source of supervision. When a sentence and a knowledge base refer to the same entity pair, this approach heuristically labels the sentence with the corresponding relation in the knowledge base. However, this heuristic can fail with the result that some sentences are labeled wrongly. This noisy labeled data causes poor extraction performance. In this paper, we propose a method to reduce the number of wrong labels. We present a novel generative model that directly models the heuristic labeling process of distant supervision. The model predicts whether assigned labels are correct or wrong via its hidden variables. Our experimental results show that this model detected wrong labels with higher performance than baseline methods. In the experiment, we also found that our wrong label reduction boosted the performance of relation extraction.", "title": "" } ]
[ { "docid": "dea7d83ed497fc95f4948a5aa4787b18", "text": "The distinguishing feature of the Fog Computing (FC) paradigm is that FC spreads communication and computing resources over the wireless access network, so as to provide resource augmentation to resource- and energy-limited wireless (possibly mobile) devices. Since FC would lead to substantial reductions in energy consumption and access latency, it will play a key role in the realization of the Fog of Everything (FoE) paradigm. The core challenge of the resulting FoE paradigm is to materialize the seamless convergence of three distinct disciplines, namely, broadband mobile communication, cloud computing, and Internet of Everything (IoE). In this paper, we present a new IoE architecture for FC in order to implement the resulting FoE technological platform. Then, we elaborate the related Quality of Service (QoS) requirements to be satisfied by the underlying FoE technological platform. Furthermore, in order to corroborate the envisioned architecture, we present: (i) the proposed energy-aware algorithm adopted in the Fog data center; and (ii) the obtained numerical performance for a real-world case study, which shows that our approach reduces energy consumption impressively in the Fog data center compared with the existing methods and could be of practical interest in the incoming Fog of Everything (FoE) realm.", "title": "" }, { "docid": "a73df97081ec01929e06969c52775007", "text": "Massive graphs arise naturally in a lot of applications, especially in communication networks like the internet. The size of these graphs makes it very hard or even impossible to store the set of edges in the main memory. Thus, random access to the edges can't be realized, which makes most offline algorithms unusable. This essay investigates efficient algorithms that read the edges only in a fixed sequential order. 
Since even basic graph problems often need at least linear space in the number of vertices to be solved, the storage space bounds are relaxed compared to the classic streaming model, such that the bound is O(n · polylog n). The essay describes algorithms for approximations of the unweighted and weighted matching problem and gives an o(log^(1−ε) n) lower bound for approximations of the diameter. Finally, some results for further graph problems are discussed.", "title": "" }, { "docid": "6ae739344034410a570b12a57db426e3", "text": "In recent times we tend to use a number of surveillance systems for monitoring the targeted area. This requires an enormous amount of storage space along with a lot of human power in order to implement and monitor the area under surveillance. This is supposed to be costly and not a reliable process. In this paper we propose an intelligent surveillance system that continuously monitors the targeted area and detects motion in each and every frame. If the system detects motion in the targeted area then a notification is automatically sent to the user by SMS and the video starts getting recorded till the motion is stopped. Using this method the required memory space for storing the video is reduced since it doesn't store the entire video but stores the video only when a motion is detected. This is achieved by using real-time video processing using OpenCV (computer vision / machine vision) technology and a Raspberry Pi system.", "title": "" }, { "docid": "2c5ab4dddbb6aeae4542b42f57e54d72", "text": "Online action detection is a challenging problem: a system needs to decide what action is happening at the current frame, based on previous frames only. Fortunately, in real life, human actions are not independent from one another: there are strong (long-term) dependencies between them. An online action detection method should be able to capture these dependencies, to enable a more accurate early detection. 
At first sight, an LSTM seems very suitable for this problem. It is able to model both short-term and long-term patterns. It takes its input one frame at a time, updates its internal state and gives as output the current class probabilities. In practice, however, the detection results obtained with LSTMs are still quite low. In this work, we start from the hypothesis that it may be too difficult for an LSTM to learn both the interpretation of the input and the temporal patterns at the same time. We propose a two-stream feedback network, where one stream processes the input and the other models the temporal relations. We show improved detection accuracy on an artificial toy dataset and on the Breakfast Dataset [21] and the TVSeries Dataset [7], real-life datasets with inherent temporal dependencies between the actions.", "title": "" }, { "docid": "51e307584d6446ba2154676d02d2cc84", "text": "This article provides a tutorial overview of cognitive architectures that can form a theoretical foundation for designing multimedia instruction. Cognitive architectures include a description of memory stores, memory codes, and cognitive operations. Architectures that are relevant to multimedia learning include Paivio’s dual coding theory, Baddeley’s working memory model, Engelkamp’s multimodal theory, Sweller’s cognitive load theory, Mayer’s multimedia learning theory, and Nathan’s ANIMATE theory. The discussion emphasizes the interplay between traditional research studies and instructional applications of this research for increasing recall, reducing interference, minimizing cognitive load, and enhancing understanding. 
Tentative conclusions are that (a) there is general agreement among the different architectures, which differ in focus; (b) learners’ integration of multiple codes is underspecified in the models; (c) animated instruction is not required when mental simulations are sufficient; (d) actions must be meaningful to be successful; and (e) multimodal instruction is superior to targeting modality-specific individual differences.", "title": "" }, { "docid": "de48b60276b27861d58aaaf501606d69", "text": "Many environmental variables that are important for the development of chironomid larvae (such as water temperature, oxygen availability, and food quantity) are related to water depth, and a statistically strong relationship between chironomid distribution and water depth is therefore expected. This study focuses on the distribution of fossil chironomids in seven shallow lakes and one deep lake from the Plymouth Aquifer (Massachusetts, USA) and aims to assess the influence of water depth on chironomid assemblages within a lake. Multiple samples were taken per lake in order to study the distribution of fossil chironomid head capsules within a lake. Within each lake, the chironomid assemblages are diverse and the changes that are seen in the assemblages are strongly related to changes in water depth. Several thresholds (i.e., where species turnover abruptly changes) are identified in the assemblages, and most lakes show abrupt changes at about 1–2 and 5–7 m water depth. In the deep lake, changes also occur at 9.6 and 15 m depth. The distribution of many individual taxa is significantly correlated to water depth, and we show that the identification of different taxa within the genus Tanytarsus is important because different morphotypes show different responses to water depth. 
We conclude that the chironomid fauna is sensitive to changes in lake level, indicating that fossil chironomid assemblages can be used as a tool for quantitative reconstruction of lake level changes.", "title": "" }, { "docid": "5495aeaa072a1f8f696298ebc7432045", "text": "Deep neural networks (DNNs) are widely used in data analytics, since they deliver state-of-the-art accuracies. Binarized neural networks (BNNs) are a recently proposed optimized variant of DNNs. BNNs constrain network weights and/or neuron values to either +1 or −1, which is representable in 1 bit. This leads to dramatic algorithm efficiency improvement, due to reduction in the memory and computational demands. This paper evaluates the opportunity to further improve the execution efficiency of BNNs through hardware acceleration. We first proposed a BNN hardware accelerator design. Then, we implemented the proposed accelerator on Aria 10 FPGA as well as 14-nm ASIC, and compared them against optimized software on Xeon server CPU, Nvidia Titan X server GPU, and Nvidia TX1 mobile GPU. Our evaluation shows that FPGA provides superior efficiency over CPU and GPU. Even though CPU and GPU offer high peak theoretical performance, they are not as efficiently utilized since BNNs rely on binarized bit-level operations that are better suited for custom hardware. Finally, even though ASIC is still more efficient, FPGA can provide orders of magnitude in efficiency improvements over software, without having to lock into a fixed ASIC solution.", "title": "" }, { "docid": "d7574e4d5fd3a395907db7a7d380652b", "text": "In this paper, we analyze and evaluate word embeddings for representation of longer texts in the multi-label document classification scenario. The embeddings are used in three convolutional neural network topologies. The experiments are realized on the Czech ČTK and English Reuters-21578 standard corpora. 
We compare the results of word2vec static and trainable embeddings with randomly initialized word vectors. We conclude that initialization does not play an important role for classification. However, learning of word vectors is crucial to obtain good results.", "title": "" }, { "docid": "fbd05f764470b94af30c7799e94ff0f0", "text": "Agent-based modeling of human social behavior is an increasingly important research area. A key factor in human social interaction is our beliefs about others, a theory of mind. Whether we believe a message depends not only on its content but also on our model of the communicator. How we act depends not only on the immediate effect but also on how we believe others will react. In this paper, we discuss PsychSim, an implemented multiagent-based simulation tool for modeling interactions and influence. While typical approaches to such modeling have used first-order logic, PsychSim agents have their own decision-theoretic model of the world, including beliefs about its environment and recursive models of other agents. Using these quantitative models of uncertainty and preferences, we have translated existing psychological theories into a decision-theoretic semantics that allow the agents to reason about degrees of believability in a novel way. We discuss PsychSim’s underlying architecture and describe its application to a school violence scenario for illustration.", "title": "" }, { "docid": "18824b0ce748e097c049440439116b77", "text": "Before we try to specify how to give a semantic analysis of discourse, we must define what semantic analysis is and what kinds of semantic analysis can be distinguished. Such a definition will be as complex as the number of semantic theories in the various disciplines involved in the study of language: linguistics and grammar, the philosophy of language, logic, cognitive psychology, and sociology, each with several competing semantic theories. 
These theories will be different according to their object of analysis, their aims, and their methods. Yet, they will also have some common properties that allow us to call them semantic theories. In this chapter I first enumerate more or less intuitively a number of these common properties, then select some of them for further theoretical analysis, and finally apply the theoretical notions in actual semantic analyses of some discourse fragments. In the most general sense, semantics is a component theory within a larger semiotic theory about meaningful, symbolic behavior. Hence we have not only a semantics of natural language utterances or acts, but also of nonverbal or paraverbal behavior, such as gestures, pictures and films, logical systems or computer languages, sign languages of the deaf, and perhaps social interaction in general. In this chapter we consider only the semantics of natural-language utterances, that is, discourses, and their component elements, such as words, phrases, clauses, sentences, paragraphs, and other identifiable discourse units. Other semiotic aspects of verbal and nonverbal communication are treated elsewhere in this Handbook. Probably the most general concept used to denote the specific object", "title": "" }, { "docid": "921c7a6c3902434b250548e573816978", "text": "Energy harvesting based on tethered kites makes use of the advantage that these airborne wind energy systems are able to exploit higher wind speeds at higher altitudes. The setup considered in this paper is based on the pumping cycle, which generates energy by winching out at high tether forces, driving an electrical generator while flying crosswind, and winching in at a stationary neutral position, thus leaving a net amount of generated energy. The economic operation of such airborne wind energy plants demands a reliable control system allowing for completely autonomous operation of cycles. 
This task involves the flight control of the kite as well as the operation of a winch for the tether. The focus of this paper is on the flight control, which implements accurate direction control towards target points, allowing for eight-down pattern flights. In addition, efficient winch control strategies are provided. The paper summarises a simple comprehensible model with equations of motion in order to motivate the approach of the control system design. After an extended overview of the control system, the flight controller parts are discussed in detail. Subsequently, the winch strategies based on an optimisation scheme are presented. In order to demonstrate the real-world functionality of the presented algorithms, flight data from a fully automated pumping-cycle operation of a small-scale prototype setup based on a 30 m2 kite and a 50 kW electrical motor/generator is given.", "title": "" }, { "docid": "51ece87cfa463cd76c6fd60e2515c9f4", "text": "In a 1998 speech before the California Science Center in Los Angeles, then US Vice President Al Gore called for a global undertaking to build a multi-faceted computing system for education and research, which he termed “Digital Earth.” The vision was that of a system providing access to what is known about the planet and its inhabitants’ activities – currently and for any time in history – via responses to queries and exploratory tools. Furthermore, it would accommodate modeling extensions for predicting future conditions. Organized efforts towards realizing that vision have diminished significantly since 2001, but progress on key requisites has been made. As the 10-year anniversary of that influential speech approaches, we re-examine it from the perspective of a systematic software design process and find the envisioned system to be in many respects inclusive of concepts of distributed geolibraries and digital atlases. 
A preliminary definition for a particular digital earth system as: “a comprehensive, distributed geographic information and knowledge organization system,” is offered and discussed. We suggest that resumption of earlier design and focused research efforts can and should be undertaken, and may prove a worthwhile “Grand Challenge” for the GIScience community.", "title": "" }, { "docid": "1b6e35187b561de95051f67c70025152", "text": "The technology acceptance model (TAM) proposes that ease of use and usefulness predict applications usage. The current research investigated TAM for work-related tasks with the World Wide Web as the application. One hundred and sixty-three subjects responded to an e-mail survey about a Web site they access often in their jobs. The results support TAM. They also demonstrate that (1) ease of understanding and ease of finding predict ease of use, and that (2) information quality predicts usefulness for revisited sites. In effect, the investigation applies TAM to help Web researchers, developers, and managers understand antecedents to users’ decisions to revisit sites relevant to their jobs. © 2000 Elsevier Science B.V. All rights reserved.", "title": "" }, { "docid": "d90b6c61369ff0458843241cd30437ba", "text": "The unprecedented challenges of creating Biosphere 2, the world's first laboratory for biospherics, the study of global ecology and long-term closed ecological system dynamics, led to breakthrough developments in many fields, and a deeper understanding of the opportunities and difficulties of material closure. This paper will review accomplishments and challenges, citing some of the key research findings and publications that have resulted from the experiments in Biosphere 2. 
Engineering accomplishments included development of a technique for variable volume to deal with pressure differences between the facility and outside environment, developing methods of atmospheric leak detection and sealing, while achieving new standards of closure, with an annual atmospheric leak rate of less than 10%, or less than 300 ppm per day. This degree of closure permitted detailed tracking of carbon dioxide, oxygen, and trace gases such as nitrous oxide and ethylene over the seasonal variability of two years. Full closure also necessitated developing new approaches and technologies for complete air, water, and wastewater recycle and reuse within the facility. The development of a soil-based highly productive agricultural system was a first in closed ecological systems, and much was learned about managing a wide variety of crops using non-chemical means of pest and disease control. Closed ecological systems have different temporal biogeochemical cycling and ranges of atmospheric components because of their smaller reservoirs of air, water and soil, and higher concentration of biomass, and Biosphere 2 provided detailed examination and modeling of these accelerated cycles over a period of closure which measured in years. Medical research inside Biosphere 2 included the effects on humans of lowered oxygen: the discovery that human productivity can be maintained with good health with lowered atmospheric oxygen levels could lead to major economies on the design of space stations and planetary/lunar settlements. The improved health resulting from the calorie-restricted but nutrient dense Biosphere 2 diet was the first such scientifically controlled experiment with humans. The success of Biosphere 2 in creating a diversity of terrestrial and marine environments, from rainforest to coral reef, allowed detailed studies with comprehensive measurements such that the dynamics of these complex biomic systems are now better understood. 
The coral reef ecosystem, the largest artificial reef ever built, catalyzed methods of study now being applied to planetary coral reef systems. Restoration ecology advanced through the creation and study of the dynamics of adaptation and self-organization of the biomes in Biosphere 2. The international interest that Biosphere 2 generated has given new impetus to the public recognition of the sciences of biospheres (biospherics), biomes and closed ecological life systems. The facility, although no longer a materially-closed ecological system, is being used as an educational facility by Columbia University as an introduction to the study of the biosphere and complex system ecology and for carbon dioxide impacts utilizing the complex ecosystems created in Biosphere 2. The many lessons learned from Biosphere 2 are being used by its key team of creators in their design and operation of a laboratory-sized closed ecological system, the Laboratory Biosphere, in operation as of March 2002, and for the design of a Mars on Earth(TM) prototype life support system for manned missions to Mars and Mars surface habitats. Biosphere 2 is an important foundation for future advances in biospherics and closed ecological system research.", "title": "" }, { "docid": "ffd4fc3c7d63eab3cc8a7129f31afdea", "text": "The growth of desktop 3-D printers is driving an interest in recycled 3-D printer filament to reduce costs of distributed production. Life cycle analysis studies were performed on the recycling of high density polyethylene into filament suitable for additive layer manufacturing with 3-D printers. The conventional centralized recycling system for high population density and low population density rural locations was compared to the proposed in home, distributed recycling system. 
This system would involve shredding and then producing filament with an open-source plastic extruder from postconsumer plastics and then printing the extruded filament into usable, value-added parts and products with 3-D printers such as the open-source self replicating rapid prototyper, or RepRap. The embodied energy and carbon dioxide emissions were calculated for high density polyethylene recycling using SimaPro 7.2 and the database EcoInvent v2.0. The results showed that distributed recycling uses less embodied energy than the best-case scenario used for centralized recycling. For centralized recycling in a low-density population case study involving substantial embodied energy use for transportation and collection these savings for distributed recycling were found to extend to over 80%. If the distributed process is applied to the U.S. high density polyethylene currently recycled, more than 100 million MJ of energy could be conserved per annum along with the concomitant significant reductions in greenhouse gas emissions. It is concluded that with the open-source 3-D printing network expanding rapidly the potential for widespread adoption of in-home recycling of post-consumer plastic represents a novel path to a future of distributed manufacturing appropriate for both the developed and developing world with lower environmental impacts than the current system.", "title": "" }, { "docid": "fa07419129af7100fc0bf38746f084aa", "text": "We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. 
In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.", "title": "" }, { "docid": "8baa6af3ee08029f0a555e4f4db4e218", "text": "We introduce several probabilistic models for learning the lexicon of a semantic parser. Lexicon learning is the first step of training a semantic parser for a new application domain and the quality of the learned lexicon significantly affects both the accuracy and efficiency of the final semantic parser. Existing work on lexicon learning has focused on heuristic methods that lack convergence guarantees and require significant human input in the form of lexicon templates or annotated logical forms. In contrast, our probabilistic models are trained directly from question/answer pairs using EM and our simplest model has a concave objective that guarantees convergence to a global optimum. An experimental evaluation on a set of 4th grade science questions demonstrates that our models improve semantic parser accuracy (35-70% error reduction) and efficiency (4-25x more sentences per second) relative to prior work despite using less human input. 
Our models also obtain competitive results on GEO880 without any dataset-specific engineering.", "title": "" }, { "docid": "836815216224b278df229927d825e411", "text": "Logistics demand forecasting is important for investment decision-making of infrastructure and strategy programming of the logistics industry. In this paper, a hybrid method which combines the Grey Model, artificial neural networks and other techniques in both learning and analyzing phases is proposed to improve the precision and reliability of forecasting. After establishing a learning model GNNM(1,8) for road logistics demand forecasting, we chose road freight volume as target value and other economic indicators, i.e. GDP, production value of primary industry, total industrial output value, outcomes of tertiary industry, retail sale of social consumer goods, disposable personal income, and total foreign trade value as the seven key influencing factors for logistics demand. Actual data sequences of the province of Zhejiang from years 1986 to 2008 were collected as training and test-proof samples. By comparing the forecasting results, it turns out that GNNM(1,8) is an appropriate forecasting method to yield higher accuracy and lower mean absolute percentage errors than other individual models for short-term logistics demand forecasting.", "title": "" }, { "docid": "16b8a948e76a04b1703646d5e6111afe", "text": "Nanotechnology offers many potential benefits to cancer research through passive and active targeting, increased solubility/bioavailability, and novel therapies. However, preclinical characterization of nanoparticles is complicated by the variety of materials, their unique surface properties, reactivity, and the task of tracking the individual components of multicomponent, multifunctional nanoparticle therapeutics in in vivo studies. There are also regulatory considerations and scale-up challenges that must be addressed. 
Despite these hurdles, cancer research has seen appreciable improvements in efficacy and quite a decrease in the toxicity of chemotherapeutics because of 'nanotech' formulations, and several engineered nanoparticle clinical trials are well underway. This article reviews some of the challenges and benefits of nanomedicine for cancer therapeutics and diagnostics.", "title": "" }, { "docid": "a40c00b1dc4a8d795072e0a8cec09d7a", "text": "Summary form only given. Most of current job scheduling systems for supercomputers and clusters provide batch queuing support. With the development of metacomputing and grid computing, users require resources managed by multiple local job schedulers. Advance reservations are becoming essential for job scheduling systems to be utilized within a large-scale computing environment with geographically distributed resources. COSY is a lightweight implementation of such a local job scheduler with support for both queue scheduling and advance reservations. COSY queue scheduling utilizes the FCFS algorithm with backfilling mechanisms and priority management. Advance reservations with COSY can provide effective QoS support for exact start time and latest completion time. Scheduling polices are defined to reject reservations with too short notice time so that there is no start time advantage to making a reservation over submitting to a queue. Further experimental results show that as a larger percentage of reservation requests are involved, a longer mandatory shortest notice time for advance reservations must be applied in order not to sacrifice queue scheduling efficiency.", "title": "" } ]
scidocsrr
6b5140a6b1b2d1da1a1552aa0b4eeeb2
Deep Q-learning From Demonstrations
[ { "docid": "a3bce6c544a08e48a566a189f66d0131", "text": "Model-free episodic reinforcement learning problems define the environment reward with functions that often provide only sparse information throughout the task. Consequently, agents are not given enough feedback about the fitness of their actions until the task ends with success or failure. Previous work addresses this problem with reward shaping. In this paper we introduce a novel approach to improve modelfree reinforcement learning agents’ performance with a three step approach. Specifically, we collect demonstration data, use the data to recover a linear function using inverse reinforcement learning and we use the recovered function for potential-based reward shaping. Our approach is model-free and scalable to high dimensional domains. To show the scalability of our approach we present two sets of experiments in a two dimensional Maze domain, and the 27 dimensional Mario AI domain. We compare the performance of our algorithm to previously introduced reinforcement learning from demonstration algorithms. Our experiments show that our approach outperforms the state-of-the-art in cumulative reward, learning rate and asymptotic performance.", "title": "" } ]
[ { "docid": "8e6d17b6d7919d76cebbcefcc854573e", "text": "Vincent Larivière École de bibliothéconomie et des sciences de l’information, Université de Montréal, C.P. 6128, Succ. CentreVille, Montréal, QC H3C 3J7, Canada, and Observatoire des Sciences et des Technologies (OST), Centre Interuniversitaire de Recherche sur la Science et la Technologie (CIRST), Université du Québec à Montréal, CP 8888, Succ. Centre-Ville, Montréal, QC H3C 3P8, Canada. E-mail: [email protected]", "title": "" }, { "docid": "c1d5f28d264756303fded5faa65587a2", "text": "English vocabulary learning and ubiquitous learning have separately received considerable attention in recent years. However, research on English vocabulary learning in ubiquitous learning contexts has been less studied. In this study, we develop a ubiquitous English vocabulary learning (UEVL) system to assist students in experiencing a systematic vocabulary learning process in which ubiquitous technology is used to develop the system, and video clips are used as the material. Afterward, the technology acceptance model and partial least squares approach are used to explore students’ perspectives on the UEVL system. The results indicate that (1) both the system characteristics and the material characteristics of the UEVL system positively and significantly influence the perspectives of all students on the system; (2) the active students are interested in perceived usefulness; (3) the passive students are interested in perceived ease of use. 2011 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "0df26f2f40e052cde72048b7538548c3", "text": "Keshif is an open-source, web-based data exploration environment that enables data analytics novices to create effective visual and interactive dashboards and explore relations with minimal learning time, and data analytics experts to explore tabular data in multiple perspectives rapidly with minimal setup time. 
In this paper, we present a high-level overview of the exploratory features and design characteristics of Keshif, as well as its API and a selection of its implementation specifics. We conclude with a discussion of its use as an open-source project.", "title": "" }, { "docid": "b0a37782d653fa03843ecdc118a56034", "text": "Non-frontal lip views contain useful information which can be used to enhance the performance of frontal view lipreading. However, the vast majority of recent lipreading works, including the deep learning approaches which significantly outperform traditional approaches, have focused on frontal mouth images. As a consequence, research on joint learning of visual features and speech classification from multiple views is limited. In this work, we present an end-to-end multi-view lipreading system based on Bidirectional Long-Short Memory (BLSTM) networks. To the best of our knowledge, this is the first model which simultaneously learns to extract features directly from the pixels and performs visual speech classification from multiple views and also achieves state-of-the-art performance. The model consists of multiple identical streams, one for each view, which extract features directly from different poses of mouth images. The temporal dynamics in each stream/view are modelled by a BLSTM and the fusion of multiple streams/views takes place via another BLSTM. An absolute average improvement of 3% and 3.8% over the frontal view performance is reported on the OuluVS2 database when the best two (frontal and profile) and three views (frontal, profile, 45°) are combined, respectively. 
The best three-view model results in a 10.5% absolute improvement over the current multi-view state-of-the-art performance on OuluVS2, without using external databases for training, achieving a maximum classification accuracy of 96.9%.", "title": "" }, { "docid": "c02697087e8efd4c1ba9f9a26fa1115b", "text": "OBJECTIVE\nTo estimate the current prevalence of limb loss in the United States and project the future prevalence to the year 2050.\n\n\nDESIGN\nEstimates were constructed using age-, sex-, and race-specific incidence rates for amputation combined with age-, sex-, and race-specific assumptions about mortality. Incidence rates were derived from the 1988 to 1999 Nationwide Inpatient Sample of the Healthcare Cost and Utilization Project, corrected for the likelihood of reamputation among those undergoing amputation for vascular disease. Incidence rates were assumed to remain constant over time and applied to historic mortality and population data along with the best available estimates of relative risk, future mortality, and future population projections. To investigate the sensitivity of our projections to increasing or decreasing incidence, we developed alternative sets of estimates of limb loss related to dysvascular conditions based on assumptions of a 10% or 25% increase or decrease in incidence of amputations for these conditions.\n\n\nSETTING\nCommunity, nonfederal, short-term hospitals in the United States.\n\n\nPARTICIPANTS\nPersons who were discharged from a hospital with a procedure code for upper-limb or lower-limb amputation or diagnosis code of traumatic amputation.\n\n\nINTERVENTIONS\nNot applicable.\n\n\nMAIN OUTCOME MEASURES\nPrevalence of limb loss by age, sex, race, etiology, and level in 2005 and projections to the year 2050.\n\n\nRESULTS\nIn the year 2005, 1.6 million persons were living with the loss of a limb. 
Of these subjects, 42% were nonwhite and 38% had an amputation secondary to dysvascular disease with a comorbid diagnosis of diabetes mellitus. It is projected that the number of people living with the loss of a limb will more than double by the year 2050 to 3.6 million. If incidence rates secondary to dysvascular disease can be reduced by 10%, this number would be lowered by 225,000.\n\n\nCONCLUSIONS\nOne in 190 Americans is currently living with the loss of a limb. Unchecked, this number may double by the year 2050.", "title": "" }, { "docid": "031562142f7a2ffc64156f9d09865604", "text": "The demand for video content is continuously increasing as video sharing on the Internet is becoming enormously popular recently. This demand, with its high bandwidth requirements, has a considerable impact on the load of the network infrastructure. As more users access videos from their mobile devices, the load on the current wireless infrastructure (which has limited capacity) will be even more significant. Based on observations from many local video sharing scenarios, in this paper, we study the tradeoffs of using Wi-Fi ad-hoc mode versus infrastructure mode for video streaming between adjacent devices. We thus show the potential of direct device-to-device communication as a way to reduce the load on the wireless infrastructure and to improve user experiences. Setting up experiments for WiFi devices connected in ad-hoc mode, we collect measurements for various video streaming scenarios and compare them to the case where the devices are connected through access points. The results show the improvements in latency, jitter and loss rate. 
More importantly, the results show that the performance in direct device-to-device streaming is much more stable in contrast to the access point case, where different factors affect the performance causing widely unpredictable qualities.", "title": "" }, { "docid": "9bfba29f44c585df56062582d4e35ba5", "text": "We address the problem of optimizing recommender systems for multiple relevance objectives that are not necessarily aligned. Specifically, given a recommender system that optimizes for one aspect of relevance, semantic matching (as defined by any notion of similarity between source and target of recommendation; usually trained on CTR), we want to enhance the system with additional relevance signals that will increase the utility of the recommender system, but that may simultaneously sacrifice the quality of the semantic match. The issue is that semantic matching is only one relevance aspect of the utility function that drives the recommender system, albeit a significant aspect. In talent recommendation systems, job posters want candidates who are a good match to the job posted, but also prefer those candidates to be open to new opportunities. Recommender systems that recommend discussion groups must ensure that the groups are relevant to the users' interests, but also need to favor active groups over inactive ones. We refer to these additional relevance signals (job-seeking intent and group activity) as extraneous features, and they account for aspects of the utility function that are not captured by the semantic match (i.e. post-CTR down-stream utilities that reflect engagement: time spent reading, sharing, commenting, etc). 
We want to include these extraneous features into the recommendations, but we want to do so while satisfying the following requirements: 1) we do not want to drastically sacrifice the quality of the semantic match, and 2) we want to quantify exactly how the semantic match would be affected as we control the different aspects of the utility function. In this paper, we present an approach that satisfies these requirements.\n We frame our approach as a general constrained optimization problem and suggest ways in which it can be solved efficiently by drawing from recent research on optimizing non-smooth rank metrics for information retrieval. Our approach features the following characteristics: 1) it is model and feature agnostic, 2) it does not require additional labeled training data to be collected, and 3) it can be easily incorporated into an existing model as an additional stage in the computation pipeline. We validate our approach in a revenue-generating recommender system that ranks billions of candidate recommendations on a daily basis and show that a significant improvement in the utility of the recommender system can be achieved with an acceptable and predictable degradation in the semantic match quality of the recommendations.", "title": "" }, { "docid": "a3a29e4f0c25c5f1e09b590048a4a1c0", "text": "We present DeepPicar, a low-cost deep neural network based autonomous car platform. DeepPicar is a small scale replication of a real self-driving car called DAVE-2 by NVIDIA. DAVE-2 uses a deep convolutional neural network (CNN), which takes images from a front-facing camera as input and produces car steering angles as output. DeepPicar uses the same network architecture—9 layers, 27 million connections and 250K parameters—and can drive itself in real-time using a web camera and a Raspberry Pi 3 quad-core platform. Using DeepPicar, we analyze the Pi 3's computing capabilities to support end-to-end deep learning based real-time control of autonomous vehicles. 
We also systematically compare other contemporary embedded computing platforms using the DeepPicar's CNN-based real-time control workload. We find that all tested platforms, including the Pi 3, are capable of supporting the CNN-based real-time control, from 20 Hz up to 100 Hz, depending on hardware platform. However, we find that shared resource contention remains an important issue that must be considered in applying CNN models on shared memory based embedded computing platforms; we observe up to 11.6X execution time increase in the CNN based control loop due to shared resource contention. To protect the CNN workload, we also evaluate state-of-the-art cache partitioning and memory bandwidth throttling techniques on the Pi 3. We find that cache partitioning is ineffective, while memory bandwidth throttling is an effective solution.", "title": "" }, { "docid": "bfe76736623dfc3271be4856f5dc2eef", "text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. 
Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.", "title": "" }, { "docid": "9688efb8845895d49029c07d397a336b", "text": "Familial hypercholesterolaemia (FH) leads to elevated plasma levels of LDL-cholesterol and increased risk of premature atherosclerosis. Dietary treatment is recommended to all patients with FH in combination with lipid-lowering drug therapy. Little is known about how children with FH and their parents respond to dietary advice. The aim of the present study was to characterise the dietary habits in children with FH. A total of 112 children and young adults with FH and a non-FH group of children (n 36) were included. The children with FH had previously received dietary counselling. The FH subjects were grouped as: 12-14 years (FH (12-14)) and 18-28 years (FH (18-28)). Dietary data were collected by SmartDiet, a short self-instructing questionnaire on diet and lifestyle where the total score forms the basis for an overall assessment of the diet. Clinical and biochemical data were retrieved from medical records. The SmartDiet scores were significantly improved in the FH (12-14) subjects compared with the non-FH subjects (SmartDiet score of 31 v. 28, respectively). More FH (12-14) subjects compared with non-FH children consumed low-fat milk (64 v. 18 %, respectively), low-fat cheese (29 v. 3%, respectively), used margarine with highly unsaturated fat (74 v. 14 %, respectively). In all, 68 % of the FH (12-14) subjects and 55 % of the non-FH children had fish for dinner twice or more per week. The FH (18-28) subjects showed the same pattern in dietary choices as the FH (12-14) children. In contrast to the choices of low-fat dietary items, 50 % of the FH (12-14) subjects consumed sweet spreads or sweet drinks twice or more per week compared with only 21 % in the non-FH group. 
In conclusion, ordinary out-patient dietary counselling of children with FH seems to have a long-lasting effect, as the diet of children and young adults with FH consisted of more products that are favourable with regard to the fatty acid composition of the diet.", "title": "" }, { "docid": "136278bd47962b54b644a77bbdaf77e3", "text": "In this paper, we consider the grayscale template-matching problem, invariant to rotation, scale, translation, brightness and contrast, without previous operations that discard grayscale information, like detection of edges, detection of interest points or segmentation/binarization of the images. The obvious “brute force” solution performs a series of conventional template matchings between the image to analyze and the template query shape rotated by every angle, translated to every position and scaled by every factor (within some specified range of scale factors). Clearly, this takes too long and thus is not practical. We propose a technique that substantially accelerates this searching, while obtaining the same result as the original brute force algorithm. In some experiments, our algorithm was 400 times faster than the brute force algorithm. Our algorithm consists of three cascaded filters. These filters successively exclude pixels that have no chance of matching the template from further processing.", "title": "" }, { "docid": "d6b87f5b6627f1a1ac5cc951c7fe0f28", "text": "Despite a strong nonlinear behavior and a complex design, the interior permanent-magnet (IPM) machine is proposed as a good candidate among the PM machines owing to its interesting peculiarities, i.e., higher torque in flux-weakening operation, higher fault tolerance, and ability to adopt low-cost PMs. A second trend in designing PM machines concerns the adoption of fractional-slot (FS) nonoverlapped coil windings, which reduce the end winding length and consequently the Joule losses and the cost. 
Therefore, the adoption of an IPM machine with an FS winding aims to combine both advantages: high torque and efficiency in a wide operating region. However, the combination of an anisotropic rotor and an FS winding stator causes some problems. The interaction between the magnetomotive force harmonics due to the stator current and the rotor anisotropy causes a very high torque ripple. This paper illustrates a procedure in designing an IPM motor with the FS winding exhibiting a low torque ripple. The design strategy is based on two consecutive steps: at first, the winding is optimized by taking a multilayer structure, and then, the rotor geometry is optimized by adopting a nonsymmetric structure. As an example, a 12-slot 10-pole IPM machine is considered, achieving a torque ripple lower than 1.5% at full load.", "title": "" }, { "docid": "ad091e4f66adb26d36abfc40377ee6ab", "text": "This chapter provides a self-contained first introduction to description logics (DLs). The main concepts and features are explained with examples before syntax and semantics of the DL SROIQ are defined in detail. Additional sections review light-weight DL languages, discuss the relationship to the Web Ontology Language OWL and give pointers to further reading.", "title": "" }, { "docid": "d38df66fe85b4d12093965e649a70fe1", "text": "We describe the CoNLL-2002 shared task: language-independent named entity recognition. We give background information on the data sets and the evaluation method, present a general overview of the systems that have taken part in the task and discuss their performance.", "title": "" }, { "docid": "f783860e569d9f179466977db544bd01", "text": "In medical research, continuous variables are often converted into categorical variables by grouping values into two or more categories. We consider in detail issues pertaining to creating just two groups, a common approach in clinical research. 
We argue that the simplicity achieved is gained at a cost; dichotomization may create rather than avoid problems, notably a considerable loss of power and residual confounding. In addition, the use of a data-derived 'optimal' cutpoint leads to serious bias. We illustrate the impact of dichotomization of continuous predictor variables using as a detailed case study a randomized trial in primary biliary cirrhosis. Dichotomization of continuous data is unnecessary for statistical analysis and in particular should not be applied to explanatory variables in regression models.", "title": "" }, { "docid": "b83a0341f2ead9c72eda4217e0f31ea2", "text": "Time-series classification has attracted considerable research attention due to the various domains where time-series data are observed, ranging from medicine to econometrics. Traditionally, the focus of time-series classification has been on short time-series data composed of a few patterns exhibiting variabilities, while recently there have been attempts to focus on longer series composed of multiple local patrepeating with an arbitrary irregularity. The primary contribution of this paper relies on presenting a method which can detect local patterns in repetitive time-series via fitting local polynomial functions of a specified degree. We capture the repetitiveness degrees of time-series datasets via a new measure. Furthermore, our method approximates local polynomials in linear time and ensures an overall linear running time complexity. The coefficients of the polynomial functions are converted to symbolic words via equi-area discretizations of the coefficients' distributions. The symbolic polynomial words enable the detection of similar local patterns by assigning the same word to similar polynomials. Moreover, a histogram of the frequencies of the words is constructed from each time-series' bag of words. 
Each row of the histogram enables a new representation for the series and symbolizes the occurrence of local patterns and their frequencies. In an experimental comparison against state-of-the-art baselines on repetitive datasets, our method demonstrates significant improvements in terms of prediction accuracy.", "title": "" }, { "docid": "baa3d41ba1970125301b0fdd9380a966", "text": "This article provides an alternative perspective for measuring author impact by applying PageRank algorithm to a coauthorship network. A weighted PageRank algorithm considering citation and coauthorship network topology is proposed. We test this algorithm under different damping factors by evaluating author impact in the informetrics research community. In addition, we also compare this weighted PageRank with the h-index, citation, and program committee (PC) membership of the International Society for Scientometrics and Informetrics (ISSI) conferences. Findings show that this weighted PageRank algorithm provides reliable results in measuring author impact.", "title": "" }, { "docid": "c410b6cd3f343fc8b8c21e23e58013cd", "text": "Virtualization is increasingly being used to address server management and administration issues like flexible resource allocation, service isolation and workload migration. In a virtualized environment, the virtual machine monitor (VMM) is the primary resource manager and is an attractive target for implementing system features like scheduling, caching, and monitoring. However, the lackof runtime information within the VMM about guest operating systems, sometimes called the semantic gap, is a significant obstacle to efficiently implementing some kinds of services.In this paper we explore techniques that can be used by a VMM to passively infer useful information about a guest operating system's unified buffer cache and virtual memory system. 
We have created a prototype implementation of these techniques inside the Xen VMM called Geiger and show that it can accurately infer when pages are inserted into and evicted from a system's buffer cache. We explore several nuances involved in passively implementing eviction detection that have not previously been addressed, such as the importance of tracking disk block liveness, the effect of file system journaling, and the importance of accounting for the unified caches found in modern operating systems.Using case studies we show that the information provided by Geiger enables a VMM to implement useful VMM-level services. We implement a novel working set size estimator which allows the VMM to make more informed memory allocation decisions. We also show that a VMM can be used to drastically improve the hit rate in remote storage caches by using eviction-based cache placement without modifying the application or operating system storage interface. Both case studies hint at a future where inference techniques enable a broad new class of VMM-level functionality.", "title": "" }, { "docid": "2a56b6e6dcab0817e6ab4dfa8826fc49", "text": "Considerable data and analysis support the detection of one or more supernovae (SNe) at a distance of about 50 pc, ∼2.6 million years ago. This is possibly related to the extinction event around that time and is a member of a series of explosions that formed the Local Bubble in the interstellar medium. We build on previous work, and propagate the muon flux from SN-initiated cosmic rays from the surface to the depths of the ocean. We find that the radiation dose from the muons will exceed the total present surface dose from all sources at depths up to 1 km and will persist for at least the lifetime of marine megafauna. 
It is reasonable to hypothesize that this increase in radiation load may have contributed to a newly documented marine megafaunal extinction at that time.", "title": "" }, { "docid": "764a1d2571ed45dd56aea44efd4f5091", "text": "BACKGROUND\nThere exists some ambiguity regarding the exact anatomical limits of the orbicularis retaining ligament, particularly its medial boundary in both the superior and inferior orbits. Precise understanding of this anatomy is necessary during periorbital rejuvenation.\n\n\nMETHODS\nSixteen fresh hemifacial cadaver dissections were performed in the anatomy laboratory to evaluate the anatomy of the orbicularis retaining ligament. Dissection was assisted by magnification with loupes and the operating microscope.\n\n\nRESULTS\nA ligamentous system was found that arises from the inferior and superior orbital rim that is truly periorbital. This ligament spans the entire circumference of the orbit from the medial to the lateral canthus. There exists a fusion line between the orbital septum and the orbicularis retaining ligament in the superior orbit, indistinguishable from the arcus marginalis of the inferior orbital rim. Laterally, the orbicularis retaining ligament contributes to the lateral canthal ligament, consistent with previous studies. No contribution to the medial canthus was identified in this study.\n\n\nCONCLUSIONS\nThe orbicularis retaining ligament is a true, circumferential \"periorbital\" structure. This ligament may serve two purposes: (1) to act as a fixation point for the orbicularis muscle of the upper and lower eyelids and (2) to protect the ocular globe. With techniques of periorbital injection with fillers and botulinum toxin becoming ever more popular, understanding the orbicularis retaining ligament's function as a partitioning membrane is mandatory for avoiding ocular complications. 
As a support structure, examples are shown of how manipulation of this ligament may benefit canthopexy, septal reset, and brow-lift procedures as described by Hoxworth.", "title": "" } ]
scidocsrr
f9657119e4fdea6594c89addb1fd6be3
On the wafer/pad friction of chemical-mechanical planarization (CMP) processes - Part I: modeling and analysis
[ { "docid": "d1bd5406b31cec137860a73b203d6bef", "text": "A chemical-mechanical planarization (CMP) model based on lubrication theory is developed which accounts for pad compressibility, pad porosity and means of slurry delivery. Slurry film thickness and velocity distributions between the pad and the wafer are predicted using the model. Two regimes of CMP operation are described: the lubrication regime (for ∼40–70 µm slurry film thickness) and the contact regime (for thinner films). These regimes are identified for two different pads using experimental copper CMP data and the predictions of the model. The removal rate correlation based on lubrication and mass transport theory agrees well with our experimental data in the lubrication regime. © 2000 Elsevier Science S.A. All rights reserved.", "title": "" }, { "docid": "e03795645ca53f6d4f903ff8ff227054", "text": "This paper presents the experimental validation and some application examples of the proposed wafer/pad friction models for linear chemical-mechanical planarization (CMP) processes in the companion paper. An experimental setup of a linear CMP polisher is first presented and some polishing processes are then designed for validation of the wafer/pad friction modeling and analysis. The friction torques of both the polisher spindle and roller systems are used to monitor variations of the friction coefficient in situ. Verification of the friction model under various process parameters is presented. Effects of pad conditioning and the wafer film topography on wafer/pad friction are experimentally demonstrated. Finally, several application examples are presented showing the use of the roller motor current measurement for real-time process monitoring and control.", "title": "" } ]
[ { "docid": "c5c64d7fcd9b4804f7533978026dcfbd", "text": "This paper presents a new method to control multiple micro-scale magnetic agents operating in close proximity to each other for applications in microrobotics. Controlling multiple magnetic microrobots close to each other is difficult due to magnetic interactions between the agents, and here we seek to control those interactions for the creation of desired multi-agent formations. We use the fact that all magnetic agents orient to the global input magnetic field to modulate the local attraction-repulsion forces between nearby agents. Here we study these controlled interaction magnetic forces for agents at a water-air interface and devise two controllers to regulate the inter-agent spacing and heading of the set, for motion in two dimensions. Simulation and experimental demonstrations show the feasibility of the idea and its potential for the completion of complex tasks using teams of microrobots. Average tracking error of less than 73 μm and 14° is accomplished for the regulation of the inter-agent space and the pair heading angle, respectively, for identical disk-shape agents with nominal radius of 500 μm and thickness of 80 μm operating within several body-lengths of each other.", "title": "" }, { "docid": "5dfda76bf2065850492406fdf7cfed81", "text": "We present a method for object categorization in real-world scenes. Following a common consensus in the field, we do not assume that a figure-ground segmentation is available prior to recognition. However, in contrast to most standard approaches for object class recognition, our approach automatically segments the object as a result of the categorization. This combination of recognition and segmentation into one process is made possible by our use of an Implicit Shape Model, which integrates both capabilities into a common probabilistic framework. 
This model can be thought of as a non-parametric approach which can easily handle configurations of large numbers of object parts. In addition to the recognition and segmentation result, it also generates a per-pixel confidence measure specifying the area that supports a hypothesis and how much it can be trusted. We use this confidence to derive a natural extension of the approach to handle multiple objects in a scene and resolve ambiguities between overlapping hypotheses with an MDL-based criterion. In addition, we present an extensive evaluation of our method on a standard dataset for car detection and compare its performance to existing methods from the literature. Our results show that the proposed method outperforms previously published methods while needing one order of magnitude less training examples. Finally, we present results for articulated objects, which show that the proposed method can categorize and segment unfamiliar objects in different articulations and with widely varying texture patterns, even under significant partial occlusion.", "title": "" }, { "docid": "2282c06ea5e203b7e94095334bba05b9", "text": "Exploring and surveying the world has been an important goal of humankind for thousands of years. Entering the 21st century, the Earth has almost been fully digitally mapped. Widespread deployment of GIS (Geographic Information Systems) technology and a tremendous increase of both satellite and street-level mapping over the last decade enables the public to view large portions of the world using computer applications such as Bing Maps or Google Earth.", "title": "" }, { "docid": "61a2b0e51b27f46124a8042d59c0f022", "text": "We address the highly challenging problem of real-time 3D hand tracking based on a monocular RGB-only sequence. 
Our tracking method combines a convolutional neural network with a kinematic 3D hand model, such that it generalizes well to unseen data, is robust to occlusions and varying camera viewpoints, and leads to anatomically plausible as well as temporally smooth hand motions. For training our CNN we propose a novel approach for the synthetic generation of training data that is based on a geometrically consistent image-to-image translation network. To be more specific, we use a neural network that translates synthetic images to \"real\" images, such that the so-generated images follow the same statistical distribution as real-world hand images. For training this translation network we combine an adversarial loss and a cycle-consistency loss with a geometric consistency loss in order to preserve geometric properties (such as hand pose) during translation. We demonstrate that our hand tracking system outperforms the current state-of-the-art on challenging RGB-only footage.", "title": "" }, { "docid": "16fbebf500be1bf69027d3a35d85362b", "text": "Mobile Edge Computing is an emerging technology that provides cloud and IT services within the close proximity of mobile subscribers. Traditional telecom network operators perform traffic control flow (forwarding and filtering of packets), but in Mobile Edge Computing, cloud servers are also deployed in each base station. Therefore, network operator has a great responsibility in serving mobile subscribers. Mobile Edge Computing platform reduces network latency by enabling computation and storage capacity at the edge network. It also enables application developers and content providers to serve context-aware services (such as collaborative computing) by using real time radio access network information. Mobile and Internet of Things devices perform computation offloading for compute intensive applications, such as image processing, mobile gaming, to leverage the Mobile Edge Computing services. 
In this paper, some of the promising real-time Mobile Edge Computing application scenarios are discussed. Later on, state-of-the-art research efforts in the Mobile Edge Computing domain are presented. The paper also presents a taxonomy of Mobile Edge Computing, describing key attributes. Finally, open research challenges in the successful deployment of Mobile Edge Computing are identified and discussed.", "title": "" }, { "docid": "e3524dfc6939238e9e2f49440c1090ea", "text": "This paper presents modeling approaches for step-up grid-connected photovoltaic systems intended to provide analytical tools for control design. The first approach is based on a voltage source representation of the bulk capacitor interacting with the grid-connected inverter, which is a common model for large DC buses and closed-loop inverters. The second approach considers the inverter of a double-stage PV system as a Norton equivalent, which is widely accepted for open-loop inverters. In addition, the paper considers both ideal and realistic models for the DC/DC converter that interacts with the PV module, providing four mathematical models to cover a wide range of applications. The models are expressed in state space representation to simplify their use in analysis and control design, and also to be easily implemented in simulation software, e.g., Matlab. The PV system was analyzed to demonstrate the non-minimum phase condition for all the models, which is an important aspect to select the control technique. Moreover, the system observability and controllability were studied to define design criteria. 
Finally, the analytical results are illustrated by means of detailed simulations, and the paper results are validated in an experimental test bench.", "title": "" }, { "docid": "33c453cec25a77e1bde4ecb353fc678b", "text": "This article introduces the functional model of self-disclosure on social network sites by integrating a functional theory of self-disclosure and research on audience representations as situational cues for activating interpersonal goals. According to this model, people pursue strategic goals and disclose differently depending on social media affordances, and self-disclosure goals mediate between media affordances and disclosure intimacy. The results of the empirical study examining self-disclosure motivations and characteristics in Facebook status updates, wall posts, and private messaging lend support to this model and provide insights into the motivational drivers of self-disclosure on SNSs, helping to reconcile traditional views on self-disclosure and self-disclosing behaviors in new media contexts.", "title": "" }, { "docid": "b113d45660629847afbd7faade1f3a71", "text": "A wideband circularly polarized (CP) rectangular dielectric resonator antenna (DRA) is presented. An Archimedean spiral slot is used to excite the rectangular DRA for wideband CP radiation. The operating principle of the proposed antenna is based on using a broadband feeding structure to excite the DRA. A prototype of the proposed antenna is designed, fabricated, and measured. Good agreement between the simulated and measured results is attained, and a wide 3-dB axial-ratio (AR) bandwidth of 25.5% is achieved.", "title": "" }, { "docid": "024e95f41a48e8409bd029c14e6acb3a", "text": "This communication investigates the application of metamaterial absorber (MA) to waveguide slot antenna to reduce its radar cross section (RCS). A novel ultra-thin MA is presented, and its absorbing characteristics and mechanism are analyzed. 
The PEC ground plane of waveguide slot antenna is covered by this MA. As compared with the slot antenna with a PEC ground plane, the simulation and experiment results demonstrate that the monostatic and bistatic RCS of waveguide slot antenna are reduced significantly, and the performance of antenna is preserved simultaneously.", "title": "" }, { "docid": "d9a9339672121fb6c3baeb51f11bfcd8", "text": "The VISION (video indexing for searching over networks) digital video library system has been developed in our laboratory as a testbed for evaluating automatic and comprehensive mechanisms for video archive creation and content-based search, filtering and retrieval of video over local and wide area networks. In order to provide access to video footage within seconds of broadcast, we have developed a new pipelined digital video processing architecture which is capable of digitizing, processing, indexing and compressing video in real time on an inexpensive general purpose computer. These videos were automatically partitioned into short scenes using video, audio and closed-caption information. The resulting scenes are indexed based on their captions and stored in a multimedia database. A client-server-based graphical user interface was developed to enable users to remotely search this archive and view selected video segments over networks of different bandwidths. Additionally, VISION classifies the incoming videos with respect to a taxonomy of categories and will selectively send users videos which match their individual profiles. © 1999 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "8466bed483a2774f7ccb44416364cf3f", "text": "This paper proposes a semantics for incorporation that does not require the incorporated nominal to form a syntactic or morphological unit with the verb. Such a semantics is needed for languages like Hindi where semantic intuitions suggest the existence of incorporation but the evidence for syntactic fusion is not compelling. 
A lexical alternation between regular transitive and incorporating transitive verbs is proposed to derive the particular features of Hindi incorporation. The proposed semantics derives existential force without positing existential closure over the incorporated nominal. It also builds in modality into the meaning of the incorporating verb. This proposal is compared to two other recent proposals for the interpretation of incorporated arguments. The cross-linguistic implications of the analysis developed on the basis of Hindi are also discussed. 1. Identifying Incorporation The primary identification of the phenomenon known as noun incorporation is based on morphological and syntactic evidence about the shape and position of the nominal element involved. Consider the Inuit example in (1a) as well as the more familiar example of English compounding in (1b): 1a. Angunguaq eqalut-tur-p-u-q West Greenlandic -Inuit A-ABS salmon-eat-IND-[-tr]-3S Van Geenhoven (1998) “Angunguaq ate salmon.” b. Mary went apple-picking. The thematic object in (1a) occurs inside the verbal complex, and this affects transitivity. The verb has intransitive marking and the subject has absolutive case instead of the expected ergative. The nominal itself is a bare stem. There is no determiner, case marking, plurality or modification. In other words, an incorporated nominal is an N, not a DP or an NP. Similar comments apply to the English compound in (1b), though it should be noted that English does not have [V N+V] compounds. Though the reasons for this are not particularly well-understood at this time, my purpose in introducing English compounds here is for expository purposes only. A somewhat less obvious case of noun incorporation is attested in Niuean, discussed by Massam (2001). Niuean is an SVO language with obligatory V fronting. 
Massam notes that, in addition to the expected VSO order, there also exist sentences with VOS order in Niuean. (Footnote 1: There can be external modifiers with (a limited set of) determiners, case marking etc. in what is known as the phenomenon of 'doubling'.)", "title": "" }, { "docid": "ad004dd47449b977cd30f2454c5af77a", "text": "Plants are a tremendous source for the discovery of new products of medicinal value for drug development. Today several distinct chemicals derived from plants are important drugs currently used in one or more countries in the world. Many of the drugs sold today are simple synthetic modifications or copies of the naturally obtained substances. The evolving commercial importance of secondary metabolites has in recent years resulted in a great interest in secondary metabolism, particularly in the possibility of altering the production of bioactive plant metabolites by means of tissue culture technology. Plant cell culture technologies were introduced at the end of the 1960s as a possible tool for both studying and producing plant secondary metabolites. Different strategies, using an in vitro system, have been extensively studied to improve the production of plant chemicals. The focus of the present review is the application of tissue culture technology for the production of some important plant pharmaceuticals. Also, we describe the results of in vitro cultures and production of some important secondary metabolites obtained in our laboratory.", "title": "" }, { "docid": "f21e55c7509124be8fabfb1d706d76aa", "text": "CTCF and BORIS (CTCFL), two paralogous mammalian proteins sharing nearly identical DNA binding domains, are thought to function in a mutually exclusive manner in DNA binding and transcriptional regulation. Here we show that these two proteins co-occupy a specific subset of regulatory elements consisting of clustered CTCF binding motifs (termed 2xCTSes). 
BORIS occupancy at 2xCTSes is largely invariant in BORIS-positive cancer cells, with the genomic pattern recapitulating the germline-specific BORIS binding to chromatin. In contrast to the single-motif CTCF target sites (1xCTSes), the 2xCTS elements are preferentially found at active promoters and enhancers, both in cancer and germ cells. 2xCTSes are also enriched in genomic regions that escape histone to protamine replacement in human and mouse sperm. Depletion of the BORIS gene leads to altered transcription of a large number of genes and the differentiation of K562 cells, while the ectopic expression of this CTCF paralog leads to specific changes in transcription in MCF7 cells. We discover two functionally and structurally different classes of CTCF binding regions, 2xCTSes and 1xCTSes, revealed by their predisposition to bind BORIS. We propose that 2xCTSes play key roles in the transcriptional program of cancer and germ cells.", "title": "" }, { "docid": "6e3e881cb1bb05101ad0f38e3f21e547", "text": "Mechanical valves used for aortic valve replacement (AVR) continue to be associated with bleeding risks because of anticoagulation therapy, while bioprosthetic valves are at risk of structural valve deterioration requiring reoperation. This risk/benefit ratio of mechanical and bioprosthetic valves has led American and European guidelines on valvular heart disease to be consistent in recommending the use of mechanical prostheses in patients younger than 60 years of age. Despite these recommendations, the use of bioprosthetic valves has significantly increased over the last decades in all age groups. A systematic review of manuscripts applying propensity-matching or multivariable analysis to compare the usage of mechanical vs. bioprosthetic valves found either similar outcomes between the two types of valves or favourable outcomes with mechanical prostheses, particularly in younger patients. 
The risk/benefit ratio and choice of valves will be impacted by developments in valve designs, anticoagulation therapy, reducing the required international normalized ratio, and transcatheter and minimally invasive procedures. However, there is currently no evidence to support lowering the age threshold for implanting a bioprosthesis. Physicians in the Heart Team and patients should be cautious in pursuing more bioprosthetic valve use until its benefit is clearly proven in middle-aged patients.", "title": "" }, { "docid": "5a4b73a1357809a547773fa8982172dd", "text": "In this paper, we present a method for cup boundary detection from monocular colour fundus image to help quantify cup changes. The method is based on anatomical evidence such as vessel bends at cup boundary, considered relevant by glaucoma experts. Vessels are modeled and detected in a curvature space to better handle inter-image variations. Bends in a vessel are robustly detected using a region of support concept, which automatically selects the right scale for analysis. A reliable subset called r-bends is derived using a multi-stage strategy and a local splinetting is used to obtain the desired cup boundary. The method has been successfully tested on 133 images comprising 32 normal and 101 glaucomatous images against three glaucoma experts. The proposed method shows high sensitivity in cup to disk ratio-based glaucoma detection and local assessment of the detected cup boundary shows good consensus with the expert markings.", "title": "" }, { "docid": "f3f70e5ba87399e9d44bda293a231399", "text": "During natural disasters or crises, users on social media tend to easily believe contents of postings related to the events, and retweet the postings with hoping them to be reached to many other users. Unfortunately, there are malicious users who understand the tendency and post misinformation such as spam and fake messages with expecting wider propagation. 
To resolve the problem, in this paper we conduct a case study of the 2013 Moore Tornado and Hurricane Sandy. Concretely, we (i) understand behaviors of these malicious users, (ii) analyze properties of spam, fake and legitimate messages, (iii) propose flat and hierarchical classification approaches, and (iv) detect both fake and spam messages while even distinguishing between them. Our experimental results show that our proposed approaches identify spam and fake messages with 96.43% accuracy and 0.961 F-measure.", "title": "" }, { "docid": "0ce0db75982c205b581bc24060b9e2a4", "text": "Maxim Gumin's WaveFunctionCollapse (WFC) algorithm is an example-driven image generation algorithm emerging from the craft practice of procedural content generation. In WFC, new images are generated in the style of given examples by ensuring every local window of the output occurs somewhere in the input. Operationally, WFC implements a non-backtracking, greedy search method. This paper examines WFC as an instance of constraint solving methods. We trace WFC's explosive influence on the technical artist community, explain its operation in terms of ideas from the constraint solving literature, and probe its strengths by means of a surrogate implementation using answer set programming.", "title": "" }, { "docid": "16ee3eb990a49bdff840609ae79f26e3", "text": "Driven by the prominent thermal signature of humans and following the growing availability of unmanned aerial vehicles (UAVs), more and more research efforts have been focusing on the detection and tracking of pedestrians using thermal infrared images recorded from UAVs. However, pedestrian detection and tracking from the thermal images obtained from UAVs pose many challenges due to the low-resolution of imagery, platform motion, image instability and the relatively small size of the objects. This research tackles these challenges by proposing a pedestrian detection and tracking system. 
A two-stage blob-based approach is first developed for pedestrian detection. This approach first extracts pedestrian blobs using the regional gradient feature and geometric constraints filtering and then classifies the detected blobs by using a linear Support Vector Machine (SVM) with a hybrid descriptor, which sophisticatedly combines Histogram of Oriented Gradient (HOG) and Discrete Cosine Transform (DCT) features in order to achieve accurate detection. This research further proposes an approach for pedestrian tracking. This approach employs the feature tracker with the update of detected pedestrian location to track pedestrian objects from the registered videos and extracts the motion trajectory data. The proposed detection and tracking approaches have been evaluated by multiple different datasets, and the results illustrate the effectiveness of the proposed methods. This research is expected to significantly benefit many transportation applications, such as the multimodal traffic performance measure, pedestrian behavior study and pedestrian-vehicle crash analysis. Future work will focus on using fused thermal and visual images to further improve the detection efficiency and effectiveness.", "title": "" }, { "docid": "2a717b823caaaa0187d25b04305f13ee", "text": "BACKGROUND\nDo peripersonal space for acting on objects and interpersonal space for interacting with con-specifics share common mechanisms and reflect the social valence of stimuli? 
To answer this question, we investigated whether these spaces refer to a similar or different physical distance.\n\n\nMETHODOLOGY\nParticipants provided reachability-distance (for potential action) and comfort-distance (for social processing) judgments towards human and non-human virtual stimuli while standing still (passive) or walking toward stimuli (active).\n\n\nPRINCIPAL FINDINGS\nComfort-distance was larger than other conditions when participants were passive, but reachability and comfort distances were similar when participants were active. Both spaces were modulated by the social valence of stimuli (reduction with virtual females vs males, expansion with cylinder vs robot) and the gender of participants.\n\n\nCONCLUSIONS\nThese findings reveal that peripersonal reaching and interpersonal comfort spaces share a common motor nature and are sensitive, at different degrees, to social modulation. Therefore, social processing seems embodied and grounded in the body acting in space.", "title": "" }, { "docid": "3a501184ca52dedde44e79d2c66e78df", "text": "China’s New Silk Road initiative is a multistate commercial project as grandiose as it is ambitious. Comprised of an overland economic “belt” and a maritime transit component, it envisages the development of a trade network traversing numerous countries and continents. Major investments in infrastructure are to establish new commercial hubs along the route, linking regions together via railroads, ports, energy transit systems, and technology. A relatively novel concept introduced by China’s President Xi Jinping in 2013, several projects related to the New Silk Road initiative—also called “One Belt, One Road” (OBOR, or B&R)—are being planned, are under construction, or have been recently completed. The New Silk Road is a fluid concept in its formative stages: it encompasses a variety of projects and is all-inclusive in terms of countries welcomed to participate. 
For these reasons, it has been labeled an abstract or visionary project. However, those in the region can attest that the New Silk Road is a reality, backed by Chinese hard currency. Thus, while Washington continues to deliberate on an overarching policy toward Asia, Beijing is making inroads—literally and figuratively— across the region and beyond.", "title": "" } ]
scidocsrr
fb3ec739ae67416aa9f0feacf4d301c9
Computational Technique for an Efficient Classification of Protein Sequences With Distance-Based Sequence Encoding Algorithm
[ { "docid": "c3525081c0f4eec01069dd4bd5ef12ab", "text": "More than twelve years have elapsed since the first public release of WEKA. In that time, the software has been rewritten entirely from scratch, evolved substantially and now accompanies a text on data mining [35]. These days, WEKA enjoys widespread acceptance in both academia and business, has an active community, and has been downloaded more than 1.4 million times since being placed on Source-Forge in April 2000. This paper provides an introduction to the WEKA workbench, reviews the history of the project, and, in light of the recent 3.6 stable release, briefly discusses what has been added since the last stable version (Weka 3.4) released in 2003.", "title": "" } ]
[ { "docid": "d8042183e064ffba69b54246b17b9ff4", "text": "Offshore software development is a new trend in the information technology (IT) outsourcing field, fueled by the globalization of IT and the improvement of telecommunication facilities. Countries such as India, Ireland, and Israel have established a significant presence in this market. In this article, we discuss how software processes affect offshore development projects. We use data from projects in India, and focus on three measures of project performance: effort, elapsed time, and software rework.", "title": "" }, { "docid": "69d3c943755734903b9266ca2bd2fad1", "text": "This paper describes experiments in Machine Learning for text classification using a new representation of text based on WordNet hypernyms. Six binary classification tasks of varying difficulty are defined, and the Ripper system is used to produce discrimination rules for each task using the new hypernym density representation. Rules are also produced with the commonly used bag-of-words representation, incorporating no knowledge from WordNet. Experiments show that for some of the more difficult tasks the hypernym density representation leads to significantly more accurate and more comprehensible rules.", "title": "" }, { "docid": "a2cf369a67507d38ac1a645e84525497", "text": "Development of a cystic mass on the nasal dorsum is a very rare complication of aesthetic rhinoplasty. Most reported cases are of mucous cyst and entrapment of the nasal mucosa in the subcutaneous space due to traumatic surgical technique has been suggested as a presumptive pathogenesis. Here, we report a case of dorsal nasal cyst that had a different pathogenesis for cyst formation. A 58-yr-old woman developed a large cystic mass on the nasal radix 30 yr after augmentation rhinoplasty with silicone material. The mass was removed via a direct open approach and the pathology findings revealed a foreign body inclusion cyst associated with silicone. 
Successful nasal reconstruction was performed with autologous cartilages. Discussion and a brief review of the literature will be focused on the pathophysiology of and treatment options for a postrhinoplasty dorsal cyst.", "title": "" }, { "docid": "60ac1fa826816d39562104849fff8f46", "text": "The increased attention to environmentalism in western societies has been accompanied by a rise in ecotourism, i.e. ecologically sensitive travel to remote areas to learn about ecosystems, as well as in cultural tourism, focusing on the people who are a part of ecosystems. Increasingly, the internet has partnered with ecotourism companies to provide information about destinations and facilitate travel arrangements. This study reviews the literature linking ecotourism and sustainable development, as well as prior research showing that cultures have been historically commodified in tourism advertising for developing countries destinations. We examine seven websites advertising ecotourism and cultural tourism and conclude that: (1) advertisements for natural and cultural spaces are not always consistent with the discourse of sustainability; and (2) earlier critiques of the commodification of culture in print advertising extend to internet advertising also.", "title": "" }, { "docid": "46170fe683c78a767cb15c0ac3437e83", "text": "Recently, efforts in the development of speech recognition systems and robots have come to fruition with an overflow of applications in our daily lives. However, we are still far from achieving natural interaction between humans and robots, given that robots do not take into account the emotional state of speakers. The objective of this research is to create an automatic emotion classifier integrated with a robot, such that the robot can understand the emotional state of a human user by analyzing the speech signals from the user. 
This becomes particularly relevant in the realm of using assistive robotics to tailor therapeutic techniques towards assisting children with autism spectrum disorder (ASD). Over the past two decades, the number of children being diagnosed with ASD has been rapidly increasing, yet the clinical and societal support have not been enough to cope with the needs. Therefore, finding alternative, affordable, and accessible means of therapy and assistance has become more of a concern. Improving audio-based emotion prediction for children with ASD will allow for the robotic system to properly assess the engagement level of the child and modify its responses to maximize the quality of interaction between the robot and the child and sustain an interactive learning environment.", "title": "" }, { "docid": "3a58c1a2e4428c0b875e1202055e5b13", "text": "Short texts usually encounter data sparsity and ambiguity problems in representations for their lack of context. In this paper, we propose a novel method to model short texts based on semantic clustering and convolutional neural network. Particularly, we first discover semantic cliques in embedding spaces by a fast clustering algorithm. Then, multi-scale semantic units are detected under the supervision of semantic cliques, which introduce useful external knowledge for short texts. These meaningful semantic units are combined and fed into convolutional layer, followed by max-pooling operation. Experimental results on two open benchmarks validate the effectiveness of the proposed method.", "title": "" }, { "docid": "918bf13ef0289eb9b78309c83e963b26", "text": "For information retrieval, it is useful to classify documents using a hierarchy of terms from a domain. One problem is that, for many domains, hierarchies of terms are not available. The task 17 of SemEval 2015 addresses the problem of structuring a set of terms from a given domain into a taxonomy without manual intervention. 
Here we present some simple taxonomy structuring techniques, such as term overlap and document and sentence cooccurrence in large quantities of text (English Wikipedia) to produce hypernym pairs for the eight domain lists supplied by the task organizers. Our submission ranked first in this 2015 benchmark, which suggests that overly complicated methods might need to be adapted to individual domains. We describe our generic techniques and present an initial evaluation of results.", "title": "" }, { "docid": "640fd96e02d8aa69be488323f77b40ba", "text": "Low Power Wide Area (LPWA) connectivity, a wireless wide area technology that is characterized for interconnecting devices with low bandwidth connectivity and focusing on range and power efficiency, is seen as one of the fastest-growing components of Internet-of-Things (IoT). The LPWA connectivity is used to serve a diverse range of vertical applications, including agriculture, consumer, industrial, logistic, smart building, smart city and utilities. 3GPP has defined the maiden Narrowband IoT (NB-IoT) specification in Release 13 (Rel-13) to accommodate the LPWA demand. Several major cellular operators, such as China Mobile, Deutsch Telekom and Vodafone, have announced their NB-IoT trials or commercial network in year 2017. In Telekom Malaysia, we have setup a NB-IoT trial network for End-to-End (E2E) integration study. Our experimental assessment showed that the battery lifetime target for NB-IoT devices as stated by 3GPP utilizing latest-to-date Commercial Off-The-Shelf (COTS) NB-IoT modules is yet to be realized. 
Finally, several recommendations on how to optimize the battery lifetime while designing firmware for NB-IoT devices are also provided.", "title": "" }, { "docid": "aa3c0d7d023e1f9795df048ee44d92ec", "text": "Correspondence Institute of Computer Science, University of Tartu, Juhan Liivi 2, 50409 Tartu, Estonia Email: [email protected] Summary Blockchain platforms, such as Ethereum, allow a set of actors to maintain a ledger of transactions without relying on a central authority and to deploy scripts, called smart contracts, that are executed whenever certain transactions occur. These features can be used as basic building blocks for executing collaborative business processes between mutually untrusting parties. However, implementing business processes using the low-level primitives provided by blockchain platforms is cumbersome and error-prone. In contrast, established business process management systems, such as those based on the standard Business Process Model and Notation (BPMN), provide convenient abstractions for rapid development of process-oriented applications. This article demonstrates how to combine the advantages of a business process management system with those of a blockchain platform. The article introduces a blockchain-based BPMN execution engine, namely Caterpillar. Like any BPMN execution engine, Caterpillar supports the creation of instances of a process model and allows users to monitor the state of process instances and to execute tasks thereof. The specificity of Caterpillar is that the state of each process instance is maintained on the (Ethereum) blockchain and the workflow routing is performed by smart contracts generated by a BPMN-to-Solidity compiler. The Caterpillar compiler supports a large array of BPMN constructs, including subprocesses, multi-instance activities and event handlers. 
The paper describes the architecture of Caterpillar, and the interfaces it provides to support the monitoring of process instances, the allocation and execution of work items, and the execution of service tasks.", "title": "" }, { "docid": "8e082f030aa5c5372fe327d4291f1864", "text": "The Internet of Things (IoT) describes the interconnection of objects (or Things) for various purposes including identification, communication, sensing, and data collection. “Things” in this context range from traditional computing devices like Personal Computers (PC) to general household objects embedded with capabilities for sensing and/or communication through the use of technologies such as Radio Frequency Identification (RFID). This conceptual paper, from a philosophical viewpoint, introduces an initial set of guiding principles also referred to in the paper as commandments that can be applied by all the stakeholders involved in the IoT during its introduction, deployment and thereafter. © 2011 Published by Elsevier Ltd. Selection and/or peer-review under responsibility of [name organizer]", "title": "" }, { "docid": "f376948c1b8952b0b19efad3c5ca0471", "text": "This essay grew out of an examination of one-tailed significance testing. One-tailed tests were little advocated by the founders of modern statistics but are widely used and recommended nowadays in the biological, behavioral and social sciences. The high frequency of their use in ecology and animal behavior and their logical indefensibility have been documented in a companion review paper. In the present one, we trace the roots of this problem and counter some attacks on significance testing in general. 
Roots include: the early but irrational dichotomization of the P scale and adoption of the 'significant/non-significant' terminology; the mistaken notion that a high P value is evidence favoring the null hypothesis over the alternative hypothesis; and confusion over the distinction between statistical and research hypotheses. Resultant widespread misuse and misinterpretation of significance tests have also led to other problems, such as unjustifiable demands that reporting of P values be disallowed or greatly reduced and that reporting of confidence intervals and standardized effect sizes be required in their place. Our analysis of these matters thus leads us to a recommendation that for standard types of significance assessment the paleoFisherian and Neyman-Pearsonian paradigms be replaced by a neoFisherian one. The essence of the latter is that a critical α (probability of type I error) is not specified, the terms 'significant' and 'non-significant' are abandoned, that high P values lead only to suspended judgments, and that the so-called \" three-valued logic \" of Cox, Kaiser, Tukey, Tryon and Harris is adopted explicitly. Confidence intervals and bands, power analyses, and severity curves remain useful adjuncts in particular situations. Analyses conducted under this paradigm we term neoFisherian significance assessments (NFSA). Their role is assessment of the existence, sign and magnitude of statistical effects. The common label of null hypothesis significance tests (NHST) is retained for paleoFisherian and Neyman-Pearsonian approaches and their hybrids. The original Neyman-Pearson framework has no utility outside quality control type applications. Some advocates of Bayesian, likelihood and information-theoretic approaches to model selection have argued that P values and NFSAs are of little or no value, but those arguments do not withstand critical review. Champions of Bayesian methods in particular continue to overstate their value and relevance. 
" … the object of statistical methods is the reduction of data. A quantity of data … is to be replaced by relatively few quantities which shall …", "title": "" }, { "docid": "7d68eaf1d9916b0504ac13f5ff9ef980", "text": "The success of Bitcoin largely relies on the perception of a fair underlying peer-to-peer protocol: blockchain. Fairness here essentially means that the reward (in bitcoins) given to any participant that helps maintain the consistency of the protocol by mining, is proportional to the computational power devoted by that participant to the mining task. Without such perception of fairness, honest miners might be disincentivized to maintain the protocol, leaving the space for dishonest miners to reach a majority and jeopardize the consistency of the entire system. We prove, in this paper, that blockchain is actually unfair, even in a distributed system of only two honest miners. In a realistic setting where message delivery is not instantaneous, the ratio between the (expected) number of blocks committed by two miners is at least exponential in the product of the message delay and the difference between the two miners’ hashrates. To obtain our result, we model the growth of blockchain, which may be of independent interest. We also apply our result to explain recent empirical observations and vulnerabilities.", "title": "" }, { "docid": "01165a990d16000ac28b0796e462147a", "text": "Esthesioneuroblastoma is a rare malignant tumor of sinonasal origin. These tumors typically present with unilateral nasal obstruction and epistaxis, and diagnosis is confirmed on biopsy. Over the past 15 years, significant advances have been made in endoscopic technology and techniques that have made this tumor amenable to expanded endonasal resection. There is growing evidence supporting the feasibility of safe and effective resection of esthesioneuroblastoma via an expanded endonasal approach. 
This article outlines a technique for endoscopic resection of esthesioneuroblastoma and reviews the current literature on esthesioneuroblastoma with emphasis on outcomes after endoscopic resection of these malignant tumors.", "title": "" }, { "docid": "71bafd4946377eaabff813bffd5617d7", "text": "Autumn-seeded winter cereals acquire tolerance to freezing temperatures and become vernalized by exposure to low temperature (LT). The level of accumulated LT tolerance depends on the cold acclimation rate and factors controlling timing of floral transition at the shoot apical meristem. In this study, genomic loci controlling the floral transition time were mapped in a winter wheat (T. aestivum L.) doubled haploid (DH) mapping population segregating for LT tolerance and rate of phenological development. The final leaf number (FLN), days to FLN, and days to anthesis were determined for 142 DH lines grown with and without vernalization in controlled environments. Analysis of trait data by composite interval mapping (CIM) identified 11 genomic regions that carried quantitative trait loci (QTLs) for the developmental traits studied. CIM analysis showed that the time for floral transition in both vernalized and non-vernalized plants was controlled by common QTL regions on chromosomes 1B, 2A, 2B, 6A and 7A. A QTL identified on chromosome 4A influenced floral transition time only in vernalized plants. Alleles of the LT-tolerant parent, Norstar, delayed floral transition at all QTLs except at the 2A locus. Some of the QTL alleles delaying floral transition also increased the length of vegetative growth and delayed flowering time. The genes underlying the QTLs identified in this study encode factors involved in regional adaptation of cold hardy winter wheat.", "title": "" }, { "docid": "1865a404c970d191ed55e7509b21fb9e", "text": "Most machine learning methods are known to capture and exploit biases of the training data. 
While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1", "title": "" }, { "docid": "7ad00ade30fad561b4caca2fb1326ed8", "text": "Today, digital games are available on a variety of mobile devices, such as tablet devices, portable game consoles and smart phones. Not only that, the latest mixed reality technology on mobile devices allows mobile games to integrate the real world environment into gameplay. However, little has been done to test whether the surroundings of play influence gaming experience. In this paper, we describe two studies done to test the effect of surroundings on immersion. 
Study One uses mixed reality games to investigate whether the integration of the real world environment reduces engagement. Whereas Study Two explored the effects of manipulating the lighting level, and therefore reducing visibility, of the surroundings. We found that immersion is reduced in the conditions where visibility of the surroundings is high. We argue that higher awareness of the surroundings has a strong impact on gaming experience.", "title": "" }, { "docid": "afe1be9e13ca6e2af2c5177809e7c893", "text": "Scar evaluation and revision techniques are chief among the most important skills in the facial plastic and reconstructive surgeon’s armamentarium. Often minimized in importance, these techniques depend as much on a thorough understanding of facial anatomy and aesthetics, advanced principles of wound healing, and an appreciation of the overshadowing psychological trauma as they do on thorough technical analysis and execution [1,2]. Scar revision is unique in the spectrum of facial plastic and reconstructive surgery because the initial traumatic event and its immediate treatment usually cannot be controlled. Patients who are candidates for scar revision procedures often present after significant loss of regional tissue, injury that crosses anatomically distinct facial aesthetic units, wound closure by personnel less experienced in plastic surgical technique, and poor post injury wound management [3,4]. While no scar can be removed completely, plastic surgeons can often improve the appearance of a scar, making it less obvious through the injection or application of certain steroid medications or through surgical procedures known as scar revisions.There are many variables affect the severity of scarring, including the size and depth of the wound, blood supply to the area, the thickness and color of your skin, and the direction of the scar [5,6].", "title": "" }, { "docid": "f284c6e32679d8413e366d2daf1d4613", "text": "Summary form only given. 
Existing studies on ensemble classifiers typically take a static approach in assembling individual classifiers, in which all the important features are specified in advance. In this paper, we propose a new concept, dynamic ensemble, as an advanced classifier that could have dynamic component classifiers and have dynamic configurations. Toward this goal, we have substantially expanded the existing \"overproduce and choose\" paradigm for ensemble construction. A new algorithm called BAGA is proposed to explore this approach. Taking a set of decision tree component classifiers as input, BAGA generates a set of candidate ensembles using combined bagging and genetic algorithm techniques so that component classifiers are determined at execution time. Empirical studies have been carried out on variations of the BAGA algorithm, where the sizes of chosen classifiers, effects of bag size, voting function and evaluation functions on the dynamic ensemble construction, are investigated.", "title": "" }, { "docid": "8e74a27a3edea7cf0e88317851bc15eb", "text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://dv1litvip.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.", "title": "" } ]
scidocsrr
6964ce910279f7c1e3eaec5191d4cf7f
A Learning-based Neural Network Model for the Detection and Classification of SQL Injection Attacks
[ { "docid": "5025766e66589289ccc31e60ca363842", "text": "The use of web applications has become increasingly popular in our routine activities, such as reading the news, paying bills, and shopping on-line. As the availability of these services grows, we are witnessing an increase in the number and sophistication of attacks that target them. In particular, SQL injection, a class of code-injection attacks in which specially crafted input strings result in illegal queries to a database, has become one of the most serious threats to web applications. In this paper we present and evaluate a new technique for detecting and preventing SQL injection attacks. Our technique uses a model-based approach to detect illegal queries before they are executed on the database. In its static part, the technique uses program analysis to automatically build a model of the legitimate queries that could be generated by the application. In its dynamic part, the technique uses runtime monitoring to inspect the dynamically-generated queries and check them against the statically-built model. We developed a tool, AMNESIA, that implements our technique and used the tool to evaluate the technique on seven web applications. In the evaluation we targeted the subject applications with a large number of both legitimate and malicious inputs and measured how many attacks our technique detected and prevented. The results of the study show that our technique was able to stop all of the attempted attacks without generating any false positives.", "title": "" } ]
[ { "docid": "b743159683f5cb99e7b5252dbc9ae74f", "text": "When human agents come together to make decisions it is often the case that one human agent has more information than the other and this phenomenon is called information asymmetry and this distorts the market. Often if one human agent intends to manipulate a decision in its favor the human agent can signal wrong or right information. Alternatively, one human agent can screen for information to reduce the impact of asymmetric information on decisions. With the advent of artificial intelligence, signaling and screening have been made easier. This chapter studies the impact of artificial intelligence on the theory of asymmetric information. It is surmised that artificial intelligent agents reduce the degree of information asymmetry and thus the market where these agents are deployed become more efficient. It is also postulated that the more artificial intelligent agents there are deployed in the market the less is the volume of trades in the market. This is because for trade to happen the asymmetry of information on goods and services to be traded should exist.", "title": "" }, { "docid": "4995bb31547a98adbe98c7a9f2bfa947", "text": "This paper describes our proposed solutions designed for a STS core track within the SemEval 2016 English Semantic Textual Similarity (STS) task. Our method of similarity detection combines recursive autoencoders with a WordNet award-penalty system that accounts for semantic relatedness, and an SVM classifier, which produces the final score from similarity matrices. This solution is further supported by an ensemble classifier, combining an aligner with a bi-directional Gated Recurrent Neural Network and additional features, which then performs Linear Support Vector Regression to determine another set of scores.", "title": "" }, { "docid": "49215cb8cb669aef5ea42dfb1e7d2e19", "text": "Many people rely on Web-based tutorials to learn how to use complex software. 
Yet, it remains difficult for users to systematically explore the set of tutorials available online. We present Sifter, an interface for browsing, comparing and analyzing large collections of image manipulation tutorials based on their command-level structure. Sifter first applies supervised machine learning to identify the commands contained in a collection of 2500 Photoshop tutorials obtained from the Web. It then provides three different views of the tutorial collection based on the extracted command-level structure: (1) A Faceted Browser View allows users to organize, sort and filter the collection based on tutorial category, command names or on frequently used command subsequences, (2) a Tutorial View summarizes and indexes tutorials by the commands they contain, and (3) an Alignment View visualizes the commandlevel similarities and differences between a subset of tutorials. An informal evaluation (n=9) suggests that Sifter enables users to successfully perform a variety of browsing and analysis tasks that are difficult to complete with standard keyword search. We conclude with a meta-analysis of our Photoshop tutorial collection and present several implications for the design of image manipulation software. ACM Classification H5.2 [Information interfaces and presentation]: User Interfaces. Graphical user interfaces. Author", "title": "" }, { "docid": "889dd22fcead3ce546e760bda8ef4980", "text": "We explore unsupervised approaches to relation extraction between two named entities; for instance, the semantic bornIn relation between a person and location entity. Concretely, we propose a series of generative probabilistic models, broadly similar to topic models, each which generates a corpus of observed triples of entity mention pairs and the surface syntactic dependency path between them. The output of each model is a clustering of observed relation tuples and their associated textual expressions to underlying semantic relation types. 
Our proposed models exploit entity type constraints within a relation as well as features on the dependency path between entity mentions. We examine effectiveness of our approach via multiple evaluations and demonstrate 12% error reduction in precision over a state-of-the-art weakly supervised baseline.", "title": "" }, { "docid": "ab47d6b0ae971a5cf0a24f1934fbee63", "text": "Deep representations, in particular ones implemented by convolutional neural networks, have led to good progress on many learning problems. However, the learned representations are hard to analyze and interpret, even when they are extracted from visual data. We propose a new approach to study deep image representations by inverting them with an up-convolutional neural network. Application of this method to a deep network trained on ImageNet provides numerous insights into the properties of the feature representation. Most strikingly, the colors and the rough contours of an input image can be reconstructed from activations in higher network layers and even from the predicted class probabilities.", "title": "" }, { "docid": "8a9603a10e5e02f6edfbd965ee11bbb9", "text": "The alerts produced by network-based intrusion detection systems, e.g. Snort, can be difficult for network administrators to efficiently review and respond to due to the enormous number of alerts generated in a short time frame. This work describes how the visualization of raw IDS alert data assists network administrators in understanding the current state of a network and quickens the process of reviewing and responding to intrusion attempts. The project presented in this work consists of three primary components. The first component provides a visual mapping of the network topology that allows the end-user to easily browse clustered alerts. The second component is based on the flocking behavior of birds such that birds tend to follow other birds with similar behaviors. 
This component allows the end-user to see the clustering process and provides an efficient means for reviewing alert data. The third component discovers and visualizes patterns of multistage attacks by profiling the attacker’s behaviors.", "title": "" }, { "docid": "f176f95d0c597b4272abe907e385befc", "text": "This paper addresses the problem of topic distillation on the World Wide Web, namely, given a typical user query to find quality documents related to the query topic. Connectivity analysis has been shown to be useful in identifying high quality pages within a topic specific graph of hyperlinked documents. The essence of our approach is to augment a previous connectivity analysis based algorithm with content analysis. We identify three problems with the existing approach and devise algorithms to tackle them. The results of a user evaluation are reported that show an improvement of precision at 10 documents by at least 45% over pure connectivity analysis.", "title": "" }, { "docid": "300e215e91bb49aef0fcb44c3084789e", "text": "We present an extension to the Tacotron speech synthesis architecture that learns a latent embedding space of prosody, derived from a reference acoustic representation containing the desired prosody. We show that conditioning Tacotron on this learned embedding space results in synthesized audio that matches the prosody of the reference signal with fine time detail even when the reference and synthesis speakers are different. Additionally, we show that a reference prosody embedding can be used to synthesize text that is different from that of the reference utterance. 
We define several quantitative and subjective metrics for evaluating prosody transfer, and report results with accompanying audio samples from single-speaker and 44-speaker Tacotron models on a prosody transfer task.", "title": "" }, { "docid": "370b1775eddfb6241078285872e1a009", "text": "Methods for Named Entity Recognition and Disambiguation (NERD) perform NER and NED in two separate stages. Therefore, NED may be penalized with respect to precision by NER false positives, and suffers in recall from NER false negatives. Conversely, NED does not fully exploit information computed by NER such as types of mentions. This paper presents J-NERD, a new approach to perform NER and NED jointly, by means of a probabilistic graphical model that captures mention spans, mention types, and the mapping of mentions to entities in a knowledge base. We present experiments with different kinds of texts from the CoNLL’03, ACE’05, and ClueWeb’09-FACC1 corpora. J-NERD consistently outperforms state-of-the-art competitors in end-to-end NERD precision, recall, and F1.", "title": "" }, { "docid": "02c00d998952d935ee694922953c78d1", "text": "OBJECTIVE\nEffect of peppermint on exercise performance was previously investigated but equivocal findings exist. This study aimed to investigate the effects of peppermint ingestion on the physiological parameters and exercise performance after 5 min and 1 h.\n\n\nMATERIALS AND METHODS\nThirty healthy male university students were randomly divided into experimental (n=15) and control (n=15) groups. Maximum isometric grip force, vertical and long jumps, spirometric parameters, visual and audio reaction times, blood pressure, heart rate, and breath rate were recorded three times: before, five minutes, and one hour after single dose oral administration of peppermint essential oil (50 µl). 
Data were analyzed using repeated measures ANOVA.\n\n\nRESULTS\nOur results revealed significant improvement in all of the variables after oral administration of peppermint essential oil. The experimental group, compared with the control group, showed an incremental and significant increase in the grip force (36.1%), standing vertical jump (7.0%), and standing long jump (6.4%). Data obtained from the experimental group after five minutes exhibited a significant increase in the forced vital capacity in the first second (FVC1) (35.1%), peak inspiratory flow rate (PIF) (66.4%), and peak expiratory flow rate (PEF) (65.1%), whereas after one hour, only PIF showed a significant increase as compared with the baseline and control group. At both times, visual and audio reaction times were significantly decreased. Physiological parameters were also significantly improved after five minutes. Considerable enhancements in the grip force, spirometry, and other parameters were the important findings of this study. Conclusion: An improvement in the spirometric measurements (FVC1, PEF, and PIF) might be due to the peppermint effects on the bronchial smooth muscle tonicity with or without affecting the lung surfactant. Yet, no scientific evidence exists regarding isometric force enhancement in this novel study.", "title": "" }, { "docid": "620642c5437dc26cac546080c4465707", "text": "One of the most distinctive linguistic characteristics of modern academic writing is its reliance on nominalized structures. These include nouns that have been morphologically derived from verbs (e.g., development, progression) as well as verbs that have been ‘converted’ to nouns (e.g., increase, use). Almost any sentence taken from an academic research article will illustrate the use of such structures.
For example, consider the opening sentences from three education research articles; derived nominalizations are underlined and converted nouns given in italics: 1", "title": "" }, { "docid": "162a4cab1ea0bd1e9b8980a57df7c2bf", "text": "This paper investigates the design of power and spectrally efficient coded modulations based on amplitude phase shift keying (APSK) with application to broadband satellite communications. Emphasis is put on 64APSK constellations. The APSK modulation has merits for digital transmission over nonlinear satellite channels due to its power and spectral efficiency combined with its inherent robustness against nonlinear distortion. This scheme has been adopted in the DVB-S2 Standard for satellite digital video broadcasting. Assuming an ideal rectangular transmission pulse, for which no nonlinear inter-symbol interference is present and perfect pre-compensation of the nonlinearity takes place, we optimize the 64APSK constellation design by employing an optimization criterion based on the mutual information. This method generates an optimum constellation for each operating SNR point, that is, for each spectral efficiency. Two separate cases of interest are particularly examined: (i) the equiprobable case, where all constellation points are equiprobable and (ii) the non-equiprobable case, where the constellation points on each ring are assumed to be equiprobable but the a priory symbol probability associated per ring is assumed different for each ring. Following the mutual information-based optimization approach in each case, detailed simulation results are obtained for the optimal 64APSK constellation settings as well as the achievable shaping gain.", "title": "" }, { "docid": "25822c79792325b86a90a477b6e988a1", "text": "As the social networking sites get more popular, spammers target these sites to spread spam posts. Twitter is one of the most popular online social networking sites where users communicate and interact on various topics. 
Most of the current spam filtering methods in Twitter focus on detecting the spammers and blocking them. However, spammers can create a new account and start posting new spam tweets again. So there is a need for robust spam detection techniques to detect the spam at tweet level. These types of techniques can prevent the spam in real time. To detect the spam at tweet level, often features are defined, and appropriate machine learning algorithms are applied in the literature. Recently, deep learning methods are showing fruitful results on several natural language processing tasks. We want to use the potential benefits of these two types of methods for our problem. Toward this, we propose an ensemble approach for spam detection at tweet level. We develop various deep learning models based on convolutional neural networks (CNNs). Five CNNs and one feature-based model are used in the ensemble. Each CNN uses different word embeddings (Glove, Word2vec) to train the model. The feature-based model uses content-based, user-based, and n-gram features. Our approach combines both deep learning and traditional feature-based models using a multilayer neural network which acts as a meta-classifier. We evaluate our method on two data sets, one data set is balanced, and another one is imbalanced. The experimental results show that our proposed method outperforms the existing methods.", "title": "" }, { "docid": "e30db40102a2d84a150c220250fa4d36", "text": "A voltage reference circuit operating with all transistors biased in weak inversion, providing a mean reference voltage of 257.5 mV, has been fabricated in 0.18 µm CMOS technology. The reference voltage can be approximated by the difference of transistor threshold voltages at room temperature. Accurate subthreshold design allows the circuit to work at room temperature with supply voltages down to 0.45 V and an average current consumption of 5.8 nA.
Measurements performed over a set of 40 samples showed an average temperature coefficient of 165 ppm/°C with a standard deviation of 100 ppm/°C, in a temperature range from 0 to 125°C. The mean line sensitivity is ≈0.44%/V, for supply voltages ranging from 0.45 to 1.8 V. The power supply rejection ratio measured at 30 Hz and simulated at 10 MHz is lower than -40 dB and -12 dB, respectively. The active area of the circuit is ≈0.043 mm².", "title": "" }, { "docid": "ce2f8135fe123e09b777bd147bec4bb3", "text": "Supervised learning, e.g., classification, plays an important role in processing and organizing microblogging data. In microblogging, it is easy to amass vast quantities of unlabeled data, but it would be costly to obtain labels, which are essential for supervised learning algorithms. In order to reduce the labeling cost, active learning is an effective way to select representative and informative instances to query for labels for improving the learned model. Different from traditional data in which the instances are assumed to be independent and identically distributed (i.i.d.), instances in microblogging are networked with each other. This presents both opportunities and challenges for applying active learning to microblogging data. Inspired by social correlation theories, we investigate whether social relations can help perform effective active learning on networked data. In this paper, we propose a novel Active learning framework for the classification of Networked Texts in microblogging (ActNeT). In particular, we study how to incorporate network information into text content modeling, and design strategies to select the most representative and informative instances from microblogging for labeling by taking advantage of social network structure.
Experimental results on Twitter datasets show the benefit of incorporating network information in active learning and that the proposed framework outperforms existing state-of-the-art methods.", "title": "" }, { "docid": "7b916833f0d611465e36b0b2792b2fa7", "text": "A fully-integrated silicon-based 94-GHz direct-detection imaging receiver with on-chip Dicke switch and baseband circuitry is demonstrated. Fabricated in a 0.18-µm SiGe BiCMOS technology (fT/fMAX = 200 GHz), the receiver chip achieves a peak imager responsivity of 43 MV/W with a 3-dB bandwidth of 26 GHz. A balanced LNA topology with an embedded Dicke switch provides 30-dB gain and enables a temperature resolution of 0.3–0.4 K. The imager chip consumes 200 mW from a 1.8-V supply.", "title": "" }, { "docid": "d6bbec8d1426cacba7f8388231f04add", "text": "This paper presents a novel multiple-frequency resonant inverter for induction heating (IH) applications. By adopting a center tap transformer, the proposed resonant inverter can give load switching frequency as twice as the isolated-gate bipolar transistor (IGBT) switching frequency. The structure and the operation of the proposed topology are described in order to demonstrate how the output frequency of the proposed resonant inverter is as twice as the switching frequency of IGBTs. In addition to this, the IGBTs in the proposed topology work in zero-voltage switching during turn-on phase of the switches. The new topology is verified by the experimental results using a prototype for IH applications. Moreover, increased efficiency of the proposed inverter is verified by comparison with conventional designs.", "title": "" }, { "docid": "62d1fc9ea1c6a5d1f64939eff3202dad", "text": "This research applied both the traditional and the fuzzy control methods for mobile satellite antenna tracking system design. The antenna tracking and the stabilization loops were designed firstly according to the bandwidth and phase margin requirements. 
However, the performance would be degraded if the tracking loop gain is reduced due to parameter variation. On the other hand, a PD-type fuzzy controller was also applied for tracking loop design. It can be seen that the system performance obtained by the fuzzy controller was better for low antenna tracking gain. Thus this research proposed an adaptive law that selects either the traditional or the fuzzy controller for the antenna tracking system depending on the tracking loop gain, so that the effect of tracking gain parameter variation can be reduced.", "title": "" } ]
scidocsrr
56b71e2392afb3cf4b51cffa7fa02509
Battery management system in the Bayesian paradigm: Part I: SOC estimation
[ { "docid": "69f36a0f043d8966dbcd7fc2607d61f8", "text": "This paper presents a method for modeling and estimation of the state of charge (SOC) of lithium-ion (Li-Ion) batteries using neural networks (NNs) and the extended Kalman filter (EKF). The NN is trained offline using the data collected from the battery-charging process. This network finds the model needed in the state-space equations of the EKF, where the state variables are the battery terminal voltage at the previous sample and the SOC at the present sample. Furthermore, the covariance matrix for the process noise in the EKF is estimated adaptively. The proposed method is implemented on a Li-Ion battery to estimate online the actual SOC of the battery. Experimental results show a good estimation of the SOC and fast convergence of the EKF state variables.", "title": "" }, { "docid": "560a19017dcc240d48bb879c3165b3e1", "text": "Battery management systems in hybrid electric vehicle battery packs must estimate values descriptive of the pack’s present operating condition. These include: battery state of charge, power fade, capacity fade, and instantaneous available power. The estimation mechanism must adapt to changing cell characteristics as cells age and therefore provide accurate estimates over the lifetime of the pack. In a series of three papers, we propose a method, based on extended Kalman filtering (EKF), that is able to accomplish these goals on a lithium ion polymer battery pack. We expect that it will also work well on other battery chemistries. These papers cover the required mathematical background, cell modeling and system identification requirements, and the final solution, together with results. In order to use EKF to estimate the desired quantities, we first require a mathematical model that can accurately capture the dynamics of a cell. In this paper we “evolve” a suitable model from one that is very primitive to one that is more advanced and works well in practice. 
The final model includes terms that describe the dynamic contributions due to open-circuit voltage, ohmic loss, polarization time constants, electro-chemical hysteresis, and the effects of temperature. We also give a means, based on EKF, whereby the constant model parameters may be determined from cell test data. Results are presented that demonstrate it is possible to achieve root-mean-squared modeling error smaller than the level of quantization error expected in an implementation. © 2004 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "f8ec5289b43504fcc96b9280ce7ce67d", "text": "This study examined how scaffolds and student achievement levels influence inquiry and performance in a problem-based learning environment. The scaffolds were embedded within a hypermedia program that placed students at the center of a problem in which they were trying to become the youngest person to fly around the world in a balloon. One-hundred and eleven seventh grade students enrolled in a science and technology course worked in collaborative groups for a duration of 3 weeks to complete a project that included designing a balloon and a travel plan. Student groups used one of three problem-based, hypermedia programs: (1) a no scaffolding condition that did not provide access to scaffolds, (2) a scaffolding optional condition that provided access to scaffolds, but gave students the choice of whether or not to use them, and (3) a scaffolding required condition that required students to complete all available scaffolds. Results revealed that students in the scaffolding optional and scaffolding required conditions performed significantly better than students in the no scaffolding condition on one of the two components of the group project. Results also showed that student achievement levels were significantly related to individual posttest scores; higher-achieving students scored better on the posttest than lower-achieving students. In addition, analyses of group notebooks confirmed qualitative differences between students in the various conditions. Specifically, those in the scaffolding required condition produced more highly organized project notebooks containing a higher percentage of entries directly relevant to the problem.
These findings suggest that scaffolds may enhance inquiry and performance, especially when students are required to access and", "title": "" }, { "docid": "c88f5359fc6dc0cac2c0bd53cea989ee", "text": "Automatic detection and monitoring of oil spills and illegal oil discharges is of fundamental importance in ensuring compliance with marine legislation and protection of the coastal environments, which are under considerable threat from intentional or accidental oil spills, uncontrolled sewage and wastewater discharges. In this paper the level set based image segmentation was evaluated for the real-time detection and tracking of oil spills from SAR imagery. The developed processing scheme consists of a preprocessing step, in which an advanced image simplification takes place, followed by a geometric level set segmentation for the detection of the possible oil spills. Finally a classification was performed, for the separation of lookalikes, leading to oil spill extraction. Experimental results demonstrate that the level set segmentation is a robust tool for the detection of possible oil spills, copes well with abrupt shape deformations and splits and outperforms earlier efforts which were based on different types of threshold or edge detection techniques. The developed algorithm’s efficiency for real-time oil spill detection and monitoring was also tested.", "title": "" }, { "docid": "edeefde21bbe1ace9a34a0ebe7bc6864", "text": "Social media platforms provide active communication channels during mass convergence and emergency events such as disasters caused by natural hazards. As a result, first responders, decision makers, and the public can use this information to gain insight into the situation as it unfolds. In particular, many social media messages communicated during emergencies convey timely, actionable information.
Processing social media messages to obtain such information, however, involves solving multiple challenges including: parsing brief and informal messages, handling information overload, and prioritizing different types of information found in messages. These challenges can be mapped to classical information processing operations such as filtering, classifying, ranking, aggregating, extracting, and summarizing. We survey the state of the art regarding computational methods to process social media messages and highlight both their contributions and shortcomings. In addition, we examine their particularities, and methodically examine a series of key subproblems ranging from the detection of events to the creation of actionable and useful summaries. Research thus far has, to a large extent, produced methods to extract situational awareness information from social media. In this survey, we cover these various approaches, and highlight their benefits and shortcomings. We conclude with research challenges that go beyond situational awareness, and begin to look at supporting decision making and coordinating emergency-response actions.", "title": "" }, { "docid": "6b57c73406000ca0683b275c7e164c24", "text": "In this letter, a novel compact and broadband integrated transition between a laminated waveguide and an air-filled rectangular waveguide operating in Ka band is proposed. A three-pole filter equivalent circuit model is employed to interpret the working mechanism and to predict the performance of the transition. A back-to-back prototype of the proposed transition is designed and fabricated for proving the concept. Good agreement of the measured and simulated results is obtained. 
The measured result shows that the insertion loss of better than 0.26 dB from 34.8 to 37.8 GHz can be achieved.", "title": "" }, { "docid": "95a58a9fa31373296af2c41e47fa0884", "text": "Force.com is the preeminent on-demand application development platform in use today, supporting some 55,000+ organizations. Individual enterprises and commercial software-as-a-service (SaaS) vendors trust the platform to deliver robust, reliable, Internet-scale applications. To meet the extreme demands of its large user population, Force.com's foundation is a metadata-driven software architecture that enables multitenant applications.\n The focus of this paper is multitenancy, a fundamental design approach that can dramatically improve SaaS application management. This paper defines multitenancy, explains its benefits, and demonstrates why metadata-driven architectures are the premier choice for implementing multitenancy.", "title": "" }, { "docid": "c69e805751421b516e084498e7fc6f44", "text": "We investigate two extremal problems for polynomials giving upper bounds for spherical codes and for polynomials giving lower bounds for spherical designs, respectively. We consider two basic properties of the solutions of these problems. Namely, we estimate from below the number of double zeros and find zero Gegenbauer coefficients of extremal polynomials. Our results allow us to search effectively for such solutions using a computer. The best polynomials we have obtained give substantial improvements in some cases on the previously known bounds for spherical codes and designs. Some examples are given in Section 6.", "title": "" }, { "docid": "0f9ef379901c686df08dd0d1bb187e22", "text": "This paper studies the minimum achievable source coding rate as a function of blocklength n and probability ϵ that the distortion exceeds a given level d. Tight general achievability and converse bounds are derived that hold at arbitrary fixed blocklength.
For stationary memoryless sources with separable distortion, the minimum rate achievable is shown to be closely approximated by R(d) + √(V(d)/n) Q⁻¹(ϵ), where R(d) is the rate-distortion function, V(d) is the rate dispersion, a characteristic of the source which measures its stochastic variability, and Q⁻¹(·) is the inverse of the standard Gaussian complementary cumulative distribution function.", "title": "" }, { "docid": "ed98eb7aa069c00e2be8a27ef889b623", "text": "The class imbalance problem has been known to hinder the learning performance of classification algorithms. Various real-world classification tasks such as text categorization suffer from this phenomenon. We demonstrate that active learning is capable of solving the problem.", "title": "" }, { "docid": "8af7826c809eb3941c2e394899ca83ef", "text": "The development of interactive rehabilitation technologies which rely on wearable-sensing for upper body rehabilitation is attracting increasing research interest. This paper reviews related research with the aim: 1) To inventory and classify interactive wearable systems for movement and posture monitoring during upper body rehabilitation, regarding the sensing technology, system measurements and feedback conditions; 2) To gauge the wearability of the wearable systems; 3) To inventory the availability of clinical evidence supporting the effectiveness of related technologies. A systematic literature search was conducted in the following search engines: PubMed, ACM, Scopus and IEEE (January 2010–April 2016). Forty-five papers were included and discussed in a new cuboid taxonomy which consists of 3 dimensions: sensing technology, feedback modalities and system measurements.
Wearable sensor systems were developed for persons in: 1) Neuro-rehabilitation: stroke (n = 21), spinal cord injury (n = 1), cerebral palsy (n = 2), Alzheimer (n = 1); 2) Musculoskeletal impairment: ligament rehabilitation (n = 1), arthritis (n = 1), frozen shoulder (n = 1), bones trauma (n = 1); 3) Others: chronic pulmonary obstructive disease (n = 1), chronic pain rehabilitation (n = 1) and other general rehabilitation (n = 14). Accelerometers and inertial measurement units (IMU) are the most frequently used technologies (84% of the papers). They are mostly used in multiple sensor configurations to measure upper limb kinematics and/or trunk posture. Sensors are placed mostly on the trunk, upper arm, the forearm, the wrist, and the finger. Typically sensors are attachable rather than embedded in wearable devices and garments; although studies that embed and integrate sensors are increasing in the last 4 years. 16 studies applied knowledge of result (KR) feedback, 14 studies applied knowledge of performance (KP) feedback and 15 studies applied both in various modalities. 16 studies have conducted their evaluation with patients and reported usability tests, while only three of them conducted clinical trials including one randomized clinical trial. This review has shown that wearable systems are used mostly for the monitoring and provision of feedback on posture and upper extremity movements in stroke rehabilitation. The results indicated that accelerometers and IMUs are the most frequently used sensors, in most cases attached to the body through ad hoc contraptions for the purpose of improving range of motion and movement performance during upper body rehabilitation. Systems featuring sensors embedded in wearable appliances or garments are only beginning to emerge. 
Similarly, clinical evaluations are scarce and are further needed to provide evidence on effectiveness and pave the path towards implementation in clinical settings.", "title": "" }, { "docid": "c5dee985cbfd6c22beca6e2dad895efa", "text": "Recently, convolutional neural networks (CNNs) have been used as a powerful tool to solve many problems of machine learning and computer vision. In this paper, we aim to provide insight on the property of convolutional neural networks, as well as a generic method to improve the performance of many CNN architectures. Specifically, we first examine existing CNN models and observe an intriguing property that the filters in the lower layers form pairs (i.e., filters with opposite phase). Inspired by our observation, we propose a novel, simple yet effective activation scheme called concatenated ReLU (CRelu) and theoretically analyze its reconstruction property in CNNs. We integrate CRelu into several state-of-the-art CNN architectures and demonstrate improvement in their recognition performance on CIFAR-10/100 and ImageNet datasets with fewer trainable parameters. Our results suggest that better understanding of the properties of CNNs can lead to significant performance improvement with a simple modification.", "title": "" }, { "docid": "1569bcea0c166d9bf2526789514609c5", "text": "In this paper, we present the development and initial validation of a new self-report instrument, the Differentiation of Self Inventory (DSI). The DSI represents the first attempt to create a multi-dimensional measure of differentiation based on Bowen Theory, focusing specifically on adults (ages 25+), their current significant relationships, and their relations with families of origin. Principal components factor analysis on a sample of 313 normal adults (mean age = 36.8) suggested four dimensions: Emotional Reactivity, Reactive Distancing, Fusion with Parents, and \"I\" Position.
Scales constructed from these factors were found to be moderately correlated in the expected direction, internally consistent, and significantly predictive of trait anxiety. The potential contribution of the DSI is discussed: for testing Bowen Theory, as a clinical assessment tool, and as an indicator of psychotherapeutic outcome.", "title": "" }, { "docid": "d76980f3a0b4e0dab21583b75ee16318", "text": "We present a gold standard annotation of syntactic dependencies in the English Web Treebank corpus using the Stanford Dependencies standard. This resource addresses the lack of a gold standard dependency treebank for English, as well as the limited availability of gold standard syntactic annotations for informal genres of English text. We also present experiments on the use of this resource, both for training dependency parsers and for evaluating dependency parsers like the one included as part of the Stanford Parser. We show that training a dependency parser on a mix of newswire and web data improves performance on that type of data without greatly hurting performance on newswire text, and therefore gold standard annotations for non-canonical text can be valuable for parsing in general. Furthermore, the systematic annotation effort has informed both the SD formalism and its implementation in the Stanford Parser’s dependency converter. In response to the challenges encountered by annotators in the EWT corpus, we revised and extended the Stanford Dependencies standard, and improved the Stanford Parser’s dependency converter.", "title": "" }, { "docid": "1b646a8a45b65799bbf2e71108f420e0", "text": "Dynamic Time Warping (DTW) is a distance measure that compares two time series after optimally aligning them. DTW has been used for decades in thousands of academic and industrial projects despite its very expensive computational complexity, O(n²). These applications include data mining, image processing, signal processing, robotics and computer graphics among many others.
In spite of all this research effort, there are many myths and misunderstandings about DTW in the literature, for example \"it is too slow to be useful\" or \"the warping window size does not matter much.\" In this tutorial, we correct these misunderstandings and summarize the research efforts in optimizing both the efficiency and effectiveness of the basic DTW algorithm and of the higher-level algorithms that exploit DTW, such as similarity search, clustering and classification. We will discuss variants of DTW such as constrained DTW, multidimensional DTW and asynchronous DTW, and optimization techniques such as lower bounding, early abandoning, run-length encoding, bounded approximation and hardware optimization. We will discuss a multitude of application areas including physiological monitoring, social media mining, activity recognition and animal sound processing. The optimization techniques are generalizable to other domains on various data types and problems.", "title": "" }, { "docid": "38d1e06642f12138f8b0a90deeb96979", "text": "Research at the intersection of machine learning, programming languages, and software engineering has recently taken important steps in proposing learnable probabilistic models of source code that exploit the abundance of patterns of code. In this article, we survey this work. We contrast programming languages against natural languages and discuss how these similarities and differences drive the design of probabilistic models. We present a taxonomy based on the underlying design principles of each model and use it to navigate the literature.
Then, we review how researchers have adapted these models to application areas and discuss cross-cutting and application-specific challenges and opportunities.", "title": "" }, { "docid": "41c5dbb3e903c007ba4b8f37d40b06ef", "text": "BACKGROUND\nMyocardial infarction (MI) can directly cause ischemic mitral regurgitation (IMR), which has been touted as an indicator of poor prognosis in acute and early phases after MI. However, in the chronic post-MI phase, prognostic implications of IMR presence and degree are poorly defined.\n\n\nMETHODS AND RESULTS\nWe analyzed 303 patients with previous (>16 days) Q-wave MI by ECG who underwent transthoracic echocardiography: 194 with IMR quantitatively assessed in routine practice and 109 without IMR matched for baseline age (71+/-11 versus 70+/-9 years, P=0.20), sex, and ejection fraction (EF, 33+/-14% versus 34+/-11%, P=0.14). In IMR patients, regurgitant volume (RVol) and effective regurgitant orifice (ERO) area were 36+/-24 mL/beat and 21+/-12 mm(2), respectively. After 5 years, total mortality and cardiac mortality for patients with IMR (62+/-5% and 50+/-6%, respectively) were higher than for those without IMR (39+/-6% and 30+/-5%, respectively) (both P<0.001). In multivariate analysis, independently of all baseline characteristics, particularly age and EF, the adjusted relative risks of total and cardiac mortality associated with the presence of IMR (1.88, P=0.003 and 1.83, P=0.014, respectively) and quantified degree of IMR defined by RVol >/=30 mL (2.05, P=0.002 and 2.01, P=0.009) and by ERO >/=20 mm(2) (2.23, P=0.003 and 2.38, P=0.004) were high.\n\n\nCONCLUSIONS\nIn the chronic phase after MI, IMR presence is associated with excess mortality independently of baseline characteristics and degree of ventricular dysfunction. The mortality risk is related directly to the degree of IMR as defined by ERO and RVol. 
Therefore, IMR detection and quantification provide major information for risk stratification and clinical decision making in the chronic post-MI phase.", "title": "" }, { "docid": "65eb604a2d45f29923ba24976130adc1", "text": "The recognition of boundaries, e.g., between chorus and verse, is an important task in music structure analysis. The goal is to automatically detect such boundaries in audio signals so that the results are close to human annotation. In this work, we apply Convolutional Neural Networks to the task, trained directly on mel-scaled magnitude spectrograms. On a representative subset of the SALAMI structural annotation dataset, our method outperforms current techniques in terms of boundary retrieval F-measure at different temporal tolerances: We advance the state-of-the-art from 0.33 to 0.46 for tolerances of ±0.5 seconds, and from 0.52 to 0.62 for tolerances of ±3 seconds. As the algorithm is trained on annotated audio data without the need of expert knowledge, we expect it to be easily adaptable to changed annotation guidelines and also to related tasks such as the detection of song transitions.", "title": "" }, { "docid": "29fa75e49d4179072ec25b8aab6b48e2", "text": "We describe the design, development, and API for two discourse parsers for Rhetorical Structure Theory. The two parsers use the same underlying framework, but one uses features that rely on dependency syntax, produced by a fast shift-reduce parser, whereas the other uses a richer feature space, including both constituent- and dependency-syntax and coreference information, produced by the Stanford CoreNLP toolkit. Both parsers obtain state-of-the-art performance, and use a very simple API consisting of, minimally, two lines of Scala code.
We accompany this code with a visualization library that runs the two parsers in parallel, and displays the two generated discourse trees side by side, which provides an intuitive way of comparing the two parsers.", "title": "" }, { "docid": "343ba137056cac30d0d37e17a425d53b", "text": "This thesis explores fundamental improvements in unsupervised deep learning algorithms. Taking a theoretical perspective on the purpose of unsupervised learning, and choosing learnt approximate inference in a jointly learnt directed generative model as the approach, the main question is how existing implementations of this approach, in particular auto-encoders, could be improved by simultaneously rethinking the way they learn and the way they perform inference. In such network architectures, the availability of two opposing pathways, one for inference and one for generation, makes it possible to exploit the symmetry between them and to let either provide feedback signals to the other. The signals can be used to determine helpful updates for the connection weights from only locally available information, removing the need for the conventional back-propagation path and mitigating the issues associated with it. Moreover, feedback loops can be added to the usual feed-forward network to improve inference itself. The reciprocal connectivity between regions in the brain’s neocortex provides inspiration for how the iterative revision and verification of proposed interpretations could result in a fair approximation to optimal Bayesian inference.
While extracting and combining underlying ideas from research in deep learning and cortical functioning, this thesis walks through the concepts of generative models, approximate inference, local learning rules, target propagation, recirculation, lateral and biased competition, predictive coding, iterative and amortised inference, and other related topics, in an attempt to build up a complex of insights that could provide direction to future research in unsupervised deep learning methods.", "title": "" }, { "docid": "d6ea13f26642dfcb28b63ff43a0b39e1", "text": "This paper deals with the inter-turn short circuit fault analysis of Pulse Width Modulated (PWM) inverter fed three-phase Induction Motor (IM) using Finite Element Method (FEM). The short circuit in the stator winding of a 3-phase IM starts with an inter-turn fault and if left undetected it progresses to a phase-phase fault or phase-ground fault. In a mains-fed IM, a popular technique known as Motor Current Signature Analysis (MCSA) is used to detect the inter-turn fault. But if the machine is fed from a PWM inverter, MCSA fails: due to high frequency inverter switching, the current spectrum will be rich in noise, making fault detection difficult. An electromagnetic field analysis of inverter fed IM is carried out with 25% and 50% of stator winding inter-turn short circuit fault severity using FEM. The simulation is carried out on a 2.2kW IM using Ansys Maxwell Finite Element Analysis (FEA) tool. Comparisons are made on the various electromagnetic field parameters like flux lines distribution, flux density, radial air gap flux density between a healthy and faulty (25% & 50% severity) IM.", "title": "" }, { "docid": "87c973e92ef3affcff4dac0d0183067c", "text": "Drug-drug interaction (DDI) is a major cause of morbidity and mortality and a subject of intense scientific interest.
Biomedical literature mining can aid DDI research by extracting evidence for large numbers of potential interactions from published literature and clinical databases. Though DDI is investigated in domains ranging in scale from intracellular biochemistry to human populations, literature mining has not been used to extract specific types of experimental evidence, which are reported differently for distinct experimental goals. We focus on pharmacokinetic evidence for DDI, essential for identifying causal mechanisms of putative interactions and as input for further pharmacological and pharmacoepidemiology investigations. We used manually curated corpora of PubMed abstracts and annotated sentences to evaluate the efficacy of literature mining on two tasks: first, identifying PubMed abstracts containing pharmacokinetic evidence of DDIs; second, extracting sentences containing such evidence from abstracts. We implemented a text mining pipeline and evaluated it using several linear classifiers and a variety of feature transforms. The most important textual features in the abstract and sentence classification tasks were analyzed. We also investigated the performance benefits of using features derived from PubMed metadata fields, various publicly available named entity recognizers, and pharmacokinetic dictionaries. Several classifiers performed very well in distinguishing relevant and irrelevant abstracts (reaching F1≈0.93, MCC≈0.74, iAUC≈0.99) and sentences (F1≈0.76, MCC≈0.65, iAUC≈0.83). We found that word bigram features were important for achieving optimal classifier performance and that features derived from Medical Subject Headings (MeSH) terms significantly improved abstract classification. We also found that some drug-related named entity recognition tools and dictionaries led to slight but significant improvements, especially in classification of evidence sentences. 
Based on our thorough analysis of classifiers and feature transforms and the high classification performance achieved, we demonstrate that literature mining can aid DDI discovery by supporting automatic extraction of specific types of experimental evidence.", "title": "" } ]
scidocsrr
37a76d3b6c71ef173133d68ba0809244
Printflatables: Printing Human-Scale, Functional and Dynamic Inflatable Objects
[ { "docid": "bf83b9fef9b4558538b2207ba57b4779", "text": "This paper presents preliminary results for the design, development and evaluation of a hand rehabilitation glove fabricated using soft robotic technology. Soft actuators comprised of elastomeric materials with integrated channels that function as pneumatic networks (PneuNets), are designed and geometrically analyzed to produce bending motions that can safely conform with the human finger motion. Bending curvature and force response of these actuators are investigated using geometrical analysis and a finite element model (FEM) prior to fabrication. The fabrication procedure of the chosen actuator is described followed by a series of experiments that mechanically characterize the actuators. The experimental data is compared to results obtained from FEM simulations showing good agreement. Finally, an open-palm glove design and the integration of the actuators to it are described, followed by a qualitative evaluation study.", "title": "" } ]
[ { "docid": "f136e875f021ea3ea67a87c6d0b1e869", "text": "Platelet-rich plasma (PRP) has been utilized for many years as a regenerative agent capable of inducing vascularization of various tissues using blood-derived growth factors. Despite this, drawbacks mostly related to the additional use of anti-coagulants found in PRP have been shown to inhibit the wound healing process. For these reasons, a novel platelet concentrate has recently been developed with no additives by utilizing lower centrifugation speeds. The purpose of this study was therefore to investigate osteoblast behavior of this novel therapy (injectable-platelet-rich fibrin; i-PRF, 100% natural with no additives) when compared to traditional PRP. Human primary osteoblasts were cultured with either i-PRF or PRP and compared to control tissue culture plastic. A live/dead assay, migration assay as well as a cell adhesion/proliferation assay were investigated. Furthermore, osteoblast differentiation was assessed by alkaline phosphatase (ALP), alizarin red and osteocalcin staining, as well as real-time PCR for genes encoding Runx2, ALP, collagen1 and osteocalcin. The results showed that all cells had high survival rates throughout the entire study period irrespective of culture-conditions. While PRP induced a significant 2-fold increase in osteoblast migration, i-PRF demonstrated a 3-fold increase in migration when compared to control tissue-culture plastic and PRP. While no differences were observed for cell attachment, i-PRF induced a significantly higher proliferation rate at three and five days when compared to PRP. Furthermore, i-PRF induced significantly greater ALP staining at 7 days and alizarin red staining at 14 days. A significant increase in mRNA levels of ALP, Runx2 and osteocalcin, as well as immunofluorescent staining of osteocalcin was also observed in the i-PRF group when compared to PRP. 
In conclusion, the results from the present study favored the use of the naturally-formulated i-PRF when compared to traditional PRP with anti-coagulants. Further investigation into the direct role of fibrin and leukocytes contained within i-PRF is therefore warranted to better elucidate their positive role in i-PRF on tissue wound healing.", "title": "" }, { "docid": "2ce4d585edd54cede6172f74cf9ab8bb", "text": "Enterprise resource planning (ERP) systems have been widely implemented by numerous firms throughout the industrial world. While success stories of ERP implementation abound due to its potential in resolving the problem of fragmented information, a substantial number of these implementations fail to meet the goals of the organization. Some are abandoned altogether and others contribute to the failure of an organization. This article seeks to identify the critical factors of ERP implementation and uses statistical analysis to further delineate the patterns of adoption of the various concepts. A cross-sectional mail survey was sent to business executives who have experience in the implementation of ERP systems. The results of this study provide empirical evidence that the theoretical constructs of ERP implementation are followed at varying levels. It offers some fresh insights into the current practice of ERP implementation. In addition, this study fills the need for ERP implementation constructs that can be utilized for further study of this important topic.", "title": "" }, { "docid": "64c1c37422037fc9156db42cdcdbe7fe", "text": "[Context] It is an enigma that agile projects can succeed ‘without requirements’ when weak requirements engineering is a known cause for project failures. While agile development projects often manage well without extensive requirements, test cases are commonly viewed as requirements and detailed requirements are documented as test cases.
[Objective] We have investigated this agile practice of using test cases as requirements to understand how test cases can support the main requirements activities, and how this practice varies. [Method] We performed an iterative case study at three companies and collected data through 14 interviews and 2 focus groups. [Results] The use of test cases as requirements poses both benefits and challenges when eliciting, validating, verifying, and managing requirements, and when used as a documented agreement. We have identified five variants of the test-cases-as-requirements practice, namely de facto, behaviour-driven, story-test driven, stand-alone strict and stand-alone manual for which the application of the practice varies concerning the time frame of requirements documentation, the requirements format, the extent to which the test cases are a machine executable specification and the use of tools which provide specific support for the practice of using test cases as requirements. [Conclusions] The findings provide empirical insight into how agile development projects manage and communicate requirements. The identified variants of the practice of using test cases as requirements can be used to perform in-depth investigations into agile requirements engineering. Practitioners can use the provided recommendations as a guide in designing and improving their agile requirements practices based on project characteristics such as number of stakeholders and rate of change.", "title": "" }, { "docid": "b169e0e76f26db1f08cd84524aa10a53", "text": "A very lightweight, broad-band, dual polarized antenna array with 128 elements for the frequency range from 7 GHz to 18 GHz has been designed, manufactured and measured. 
The total gain at the center frequency was measured to be 20 dBi excluding feeding network losses.", "title": "" }, { "docid": "9520b99708d905d3713867fac14c3814", "text": "When people work together to analyze a data set, they need to organize their findings, hypotheses, and evidence, share that information with their collaborators, and coordinate activities amongst team members. Sharing externalizations (recorded information such as notes) could increase awareness and assist with team communication and coordination. However, we currently know little about how to provide tool support for this sort of sharing. We explore how linked common work (LCW) can be employed within a `collaborative thinking space', to facilitate synchronous collaborative sensemaking activities in Visual Analytics (VA). Collaborative thinking spaces provide an environment for analysts to record, organize, share and connect externalizations. Our tool, CLIP, extends earlier thinking spaces by integrating LCW features that reveal relationships between collaborators' findings. We conducted a user study comparing CLIP to a baseline version without LCW. Results demonstrated that LCW significantly improved analytic outcomes at a collaborative intelligence task. Groups using CLIP were also able to more effectively coordinate their work, and held more discussion of their findings and hypotheses. LCW enabled them to maintain awareness of each other's activities and findings and link those findings to their own work, preventing disruptive oral awareness notifications.", "title": "" }, { "docid": "910a416dc736ec3566583c57123ac87c", "text": "Internet of Things (IoT) is one of the greatest technology revolutions in the history. Due to IoT potential, daily objects will be consciously worked in harmony with optimized performances. However, today, technology is not ready to fully bring its power to our daily life because of huge data analysis requirements in instant time. 
On the other hand, the powerful data management of cloud computing gives IoT an opportunity to make the revolution in our life. However, the traditional cloud computing server schedulers are not ready to provide services to IoT because IoT consists of a number of heterogeneous devices and applications which are far away from standardization. Therefore, to meet the expectations of users, the traditional cloud computing server schedulers should be improved to efficiently schedule and allocate IoT requests. There are several proposed scheduling algorithms for cloud computing in the literature. However, these scheduling algorithms are limited because of considering neither heterogeneous servers nor dynamic scheduling approach for different priority requests. Our objective is to propose dynamic dedicated server scheduling for heterogeneous and homogeneous systems to efficiently provide desired services by considering priorities of requests. Results show that the proposed scheduling algorithm improves throughput up to 40 % in heterogeneous and homogeneous cloud computing systems for IoT requests. Our proposed scheduling algorithm and related analysis will help cloud service providers build efficient server schedulers which are adaptable to homogeneous and heterogeneous environments by considering system performance metrics, such as drop rate, throughput, and utilization in IoT.", "title": "" }, { "docid": "dac5cebcbc14b82f7b8df977bed0c9d8", "text": "While blockchain services hold great promise to improve many different industries, there are significant cybersecurity concerns which must be addressed.
In this paper, we investigate security considerations for an Ethereum blockchain hosting a distributed energy management application. We have simulated a microgrid with ten buildings in the northeast U.S., and results of the transaction distribution and electricity utilization are presented. We also present the effects on energy distribution when one or two smart meters have their identities corrupted. We then propose a new approach to digital identity management that would require smart meters to authenticate with the blockchain ledger and mitigate identity-spoofing attacks. Applications of this approach to defense against port scans and DDoS attacks are also discussed.", "title": "" }, { "docid": "e5bf05ae6700078dda83eca8d2f65cd4", "text": "We propose a simple solution to use a single Neural Machine Translation (NMT) model to translate between multiple languages. Our solution requires no changes to the model architecture from a standard NMT system but instead introduces an artificial token at the beginning of the input sentence to specify the required target language. Using a shared wordpiece vocabulary, our approach enables Multilingual NMT systems using a single model. On the WMT’14 benchmarks, a single multilingual model achieves comparable performance for English→French and surpasses state-of-the-art results for English→German. Similarly, a single multilingual model surpasses state-of-the-art results for French→English and German→English on WMT’14 and WMT’15 benchmarks, respectively. On production corpora, multilingual models of up to twelve language pairs allow for better translation of many individual pairs. Our models can also learn to perform implicit bridging between language pairs never seen explicitly during training, showing that transfer learning and zero-shot translation is possible for neural translation.
Finally, we show analyses that hint at a universal interlingua representation in our models and also show some interesting examples when mixing languages.", "title": "" }, { "docid": "c1fecb605dcabbd411e3782c15fd6546", "text": "Neuropathic pain is a debilitating form of chronic pain that affects 6.9-10% of the population. Health-related quality-of-life is impeded by neuropathic pain, which not only includes physical impairment, but the mental wellbeing of the patient is also hindered. A reduction in both physical and mental wellbeing bears economic costs that need to be accounted for. A variety of medications are in use for the treatment of neuropathic pain, such as calcium channel α2δ agonists, serotonin/noradrenaline reuptake inhibitors and tricyclic antidepressants. However, recent studies have indicated a lack of efficacy regarding the aforementioned medication. There is increasing clinical and pre-clinical evidence that can point to the use of ketamine, an “old” anaesthetic, in the management of neuropathic pain. Conversely, to see ketamine being used in neuropathic pain, there needs to be more conclusive evidence exploring the long-term effects of sub-anesthetic ketamine.", "title": "" }, { "docid": "5b463701f83f7e6651260c8f55738146", "text": "Heart disease diagnosis is a complex task which requires much experience and knowledge. Traditional way of predicting Heart disease is doctor’s examination or number of medical tests such as ECG, Stress Test, and Heart MRI etc. Nowadays, Health care industry contains huge amount of health care data, which contains hidden information. This hidden information is useful for making effective decisions. Computer based information along with advanced Data mining techniques are used for appropriate results. Neural network is a widely used tool for predicting Heart disease diagnosis. In this research paper, a Heart Disease Prediction system (HDPS) is developed using Neural network.
The HDPS system predicts the likelihood of a patient getting a Heart disease. For prediction, the system uses 13 medical parameters such as sex, blood pressure, and cholesterol. Here two more parameters, obesity and smoking, are added for better accuracy. From the results, it has been seen that the neural network predicts heart disease with nearly 100% accuracy.", "title": "" }, { "docid": "a2f1a10c0e89f6d63f493c267759fb8f", "text": "BACKGROUND\nPatient portals tied to provider electronic health record (EHR) systems are increasingly popular.\n\n\nPURPOSE\nTo systematically review the literature reporting the effect of patient portals on clinical care.\n\n\nDATA SOURCES\nPubMed and Web of Science searches from 1 January 1990 to 24 January 2013.\n\n\nSTUDY SELECTION\nHypothesis-testing or quantitative studies of patient portals tethered to a provider EHR that addressed patient outcomes, satisfaction, adherence, efficiency, utilization, attitudes, and patient characteristics, as well as qualitative studies of barriers or facilitators, were included.\n\n\nDATA EXTRACTION\nTwo reviewers independently extracted data and addressed discrepancies through consensus discussion.\n\n\nDATA SYNTHESIS\nFrom 6508 titles, 14 randomized, controlled trials; 21 observational, hypothesis-testing studies; 5 quantitative, descriptive studies; and 6 qualitative studies were included. Evidence is mixed about the effect of portals on patient outcomes and satisfaction, although they may be more effective when used with case management. The effect of portals on utilization and efficiency is unclear, although patient race and ethnicity, education level or literacy, and degree of comorbid conditions may influence use.\n\n\nLIMITATION\nLimited data for most outcomes and an absence of reporting on organizational and provider context and implementation processes.\n\n\nCONCLUSION\nEvidence that patient portals improve health outcomes, cost, or utilization is insufficient.
Patient attitudes are generally positive, but more widespread use may require efforts to overcome racial, ethnic, and literacy barriers. Portals represent a new technology with benefits that are still unclear. Better understanding requires studies that include details about context, implementation factors, and cost.", "title": "" }, { "docid": "1eef21abdf14dc430b333cac71d4fe07", "text": "The authors have developed an adaptive matched filtering algorithm based upon an artificial neural network (ANN) for QRS detection. They use an ANN adaptive whitening filter to model the lower frequencies of the electrocardiogram (ECG) which are inherently nonlinear and nonstationary. The residual signal which contains mostly higher frequency QRS complex energy is then passed through a linear matched filter to detect the location of the QRS complex. The authors developed an algorithm to adaptively update the matched filter template from the detected QRS complex in the ECG signal itself so that the template can be customized to an individual subject. This ANN whitening filter is very effective at removing the time-varying, nonlinear noise characteristic of ECG signals. The detection rate for a very noisy patient record in the MIT/BIH arrhythmia database is 99.5% with this approach, which compares favorably to the 97.5% obtained using a linear adaptive whitening filter and the 96.5% achieved with a bandpass filtering method.", "title": "" }, { "docid": "a0d4089e55a0a392a2784ae50b6fa779", "text": "Organizations place a great deal of emphasis on hiring individuals who are a good fit for the organization and the job. Among the many ways that individuals are screened for a job, the employment interview is particularly prevalent and nearly universally used (Macan, 2009; Huffcutt and Culbertson, 2011). This Research Topic is devoted to a construct that plays a critical role in our understanding of job interviews: impression management (IM).
In the interview context, IM describes behaviors an individual uses to influence the impression that others have of them (Bozeman and Kacmar, 1997). For instance, a job applicant can flatter an interviewer to be seen as likable (i.e., ingratiation), play up their qualifications and abilities to be seen as competent (i.e., self-promotion), or utilize excuses or justifications to make up for a negative event or error (i.e., defensive IM; Ellis et al., 2002). IM has emerged as a central theme in the interview literature over the last several decades (for reviews, see Posthuma et al., 2002; Levashina et al., 2014). Despite some pioneering early work (e.g., Schlenker, 1980; Leary and Kowalski, 1990; Stevens and Kristof, 1995), there has been a resurgence of interest in the area over the last decade. While the literature to date has set up a solid foundational knowledge about interview IM, there are a number of emerging trends and directions. In the following, we lay out some critical areas of inquiry in interview IM, and highlight how the innovative set of papers in this Research Topic is illustrative of these new directions.", "title": "" }, { "docid": "5fbb54e63158066198cdf59e1a8e9194", "text": "In this paper, we present results of a study of the data rate fairness among nodes within a LoRaWAN cell. Since LoRa/LoRaWAN supports various data rates, we firstly derive the fairest ratios of deploying each data rate within a cell for a fair collision probability. LoRa/LoRaWan, like other frequency modulation based radio interfaces, exhibits the capture effect in which only the stronger signal of colliding signals will be extracted. This leads to unfairness, where far nodes or nodes experiencing higher attenuation are less likely to see their packets received correctly. Therefore, we secondly develop a transmission power control algorithm to balance the received signal powers from all nodes regardless of their distances from the gateway for a fair data extraction. 
Simulations show that our approach achieves higher fairness in data rate than the state of the art in almost all network configurations.", "title": "" }, { "docid": "0a16eb6bfb41a708e7a660cbf4c445af", "text": "Data from 1,010 lactating, predominantly component-fed Holstein cattle from 25 predominantly tie-stall dairy farms in southwest Ontario were used to identify objective thresholds for defining hyperketonemia in lactating dairy cattle based on negative impacts on cow health, milk production, or both. Serum samples were obtained during wk 1 and 2 postpartum and analyzed for beta-hydroxybutyrate (BHBA) concentrations, which were used in the analysis. Data were time-ordered so that the serum samples were obtained at least 1 d before the disease or milk recording events. Serum BHBA cutpoints were constructed at 200 micromol/L intervals between 600 and 2,000 micromol/L. Critical cutpoints for the health analysis were determined based on the threshold having the greatest sum of sensitivity and specificity for predicting the disease occurrence. For the production outcomes, models for first test day milk yield, milk fat, and milk protein percentage were constructed including covariates of parity, precalving body condition score, season of calving, test day linear score, and the random effect of herd. Each cutpoint was tested in these models to determine the threshold with the greatest impact and least risk of a type 1 error. Serum BHBA concentrations at or above 1,200 micromol/L in the first week following calving were associated with increased risks of subsequent displaced abomasum [odds ratio (OR) = 2.60] and metritis (OR = 3.35), whereas the critical threshold of BHBA in wk 2 postpartum on the risk of abomasal displacement was >or=1,800 micromol/L (OR = 6.22). The best threshold for predicting subsequent risk of clinical ketosis from serum obtained during wk 1 and wk 2 postpartum was 1,400 micromol/L of BHBA (OR = 4.25 and 5.98, respectively).
There was no association between clinical mastitis and elevated serum BHBA in wk 1 or 2 postpartum, and there was no association between wk 2 BHBA and risk of metritis. Greater serum BHBA measured during the first and second week postcalving were associated with less milk yield, greater milk fat percentage, and less milk protein percentage on the first Dairy Herd Improvement test day of lactation. Impacts on first Dairy Herd Improvement test milk yield began at BHBA >or=1,200 micromol/L for wk 1 samples and >or=1,400 micromol/L for wk 2 samples. The greatest impact on yield occurred at 1,400 micromol/L (-1.88 kg/d) and 2,000 micromol/L (-3.3 kg/d) for sera from the first and second week postcalving, respectively. Hyperketonemia can be defined at 1,400 micromol/L of BHBA and in the first 2 wk postpartum increases disease risk and results in substantial loss of milk yield in early lactation.", "title": "" }, { "docid": "4c563b09a10ce0b444edb645ce411d42", "text": "Privacy and security are two important but seemingly contradictory objectives in a pervasive computing environment (PCE). On one hand, service providers want to authenticate legitimate users and make sure they are accessing their authorized services in a legal way. On the other hand, users want to maintain the necessary privacy without being tracked down for wherever they are and whatever they are doing. In this paper, a novel privacy preserving authentication and access control scheme to secure the interactions between mobile users and services in PCEs is proposed. The proposed scheme seamlessly integrates two underlying cryptographic primitives, namely blind signature and hash chain, into a highly flexible and lightweight authentication and key establishment protocol. The scheme provides explicit mutual authentication between a user and a service while allowing the user to anonymously interact with the service. 
Differentiated service access control is also enabled in the proposed scheme by classifying mobile users into different service groups. The correctness of the proposed authentication and key establishment protocol is formally verified based on Burrows-Abadi-Needham logic", "title": "" }, { "docid": "9a30008cc270ac7a0bb1a0f12dca6187", "text": "Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance on recommender system benchmarks. However, making these methods practical and scalable to web-scale recommendation tasks with billions of items and hundreds of millions of users remains an unsolved challenge. Here we describe a large-scale deep recommendation engine that we developed and deployed at Pinterest. We develop a data-efficient Graph Convolutional Network (GCN) algorithm, which combines efficient random walks and graph convolutions to generate embeddings of nodes (i.e., items) that incorporate both graph structure as well as node feature information. Compared to prior GCN approaches, we develop a novel method based on highly efficient random walks to structure the convolutions and design a novel training strategy that relies on harder-and-harder training examples to improve robustness and convergence of the model. We also develop an efficient MapReduce model inference algorithm to generate embeddings using a trained model. Overall, we can train on and embed graphs that are four orders of magnitude larger than typical GCN implementations. We show how GCN embeddings can be used to make high-quality recommendations in various settings at Pinterest, which has a massive underlying graph with 3 billion nodes representing pins and boards, and 17 billion edges. According to offline metrics, user studies, as well as A/B tests, our approach generates higher-quality recommendations than comparable deep learning based systems. 
To our knowledge, this is by far the largest application of deep graph embeddings to date and paves the way for a new generation of web-scale recommender systems based on graph convolutional architectures.", "title": "" }, { "docid": "4b8f59d1b416d4869ae38dbca0eaca41", "text": "This study investigates high frequency currency trading with neural networks trained via Recurrent Reinforcement Learning (RRL). We compare the performance of single layer networks with networks having a hidden layer, and examine the impact of the fixed system parameters on performance. In general, we conclude that the trading systems may be effective, but the performance varies widely for different currency markets and this variability cannot be explained by simple statistics of the markets. Also we find that the single layer network outperforms the two layer network in this application.", "title": "" }, { "docid": "ec7b348a0fe38afa02989a22aa9dcac2", "text": "We propose a general framework for learning from labeled and unlabeled data on a directed graph in which the structure of the graph including the directionality of the edges is considered. The time complexity of the algorithm derived from this framework is nearly linear due to recently developed numerical techniques. In the absence of labeled instances, this framework can be utilized as a spectral clustering method for directed graphs, which generalizes the spectral clustering approach for undirected graphs. We have applied our framework to real-world web classification problems and obtained encouraging results.", "title": "" } ]
scidocsrr
085d8ef9f29229887533b78ad8a9273a
Pain catastrophizing and kinesiophobia: predictors of chronic low back pain.
[ { "docid": "155411fe242dd4f3ab39649d20f5340f", "text": "Two studies are presented that investigated 'fear of movement/(re)injury' in chronic musculoskeletal pain and its relation to behavioral performance. The 1st study examines the relation among fear of movement/(re)injury (as measured with the Dutch version of the Tampa Scale for Kinesiophobia (TSK-DV)) (Kori et al. 1990), biographical variables (age, pain duration, gender, use of supportive equipment, compensation status), pain-related variables (pain intensity, pain cognitions, pain coping) and affective distress (fear and depression) in a group of 103 chronic low back pain (CLBP) patients. In the 2nd study, motoric, psychophysiologic and self-report measures of fear are taken from 33 CLBP patients who are exposed to a single and relatively simple movement. Generally, findings demonstrated that the fear of movement/(re)injury is related to gender and compensation status, and more closely to measures of catastrophizing and depression, but in a much lesser degree to pain coping and pain intensity. Furthermore, subjects who report a high degree of fear of movement/(re)injury show more fear and escape/avoidance when exposed to a simple movement. The discussion focuses on the clinical relevance of the construct of fear of movement/(re)injury and research questions that remain to be answered.", "title": "" } ]
[ { "docid": "f031d0db43b5f9d9d3068916ea975d75", "text": "Difficulties in the social domain and motor anomalies have been widely investigated in Autism Spectrum Disorder (ASD). However, they have been generally considered as independent, and therefore tackled separately. Recent advances in neuroscience have hypothesized that the cortical motor system can play a role not only as a controller of elementary physical features of movement, but also in a complex domain as social cognition. Here, going beyond previous studies on ASD that described difficulties in the motor and in the social domain separately, we focus on the impact of motor mechanisms anomalies on social functioning. We consider behavioral, electrophysiological and neuroimaging findings supporting the idea that motor cognition is a critical \"intermediate phenotype\" for ASD. Motor cognition anomalies in ASD affect the processes of extraction, codification and subsequent translation of \"external\" social information into the motor system. Intriguingly, this alternative \"motor\" approach to the social domain difficulties in ASD may be promising to bridge the gap between recent experimental findings and clinical practice, potentially leading to refined preventive approaches and successful treatments.", "title": "" }, { "docid": "70991373ae71f233b0facd2b5dd1a0d3", "text": "Information communications technology systems are facing an increasing number of cyber security threats, the majority of which are originated by insiders. As insiders reside behind the enterprise-level security defence mechanisms and often have privileged access to the network, detecting and preventing insider threats is a complex and challenging problem. In fact, many schemes and systems have been proposed to address insider threats from different perspectives, such as intent, type of threat, or available audit data source. 
This survey attempts to line up these works together with only three most common types of insider namely traitor, masquerader, and unintentional perpetrator, while reviewing the countermeasures from a data analytics perspective. Uniquely, this survey takes into account the early stage threats which may lead to a malicious insider rising up. When direct and indirect threats are put on the same page, all the relevant works can be categorised as host, network, or contextual data-based according to audit data source and each work is reviewed for its capability against insider threats, how the information is extracted from the engaged data sources, and what the decision-making algorithm is. The works are also compared and contrasted. Finally, some issues are raised based on the observations from the reviewed works and new research gaps and challenges identified.", "title": "" }, { "docid": "c630b600a0b03e9e3ede1c0132f80264", "text": "68 AI MAGAZINE Adaptive graphical user interfaces (GUIs) automatically tailor the presentation of functionality to better fit an individual user’s tasks, usage patterns, and abilities. A familiar example of an adaptive interface is the Windows XP start menu, where a small set of applications from the “All Programs” submenu is replicated in the top level of the “Start” menu for easier access, saving users from navigating through multiple levels of the menu hierarchy (figure 1). The potential of adaptive interfaces to reduce visual search time, cognitive load, and motor movement is appealing, and when the adaptation is successful an adaptive interface can be faster and preferred in comparison to a nonadaptive counterpart (for example, Gajos et al. [2006], Greenberg and Witten [1985]). In practice, however, many challenges exist, and, thus far, evaluation results of adaptive interfaces have been mixed. 
For an adaptive interface to be successful, the benefits of correct adaptations must outweigh the costs, or usability side effects, of incorrect adaptations. Often, an adaptive mechanism designed to improve one aspect of the interaction, typically motor movement or visual search, inadvertently increases effort along another dimension, such as cognitive or perceptual load. The result is that many adaptive designs that were expected to confer a benefit along one of these dimensions have failed in practice. For example, a menu that tracks how frequently each item is used and adaptively reorders itself so that items appear in order from most to least frequently accessed should improve motor performance, but in reality this design can slow users down and reduce satisfaction because of the constantly changing layout (Mitchell and Schneiderman [1989]; for example, figure 2b). Commonly cited issues with adaptive interfaces include the lack of control the user has over the adaptive process and the difficulty that users may have in predicting what the system’s response will be to a user action (Höök 2000). User evaluation of adaptive GUIs is more complex than eval-", "title": "" }, { "docid": "4facc72eb8270d12d0182c7a7833736f", "text": "We construct a family of extremely simple bijections that yield Cayley’s famous formula for counting trees. The weight preserving properties of these bijections furnish a number of multivariate generating functions for weighted Cayley trees. Essentially the same idea is used to derive bijective proofs and q-analogues for the number of spanning trees of other graphs, including the complete bipartite and complete tripartite graphs. These bijections also allow the calculation of explicit formulas for the expected number of various statistics on Cayley trees.", "title": "" }, { "docid": "47949e080b4f5643dde02eb1c5c2527f", "text": "Extracting biomedical entities and their relations from text has important applications on biomedical research. 
Previous work primarily utilized feature-based pipeline models to process this task. Many efforts need to be made on feature engineering when feature-based models are employed. Moreover, pipeline models may suffer error propagation and are not able to utilize the interactions between subtasks. Therefore, we propose a neural joint model to extract biomedical entities as well as their relations simultaneously, and it can alleviate the problems above. Our model was evaluated on two tasks, i.e., the task of extracting adverse drug events between drug and disease entities, and the task of extracting resident relations between bacteria and location entities. Compared with the state-of-the-art systems in these tasks, our model improved the F1 scores of the first task by 5.1% in entity recognition and 8.0% in relation extraction, and that of the second task by 9.2% in relation extraction. The proposed model achieves competitive performances with less work on feature engineering. We demonstrate that the model based on neural networks is effective for biomedical entity and relation extraction. In addition, parameter sharing is an alternative method for neural models to jointly process this task. Our work can facilitate the research on biomedical text mining.", "title": "" }, { "docid": "1c126457ee6b61be69448ee00a64d557", "text": "Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes are abundant, making them an overrepresented majority, and data of other classes are scarce, making them an underrepresented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this paper, we propose a cost-sensitive (CoSen) deep neural network, which can automatically learn robust feature representations for both the majority and minority classes. 
During training, our learning procedure jointly optimizes the class-dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multiclass problems without any modification. Moreover, as opposed to data-level approaches, we do not alter the original data distribution, which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification data sets and show that the proposed approach significantly outperforms the baseline algorithms. Comparisons with popular data sampling techniques and CoSen classifiers demonstrate the superior performance of our proposed method.", "title": "" }, { "docid": "3a852aa880c564a85cc8741ce7427ced", "text": "INTRODUCTION\nTurmeric is a spice that comes from the root Curcuma longa, a member of the ginger family, Zingiberaceae. In Ayurveda (Indian traditional medicine), turmeric has been used for its medicinal properties for various indications and through different routes of administration, including topically, orally, and by inhalation. Curcuminoids are components of turmeric, which include mainly curcumin (diferuloyl methane), demethoxycurcumin, and bisdemethoxycurcumin.\n\n\nOBJECTIVES\nThe goal of this systematic review of the literature was to summarize the literature on the safety and anti-inflammatory activity of curcumin.\n\n\nMETHODS\nA search of the computerized database MEDLINE (1966 to January 2002), a manual search of bibliographies of papers identified through MEDLINE, and an Internet search using multiple search engines for references on this topic was conducted. The PDR for Herbal Medicines, and four textbooks on herbal medicine and their bibliographies were also searched.\n\n\nRESULTS\nA large number of studies on curcumin were identified. These included studies on the antioxidant, anti-inflammatory, antiviral, and antifungal properties of curcuminoids. 
Studies on the toxicity and anti-inflammatory properties of curcumin have included in vitro, animal, and human studies. A phase 1 human trial with 25 subjects using up to 8000 mg of curcumin per day for 3 months found no toxicity from curcumin. Five other human trials using 1125-2500 mg of curcumin per day have also found it to be safe. These human studies have found some evidence of anti-inflammatory activity of curcumin. The laboratory studies have identified a number of different molecules involved in inflammation that are inhibited by curcumin including phospholipase, lipooxygenase, cyclooxygenase 2, leukotrienes, thromboxane, prostaglandins, nitric oxide, collagenase, elastase, hyaluronidase, monocyte chemoattractant protein-1 (MCP-1), interferon-inducible protein, tumor necrosis factor (TNF), and interleukin-12 (IL-12).\n\n\nCONCLUSIONS\nCurcumin has been demonstrated to be safe in six human trials and has demonstrated anti-inflammatory activity. It may exert its anti-inflammatory activity by inhibition of a number of different molecules that play a role in inflammation.", "title": "" }, { "docid": "4272b4a73ecd9d2b60e0c60de0469f17", "text": "Suggesting that empirical work in the field of reading has advanced sufficiently to allow substantial agreed-upon results and conclusions, this literature review cuts through the detail of partially convergent, sometimes discrepant research findings to provide an integrated picture of how reading develops and how reading instruction should proceed. The focus of the review is prevention. Sketched is a picture of the conditions under which reading is most likely to develop easily--conditions that include stimulating preschool environments, excellent reading instruction, and the absence of any of a wide array of risk factors. It also provides recommendations for practice as well as recommendations for further research. 
After a preface and executive summary, chapters are (1) Introduction; (2) The Process of Learning to Read; (3) Who Has Reading Difficulties; (4) Predictors of Success and Failure in Reading; (5) Preventing Reading Difficulties before Kindergarten; (6) Instructional Strategies for Kindergarten and the Primary Grades; (7) Organizational Strategies for Kindergarten and the Primary Grades; (8) Helping Children with Reading Difficulties in Grades 1 to 3; (9) The Agents of Change; and (10) Recommendations for Practice and Research. Contains biographical sketches of the committee members and an index. Contains approximately 800 references.", "title": "" }, { "docid": "1f50a6d6e7c48efb7ffc86bcc6a8271d", "text": "Creating short summaries of documents with respect to a query has applications in, for example, search engines, where it may help inform users of the most relevant results. Constructing such a summary automatically, with the potential expressiveness of a human-written summary, is a difficult problem yet to be fully solved. In this thesis, a neural network model for this task is presented. We adapt an existing dataset of news article summaries for the task and train a pointer-generator model using this dataset to summarize such articles. The generated summaries are then evaluated by measuring similarity to reference summaries. We observe that the generated summaries exhibit abstractive properties, but also that they have issues, such as rarely being truthful. However, we show that a neural network summarization model, similar to existing neural network models for abstractive summarization, can be constructed to make use of queries for more targeted summaries.", "title": "" }, { "docid": "d994b23ea551f23215232c0771e7d6b3", "text": "It is said that there’s nothing so practical as good theory. It may also be said that there’s nothing so theoretically interesting as good practice. 
This is particularly true of efforts to relate constructivism as a theory of learning to the practice of instruction. Our goal in this paper is to provide a clear link between the theoretical principles of constructivism, the practice of instructional design, and the practice of teaching. We will begin with a basic characterization of constructivism identifying what we believe to be the central principles in learning and understanding. We will then identify and elaborate on eight instructional principles for the design of a constructivist learning environment. Finally, we will examine what we consider to be one of the best exemplars of a constructivist learning environment -Problem Based Learning as described by Barrows (1985, 1986, 1992).", "title": "" }, { "docid": "9961f44d4ab7d0a344811186c9234f2c", "text": "This paper discusses the trust related issues and arguments (evidence) Internet stores need to provide in order to increase consumer trust. Based on a model of trust from academic literature, in addition to a model of the customer service life cycle, the paper develops a framework that identifies key trust-related issues and organizes them into four categories: personal information, product quality and price, customer service, and store presence. It is further validated by comparing the issues it raises to issues identified in a review of academic studies, and to issues of concern identified in two consumer surveys. The framework is also applied to ten well-known web sites to demonstrate its applicability. The proposed framework will benefit both practitioners and researchers by identifying important issues regarding trust, which need to be accounted for in Internet stores. For practitioners, it provides a guide to the issues Internet stores need to address in their use of arguments. 
For researchers, it can be used as a foundation for future empirical studies investigating the effects of trust-related arguments on consumers’ trust in Internet stores.", "title": "" }, { "docid": "9373cde066d8d898674a519206f1c38f", "text": "This work proposes a novel deep network architecture to solve the camera ego-motion estimation problem. A motion estimation network generally learns features similar to optical flow (OF) fields starting from sequences of images. This OF can be described by a lower dimensional latent space. Previous research has shown how to find linear approximations of this space. We propose to use an autoencoder network to find a nonlinear representation of the OF manifold. In addition, we propose to learn the latent space jointly with the estimation task, so that the learned OF features become a more robust description of the OF input. We call this novel architecture latent space visual odometry (LS-VO). The experiments show that LS-VO achieves a considerable increase in performances with respect to baselines, while the number of parameters of the estimation network only slightly increases.", "title": "" }, { "docid": "f6ad0d01cb66c1260c1074c4f35808c6", "text": "BACKGROUND\nUnilateral spatial neglect causes difficulty attending to one side of space. 
Various rehabilitation interventions have been used but evidence of their benefit is lacking.\n\n\nOBJECTIVES\nTo assess whether cognitive rehabilitation improves functional independence, neglect (as measured using standardised assessments), destination on discharge, falls, balance, depression/anxiety and quality of life in stroke patients with neglect measured immediately post-intervention and at longer-term follow-up; and to determine which types of interventions are effective and whether cognitive rehabilitation is more effective than standard care or an attention control.\n\n\nSEARCH METHODS\nWe searched the Cochrane Stroke Group Trials Register (last searched June 2012), MEDLINE (1966 to June 2011), EMBASE (1980 to June 2011), CINAHL (1983 to June 2011), PsycINFO (1974 to June 2011), UK National Research Register (June 2011). We handsearched relevant journals (up to 1998), screened reference lists, and tracked citations using SCISEARCH.\n\n\nSELECTION CRITERIA\nWe included randomised controlled trials (RCTs) of cognitive rehabilitation specifically aimed at spatial neglect. We excluded studies of general stroke rehabilitation and studies with mixed participant groups, unless more than 75% of their sample were stroke patients or separate stroke data were available.\n\n\nDATA COLLECTION AND ANALYSIS\nTwo review authors independently selected studies, extracted data, and assessed study quality. For subgroup analyses, review authors independently categorised the approach underlying the cognitive intervention as either 'top-down' (interventions that encourage awareness of the disability and potential compensatory strategies) or 'bottom-up' (interventions directed at the impairment but not requiring awareness or behavioural change, e.g. wearing prisms or patches).\n\n\nMAIN RESULTS\nWe included 23 RCTs with 628 participants (adding 11 new RCTs involving 322 new participants for this update). 
Only 11 studies were assessed to have adequate allocation concealment, and only four studies to have a low risk of bias in all categories assessed. Most studies measured outcomes using standardised neglect assessments: 15 studies measured effect on activities of daily living (ADL) immediately after the end of the intervention period, but only six reported persisting effects on ADL. One study (30 participants) reported discharge destination and one study (eight participants) reported the number of falls.Eighteen of the 23 included RCTs compared cognitive rehabilitation with any control intervention (placebo, attention or no treatment). Meta-analyses demonstrated no statistically significant effect of cognitive rehabilitation, compared with control, for persisting effects on either ADL (five studies, 143 participants) or standardised neglect assessments (eight studies, 172 participants), or for immediate effects on ADL (10 studies, 343 participants). In contrast, we found a statistically significant effect in favour of cognitive rehabilitation compared with control, for immediate effects on standardised neglect assessments (16 studies, 437 participants, standardised mean difference (SMD) 0.35, 95% confidence interval (CI) 0.09 to 0.62). However, sensitivity analyses including only studies of high methodological quality removed evidence of a significant effect of cognitive rehabilitation.Additionally, five of the 23 included RCTs compared one cognitive rehabilitation intervention with another. These included three studies comparing a visual scanning intervention with another cognitive rehabilitation intervention, and two studies (three comparison groups) comparing a visual scanning intervention plus another cognitive rehabilitation intervention with a visual scanning intervention alone. 
Only two small studies reported a measure of functional disability and there was considerable heterogeneity within these subgroups (I² > 40%) when we pooled standardised neglect assessment data, limiting the ability to draw generalised conclusions.Subgroup analyses exploring the effect of having an attention control demonstrated some evidence of a statistically significant difference between those comparing rehabilitation with attention control and those with another control or no treatment group, for immediate effects on standardised neglect assessments (test for subgroup differences, P = 0.04).\n\n\nAUTHORS' CONCLUSIONS\nThe effectiveness of cognitive rehabilitation interventions for reducing the disabling effects of neglect and increasing independence remains unproven. As a consequence, no rehabilitation approach can be supported or refuted based on current evidence from RCTs. However, there is some very limited evidence that cognitive rehabilitation may have an immediate beneficial effect on tests of neglect. This emerging evidence justifies further clinical trials of cognitive rehabilitation for neglect. However, future studies need to have appropriate high quality methodological design and reporting, to examine persisting effects of treatment and to include an attention control comparator.", "title": "" }, { "docid": "b883116f741733b3bbd3933fdc1b4542", "text": "To address concerns of TREC-style relevance judgments, we explore two improvements. The first one seeks to make relevance judgments contextual, collecting in situ feedback of users in an interactive search session and embracing usefulness as the primary judgment criterion. 
The second one collects multidimensional assessments to complement relevance or usefulness judgments, with four distinct alternative aspects examined in this paper - novelty, understandability, reliability, and effort.\n We evaluate different types of judgments by correlating them with six user experience measures collected from a lab user study. Results show that switching from TREC-style relevance criteria to usefulness is fruitful, but in situ judgments do not exhibit clear benefits over the judgments collected without context. In contrast, combining relevance or usefulness with the four alternative judgments consistently improves the correlation with user experience measures, suggesting future IR systems should adopt multi-aspect search result judgments in development and evaluation.\n We further examine implicit feedback techniques for predicting these judgments. We find that click dwell time, a popular indicator of search result quality, is able to predict some but not all dimensions of the judgments. We enrich the current implicit feedback methods using post-click user interaction in a search session and achieve better prediction for all six dimensions of judgments.", "title": "" }, { "docid": "6702bfca88f86e0c35a8b6195d0c971c", "text": "A hierarchical scheme for clustering data is presented which applies to spaces with a high number of dimensions (N_D > 3). The data set is first reduced to a smaller set of partitions (multi-dimensional bins). Multiple clustering techniques are used, including spectral clustering; however, new techniques are also introduced based on the path length between partitions that are connected to one another. A Line-of-Sight algorithm is also developed for clustering. A test bank of 12 data sets with varying properties is used to expose the strengths and weaknesses of each technique. 
Finally, a robust clustering technique is discussed based on reaching a consensus among the multiple approaches, overcoming the weaknesses found individually.", "title": "" }, { "docid": "edfc9cb39fe45a43aed78379bafa2dfc", "text": "We propose a novel decomposition framework for the distributed optimization of general nonconvex sum-utility functions arising naturally in the system design of wireless multi-user interfering systems. Our main contributions are i) the development of the first class of (inexact) Jacobi best-response algorithms with provable convergence, where all the users simultaneously and iteratively solve a suitably convexified version of the original sum-utility optimization problem; ii) the derivation of a general dynamic pricing mechanism that provides a unified view of existing pricing schemes that are based, instead, on heuristics; and iii) a framework that can be easily particularized to well-known applications, giving rise to very efficient practical (Jacobi or Gauss-Seidel) algorithms that outperform existing ad hoc methods proposed for very specific problems. Interestingly, our framework contains as special cases well-known gradient algorithms for nonconvex sum-utility problems, and many block-coordinate descent schemes for convex functions.", "title": "" }, { "docid": "b559579485358f7958eea8907c8b4b09", "text": "Word embedding models learn a distributed vectorial representation for words, which can be used as the basis for (deep) learning models to solve a variety of natural language processing tasks. One of the main disadvantages of current word embedding models is that they learn a single representation for each word in a metric space, as a result of which they cannot appropriately model polysemous words. In this work, we develop a new word embedding model that can accurately represent such words by automatically learning multiple representations for each word, whilst remaining computationally efficient. 
Without any supervision, our model learns multiple, complementary embeddings that all capture different semantic structure. We demonstrate the potential merits of our model by training it on large text corpora, and evaluating it on word similarity tasks. Our proposed embedding model is competitive with the state of the art and can easily scale to large corpora due to its computational simplicity.", "title": "" }, { "docid": "8700e170ba9c3e6c35008e2ccff48ef9", "text": "Recently, Uber has emerged as a leader in the \"sharing economy\". Uber is a \"ride sharing\" service that matches willing drivers with customers looking for rides. However, unlike other open marketplaces (e.g., AirBnB), Uber is a black-box: they do not provide data about supply or demand, and prices are set dynamically by an opaque \"surge pricing\" algorithm. The lack of transparency has led to concerns about whether Uber artificially manipulate prices, and whether dynamic prices are fair to customers and drivers. In order to understand the impact of surge pricing on passengers and drivers, we present the first in-depth investigation of Uber. We gathered four weeks of data from Uber by emulating 43 copies of the Uber smartphone app and distributing them throughout downtown San Francisco (SF) and midtown Manhattan. Using our dataset, we are able to characterize the dynamics of Uber in SF and Manhattan, as well as identify key implementation details of Uber's surge price algorithm. Our observations about Uber's surge price algorithm raise important questions about the fairness and transparency of this system.", "title": "" }, { "docid": "1272563e64ca327aba1be96f2e045c30", "text": "Current Web search engines are built to serve all users, independent of the special needs of any individual user. Personalization of Web search is to carry out retrieval for each user incorporating his/her interests. We propose a novel technique to learn user profiles from users' search histories. 
The user profiles are then used to improve retrieval effectiveness in Web search. A user profile and a general profile are learned from the user's search history and a category hierarchy, respectively. These two profiles are combined to map a user query into a set of categories which represent the user's search intention and serve as a context to disambiguate the words in the user's query. Web search is conducted based on both the user query and the set of categories. Several profile learning and category mapping algorithms and a fusion algorithm are provided and evaluated. Experimental results indicate that our technique to personalize Web search is both effective and efficient.", "title": "" }, { "docid": "750e7bd1b23da324a0a51d0b589acbfb", "text": "Various powerful people detection methods exist. Surprisingly, most approaches rely on static image features only despite the obvious potential of motion information for people detection. This paper systematically evaluates different features and classifiers in a sliding-window framework. First, our experiments indicate that incorporating motion information improves detection performance significantly. Second, the combination of multiple and complementary feature types can also help improve performance. And third, the choice of the classifier-feature combination and several implementation details are crucial to reach best performance. In contrast to many recent papers experimental results are reported for four different datasets rather than using a single one. Three of them are taken from the literature allowing for direct comparison. The fourth dataset is newly recorded using an onboard camera driving through urban environment. Consequently this dataset is more realistic and more challenging than any currently available dataset.", "title": "" } ]
scidocsrr
61b9619b02f8c7f3c0d2b06f4e6b6413
Linux kernel vulnerabilities: state-of-the-art defenses and open problems
[ { "docid": "3724a800d0c802203835ef9f68a87836", "text": "This paper presents SUD, a system for running existing Linux device drivers as untrusted user-space processes. Even if the device driver is controlled by a malicious adversary, it cannot compromise the rest of the system. One significant challenge of fully isolating a driver is to confine the actions of its hardware device. SUD relies on IOMMU hardware, PCI express bridges, and message-signaled interrupts to confine hardware devices. SUD runs unmodified Linux device drivers, by emulating a Linux kernel environment in user-space. A prototype of SUD runs drivers for Gigabit Ethernet, 802.11 wireless, sound cards, USB host controllers, and USB devices, and it is easy to add a new device class. SUD achieves the same performance as an in-kernel driver on networking benchmarks, and can saturate a Gigabit Ethernet link. SUD incurs a CPU overhead comparable to existing runtime driver isolation techniques, while providing much stronger isolation guarantees for untrusted drivers. Finally, SUD requires minimal changes to the kernel—just two kernel modules comprising 4,000 lines of code—which may at last allow the adoption of these ideas in practice.", "title": "" }, { "docid": "68bab5e0579a0cdbaf232850e0587e11", "text": "This article presents a new mechanism that enables applications to run correctly when device drivers fail. Because device drivers are the principal failing component in most systems, reducing driver-induced failures greatly improves overall reliability. Earlier work has shown that an operating system can survive driver failures [Swift et al. 2005], but the applications that depend on them cannot. Thus, while operating system reliability was greatly improved, application reliability generally was not. To remedy this situation, we introduce a new operating system mechanism called a shadow driver. A shadow driver monitors device drivers and transparently recovers from driver failures. 
Moreover, it assumes the role of the failed driver during recovery. In this way, applications using the failed driver, as well as the kernel itself, continue to function as expected. We implemented shadow drivers for the Linux operating system and tested them on over a dozen device drivers. Our results show that applications and the OS can indeed survive the failure of a variety of device drivers. Moreover, shadow drivers impose minimal performance overhead. Lastly, they can be introduced with only modest changes to the OS kernel and with no changes at all to existing device drivers.", "title": "" } ]
[ { "docid": "68f10e252faf7171cac8d5ba914fcba9", "text": "Most languages have no formal writing system and at best a limited written record. However, textual data is critical to natural language processing and particularly important for the training of language models that would facilitate speech recognition of such languages. Bilingual phonetic dictionaries are often available in some form, since lexicon creation is a fundamental task of documentary linguistics. We investigate the use of such dictionaries to improve language models when textual training data is limited to as few as 1k sentences. The method involves learning cross-lingual word embeddings as a pretraining step in the training of monolingual language models. Results across a number of languages show that language models are improved by such pre-training.", "title": "" }, { "docid": "45b17b6521e84c8536ad852969b21c1d", "text": "Previous research on online media popularity prediction concluded that the rise in popularity of online videos maintains a conventional logarithmic distribution. However, recent studies have shown that a significant portion of online videos exhibit bursty/sudden rise in popularity, which cannot be accounted for by video domain features alone. In this paper, we propose a novel transfer learning framework that utilizes knowledge from social streams (e.g., Twitter) to grasp sudden popularity bursts in online content. We develop a transfer learning algorithm that can learn topics from social streams allowing us to model the social prominence of video content and improve popularity predictions in the video domain. Our transfer learning framework has the ability to scale with incoming stream of tweets, harnessing physical world event information in real-time. 
Using data comprising 10.2 million tweets and 3.5 million YouTube videos, we show that social prominence of the video topic (context) is responsible for the sudden rise in its popularity, where social trends have a ripple effect as they spread from the Twitter domain to the video domain. We envision that our cross-domain popularity prediction model will be substantially useful for various media applications that could not be previously solved by traditional multimedia techniques alone.", "title": "" }, { "docid": "28b7905d804cef8e54dbdf4f63f6495d", "text": "The recently introduced Galois/Counter Mode (GCM) of operation for block ciphers provides both encryption and message authentication, using universal hashing based on multiplication in a binary finite field. We analyze its security and performance, and show that it is the most efficient mode of operation for high speed packet networks, by using a realistic model of a network crypto module and empirical data from studies of Internet traffic in conjunction with software experiments and hardware designs. GCM has several useful features: it can accept IVs of arbitrary length, can act as a stand-alone message authentication code (MAC), and can be used as an incremental MAC. We show that GCM is secure in the standard model of concrete security, even when these features are used. We also consider several of its important system-security aspects.", "title": "" }, { "docid": "a83b417c2be604427eacf33b1db91468", "text": "We report a male infant with iris coloboma, choanal atresia, postnatal retardation of growth and psychomotor development, genital anomaly, ear anomaly, and anal atresia. In addition, there was cutaneous syndactyly and nail hypoplasia of the second and third fingers on the right and hypoplasia of the left second finger nail. 
Comparable observations have rarely been reported and possibly represent genetic heterogeneity.", "title": "" }, { "docid": "71759cdcf18dabecf1d002727eb9d8b8", "text": "A commonly observed neural correlate of working memory is firing that persists after the triggering stimulus disappears. Substantial effort has been devoted to understanding the many potential mechanisms that may underlie memory-associated persistent activity. These rely either on the intrinsic properties of individual neurons or on the connectivity within neural circuits to maintain the persistent activity. Nevertheless, it remains unclear which mechanisms are at play in the many brain areas involved in working memory. Herein, we first summarize the palette of different mechanisms that can generate persistent activity. We then discuss recent work that asks which mechanisms underlie persistent activity in different brain areas. Finally, we discuss future studies that might tackle this question further. Our goal is to bridge between the communities of researchers who study either single-neuron biophysical, or neural circuit, mechanisms that can generate the persistent activity that underlies working memory.", "title": "" }, { "docid": "0cd5813a069c8955871784cd3e63aa83", "text": "Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. 
Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.", "title": "" }, { "docid": "4dd0d34f6b67edee60f2e6fae5bd8dd9", "text": "Virtual learning environments facilitate online learning, generating and storing large amounts of data during the learning/teaching process. This stored data enables extraction of valuable information using data mining. In this article, we present a systematic mapping, containing 42 papers, where data mining techniques are applied to predict students performance using Moodle data. Results show that decision trees are the most used classification approach. Furthermore, students interactions in forums are the main Moodle attribute analyzed by researchers.", "title": "" }, { "docid": "03f98b18392bd178ea68ce19b13589fa", "text": "Neural network techniques are widely used in network embedding, boosting the result of node classification, link prediction, visualization and other tasks in both aspects of efficiency and quality. All the state of art algorithms put effort on the neighborhood information and try to make full use of it. However, it is hard to recognize core periphery structures simply based on neighborhood. In this paper, we first discuss the influence brought by random-walk based sampling strategies to the embedding results. Theoretical and experimental evidences show that random-walk based sampling strategies fail to fully capture structural equivalence. We present a new method, SNS, that performs network embeddings using structural information (namely graphlets) to enhance its quality. 
SNS effectively utilizes both neighbor information and local-subgraphs similarity to learn node embeddings. This is the first framework that combines these two aspects as far as we know, positively merging two important areas in graph mining and machine learning. Moreover, we investigate what kinds of local-subgraph features matter the most on the node classification task, which enables us to further improve the embedding quality. Experiments show that our algorithm outperforms other unsupervised and semi-supervised neural network embedding algorithms on several real-world datasets.", "title": "" }, { "docid": "4e46fb5c1abb3379519b04a84183b055", "text": "Categorical models of emotions posit neurally and physiologically distinct human basic emotions. We tested this assumption by using multivariate pattern analysis (MVPA) to classify brain activity patterns of 6 basic emotions (disgust, fear, happiness, sadness, anger, and surprise) in 3 experiments. Emotions were induced with short movies or mental imagery during functional magnetic resonance imaging. MVPA accurately classified emotions induced by both methods, and the classification generalized from one induction condition to another and across individuals. Brain regions contributing most to the classification accuracy included medial and inferior lateral prefrontal cortices, frontal pole, precentral and postcentral gyri, precuneus, and posterior cingulate cortex. Thus, specific neural signatures across these regions hold representations of different emotional states in multimodal fashion, independently of how the emotions are induced. 
Similarity of subjective experiences between emotions was associated with similarity of neural patterns for the same emotions, suggesting a direct link between activity in these brain regions and the subjective emotional experience.", "title": "" }, { "docid": "2f17160c9f01aa779b1745a57e34e1aa", "text": "OBJECTIVE\nTo report an ataxic variant of Alzheimer disease expressing a novel molecular phenotype.\n\n\nDESIGN\nDescription of a novel phenotype associated with a presenilin 1 mutation.\n\n\nSETTING\nThe subject was an outpatient who was diagnosed at the local referral center.\n\n\nPATIENT\nA 28-year-old man presented with psychiatric symptoms and cerebellar signs, followed by cognitive dysfunction. Severe beta-amyloid (Abeta) deposition was accompanied by neurofibrillary tangles and cell loss in the cerebral cortex and by Purkinje cell dendrite loss in the cerebellum. A presenilin 1 gene (PSEN1) S170F mutation was detected.\n\n\nMAIN OUTCOME MEASURES\nWe analyzed the processing of Abeta precursor protein in vitro as well as the Abeta species in brain tissue.\n\n\nRESULTS\nThe PSEN1 S170F mutation induced a 3-fold increase of both secreted Abeta(42) and Abeta(40) species and a 60% increase of secreted Abeta precursor protein in transfected cells. Soluble and insoluble fractions isolated from brain tissue showed a prevalence of N-terminally truncated Abeta species ending at both residues 40 and 42.\n\n\nCONCLUSION\nThese findings define a new Alzheimer disease molecular phenotype and support the concept that the phenotypic variability associated with PSEN1 mutations may be dictated by the Abeta aggregates' composition.", "title": "" }, { "docid": "f5df06ebd22d4eac95287b38a5c3cc6b", "text": "We discuss the use of a double exponentially tapered slot antenna (DETSA) fabricated on flexible liquid crystal polymer (LCP) as a candidate for ultrawideband (UWB) communications systems. 
The features of the antenna and the effect of the antenna on a transmitted pulse are investigated. Return loss and E and H plane radiation pattern measurements are presented at several frequencies covering the whole ultra wide band. The return loss remains below -10 dB and the shape of the radiation pattern remains fairly constant in the whole UWB range (3.1 to 10.6 GHz). The main lobe characteristic of the radiation pattern remains stable even when the antenna is significantly conformed. The major effect of the conformation is an increase in the cross polarization component amplitude. The system: transmitter DETSA-channel receiver DETSA is measured in the frequency domain and shows that the antenna adds very little distortion to a transmitted pulse. The distortion remains small even when both transmitter and receiver antennas are folded, although it increases slightly.", "title": "" }, { "docid": "27bcbde431c340db7544b58faa597fb7", "text": "Face and eye detection algorithms are deployed in a wide variety of applications. Unfortunately, there has been no quantitative comparison of how these detectors perform under difficult circumstances. We created a dataset of low light and long distance images which possess some of the problems encountered by face and eye detectors solving real world problems. The dataset we created is composed of reimaged images (photohead) and semi-synthetic heads imaged under varying conditions of low light, atmospheric blur, and distances of 3m, 50m, 80m, and 200m. This paper analyzes the detection and localization performance of the participating face and eye algorithms compared with the Viola Jones detector and four leading commercial face detectors. Performance is characterized under the different conditions and parameterized by per-image brightness and contrast. 
In localization accuracy for eyes, the groups/companies focusing on long-range face detection outperform leading commercial applications.", "title": "" }, { "docid": "a583bbf2deac0bf99e2790c47598cddd", "text": "We introduce TensorFlow Agents, an efficient infrastructure paradigm for building parallel reinforcement learning algorithms in TensorFlow. We simulate multiple environments in parallel, and group them to perform the neural network computation on a batch rather than individual observations. This allows the TensorFlow execution engine to parallelize computation, without the need for manual synchronization. Environments are stepped in separate Python processes to progress them in parallel without interference of the global interpreter lock. As part of this project, we introduce BatchPPO, an efficient implementation of the proximal policy optimization algorithm. By open sourcing TensorFlow Agents, we hope to provide a flexible starting point for future projects that accelerates future research in the field.", "title": "" }, { "docid": "6e63767a96f0d57ecfe98f55c89ae778", "text": "We investigate the use of Deep Q-Learning to control a simulated car via reinforcement learning. We start by implementing the approach of [5] ourselves, and then experimenting with various possible alterations to improve performance on our selected task. In particular, we experiment with various reward functions to induce specific driving behavior, double Q-learning, gradient update rules, and other hyperparameters. We find we are successfully able to train an agent to control the simulated car in JavaScript Racer [3] in some respects. Our agent successfully learned the turning operation, progressively gaining the ability to navigate larger sections of the simulated raceway without crashing. 
In obstacle avoidance, however, our agent faced challenges which we suspect are due to insufficient training time.", "title": "" }, { "docid": "c71d27d4e4e9c85e3f5016fa36d20a16", "text": "We present GEM, the first heterogeneous graph neural network approach for detecting malicious accounts at Alipay, one of the world's leading mobile cashless payment platforms. Our approach, inspired by a connected subgraph approach, adaptively learns discriminative embeddings from heterogeneous account-device graphs based on two fundamental weaknesses of attackers, i.e. device aggregation and activity aggregation. Since the heterogeneous graph consists of various types of nodes, we propose an attention mechanism to learn the importance of different types of nodes, while using the sum operator for modeling the aggregation patterns of nodes in each type. Experiments show that our approach consistently yields promising results compared with competitive methods over time.", "title": "" }, { "docid": "fa99f24d38858b5951c7af587194f4e3", "text": "Understanding foreign speech is difficult, in part because of unusual mappings between sounds and words. It is known that listeners in their native language can use lexical knowledge (about how words ought to sound) to learn how to interpret unusual speech-sounds. We therefore investigated whether subtitles, which provide lexical information, support perceptual learning about foreign speech. Dutch participants, unfamiliar with Scottish and Australian regional accents of English, watched Scottish or Australian English videos with Dutch, English or no subtitles, and then repeated audio fragments of both accents. Repetition of novel fragments was worse after Dutch-subtitle exposure but better after English-subtitle exposure. 
Native-language subtitles appear to create lexical interference, but foreign-language subtitles assist speech learning by indicating which words (and hence sounds) are being spoken.", "title": "" }, { "docid": "951d3f81129ecafa2d271d4398d9b3e6", "text": "Content-based image retrieval methods are developed to help people find what they desire based on preferred images instead of linguistic information. This paper focuses on capturing the image features representing details of the collar designs, which is important for people to choose clothing. The quality of the feature extraction methods is important for the queries. This paper presents several new methods for collar-design feature extraction. A prototype of a clothing image retrieval system based on a relevance feedback approach and the optimum-path forest algorithm is also developed to improve the query results and allow users to find clothing images of more preferred designs. A series of experiments are conducted to test the qualities of the feature extraction methods and validate the effectiveness and efficiency of the RF-OPF prototype from multiple aspects. The evaluation scores of initial query results are used to test the qualities of the feature extraction methods. The average scores of all RF steps, the average numbers of RF iterations taken before achieving desired results and the score transition of RF iterations are used to validate the effectiveness and efficiency of the proposed RF-OPF prototype.", "title": "" }, { "docid": "37b60f30aba47a0c2bb3d31c848ee4bc", "text": "This research analyzed the perception of Makassar’s teenagers toward Korean drama and music and their influence on them. Interviews and a digital recorder were used as instruments of the research with ten respondents who are members of the Makassar Korean Lover Community. Then, in analyzing the data, the researchers used a descriptive qualitative method aimed at getting in-depth information about the Korean wave in Makassar. 
The results of the study found that Makassar’s teenagers take enormous interest in Korean culture, especially Korean drama and music. However, most respondents also realize that the presence of Korean culture has a great negative impact on them and their environments. Korean culture itself has an effect on several aspects, such as influence on behavior, influence on taste, and influence on the environment as well.", "title": "" }, { "docid": "8b548e2c1922e6e105ab40b60fd7433c", "text": "Although there have been many decades of research and commercial presence on high performance general purpose processors, there are still many applications that require fully customized hardware architectures for further computational acceleration. Recently, deep learning has been successfully used to learn in a wide variety of applications, but its heavy computation demand has considerably limited its practical applications. This paper proposes a fully pipelined acceleration architecture to alleviate the high computational demand of an artificial neural network (ANN), specifically restricted Boltzmann machine (RBM) ANNs. The implemented RBM ANN accelerator (integrating $1024\times 1024$ network size, using 128 input cases per batch, and running at a 303-MHz clock frequency) integrated in a state-of-the-art field-programmable gate array (FPGA) (Xilinx Virtex 7 XC7V-2000T) provides a computational performance of 301-billion connection-updates-per-second and about 193 times higher performance than a software solution running on general purpose processors. 
Most importantly, the architecture enables over 4 times (12 times in batch learning) higher performance compared with a previous work when both are implemented in an FPGA device (XC2VP70).", "title": "" }, { "docid": "56e406924a967700fba3fe554b9a8484", "text": "Wearable orthoses can function both as assistive devices, which allow the user to live independently, and as rehabilitation devices, which allow the user to regain use of an impaired limb. To be fully wearable, such devices must have intuitive controls, and to improve quality of life, the device should enable the user to perform Activities of Daily Living. In this context, we explore the feasibility of using electromyography (EMG) signals to control a wearable exotendon device to enable pick and place tasks. We use an easy to don, commodity forearm EMG band with 8 sensors to create an EMG pattern classification control for an exotendon device. With this control, we are able to detect a user's intent to open, and can thus enable extension and pick and place tasks. In experiments with stroke survivors, we explore the accuracy of this control in both non-functional and functional tasks. Our results support the feasibility of developing wearable devices with intuitive controls which provide a functional context for rehabilitation.", "title": "" } ]
scidocsrr
a5fac85a85177ff57a7cc5e8506bf308
Causal Discovery from Subsampled Time Series Data by Constraint Optimization
[ { "docid": "17deb6c21da616a73a6daedf971765c3", "text": "Recent approaches to causal discovery based on Boolean satisfiability solvers have opened new opportunities to consider search spaces for causal models with both feedback cycles and unmeasured confounders. However, the available methods have so far not been able to provide a principled account of how to handle conflicting constraints that arise from statistical variability. Here we present a new approach that preserves the versatility of Boolean constraint solving and attains a high accuracy despite the presence of statistical errors. We develop a new logical encoding of (in)dependence constraints that is both well suited for the domain and allows for faster solving. We represent this encoding in Answer Set Programming (ASP), and apply a state-of-theart ASP solver for the optimization task. Based on different theoretical motivations, we explore a variety of methods to handle statistical errors. Our approach currently scales to cyclic latent variable models with up to seven observed variables and outperforms the available constraintbased methods in accuracy.", "title": "" } ]
[ { "docid": "e78e70d347fb76a79755442cabe1fbe0", "text": "Recent advances in neural variational inference have facilitated efficient training of powerful directed graphical models with continuous latent variables, such as variational autoencoders. However, these models usually assume simple, unimodal priors — such as the multivariate Gaussian distribution — yet many realworld data distributions are highly complex and multi-modal. Examples of complex and multi-modal distributions range from topics in newswire text to conversational dialogue responses. When such latent variable models are applied to these domains, the restriction of the simple, uni-modal prior hinders the overall expressivity of the learned model as it cannot possibly capture more complex aspects of the data distribution. To overcome this critical restriction, we propose a flexible, simple prior distribution which can be learned efficiently and potentially capture an exponential number of modes of a target distribution. We develop the multi-modal variational encoder-decoder framework and investigate the effectiveness of the proposed prior in several natural language processing modeling tasks, including document modeling and dialogue modeling.", "title": "" }, { "docid": "c2558388fb20454fa6f4653b1e4ab676", "text": "Recently, Convolutional Neural Network (CNN) based models have achieved great success in Single Image Super-Resolution (SISR). Owing to the strength of deep networks, these CNN models learn an effective nonlinear mapping from the low-resolution input image to the high-resolution target image, at the cost of requiring enormous parameters. This paper proposes a very deep CNN model (up to 52 convolutional layers) named Deep Recursive Residual Network (DRRN) that strives for deep yet concise networks. 
Specifically, residual learning is adopted, both in global and local manners, to mitigate the difficulty of training very deep networks, while recursive learning is used to control the model parameters while increasing the depth. Extensive benchmark evaluation shows that DRRN significantly outperforms the state of the art in SISR, while utilizing far fewer parameters. Code is available at https://github.com/tyshiwo/DRRN_CVPR17.", "title": "" }, { "docid": "18f739a605222415afdea4f725201fba", "text": "I discuss open theoretical questions pertaining to the modified dynamics (MOND)–a proposed alternative to dark matter, which posits a breakdown of Newtonian dynamics in the limit of small accelerations. In particular, I point out the reasons for thinking that MOND is an effective theory–perhaps, despite appearance, not even in conflict with GR. I then contrast the two interpretations of MOND as modified gravity and as modified inertia. I describe two mechanical models that are described by potential theories similar to (non-relativistic) MOND: a potential-flow model, and a membrane model. These might shed some light on a possible origin of MOND. The possible involvement of vacuum effects is also speculated on.", "title": "" }, { "docid": "c197e1ab49287fc571f2a99a9501bf84", "text": "X-rays are commonly performed imaging tests that use small amounts of radiation to produce pictures of the organs, tissues, and bones of the body. X-rays of the chest are used to detect abnormalities or diseases of the airways, blood vessels, bones, heart, and lungs. In this work we present a stochastic attention-based model that is capable of learning what regions within a chest X-ray scan should be visually explored in order to conclude that the scan contains a specific radiological abnormality. The proposed model is a recurrent neural network (RNN) that learns to sequentially sample the entire X-ray and focus only on informative areas that are likely to contain the relevant information. 
We report on experiments carried out with more than 100,000 X-rays containing enlarged hearts or medical devices. The model has been trained using reinforcement learning methods to learn task-specific policies.", "title": "" }, { "docid": "ed0d1e110347313285a6b478ff8875e3", "text": "Data mining is an area of computer science with huge potential; it is the process of discovering or extracting information from large databases or datasets. There are many different areas under Data Mining and one of them is Classification, or supervised learning. Classification can also be implemented through a number of different approaches or algorithms. We have conducted a comparison between three algorithms with the help of WEKA (The Waikato Environment for Knowledge Analysis), which is open source software. It contains different types of data mining algorithms. This paper presents a discussion of the Decision tree, Bayesian Network and K-Nearest Neighbor algorithms. Here, for comparing the results, we have used as parameters the correctly classified instances, incorrectly classified instances, time taken, kappa statistic, relative absolute error, and root relative squared error.", "title": "" }, { "docid": "45c04c80a5e4c852c4e84ba66bd420dd", "text": "This paper addresses empirically and theoretically a question derived from the chunking theory of memory (Chase & Simon, 1973a, 1973b): To what extent is skilled chess memory limited by the size of short-term memory (about seven chunks)? This question is addressed first with an experiment where subjects, ranging from class A players to grandmasters, are asked to recall up to five positions presented for 5 s each. Results show a decline in percentage of recall with additional boards, but also show that expert players recall more pieces than is predicted by the chunking theory in its original form. A second experiment shows that longer latencies between the presentation of boards facilitate recall. 
In a third experiment, a Chessmaster gradually increases the number of boards he can reproduce with higher than 70% average accuracy to nine, replacing as many as 160 pieces correctly. To account for the results of these experiments, a revision of the Chase-Simon theory is proposed. It is suggested that chess players, like experts in other recall tasks, use long-term memory retrieval structures (Chase & Ericsson, 1982) or templates in addition to chunks in short-term memory to store information rapidly.", "title": "" }, { "docid": "de70b208289bad1bc410bcb7a76e56df", "text": "Instant Messaging chat sessions are realtime text-based conversations which can be analyzed using dialogue-act models. We describe a statistical approach for modelling and detecting dialogue acts in Instant Messaging dialogue. This involved the collection of a small set of task-based dialogues and annotating them with a revised tag set. We then dealt with segmentation and synchronisation issues which do not arise in spoken dialogue. The model we developed combines naive Bayes and dialogue-act n-grams to obtain better than 80% accuracy in our tagging experiment.", "title": "" }, { "docid": "530906b8827394b2dde40ae98d050b7b", "text": "The aim of transfer learning is to improve prediction accuracy on a target task by exploiting the training examples for tasks that are related to the target one. Transfer learning has received more attention in recent years, because this technique is considered to be helpful in reducing the cost of labeling. In this paper, we propose a very simple approach to transfer learning: TrBagg, which is the extension of bagging. TrBagg is composed of two stages: Many weak classifiers are first generated as in standard bagging, and these classifiers are then filtered based on their usefulness for the target task. This simplicity makes it easy to work reasonably well without severe tuning of learning parameters. 
Further, our algorithm incorporates an algorithmic scheme to avoid negative transfer. We applied TrBagg to personalized tag prediction tasks for social bookmarks. Our approach has several convenient characteristics for this task, such as adaptation to multiple tasks with low computational cost.", "title": "" }, { "docid": "06e50887ddec8b0e858173499ce2ee11", "text": "Over the last few years, we've seen a plethora of Internet of Things (IoT) solutions, products, and services make their way into the industry's marketplace. All such solutions will capture large amounts of data pertaining to the environment as well as their users. The IoT's objective is to learn more and better serve system users. Some IoT solutions might store data locally on devices ("things"), whereas others might store it in the cloud. The real value of collecting data comes through data processing and aggregation on a large scale, where new knowledge can be extracted. However, such procedures can lead to user privacy issues. This article discusses some of the main challenges of privacy in the IoT as well as opportunities for research and innovation. The authors also introduce some of the ongoing research efforts that address IoT privacy issues.", "title": "" }, { "docid": "b42b17131236abc1ee3066905025aa8c", "text": "The planet Mars, while cold and arid today, once possessed a warm and wet climate, as evidenced by extensive fluvial features observable on its surface. It is believed that the warm climate of the primitive Mars was created by a strong greenhouse effect caused by a thick CO2 atmosphere. Mars lost its warm climate when most of the available volatile CO2 was fixed into the form of carbonate rock due to the action of cycling water. It is believed, however, that sufficient CO2 to form a 300 to 600 mb atmosphere may still exist in volatile form, either adsorbed into the regolith or frozen out at the south pole.
This CO2 may be released by planetary warming, and as the CO2 atmosphere thickens, positive feedback is produced which can accelerate the warming trend. Thus it is conceivable that, by taking advantage of the positive feedback inherent in Mars' atmosphere/regolith CO2 system, engineering efforts can produce drastic changes in climate and pressure on a planetary scale. In this paper we propose a mathematical model of the Martian CO2 system, and use it to produce an analysis which clarifies the potential of positive feedback to accelerate planetary engineering efforts. It is shown that by taking advantage of the feedback, the requirements for planetary engineering can be reduced by about 2 orders of magnitude relative to previous estimates. We examine the potential of various schemes for producing the initial warming to drive the process, including the stationing of orbiting mirrors, the importation of natural volatiles with high greenhouse capacity from the outer solar system, and the production of artificial halocarbon greenhouse gases on the Martian surface through in-situ industry. If the orbital mirror scheme is adopted, mirrors with dimension on the order of 100 km radius are required to vaporize the CO2 in the south polar cap. If manufactured of solar-sail-like material, such mirrors would have a mass on the order of 200,000 tonnes. If manufactured in space out of asteroidal or Martian moon material, about 120 MWe-years of energy would be needed to produce the required aluminum. This amount of power can be provided by near-term multimegawatt nuclear power units, such as the 5 MWe modules now under consideration for NEP spacecraft. Orbital transfer of very massive bodies from the outer solar system can be accomplished using nuclear thermal rocket engines using the asteroid's volatile material as propellant.
Using major planets for gravity assists, the rocket ∆V required to move an outer solar system asteroid onto a collision trajectory with Mars can be as little as 300 m/s. If the asteroid is made of NH3, specific impulses of about 400 s can be attained, and as little as 10% of the asteroid will be required for propellant. Four 5000 MWt NTR engines would require a 10 year burn time to push a 10 billion tonne asteroid through a ∆V of 300 m/s. About 4 such objects would be sufficient to greenhouse Mars. Greenhousing Mars via the manufacture of halocarbon gases on the planet's surface may well be the most practical option. Total surface power requirements to drive planetary warming using this method are calculated and found to be on the order of 1000 MWe, and the required times scale for climate and atmosphere modification is on the order of 50 years. It is concluded that a drastic modification of Martian conditions can be achieved using 21st century technology. The Mars so produced will closely resemble the conditions existing on the primitive Mars. Humans operating on the surface of such a Mars would require breathing gear, but pressure suits would be unnecessary. With outside atmospheric pressures raised, it will be possible to create large dwelling areas by means of very large inflatable structures. Average temperatures could be above the freezing point of water for significant regions during portions of the year, enabling the growth of plant life in the open. The spread of plants could produce enough oxygen to make Mars habitable for animals in several millennia. More rapid oxygenation would require engineering efforts supported by multi-terrawatt power sources. 
It is speculated that the desire to speed the terraforming of Mars will be a driver for developing such technologies, which in turn will define a leap in human power over nature as dramatic as that which accompanied the creation of post-Renaissance industrial civilization.", "title": "" }, { "docid": "85908a576c13755e792d52d02947f8b3", "text": "Quick Response Code has been widely used in the automatic identification fields. In order to adapting various sizes, a little dirty or damaged, and various lighting conditions of bar code image, this paper proposes a novel implementation of real-time Quick Response Code recognition using mobile, which is an efficient technology used for data transferring. An image processing system based on mobile is described to be able to binarize, locate, segment, and decode the QR Code. Our experimental results indicate that these algorithms are robust to real world scene image.", "title": "" }, { "docid": "e9474d646b9da5e611475f4cdfdfc30e", "text": "Wearable medical sensors (WMSs) are garnering ever-increasing attention from both the scientific community and the industry. Driven by technological advances in sensing, wireless communication, and machine learning, WMS-based systems have begun transforming our daily lives. Although WMSs were initially developed to enable low-cost solutions for continuous health monitoring, the applications of WMS-based systems now range far beyond health care. Several research efforts have proposed the use of such systems in diverse application domains, e.g., education, human-computer interaction, and security. Even though the number of such research studies has grown drastically in the last few years, the potential challenges associated with their design, development, and implementation are neither well-studied nor well-recognized. This article discusses various services, applications, and systems that have been developed based on WMSs and sheds light on their design goals and challenges. 
We first provide a brief history of WMSs and discuss how their market is growing. We then discuss the scope of applications of WMS-based systems. Next, we describe the architecture of a typical WMS-based system and the components that constitute such a system, and their limitations. Thereafter, we suggest a list of desirable design goals that WMS-based systems should satisfy. Finally, we discuss various research directions related to WMSs and how previous research studies have attempted to address the limitations of the components used in WMS-based systems and satisfy the desirable design goals.", "title": "" }, { "docid": "b4e676d4d11039c5c5feb5e549eb364f", "text": "Abstract: Qualitative case study methodology provides tools for researchers to study complex phenomena within their contexts. When the approach is applied correctly, it becomes a valuable method for health science research to develop theory, evaluate programs, and develop interventions. The purpose of this paper is to guide the novice researcher in identifying the key elements for designing and implementing qualitative case study research projects. An overview of the types of case study designs is provided along with general recommendations for writing the research questions, developing propositions, determining the “case” under study, binding the case, and a discussion of data sources and triangulation. To facilitate application of these principles, clear examples of research questions, study propositions, and the different types of case study designs are provided. Keywords: Case Study and Qualitative Method. Publication Date: 12-1-2008.", "title": "" }, { "docid": "eb4cac4ac288bc65df70f906b674ceb5", "text": "LPWAN (Low Power Wide Area Networks) technologies have been attracting attention continuously in IoT (Internet of Things). LoRaWAN is present on the market as an LPWAN technology and it has features such as low power consumption, low transceiver chip cost, and wide coverage area. In LoRaWAN, end devices must perform a join procedure to participate in the network. Attackers could exploit the join procedure because it has a security vulnerability. A replay attack is one method of exploiting this vulnerability in the join procedure.
In this paper, we propose an attack scenario and a countermeasure against replay attacks that may occur in the join-request transfer process.", "title": "" }, { "docid": "ff8c3ce63b340a682e99540313be7fe7", "text": "Detecting and identifying phishing websites in real time, particularly for e-banking, is a complex and dynamic problem involving many factors and criteria. Because of the subjective considerations and the ambiguities involved in the detection, Fuzzy Data Mining (DM) techniques can be an effective tool in assessing and identifying phishing websites for e-banking, since they offer a more natural way of dealing with quality factors rather than exact values.
In this paper, we present a novel approach to overcome the ‘fuzziness’ in e-banking phishing website assessment and propose an intelligent, resilient, and effective model for detecting e-banking phishing websites. The proposed model is based on Fuzzy Logic (FL) combined with Data Mining algorithms to characterize the e-banking phishing website factors and to investigate its techniques by classifying the phishing types and defining six e-banking phishing website attack criteria with a layered structure. The proposed e-banking phishing website model showed the significant importance of two phishing website criteria, (URL & Domain Identity) and (Security & Encryption), in the final phishing detection rate, taking into consideration their characteristic association and relationship with each other, as shown by the fuzzy data mining classification and association rule algorithms. Our phishing model also showed the insignificant influence of the (Page Style & Content) and (Social Human Factor) criteria on the final phishing detection rate.", "title": "" }, { "docid": "27c7afd468d969509eec2b2a3260a679", "text": "The impact of predictive genetic testing on cancer care can be measured by the increased demand for and utilization of genetic services as well as in the progress made in reducing cancer risks in known mutation carriers. Nonetheless, differential access to and utilization of genetic counseling and cancer predisposition testing among underserved racial and ethnic minorities compared with the white population has led to growing health care disparities in clinical cancer genetics that are only beginning to be addressed. Furthermore, deficiencies in the utility of genetic testing in underserved populations as a result of limited testing experience and in the effectiveness of risk-reducing interventions compound access and knowledge-base disparities.
The recent literature on racial/ethnic health care disparities is briefly reviewed, and is followed by a discussion of the current limitations of risk assessment and genetic testing outside of white populations. The importance of expanded testing in underserved populations is emphasized.", "title": "" }, { "docid": "788bf97b435dfbe9d31373e21bc76716", "text": "In this paper, we study the design and workspace of a 6–6 cable-suspended parallel robot. The workspace volume is characterized as the set of points where the centroid of the moving platform can reach with tensions in all suspension cables at a constant orientation. This paper attempts to tackle some aspects of optimal design of a 6DOF cable robot by addressing the variations of the workspace volume and the accuracy of the robot using different geometric configurations, different sizes and orientations of the moving platform. The global condition index is used as a performance index of a robot with respect to the force and velocity transmission over the whole workspace. The results are used for design analysis of the cable-robot for a specific motion of the moving platform. 2004 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "8255146164ff42f8755d8e74fd24cfa1", "text": "We present a named-entity recognition (NER) system for parallel multilingual text. Our system handles three languages (i.e., English, French, and Spanish) and is tailored to the biomedical domain. For each language, we design a supervised knowledge-based CRF model with rich biomedical and general domain information. We use the sentence alignment of the parallel corpora, the word alignment generated by the GIZA++[8] tool, and Wikipedia-based word alignment in order to transfer system predictions made by individual language models to the remaining parallel languages. We re-train each individual language system using the transferred predictions and generate a final enriched NER model for each language. 
The enriched system performs better than the initial system based on the predictions transferred from the other language systems. Each language model benefits from the external knowledge extracted from biomedical and general domain resources.", "title": "" } ]
scidocsrr
93bc35b87540a4c67cdb45624d821210
The Riemann Zeros and Eigenvalue Asymptotics
[ { "docid": "d15a2f27112c6bd8bfa2f9c01471c512", "text": "Assuming a special version of the Montgomery-Odlyzko law on the pair correlation of zeros of the Riemann zeta function conjectured by Rudnick and Sarnak and assuming the Riemann Hypothesis, we prove new results on the prime number theorem, difference of consecutive primes, and the twin prime conjecture. 1. Introduction. Assuming the Riemann Hypothesis (RH), let us denote by ρ = 1/2 + iγ a nontrivial zero of a primitive L-function L(s, π) attached to an irreducible cuspidal automorphic representation of GL_m, m ≥ 1, over Q. When m = 1, this L-function is the Riemann zeta function ζ(s) or the Dirichlet L-function L(s, χ) for a primitive character χ. Rudnick and Sarnak [13] examined the n-level correlation for these zeros and made a far-reaching conjecture which is called the Montgomery [9]-Odlyzko [11], [12] Law by Katz and Sarnak [6]. Rudnick and Sarnak also proved a case of their conjecture when a test function f has its Fourier transform f̂ supported in a restricted region. In this article, we will show that a version of the above conjecture for the pair correlation of zeros of the zeta function ζ(s) implies interesting arithmetical results on prime distribution (Theorems 2, 3, and 4). These results can give us deep insight on possible ultimate bounds of these prime distribution problems. One can also see that the pair (and n-level) correlation of zeros of zeta and L-functions is a powerful method in number theory. Our computation shows that the test function f and the support of its Fourier transform f̂ play a crucial role in the conjecture. To see the conjecture in Rudnick and Sarnak [13] in the case of the zeta function ζ(s) and n = 2, the pair correlation, we use a test function f(x, y) which satisfies the following three conditions: (i) f(x, y) = f(y, x) for any x, y ∈ R, (ii) f(x + t, y + t) = f(x, y) for any t ∈ R, and (iii) f(x, y) tends to 0 rapidly as |(x, y)| → ∞ on the hyperplane x + y = 0. Arch. Math. (Archiv der Mathematik) 76 (2001) 41-50, Birkhäuser Verlag, Basel, 2001. Mathematics Subject Classification (1991): 11M26, 11N05, 11N75. 1) Supported in part by China NNSF Grant # 19701019. 2) Supported in part by USA NSF Grant # DMS 97-01225. Define the function W2(x, y) = 1 − (sin π(x − y) / (π(x − y)))^2. Denote the Dirac function by δ(x), which satisfies ∫_R δ(x) dx = 1 and defines a distribution f ↦ f(0). We then define the pair correlation sum of zeros γ_j of the zeta function: R2(T, f, h) = Σ_{γ1, γ2 distinct} h(γ1/T, γ2/T) f(Lγ1/2π, Lγ2/2π), where T ≥ 2, L = log T, and h(x, y) is a localized cutoff function which tends to zero rapidly when |(x, y)| tends to infinity. The conjecture proposed by Rudnick and Sarnak [13] is that R2(T, f, h) ∼ (1/2π) TL …", "title": "" } ]
[ { "docid": "e4dba25d2528a507e4b494977fd69fc0", "text": "The illegal distribution of a digital movie is a common and significant threat to the film industry. With the advent of high-speed broadband Internet access, a pirated copy of a digital video can now be easily distributed to a global audience. A possible means of limiting this type of digital theft is digital video watermarking whereby additional information, called a watermark, is embedded in the host video. This watermark can be extracted at the decoder and used to determine whether the video content is watermarked. This paper presents a review of the digital video watermarking techniques in which their applications, challenges, and important properties are discussed, and categorizes them based on the domain in which they embed the watermark. It then provides an overview of a few emerging innovative solutions using watermarks. Protecting a 3D video by watermarking is an emerging area of research. The relevant 3D video watermarking techniques in the literature are classified based on the image-based representations of a 3D video in stereoscopic, depth-image-based rendering, and multi-view video watermarking. We discuss each technique, and then present a survey of the literature. Finally, we provide a summary of this paper and propose some future research directions.", "title": "" }, { "docid": "5e333f4620908dc643ceac8a07ff2a2d", "text": "Convolutional Neural Networks (CNNs) have reached outstanding results in several complex visual recognition tasks, such as classification and scene parsing. CNNs are composed of multiple filtering layers that perform 2D convolutions over input images. The intrinsic parallelism in such a computation kernel makes it suitable to be effectively accelerated on parallel hardware. 
In this paper we propose a highly flexible and scalable architectural template for acceleration of CNNs on FPGA devices, based on the cooperation between a set of software cores and a parallel convolution engine that communicate via a tightly coupled L1 shared scratchpad. Our accelerator structure, tested on a Xilinx Zynq XC-Z7045 device, delivers peak performance up to 80 GMAC/s, corresponding to 100 MMAC/s for each DSP slice in the programmable fabric. Thanks to the flexible architecture, convolution operations can be scheduled in order to reduce input/output bandwidth down to 8 bytes per cycle without degrading the performance of the accelerator in most of the meaningful use-cases.", "title": "" }, { "docid": "a4030b9aa31d4cc0a2341236d6f18b5a", "text": "Generative adversarial networks (GANs) have achieved huge success in unsupervised learning. Most of GANs treat the discriminator as a classifier with the binary sigmoid cross entropy loss function. However, we find that the sigmoid cross entropy loss function will sometimes lead to the saturation problem in GANs learning. In this work, we propose to adopt the L2 loss function for the discriminator. The properties of the L2 loss function can improve the stabilization of GANs learning. With the usage of the L2 loss function, we propose the multi-class generative adversarial networks for the purpose of image generation with multiple classes. We evaluate the multi-class GANs on a handwritten Chinese characters dataset with 3740 classes. The experiments demonstrate that the multi-class GANs can generate elegant images on datasets with a large number of classes. Comparison experiments between the L2 loss function and the sigmoid cross entropy loss function are also conducted and the results demonstrate the stabilization of the L2 loss function.", "title": "" }, { "docid": "c93a401b7ed3031ed6571bfbbf1078c8", "text": "In this paper we propose a new footstep detection technique for data acquired using a triaxial geophone. 
The idea evolves from an investigation of the geophone transduction principle. The technique exploits the randomness of neighbouring data vectors observed when the footstep is absent. We extend the same principle to triaxial signal denoising. The effectiveness of the proposed technique for transient detection and denoising is demonstrated on real seismic data collected using a triaxial geophone.", "title": "" }, { "docid": "f1559798e0338074f28ca4aaf953b6a1", "text": "Example classifications (test set) [And09] Andriluka et al. Pictorial structures revisited: People detection and articulated pose estimation. In CVPR, 2009 [Eic09] Eichner et al. Articulated Human Pose Estimation and Search in (Almost) Unconstrained Still Images. In IJCV, 2012 [Sap10] Sapp et al. Cascaded models for articulated pose estimation. In ECCV, 2010 [Yan11] Yang and Ramanan. Articulated pose estimation with flexible mixtures-of-parts. In CVPR, 2011. References Human Pose Estimation (HPE) Algorithm Input", "title": "" }, { "docid": "6a74c2d26f5125237929031cf1ccf204", "text": "Harnessing crowds can be a powerful mechanism for increasing innovation. However, current approaches to crowd innovation rely on large numbers of contributors generating ideas independently in an unstructured way. We introduce a new approach called distributed analogical idea generation, which aims to make idea generation more effective and less reliant on chance. Drawing from the literature in cognitive science on analogy and schema induction, our approach decomposes the creative process in a structured way amenable to using crowds. In three experiments we show that distributed analogical idea generation leads to better ideas than example-based approaches, and investigate the conditions under which crowds generate good schemas and ideas.
Our results have implications for improving creativity and building systems for distributed crowd innovation.", "title": "" }, { "docid": "bd38c3f62798ed1f0b1e2baa6462123c", "text": "The key issue in image fusion is the process of defining evaluation indices for the output image and for the multi-scale image data set. This paper attempted to develop a fusion model for plantar pressure distribution images, which is expected to contribute to feature point construction based on shoe-last surface generation and modification. First, the time-series plantar pressure distribution image was preprocessed, including background removal and Laplacian of Gaussian (LoG) filtering. Then, a discrete wavelet transform and a multi-scale pixel conversion fusion operation using a parameter estimation optimized Gaussian mixture model (PEO-GMM) were performed. The output image was used in a fuzzy weighted evaluation system that included the following evaluation indices: mean, standard deviation, entropy, average gradient, and spatial frequency; the difference with the reference image, including the root mean square error, signal to noise ratio (SNR), and the peak SNR; and the difference with the source image, including the cross entropy, joint entropy, mutual information, deviation index, correlation coefficient, and the degree of distortion. These parameters were used to evaluate the comprehensive evaluation value for the synthesized image. The image reflected the fusion of the plantar pressure distribution using the proposed method compared with other fusion methods, such as up-down, mean-mean, and max-min fusion. The experimental results showed that the proposed LoG filtering with the PEO-GMM fusion operator outperformed other methods.", "title": "" }, { "docid": "2d0b170508ce03d649cf62ceef79a05a", "text": "The gyroscope is one of the primary sensors for air vehicle navigation and control.
This paper investigates the noise characteristics of microelectromechanical systems (MEMS) gyroscope null drift and temperature compensation. This study mainly focuses on temperature as a long-term error source. An in-house-designed inertial measurement unit (IMU) is used to perform temperature effect testing in the study. The IMU is placed into a temperature control chamber. The chamber temperature is controlled to increase from 25 °C to 80 °C at approximately 0.8 °C per minute. After that, the temperature is decreased to -40 °C and then returns to 25 °C. The null voltage measurements clearly demonstrate the rapidly changing short-term random drift and slowly changing long-term drift due to temperature variations. The characteristics of the short-term random drifts are analyzed and represented in probability density functions. A temperature calibration mechanism is established by using an artificial neural network to compensate the long-term drift. With the temperature calibration, the attitude computation problem due to gyro drifts can be improved significantly.", "title": "" }, { "docid": "3c53d2589875a60b6c85cb8873a7c9a8", "text": "presenting with bullous pemphigoid-like lesions. Dermatol Online J 2006; 12: 19. 3 Bhawan J, Milstone E, Malhotra R, et al. Scabies presenting as bullous pemphigoid-like eruption. J Am Acad Dermatol 1991; 24: 179–181. 4 Ostlere LS, Harris D, Rustin MH. Scabies associated with a bullous pemphigoid-like eruption. Br J Dermatol 1993; 128: 217–219. 5 Parodi A, Saino M, Rebora A. Bullous pemphigoid-like scabies. Clin Exp Dermatol 1993; 18: 293. 6 Slawsky LD, Maroon M, Tyler WB, et al. Association of scabies with a bullous pemphigoid-like eruption. J Am Acad Dermatol 1996; 34: 878–879. 7 Chen MC, Luo DQ. Bullous scabies failing to respond to glucocorticoids, immunoglobulin, and cyclophosphamide. Int J Dermatol 2014; 53: 265–266. 8 Nakamura E, Taniguchi H, Ohtaki N.
A case of crusted scabies with a bullous pemphigoid-like eruption and nail involvement. J Dermatol 2006; 33: 196–201. 9 Galvany Rossell L, Salleras Redonnet M, Umbert Millet P. Bullous scabies responding to ivermectin therapy. Actas Dermosifiliogr 2010; 101: 81–84. 10 Gutte RM. Bullous scabies in an adult: a case report with review of literature. Indian Dermatol Online J 2013; 4: 311–313.", "title": "" }, { "docid": "43100f1c6563b4af125c1c6040daa437", "text": "Humans can naturally understand an image in depth with the aid of rich knowledge accumulated from daily lives or professions. For example, to achieve fine-grained image recognition (e.g., categorizing hundreds of subordinate categories of birds) usually requires a comprehensive visual concept organization including category labels and part-level attributes. In this work, we investigate how to unify rich professional knowledge with deep neural network architectures and propose a Knowledge-Embedded Representation Learning (KERL) framework for handling the problem of fine-grained image recognition. Specifically, we organize the rich visual concepts in the form of knowledge graph and employ a Gated Graph Neural Network to propagate node message through the graph for generating the knowledge representation. By introducing a novel gated mechanism, our KERL framework incorporates this knowledge representation into the discriminative image feature learning, i.e., implicitly associating the specific attributes with the feature maps. Compared with existing methods of fine-grained image classification, our KERL framework has several appealing properties: i) The embedded high-level knowledge enhances the feature representation, thus facilitating distinguishing the subtle differences among subordinate categories. ii) Our framework can learn feature maps with a meaningful configuration that the highlighted regions finely accord with the nodes (specific attributes) of the knowledge graph. 
Extensive experiments on the widely used Caltech-UCSD bird dataset demonstrate the superiority of our KERL framework.", "title": "" }, { "docid": "8e878e5083d922d97f8d573c54cbb707", "text": "Deep neural networks have become the state-of-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyNet, FractalNet and RevNet, can be interpreted as different numerical discretizations of differential equations. This finding brings us a brand new perspective on the design of effective deep architectures. We can take advantage of the rich knowledge in numerical analysis to guide us in designing new and potentially more effective deep networks. As an example, we propose a linear multi-step architecture (LM-architecture) which is inspired by the linear multi-step method solving ordinary differential equations. The LM-architecture is an effective structure that can be used on any ResNet-like network. In particular, we demonstrate that LM-ResNet and LM-ResNeXt (i.e., the networks obtained by applying the LM-architecture on ResNet and ResNeXt, respectively) can achieve noticeably higher accuracy than ResNet and ResNeXt on both CIFAR and ImageNet with comparable numbers of trainable parameters. 
In particular, on both CIFAR and ImageNet, LM-ResNet/LM-ResNeXt can significantly compress the original networks while maintaining a similar performance. This can be explained mathematically using the concept of modified equation from numerical analysis. Last but not least, we also establish a connection between stochastic control and noise injection in the training process which helps to improve generalization of the networks. Furthermore, by relating stochastic training strategy with stochastic dynamic system, we can easily apply stochastic training to the networks with the LM-architecture. As an example, we introduced stochastic depth to LM-ResNet and achieve significant improvement over the original LM-ResNet on CIFAR10.", "title": "" }, { "docid": "4fea6fb309d496f9b4fd281c80a8eed7", "text": "Network alignment is the problem of matching the nodes of two graphs, maximizing the similarity of the matched nodes and the edges between them. This problem is encountered in a wide array of applications---from biological networks to social networks to ontologies---where multiple networked data sources need to be integrated. Due to the difficulty of the task, an accurate alignment can rarely be found without human assistance. 
Thus, it is of great practical importance to develop network alignment algorithms that can optimally leverage experts who are able to provide the correct alignment for a small number of nodes. Yet, only a handful of existing works address this active network alignment setting.\n The majority of the existing active methods focus on absolute queries (\"are nodes a and b the same or not?\"), whereas we argue that it is generally easier for a human expert to answer relative queries (\"which node in the set b1,...,bn is the most similar to node a?\"). This paper introduces two novel relative-query strategies, TopMatchings and GibbsMatchings, which can be applied on top of any network alignment method that constructs and solves a bipartite matching problem. Our methods identify the most informative nodes to query by sampling the matchings of the bipartite graph associated to the network-alignment instance.\n We compare the proposed approaches to several commonly-used query strategies and perform experiments on both synthetic and real-world datasets. Our sampling-based strategies yield the highest overall performance, outperforming all the baseline methods by more than 15 percentage points in some cases. In terms of accuracy, TopMatchings and GibbsMatchings perform comparably. However, GibbsMatchings is significantly more scalable, but it also requires hyperparameter tuning for a temperature parameter.", "title": "" }, { "docid": "78b61359d8668336b198af9ad59fe149", "text": "This paper discusses a fuzzy cost-based failure modes, effects, and criticality analysis (FMECA) approach for wind turbines. Conventional FMECA methods use a crisp risk priority number (RPN) as a measure of criticality which suffers from the difficulty of quantifying the risk. One method of increasing wind turbine reliability is to install a condition monitoring system (CMS). 
The RPN can be reduced with the help of a CMS because faults can be detected at an incipient level, and preventive maintenance can be scheduled. However, the cost of installing a CMS cannot be ignored. The fuzzy cost-based FMECA method proposed in this paper takes into consideration the cost of a CMS and the benefits it brings and provides a method for determining whether it is financially profitable to install a CMS. The analysis is carried out in MATLAB® which provides functions for fuzzy logic operation and defuzzification.", "title": "" }, { "docid": "11bff8c8ed48fc53c841bafcaf2a04dd", "text": "Co-Attentions are highly effective attention mechanisms for text matching applications. Co-Attention enables the learning of pairwise attentions, i.e., learning to attend based on computing word-level affinity scores between two documents. However, text matching problems can exist in either symmetrical or asymmetrical domains. For example, paraphrase identification is a symmetrical task while question-answer matching and entailment classification are considered asymmetrical domains. In this paper, we argue that Co-Attention models in asymmetrical domains require different treatment as opposed to symmetrical domains, i.e., a concept of word-level directionality should be incorporated while learning word-level similarity scores. Hence, the standard inner product in real space commonly adopted in co-attention is not suitable. This paper leverages attractive properties of the complex vector space and proposes a co-attention mechanism based on the complex-valued inner product (Hermitian products). Unlike the real dot product, the dot product in complex space is asymmetric because the first item is conjugated. Aside from modeling and encoding directionality, our proposed approach also enhances the representation learning process. 
Extensive experiments on five text matching benchmark datasets demonstrate the effectiveness of our approach.", "title": "" }, { "docid": "bb94ac9ac0c1e1f1155fc56b13bc103e", "text": "In contrast to the Android application layer, Android’s application framework’s internals and their influence on the platform security and user privacy are still largely a black box for us. In this paper, we establish a static runtime model of the application framework in order to study its internals and provide the first high-level classification of the framework’s protected resources. We thereby uncover design patterns that differ highly from the runtime model at the application layer. We demonstrate the benefits of our insights for security-focused analysis of the framework by re-visiting the important use-case of mapping Android permissions to framework/SDK API methods. We, in particular, present a novel mapping based on our findings that significantly improves on prior results in this area that were established based on insufficient knowledge about the framework’s internals. Moreover, we introduce the concept of permission locality to show that although framework services follow the principle of separation of duty, the accompanying permission checks to guard sensitive operations violate it.", "title": "" }, { "docid": "347c3929efc37dee3230189e576f14ab", "text": "Attribute-based encryption (ABE) is a vision of public key encryption that allows users to encrypt and decrypt messages based on user attributes. This functionality comes at a cost. In a typical implementation, the size of the ciphertext is proportional to the number of attributes associated with it and the decryption time is proportional to the number of attributes used during decryption. Specifically, many practical ABE implementations require one pairing operation per attribute used during decryption. This work focuses on designing ABE schemes with fast decryption algorithms. 
We restrict our attention to expressive systems without systemwide bounds or limitations, such as placing a limit on the number of attributes used in a ciphertext or a private key. In this setting, we present the first key-policy ABE system where ciphertexts can be decrypted with a constant number of pairings. We show that GPSW ciphertexts can be decrypted with only 2 pairings by increasing the private key size by a factor of |Γ |, where Γ is the set of distinct attributes that appear in the private key. We then present a generalized construction that allows each system user to independently tune various efficiency tradeoffs to their liking on a spectrum where the extremes are GPSW on one end and our very fast scheme on the other. This tuning requires no changes to the public parameters or the encryption algorithm. Strategies for choosing an individualized user optimization plan are discussed. Finally, we discuss how these ideas can be translated into the ciphertext-policy ABE setting at a higher cost.", "title": "" }, { "docid": "1468a09c57b2d83181de06236386d323", "text": "This article provides an overview of the pathogenesis of type 2 diabetes mellitus. Discussion begins by describing normal glucose homeostasis and ingestion of a typical meal and then discusses glucose homeostasis in diabetes. Topics covered include insulin secretion in type 2 diabetes mellitus and insulin resistance, the site of insulin resistance, the interaction between insulin sensitivity and secretion, the role of adipocytes in the pathogenesis of type 2 diabetes, cellular mechanisms of insulin resistance including glucose transport and phosphorylation, glycogen and synthesis,glucose and oxidation, glycolysis, and insulin signaling.", "title": "" }, { "docid": "834bc1349d6da53c277ddd7eba95dc6a", "text": "Lymphedema is a common condition frequently seen in cancer patients who have had lymph node dissection +/- radiation treatment. 
Traditional management is mainly non-surgical and unsatisfactory. Surgical treatment has relied on excisional techniques in the past. Physiologic operations have more recently been devised to help improve this condition. Assessing patients and deciding which of the available operations to offer them can be challenging. MRI is an extremely useful tool in patient assessment and treatment planning. J. Surg. Oncol. 2017;115:18-22. © 2016 Wiley Periodicals, Inc.", "title": "" }, { "docid": "73f6ba4ad9559cd3c6f7a88223e4b556", "text": "A recurring problem faced when training neural networks is that there is typically not enough data to maximize the generalization capability of deep neural networks. There are many techniques to address this, including data augmentation, dropout, and transfer learning. In this paper, we introduce an additional method, which we call smart augmentation, and we show how to use it to increase the accuracy and reduce overfitting on a target network. Smart augmentation works by creating a network that learns how to generate augmented data during the training process of a target network in a way that reduces that network's loss. This allows us to learn augmentations that minimize the error of that network. 
Smart augmentation has shown the potential to increase accuracy by demonstrably significant measures on all data sets tested. In addition, it has shown potential to achieve similar or improved performance levels with significantly smaller network sizes in a number of tested cases.", "title": "" } ]
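One of the negative passages in the record above builds on Holt-Winters forecasting for automatic aberrant-behavior detection in time series. As an illustration of that general idea only (not the paper's actual implementation — the seasonal term and the MapReduce layer are omitted, and all smoothing constants here are illustrative), a minimal level-plus-trend variant might look like:

```python
def holt_winters_anomalies(series, alpha=0.5, beta=0.3, gamma=0.4, delta=3.0):
    """Flag points that fall outside a confidence band around a Holt
    (level + trend) forecast; returns the indices of flagged points."""
    level, trend = series[0], series[1] - series[0]
    dev = 0.0  # smoothed absolute deviation of past forecast errors
    anomalies = []
    for i in range(2, len(series)):
        y = series[i]
        forecast = level + trend
        err = abs(y - forecast)
        # Flag the point if it deviates by more than delta scaled deviations.
        if dev > 0 and err > delta * dev:
            anomalies.append(i)
        # Update the deviation estimate and the level/trend state.
        dev = gamma * err + (1 - gamma) * dev
        new_level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (new_level - level) + (1 - beta) * trend
        level = new_level
    return anomalies
```

On a clean linear ramp nothing is flagged, while a single injected spike is; a fuller implementation would also avoid feeding flagged values back into the state update.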
scidocsrr
6289f60d651706a549de7eaded26b56d
Modeling data entry rates for ASR and alternative input methods
[ { "docid": "b876e62db8a45ab17d3a9d217e223eb7", "text": "A study was conducted to evaluate user performance and satisfaction in completion of a set of text creation tasks using three commercially available continuous speech recognition systems. The study also compared user performance on similar tasks using keyboard input. One part of the study (Initial Use) involved 24 users who enrolled, received training and carried out practice tasks, and then completed a set of transcription and composition tasks in a single session. In a parallel effort (Extended Use), four researchers used speech recognition to carry out real work tasks over 10 sessions with each of the three speech recognition software products. This paper presents results from the Initial Use phase of the study along with some preliminary results from the Extended Use phase. We present details of the kinds of usability and system design problems likely in current systems and several common patterns of error correction that we found.", "title": "" } ]
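The row above pairs the query "Modeling data entry rates for ASR and alternative input methods" with a usability study of dictation systems. Entry rate in such studies is conventionally reported in words per minute, with one "word" defined as five characters, and a correction-overhead measure can be derived similarly. A minimal sketch (the function names and the five-character convention are the only assumptions here; nothing is taken from the study's own data):

```python
def entry_rate_wpm(final_chars: int, seconds: float) -> float:
    """Text entry rate in words per minute; one 'word' = 5 characters."""
    return (final_chars / 5.0) / (seconds / 60.0)

def correction_overhead(total_input_actions: int, final_chars: int) -> float:
    """Fraction of input actions that did not survive into the final text."""
    return 1.0 - final_chars / total_input_actions
```

For example, a 500-character transcript produced in two minutes gives `entry_rate_wpm(500, 120)` of 50.0 words per minute.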
[ { "docid": "a4e5a60d9ce417ef74fc70580837cd55", "text": "Emotional processes are important to survive. The Darwinian adaptive concept of stress refers to natural selection since evolved individuals have acquired effective strategies to adapt to the environment and to unavoidable changes. If demands are abrupt and intense, there might be insufficient time for successful responses. Usually, stress produces a cognitive or perceptual evaluation (emotional memory) which motivates the individual to make a plan, to take a decision and to perform an action to face the demand successfully. Between several kinds of stresses, there are psychosocial and emotional stresses with cultural, social and political influences. The cultural changes have modified the way in which individuals socially interact. Deficits in familiar relationships and social isolation alter physical and mental health in young students, producing reduction of their capacities of facing stressors in school. Adolescence is characterized by significant physiological, anatomical, and psychological changes in boys and girls, who become vulnerable to psychiatric disorders. In particular for young adult students, anxiety and depression symptoms could interfere in their academic performance. In this chapter, we reviewed approaches to the study of anxiety and depression symptoms related with the academic performance in adolescent and graduate students. Results from available published studies in academic journals are reviewed to discuss the importance to detect information about academic performance, which leads to discover in many cases the very commonly subdiagnosed psychiatric disorders in adolescents, that is, anxiety and depression. With the reviewed evidence of how anxiety and depression in young adult students may alter their main activity in life (studying and academic performance), we discussed data in order to show a way in which professionals involved in schools could support students and establish a routine of intervention in any case.", "title": "" }, { "docid": "e2c4f9cfce1db6282fe3a23fd5d6f3a4", "text": "In semi-structured case-oriented business processes, the sequence of process steps is determined by case workers based on available document content associated with a case. Transitions between process execution steps are therefore case specific and depend on independent judgment of case workers. In this paper, we propose an instance-specific probabilistic process model (PPM) whose transition probabilities are customized to the semi-structured business process instance it represents. An instance-specific PPM serves as a powerful representation to predict the likelihood of different outcomes. We also show that certain instance-specific PPMs can be transformed into a Markov chain under some non-restrictive assumptions. For instance-specific PPMs that contain parallel execution of tasks, we provide an algorithm to map them to an extended space Markov chain. This way existing Markov techniques can be leveraged to make predictions about the likelihood of executing future tasks. Predictions provided by our technique could generate early alerts for case workers about the likelihood of important or undesired outcomes in an executing case instance. We have implemented and validated our approach on a simulated automobile insurance claims handling semi-structured business process. Results indicate that an instance-specific PPM provides more accurate predictions than other methods such as conditional probability. 
We also show that as more document data become available, the prediction accuracy of an instance-specific PPM increases.", "title": "" }, { "docid": "4cf77462459efa81f6ed856655ae7454", "text": "Antibody response to the influenza immunization was investigated in 83 1st-semester healthy university freshmen. Elevated levels of loneliness throughout the semester and small social networks were independently associated with poorer antibody response to 1 component of the vaccine. Those with both high levels of loneliness and a small social network had the lowest antibody response. Loneliness was also associated with greater psychological stress and negative affect, less positive affect, poorer sleep efficiency and quality, and elevations in circulating levels of cortisol. However, only the stress data were consistent with mediation of the loneliness-antibody response relation. None of these variables were associated with social network size, and hence none were potential mediators of the relation between network size and immunization response.", "title": "" }, { "docid": "cba5c85ee9a9c4f97f99c1fcb35d0623", "text": "Virtualized Cloud platforms have become increasingly common and the number of online services hosted on these platforms is also increasing rapidly. A key problem faced by providers in managing these services is detecting the performance anomalies and adjusting resources accordingly. As online services generate a very large amount of monitored data in the form of time series, it becomes very difficult to process this complex data by traditional approaches. In this work, we present a novel distributed parallel approach for performance anomaly detection. We build upon Holt-Winters forecasting for automatic aberrant behavior detection in time series. First, we extend the technique to work with MapReduce paradigm. Next, we correlate the anomalous metrics with the target Service Level Objective (SLO) in order to locate the suspicious metrics. 
We implemented and evaluated our approach on a production Cloud encompassing IaaS and PaaS service models. Experimental results confirm that our approach is efficient and effective in capturing the metrics causing performance anomalies in large time series datasets.", "title": "" }, { "docid": "92c6e4ec2497c467eaa31546e2e2be0e", "text": "The subjective sense of future time plays an essential role in human motivation. Gradually, time left becomes a better predictor than chronological age for a range of cognitive, emotional, and motivational variables. Socioemotional selectivity theory maintains that constraints on time horizons shift motivational priorities in such a way that the regulation of emotional states becomes more important than other types of goals. This motivational shift occurs with age but also appears in other contexts (for example, geographical relocations, illnesses, and war) that limit subjective future time.", "title": "" }, { "docid": "ea3ed48d47473940134027caea2679f9", "text": "With the rapid development of face recognition and detection techniques, the face has been frequently used as a biometric to find illegitimate access. This relates directly to the security of a system, and hence face spoofing detection is an important issue. However, correctly classifying spoofing or genuine faces is challenging due to diverse environment conditions such as brightness and the color of a face's skin. Therefore, we propose a novel approach to robustly find spoofing faces using the highlight removal effect, which is based on reflection information. Because a spoofing face image is recaptured by a camera, it carries additional light information. This means that a spoofing image could have more highlighted areas and abnormal reflection information. By extracting these differences, we are able to generate features for robust face spoofing detection. 
In addition, the spoofing face image and the genuine face image have distinct textures because of the surface material of the medium. The skin and the spoofing medium are expected to have different textures, and some genuine image characteristics, such as the color distribution, are distorted. We achieve state-of-the-art performance by concatenating these features. It significantly outperforms prior approaches, especially in terms of the error rate.", "title": "" }, { "docid": "a1a4b028fba02904333140e6791709bb", "text": "Cross-site scripting (also referred to as XSS) is a vulnerability that allows an attacker to send malicious code (usually in the form of JavaScript) to another user. XSS is one of the top 10 vulnerabilities in Web applications. While a traditional cross-site scripting vulnerability exploits server-side code, DOM-based XSS is a type of vulnerability which affects the script code being executed in the client's browser. DOM-based XSS vulnerabilities are much harder to detect than classic XSS vulnerabilities because they reside in the script code of Web sites. An automated scanner needs to be able to execute the script code without errors and to monitor the execution of this code to detect such vulnerabilities. In this paper, we introduce a distributed scanning tool for crawling modern Web applications on a large scale and detecting and validating DOM-based XSS vulnerabilities. Very few Web vulnerability scanners can really accomplish this.", "title": "" }, { "docid": "046245929e709ef2935c9413619ab3d7", "text": "In recent years, there has been a growing intensity of competition in virtually all areas of business, in both markets upstream for raw materials such as components, supplies, capital and technology, and markets downstream for consumer goods and services. This paper examines the relationships among generic strategy, competitive advantage, and organizational performance. Firstly, the nature of generic strategies, competitive advantage, and organizational performance is examined. 
Secondly, the relationship between generic strategies and competitive advantage is analyzed. Finally, the implications of generic strategies, organizational performance, performance measures and competitive advantage are studied. This study focuses on: (i) the relationship of generic strategy and organisational performance in Australian manufacturing companies participating in the “Best Practice Program in Australia”, (ii) the relationship between generic strategies and competitive advantage, and (iii) the relationship among generic strategies, competitive advantage and organisational performance. © 1999 Elsevier Science Ltd. All rights reserved.", "title": "" }, { "docid": "d8484cc7973882777f65a28fcdbb37be", "text": "The reported power analysis attacks on hardware implementations of the MICKEY family of stream ciphers require a large number of power traces. The primary motivation of our work is to break an implementation of the cipher when only a limited number of power traces can be acquired by an adversary. In this paper, we propose a novel approach to mount a Template attack (TA) on the MICKEY-128 2.0 stream cipher using Particle Swarm Optimization (PSO) generated initialization vectors (IVs). In addition, we report the results of power analysis against a MICKEY-128 2.0 implementation on a SASEBO-GII board to demonstrate our proposed attack strategy. The captured power traces were analyzed using Least Squares Support Vector Machine (LS-SVM) learning algorithm based binary classifiers to segregate the power traces into the respective Hamming distance (HD) classes. 
The outcomes of the experiments reveal that our proposed power analysis attack strategy requires a much smaller number of IVs compared to a standard Correlation Power Analysis (CPA) attack on MICKEY-128 2.0 during the key loading phase of the cipher.", "title": "" }, { "docid": "2a1eea68ab90c34fbe90e8f6ac28059e", "text": "This article discusses how to avoid biased questions in survey instruments, how to motivate people to complete instruments and how to evaluate instruments. In the context of survey evaluation, we discuss how to assess survey reliability i.e. how reproducible a survey's data is and survey validity i.e. how well a survey instrument measures what it sets out to measure.", "title": "" }, { "docid": "2d845ef6552b77fb4dd0d784233aa734", "text": "The timing of the origin of arthropods in relation to the Cambrian explosion is still controversial, as is the timing of other arthropod macroevolutionary events such as the colonization of land and the evolution of flight. Here we assess the power of a phylogenomic approach to shed light on these major events in the evolutionary history of life on earth. Analyzing a large phylogenomic dataset (122 taxa, 62 genes) with a Bayesian-relaxed molecular clock, we simultaneously reconstructed the phylogenetic relationships and the absolute times of divergences among the arthropods. Simulations were used to test whether our analysis could distinguish between alternative Cambrian explosion scenarios with increasing levels of autocorrelated rate variation. Our analyses support previous phylogenomic hypotheses and simulations indicate a Precambrian origin of the arthropods. Our results provide insights into the 3 independent colonizations of land by arthropods and suggest that evolution of insect wings happened much earlier than the fossil record indicates, with flight evolving during a period of increasing oxygen levels and impressively large forests. 
These and other findings provide a foundation for macroevolutionary and comparative genomic study of Arthropoda.", "title": "" }, { "docid": "f90cb4fdf664e24ceeb3727eda3543b3", "text": "The self-powering, long-lasting, and functional features of embedded wireless microsensors appeal to an ever-expanding application space in monitoring, control, and diagnosis for military, commercial, industrial, space, and biomedical applications. Extended operational life, however, is difficult to achieve when power-intensive functions like telemetry draw whatever little energy is available from energy-storage microdevices like thin-film lithium-ion batteries and/or microscale fuel cells. Harvesting ambient energy overcomes this deficit by continually replenishing the energy reservoir and indefinitely extending system lifetime. In this paper, a prototyped circuit that precharges, detects, and synchronizes to a variable voltage-constrained capacitor verifies experimentally that harvesting energy electrostatically from vibrations is possible. Experimental results show that, on average (excluding gate-drive and control losses), the system harvests 9.7 nJ/cycle by investing 1.7 nJ/cycle, yielding a net energy gain of approximately 8 nJ/cycle at an average of 1.6 ¿W (in typical applications) for every 200 pF variation. Projecting and including reasonable gate-drive and controller losses reduces the net energy gain to 6.9 nJ/cycle at 1.38 ¿W.", "title": "" }, { "docid": "b76f10452e4a4b0d7408e6350b263022", "text": "In this paper, a Y-Δ hybrid connection for a high-voltage induction motor is described. Low winding harmonic content is achieved by careful consideration of the interaction between the Y- and Δ-connected three-phase winding sets so that the magnetomotive force (MMF) in the air gap is close to sinusoid. Essentially, the two winding sets operate in a six-phase mode. 
This paper goes on to verify that the fundamental distribution coefficient for the stator MMF is enhanced compared to a standard three-phase winding set. The design method for converting a conventional double-layer lap winding in a high-voltage induction motor into a Y-Δ hybrid lap winding is described using standard winding theory as often applied to small- and medium-sized motors. The main parameters addressed when designing the winding are the conductor wire gauge, coil turns, and parallel winding branches in the Y and Δ connections. A winding design scheme for a 1250-kW 6-kV induction motor is put forward and experimentally validated; the results show that the efficiency can be raised effectively without increasing the cost.", "title": "" }, { "docid": "78c6ca3a62314b1033470a03c90619be", "text": "Metabolomics is the comprehensive study of small molecule metabolites in biological systems. By assaying and analyzing thousands of metabolites in biological samples, it provides a whole picture of metabolic status and biochemical events happening within an organism and has become an increasingly powerful tool in the disease research. In metabolomics, it is common to deal with large amounts of data generated by nuclear magnetic resonance (NMR) and/or mass spectrometry (MS). Moreover, based on different goals and designs of studies, it may be necessary to use a variety of data analysis methods or a combination of them in order to obtain an accurate and comprehensive result. In this review, we intend to provide an overview of computational and statistical methods that are commonly applied to analyze metabolomics data. The review is divided into five sections. The first two sections will introduce the background and the databases and resources available for metabolomics research. 
The third section will briefly describe the principles of the two main experimental methods that produce metabolomics data: MS and NMR, followed by the fourth section that describes the preprocessing of the data from these two approaches. In the fifth and the most important section, we will review four main types of analysis that can be performed on metabolomics data with examples in metabolomics. These are unsupervised learning methods, supervised learning methods, pathway analysis methods and analysis of time course metabolomics data. We conclude by providing a table summarizing the principles and tools that we discussed in this review.", "title": "" }, { "docid": "9292d1a97913257cfd1e72645969a988", "text": "A digital PLL employing an adaptive tracking technique and a novel frequency acquisition scheme achieves a wide tracking range and fast frequency acquisition. The test chip fabricated in a 0.13 mum CMOS process operates from 0.6 GHz to 2 GHz and achieves better than plusmn3200 ppm frequency tracking range when the reference clock is modulated with a 1 MHz sine wave.", "title": "" }, { "docid": "c3473e7fe7b46628d384cbbe10bfe74c", "text": "STUDY OBJECTIVE\nTo (1) examine the prevalence of abnormal genital findings in a large cohort of female children presenting with concerns of sexual abuse; and (2) explore how children use language when describing genital contact and genital anatomy.\n\n\nDESIGN\nIn this prospective study we documented medical histories and genital findings in all children who met inclusion criteria. Findings were categorized as normal, indeterminate, and diagnostic of trauma. Logistic regression analysis was used to determine the effects of key covariates on predicting diagnostic findings. 
Children older than 4 years of age were asked questions related to genital anatomy to assess their use of language.\n\n\nSETTING\nA regional, university-affiliated sexual abuse clinic.\n\n\nPARTICIPANTS\nFemale children (N = 1500) aged from birth to 17 years (inclusive) who received an anogenital examination with digital images.\n\n\nINTERVENTIONS AND MAIN OUTCOME MEASURES\nPhysical exam findings, medical history, and the child's use of language were recorded.\n\n\nRESULTS\nPhysical findings were determined in 99% (n = 1491) of patients. Diagnostic findings were present in 7% (99 of 1491). After adjusting for age, acuity, and type of sexual contact reported by the adult, the estimated odds of diagnostic findings were 12.5 times higher for children reporting genital penetration compared with those who reported only contact (95% confidence interval, 3.46-45.34). Finally, children used the word \"inside\" to describe contact other than penetration of the vaginal canal (ie, labial penetration).\n\n\nCONCLUSION\nA history of penetration by the child was the primary predictor of diagnostic findings. Interpretation of children's use of \"inside\" might explain the low prevalence of diagnostic findings and warrants further study.", "title": "" }, { "docid": "3e94030eb03806d79c5e66aa90408fbb", "text": "The sampling rate of the sensors in wireless sensor networks (WSNs) determines the rate of its energy consumption since most of the energy is used in sampling and transmission. To save the energy in WSNs and thus prolong the network lifetime, we present a novel approach based on the compressive sensing (CS) framework to monitor 1-D environmental information in WSNs. The proposed technique is based on CS theory to minimize the number of samples taken by sensor nodes. An innovative feature of our approach is a new random sampling scheme that considers the causality of sampling, hardware limitations and the trade-off between the randomization scheme and computational complexity. 
In addition, a sampling rate indicator (SRI) feedback scheme is proposed to enable the sensor to adjust its sampling rate to maintain an acceptable reconstruction performance while minimizing the number of samples. A significant reduction in the number of samples required to achieve acceptable reconstruction error is demonstrated using real data gathered by a WSN located in the Hessle Anchorage of the Humber Bridge.", "title": "" }, { "docid": "c94c9913634f715049d90a55282908ca", "text": "Indirect field oriented control for induction machine requires the knowledge of rotor time constant to estimate the rotor flux linkages. Here an online method for estimating the rotor time constant and stator resistance is presented. The problem is formulated as a nonlinear least-squares problem and a procedure is presented that guarantees the minimum is found in a finite number of steps. Experimental results are presented. Two different approaches to implementing the algorithm online are discussed. Simulations are also presented to show how the algorithm works online", "title": "" }, { "docid": "670ade2a60809bd501b3d365d173f4ab", "text": "Attack graph is a tool to analyze multi-stage, multi-host attack scenarios in a network. It is a complete graph where each attack scenario is depicted by an attack path which is essentially a series of exploits. Each exploit in the series satisfies the pre-conditions for subsequent exploits and makes a casual relationship among them. One of the intrinsic problem with the generation of such a full attack graph is its scalability. In this work, an approach based on planner has been proposed for time-efficient scalable representation of the attack graphs. A planner is a special purpose search algorithm from artificial intelligence domain, used for finding out solutions within a large state space without suffering state space explosion. 
A case study has also been presented, and the proposed methodology is found to be more efficient than some of the earlier reported works.", "title": "" }
]
scidocsrr
a48a2385c64de73ec6837650edccc60c
Privacy Preserving Social Network Data Publication
[ { "docid": "6fa6ce80c183cf9b36e56011490c0504", "text": "Lipschitz extensions were recently proposed as a tool for designing node differentially private algorithms. However, efficiently computable Lipschitz extensions were known only for 1-dimensional functions (that is, functions that output a single real value). In this paper, we study efficiently computable Lipschitz extensions for multi-dimensional (that is, vector-valued) functions on graphs. We show that, unlike for 1-dimensional functions, Lipschitz extensions of higher-dimensional functions on graphs do not always exist, even with a non-unit stretch. We design Lipschitz extensions with small stretch for the sorted degree list and for the degree distribution of a graph. Crucially, our extensions are efficiently computable. We also develop new tools for employing Lipschitz extensions in the design of differentially private algorithms. Specifically, we generalize the exponential mechanism, a widely used tool in data privacy. The exponential mechanism is given a collection of score functions that map datasets to real values. It attempts to return the name of the function with nearly minimum value on the data set. Our generalized exponential mechanism provides better accuracy when the sensitivity of an optimal score function is much smaller than the maximum sensitivity of score functions. We use our Lipschitz extension and the generalized exponential mechanism to design a node-differentially private algorithm for releasing an approximation to the degree distribution of a graph. Our algorithm is much more accurate than algorithms from previous work.", "title": "" } ]
[ { "docid": "5c90f5a934a4d936257467a14a058925", "text": "We present a new autoencoder-type architecture that is trainable in an unsupervised mode, sustains both generation and inference, and has the quality of conditional and unconditional samples boosted by adversarial learning. Unlike previous hybrids of autoencoders and adversarial networks, the adversarial game in our approach is set up directly between the encoder and the generator, and no external mappings are trained in the process of learning. The game objective compares the divergences of each of the real and the generated data distributions with the prior distribution in the latent space. We show that direct generator-vs-encoder game leads to a tight coupling of the two components, resulting in samples and reconstructions of a comparable quality to some recently-proposed more complex", "title": "" }, { "docid": "19fe8c6452dd827ffdd6b4c6e28bc875", "text": "Motivation for the investigation of position and waypoint controllers is the demand for Unattended Aerial Systems (UAS) capable of fulfilling e.g. surveillance tasks in contaminated or in inaccessible areas. Hence, this paper deals with the development of a 2D GPS-based position control system for 4 Rotor Helicopters able to keep positions above given destinations as well as to navigate between waypoints while minimizing trajectory errors. Additionally, the novel control system enables permanent full speed flight with reliable altitude keeping considering that the resulting lift is decreasing while changing pitch or roll angles for position control. In the following chapters the control procedure for position control and waypoint navigation is described. The dynamic behavior was simulated by means of Matlab/Simulink and results are shown. 
Further, the control strategies were implemented on a flight demonstrator for validation; experimental results are provided and a comparison is discussed.", "title": "" }, { "docid": "ff93e77bb0e0b24a06780a05cc16123d", "text": "Models in science may be used for various purposes: organizing data, synthesizing information, and making predictions. However, the value of model predictions is undermined by their uncertainty, which arises primarily from the fact that our models of complex natural systems are always open. Models can never fully specify the systems that they describe, and therefore their predictions are always subject to uncertainties that we cannot fully specify. Moreover, the attempt to make models capture the complexities of natural systems leads to a paradox: the more we strive for realism by incorporating as many as possible of the different processes and parameters that we believe to be operating in the system, the more difficult it is for us to know if our tests of the model are meaningful. A complex model may be more realistic, yet it is ironic that as we add more factors to a model, the certainty of its predictions may decrease even as our intuitive faith in the model increases. For this and other reasons, model output should not be viewed as an accurate prediction of the future state of the system. Short timeframe model output can and should be used to evaluate models and suggest avenues for future study. Model output can also generate “what if” scenarios that can help to evaluate alternative courses of action (or inaction), including worst-case and best-case outcomes. 
But scientists should eschew long-range deterministic predictions, which are likely to be erroneous and may damage the credibility of the communities that generate them.", "title": "" }, { "docid": "d00cdbbe08a56952685118e68c0b9115", "text": "Canadian Undergraduate Mathematics Conference 1998 | Part 3. The Brachistochrone Problem, Nils Johnson, The University of British Columbia. The brachistochrone problem is to find the curve between two points down which a bead will slide in the shortest amount of time, neglecting friction and assuming conservation of energy. To solve the problem, an integral is derived that computes the amount of time it would take a bead to slide down a given curve y(x). This integral is minimized over all possible curves and yields the differential equation y(1 + (y′)²) = k as a constraint for the minimizing function y(x). Solving this differential equation shows that a cycloid (the path traced out by a point on the rim of a rolling wheel) is the solution to the brachistochrone problem. First proposed in 1696 by Johann Bernoulli, this problem is credited with having led to the development of the calculus of variations. The solution presented assumes knowledge of one-dimensional calculus and elementary differential equations. The Theory of Error-Correcting Codes, Dennis Hill, University of Ottawa. Coding theory is concerned with the transfer of data. There are two issues of fundamental importance. First, the data must be transferred accurately. But equally important is that the transfer be done in an efficient manner. It is the interplay of these two issues which is the core of the theory of error-correcting codes. Typically, the data is represented as a string of zeros and ones. Then a code consists of a set of such strings, each of the same length. The most fruitful approach to the subject is to consider the set {0, 1} as a two-element field. 
We will then only", "title": "" }, { "docid": "476aa14f6b71af480e8ab4747849d7e3", "text": "The present study explored the relationship between risky cybersecurity behaviours, attitudes towards cybersecurity in a business environment, Internet addiction, and impulsivity. 538 participants in part-time or full-time employment in the UK completed an online questionnaire, with responses from 515 being used in the data analysis. The survey included an attitude towards cybercrime and cybersecurity in business scale, a measure of impulsivity, Internet addiction and a 'risky' cybersecurity behaviours scale. The results demonstrated that Internet addiction was a significant predictor for risky cybersecurity behaviours. A positive attitude towards cybersecurity in business was negatively related to risky cybersecurity behaviours. Finally, the measure of impulsivity revealed that both attentional and motor impulsivity were both significant positive predictors of risky cybersecurity behaviours, with non-planning being a significant negative predictor. The results present a further step in understanding the individual differences that may govern good cybersecurity practices, highlighting the need to focus directly on more effective training and awareness mechanisms.", "title": "" }, { "docid": "2526915745dda9026836347292f79d12", "text": "I show that a functional representation of self-similarity (as the one occurring in fractals) is provided by squeezed coherent states. In this way, the dissipative model of brain is shown to account for the self-similarity in brain background activity suggested by power-law distributions of power spectral densities of electrocorticograms. I also briefly discuss the action-perception cycle in the dissipative model with reference to intentionality in terms of trajectories in the memory state space.", "title": "" }, { "docid": "f095118c63d1531ebdbaec3565b0d91f", "text": "BACKGROUND\nSystematic reviews are most helpful if they are up-to-date. 
We did a systematic review of strategies and methods describing when and how to update systematic reviews.\n\n\nOBJECTIVES\nTo identify, describe and assess strategies and methods addressing: 1) when to update systematic reviews and 2) how to update systematic reviews.\n\n\nSEARCH STRATEGY\nWe searched MEDLINE (1966 to December 2005), PsycINFO, the Cochrane Methodology Register (Issue 1, 2006), and hand searched the 2005 Cochrane Colloquium proceedings.\n\n\nSELECTION CRITERIA\nWe included methodology reports, updated systematic reviews, commentaries, editorials, or other short reports describing the development, use, or comparison of strategies and methods for determining the need for updating or updating systematic reviews in healthcare.\n\n\nDATA COLLECTION AND ANALYSIS\nWe abstracted information from each included report using a 15-item questionnaire. The strategies and methods for updating systematic reviews were assessed and compared descriptively with respect to their usefulness, comprehensiveness, advantages, and disadvantages.\n\n\nMAIN RESULTS\nFour updating strategies, one technique, and two statistical methods were identified. Three strategies addressed steps for updating and one strategy presented a model for assessing the need to update. One technique discussed the use of the \"entry date\" field in bibliographic searching. Statistical methods were cumulative meta-analysis and predicting when meta-analyses are outdated.\n\n\nAUTHORS' CONCLUSIONS\nLittle research has been conducted on when and how to update systematic reviews and the feasibility and efficiency of the identified approaches is uncertain. 
These shortcomings should be addressed in future research.", "title": "" }, { "docid": "940e3a77d9dbe1da2fb2f38ae768b71e", "text": "Layer-by-layer deposition of materials to manufacture parts—better known as three-dimensional (3D) printing or additive manufacturing—has been flourishing as a fabrication process in the past several years and now can create complex geometries for use as models, assembly fixtures, and production molds. Increasing interest has focused on the use of this technology for direct manufacturing of production parts; however, it remains generally limited to single-material fabrication, which can limit the end-use functionality of the fabricated structures. The next generation of 3D printing will entail not only the integration of dissimilar materials but the embedding of active components in order to deliver functionality that was not possible previously. Examples could include arbitrarily shaped electronics with integrated microfluidic thermal management and intelligent prostheses custom-fit to the anatomy of a specific patient. We review the state of the art in multiprocess (or hybrid) 3D printing, in which complementary processes, both novel and traditional, are combined to advance the future of manufacturing.", "title": "" }, { "docid": "9a3a73f35b27d751f237365cc34c8b28", "text": "The development of brain metastases in patients with advanced stage melanoma is common, but the molecular mechanisms responsible for their development are poorly understood. Melanoma brain metastases cause significant morbidity and mortality and confer a poor prognosis; traditional therapies including whole brain radiation, stereotactic radiotherapy, or chemotherapy yield only modest increases in overall survival (OS) for these patients. While recently approved therapies have significantly improved OS in melanoma patients, only a small number of studies have investigated their efficacy in patients with brain metastases. 
Preliminary data suggest that some responses have been observed in intracranial lesions, which has sparked new clinical trials designed to evaluate the efficacy in melanoma patients with brain metastases. Simultaneously, recent advances in our understanding of the mechanisms of melanoma cell dissemination to the brain have revealed novel and potentially therapeutic targets. In this review, we provide an overview of newly discovered mechanisms of melanoma spread to the brain, discuss preclinical models that are being used to further our understanding of this deadly disease and provide an update of the current clinical trials for melanoma patients with brain metastases.", "title": "" }, { "docid": "05127dab049ef7608932913f66db0990", "text": "This paper presents a hybrid tele-manipulation system, comprising of a sensorized 3-D-printed soft robotic gripper and a soft fabric-based haptic glove that aim at improving grasping manipulation and providing sensing feedback to the operators. The flexible 3-D-printed soft robotic gripper broadens what a robotic gripper can do, especially for grasping tasks where delicate objects, such as glassware, are involved. It consists of four pneumatic finger actuators, casings with through hole for housing the actuators, and adjustable base. The grasping length and width can be configured easily to suit a variety of objects. The soft haptic glove is equipped with flex sensors and soft pneumatic haptic actuator, which enables the users to control the grasping, to determine whether the grasp is successful, and to identify the grasped object shape. The fabric-based soft pneumatic haptic actuator can simulate haptic perception by producing force feedback to the users. Both the soft pneumatic finger actuator and haptic actuator involve simple fabrication technique, namely 3-D-printed approach and fabric-based approach, respectively, which reduce fabrication complexity as compared to the steps involved in a traditional silicone-based approach. 
The sensorized soft robotic gripper is capable of picking up and holding a wide variety of objects in this study, ranging from a lightweight delicate object weighing less than 50 g to objects weighing 1100 g. The soft haptic actuator can produce forces of up to 2.1 N, which is more than the minimum force of 1.5 N needed to stimulate haptic perception. The subjects are able to differentiate the two objects with significant shape differences in the pilot test. Compared to the existing soft grippers, this is the first soft sensorized 3-D-printed gripper, coupled with a soft fabric-based haptic glove that has the potential to improve the robotic grasping manipulation by introducing haptic feedback to the users.", "title": "" }, { "docid": "a58769ca02b9409a983ac6d7ba69f0be", "text": "In this paper, we describe an approach for the automatic medical annotation task of the 2008 CLEF cross-language image retrieval campaign (ImageCLEF). The data comprise 12076 fully annotated images according to the IRMA code. This work is focused on the process of feature extraction from images and hierarchical multi-label classification. To extract features from the images we used a technique called local distribution of edges. With this technique, each image was described with 80 variables. The goal of the classification task was to classify an image according to the IRMA code. The IRMA code is organized hierarchically. Hence, as classifier we selected an extension of the predictive clustering trees (PCTs) that is able to handle this type of data. 
Furthermore, we constructed ensembles (Bagging and Random Forests) that use PCTs as base classifiers.", "title": "" }, { "docid": "adddebf272a3b0fe510ea04ed7cc3837", "text": "PURPOSE\nTo explore the association of angiographic nonperfusion in focal and diffuse recalcitrant diabetic macular edema (DME) in diabetic retinopathy (DR).\n\n\nDESIGN\nA retrospective, observational case series of patients with the diagnosis of recalcitrant DME for at least 2 years placed into 1 of 4 cohorts based on the degree of DR.\n\n\nMETHODS\nA total of 148 eyes of 76 patients met the inclusion criteria at 1 academic institution. Ultra-widefield fluorescein angiography (FA) images and spectral-domain optical coherence tomography (SD OCT) images were obtained on all patients. Ultra-widefield FA images were graded for quantity of nonperfusion, which was used to calculate ischemic index. Main outcome measures were mean ischemic index, mean change in central macular thickness (CMT), and mean number of macular photocoagulation treatments over the 2-year study period.\n\n\nRESULTS\nThe mean ischemic index was 47% (SD 25%; range 0%-99%). The mean ischemic index of eyes within Cohorts 1, 2, 3, and 4 was 0%, 34% (range 16%-51%), 53% (range 32%-89%), and 65% (range 47%-99%), respectively. The mean percentage decrease in CMT in Cohorts 1, 2, 3, and 4 was 25.2%, 19.1%, 11.6%, and 7.2%, respectively. The mean number of macular photocoagulation treatments in Cohorts 1, 2, 3, and 4 was 2.3, 4.8, 5.3, and 5.7, respectively.\n\n\nCONCLUSIONS\nEyes with larger areas of retinal nonperfusion and greater severity of DR were found to have the most recalcitrant DME, as evidenced by a greater number of macular photocoagulation treatments and less reduction in SD OCT CMT compared with eyes without retinal nonperfusion. 
Areas of untreated retinal nonperfusion may generate biochemical mediators that promote ischemia and recalcitrant DME.", "title": "" }, { "docid": "d798bc49068356495074f92b3bfe7a4b", "text": "This study presents an experimental evaluation of neural networks for nonlinear time-series forecasting. The e!ects of three main factors * input nodes, hidden nodes and sample size, are examined through a simulated computer experiment. Results show that neural networks are valuable tools for modeling and forecasting nonlinear time series while traditional linear methods are not as competent for this task. The number of input nodes is much more important than the number of hidden nodes in neural network model building for forecasting. Moreover, large sample is helpful to ease the over\"tting problem.", "title": "" }, { "docid": "77f5216ede8babf4fb3b2bcbfc9a3152", "text": "Various aspects of the theory of random walks on graphs are surveyed. In particular, estimates on the important parameters of access time, commute time, cover time and mixing time are discussed. Connections with the eigenvalues of graphs and with electrical networks, and the use of these connections in the study of random walks is described. We also sketch recent algorithmic applications of random walks, in particular to the problem of sampling.", "title": "" }, { "docid": "fbe1e6b899b1a2e9d53d25e3fa70bd86", "text": "Previous empirical studies examining the relationship between IT capability and accountingbased measures of firm performance report mixed results. We argue that extant research (1) has relied on aggregate overall measures of the firm’s IT capability, ignoring the specific type and nature of IT capability; and (2) has not fully considered important contextual (environmental) conditions that influence the IT capability-firm performance relationship. 
Drawing on the resource-based view (RBV), we advance a contingency perspective and propose that IT capabilities' impact on firm resources is contingent on the “fit” between the type of IT capability/resource a firm possesses and the demands of the environment (industry) in which it competes. Specifically, using publicly available rankings as proxies for two types of IT capabilities (internally-focused and externally-focused capabilities), we empirically examine the degree to which three industry characteristics (dynamism, munificence, and complexity) influence the impact of each type of IT capability on measures of financial performance. After controlling for prior performance, the findings provide general support for the posited contingency model of IT impact. The implications of these findings for practice and research are discussed.", "title": "" }, { "docid": "28b796954834230a0e8218e24bab0d35", "text": "Oral Squamous Cell Carcinoma (OSCC) is a common type of cancer of the oral epithelium. Despite their high impact on mortality, sufficient screening methods for early diagnosis of OSCC often lack accuracy and thus OSCCs are mostly diagnosed at a late stage. Early detection and accurate outline estimation of OSCCs would lead to a better curative outcome and a reduction in recurrence rates after surgical treatment. Confocal Laser Endomicroscopy (CLE) records sub-surface micro-anatomical images for in vivo cell structure analysis. Recent CLE studies showed great prospects for a reliable, real-time ultrastructural imaging of OSCC in situ. We present and evaluate a novel automatic approach for OSCC diagnosis using deep learning technologies on CLE images. The method is compared against textural feature-based machine learning approaches that represent the current state of the art. For this work, CLE image sequences (7894 images) from patients diagnosed with OSCC were obtained from 4 specific locations in the oral cavity, including the OSCC lesion. 
The present approach is found to outperform the state of the art in CLE image recognition with an area under the curve (AUC) of 0.96 and a mean accuracy of 88.3% (sensitivity 86.6%, specificity 90%).", "title": "" }, { "docid": "be48b00ee50c872d42ab95e193ac774b", "text": "The profitability of remanufacturing systems for different cost, technology, and logistics structures has been extensively investigated in the literature. We provide an alternative and somewhat complementary approach that considers demand-related issues, such as the existence of green segments, original equipment manufacturer competition, and product life-cycle effects. The profitability of a remanufacturing system strongly depends on these issues as well as on their interactions. For a monopolist, we show that there exist thresholds on the remanufacturing cost savings, the green segment size, market growth rate, and consumer valuations for the remanufactured products, above which remanufacturing is profitable. More important, we show that under competition remanufacturing can become an effective marketing strategy, which allows the manufacturer to defend its market share via price discrimination.", "title": "" }, { "docid": "37c35b782bb80d2324749fc71089c445", "text": "Predicting the stock market is considered to be a very difficult task due to its non-linear and dynamic nature. Our proposed system is designed in such a way that even a layman can use it. It reduces the burden on the user. The user's job is to give only the recent closing prices of a stock as input, and the proposed Recommender system will instruct him when to buy and when to sell if it is profitable, or not to buy shares if it is not profitable to trade. Using soft computing based techniques is considered to be more suitable for predicting trends in the stock market, where the data is chaotic and large in number. 
The soft computing based systems are capable of extracting relevant information from large sets of data by discovering hidden patterns in the data. Here regression trees are used for dimensionality reduction and clustering is done with the help of Self Organizing Maps (SOM). The proposed system is designed to assist stock market investors in identifying possible profit-making opportunities and also to help develop a better understanding of how to extract relevant information from stock price data. © 2016 Elsevier Ltd. All rights reserved.", "title": "" }, { "docid": "a84b5fa43c17eebd9cc3ddf2a0d2129e", "text": "The lack of realistic and open benchmarking datasets for pedestrian visual-inertial odometry has made it hard to pinpoint differences in published methods. Existing datasets either lack a full six degree-of-freedom ground-truth or are limited to small spaces with optical tracking systems. We take advantage of advances in pure inertial navigation, and develop a set of versatile and challenging real-world computer vision benchmark sets for visual-inertial odometry. For this purpose, we have built a test rig equipped with an iPhone, a Google Pixel Android phone, and a Google Tango device. We provide a wide range of raw sensor data that is accessible on almost any modern-day smartphone together with a high-quality ground-truth track. We also compare resulting visual-inertial tracks from Google Tango, ARCore, and Apple ARKit with two recent methods published in academic forums. The data sets cover both indoor and outdoor cases, with stairs, escalators, elevators, office environments, a shopping mall, and a metro station.", "title": "" }, { "docid": "80477fdab96ae761dbbb7662b87e82a0", "text": "This article provides minimum requirements for having confidence in the accuracy of EC50/IC50 estimates. Two definitions of EC50/IC50s are considered: relative and absolute. 
The relative EC50/IC50 is the parameter c in the 4-parameter logistic model and is the concentration corresponding to a response midway between the estimates of the lower and upper plateaus. The absolute EC50/IC50 is the response corresponding to the 50% control (the mean of the 0% and 100% assay controls). The guidelines first describe how to decide whether to use the relative EC50/IC50 or the absolute EC50/IC50. Assays for which there is no stable 100% control must use the relative EC50/IC50. Assays having a stable 100% control but for which there may be more than 5% error in the estimate of the 50% control mean should use the relative EC50/IC50. Assays that can be demonstrated to produce an accurate and stable 100% control and less than 5% error in the estimate of the 50% control mean may gain efficiency as well as accuracy by using the absolute EC50/IC50. Next, the guidelines provide rules for deciding when the EC50/IC50 estimates are reportable. The relative EC50/IC50 should only be used if there are at least two assay concentrations beyond the lower and upper bend points. The absolute EC50/IC50 should only be used if there are at least two assay concentrations whose predicted response is less than 50% and two whose predicted response is greater than 50%. A wide range of typical assay conditions are considered in the development of the guidelines.", "title": "" } ]
scidocsrr
eacc5b915ce11792286986f305652163
Fuzzy Filter Design for Nonlinear Systems in Finite-Frequency Domain
[ { "docid": "239644f4ecd82758ca31810337a10fda", "text": "This paper discusses a design of stable filters with H∞ disturbance attenuation of Takagi–Sugeno fuzzy systems with immeasurable premise variables. When we consider the filter design of Takagi–Sugeno fuzzy systems, the selection of premise variables plays an important role. If the premise variable is the state of the system, then a fuzzy system describes a wide class of nonlinear systems. In this case, however, a filter design of fuzzy systems based on parallel distributed compensator idea is infeasible. To avoid such a difficulty, we consider the premise variables uncertainties. Then we consider a robust H∞ filtering problem for such an uncertain system. A solution of the problem is given in terms of linear matrix inequalities (LMIs). Some numerical examples are given to illustrate our theory. © 2008 Elsevier B.V. All rights reserved.", "title": "" } ]
[ { "docid": "eaf16b3e9144426aed7edc092ad4a649", "text": "In order to use a synchronous dynamic RAM (SDRAM) as the off-chip memory of an H.264/AVC encoder, this paper proposes an efficient SDRAM memory controller with an asynchronous bridge. With the proposed architecture, the SDRAM bandwidth is increased by making the operation frequency of an external SDRAM higher than that of the hardware accelerators of an H.264/AVC encoder. Experimental results show that the encoding speed is increased by 30.5% when the SDRAM clock frequency is increased from 100 MHz to 200 MHz while the H.264/AVC hardware accelerators operate at 100 MHz.", "title": "" }, { "docid": "42127829aebaaaa4a4ac6c7e9417feaf", "text": "The study aimed to compare treatment preference, efficacy, and tolerability of sildenafil citrate (sildenafil) and tadalafil for treating erectile dysfunction (ED) in Chinese men naïve to phosphodiesterase 5 (PDE5) inhibitor therapies. This multicenter, randomized, open-label, crossover study evaluated whether Chinese men with ED preferred 20-mg tadalafil or 100-mg sildenafil. After a 4 weeks baseline assessment, 383 eligible patients were randomized to sequential 20-mg tadalafil per 100-mg sildenafil or vice versa for 8 weeks respectively and then chose which treatment they preferred to take during the 8 weeks extension. Primary efficacy was measured by Question 1 of the PDE5 Inhibitor Treatment Preference Questionnaire (PITPQ). Secondary efficacy was analyzed by PITPQ Question 2, the International Index of Erectile Function (IIEF) erectile function (EF) domain, sexual encounter profile (SEP) Questions 2 and 3, and the Drug Attributes Questionnaire. Three hundred and fifty men (91%) completed the randomized treatment phase. Two hundred and forty-two per 350 (69.1%) patients preferred 20-mg tadalafil, and 108/350 (30.9%) preferred 100-mg sildenafil (P < 0.001) as their treatment in the 8 weeks extension. 
Ninety-two per 242 (38%) patients strongly preferred tadalafil and 37/108 (34.3%) strongly preferred sildenafil. The SEP2 (penetration), SEP3 (successful intercourse), and IIEF-EF domain scores were improved in both tadalafil and sildenafil treatment groups. For patients who preferred tadalafil, getting an erection long after taking the medication was the most reported reason for tadalafil preference. The only treatment-emergent adverse event reported by > 2% of men was headache. After tadalafil and sildenafil treatments, more Chinese men with ED naïve to PDE5 inhibitor preferred tadalafil. Both sildenafil and tadalafil treatments were effective and safe.", "title": "" }, { "docid": "5e952c10a30baffc511bb3ffe86cd4a8", "text": "Chitin and its deacetylated derivative chitosan are natural polymers composed of randomly distributed β-(1-4)-linked D-glucosamine (deacetylated unit) and N-acetyl-D-glucosamine (acetylated unit). Chitin is insoluble in aqueous media while chitosan is soluble in acidic conditions due to the free protonable amino groups present in the D-glucosamine units. Due to their natural origin, both chitin and chitosan cannot be defined as a unique chemical structure but as a family of polymers which present a high variability in their chemical and physical properties. This variability is related not only to the origin of the samples but also to their method of preparation. Chitin and chitosan are used in fields as different as food, biomedicine and agriculture, among others. The success of chitin and chitosan in each of these specific applications is directly related to deep research into their physicochemical properties. In recent years, several reviews covering different aspects of the applications of chitin and chitosan have been published. However, these reviews have not taken into account the key role of the physicochemical properties of chitin and chitosan in their possible applications. 
The aim of this review is to highlight the relationship between the physicochemical properties of the polymers and their behaviour. A functional characterization of chitin and chitosan regarding some biological properties and some specific applications (drug delivery, tissue engineering, functional food, food preservative, biocatalyst immobilization, wastewater treatment, molecular imprinting and metal nanocomposites) is presented. The molecular mechanism of the biological properties such as biocompatibility, mucoadhesion, permeation enhancing effect, anticholesterolemic, and antimicrobial has been up-", "title": "" }, { "docid": "d258a14fc9e64ba612f2c8ea77f85d08", "text": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). 
The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities.", "title": "" }, { "docid": "90df69e590373e757523f4c92a841d5c", "text": "A new impedance-based stability criterion was proposed for a grid-tied inverter system based on a Norton equivalent circuit of the inverter [18]. As an extension of the work in [18], this paper shows that using a Thévenin representation of the inverter can lead to the same criterion in [18]. Further, this paper shows that the criterion proposed by Middlebrook can still be used for the inverter systems. The link between the criterion in [18] and the original criterion is the inverse Nyquist stability criterion. The criterion in [18] is easier to use, because the current feedback controller and the phase-locked loop of the inverter introduce poles at the origin and in the right-half plane to the output impedance of the inverter. These poles do not appear in the minor loop gain defined in [18] but in the minor loop gain defined by Middlebrook. Experimental systems are used to verify the proposed analysis.", "title": "" }, { "docid": "93e6194dc3d8922edb672ac12333ea82", "text": "Sensors including RFID tags have been widely deployed for measuring environmental parameters such as temperature, humidity, oxygen concentration, monitoring the location and velocity of moving objects, tracking tagged objects, and many others. 
To support effective, efficient, and near real-time phenomena probing and objects monitoring, streaming sensor data have to be gracefully managed in an event processing manner. Different from the traditional events, sensor events come with temporal or spatio-temporal constraints and can be non-spontaneous. Meanwhile, like general event streams, sensor event streams can be generated with very high volumes and rates. Primitive sensor events need to be filtered, aggregated and correlated to generate more semantically rich complex events to facilitate the requirements of up-streaming applications. Motivated by such challenges, many new methods have been proposed in the past to support event processing in sensor event streams. In this chapter, we survey state-of-the-art research on event processing in sensor networks, and provide a broad overview of major topics in complex RFID event processing, including event specification languages, event detection models, event processing methods and their optimizations. Additionally, we have presented an open discussion on advanced issues such as processing uncertain and out-of-order sensor events.", "title": "" }, { "docid": "26e79793addc4750dcacc0408764d1e1", "text": "It has been shown that integration of acoustic and visual information especially in noisy conditions yields improved speech recognition results. This raises the question of how to weight the two modalities in different noise conditions. Throughout this paper we develop a weighting process adaptive to various background noise situations. In the presented recognition system, audio and video data are combined following a Separate Integration (SI) architecture. A hybrid Artificial Neural Network/Hidden Markov Model (ANN/HMM) system is used for the experiments. 
The neural networks were in all cases trained on clean data. Firstly, we evaluate the performance of different weighting schemes in a manually controlled recognition task with different types of noise. Next, we compare different criteria to estimate the reliability of the audio stream. Based on this, a mapping between the measurements and the free parameter of the fusion process is derived and its applicability is demonstrated. Finally, the possibilities and limitations of adaptive weighting are compared and discussed.", "title": "" }, { "docid": "2d4cb6980cf8716699bdffca6cfed274", "text": "Advances in laser technology have progressed so rapidly during the past decade that successful treatment of many cutaneous concerns and congenital defects, including vascular and pigmented lesions, tattoos, scars and unwanted hair, can be achieved. The demand for laser surgery has increased as a result of the relative ease of treatment and the low incidence of adverse postoperative sequelae. In this review, the currently available laser systems with cutaneous applications are outlined to identify the various types of dermatologic lasers available, to list their clinical indications and to understand the possible side effects.", "title": "" }, { "docid": "2b310a05b6a0c0fae45a2e15f8d52101", "text": "Cyber threats and the field of computer cyber defense are gaining more and more importance in our lives. Starting from our regular personal computers and ending with thin clients such as netbooks or smartphones, we find ourselves bombarded with constant malware attacks. In this paper we will present a novel way in which we can detect these kinds of attacks by using elements of modern game theory. 
We will present the effects and benefits of game theory and we will talk about a defense exercise model that can be used to train cyber response specialists.", "title": "" }, { "docid": "09085fc15308a96cd9441bb0e23e6c1a", "text": "Convolutional neural networks (CNNs) are able to model local stationary structures in natural images in a multi-scale fashion, when learning all model parameters with supervision. While excellent performance was achieved for image classification when large amounts of labeled visual data are available, their success for unsupervised tasks such as image retrieval has been moderate so far. Our paper focuses on this latter setting and explores several methods for learning patch descriptors without supervision with application to matching and instance-level retrieval. To that effect, we propose a new family of patch representations, based on the recently introduced convolutional kernel networks. We show that our descriptor, named Patch-CKN, performs better than SIFT as well as other convolutional networks learned by artificially introducing supervision and is significantly faster to train. To demonstrate its effectiveness, we perform an extensive evaluation on standard benchmarks for patch and image retrieval where we obtain state-of-the-art results. We also introduce a new dataset called RomePatches, which allows to simultaneously study descriptor performance for patch and image retrieval.", "title": "" }, { "docid": "5394df4e1d6f52a608bfdab8731da088", "text": "For over a decade, researchers have devoted much effort to construct theoretical models, such as the Technology Acceptance Model (TAM) and the Expectation Confirmation Model (ECM) for explaining and predicting user behavior in IS acceptance and continuance. Another model, the Cognitive Model (COG), was proposed for continuance behavior; it combines some of the variables used in both TAM and ECM. 
This study applied the technique of structural equation modeling with multiple group analysis to compare the TAM, ECM, and COG models. Results indicate that TAM, ECM, and COG have quite different assumptions about the underlying constructs that dictate user behavior and thus have different explanatory powers. The six constructs in the three models were synthesized to propose a new Technology Continuance Theory (TCT). A major contribution of TCT is that it combines two central constructs: attitude and satisfaction into one continuance model, and has applicability for users at different stages of the adoption life cycle, i.e., initial, short-term and long-term users. The TCT represents a substantial improvement over the TAM, ECM and COG models in terms of both breadth of applicability and explanatory power.", "title": "" }, { "docid": "e4b54824b2528b66e28e82ad7d496b36", "text": "Objective: In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. Methods: The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from the offline electronic health record data, and trains a mixture of Gaussian Process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc). 
Results: Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA medical center show that our score significantly outperforms the currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Conclusion: Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for the patients’ heterogeneity. Significance: The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.", "title": "" }, { "docid": "7a87ffc98d8bab1ff0c80b9e8510a17d", "text": "We present a method for transferring neural representations from label-rich source domains to unlabeled target domains. Recent adversarial methods proposed for this task learn to align features across domains by fooling a special domain critic network. However, a drawback of this approach is that the critic simply labels the generated features as in-domain or not, without considering the boundaries between classes. This can lead to ambiguous features being generated near class boundaries, reducing target classification accuracy. We propose a novel approach, Adversarial Dropout Regularization (ADR), to encourage the generator to output more discriminative features for the target domain. Our key idea is to replace the critic with one that detects non-discriminative features, using dropout on the classifier network. The generator then learns to avoid these areas of the feature space and thus creates better features. 
We apply our ADR approach to the problem of unsupervised domain adaptation for image classification and semantic segmentation tasks, and demonstrate significant improvement over the state of the art. We also show that our approach can be used to train Generative Adversarial Networks for semi-supervised learning.", "title": "" }, { "docid": "a39091796e8f679f246baa8dce08f213", "text": "Resource scheduling in cloud is a challenging job and the scheduling of appropriate resources to cloud workloads depends on the QoS requirements of cloud applications. In cloud environment, the heterogeneity, uncertainty and dispersion of resources encounter problems of resource allocation, which cannot be addressed with existing resource allocation policies. Researchers still find it difficult to select an efficient and appropriate resource scheduling algorithm for a specific workload from the existing literature of resource scheduling algorithms. This research depicts a broad methodical literature analysis of resource management in the area of cloud in general and cloud resource scheduling in specific. In this survey, a standard methodical literature analysis technique is used based on a complete collection of 110 research papers out of a large collection of 1206 research papers published in 19 foremost workshops, symposiums and conferences and 11 prominent journals. The current status of resource scheduling in cloud computing is distributed into various categories. Methodical analysis of resource scheduling in cloud computing is presented, resource scheduling algorithms and management, its types and benefits with tools, resource scheduling aspects and resource distribution policies are described. The literature concerning thirteen types of resource scheduling algorithms has also been stated. Further, eight types of resource distribution policies are described. 
Methodical analysis of this research work will help researchers to find the important characteristics of resource scheduling algorithms and also will help to select the most suitable algorithm for scheduling a specific workload. Future research directions have also been suggested in this research work.", "title": "" }, { "docid": "048d54f4997bfea726f69cf7f030543d", "text": "In this article, we have reviewed the state of the art of IPT systems and have explored the suitability of the technology to wirelessly charge battery powered vehicles. The review shows that the IPT technology has merits for stationary charging (when the vehicle is parked), opportunity charging (when the vehicle is stopped for a short period of time, for example, at a bus stop), and dynamic charging (when the vehicle is moving along a dedicated lane equipped with an IPT system). Dynamic wireless charging holds promise to partially or completely eliminate the overnight charging through a compact network of dynamic chargers installed on the roads that would keep the vehicle batteries charged at all times, consequently reducing the range anxiety and increasing the reliability of EVs. Dynamic charging can help lower the price of EVs by reducing the size of the battery pack. Indeed, if the recharging energy is readily available, the batteries do not have to support the whole driving range but only supply power when the IPT system is not available. Depending on the power capability, the use of dynamic charging may increase driving range and reduce the size of the battery pack.", "title": "" }, { "docid": "ffb1610fddb36fa4db5fa3c3dc1e5fad", "text": "The complex methodology of investigations was applied to study the movement structure of the bench press. We have checked the usefulness of a multimodular measuring system (SMART-E, BTS company, Italy) and a special device for tracking the position of the barbell (pantograph). Software Smart Analyser was used to create a database allowing chosen parameters to be compared. 
The results from different measuring devices are very similar; therefore, the replacement of many devices by one multimodular system is reasonable. In our study, the effect of increased barbell load on the values of muscle activity and bar kinematics during the flat bench press movement was clearly visible. The greater the weight of a barbell, the greater the myoactivity of shoulder muscles and vertical velocity of the bar. The presence of the so-called sticking point (period) during the concentric phase of the bench press was also confirmed. In this study, the minimal velocity of the barbell (vmin) decreased not only under submaximal and maximal loads (90 and 100% of the one repetition maximum; 1-RM), but also under slightly lighter weights (70 and 80% of 1-RM).", "title": "" }, { "docid": "8bae8e7937f4c9a492a7030c62d7d9f4", "text": "Although there is considerable interest in the advance bookings model as a forecasting method in the hotel industry, there has been little research analyzing the use of an advance booking curve in forecasting hotel reservations. The mainstream of advance booking models reviewed in the literature uses only the bookings-on-hand data on a certain day and ignores the previous booking data. This empirical study analyzes the entire booking data set for one year provided by the Hotel ICON in Hong Kong, and identifies the trends and patterns in the data. 
The analysis demonstrates the use of an advance booking curve in forecasting hotel reservations at property level.", "title": "" }, { "docid": "717dd8e3c699d6cc22ba483002ab0a6f", "text": "Our analysis of many real-world event based applications has revealed that existing Complex Event Processing technology (CEP), while effective for efficient pattern matching on event stream, is limited in its capability of reacting in real time to opportunities and risks detected or environmental changes. We are the first to tackle this problem by providing active rule support embedded directly within the CEP engine, henceforth called Active Complex Event Processing technology, or short, Active CEP. We design the Active CEP model and associated rule language that allows rules to be triggered by CEP system state changes and correctly executed during the continuous query process. Moreover we design an Active CEP infrastructure that integrates the active rule component into the CEP kernel, allowing fine-grained and optimized rule processing. We demonstrate the power of Active CEP by applying it to the development of a collaborative project with UMass Medical School, which detects potential threats of infection and reminds healthcare workers to perform hygiene precautions in real-time. 1. BACKGROUND AND MOTIVATION Complex patterns of events often capture exceptions, threats or opportunities occurring across application space and time. Complex Event Processing (CEP) technology has thus increasingly gained popularity for efficiently detecting such event patterns in real-time. For example CEP has been employed by diverse applications ranging from healthcare systems, financial analysis, real-time business intelligence to RFID based surveillance. However, existing CEP technologies [3, 7, 2, 5], while effective for pattern matching, are limited in their capability of supporting active rules. 
We motivate the need for such capability based on our experience with the development of a real-world hospital infection control system, called HygieneReminder, or short HyReminder. Application: HyReminder. According to the U.S. Centers for Disease Control and Prevention [8], healthcare-associated infections hit 1.7 million people a year in the United States, causing an estimated 99,000 deaths. HyReminder is a collaborative project between WPI and University of Massachusetts Medical School (UMMS) that uses advanced CEP technologies to solve this long-standing public health problem. The HyReminder system aims to continuously track healthcare workers (HCW) for hygiene compliance (for example cleansing hands before entering a H1N1 patient’s room), and remind the HCW at the appropriate moments to perform hygiene precautions thus preventing spread of infections. CEP technologies are adopted to efficiently monitor event patterns, such as the sequence that a HCW left a patient room (this behavior is measured by a sensor reading and modeled as “exit” event), did not sanitize his hands (referred as “!sanitize”, where ! represents negation), and then entered another patient’s room (referred as “enter”). Such a sequence of behaviors, i.e. SEQ(exit,!sanitize,enter), would be deemed as a violation of hand hygiene regulations. 
Besides detecting complex events, the HyReminder system requires the ability to specify logic rules reminding HCWs to perform the respective appropriate hygiene upon detection of an imminent hand hygiene violation or an actual observed violation. A condensed version of example logic rules derived from HyReminder and modeled using CEP semantics is depicted in Figure 1. In the figure, the edge marked “Q1.1” expresses the logic that “if query Q1.1 is satisfied for a HCW, then change his hygiene status to warning and change his badge light to yellow”. This logic rule in fact specifies how the system should react to the observed change, here meaning the risk being detected by the continuous pattern matching query Q1.1, during the long running query process. The system’s streaming environment requires that such reactions be executed in a timely fashion. An additional complication arises in that the HCW status changed by this logic rule must be used as a condition by other continuous queries at run time, like Q2.1 and Q2.2. We can see that active rules and continuous queries over streaming data are tightly-coupled: continuous queries are monitoring the world while active rules are changing the world, both in real-time. Yet contrary to traditional databases, data is not persistently stored in a DSMS, but rather streamed through the system in fluctuating arrival rate. Thus processing active rules in CEP systems requires precise synchronization between queries and rules and careful consideration of latency and resource utilization. Limitations of Existing CEP Technology. In summary, the following active functionalities are needed by many event stream applications, but not supported by the existing", "title": "" } ]
scidocsrr
8984257b3fea005a6bee6049c2375f5f
A Critical Review of Online Social Data: Biases, Methodological Pitfalls, and Ethical Boundaries
[ { "docid": "1f700c0c55b050db7c760f0c10eab947", "text": "Cathy O’Neil’s Weapons of Math Destruction is a timely reminder of the power and perils of predictive algorithms and model-driven decision processes. The book deals in some depth with eight case studies of the abuses she associates with WMDs: “weapons of math destruction.” The cases include the havoc wrought by value-added models used to evaluate teacher performance and by the college ranking system introduced by U.S. News and World Report; the collateral damage of online advertising and models devised to track and monetize “eyeballs”; the abuses associated with the recidivism models used in judicial decisions; the inequities perpetrated by the use of personality tests in hiring decisions; the burdens placed on low-wage workers by algorithm-driven attempts to maximize labor efficiency; the injustices written into models that evaluate creditworthiness; the inequities produced by insurance companies’ risk models; and the potential assault on the democratic process by the use of big data in political campaigns. As this summary suggests, O’Neil had plenty of examples to choose from when she wrote the book, but since the publication of Weapons of Math Destruction, two more problems associated with model-driven decision procedures have surfaced, making O’Neil’s work even more essential reading. The first—the role played by fake news, much of it circulated on Facebook, in the 2016 election—has led to congressional investigations. The second—the failure of algorithm-governed oversight to recognize and delete gruesome posts on the Facebook Live streaming service—has caused CEO Mark Zuckerberg to announce the addition of 3,000 human screeners to the Facebook staff. While O’Neil’s book may seem too polemical to some readers and too cautious to others, it speaks forcefully to the cultural moment we share. 
O’Neil weaves the story of her own credentials and work experience into her analysis, because, as she explains, her training as a mathematician and her experience in finance shaped the way she now understands the world. O’Neil earned a PhD in mathematics from Harvard; taught at Barnard College, where her research area was algebraic number theory; and worked for the hedge fund D. E. Shaw, which uses mathematical analysis to guide investment decisions. When the financial crisis of 2008 revealed that even the most sophisticated models were incapable of anticipating risks associated with “black swans”—events whose rarity makes them nearly impossible to predict—O’Neil left the world of corporate finance to join the RiskMetrics Group, where she helped market risk models to financial institutions eager to rehabilitate their image. Ultimately, she became disillusioned with the financial industry’s refusal to take seriously the limitations of risk management models and left RiskMetrics. She rebranded herself a “data scientist” and took a job at Intent Media, where she helped design algorithms that would make big data useful for all kinds of applications. All the while, as O’Neil describes it, she “worried about the separation between technical models and real people, and about the moral repercussions of that separation” (page 48). O’Neil eventually left Intent Media to devote her energies to in Weapons of Math Destruction", "title": "" } ]
[ { "docid": "08e8629cf29da3532007c5cf5c57d8bb", "text": "Social networks are growing in number and size, with hundreds of millions of user accounts among them. One added benefit of these networks is that they allow users to encode more information about their relationships than just stating who they know. In this work, we are particularly interested in trust relationships, and how they can be used in designing interfaces. In this paper, we present FilmTrust, a website that uses trust in web-based social networks to create predictive movie recommendations. Using the FilmTrust system as a foundation, we show that these recommendations are more accurate than other techniques when the user’s opinions about a film are divergent from the average. We discuss this technique both as an application of social network analysis, as well as how it suggests other analyses that can be performed to help improve collaborative filtering algorithms of all types.", "title": "" }, { "docid": "7a8faa4e8ecef8e28aa2203f0aa9d888", "text": "In today’s global marketplace, individual firms do not compete as independent entities but rather as an integral part of a supply chain. This paper proposes a fuzzy mathematical programming model for supply chain planning which considers supply, demand and process uncertainties. The model has been formulated as a fuzzy mixed-integer linear programming model where data are ill-known and modelled by triangular fuzzy numbers. The fuzzy model provides the decision maker with alternative decision plans for different degrees of satisfaction. This proposal is tested by using data from a real automobile supply chain. © 2009 Elsevier B.V. 
All rights reserved.", "title": "" }, { "docid": "ff029b2b9799ab1de433a3264d28d711", "text": "This paper introduces and summarises the findings of a new shared task at the intersection of Natural Language Processing and Computer Vision: the generation of image descriptions in a target language, given an image and/or one or more descriptions in a different (source) language. This challenge was organised along with the Conference on Machine Translation (WMT16), and called for system submissions for two task variants: (i) a translation task, in which a source language image description needs to be translated to a target language, (optionally) with additional cues from the corresponding image, and (ii) a description generation task, in which a target language description needs to be generated for an image, (optionally) with additional cues from source language descriptions of the same image. In this first edition of the shared task, 16 systems were submitted for the translation task and seven for the image description task, from a total of 10 teams.", "title": "" }, { "docid": "011d0fa5eac3128d5127a66741689df7", "text": "Tweets often contain a large proportion of abbreviations, alternative spellings, novel words and other non-canonical language. These features are problematic for standard language analysis tools and it can be desirable to convert them to canonical form. We propose a novel text normalization model based on learning edit operations from labeled data while incorporating features induced from unlabeled data via character-level neural text embeddings. The text embeddings are generated using a Simple Recurrent Network. We find that enriching the feature set with text embeddings substantially lowers word error rates on an English tweet normalization dataset.
Our model improves on the state-of-the-art with little training data and without any lexical resources.", "title": "" }, { "docid": "68fb48f456383db1865c635e64333d8a", "text": "Documenting underwater archaeological sites is an extremely challenging problem. Sites covering large areas are particularly daunting for traditional techniques. In this paper, we present a novel approach to this problem using both an autonomous underwater vehicle (AUV) and a diver-controlled stereo imaging platform to document the submerged Bronze Age city at Pavlopetri, Greece. The result is a three-dimensional (3D) reconstruction covering 26,600 m2 at a resolution of 2 mm/pixel, the largest-scale underwater optical 3D map, at such a resolution, in the world to date. We discuss the advances necessary to achieve this result, including i) an approach to color correct large numbers of images at varying altitudes and over varying bottom types; ii) a large-scale bundle adjustment framework that is capable of handling upward of 400,000 stereo images; and iii) a novel approach to the registration and rapid documentation of an underwater excavation area that can quickly produce maps of site change. We present visual and quantitative comparisons to the authors’ previous underwater mapping approaches. © 2016 Wiley Periodicals, Inc.", "title": "" }, { "docid": "b121ba0b5d24e0d53f85d04415b8c41d", "text": "Until now, most systems for Internet of Things (IoT) management have been designed in a Cloud-centric manner, getting benefits from the unified platform that the Cloud offers. However, a Cloud-centric infrastructure mainly achieves static sensor and data streaming systems, which do not support the direct configuration management of IoT components. To address this issue, a virtualization of IoT components (Virtual Resources) is introduced at the edge of the IoT network.
This research also introduces permission-based Blockchain protocols to handle the provisioning of Virtual Resources directly onto edge devices. The architecture presented by this research focuses on the use of Virtual Resources and Blockchain protocols as management tools to distribute configuration tasks towards the edge of the IoT network. Results from lab experiments demonstrate the successful deployment and communication performance (response time in milliseconds) of Virtual Resources on two edge platforms, Raspberry Pi and Edison board. This work also provides performance evaluations of two permission-based blockchain protocol approaches. The first blockchain approach is a Blockchain as a Service (BaaS) in the Cloud, Bluemix. The second blockchain approach is a private cluster hosted in a Fog network, Multichain.", "title": "" }, { "docid": "3149dd6f03208af01333dbe2c045c0c6", "text": "Debates about human nature often revolve around what is built in. However, the hallmark of human nature is how much of a person's identity is not built in; rather, it is humans' great capacity to adapt, change, and grow. This nature versus nurture debate matters, not only to students of human nature, but to everyone. It matters whether people believe that their core qualities are fixed by nature (an entity theory, or fixed mindset) or whether they believe that their qualities can be developed (an incremental theory, or growth mindset). In this article, I show that an emphasis on growth not only increases intellectual achievement but can also advance conflict resolution between long-standing adversaries, decrease even chronic aggression, foster cross-race relations, and enhance willpower. I close by returning to human nature and considering how it is best conceptualized and studied.", "title": "" }, { "docid": "19ebb5c0cdf90bf5aef36ad4b9f621a1", "text": "There has been a dramatic increase in the number and complexity of new ventilation modes over the last 30 years.
The impetus for this has been the desire to improve the safety, efficiency, and synchrony of ventilator-patient interaction. Unfortunately, the proliferation of names for ventilation modes has made understanding mode capabilities problematic. New modes are generally based on increasingly sophisticated closed-loop control systems or targeting schemes. We describe the 6 basic targeting schemes used in commercially available ventilators today: set-point, dual, servo, adaptive, optimal, and intelligent. These control systems are designed to serve the 3 primary goals of mechanical ventilation: safety, comfort, and liberation. The basic operations of these schemes may be understood by clinicians without any engineering background, and they provide the basis for understanding the wide variety of ventilation modes and their relative advantages for improving patient-ventilator synchrony. Conversely, their descriptions may provide engineers with a means to better communicate to end users.", "title": "" }, { "docid": "5eeb17964742e1bf1e517afcb1963b02", "text": "Global navigation satellite system reflectometry is a multistatic radar using navigation signals as signals of opportunity. It provides wide-swath and improved spatiotemporal sampling over current space-borne missions. The lack of experimental datasets from space covering signals from multiple constellations (GPS, GLONASS, Galileo, and Beidou) at dual-band (L1 and L2) and dual-polarization (right- and left-hand circular polarization), over the ocean, land, and cryosphere remains a bottleneck to further develop these techniques. 3Cat-2 is a 6-unit (3 × 2 elementary blocks of 10 × 10 × 10 cm3) CubeSat mission designed and implemented at the Universitat Politècnica de Catalunya-BarcelonaTech to explore fundamental issues toward an improvement in the understanding of the bistatic scattering properties of different targets. 
Since geolocalization of the specific reflection points is determined by the geometry only, only a moderate pointing accuracy is required to correct the antenna pattern in scatterometry measurements. This paper describes the mission analysis and the current status of the assembly, integration, and verification activities of both the engineering model and the flight model performed at Universitat Politècnica de Catalunya NanoSatLab premises. 3Cat-2 launch is foreseen for the second quarter of 2016 into a Sun-Synchronous orbit of 510-km height.", "title": "" }, { "docid": "eb271acef996a9ba0f84a50b5055953b", "text": "Makeup is widely used to improve facial attractiveness and is well accepted by the public. However, different makeup styles will result in significant facial appearance changes. It remains a challenging problem to match makeup and non-makeup face images. This paper proposes a learning from generation approach for makeup-invariant face verification by introducing a bi-level adversarial network (BLAN). To alleviate the negative effects from makeup, we first generate non-makeup images from makeup ones, and then use the synthesized non-makeup images for further verification. Two adversarial networks in BLAN are integrated in an end-to-end deep network, with one at the pixel level for reconstructing appealing facial images and the other at the feature level for preserving identity information. These two networks jointly reduce the sensing gap between makeup and non-makeup images. Moreover, we make the generator well constrained by incorporating multiple perceptual losses.
Experimental results on three benchmark makeup face datasets demonstrate that our method achieves state-of-the-art verification accuracy across makeup status and can produce photo-realistic non-makeup images", "title": "" }, { "docid": "1785135fa0a35fd59a6181ec5886ddc1", "text": "We aimed to describe the surgical technique and clinical outcomes of paraspinal-approach reduction and fixation (PARF) in a group of patients with Denis type B thoracolumbar burst fracture (TLBF) with neurological deficiencies. A total of 62 patients with Denis B TLBF with neurological deficiencies were included in this study between January 2009 and December 2011. Clinical evaluations including the Frankel scale, pain visual analog scale (VAS) and radiological assessment (CT scans for fragment reduction and X-ray for the Cobb angle, adjacent superior and inferior intervertebral disc height, and vertebral canal diameter) were performed preoperatively and at 3 days, 6 months, and 1 and 2 years postoperatively. All patients underwent successful PARF, and were followed up for at least 2 years. Average surgical time, blood loss and incision length were recorded. The sagittal vertebral canal diameter was significantly enlarged. The canal stenosis index was also improved. Kyphosis was corrected and remained at 8.6±1.4° (P>0.05) 1 year postoperatively. Adjacent disc heights remained constant. Average Frankel grades were significantly improved at the end of follow-up. All 62 patients were neurologically assessed. Pain scores decreased at 6 months postoperatively, compared to before surgery (P<0.05).
PARF provided excellent reduction for traumatic segmental kyphosis, and resulted in significant spinal canal clearance, which restored and maintained the vertebral body height of patients with Denis B TLBF with neurological deficits.", "title": "" }, { "docid": "2dc2e201bee0f963355d10572ad71955", "text": "This paper presents Dynamoth, a dynamic, scalable, channel-based pub/sub middleware targeted at large scale, distributed and latency constrained systems. Our approach provides a software layer that balances the load generated by a high number of publishers, subscribers and messages across multiple, standard pub/sub servers that can be deployed in the Cloud. In order to optimize Cloud infrastructure usage, pub/sub servers can be added or removed as needed. Balancing takes into account the live characteristics of each channel and is done in a hierarchical manner across channels (macro) as well as within individual channels (micro) to maintain acceptable performance and low latencies despite highly varying conditions. Load monitoring is performed in an unintrusive way, and rebalancing employs a lazy approach in order to minimize its temporal impact on performance while ensuring successful and timely delivery of all messages. Extensive real-world experiments that illustrate the practicality of the approach within a massively multiplayer game setting are presented. Results indicate that with a given number of servers, Dynamoth was able to handle 60% more simultaneous clients than the consistent hashing approach, and that it was properly able to deal with highly varying conditions in the context of large workloads.", "title": "" }, { "docid": "23ed8f887128cb1cd6ea2f386c099a43", "text": "The capability to overcome terrain irregularities or obstacles, named terrainability, is mostly dependent on the suspension mechanism of the rover and its control.
For a given wheeled robot, the terrainability can be improved by using a sophisticated control, and is somewhat related to minimizing wheel slip. The proposed control method, named torque control, improves the rover terrainability by taking into account the whole mechanical structure. The rover model is based on the Newton-Euler equations and knowing the complete state of the mechanical structures allows us to compute the force distribution in the structure, and especially between the wheels and the ground. Thus, a set of torques maximizing the traction can be used to drive the rover. The torque control algorithm is presented in this paper, as well as tests showing its impact and improvement in terms of terrainability. Using the CRAB rover platform, we show that the torque control not only increases the climbing performance but also limits odometric errors and reduces the overall power consumption.", "title": "" }, { "docid": "134578862a01dc4729999e9076362ee0", "text": "PURPOSE\nBasal-like breast cancer is associated with high grade, poor prognosis, and younger patient age. Clinically, a triple-negative phenotype definition [estrogen receptor, progesterone receptor, and human epidermal growth factor receptor (HER)-2, all negative] is commonly used to identify such cases. EGFR and cytokeratin 5/6 are readily available positive markers of basal-like breast cancer applicable to standard pathology specimens. This study directly compares the prognostic significance between three- and five-biomarker surrogate panels to define intrinsic breast cancer subtypes, using a large clinically annotated series of breast tumors.\n\n\nEXPERIMENTAL DESIGN\nFour thousand forty-six invasive breast cancers were assembled into tissue microarrays. All had staging, pathology, treatment, and outcome information; median follow-up was 12.5 years. 
Cox regression analyses and likelihood ratio tests compared the prognostic significance for breast cancer death-specific survival (BCSS) of the two immunohistochemical panels.\n\n\nRESULTS\nAmong 3,744 interpretable cases, 17% were basal using the triple-negative definition (10-year BCSS, 67%) and 9% were basal using the five-marker method (10-year BCSS, 62%). Likelihood ratio tests of multivariable Cox models including standard clinical variables show that the five-marker panel is significantly more prognostic than the three-marker panel. The poor prognosis of the triple-negative phenotype is conferred almost entirely by those tumors positive for basal markers. Among triple-negative patients treated with adjuvant anthracycline-based chemotherapy, the additional positive basal markers identified a cohort of patients with significantly worse outcome.\n\n\nCONCLUSIONS\nThe expanded surrogate immunopanel of estrogen receptor, progesterone receptor, HER-2, EGFR, and cytokeratin 5/6 provides a more specific definition of basal-like breast cancer that better predicts breast cancer survival.", "title": "" }, { "docid": "4b69831f2736ae08049be81e05dd4046", "text": "One of the most important aspects in playing the piano is using the appropriate fingers to facilitate movement and transitions. The fingering arrangement depends to a certain extent on the size of the musician’s hand. We have developed an automatic fingering system that, given a sequence of pitches, suggests which fingers should be used. The output can be personalized to agree with the limitations of the user’s hand. We also consider this system to be the base of a more complex future system: a score reduction system that will reduce orchestra score to piano scores. This paper describes: • “Vertical cost” model: the stretch induced by a given hand position. • “Horizontal cost” model: transition between two hand positions. • A system that computes low-cost fingering for a given piece of music.
• A machine learning technique used to learn the appropriate parameters in the models.", "title": "" }, { "docid": "65385cdaac98022605efd2fd82bb211b", "text": "As electric vehicles (EVs) take a greater share in the personal automobile market, their penetration may bring higher peak demand at the distribution level. This may cause potential transformer overloads, feeder congestions, and undue circuit faults. This paper focuses on the impact of charging EVs on a residential distribution circuit. Different EV penetration levels, EV types, and charging profiles are considered. In order to minimize the impact of charging EVs on a distribution circuit, a demand response strategy is proposed in the context of a smart distribution network. In the proposed DR strategy, consumers will have their own choices to determine which load to control and when. Consumer comfort indices are introduced to measure the impact of demand response on consumers' lifestyle. The proposed indices can provide electric utilities a better estimation of the customer acceptance of a DR program, and the capability of a distribution circuit to accommodate EV penetration.", "title": "" }, { "docid": "952d97cc8302a6a1ab584ae32bfb64ee", "text": "1 Background and Objective of the Survey Compared with conventional centralized systems, blockchain technologies used for transactions of value records, such as bitcoins, structurally have the characteristics that (i) enable the creation of a system that substantially ensures no downtime, (ii) make falsification extremely hard, and (iii) realize an inexpensive system. Blockchain technologies are expected to be utilized in diverse fields including IoT. Japanese companies just started technology verification independently, and there is a risk that the initiative might be taken by foreign companies in blockchain technologies, which are highly likely to serve as the next-generation platform for all industrial fields in the future.
From such a point of view, this survey was conducted for the purpose of: comparing and analyzing details of a number of blockchains and the advantages/challenges therein; ascertaining promising fields in which the technology should be utilized; ascertaining the impact of the technology on society and the economy; and developing policy guidelines for encouraging industries to utilize the technology in the future. This report compiles the results of interviews with domestic and overseas companies involving blockchain technology and experts. The content of this report is mostly based on data as of the end of February 2016. As specifications of blockchains and the status of services being provided change by the minute, it is recommended to check the latest conditions when intending to utilize any related technologies in business, etc. Terms and abbreviations used in this report are defined as follows. BTC: Abbreviation used as a currency unit of bitcoins. FinTech: A coined term combining Finance and Technology; technologies and initiatives to create new services and businesses by utilizing ICT in the financial business. Virtual currency / Cryptocurrency: Bitcoins or other information whose value is recognized only on the Internet. Exchange: Services to exchange virtual currency, such as bitcoins, with another virtual currency or with legal currency, such as Japanese yen or US dollars; some exchanges offer services for contracts for difference, such as foreign exchange margin transactions (FX transactions). Consensus: A series of procedures from approving a transaction as an official one and mutually confirming said results by using the following consensus algorithm. Consensus algorithm: Algorithm in general for mutually approving a distributed ledger using Proof of Work and Proof of Stake, etc. Token: Virtual currency unique to blockchains; virtual currency used for paying fees for asset management, etc.
on blockchains is referred to …", "title": "" }, { "docid": "69e86a1f6f4d7f1039a3448e06df3725", "text": "In this paper, a low profile LLC resonant converter with two planar transformers is proposed for a slim SMPS (Switching Mode Power Supply). Design procedures and voltage gain characteristics on the proposed planar transformer and converter are described in detail. Two planar transformers applied to LLC resonant converter are connected in series at primary and in parallel by the center-tap winding at secondary. Based on the theoretical analysis and simulation results of the voltage gain characteristics, a 300W LLC resonant converter for LED TV power module is designed and tested.", "title": "" }, { "docid": "f9d4b66f395ec6660da8cb22b96c436c", "text": "The purpose of the study was to measure objectively the home use of the reciprocating gait orthosis (RGO) and the electrically augmented (hybrid) RGO. It was hypothesised that RGO use would increase following provision of functional electrical stimulation (FES). Five adult subjects participated in the study with spinal cord lesions ranging from C2 (incomplete) to T6. Selection criteria included active RGO use and suitability for electrical stimulation. Home RGO use was measured for up to 18 months by determining the mean number of steps taken per week. During this time patients were supplied with the hybrid system. Three alternatives for the measurement of steps taken were investigated: a commercial digital pedometer, a magnetically actuated counter and a heel contact switch linked to an electronic counter. The latter was found to be the most reliable system and was used for all measurements. Additional information on RGO use was acquired using three patient diaries administered throughout the study and before and after the provision of the hybrid system. Testing of the original hypothesis was complicated by problems in finding a reliable measurement tool and difficulties with data collection. 
However, the results showed that overall use of the RGO, whether with or without stimulation, is low. Statistical analysis of the step counter results was not realistic. No statistically significant change in RGO use was found between the patient diaries. The study suggests that the addition of electrical stimulation does not increase RGO use. The study highlights the problem of objectively measuring orthotic use in the home.", "title": "" } ]
scidocsrr
8ebdc8fee8a3c35cd03cb1a3c1bae8d1
Novel Cellular Active Array Antenna System at Base Station for Beyond 4G
[ { "docid": "cac379c00a4146acd06c446358c3e95a", "text": "In this work, a new base station antenna is proposed. Two separate frequency bands with separate radiating elements are used in each band. The frequency band separation ratio is about 1.3:1. These elements are arranged with different spacing (wider spacing for the lower frequency band, and narrower spacing for the higher frequency band). Isolation between bands inherently exists in this approach. This avoids the grating lobe effect, and mitigates the beam narrowing (dispersion) seen with fixed element spacing covering the whole wide bandwidth. A new low-profile cross dipole is designed, which is integrated in the array with an EBG/AMC structure for reducing the size of low band elements and decreasing coupling at high band.", "title": "" }, { "docid": "0dd462fa371d270a63e7ad88b070d8a2", "text": "Currently, many operators worldwide are deploying Long Term Evolution (LTE) to provide much faster access with lower latency and higher efficiency than its predecessors 3G and 3.5G. Meanwhile, the service rollout of LTE-Advanced, which is an evolution of LTE and a “true 4G” mobile broadband, is being underway to further enhance LTE performance. However, the anticipated challenges of the next decade (2020s) are so tremendous and diverse that there is a vastly increased need for a new generation mobile communications system with even further enhanced capabilities and new functionalities, namely a fifth generation (5G) system. Envisioning the development of a 5G system by 2020, at DOCOMO we started studies on future radio access as early as 2010, just after the launch of LTE service. The aim at that time was to anticipate the future user needs and the requirements of 10 years later (2020s) in order to identify the right concept and radio access technologies for the next generation system. 
The identified 5G concept consists of an efficient integration of existing spectrum bands for current cellular mobile and future new spectrum bands including higher frequency bands, e.g., millimeter wave, with a set of spectrum specific and spectrum agnostic technologies. Since a few years ago, we have been conducting several proof-of-concept activities and investigations on our 5G concept and its key technologies, including the development of a 5G real-time simulator, experimental trials of a wide range of frequency bands and technologies and channel measurements for higher frequency bands. In this paper, we introduce an overview of our views on the requirements, concept and promising technologies for 5G radio access, in addition to our ongoing activities for paving the way toward the realization of 5G by 2020. key words: next generation mobile communications system, 5G, 4G, LTE, LTE-advanced", "title": "" } ]
[ { "docid": "6660bcfd564726421d9eaaa696549454", "text": "When building intelligent spaces, the knowledge representation for encapsulating rooms, users, groups, roles, and other information is a fundamental design question. We present a semantic network as such a representation, and demonstrate its utility as a basis for ongoing work.", "title": "" }, { "docid": "d13ecf582ac820cdb8ea6353c44c535f", "text": "We have previously shown that, while the intrinsic quality of the oocyte is the main factor affecting blastocyst yield during bovine embryo development in vitro, the main factor affecting the quality of the blastocyst is the postfertilization culture conditions. Therefore, any improvement in the quality of blastocysts produced in vitro is likely to derive from the modification of the postfertilization culture conditions. The objective of this study was to examine the effect of the presence or absence of serum and the concentration of BSA during the period of embryo culture in vitro on 1) cleavage rate, 2) the kinetics of embryo development, 3) blastocyst yield, and 4) blastocyst quality, as assessed by cryotolerance and gene expression patterns. The quantification of all gene transcripts was carried out by real-time quantitative reverse transcription-polymerase chain reaction. Bovine blastocysts from four sources were used: 1) in vitro culture in synthetic oviduct fluid (SOF) supplemented with 3 mg/ml BSA and 10% fetal calf serum (FCS), 2) in vitro culture in SOF + 3 mg/ml BSA in the absence of serum, 3) in vitro culture in SOF + 16 mg/ml BSA in the absence of serum, and 4) in vivo blastocysts. There was no difference in overall blastocyst yield at Day 9 between the groups. However, significantly more blastocysts were present by Day 6 in the presence of 10% serum (20.0%) compared with 3 mg/ml BSA (4.6%, P < 0.001) or 16 mg/ml BSA (11.6%, P < 0.01). By Day 7, however, this difference had disappeared. 
Following vitrification, there was no difference in survival between blastocysts produced in the presence of 16 mg/ml BSA and those produced in the presence of 10% FCS; the survival of both groups was significantly lower than the in vivo controls at all time points and in terms of hatching rate. In contrast, survival of blastocysts produced in SOF + 3 mg/ml BSA in the absence of serum was intermediate, with no difference remaining at 72 h when compared with in vivo embryos. Differences in relative mRNA abundance between the two groups of blastocysts analyzed were found for genes related to apoptosis (Bax), oxidative stress (MnSOD, CuZnSOD, and SOX), communication through gap junctions (Cx31 and Cx43), maternal recognition of pregnancy (IFN-tau), and differentiation and implantation (LIF and LR-beta). The presence of serum during the culture period resulted in a significant increase in the level of expression of MnSOD, SOX, Bax, LIF, and LR-beta. The level of expression of Cx31 and Cu/ZnSOD also tended to be increased, although the difference was not significant. In contrast, the level of expression of Cx43 and IFN-tau was decreased in the presence of serum. In conclusion, using a combination of measures of developmental competence (cleavage and blastocyst rates) and qualitative measures such as cryotolerance and relative mRNA abundance to give a more complete picture of the consequences of modifying medium composition on the embryo, we have shown that conditions of postfertilization culture, in particular, the presence of serum in the medium, can affect the speed of embryo development and the quality of the resulting blastocysts. The reduced cryotolerance of blastocysts generated in the presence of serum is accompanied by deviations in the relative abundance of developmentally important gene transcripts.
Omission of serum during the postfertilization culture period can significantly improve the cryotolerance of the blastocysts to a level intermediate between serum-generated blastocysts and those derived in vivo. The challenge now is to try and bridge this gap.", "title": "" }, { "docid": "ba96f2099e6e44ad14b85bfc2b49ddff", "text": "In this paper, an improved multimodel optimal quadratic control structure for variable speed, pitch regulated wind turbines (operating at high wind speeds) is proposed in order to integrate high levels of wind power to actively provide a primary reserve for frequency control. On the basis of the nonlinear model of the studied plant, and taking into account the wind speed fluctuations, and the electrical power variation, a multimodel linear description is derived for the wind turbine, and is used for the synthesis of an optimal control law involving a state feedback, an integral action and an output reference model. This new control structure allows a rapid transition of the wind turbine generated power between different desired set values. This electrical power tracking is ensured with a high-performance behavior for all other state variables: turbine and generator rotational speeds and mechanical shaft torque; and smooth and adequate evolution of the control variables.", "title": "" }, { "docid": "0aab0c0fa6a1b0f283478b390dece614", "text": "Hydrokinetic turbines can provide a source of electricity for remote areas located near a river or stream. The objective of this paper is to describe the design, simulation, build, and testing of a novel hydrokinetic turbine. The main components of the system are a permanent magnet synchronous generator (PMSG), a machined H-Darrieus rotor, an embedded controls system, and a cataraft. 
The design and construction of this device were conducted at the Oregon Institute of Technology in Wilsonville, Oregon.", "title": "" }, { "docid": "a1d0bf0d28bbe3dd568e7e01bc9d59c3", "text": "A novel coupling technique for circularly polarized annular-ring patch antenna is developed and discussed. The circular polarization (CP) radiation of the annular-ring patch antenna is achieved by a simple microstrip feed line through the coupling of a fan-shaped patch on the same plane of the antenna. Proper positioning of the coupling fan-shaped patch excites two orthogonal resonant modes with a 90° phase difference, and a pure circular polarization is obtained. The dielectric material is a cylindrical block of ceramic with a permittivity of 25, which reduces the size of the antenna. The prototype has been designed and fabricated and found to have an impedance bandwidth of 2.3% and a 3 dB axial-ratio bandwidth of about 0.6% at the center frequency of 2700 MHz. The characteristics of the proposed antenna have been verified by the simulation software HFSS and by experiment. The measured and simulated results are in good agreement.", "title": "" }, { "docid": "6b718717d5ecef343a8f8033803a55e6", "text": "BACKGROUND\nMedication and adverse drug event (ADE) information extracted from electronic health record (EHR) notes can be a rich resource for drug safety surveillance. Existing observational studies have mainly relied on structured EHR data to obtain ADE information; however, ADEs are often buried in the EHR narratives and not recorded in structured data.\n\n\nOBJECTIVE\nTo unlock ADE-related information from EHR narratives, there is a need to extract relevant entities and identify relations among them. In this study, we focus on relation identification.
This study aimed to evaluate natural language processing and machine learning approaches using the expert-annotated medical entities and relations in the context of drug safety surveillance, and investigate how different learning approaches perform under different configurations.\n\n\nMETHODS\nWe have manually annotated 791 EHR notes with 9 named entities (eg, medication, indication, severity, and ADEs) and 7 different types of relations (eg, medication-dosage, medication-ADE, and severity-ADE). Then, we explored 3 supervised machine learning systems for relation identification: (1) a support vector machines (SVM) system, (2) an end-to-end deep neural network system, and (3) a supervised descriptive rule induction baseline system. For the neural network system, we exploited the state-of-the-art recurrent neural network (RNN) and attention models. We report the performance by macro-averaged precision, recall, and F1-score across the relation types.\n\n\nRESULTS\nOur results show that the SVM model achieved the best average F1-score of 89.1% on test data, outperforming the long short-term memory (LSTM) model with attention (F1-score of 65.72%) as well as the rule induction baseline system (F1-score of 7.47%) by a large margin. The bidirectional LSTM model with attention achieved the best performance among different RNN models. With the inclusion of additional features in the LSTM model, its performance can be boosted to an average F1-score of 77.35%.\n\n\nCONCLUSIONS\nIt shows that classical learning models (SVM) remains advantageous over deep learning models (RNN variants) for clinical relation identification, especially for long-distance intersentential relations. However, RNNs demonstrate a great potential of significant improvement if more training data become available. Our work is an important step toward mining EHRs to improve the efficacy of drug safety surveillance. 
Most importantly, the annotated data used in this study will be made publicly available, which will further promote drug safety research in the community.", "title": "" }, { "docid": "afbd0ecad829246ed7d6e1ebcebf5815", "text": "Battery thermal management system (BTMS) is essential for electric-vehicle (EV) and hybrid-vehicle (HV) battery packs to operate effectively in all climates. Lithium-ion (Li-ion) batteries offer many advantages to the EV such as high power and high specific energy. However, temperature affects their performance, safety, and productive life. This paper is about the design and evaluation of a BTMS based on the Peltier effect heat pumps. The discharge efficiency of a 60-Ah prismatic Li-ion pouch cell was measured under different rates and different ambient temperature values. The obtained results were used to design a solid-state BTMS based on Peltier thermoelectric coolers (TECs). The proposed BTMS is then modeled and evaluated at constant current discharge in the laboratory. In addition, The BTMS was installed in an EV that was driven in the US06 cycle. The thermal response and the energy consumption of the proposed BTMS were satisfactory.", "title": "" }, { "docid": "6f5ada16b55afc21f7291f7764ec85ee", "text": "Breast cancer is often treated with radiotherapy (RT), with two opposing tangential fields. When indicated, supraclavicular lymph nodes have to be irradiated, and a third anterior field is applied. The junction region has the potential to be over or underdosed. To overcome this problem, many techniques have been proposed. A literature review of 3 Dimensional Conformal RT (3D CRT) and older 3-field techniques was carried out. Intensity Modulated RT (IMRT) techniques are also briefly discussed. Techniques are categorized, few characteristic examples are presented and a comparison is attempted. Three-field techniques can be divided in monoisocentric and two-isocentric. 
Two-isocentric techniques can be further divided into full field and half field techniques. Monoisocentric techniques show certain great advantages over two-isocentric techniques. However, they are not always applicable and they require extra caution as they are characterized by high dose gradient in the junction region. IMRT has been proved to give better dosimetric results. Three-field matching is a complicated procedure, with potential of over or underdosage in the junction region. Many techniques have been proposed, each with advantages and disadvantages. Among them, monoisocentric techniques, when carefully applied, are the ideal choice, provided IMRT facility is not available. Otherwise, a two-isocentric half beam technique is recommended.", "title": "" }, { "docid": "601ffeb412bac0baa6fdb6da7a4a9a42", "text": "CLCWeb: Comparative Literature and Culture, the peer-reviewed, full-text, and open-access learned journal in the humanities and social sciences, publishes new scholarship following tenets of the discipline of comparative literature and the field of cultural studies designated as \"comparative cultural studies.\" Publications in the journal are indexed in the Annual Bibliography of English Language and Literature (Chadwyck-Healey), the Arts and Humanities Citation Index (Thomson Reuters ISI), the Humanities Index (Wilson), Humanities International Complete (EBSCO), the International Bibliography of the Modern Language Association of America, and Scopus (Elsevier). The journal is affiliated with the Purdue University Press monograph series of Books in Comparative Cultural Studies. Contact: <[email protected]>", "title": "" }, { "docid": "36fef38de53386e071ee2a1996aa733f", "text": "Knowledge embedding, which projects triples in a given knowledge base to d-dimensional vectors, has attracted considerable research efforts recently. 
Most existing approaches treat the given knowledge base as a set of triplets, each of whose representation is then learned separately. However, as a fact, triples are connected and depend on each other. In this paper, we propose a graph aware knowledge embedding method (GAKE), which formulates knowledge base as a directed graph, and learns representations for any vertices or edges by leveraging the graph’s structural information. We introduce three types of graph context for embedding: neighbor context, path context, and edge context, each reflects properties of knowledge from different perspectives. We also design an attention mechanism to learn representative power of different vertices or edges. To validate our method, we conduct several experiments on two tasks. Experimental results suggest that our method outperforms several state-of-the-art knowledge embedding models.", "title": "" }, { "docid": "8589ec481e78d14fbeb3e6e4205eee50", "text": "This paper presents a novel ensemble classifier generation technique RotBoost, which is constructed by combining Rotation Forest and AdaBoost. The experiments conducted with 36 real-world data sets available from the UCI repository, among which a classification tree is adopted as the base learning algorithm, demonstrate that RotBoost can generate ensemble classifiers with significantly lower prediction error than either Rotation Forest or AdaBoost more often than the reverse. Meanwhile, RotBoost is found to perform much better than Bagging and MultiBoost. Through employing the bias and variance decompositions of error to gain more insight of the considered classification methods, RotBoost is seen to simultaneously reduce the bias and variance terms of a single tree and the decrement achieved by it is much greater than that done by the other ensemble methods, which leads RotBoost to perform best among the considered classification procedures. 
Furthermore, RotBoost has a potential advantage over AdaBoost of suiting parallel execution. 2008 Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "fd70fff204201c33ed3d901c48560980", "text": "In the early 1960s, the average American adult male weighed 168 pounds. Today, he weighs nearly 180 pounds. Over the same time period, the average female adult weight rose from 143 pounds to over 155 pounds (U.S. Department of Health and Human Services, 1977, 1996). In the early 1970s, 14 percent of the population was classified as medically obese. Today, obesity rates are two times higher (Centers for Disease Control, 2003). Weights have been rising in the United States throughout the twentieth century, but the rise in obesity since 1980 is fundamentally different from past changes. For most of the twentieth century, weights were below levels recommended for maximum longevity (Fogel, 1994), and the increase in weight represented an increase in health, not a decrease. Today, Americans are fatter than medical science recommends, and weights are still increasing. While many other countries have experienced significant increases in obesity, no other developed country is quite as heavy as the United States. What explains this growth in obesity? Why is obesity higher in the United States than in any other developed country? The available evidence suggests that calories expended have not changed significantly since 1980, while calories consumed have risen markedly. But these facts just push the puzzle back a step: why has there been an increase in calories consumed? We propose a theory based on the division of labor in food preparation. In the 1960s, the bulk of food preparation was done by families that cooked their own food and ate it at home. 
Since then, there has been a revolution in the mass preparation of food that is roughly comparable to the mass", "title": "" }, { "docid": "3cf174505ecd647930d762327fc7feb6", "text": "The purpose of the present study was to examine the relationship between workplace friendship and social loafing effect among employees in Certified Public Accounting (CPA) firms. Previous studies showed that workplace friendship has both positive and negative effects, meaning that there is an inconsistent relationship between workplace friendship and social loafing. The present study investigated the correlation between workplace friendship and social loafing effect among employees from CPA firms in Taiwan. The study results revealed that there was a negative relationship between workplace friendship and social loafing effect among CPA employees. In other words, the better the workplace friendship, the lower the social loafing effect. An individual would not put less effort in work when there was a low social loafing effect.", "title": "" }, { "docid": "b4d5bfc26bac32e1e1db063c3696540a", "text": "Symmetric positive semidefinite (SPSD) matrix approximation is an important problem with applications in kernel methods. However, existing SPSD matrix approximation methods such as the Nyström method only have weak error bounds. In this paper we conduct in-depth studies of an SPSD matrix approximation model and establish strong relative-error bounds. We call it the prototype model for it has more efficient and effective extensions, and some of its extensions have high scalability. Though the prototype model itself is not suitable for large-scale data, it is still useful to study its properties, on which the analysis of its extensions relies. This paper offers novel theoretical analysis, efficient algorithms, and a highly accurate extension. 
First, we establish a lower error bound for the prototype model, and we improve the error bound of an existing column selection algorithm to match the lower bound. In this way, we obtain the first optimal column selection algorithm for the prototype model. We also prove that the prototype model is exact under certain conditions. Second, we develop a simple column selection algorithm with a provable error bound. Third, we propose a so-called spectral shifting model to make the approximation more accurate when the spectrum of the matrix decays slowly, and the improvement is theoretically quantified. The spectral shifting method can also be applied to improve other SPSD matrix approximation models.", "title": "" }, { "docid": "0a143c2d4af3cc726964a90927556399", "text": "Humans prefer to interact with each other using speech. Since this is the most natural mode of communication, the humans also want to interact with machines using speech only. So, automatic speech recognition has gained a lot of popularity. Different approaches for speech recognition exist like Hidden Markov Model (HMM), Dynamic Time Warping (DTW), Vector Quantization (VQ), etc. This paper uses Neural Network (NN) along with Mel Frequency Cepstrum Coefficients (MFCC) for speech recognition. Mel Frequency Cepstrum Coefficients (MFCC) has been used for the feature extraction of speech. This gives the feature of the waveform. For pattern matching FeedForward Neural Network with Back propagation algorithm has been applied. The paper analyzes the various training algorithms present for training the Neural Network and uses train scg for the experiment. 
The work has been done on MATLAB and experimental results show that system is able to recognize words at sufficiently high accuracy.", "title": "" }, { "docid": "c2553e6256ef130fbd5bc0029bb5e7b7", "text": "Using Blockchain seems a promising approach for Business Process Reengineering (BPR) to alleviate trust issues among stakeholders, by providing decentralization, transparency, traceability, and immutability of information along with its business logic. However, little work seems to be available on utilizing Blockchain for supporting BPR in a systematic and rational way, potentially leading to disappointments and even doubts on the utility of Blockchain. In this paper, as ongoing research, we outline Fides - a framework for exploiting Blockchain towards enhancing the trustworthiness for BPR. Fides supports diagnosing trust issues with AS-IS business processes, exploring TO-BE business process alternatives using Blockchain, and selecting among the alternatives. A business process of a retail chain for a food supply chain is used throughout the paper to illustrate Fides concepts.", "title": "" }, { "docid": "562ec4c39f0d059fbb9159ecdecd0358", "text": "In this paper, we propose the factorized hidden layer FHL approach to adapt the deep neural network DNN acoustic models for automatic speech recognition ASR. FHL aims at modeling speaker dependent SD hidden layers by representing an SD affine transformation as a linear combination of bases. The combination weights are low-dimensional speaker parameters that can be initialized using speaker representations like i-vectors and then reliably refined in an unsupervised adaptation fashion. Therefore, our method provides an efficient way to perform both adaptive training and test-time adaptation. 
Experimental results have shown that the FHL adaptation improves the ASR performance significantly, compared to the standard DNN models, as well as other state-of-the-art DNN adaptation approaches, such as training with the speaker-normalized CMLLR features, speaker-aware training using i-vector and learning hidden unit contributions LHUC. For Aurora 4, FHL achieves 3.8% and 2.3% absolute improvements over the standard DNNs trained on the LDA + STC and CMLLR features, respectively. It also achieves 1.7% absolute performance improvement over a system that combines the i-vector adaptive training with LHUC adaptation. For the AMI dataset, FHL achieved 1.4% and 1.9% absolute improvements over the sequence-trained CMLLR baseline systems, for the IHM and SDM tasks, respectively.", "title": "" }, { "docid": "8d9cae70a7334afcd558c0fa850d551a", "text": "A popular approach to solving large probabilistic systems relies on aggregating states based on a measure of similarity. Many approaches in the literature are heuristic. A number of recent methods rely instead on metrics based on the notion of bisimulation, or behavioral equivalence between states (Givan et al., 2003; Ferns et al., 2004). An integral component of such metrics is the Kantorovich metric between probability distributions. However, while this metric enables many satisfying theoretical properties, it is costly to compute in practice. In this paper, we use techniques from network optimization and statistical sampling to overcome this problem. We obtain in this manner a variety of distance functions for MDP state aggregation that differ in the tradeoff between time and space complexity, as well as the quality of the aggregation. 
We provide an empirical evaluation of these tradeoffs.", "title": "" }, { "docid": "e22564e88d82b91e266b0a118bd2ec91", "text": "Non-lethal dose of 70% ethanol extract of the Nerium oleander dry leaves (1000 mg/kg body weight) was subcutaneously injected into male and female mice once a week for 9 weeks (total 10 doses). One day after the last injection, final body weight gain (relative percentage to the initial body weight) had a tendency, in both males and females, towards depression suggesting a metabolic insult at other sites than those involved in myocardial function. Multiple exposure of the mice to the specified dose failed to express a significant influence on blood parameters (WBC, RBC, Hb, HCT, PLT) as well as myocardium. On the other hand, a lethal dose (4000 mg/kg body weight) was capable of inducing progressive changes in myocardial electrical activity ending up in cardiac arrest. The electrocardiogram abnormalities could be brought about by the expected Na+, K(+)-ATPase inhibition by the cardiac glycosides (cardenolides) content of the lethal dose.", "title": "" }, { "docid": "3b64e99ea608819fc4bf06a6850a5aff", "text": "Cloud computing is one of the most useful technology that is been widely used all over the world. It generally provides on demand IT services and products. Virtualization plays a major role in cloud computing as it provides a virtual storage and computing services to the cloud clients which is only possible through virtualization. Cloud computing is a new business computing paradigm that is based on the concepts of virtualization, multi-tenancy, and shared infrastructure. This paper discusses about cloud computing, how virtualization is done in cloud computing, virtualization basic architecture, its advantages and effects [1].", "title": "" } ]
scidocsrr
52c1d35a8fd58fe024f3b5b19174c2ce
Blockchain And Its Applications
[ { "docid": "469c17aa0db2c70394f081a9a7c09be5", "text": "The potential of blockchain technology has received attention in the area of FinTech — the combination of finance and technology. Blockchain technology was first introduced as the technology behind the Bitcoin decentralized virtual currency, but there is the expectation that its characteristics of accurate and irreversible data transfer in a decentralized P2P network could make other applications possible. Although a precise definition of blockchain technology has not yet been given, it is important to consider how to classify different blockchain systems in order to better understand their potential and limitations. The goal of this paper is to add to the discussion on blockchain technology by proposing a classification based on two dimensions external to the system: (1) existence of an authority (without an authority and under an authority) and (2) incentive to participate in the blockchain (market-based and non-market-based). The combination of these elements results in four types of blockchains. We define these dimensions and describe the characteristics of the blockchain systems belonging to each classification.", "title": "" }, { "docid": "4deea3312fe396f81919b07462551682", "text": "The purpose of this paper is to explore applications of blockchain technology related to the 4th Industrial Revolution (Industry 4.0) and to present an example where blockchain is employed to facilitate machine-to-machine (M2M) interactions and establish a M2M electricity market in the context of the chemical industry. The presented scenario includes two electricity producers and one electricity consumer trading with each other over a blockchain. The producers publish exchange offers of energy (in kWh) for currency (in USD) in a data stream. The consumer reads the offers, analyses them and attempts to satisfy its energy demand at a minimum cost. 
When an offer is accepted it is executed as an atomic exchange (multiple simultaneous transactions). Additionally, this paper describes and discusses the research and application landscape of blockchain technology in relation to the Industry 4.0. It concludes that this technology has significant under-researched potential to support and enhance the efficiency gains of the revolution and identifies areas for future research. Producer 2 • Issue energy • Post purchase offers (as atomic transactions) Consumer • Look through the posted offers • Choose cheapest and satisfy its own demand Blockchain Stream Published offers are visible here Offer sent", "title": "" } ]
[ { "docid": "98d998eae1fa7a00b73dcff0251f0bbd", "text": "Imagery texts are usually organized as a hierarchy of several visual elements, i.e. characters, words, text lines and text blocks. Among these elements, character is the most basic one for various languages such as Western, Chinese, Japanese, mathematical expression and etc. It is natural and convenient to construct a common text detection engine based on character detectors. However, training character detectors requires a vast of location annotated characters, which are expensive to obtain. Actually, the existing real text datasets are mostly annotated in word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either in tight quadrangles or the more loose bounding boxes, for character detector training. When applied in scene text detection, we are thus able to train a robust character detector by exploiting word annotations in the rich large-scale real scene text datasets, e.g. ICDAR15 [19] and COCO-text [39]. The character detector acts as a key role in the pipeline of our text detection engine. It achieves the state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline by various scenarios, including deformed text detection and math expression recognition.", "title": "" }, { "docid": "d6ca38ccad91c0c2c51ba3dd5be454b2", "text": "Dirty data is a serious problem for businesses leading to incorrect decision making, inefficient daily operations, and ultimately wasting both time and money. Dirty data often arises when domain constraints and business rules, meant to preserve data consistency and accuracy, are enforced incompletely or not at all in application code. In this work, we propose a new data-driven tool that can be used within an organization’s data quality management process to suggest possible rules, and to identify conformant and non-conformant records. 
Data quality rules are known to be contextual, so we focus on the discovery of context-dependent rules. Specifically, we search for conditional functional dependencies (CFDs), that is, functional dependencies that hold only over a portion of the data. The output of our tool is a set of functional dependencies together with the context in which they hold (for example, a rule that states for CS graduate courses, the course number and term functionally determines the room and instructor). Since the input to our tool will likely be a dirty database, we also search for CFDs that almost hold. We return these rules together with the non-conformant records (as these are potentially dirty records). We present effective algorithms for discovering CFDs and dirty values in a data instance. Our discovery algorithm searches for minimal CFDs among the data values and prunes redundant candidates. No universal objective measures of data quality or data quality rules are known. Hence, to avoid returning an unnecessarily large number of CFDs and only those that are most interesting, we evaluate a set of interest metrics and present comparative results using real datasets. We also present an experimental study showing the scalability of our techniques.", "title": "" }, { "docid": "d65376ed544623a927a868b35394409e", "text": "The balance compensating techniques for asymmetric Marchand balun are presented in this letter. The amplitude and phase difference are characterized explicitly by S21 and S31, from which the factors responsible for the balance compensating are determined. Finally, two asymmetric Marchand baluns, which have normal and enhanced balance compensation, respectively, are designed and fabricated in a 0.18 μm CMOS technology for demonstration. 
The simulation and measurement results show that the proposed balance compensating techniques are valid in a very wide frequency range up to millimeter-wave (MMW) band.", "title": "" }, { "docid": "99c29c6cacb623a857817c412d6d9515", "text": "Considering the rapid growth of China’s elderly rural population, establishing both an adequate and a financially sustainable rural pension system is a major challenge. Focusing on financial sustainability, this article defines this concept of financial sustainability before constructing sound actuarial models for China’s rural pension system. Based on these models and statistical data, the analysis finds that the rural pension funding gap should rise from 97.80 billion Yuan in 2014 to 3062.31 billion Yuan in 2049, which represents an annual growth rate of 10.34%. This implies that, as it stands, the rural pension system in China is not financially sustainable. Finally, the article explains how this problem could be fixed through policy recommendations based on recent international experiences.", "title": "" }, { "docid": "b8fa649e8b5a60a05aad257a0a364b51", "text": "This work intends to build a Game Mechanics Ontology based on the mechanics category presented in BoardGameGeek.com vis à vis the formal concepts from the MDA framework. The 51 concepts presented in BoardGameGeek (BGG) as game mechanics are analyzed and arranged in a systemic way in order to build a domain sub-ontology in which the root concept is the mechanics as defined in MDA. The relations between the terms were built from its available descriptions as well as from the authors’ previous experiences. Our purpose is to show that a set of terms commonly accepted by players can lead us to better understand how players perceive the games components that are closer to the designer. The ontology proposed in this paper is not exhaustive. 
The intent of this work is to supply a tool to game designers, scholars, and others that see game artifacts as study objects or are interested in creating games. However, although it can be used as a starting point for games construction or study, the proposed Game Mechanics Ontology should be seen as the seed of a domain ontology encompassing game mechanics in general.", "title": "" }, { "docid": "117c66505964344d9c350a4e57a4a936", "text": "Sorting is a key kernel in numerous big data application including database operations, graphs and text analytics. Due to low control overhead, parallel bitonic sorting networks are usually employed for hardware implementations to accelerate sorting. Although a typical implementation of merge sort network can lead to low latency and small memory usage, it suffers from low throughput due to the lack of parallelism in the final stage. We analyze a pipelined merge sort network, showing its theoretical limits in terms of latency, memory and, throughput. To increase the throughput, we propose a merge sort based hybrid design where the final few stages in the merge sort network are replaced with “folded” bitonic merge networks. In these “folded” networks, all the interconnection patterns are realized by streaming permutation networks (SPN). We present a theoretical analysis to quantify latency, memory and throughput of our proposed design. Performance evaluations are performed by experiments on Xilinx Virtex-7 FPGA with post place-androute results. We demonstrate that our implementation achieves a throughput close to 10 GBps, outperforming state-of-the-art implementation of sorting on the same hardware by 1.2x, while preserving lower latency and higher memory efficiency.", "title": "" }, { "docid": "28fa91e4476522f895a6874ebc967cfa", "text": "The lifetime of micro electro–thermo–mechanical actuators with complex electro–thermo–mechanical coupling mechanisms can be decreased significantly due to unexpected failure events. 
Even more serious is the fact that various failures are tightly coupled due to micro-size and multi-physics effects. Interrelation between performance and potential failures should be established to predict reliability of actuators and improve their design. Thus, a multiphysics modeling approach is proposed to evaluate such interactive effects of failure mechanisms on actuators, where potential failures are pre-analyzed via FMMEA (Failure Modes, Mechanisms, and Effects Analysis) tool for guiding the electro–thermo–mechanical-reliability modeling process. Peak values of temperature, thermal stresses/strains and tip deflection are estimated as indicators for various failure modes and factors (e.g. residual stresses, thermal fatigue, electrical overstress, plastic deformation and parameter variations). Compared with analytical solutions and experimental data, the obtained simulation results were found suitable for coupled performance and reliability analysis of micro actuators and assessment of their design.", "title": "" }, { "docid": "e502cdbbbf557c8365b0d4b69745e225", "text": "This half-day hands-on studio will teach how to design and develop effective interfaces for head mounted and wrist worn wearable computers through the application of user-centered design principles. Attendees will learn gain the knowledge and tools needed to rapidly develop prototype applications, and also complete a hands-on design task. They will also learn good design guidelines for wearable systems and how to apply those guidelines. A variety of tools will be used that do not require any hardware or software experience, many of which are free and/or open source. Attendees will also be provided with material that they can use to continue their learning after the studio is over.", "title": "" }, { "docid": "7e004a7b6a39ff29176dd19a07c15448", "text": "All humans will become presbyopic as part of the aging process where the eye losses the ability to focus at different depths. 
Progressive additive lenses (PALs) allow a person to focus on objects located at near versus far by combining lenses of different strengths within the same spectacle. However, it is unknown why some patients easily adapt to wearing these lenses while others struggle and complain of vertigo, swim, and nausea as well as experience difficulties with balance. Sixteen presbyopes (nine who adapted to PALs and seven who had tried but could not adapt) participated in this study. This research investigated vergence dynamics and its adaptation using a short-term motor learning experiment to assess the ability to adapt. Vergence dynamics were on average faster and the ability to change vergence dynamics was also greater for presbyopes who adapted to progressive lenses compared to those who could not. Data suggest that vergence dynamics and its adaptation may be used to predict which patients will easily adapt to progressive lenses and discern those who will have difficulty.", "title": "" }, { "docid": "6f9afe3cbf5cc675c6b4e96ee2ccfa76", "text": "As more firms begin to collect (and seek value from) richer customer-level datasets, a focus on the emerging concept of customer-base analysis is becoming increasingly common and critical. Such analyses include forward-looking projections ranging from aggregate-level sales trajectories to individual-level conditional expectations (which, in turn, can be used to derive estimates of customer lifetime value). We provide an overview of a class of parsimonious models (called probability models) that are well-suited to meet these rising challenges. We first present a taxonomy that captures some of the key distinctions across different kinds of business settings and customer relationships, and identify some of the unique modeling and measurement issues that arise across them. 
We then provide deeper coverage of these modeling issues, first for noncontractual settings (i.e., situations in which customer “death” is unobservable), then contractual ones (i.e., situations in which customer “death” can be observed). We review recent literature in these areas, highlighting substantive insights that arise from the research as well as the methods used to capture them. We focus on practical applications that use appropriately chosen data summaries (such as recency and frequency) and rely on commonly available software packages (such as Microsoft Excel). © 2009 Direct Marketing Educational Foundation, Inc. Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "213313382d4e5d24a065d551012887ed", "text": "The authors present full-wave simulations and experimental results of propagation of electromagnetic waves in shallow seawaters. Transmitter and receiver antennas are ten-turn loops placed on the seabed. Some propagation frameworks are presented and simulated. Finally, simulation results are compared with experimental ones.", "title": "" }, { "docid": "b02dcd4d78f87d8ac53414f0afd8604b", "text": "This paper presents an ultra-low-power event-driven analog-to-digital converter (ADC) with real-time QRS detection for wearable electrocardiogram (ECG) sensors in wireless body sensor network (WBSN) applications. Two QRS detection algorithms, pulse-triggered (PUT) and time-assisted PUT (t-PUT), are proposed based on the level-crossing events generated from the ADC. The PUT detector achieves 97.63% sensitivity and 97.33% positive prediction in simulation on the MIT-BIH Arrhythmia Database. The t-PUT improves the sensitivity and positive prediction to 97.76% and 98.59% respectively. 
Fabricated in 0.13 μm CMOS technology, the ADC with QRS detector consumes only 220 nW measured under a 300 mV power supply, making it the first nanowatt compact analog-to-information (A2I) converter with an embedded QRS detector.", "title": "" }, { "docid": "caab00ae6fcae59258ad4e45f787db64", "text": "Traditional bullying has received considerable research attention but the emerging phenomenon of cyber-bullying much less so. Our study aims to investigate environmental and psychological factors associated with traditional and cyber-bullying. In a school-based 2-year prospective survey, information was collected on 1,344 children aged 10 including bullying behavior/experience, depression, anxiety, coping strategies, self-esteem, and psychopathology. Parents reported demographic data, general health, and attention-deficit hyperactivity disorder (ADHD) symptoms. These were investigated in relation to traditional and cyber-bullying perpetration and victimization at age 12. Male gender and depressive symptoms were associated with all types of bullying behavior and experience. Living with a single parent was associated with perpetration of traditional bullying while higher ADHD symptoms were associated with victimization from this. Lower academic achievement and lower self-esteem were associated with cyber-bullying perpetration and victimization, and anxiety symptoms with cyber-bullying perpetration. After adjustment, previous bullying perpetration was associated with victimization from cyber-bullying but not other outcomes. Cyber-bullying has differences in predictors from traditional bullying and intervention programmes need to take these into consideration.", "title": "" }, { "docid": "e5aed574fbe4560a794cf8b77fb84192", "text": "Warping is one of the basic image processing techniques. Directly applying existing monocular image warping techniques to stereoscopic images is problematic as it often introduces vertical disparities and damages the original disparity distribution. 
In this paper, we show that these problems can be solved by appropriately warping both the disparity map and the two images of a stereoscopic image. We accordingly develop a technique for extending existing image warping algorithms to stereoscopic images. This technique divides stereoscopic image warping into three steps. Our method first applies the user-specified warping to one of the two images. Our method then computes the target disparity map according to the user-specified warping. The target disparity map is optimized to preserve the perceived 3D shape of image content after image warping. Our method finally warps the other image using a spatially-varying warping method guided by the target disparity map. Our experiments show that our technique enables existing warping methods to be effectively applied to stereoscopic images, ranging from parametric global warping to non-parametric spatially-varying warping.", "title": "" }, { "docid": "22bb6af742b845dea702453b6b14ef3a", "text": "Errors are prevalent in data sequences, such as GPS trajectories or sensor readings. Existing methods on cleaning sequential data employ a constraint on value changing speeds and perform constraint-based repairing. While such speed constraints are effective in identifying large spike errors, the small errors that do not significantly deviate from the truth and indeed satisfy the speed constraints can hardly be identified and repaired. To handle such small errors, in this paper, we propose a statistically based cleaning method. Rather than declaring a broad constraint of max/min speeds, we model the probability distribution of speed changes. The repairing problem is thus to maximize the likelihood of the sequence w.r.t. the probability of speed changes. We formalize the likelihood-based cleaning problem, show its NP-hardness, devise exact algorithms, and propose several approximate/heuristic methods to trade off effectiveness for efficiency. 
Experiments on real data sets (in various applications) demonstrate the superiority of our proposal.", "title": "" }, { "docid": "cc8a4744f05d5f46feacaff27b91a86c", "text": "In the recent past, several sampling-based algorithms have been proposed to compute trajectories that are collision-free and dynamically-feasible. However, the outputs of such algorithms are notoriously jagged. In this paper, by focusing on robots with car-like dynamics, we present a fast and simple heuristic algorithm, named Convex Elastic Smoothing (CES) algorithm, for trajectory smoothing and speed optimization. The CES algorithm is inspired by earlier work on elastic band planning and iteratively performs shape and speed optimization. The key feature of the algorithm is that both optimization problems can be solved via convex programming, making CES particularly fast. A range of numerical experiments show that the CES algorithm returns high-quality solutions in a matter of a few hundred milliseconds and hence appears amenable to a real-time implementation.", "title": "" }, { "docid": "f44d3512cd8658f824b0ba0ea5a69e4a", "text": "Customer retention is a major issue for various service-based organizations, particularly the telecom industry, wherein predictive models for observing the behavior of customers are one of the great instruments in the customer retention process and in inferring the future behavior of customers. However, the performances of predictive models are greatly affected when the real-world data set is highly imbalanced. A data set is called imbalanced if the sample size from one class is very much smaller or larger than the other classes. The most commonly used technique is over/under sampling for handling the class-imbalance problem (CIP) in various domains. 
In this paper, we survey six well-known sampling techniques and compare the performances of these key techniques, i.e., mega-trend diffusion function (MTDF), synthetic minority oversampling technique, adaptive synthetic sampling approach, couples top-N reverse k-nearest neighbor, majority weighted minority oversampling technique, and immune centroids oversampling technique. Moreover, this paper also presents the evaluation of four rules-generation algorithms (the learning from example module, version 2 (LEM2), covering, exhaustive, and genetic algorithms) using publicly available data sets. The empirical results demonstrate that MTDF and rules-generation based on genetic algorithms yielded the best overall predictive performance as compared with the rest of the evaluated oversampling methods and rule-generation algorithms.", "title": "" }, { "docid": "3e9de22ac9f81cf3233950a0d72ef15a", "text": "Increasing of head rise (HR) and decreasing of head loss (HL), simultaneously, are important purposes in the design of different types of fans. Therefore, a multi-objective optimization process is more applicable for the design of such turbo machines. In the present study, multi-objective optimization of Forward-Curved (FC) blades centrifugal fans is performed in three steps. In the first step, head rise (HR) and head loss (HL) in a set of FC centrifugal fans are numerically investigated using commercial software NUMECA. Two meta-models based on the evolved group method of data handling (GMDH) type neural networks are obtained, at the second step, for modeling of HR and HL with respect to geometrical design variables. Finally, using the obtained polynomial neural networks, multi-objective genetic algorithms are used for Pareto based optimization of FC centrifugal fans considering two conflicting objectives, HR and HL. 
It is shown that some interesting and important relationships, serving as useful optimal design principles for the performance of FC fans, can be discovered by Pareto-based multi-objective optimization of the obtained polynomial meta-models representing their HR and HL characteristics. Such important optimal principles would not have been obtained without the use of both GMDH-type neural network modeling and the Pareto optimization approach.", "title": "" }, { "docid": "bddf8420c2dd67dd5be10556088bf653", "text": "The Hadoop Distributed File System (HDFS) is a distributed storage system that stores large-scale data sets reliably and streams those data sets to applications at high bandwidth. HDFS provides high performance, reliability and availability by replicating data, typically three copies of every data block. The data in HDFS changes in popularity over time. To get better performance and higher disk utilization, the replication policy of HDFS should be elastic and adapt to data popularity. In this paper, we describe ERMS, an elastic replication management system for HDFS. ERMS provides an active/standby storage model for HDFS. It utilizes a complex event processing engine to distinguish real-time data types, and then dynamically increases extra replicas for hot data, cleans up these extra replicas when the data cool down, and uses erasure codes for cold data. ERMS also introduces a replica placement strategy for the extra replicas of hot data and erasure coding parities. The experiments show that ERMS effectively improves the reliability and performance of HDFS and reduces storage overhead.", "title": "" }, { "docid": "40beda0d1e99f4cc5a15a3f7f6438ede", "text": "One of the major challenges with electric shipboard power systems (SPS) is preserving the survivability of the system under fault situations. Some minor faults in SPS can result in catastrophic consequences. 
Therefore, it is essential to investigate available fault management techniques for SPS applications that can enhance SPS robustness and reliability. Many recent studies in this area take different approaches to address fault tolerance in SPSs. This paper provides an overview of the concepts and methodologies that are utilized to deal with faults in the electric SPS. First, a taxonomy of the types of faults and their sources in SPS is presented; then, the methods that are used to detect, identify, isolate, and manage faults are reviewed. Furthermore, common techniques for designing a fault management system in SPS are analyzed and compared. This paper also highlights several possible future research directions.", "title": "" } ]
scidocsrr
1089ad0b6e4711d848b904c08ad9bc56
THE FAILURE OF E-GOVERNMENT IN DEVELOPING COUNTRIES: A LITERATURE REVIEW
[ { "docid": "310aa30e2dd2b71c09780f7984a3663c", "text": "E-governance is more than just a government website on the Internet. The strategic objective of e-governance is to support and simplify governance for all parties: government, citizens and businesses. The use of ICTs can connect all three parties and support processes and activities. In other words, in e-governance electronic means support and stimulate good governance. Therefore, the objectives of e-governance are similar to the objectives of good governance. Good governance can be seen as an exercise of economic, political, and administrative authority to better manage the affairs of a country at all levels. It is not difficult for people in developed countries to imagine a situation in which all interaction with government can be done through one counter 24 hours a day, 7 days a week, without waiting in lines. However, to achieve this same level of efficiency and flexibility for developing countries is going to be difficult. The experience in developed countries shows that this is possible if governments are willing to decentralize responsibilities and processes, and if they start to use electronic means. This paper is going to examine the legal and infrastructure issues related to e-governance from the perspective of developing countries. In particular, it will examine how far the developing countries have been successful in providing a legal framework.", "title": "" } ]
[ { "docid": "70242cb6aee415682c03da6bfd033845", "text": "This paper presents a class of linear predictors for nonlinear controlled dynamical systems. The basic idea is to lift (or embed) the nonlinear dynamics into a higher dimensional space where its evolution is approximately linear. In an uncontrolled setting, this procedure amounts to numerical approximations of the Koopman operator associated to the nonlinear dynamics. In this work, we extend the Koopman operator to controlled dynamical systems and apply the Extended Dynamic Mode Decomposition (EDMD) to compute a finite-dimensional approximation of the operator in such a way that this approximation has the form of a linear controlled dynamical system. In numerical examples, the linear predictors obtained in this way exhibit a performance superior to existing linear predictors such as those based on local linearization or the so called Carleman linearization. Importantly, the procedure to construct these linear predictors is completely data-driven and extremely simple – it boils down to a nonlinear transformation of the data (the lifting) and a linear least squares problem in the lifted space that can be readily solved for large data sets. These linear predictors can be readily used to design controllers for the nonlinear dynamical system using linear controller design methodologies. We focus in particular on model predictive control (MPC) and show that MPC controllers designed in this way enjoy computational complexity of the underlying optimization problem comparable to that of MPC for a linear dynamical system with the same number of control inputs and the same dimension of the state-space. Importantly, linear inequality constraints on the state and control inputs as well as nonlinear constraints on the state can be imposed in a linear fashion in the proposed MPC scheme. Similarly, cost functions nonlinear in the state variable can be handled in a linear fashion. 
We treat both the full-state measurement case and the input-output case, as well as systems with disturbances/noise. Numerical examples (including a high-dimensional nonlinear PDE control) demonstrate the approach, with the source code available online.", "title": "" }, { "docid": "ced13f6c3e904f5bd833e2f2621ae5e2", "text": "A growing amount of research focuses on learning in group settings and more specifically on learning in computer-supported collaborative learning (CSCL) settings. Studies on Western students indicate that online collaboration enhances student learning achievement; however, few empirical studies have examined student satisfaction, performance, and knowledge construction through online collaboration from a cross-cultural perspective. This study examines satisfaction, performance, and knowledge construction via online group discussions of students in two different cultural contexts. Students were first-year university students majoring in educational sciences at a Flemish university and a Chinese university. Differences and similarities of the two groups of students with regard to satisfaction, learning process, and achievement were analyzed.", "title": "" }, { "docid": "3a0da20211697fbcce3493aff795556c", "text": "OBJECTIVES\nWe studied whether park size, number of features in the park, and distance to a park from participants' homes were related to a park being used for physical activity.\n\n\nMETHODS\nWe collected observational data on 28 specific features from 33 parks. Adult residents in surrounding areas (n=380) completed 7-day physical activity logs that included the location of their activities. 
We used logistic regression to examine the relative importance of park size, features, and distance to participants' homes in predicting whether a park was used for physical activity, with control for perceived neighborhood safety and aesthetics.\n\n\nRESULTS\nParks with more features were more likely to be used for physical activity; size and distance were not significant predictors. Park facilities were more important than were park amenities. Of the park facilities, trails had the strongest relationship with park use for physical activity.\n\n\nCONCLUSIONS\nSpecific park features may have significant implications for park-based physical activity. Future research should explore these factors in diverse neighborhoods and diverse parks among both younger and older populations.", "title": "" }, { "docid": "102ed07783d46a8ebadcad4b30ccb3c8", "text": "Ongoing innovations in recurrent neural network architectures have provided a steady influx of apparently state-of-the-art results on language modelling benchmarks. However, these have been evaluated using differing codebases and limited computational resources, which represent uncontrolled sources of experimental variation. We reevaluate several popular architectures and regularisation methods with large-scale automatic black-box hyperparameter tuning and arrive at the somewhat surprising conclusion that standard LSTM architectures, when properly regularised, outperform more recent models. We establish a new state of the art on the Penn Treebank and Wikitext-2 corpora, as well as strong baselines on the Hutter Prize dataset.", "title": "" }, { "docid": "99206cfadd7aeb90f4cebaa1edebc0e1", "text": "An energy-efficient gait planning (EEGP) and control system is established for biped robots with three-mass inverted pendulum mode (3MIPM), which utilizes both vertical body motion (VBM) and allowable zero-moment-point (ZMP) region (AZR). 
Given a distance to be traveled, we designed a new online gait synthesis algorithm to construct a complete walking cycle, i.e., a starting step, multiple cyclic steps, and a stopping step, in which: 1) ZMP was fully manipulated within AZR; and 2) vertical body movement was allowed to relieve knee bending. Moreover, gait parameter optimization is effectively performed to determine the optimal set of gait parameters, i.e., average body height and amplitude of VBM, number of steps, and average walking speed, which minimizes energy consumption of actuation motors for leg joints under practical constraints, i.e., geometrical constraints, friction force limit, and yawing moment limit. Various simulations were conducted to identify the effectiveness of the proposed method and verify energy-saving performance for various ZMP regions. Our control system was implemented and tested on the humanoid robot DARwIn-OP.", "title": "" }, { "docid": "9fc2d92c42400a45cb7bf6c998dc9236", "text": "This paper presents a new probabilistic model of information retrieval. The most important modeling assumption made is that documents and queries are defined by an ordered sequence of single terms. This assumption is not made in well-known existing models of information retrieval, but is essential in the field of statistical natural language processing. Advances already made in statistical natural language processing will be used in this paper to formulate a probabilistic justification for using tf×idf term weighting. The paper shows that the new probabilistic interpretation of tf×idf term weighting might lead to better understanding of statistical ranking mechanisms, for example by explaining how they relate to coordination level ranking. 
A pilot experiment on the TREC collection shows that the linguistically motivated weighting algorithm outperforms the popular BM25 weighting algorithm.", "title": "" }, { "docid": "c1ba049befffa94e358555056df15cc2", "text": "People design what they say specifically for their conversational partners, and they adapt to their partners over the course of a conversation. A comparison of keyboard conversations involving a simulated computer partner (as in a natural language interface) with those involving a human partner (as in teleconferencing) yielded striking differences and some equally striking similarities. For instance, there were significantly fewer acknowledgments in human/computer dialogue than in human/human. However, regardless of the conversational partner, people expected connectedness across conversational turns. In addition, the style of a partner's response shaped what people subsequently typed. These results suggest some issues that need to be addressed before a natural language computer interface will be able to hold up its end of a conversation.", "title": "" }, { "docid": "277bdeccc25baa31ba222ff80a341ef2", "text": "Teaching by examples and cases is widely used to promote learning, but it varies widely in its effectiveness. The authors test an adaptation to case-based learning that facilitates abstracting problemsolving schemas from examples and using them to solve further problems: analogical encoding, or learning by drawing a comparison across examples. In 3 studies, the authors examined schema abstraction and transfer among novices learning negotiation strategies. Experiment 1 showed a benefit for analogical learning relative to no case study. Experiment 2 showed a marked advantage for comparing two cases over studying the 2 cases separately. 
Experiment 3 showed that increasing the degree of comparison support increased the rate of transfer in a face-to-face dynamic negotiation exercise.", "title": "" }, { "docid": "a0c6b1817a08d1be63dff9664852a6b4", "text": "Despite years of HCI research on digital technology in museums, it is still unclear how different interactions impact on visitors' experience. A comparative evaluation of smart replicas, phone app and smart cards looked at the personal preferences, behavioural change, and the appeal of mobiles in museums. 76 participants used all three interaction modes and gave their opinions in a questionnaire; participants' interaction was also observed. The results show the phone is the most disliked interaction mode while tangible interaction (smart card and replica combined) is the most liked. Preference for the phone favours mobility to the detriment of engagement with the exhibition. Different behaviours when interacting with the phone or the tangibles were observed. The personal visiting style appeared to be only marginally affected by the device. Visitors also expect museums to provide the phones against the current trend of developing apps in a \"bring your own device\" approach.", "title": "" }, { "docid": "d9df98fbd7281b67347df0f2643323fa", "text": "Predefined categories can be assigned to natural language text using text classification. In the “bag-of-words” representation, a document is represented by values indicating how frequently each word appears in the document. But large documents may face problems because they contain irrelevant or redundant information. This paper explores the effect of other types of values, which express the distribution of a word in the document. These values are called distributional features. All features are calculated by a tfidf-style equation and these features are combined with machine learning techniques. 
Term frequency is one of the major factors for distributional features; it holds weighted item sets. When the need is to minimize a certain score function, discovering rare data correlations is more interesting than mining frequent ones. This paper tackles the issue of discovering rare and weighted item sets, i.e., the infrequent weighted item set mining problem. The classifier which gives the most accurate result is selected for categorization. Experiments show that the distributional features are useful for text categorization.", "title": "" }, { "docid": "46f646c82f30eae98142c83045176353", "text": "In this article, the authors present a psychodynamically oriented psychotherapy approach for posttraumatic stress disorder (PTSD) related to childhood abuse. This neurobiologically informed, phase-oriented treatment approach, which has been developed in Germany during the past 20 years, takes into account the broad comorbidity and the large degree of ego-function impairment typically found in these patients. Based on a psychodynamic relationship orientation, this treatment integrates a variety of trauma-specific imaginative and resource-oriented techniques. The approach places major emphasis on the prevention of vicarious traumatization. The authors are presently planning to test the approach in a randomized controlled trial aimed at strengthening the evidence base for psychodynamic psychotherapy in PTSD.", "title": "" }, { "docid": "87c793be992e5d25c8422011bd52be12", "text": "A major challenge in real-world feature matching problems is to tolerate the numerous outliers arising in typical visual tasks. Variations in object appearance, shape, and structure within the same object class make it harder to distinguish inliers from outliers due to clutter. In this paper, we propose a max-pooling approach to graph matching, which is not only resilient to deformations but also remarkably tolerant to outliers. 
The proposed algorithm evaluates each candidate match using its most promising neighbors, and gradually propagates the corresponding scores to update the neighbors. As final output, it assigns a reliable score to each match together with its supporting neighbors, thus providing contextual information for further verification. We demonstrate the robustness and utility of our method with synthetic and real image experiments.", "title": "" }, { "docid": "d7108ba99aaa9231d926a52617baa712", "text": "In this paper, an ultra-compact single-chip solar energy harvesting IC using an on-chip solar cell for biomedical implant applications is presented. By employing an on-chip charge pump with parallel-connected photodiodes, a 3.5× efficiency improvement can be achieved when compared with the conventional stacked photodiode approach to boost the harvested voltage while preserving a single-chip solution. A photodiode-assisted dual startup circuit (PDSC) is also proposed to improve the area efficiency and increase the startup speed by 77%. By employing an auxiliary charge pump (AQP) using zero threshold voltage (ZVT) devices in parallel with the main charge pump, a low startup voltage of 0.25 V is obtained while minimizing the reversion loss. A 4 Vin gate drive voltage is utilized to reduce the conduction loss. Systematic charge pump and solar cell area optimization is also introduced to improve the energy harvesting efficiency. The proposed system is implemented in a standard 0.18-μm CMOS technology and occupies an active area of 1.54 mm². 
Measurement results show that the on-chip charge pump can achieve a maximum efficiency of 67%. With an incident power of 1.22 mW/cm² from a halogen light source, the proposed energy harvesting IC can deliver an output power of 1.65 μW at 64% charge pump efficiency. The chip prototype is also verified using an in-vitro experiment.", "title": "" }, { "docid": "e3eae34f1ad48264f5b5913a65bf1247", "text": "One of the important types of information on the Web is the opinions expressed in the user-generated content, e.g., customer reviews of products, forum posts, and blogs. In this paper, we focus on customer reviews of products. In particular, we study the problem of determining the semantic orientations (positive, negative or neutral) of opinions expressed on product features in reviews. 
This problem has many applications, e.g., opinion mining, summarization and search. Most existing techniques utilize a list of opinion (bearing) words (also called opinion lexicon) for the purpose. Opinion words are words that express desirable (e.g., great, amazing, etc.) or undesirable (e.g., bad, poor, etc.) states. These approaches, however, all have some major shortcomings. In this paper, we propose a holistic lexicon-based approach to solving the problem by exploiting external evidence and linguistic conventions of natural language expressions. This approach allows the system to handle opinion words that are context-dependent, which cause major difficulties for existing algorithms. It also deals with many special words, phrases and language constructs which have impacts on opinions based on their linguistic patterns. It also has an effective function for aggregating multiple conflicting opinion words in a sentence. A system, called Opinion Observer, based on the proposed technique has been implemented. Experimental results using a benchmark product review data set and some additional reviews show that the proposed technique is highly effective. It outperforms existing methods significantly.", "title": "" }, { "docid": "d40aa76e76c44da4c6237f654dcdab45", "text": "The flipped classroom pedagogy has achieved significant mention in academic circles in recent years. \"Flipping\" involves the reinvention of a traditional course so that students engage with learning materials via recorded lectures and interactive exercises prior to attending class and then use class time for more interactive activities. Proper implementation of a flipped classroom is difficult to gauge, but combines successful techniques for distance education with constructivist learning theory in the classroom. 
While flipped classrooms are not a novel concept, technological advances and increased comfort with distance learning have made the tools to produce and consume course materials more pervasive. Flipped classroom experiments have had both positive and less-positive results and are generally measured by a significant improvement in learning outcomes. This study, however, analyzes the opinions of students in a flipped sophomore-level information technology course by using a combination of surveys and reflective statements. The author demonstrates that at the outset students are new - and somewhat receptive - to the concept of the flipped classroom. By the conclusion of the course satisfaction with the pedagogy is significant. Finally, student feedback is provided in an effort to inform instructors in the development of their own flipped classrooms.", "title": "" }, { "docid": "5838d6a17e2223c6421da33d5985edd1", "text": "In this article, I provide commentary on the Rudd et al. (2009) article advocating thorough informed consent with suicidal clients. I examine the Rudd et al. recommendations in light of their previous empirical-research and clinical-practice articles on suicidality, and from the perspective of clinical practice with suicidal clients in university counseling center settings. I conclude that thorough informed consent is a clinical intervention that is still in preliminary stages of development, necessitating empirical research and clinical training before actual implementation as an ethical clinical intervention. (PsycINFO Database Record (c) 2010 APA, all rights reserved).", "title": "" }, { "docid": "a4cfe72cae5bdaed110299d652e60a6f", "text": "Hoffa's (infrapatellar) fat pad (HFP) is one of the knee fat pads interposed between the joint capsule and the synovium. Located posterior to patellar tendon and anterior to the capsule, the HFP is richly innervated and, therefore, one of the sources of anterior knee pain. 
Repetitive local microtraumas, impingement, and surgery causing local bleeding and inflammation are the most frequent causes of HFP pain and can lead to a variety of arthrofibrotic lesions. In addition, the HFP may be secondarily involved to menisci and ligaments disorders, injuries of the patellar tendon and synovial disorders. Patients with oedema or abnormalities of the HFP on magnetic resonance imaging (MRI) are often symptomatic; however, these changes can also be seen in asymptomatic patients. Radiologists should be cautious in emphasising abnormalities of HFP since they do not always cause pain and/or difficulty in walking and, therefore, do not require therapy. Teaching Points • Hoffa's fat pad (HFP) is richly innervated and, therefore, a source of anterior knee pain. • HFP disorders are related to traumas, involvement from adjacent disorders and masses. • Patients with abnormalities of the HFP on MRI are often but not always symptomatic. • Radiologists should be cautious in emphasising abnormalities of HFP.", "title": "" }, { "docid": "4ae82b3362756b0efed84596076ea6fb", "text": "Smart grids equipped with bi-directional communication flow are expected to provide more sophisticated consumption monitoring and energy trading. However, the issues related to the security and privacy of consumption and trading data present serious challenges. In this paper we address the problem of providing transaction security in decentralized smart grid energy trading without reliance on trusted third parties. We have implemented a proof-of-concept for decentralized energy trading system using blockchain technology, multi-signatures, and anonymous encrypted messaging streams, enabling peers to anonymously negotiate energy prices and securely perform trading transactions. We conducted case studies to perform security analysis and performance evaluation within the context of the elicited security and privacy requirements.", "title": "" } ]
scidocsrr
19e7b796871086d407576d1f0ef80d83
Bidirectional Single-Stage Grid-Connected Inverter for a Battery Energy Storage System
[ { "docid": "f1e9c9106dd3cdd7b568d5513b39ac7a", "text": "This paper presents a novel zero-voltage switching (ZVS) approach to a grid-connected single-stage flyback inverter. The soft-switching of the primary switch is achieved by allowing negative current from the grid side through bidirectional switches placed on the secondary side of the transformer. Basically, the negative current discharges the metal-oxide-semiconductor field-effect transistor's output capacitor, thereby allowing turn on of the primary switch under zero voltage. To optimize the amount of reactive current required to achieve ZVS, a variable-frequency control scheme is implemented over the line cycle. In addition, the bidirectional switches on the secondary side of the transformer have ZVS during the turn-on times. Therefore, the switching losses of the bidirectional switches are negligible. A 250-W prototype has been implemented to validate the proposed scheme. Experimental results confirm the feasibility and superior performance of the converter compared with the conventional flyback inverter.", "title": "" }, { "docid": "5042532d025cd5bdb21893a2c2e9f9b4", "text": "This paper presents an energy sharing state-of-charge (SOC) balancing control scheme based on a distributed battery energy storage system architecture where the cell balancing system and the dc bus voltage regulation system are combined into a single system. The battery cells are decoupled from one another by connecting each cell with a small lower power dc-dc power converter. The small power converters are utilized to achieve both SOC balancing between the battery cells and dc bus voltage regulation at the same time. The battery cells' SOC imbalance issue is addressed from the root by using the energy sharing concept to automatically adjust the discharge/charge rate of each cell while maintaining a regulated dc bus voltage. Consequently, there is no need to transfer the excess energy between the cells for SOC balancing.
The theoretical basis and experimental prototype results are provided to illustrate and validate the proposed energy sharing controller.", "title": "" } ]
[ { "docid": "9dd6d9f5643c4884e981676230f3ee66", "text": "A rank-r matrix X ∈ Rm×n can be written as a product UV⊤, where U ∈ Rm×r and V ∈ Rn×r. One could exploit this observation in optimization: e.g., consider the minimization of a convex function f(X) over rank-r matrices, where the scaffold of rank-r matrices is modeled via the factorization in U and V variables. Such heuristic has been widely used before for specific problem instances, where the solution sought is (approximately) low-rank. Though such parameterization reduces the number of variables and is more efficient in computational speed and memory requirement (of particular interest is the case r ≪ min{m,n}), it comes at a cost: f(UV⊤) becomes a non-convex function w.r.t. U and V. In this paper, we study such parameterization in optimization of generic convex f and focus on first-order, gradient descent algorithmic solutions. We propose an algorithm we call the Bi-Factored Gradient Descent (BFGD) algorithm, an efficient first-order method that operates on the U, V factors. We show that when f is smooth, BFGD has local sublinear convergence, and linear convergence when f is both smooth and strongly convex. Moreover, for several key applications, we provide simple and efficient initialization schemes that provide approximate solutions good enough for the above convergence results to hold.", "title": "" }, { "docid": "d5e573802d6519a8da402f2e66064372", "text": "Targeted cyberattacks play an increasingly significant role in disrupting the online social and economic model, not to mention the threat they pose to nation-states. A variety of components and techniques come together to bring about such attacks.", "title": "" }, { "docid": "074d9b68f1604129bcfdf0bb30bbd365", "text": "This paper describes a methodology for semi-supervised learning of dialogue acts using the similarity between sentences.
We suppose that the dialogue sentences with the same dialogue act are more similar in terms of semantic and syntactic information. However, previous work on sentence similarity mainly modeled a sentence as bag-of-words and then compared different groups of words using corpus-based or knowledge-based measurements of word semantic similarity. Novelly, we present a vector-space sentence representation, composed of word embeddings, that is, the related word distributed representations, and these word embeddings are organised in a sentence syntactic structure. Given the vectors of the dialogue sentences, a distance measurement can be well-defined to compute the similarity between them. Finally, a seeded k-means clustering algorithm is implemented to classify the dialogue sentences into several categories corresponding to particular dialogue acts. This constitutes the semi-supervised nature of the approach, which aims to ameliorate the reliance of the availability of annotated corpora. Experiments with Switchboard Dialog Act corpus show that classification accuracy is improved by 14%, compared to the state-of-art methods based on Support Vector Machine.", "title": "" }, { "docid": "e1958dc823feee7f88ab5bf256655bee", "text": "We describe an approach for testing a software system for possible security flaws. Traditionally, security testing is done using penetration analysis and formal methods. Based on the observation that most security flaws are triggered due to a flawed interaction with the environment, we view the security testing problem as the problem of testing for the fault-tolerance properties of a software system. We consider each environment perturbation as a fault and the resulting security compromise a failure in the toleration of such faults. Our approach is based on the well-known technique of fault-injection. Environment faults are injected into the system under test and system behavior observed.
The failure to tolerate faults is an indicator of a potential security flaw in the system. An Environment-Application Interaction (EAI) fault model is proposed. EAI allows us to decide what faults to inject. Based on EAI, we present a security-flaw classification scheme. This scheme was used to classify 142 security flaws in a vulnerability database. This classification revealed that 91% of the security flaws in the database are covered by the EAI model.", "title": "" }, { "docid": "b5af51c869fa4863dfa581b0fb8cc20a", "text": "This paper describes progress toward a prototype implementation of a tool which aims to improve literacy in deaf high school and college students who are native (or near native) signers of American Sign Language (ASL). We envision a system that will take a piece of text written by a deaf student, analyze that text for grammatical errors, and engage that student in a tutorial dialogue, enabling the student to generate appropriate corrections to the text. A strong focus of this work is to develop a system which adapts this process to the knowledge level and learning strengths of the user and which has the flexibility to engage in multi-modal, multilingual tutorial instruction utilizing both English and the native language of the user.", "title": "" }, { "docid": "7f6e966f3f924e18cb3be0ae618309e6", "text": "designed shapes incorporating typedesign tradition, the rules related to visual appearance, and the design ideas of a skilled character designer. The typographic design process is structured and systematic: letterforms are visually related in weight, contrast, space, alignment, and style. To create a new typeface family, type designers generally start by designing a few key characters—such as o, h, p, and v— incorporating the most important structure elements such as vertical stems, round parts, diagonal bars, arches, and serifs (see Figure 1).
They can then use the design features embedded into these structure elements (stem width, behavior of curved parts, contrast between thick and thin shape parts, and so on) to design the font’s remaining characters. Today’s industrial font description standards such as Adobe Type 1 or TrueType represent typographic characters by their shape outlines, because of the simplicity of digitizing the contours of well-designed, large-size master characters. However, outline characters only implicitly incorporate the designer’s intentions. Because their structure elements aren’t explicit, creating aesthetically appealing derived designs requiring coherent changes in character width, weight (boldness), and contrast is difficult. Outline characters aren’t suitable for optical scaling, which requires relatively fatter letter shapes at small sizes. Existing approaches for creating derived designs from outline fonts require either specifying constraints to maintain the coherence of structure elements across different characters or creating multiple master designs for the interpolation of derived designs. We present a new approach for describing and synthesizing typographic character shapes. Instead of describing characters by their outlines, we conceive each character as an assembly of structure elements (stems, bars, serifs, round parts, and arches) implemented by one or several shape components. We define the shape components by typeface-category-dependent global parameters such as the serif and junction types, by global font-dependent metrics such as the location of reference lines and the width of stems and curved parts, and by group and local parameters. (See the sidebar “Previous Work” for background information on the field of parameterizable fonts.)", "title": "" }, { "docid": "b527ade4819e314a723789de58280724", "text": "Securing collaborative filtering systems from malicious attack has become an important issue with increasing popularity of recommender Systems. 
Since recommender systems are entirely based on the input provided by the users or customers, they tend to become highly vulnerable to outside attacks. Prior research has shown that attacks can significantly affect the robustness of the systems. To prevent such attacks, researchers proposed several unsupervised detection mechanisms. While these approaches produce satisfactory results in detecting some well studied attacks, they are not suitable for all types of attacks studied recently. In this paper, we show that the unsupervised clustering can be used effectively for attack detection by computing detection attributes modeled on basic descriptive statistics. We performed extensive experiments and discussed different approaches regarding their performances. Our experimental results showed that attribute-based unsupervised clustering algorithm can detect spam users with a high degree of accuracy and fewer misclassified genuine users regardless of attack strategies.", "title": "" }, { "docid": "73e4fed83bf8b1f473768ce15d6a6a86", "text": "Improving science, technology, engineering, and mathematics (STEM) education, especially for traditionally disadvantaged groups, is widely recognized as pivotal to the U.S.'s long-term economic growth and security. In this article, we review and discuss current research on STEM education in the U.S., drawing on recent research in sociology and related fields. The reviewed literature shows that different social factors affect the two major components of STEM education attainment: (1) attainment of education in general, and (2) attainment of STEM education relative to non-STEM education conditional on educational attainment. Cognitive and social psychological characteristics matter for both major components, as do structural influences at the neighborhood, school, and broader cultural levels. 
However, while commonly used measures of socioeconomic status (SES) predict the attainment of general education, social psychological factors are more important influences on participation and achievement in STEM versus non-STEM education. Domestically, disparities by family SES, race, and gender persist in STEM education. Internationally, American students lag behind those in some countries with less economic resources. Explanations for group disparities within the U.S. and the mediocre international ranking of US student performance require more research, a task that is best accomplished through interdisciplinary approaches.", "title": "" }, { "docid": "7c5f2c92cb3d239674f105a618de99e0", "text": "We consider the isolated spelling error correction problem as a specific subproblem of the more general string-to-string translation problem. In this context, we investigate four general string-to-string transformation models that have been suggested in recent years and apply them within the spelling error correction paradigm. In particular, we investigate how a simple ‘k-best decoding plus dictionary lookup’ strategy performs in this context and find that such an approach can significantly outdo baselines such as edit distance, weighted edit distance, and the noisy channel Brill and Moore model to spelling error correction. We also consider elementary combination techniques for our models such as language model weighted majority voting and center string combination. Finally, we consider real-world OCR post-correction for a dataset sampled from medieval Latin texts.", "title": "" }, { "docid": "4a5abe07b93938e7549df068967731fc", "text": "A novel compact dual-polarized unidirectional wideband antenna based on two crossed magneto-electric dipoles is proposed. The proposed miniaturization method consist in transforming the electrical filled square dipoles into vertical folded square loops. 
The surface of the radiating element is reduced to 0.23λ0∗0.23λ0, where λ0 is the wavelength at the lowest operation frequency for a standing wave ratio (SWR) <2.5, which corresponds to a reduction factor of 48%. The antenna has been prototyped using 3D printing technology. The measured input impedance bandwidth is 51.2% from 1.7 GHz to 2.9 GHz with a Standing wave ratio (SWR) <2.", "title": "" }, { "docid": "1331dc5705d4b416054341519126f32f", "text": "There is a large tradition of work in moral psychology that explores the capacity for moral judgment by focusing on the basic capacity to distinguish moral violations (e.g. hitting another person) from conventional violations (e.g. playing with your food). However, only recently have there been attempts to characterize the cognitive mechanisms underlying moral judgment (e.g. Cognition 57 (1995) 1; Ethics 103 (1993) 337). Recent evidence indicates that affect plays a crucial role in mediating the capacity to draw the moral/conventional distinction. However, the prevailing account of the role of affect in moral judgment is problematic. This paper argues that the capacity to draw the moral/conventional distinction depends on both a body of information about which actions are prohibited (a Normative Theory) and an affective mechanism. This account leads to the prediction that other normative prohibitions that are connected to an affective mechanism might be treated as non-conventional. An experiment is presented that indicates that \"disgust\" violations (e.g. spitting at the table), are distinguished from conventional violations along the same dimensions as moral violations.", "title": "" }, { "docid": "ad5b787fd972c202a69edc98a8fbc7ba", "text": "BACKGROUND\nIntimate partner violence (IPV) is a major public health problem with serious consequences for women's physical, mental, sexual and reproductive health. 
Reproductive health outcomes such as unwanted and terminated pregnancies, fetal loss or child loss during infancy, non-use of family planning methods, and high fertility are increasingly recognized. However, little is known about the role of community influences on women's experience of IPV and its effect on terminated pregnancy, given the increased awareness of IPV being a product of social context. This study sought to examine the role of community-level norms and characteristics in the association between IPV and terminated pregnancy in Nigeria.\n\n\nMETHODS\nMultilevel logistic regression analyses were performed on nationally-representative cross-sectional data including 19,226 women aged 15-49 years in Nigeria. Data were collected by a stratified two-stage sampling technique, with 888 primary sampling units (PSUs) selected in the first sampling stage, and 7,864 households selected through probability sampling in the second sampling stage.\n\n\nRESULTS\nWomen who had experienced physical IPV, sexual IPV, and any IPV were more likely to have terminated a pregnancy compared to women who had not experienced these IPV types. IPV types were significantly associated with factors reflecting relationship control, relationship inequalities, and socio-demographic characteristics. Characteristics of the women aggregated at the community level (mean education, justifying wife beating, mean age at first marriage, and contraceptive use) were significantly associated with IPV types and terminated pregnancy.\n\n\nCONCLUSION\nFindings indicate the role of community influence in the association between IPV-exposure and terminated pregnancy, and stress the need for screening women seeking abortions for a history of abuse.", "title": "" }, { "docid": "20718ae394b5f47387499e5f3360a888", "text": "Crowding, the inability to recognize objects in clutter, sets a fundamental limit on conscious visual perception and object recognition throughout most of the visual field.
Despite how widespread and essential it is to object recognition, reading and visually guided action, a solid operational definition of what crowding is has only recently become clear. The goal of this review is to provide a broad-based synthesis of the most recent findings in this area, to define what crowding is and is not, and to set the stage for future work that will extend our understanding of crowding well beyond low-level vision. Here we define six diagnostic criteria for what counts as crowding, and further describe factors that both escape and break crowding. All of these lead to the conclusion that crowding occurs at multiple stages in the visual hierarchy.", "title": "" }, { "docid": "e5ce1ddd50a728fab41043324938a554", "text": "B-trees are used by many file systems to represent files and directories. They provide guaranteed logarithmic time key-search, insert, and remove. File systems like WAFL and ZFS use shadowing, or copy-on-write, to implement snapshots, crash recovery, write-batching, and RAID. Serious difficulties arise when trying to use b-trees and shadowing in a single system.\n This article is about a set of b-tree algorithms that respects shadowing, achieves good concurrency, and implements cloning (writeable snapshots). Our cloning algorithm is efficient and allows the creation of a large number of clones.\n We believe that using our b-trees would allow shadowing file systems to better scale their on-disk data structures.", "title": "" }, { "docid": "54234eef5d56951e408d2a163dfd27f8", "text": "In many applications of wireless sensor networks (WSNs), node location is required to locate the monitored event once occurs. Mobility-assisted localization has emerged as an efficient technique for node localization. It works on optimizing a path planning of a location-aware mobile node, called mobile anchor (MA). 
The task of the MA is to traverse the area of interest (network) in a way that minimizes the localization error while maximizing the number of successful localized nodes. For simplicity, many path planning models assume that the MA has a sufficient source of energy and time, and the network area is obstacle-free. However, in many real-life applications such assumptions are rare. When the network area includes many obstacles, which need to be avoided, and the MA itself has a limited movement distance that cannot be exceeded, a dynamic movement approach is needed. In this paper, we propose two novel dynamic movement techniques that offer obstacle-avoidance path planning for mobility-assisted localization in WSNs. The movement planning is designed in a real-time using two swarm intelligence based algorithms, namely grey wolf optimizer and whale optimization algorithm. Both of our proposed models, grey wolf optimizer-based path planning and whale optimization algorithm-based path planning, provide superior outcomes in comparison to other existing works in several metrics including both localization ratio and localization error rate.", "title": "" }, { "docid": "a488509590cd496669bdcc3ce8cc5fe5", "text": "Ghrelin is an endogenous ligand for the growth hormone secretagogue receptor and a well-characterized food intake regulatory peptide. Hypothalamic ghrelin-, neuropeptide Y (NPY)-, and orexin-containing neurons form a feeding regulatory circuit. Orexins and NPY are also implicated in sleep-wake regulation. Sleep responses and motor activity after central administration of 0.2, 1, or 5 microg ghrelin in free-feeding rats as well as in feeding-restricted rats (1 microg dose) were determined. Food and water intake and behavioral responses after the light onset injection of saline or 1 microg ghrelin were also recorded. Light onset injection of ghrelin suppressed non-rapid-eye-movement sleep (NREMS) and rapid-eye-movement sleep (REMS) for 2 h. 
In the first hour, ghrelin induced increases in behavioral activity including feeding, exploring, and grooming and stimulated food and water intake. Ghrelin administration at dark onset also elicited NREMS and REMS suppression in hours 1 and 2, but the effect was not as marked as that, which occurred in the light period. In hours 3-12, a secondary NREMS increase was observed after some doses of ghrelin. In the feeding-restricted rats, ghrelin suppressed NREMS in hours 1 and 2 and REMS in hours 3-12. Data are consistent with the notion that ghrelin has a role in the integration of feeding, metabolism, and sleep regulation.", "title": "" }, { "docid": "7b27d8b8f05833888b9edacf9ace0a18", "text": "This paper reports results from a study on the adoption of an information visualization system by administrative data analysts. Despite the fact that the system was neither fully integrated with their current software tools nor with their existing data analysis practices, analysts identified a number of key benefits that visualization systems provide to their work. These benefits for the most part occurred when analysts went beyond their habitual and well-mastered data analysis routines and engaged in creative discovery processes. We analyze the conditions under which these benefits arose, to inform the design of visualization systems that can better assist the work of administrative data analysts.", "title": "" }, { "docid": "8a7f4cde54d120aab50c9d4f45e67a43", "text": "The purpose of this study was to assess the perceived discomfort of patrol officers related to equipment and vehicle design and whether there were discomfort differences between day and night shifts. A total of 16 participants were recruited (10 males, 6 females) from a local police force to participate for one full day shift and one full night shift. 
A series of questionnaires were administered to acquire information regarding comfort with specific car features and occupational gear, body part discomfort and health and lifestyle. The discomfort questionnaires were administered three times during each shift to monitor discomfort progression within a shift. Although there were no significant discomfort differences reported between the day and night shifts, perceived discomfort was identified for specific equipment, vehicle design and vehicle configuration, within each 12-h shift.", "title": "" }, { "docid": "6150e19bffad5629c6d5cb7439663b13", "text": "We present NeuroLinear, a system for extracting oblique decision rules from neural networks that have been trained for classification of patterns. Each condition of an oblique decision rule corresponds to a partition of the attribute space by a hyperplane that is not necessarily axis-parallel. Allowing a set of such hyperplanes to form the boundaries of the decision regions leads to a significant reduction in the number of rules generated while maintaining the accuracy rates of the networks. We describe the components of NeuroLinear in detail by way of two examples using artificial datasets. Our experimental results on real-world datasets show that the system is effective in extracting compact and comprehensible rules with high predictive accuracy from neural networks.", "title": "" }, { "docid": "ab0c80a10d26607134828c6b350089aa", "text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors.
A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.", "title": "" } ]
scidocsrr
841fc2f45374901757ef197cf666e2e9
Perceived learning environment and students' emotional experiences: A multilevel analysis of mathematics classrooms
[ { "docid": "e47276a0b7139e31266d032bb3a0cbfc", "text": "We assessed math anxiety in 6th- through 12th-grade children (N = 564) as part of a comprehensive longitudinal investigation of children's beliefs, attitudes, and values concerning mathematics. Confirmatory factor analyses provided evidence for two components of math anxiety, a negative affective reactions component and a cognitive component. The affective component of math anxiety related more strongly and negatively than did the worry component to children's ability perceptions, performance perceptions, and math performance. The worry component related more strongly and positively than did the affective component to the importance that children attach to math and their reported actual effort in math. Girls reported stronger negative affective reactions to math than did boys. Ninth-grade students reported experiencing the most worry about math and sixth graders the least.", "title": "" }, { "docid": "db422d1fcb99b941a43e524f5f2897c2", "text": "AN INDIVIDUAL CORRELATION is a correlation in which the statistical object or thing described is indivisible. The correlation between color and illiteracy for persons in the United States, shown later in Table I, is an individual correlation, because the kind of thing described is an indivisible unit, a person. In an individual correlation the variables are descriptive properties of individuals, such as height, income, eye color, or race, and not descriptive statistical constants such as rates or means. In an ecological correlation the statistical object is a group of persons. The correlation between the percentage of the population which is Negro and the percentage of the population which is illiterate for the 48 states, shown later as Figure 2, is an ecological correlation. The thing described is the population of a state, and not a single individual. The variables are percentages, descriptive properties of groups, and not descriptive properties of individuals.
Ecological correlations are used in an impressive number of quantitative sociological studies, some of which by now have attained the status of classics: Cowles’ ‘‘Statistical Study of Climate in Relation to Pulmonary Tuberculosis’’; Gosnell’s ‘‘Analysis of the 1932 Presidential Vote in Chicago,’’ ‘‘Factorial and Correlational Analysis of the 1934 Vote in Chicago,’’ and the more elaborate factor analysis in Machine Politics; Ogburn’s ‘‘How women vote,’’ ‘‘Measurement of the Factors in the Presidential Election of 1928,’’ ‘‘Factors in the Variation of Crime Among Cities,’’ and Groves and Ogburn’s correlation analyses in American Marriage and Family Relationships; Ross’ study of school attendance in Texas; Shaw’s Delinquency Areas study of the correlates of delinquency, as well as the more recent analyses in Juvenile Delinquency in Urban Areas; Thompson’s ‘‘Some Factors Influencing the Ratios of Children to Women in American Cities, 1930’’; Whelpton’s study of the correlates of birth rates, in ‘‘Geographic and Economic Differentials in Fertility;’’ and White’s ‘‘The Relation of Felonies to Environmental Factors in Indianapolis.’’ Although these studies and scores like them depend upon ecological correlations, it is not because their authors are interested in correlations between the properties of areas as such. Even out-and-out ecologists, in studying delinquency, for example, rely primarily upon data describing individuals, not areas. In each study which uses ecological correlations, the obvious purpose is to discover something about the behavior of individuals. Ecological correlations are used simply because correlations between the properties of individuals are not available. In each instance, however, the substitution is made tacitly rather than explicitly.
The purpose of this paper is to clarify the ecological correlation problem by stating, mathematically, the exact relation between ecological and individual correlations, and by showing the bearing of that relation upon the practice of using ecological correlations as substitutes for individual correlations.", "title": "" }, { "docid": "f71d0084ebb315a346b52c7630f36fb2", "text": "A theory of motivation and emotion is proposed in which causal ascriptions play a key role. It is first documented that in achievement-related contexts there are a few dominant causal perceptions. The perceived causes of success and failure share three common properties: locus, stability, and controllability, with intentionality and globality as other possible causal structures. The perceived stability of causes influences changes in expectancy of success; all three dimensions of causality affect a variety of common emotional experiences, including anger, gratitude, guilt, hopelessness, pity, pride, and shame. Expectancy and affect, in turn, are presumed to guide motivated behavior. The theory therefore relates the structure of thinking to the dynamics of feeling and action. Analysis of a created motivational episode involving achievement strivings is offered, and numerous empirical observations are examined from this theoretical position. The strength of the empirical evidence, the capability of this theory to address prevalent human emotions, and the potential generality of the conception are stressed.", "title": "" } ]
[ { "docid": "264aa89aa10fe05cff2f0e1a239e79ff", "text": "While the terminology has changed over time, the basic concept of the Digital Twin model has remained fairly stable from its inception in 2001. It is based on the idea that a digital informational construct about a physical system could be created as an entity on its own. This digital information would be a “twin” of the information that was embedded within the physical system itself and be linked with that physical system through the entire lifecycle of the system.", "title": "" }, { "docid": "fce170ad2238ad6066c9e17a3a388e7d", "text": "Language resources that systematically organize paraphrases for binary relations are of great value for various NLP tasks and have recently been advanced in projects like PATTY, WiseNet and DEFIE. This paper presents a new method for building such a resource and the resource itself, called POLY. Starting with a very large collection of multilingual sentences parsed into triples of phrases, our method clusters relational phrases using probabilistic measures. We judiciously leverage fine-grained semantic typing of relational arguments for identifying synonymous phrases. The evaluation of POLY shows significant improvements in precision and recall over the prior works on PATTY and DEFIE. An extrinsic use case demonstrates the benefits of POLY for question answering.", "title": "" }, { "docid": "d8ce92b054fc425a5db5bf17a62c6308", "text": "The possibility that wind turbine noise (WTN) affects human health remains controversial. The current analysis presents results related to WTN annoyance reported by randomly selected participants (606 males, 632 females), aged 18-79, living between 0.25 and 11.22 km from wind turbines. WTN levels reached 46 dB, and for each 5 dB increase in WTN levels, the odds of reporting to be either very or extremely (i.e., highly) annoyed increased by 2.60 [95% confidence interval: (1.92, 3.58), p < 0.0001]. 
Multiple regression models had R²'s up to 58%, with approximately 9% attributed to WTN level. Variables associated with WTN annoyance included, but were not limited to, other wind turbine-related annoyances, personal benefit, noise sensitivity, physical safety concerns, property ownership, and province. Annoyance was related to several reported measures of health and well-being, although these associations were statistically weak (R² < 9%), independent of WTN levels, and not retained in multiple regression models. The role of community tolerance level as a complement and/or an alternative to multiple regression in predicting the prevalence of WTN annoyance is also provided. The analysis suggests that communities are between 11 and 26 dB less tolerant of WTN than of other transportation noise sources.", "title": "" }, { "docid": "7b4dd695182f7e15e58f44e309bf897c", "text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth's crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. 
The physical and transport properties are highlighted for further applications in electronic and optoelectronic devices.", "title": "" }, { "docid": "c7808ecbca4c5bf8e8093dce4d8f1ea7", "text": "This project deals with the design and motion-planning algorithm of a caterpillar-based pipeline robot that can be used for inspection of 80–100-mm pipelines in an indoor pipeline environment. The robot system consists of a robot body, a control system, a CMOS camera, an accelerometer, a temperature sensor, and a ZigBee module. The robot body is designed with the help of a CAD tool. The control system consists of an Atmega16 microcontroller and the Atmel Studio IDE. The robot system uses a differential drive to steer the robot and spring-loaded four-bar mechanisms to ensure that the robot expands to grip the pipe walls. Unique features of this robot are the caterpillar wheels, the four-bar mechanisms that provide a firm grip on the pipe wall, and a simple, easy-to-use interface.", "title": "" }, { "docid": "d3fda1730c1297ed3b63a1d4f133d893", "text": "Registered nurses were queried about their knowledge and attitudes regarding pain management. Results suggest knowledge of pain management principles and interventions is insufficient.", "title": "" }, { "docid": "42cfea27f8dcda6c58d2ae0e86f2fb1a", "text": "Most of the lane marking detection algorithms reported in the literature are suitable for highway scenarios. This paper presents a novel clustered particle filter based approach to lane detection, which is suitable for urban streets in normal traffic conditions. Furthermore, a quality measure for the detection is calculated as a measure of reliability. The core of this approach is the usage of weak models, i.e. the avoidance of strong assumptions about the road geometry. Experiments were carried out in Sydney urban areas with a vehicle-mounted laser range scanner and a CCD camera. 
Through experimentation, we have shown that a clustered particle filter can be used to efficiently extract lane markings.", "title": "" }, { "docid": "76d22feb7da3dbc14688b0d999631169", "text": "Guilt proneness is a personality trait indicative of a predisposition to experience negative feelings about personal wrongdoing, even when the wrongdoing is private. It is characterized by the anticipation of feeling bad about committing transgressions rather than by guilty feelings in a particular moment or generalized guilty feelings that occur without an eliciting event. Our research has revealed that guilt proneness is an important character trait because knowing a person’s level of guilt proneness helps us to predict the likelihood that they will behave unethically. For example, online studies of adults across the U.S. have shown that people who score high in guilt proneness (compared to low scorers) make fewer unethical business decisions, commit fewer delinquent behaviors, and behave more honestly when they make economic decisions. In the workplace, guilt-prone employees are less likely to engage in counterproductive behaviors that harm their organization.", "title": "" }, { "docid": "2d0cc4c7ca6272200bb1ed1c9bba45f0", "text": "Advanced Driver Assistance Systems (ADAS) based on video cameras are becoming widespread in today's automobiles. However, while most of these systems perform well in good weather conditions, they perform very poorly in adverse weather, particularly rain. We present a novel approach that aims at detecting raindrops on a car windshield using only images from an in-vehicle camera. Based on the photometric properties of raindrops, the algorithm relies on image processing techniques to highlight raindrops. 
Its results can be further used for image restoration and vision enhancement and hence it is a valuable tool for ADAS.", "title": "" }, { "docid": "bc1f7e30b8dcef97c1d8de2db801c4f6", "text": "In this paper a novel method is introduced based on the use of an unsupervised version of the kernel least mean square (KLMS) algorithm for solving ordinary differential equations (ODEs). The algorithm is unsupervised because no desired signal needs to be determined by the user, and the output of the model is generated by iterating the algorithm progressively. The method offers simple implementation, fast convergence, and small error, while retaining the characteristic properties of KLMS. In this paper the ability of KLMS is used to estimate the solution of an ODE. First, a trial solution of the ODE is written as a sum of two parts: the first part satisfies the initial condition, and the second part is trained using the KLMS algorithm so that the trial solution solves the ODE. The accuracy of the method is illustrated by solving several problems. Also, the sensitivity of the convergence is analyzed by changing the step size parameters and kernel functions. Finally, the proposed method is compared with the neuro-fuzzy [21] approach. Crown Copyright © 2011 Published by Elsevier B.V. All rights reserved.", "title": "" }, { "docid": "042431e96028ed9729e6b174a78d642d", "text": "We address the problem of multi-class classification in the case where the number of classes is very large. We propose a double sampling strategy on top of a multi-class to binary reduction strategy, which transforms the original multi-class problem into a binary classification problem over pairs of examples. The aim of the sampling strategy is to overcome the curse of long-tailed class distributions exhibited in the majority of large-scale multi-class classification problems and to reduce the number of pairs of examples in the expanded data. 
We show that this strategy does not alter the consistency of the empirical risk minimization principle defined over the double sample reduction. Experiments are carried out on DMOZ and Wikipedia collections with 10,000 to 100,000 classes, where we show the efficiency of the proposed approach in terms of training and prediction time, memory consumption, and predictive performance with respect to state-of-the-art approaches.", "title": "" }, { "docid": "a5ace543a0e9b87d54cbe77c6a86c40f", "text": "Packet capture is an essential function for many network applications. However, packet drop is a major problem with packet capture in high-speed networks. This paper presents WireCAP, a novel packet capture engine for commodity network interface cards (NICs) in high-speed networks. WireCAP provides lossless zero-copy packet capture and delivery services by exploiting multi-queue NICs and multicore architectures. WireCAP introduces two new mechanisms, the ring-buffer-pool mechanism and the buddy-group-based offloading mechanism, to address the packet drop problem of packet capture in high-speed networks. WireCAP is efficient. It also facilitates the design and operation of a user-space packet-processing application. Experiments have demonstrated that WireCAP achieves better packet capture performance when compared to existing packet capture engines.\n In addition, WireCAP implements a packet transmit function that allows captured packets to be forwarded, potentially after the packets are modified or inspected in flight. Therefore, WireCAP can be used to support middlebox-type applications. Thus, at a high level, WireCAP provides a new packet I/O framework for commodity NICs in high-speed networks.", "title": "" }, { "docid": "a87da46ab4026c566e3e42a5695fd8c9", "text": "Micro aerial vehicles (MAVs) are an excellent platform for autonomous exploration. Most MAVs rely mainly on cameras for building a map of the 3D environment. 
Therefore, vision-based MAVs require an efficient exploration algorithm to select viewpoints that provide informative measurements. In this paper, we propose an exploration approach that selects in real time the next-best-view that maximizes the expected information gain of new measurements. In addition, we take into account the cost of reaching a new viewpoint in terms of distance and predictability of the flight path for a human observer. Finally, our approach selects a path that reduces the risk of crashes when the expected battery life comes to an end, while still maximizing the information gain in the process. We implemented and thoroughly tested our approach and the experiments show that it offers an improved performance compared to other state-of-the-art algorithms in terms of precision of the reconstruction, execution time, and smoothness of the path.", "title": "" }, { "docid": "2f5d428b8da4d5b5009729fc1794e53d", "text": "The resolution of a synthetic aperture radar (SAR) image, in range and azimuth, is determined by the transmitted bandwidth and the synthetic aperture length, respectively. Various superresolution techniques for improving resolution have been proposed, and we have proposed an algorithm that we call polarimetric bandwidth extrapolation (PBWE). To apply PBWE to a radar image, one needs to first apply PBWE in the range direction and then in the azimuth direction, or vice versa . In this paper, PBWE is further extended to the 2-D case. This extended case (2D-PBWE) utilizes a 2-D polarimetric linear prediction model and expands the spatial frequency bandwidth in range and azimuth directions simultaneously. The performance of the 2D-PBWE is shown through a simulated radar image and a real polarimetric SAR image", "title": "" }, { "docid": "3a75cf54ace0ebb56b985e1452151a91", "text": "Ubiquitous networks support the roaming service for mobile communication devices. 
The mobile user can use the services in the foreign network with the help of the home network. Mutual authentication plays an important role in roaming services, and researchers have focused their attention on authentication schemes. Recently, in 2016, Gope and Hwang found that the mutual authentication scheme of He et al. for global mobility networks had security disadvantages such as vulnerability to forgery attacks, unfair key agreement, and lack of user anonymity. Then, they presented an improved scheme. However, we find that the scheme cannot resist the off-line guessing attack and the de-synchronization attack. Also, it lacks strong forward security. Moreover, the session key is known to HA in that scheme. To overcome these weaknesses, we propose a new two-factor authentication scheme for global mobility networks. We use formal proof with the random oracle model, formal verification with the tool ProVerif, and informal analysis to demonstrate the security of the proposed scheme. Compared with some very recent schemes, our scheme is more applicable. Copyright © 2016 John Wiley & Sons, Ltd.", "title": "" }, { "docid": "56bad8cef0c8ed0af6882dbc945298ef", "text": "We describe a new class of learning models called memory networks. Memory networks reason with inference components combined with a long-term memory component; they learn how to use these jointly. The long-term memory can be read and written to, with the goal of using it for prediction. We investigate these models in the context of question answering (QA) where the long-term memory effectively acts as a (dynamic) knowledge base, and the output is a textual response. We evaluate them on a large-scale QA task, and a smaller, but more complex, toy task generated from a simulated world. 
In the latter, we show the reasoning power of such models by chaining multiple supporting sentences to answer questions that require understanding the intension of verbs.", "title": "" }, { "docid": "f5ba54c76166eed39da96f86a8bbd2a1", "text": "The digital divide refers to the separation between those who have access to digital information and communications technology (ICT) and those who do not. Many believe that universal access to ICT would bring about a global community of interaction, commerce, and learning resulting in higher standards of living and improved social welfare. However, the digital divide threatens this outcome, leading many public policy makers to debate the best way to bridge the divide. Much of the research on the digital divide focuses on first order effects regarding who has access to the technology, but some work addresses the second order effects of inequality in the ability to use the technology among those who do have access. In this paper, we examine both first and second order effects of the digital divide at three levels of analysis  the individual level, the organizational level, and the global level. At each level, we survey the existing research noting the theoretical perspective taken in the work, the research methodology employed, and the key results that were obtained. We then suggest a series of research questions at each level of analysis to guide researchers seeking to further examine the digital divide and how it impacts citizens, managers, and economies.", "title": "" }, { "docid": "258e931d5c8d94f73be41cbb0058f49b", "text": "VerSum allows lightweight clients to outsource expensive computations over large and frequently changing data structures, such as the Bitcoin or Namecoin blockchains, or a Certificate Transparency log. VerSum clients ensure that the output is correct by comparing the outputs from multiple servers. 
VerSum assumes that at least one server is honest, and crucially, when servers disagree, VerSum uses an efficient conflict resolution protocol to determine which server(s) made a mistake and thus obtain the correct output.\n VerSum's contribution lies in achieving low server-side overhead for both incremental re-computation and conflict resolution, using three key ideas: (1) representing the computation as a functional program, which allows memoization of previous results; (2) recording the evaluation trace of the functional program in a carefully designed computation history to help clients determine which server made a mistake; and (3) introducing a new authenticated data structure for sequences, called SeqHash, that makes it efficient for servers to construct summaries of computation histories in the presence of incremental re-computation. Experimental results with an implementation of VerSum show that VerSum can be used for a variety of computations, that it can support many clients, and that it can easily keep up with Bitcoin's rate of new blocks with transactions.", "title": "" }, { "docid": "43ca9719740147e88e86452bb42f5644", "text": "Currently in the US, over 97% of food waste is estimated to be buried in landfills. There is nonetheless interest in strategies to divert this waste from landfills as evidenced by a number of programs and policies at the local and state levels, including collection programs for source separated organic wastes (SSO). The objective of this study was to characterize the state-of-the-practice of food waste treatment alternatives in the US and Canada. Site visits were conducted to aerobic composting and two anaerobic digestion facilities, in addition to meetings with officials that are responsible for program implementation and financing. The technology to produce useful products from either aerobic or anaerobic treatment of SSO is in place. 
However, there are a number of implementation issues that must be addressed, principally project economics and feedstock purity. Project economics varied by region based on landfill disposal fees. Feedstock purity can be obtained by enforcement of contaminant standards and/or manual or mechanical sorting of the feedstock prior to and after treatment. Future SSO diversion will be governed by economics and policy incentives, including landfill organics bans and climate change mitigation policies.", "title": "" }, { "docid": "c7b9c324171d40cec24ed089933a06ce", "text": "With the proliferation of the internet and increased global access to online media, cybercrime is also occurring at an increasing rate. Currently, both personal users and companies are vulnerable to cybercrime. A number of tools including firewalls and Intrusion Detection Systems (IDS) can be used as defense mechanisms. A firewall acts as a checkpoint which allows packets to pass through according to predetermined conditions. In extreme cases, it may even disconnect all network traffic. An IDS, on the other hand, automates the monitoring process in computer networks. The streaming nature of data in computer networks poses a significant challenge in building IDS. In this paper, a method is proposed to overcome this problem by performing online classification on datasets. In doing so, an incremental naive Bayesian classifier is employed. Furthermore, active learning enables solving the problem using a small set of labeled data points which are often very expensive to acquire. The proposed method includes two groups of actions i.e. offline and online. The former involves data preprocessing while the latter introduces the NADAL online method. The proposed method is compared to the incremental naive Bayesian classifier using the NSL-KDD standard dataset. 
The proposed method has three advantages: (1) it overcomes the streaming data challenge; (2) it reduces the high cost associated with instance labeling; and (3) it improves accuracy and Kappa compared to the incremental naive Bayesian approach. Thus, the method is well-suited to IDS applications.", "title": "" } ]
scidocsrr
a1a4c99e02f541e789f8618ca65b41f3
Double Embeddings and CNN-based Sequence Labeling for Aspect Extraction
[ { "docid": "e3d212f67713f6a902fe0f3eb468eddf", "text": "We propose a novel LSTM-based deep multi-task learning framework for aspect term extraction from user review sentences. Two LSTMs equipped with extended memories and neural memory operations are designed for jointly handling the extraction tasks of aspects and opinions via memory interactions. Sentimental sentence constraint is also added for more accurate prediction via another LSTM. Experiment results over two benchmark datasets demonstrate the effectiveness of our framework.", "title": "" } ]
[ { "docid": "9ebdf3493d6a80d12c97348a2d203d3e", "text": "Agile software development methodologies have been greeted with enthusiasm by many software developers, yet their widespread adoption has also resulted in closer examination of their strengths and weaknesses. While analyses and evaluations abound, the need still remains for an objective and systematic appraisal of Agile processes specifically aimed at defining strategies for their improvement. We provide a review of the strengths and weaknesses identified in Agile processes, based on which a strengths-weaknesses-opportunities-threats (SWOT) analysis of the processes is performed. We suggest this type of analysis as a useful tool for highlighting and addressing the problem issues in Agile processes, since the results can be used as improvement strategies.", "title": "" }, { "docid": "b5097e718754c02cddd02a1c147c6398", "text": "A semi-automatic parking system is a driver convenience system automating the steering control required during parking operation. This paper proposes novel monocular-vision based target parking-slot recognition by recognizing parking-slot markings when the driver designates a seed-point inside the target parking-slot with a touch screen. The proposed method compensates for the distortion of the fisheye lens and constructs a bird’s eye view image using homography. Because adjacent vehicles are projected along the outward direction from the camera in the bird’s eye view image, if the marking line-segment distinguishing parking-slots from the roadway and the front-ends of marking line-segments dividing parking-slots are observed, the proposed method successfully recognizes the target parking-slot marking. Directional intensity gradient, utilizing the width of the marking line-segment and the direction of the seed-point with respect to the camera position as prior knowledge, can detect marking line-segments irrespective of noise and illumination variation. 
Making efficient use of the structure of parking-slot markings in the bird’s eye view image, the proposed method simply recognizes the target parking-slot marking. It is validated by experiments that the proposed method can successfully recognize the target parking-slot under various situations and illumination conditions.", "title": "" }, { "docid": "8107b3dc36d240921571edfc778107ff", "text": "FinFET devices have been proposed as a promising substitute for conventional bulk CMOS-based devices at the nanoscale due to their extraordinary properties such as improved channel controllability, a high on/off current ratio, reduced short-channel effects, and relative immunity to gate line-edge roughness. This brief builds standard cell libraries for the advanced 7-nm FinFET technology, supporting multiple threshold voltages and supply voltages. The circuit synthesis results of various combinational and sequential circuits based on the presented 7-nm FinFET standard cell libraries forecast 10× and 1000× energy reductions on average in a superthreshold regime and 16× and 3000× energy reductions on average in a near-threshold regime as compared with the results of the 14-nm and 45-nm bulk CMOS technology nodes, respectively.", "title": "" }, { "docid": "ef65f603b9f0441378e53ec7cabf7940", "text": "Event extraction has been well studied for more than two decades, through both the lens of document-level and sentence-level event extraction. However, event extraction methods to date do not yet offer a satisfactory solution to providing concise, structured, document-level summaries of events in news articles. Prior work on document-level event extraction has focused on highly specific domains, often with great reliance on handcrafted rules. Such approaches do not generalize well to new domains. 
In contrast, sentence-level event extraction methods have applied to a much wider variety of domains, but generate output at such fine-grained details that they cannot offer good document-level summaries of events. In this thesis, we propose a new framework for extracting document-level event summaries called macro-events, unifying together aspects of both information extraction and text summarization. The goal of this work is to extract concise, structured representations of documents that can clearly outline the main event of interest and all the necessary argument fillers to describe the event. Unlike work in abstractive and extractive summarization, we seek to create template-based, structured summaries, rather than plain text summaries. We propose three novel methods to address the macro-event extraction task. First, we introduce a structured prediction model based on the Learning to Search framework for jointly learning argument fillers both across and within event argument slots. Second, we propose a multi-layer neural network that is trained directly on macro-event annotated data. Finally, we propose a deep learning method that treats the problem as machine comprehension, which does not require training with any on-domain macro-event labeled data. Our experimental results on a variety of domains show that such algorithms can achieve stronger performance on this task compared to existing baseline approaches. On average across all datasets, neural networks can achieve a 1.76% and 3.96% improvement on micro-averaged and macro-averaged F1 respectively over baseline approaches, while Learning to Search achieves a 3.87% and 5.10% improvement over baseline approaches on the same metrics. 
Furthermore, under scenarios of limited training data, we find that machine comprehension models can offer very strong performance compared to directly supervised algorithms, while requiring very little human effort to adapt to new domains.", "title": "" }, { "docid": "f20e0b50b72b4b2796b77757ff20210e", "text": "The dominant neural architectures in question answer retrieval are based on recurrent or convolutional encoders configured with complex word matching layers. Given that recent architectural innovations are mostly new word interaction layers or attention-based matching mechanisms, it seems to be a well-established fact that these components are mandatory for good performance. Unfortunately, the memory and computation cost incurred by these complex mechanisms are undesirable for practical applications. As such, this paper tackles the question of whether it is possible to achieve competitive performance with simple neural architectures. We propose a simple but novel deep learning architecture for fast and efficient question-answer ranking and retrieval. More specifically, our proposed model, HyperQA, is a parameter efficient neural network that outperforms other parameter intensive models such as Attentive Pooling BiLSTMs and Multi-Perspective CNNs on multiple QA benchmarks. The novelty behind HyperQA is a pairwise ranking objective that models the relationship between question and answer embeddings in Hyperbolic space instead of Euclidean space. This empowers our model with a self-organizing ability and enables automatic discovery of latent hierarchies while learning embeddings of questions and answers. 
Our model requires no feature engineering, no similarity matrix matching, no complicated attention mechanisms, and no over-parameterized layers, yet it outperforms or remains competitive with many models that have these functionalities on multiple benchmarks.", "title": "" }, { "docid": "29ec723fb3f26290f43af77210ca5022", "text": "Social media and Social Network Analysis (SNA) have acquired huge popularity and represent one of the most important social and computer science phenomena of recent years. One of the most studied problems in this research area is influence and information propagation. 
The aim of this paper is to analyze the information diffusion process and predict the influence (represented by the rate of infected nodes at the end of the diffusion process) of an initial set of nodes in two networks: Flickr users' contacts and YouTube users commenting on videos. These networks are dissimilar in their structure (size, type, diameter, density, components) and in the type of the relationships (an explicit relationship represented by the contact links, and an implicit relationship created by commenting on videos); both are extracted using the NodeXL tool. Three models are used for modeling the dissemination process: the Linear Threshold Model (LTM), the Independent Cascade Model (ICM) and an extension of the latter called the Weighted Cascade Model (WCM). Network metrics and visualization were handled by NodeXL as well. Experimental results show that the structure of the network directly affects the diffusion process. Unlike results reported for blog networks, the information can spread farther through explicit connections than through implicit relations.", "title": "" }, { "docid": "b10447097f8d513795b4f4e08e1838d8", "text": "We introduce a new deep learning model for semantic role labeling (SRL) that significantly improves the state of the art, along with detailed analyses to reveal its strengths and limitations. We use a deep highway BiLSTM architecture with constrained decoding, while observing a number of recent best practices for initialization and regularization. Our 8-layer ensemble model achieves 83.2 F1 on the CoNLL 2005 test set and 83.4 F1 on CoNLL 2012, roughly a 10% relative error reduction over the previous state of the art. 
Extensive empirical analysis of these gains shows that (1) deep models excel at recovering long-distance dependencies but can still make surprisingly obvious errors, and (2) there is still room for syntactic parsers to improve these results.", "title": "" }, { "docid": "c0546dabfcd377af78ae65a6e0a6a255", "text": "A hard real-time system is usually subject to stringent reliability and timing constraints since failure to produce correct results in a timely manner may lead to a disaster. One way to avoid missing deadlines is to trade the quality of computation results for timeliness, and software fault-tolerance is often achieved with the use of redundant programs. A deadline mechanism which combines these two methods is proposed to provide software fault-tolerance in hard real-time periodic task systems. Specifically, we consider the problem of scheduling a set of real-time periodic tasks each of which has two versions: primary and alternate. The primary version contains more functions (thus more complex) and produces good quality results but its correctness is more difficult to verify because of its high level of complexity and resource usage. By contrast, the alternate version contains only the minimum required functions (thus simpler) and produces less precise but acceptable results, and its correctness is easy to verify. We propose a scheduling algorithm which (i) guarantees either the primary or alternate version of each critical task to be completed in time and (ii) attempts to complete as many primaries as possible. Our basic algorithm uses a fixed priority-driven preemptive scheduling scheme to pre-allocate time intervals to the alternates, and at run-time, attempts to execute primaries first. An alternate will be executed only (1) if its primary fails due to lack of time or manifestation of bugs, or (2) when the latest time to start execution of the alternate without missing the corresponding task deadline is reached. 
This algorithm is shown to be effective and easy to implement. It is enhanced further to prevent early failures in executing primaries from triggering failures in subsequent job executions, thus improving the efficiency of processor usage.", "title": "" }, { "docid": "f69f8b58e926a8a4573dd650ee29f80b", "text": "Zab is a crash-recovery atomic broadcast algorithm we designed for the ZooKeeper coordination service. ZooKeeper implements a primary-backup scheme in which a primary process executes client operations and uses Zab to propagate the corresponding incremental state changes to backup processes [1]. Due to the dependence of an incremental state change on the sequence of changes previously generated, Zab must guarantee that if it delivers a given state change, then all other changes it depends upon must be delivered first. Since primaries may crash, Zab must satisfy this requirement despite crashes of primaries.", "title": "" }, { "docid": "8ae257994c6f412ceb843fcb98a67043", "text": "Discovering the author's interest over time from documents has important applications in recommendation systems, authorship identification and opinion extraction. In this paper, we propose an interest drift model (IDM), which monitors the evolution of author interests in time-stamped documents. The model further uses the discovered author interest information to help find better topics. Unlike traditional topic models, our model is sensitive to the ordering of words, thus it extracts more information from the semantic meaning of the context. The experiment results show that the IDM model learns better topics than state-of-the-art topic models.", "title": "" }, { "docid": "95d767d1b9a2ba2aecdf26443b3dd4af", "text": "Advanced sensing and measurement techniques are key technologies to realize a smart grid. The giant magnetoresistance (GMR) effect has revolutionized the fields of data storage and magnetic measurement. 
In this work, a design of a GMR current sensor based on a commercial analog GMR chip for applications in a smart grid is presented and discussed. Static, dynamic and thermal properties of the sensor were characterized. The characterizations showed that in the operation range from 0 to ±5 A, the sensor had a sensitivity of 28 mV·A⁻¹, linearity of 99.97%, maximum deviation of 2.717%, frequency response of −1.5 dB at 10 kHz current measurement, and maximum change of the amplitude response of 0.0335%·°C⁻¹ with thermal compensation. In the distributed real-time measurement and monitoring of a smart grid system, the GMR current sensor shows excellent performance and is cost effective, making it suitable for applications such as steady-state and transient-state monitoring. With the advantages of having a high sensitivity, high linearity, small volume, low cost, and simple structure, the GMR current sensor is promising for the measurement and monitoring of smart grids.", "title": "" }, { "docid": "4c5b74544b1452ffe0004733dbeee109", "text": "Literary genres are commonly viewed as being defined in terms of content and style. In this paper, we focus on one particular type of content feature, namely lexical expressions of emotion, and investigate the hypothesis that emotion-related information correlates with particular genres. Using genre classification as a testbed, we compare a model that computes lexicon-based emotion scores globally for complete stories with a model that tracks emotion arcs through stories on a subset of Project Gutenberg with five genres. 
Our main findings are: (a), the global emotion model is competitive with a large-vocabulary bag-of-words genre classifier (80 % F1); (b), the emotion arc model shows a lower performance (59 % F1) but shows complementary behavior to the global model, as indicated by a very good performance of an oracle model (94 % F1) and an improved performance of an ensemble model (84 % F1); (c), genres differ in the extent to which stories follow the same emotional arcs, with particularly uniform behavior for anger (mystery) and fear (adventures, romance, humor, science fiction).", "title": "" }, { "docid": "ce55485a60213c7656eb804b89be36cc", "text": "In a previous article, we presented a systematic computational study of the extraction of semantic representations from the word-word co-occurrence statistics of large text corpora. The conclusion was that semantic vectors of pointwise mutual information values from very small co-occurrence windows, together with a cosine distance measure, consistently resulted in the best representations across a range of psychologically relevant semantic tasks. This article extends that study by investigating the use of three further factors--namely, the application of stop-lists, word stemming, and dimensionality reduction using singular value decomposition (SVD)--that have been used to provide improved performance elsewhere. It also introduces an additional semantic task and explores the advantages of using a much larger corpus. 
This leads to the discovery and analysis of improved SVD-based methods for generating semantic representations (that provide new state-of-the-art performance on a standard TOEFL task) and the identification and discussion of problems and misleading results that can arise without a full systematic study.", "title": "" }, { "docid": "e349ca11637dfad2d68a5082e27f11ff", "text": "As the capabilities of artificial intelligence (AI) systems improve, it becomes important to constrain their actions to ensure their behaviour remains beneficial to humanity. A variety of ethical, legal and safety-based frameworks have been proposed as a basis for designing these constraints. Despite their variations, these frameworks share the common characteristic that decision-making must consider multiple potentially conflicting factors. We demonstrate that these alignment frameworks can be represented as utility functions, but that the widely used Maximum Expected Utility (MEU) paradigm provides insufficient support for such multiobjective decision-making. We show that a Multiobjective Maximum Expected Utility paradigm based on the combination of vector utilities and non-linear action selection can overcome many of the issues which limit MEU's effectiveness in implementing aligned AI. We examine existing approaches to multiobjective AI, and identify how these can contribute to the development of human-aligned intelligent agents.", "title": "" }, { "docid": "77bbd6d3e1f1ae64bda32cd057cf0580", "text": "Although great progress has been made in automatic speech recognition, significant performance degradation still exists in noisy environments. Recently, very deep convolutional neural networks (CNNs) have been successfully applied to computer vision and speech recognition tasks. Based on our previous work on very deep CNNs, in this paper this architecture is further developed to improve recognition accuracy for noise robust speech recognition. 
In the proposed very deep CNN architecture, we study the best configuration for the sizes of filters, pooling, and input feature maps: the sizes of filters and poolings are reduced and dimensions of input features are extended to allow for adding more convolutional layers. Then the appropriate pooling, padding, and input feature map selection strategies are investigated and applied to the very deep CNN to make it more robust for speech recognition. In addition, an in-depth analysis of the architecture reveals key characteristics, such as compact model scale, fast convergence speed, and noise robustness. The proposed new model is evaluated on two tasks: Aurora4 task with multiple additive noise types and channel mismatch, and the AMI meeting transcription task with significant reverberation. Experiments on both tasks show that the proposed very deep CNNs can significantly reduce word error rate (WER) for noise robust speech recognition. The best architecture obtains a 10.0% relative reduction over the traditional CNN on AMI, competitive with the long short-term memory recurrent neural network (LSTM-RNN) acoustic model. On Aurora4, even without feature enhancement, model adaptation, and sequence training, it achieves a WER of 8.81%, a 17.0% relative improvement over the LSTM-RNN. To our knowledge, this is the best published result on Aurora4.", "title": "" }, { "docid": "8c60d78e9c4db8a457c7555393089f7c", "text": "Artificially structured metamaterials have enabled unprecedented flexibility in manipulating electromagnetic waves and producing new functionalities, including the cloak of invisibility based on coordinate transformation. Unlike other cloaking approaches [4-6], which are typically limited to subwavelength objects, the transformation method allows the design of cloaking devices that render a macroscopic object invisible. In addition, the design is not sensitive to the object that is being cloaked. 
The first experimental demonstration of such a cloak at microwave frequencies was recently reported [7]. We note, however, that that design cannot be implemented for an optical cloak, which is certainly of particular interest because optical frequencies are where the word ‘invisibility’ is conventionally defined. Here we present the design of a non-magnetic cloak operating at optical frequencies. The principle and structure of the proposed cylindrical cloak are analysed, and the general recipe for the implementation of such a device is provided. The coordinate transformation used in the proposed nonmagnetic optical cloak of cylindrical geometry is similar to that in ref. 7, by which a cylindrical region r < b is compressed into a concentric cylindrical shell a < r < b as shown in Fig. 1a. This transformation results in the following requirements for anisotropic permittivity and permeability in the cloaking shell:", "title": "" }, { "docid": "b75a9a52296877783431af9447200747", "text": "Sentiment analysis has been a major area of interest, for which the existence of high-quality resources is crucial. In Arabic, there is a reasonable number of sentiment lexicons but with major deficiencies. The paper presents a large-scale Standard Arabic Sentiment Lexicon (SLSA) that is publicly available for free and avoids the deficiencies in the current resources. SLSA has the highest up-to-date reported coverage. The construction of SLSA is based on linking the lexicon of AraMorph with SentiWordNet along with a few heuristics and powerful back-off. SLSA shows a relative improvement of 37.8% over a state-of-the-art lexicon when tested for accuracy. It also outperforms it by an absolute 3.5% of F1-score when tested for sentiment analysis.", "title": "" } ]
scidocsrr
785e7bc9e4b13685cc55441a65a157d2
A Bayesian approach to covariance estimation and data fusion
[ { "docid": "2d787b0deca95ce212e11385ae60c36d", "text": "In this paper, we introduce three novel distributed Kalman filtering (DKF) algorithms for sensor networks. The first algorithm is a modification of a previous DKF algorithm presented by the author in CDC-ECC '05. The previous algorithm was only applicable to sensors with identical observation matrices which meant the process had to be observable by every sensor. The modified DKF algorithm uses two identical consensus filters for fusion of the sensor data and covariance information and is applicable to sensor networks with different observation matrices. This enables the sensor network to act as a collective observer for the processes occurring in an environment. Then, we introduce a continuous-time distributed Kalman filter that uses local aggregation of the sensor data but attempts to reach a consensus on estimates with other nodes in the network. This peer-to-peer distributed estimation method gives rise to two iterative distributed Kalman filtering algorithms with different consensus strategies on estimates. Communication complexity and packet-loss issues are discussed. The performance and effectiveness of these distributed Kalman filtering algorithms are compared and demonstrated on a target tracking task.", "title": "" }, { "docid": "e9d0c366c241e1fc071d82ca810d1be2", "text": "The problem of distributed Kalman filtering (DKF) for sensor networks is one of the most fundamental distributed estimation problems for scalable sensor fusion. This paper addresses the DKF problem by reducing it to two separate dynamic consensus problems in terms of weighted measurements and inverse-covariance matrices. These two data fusion problems are solved in a distributed way using low-pass and band-pass consensus filters. Consensus filters are distributed algorithms that allow calculation of average-consensus of time-varying signals. 
The stability properties of consensus filters are discussed in a companion CDC ’05 paper [24]. We show that a central Kalman filter for sensor networks can be decomposed into n micro-Kalman filters with inputs that are provided by two types of consensus filters. This network of micro-Kalman filters is collectively capable of providing an estimate of the state of the process (under observation) that is identical to the estimate obtained by a central Kalman filter given that all nodes agree on two central sums. Later, we demonstrate that our consensus filters can approximate these sums and that gives an approximate distributed Kalman filtering algorithm. A detailed account of the computational and communication architecture of the algorithm is provided. Simulation results are presented for a sensor network with 200 nodes and more than 1000 links.", "title": "" } ]
[ { "docid": "5931cb779b24065c5ef48451bc46fac4", "text": "In order to provide a material that can facilitate the modeling and construction of a Furuta pendulum, this paper presents the deduction, step-by-step, of a Furuta pendulum mathematical model by using the Lagrange equations of motion. Later, a mechanical design of the Furuta pendulum is carried out via the software SolidWorks and subsequently a prototype is built. Numerical simulations of the Furuta pendulum model are performed via Matlab-Simulink. Furthermore, the Furuta pendulum prototype built is experimentally tested by using Matlab-Simulink, ControlDesk, and a DS1104 board from dSPACE.", "title": "" }, { "docid": "5b341604b207e80ef444d11a9de82f72", "text": "Digital deformities continue to be a common ailment among many patients who present to foot and ankle specialists. When conservative treatment fails to eliminate patient complaints, surgical correction remains a viable treatment option. Proximal interphalangeal joint arthrodesis remains the standard procedure among most foot and ankle surgeons. With continued advances in fixation technology and techniques, surgeons continue to have better options for the achievement of excellent digital surgery outcomes. This article reviews current trends in fixation of digital deformities while highlighting pertinent aspects of the physical examination, radiographic examination, and surgical technique.", "title": "" }, { "docid": "c197fcf3042099003f3ed682f7b7f19c", "text": "Interaction graphs are ubiquitous in many fields such as bioinformatics, sociology and physical sciences. There have been many studies in the literature targeted at studying and mining these graphs. However, almost all of them have studied these graphs from a static point of view. The study of the evolution of these graphs over time can provide tremendous insight on the behavior of entities, communities and the flow of information among them. 
In this work, we present an event-based characterization of critical behavioral patterns for temporally varying interaction graphs. We use non-overlapping snapshots of interaction graphs and develop a framework for capturing and identifying interesting events from them. We use these events to characterize complex behavioral patterns of individuals and communities over time. We demonstrate the application of behavioral patterns for the purposes of modeling evolution, link prediction and influence maximization. Finally, we present a diffusion model for evolving networks, based on our framework.", "title": "" }, { "docid": "8c0b544b88ebe81ebe4b374a4e08bb5e", "text": "We study 3D shape modeling from a single image and make contributions to it in three aspects. First, we present Pix3D, a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc. Building such a large-scale dataset, however, is highly challenging; existing datasets either contain only synthetic data, or lack precise alignment between 2D images and 3D shapes, or only have a small number of images. Second, we calibrate the evaluation criteria for 3D shape reconstruction through behavioral studies, and use them to objectively and systematically benchmark cutting-edge reconstruction algorithms on Pix3D. Third, we design a novel model that simultaneously performs 3D reconstruction and pose estimation; our multi-task learning approach achieves state-of-the-art performance on both tasks.", "title": "" }, { "docid": "b596be97699686e5e37cab71bee8fe4a", "text": "The task of selecting project portfolios is an important and recurring activity in many organizations. There are many techniques available to assist in this process, but no integrated framework for carrying it out. 
This paper simplifies the project portfolio selection process by developing a framework which separates the work into distinct stages. Each stage accomplishes a particular objective and creates inputs to the next stage. At the same time, users are free to choose the techniques they find the most suitable for each stage, or in some cases to omit or modify a stage if this will simplify and expedite the process. The framework may be implemented in the form of a decision support system, and a prototype system is described which supports many of the related decision making activities. © 1999 Published by Elsevier Science Ltd and IPMA. All rights reserved", "title": "" }, { "docid": "57ca7842e7ab21b51c4069e76121fc26", "text": "This paper surveys and investigates the strengths and weaknesses of a number of recent approaches to advanced workflow modelling. Rather than inventing just another workflow language, we briefly describe recent workflow languages, and we analyse them with respect to their support for advanced workflow topics. Object Coordination Nets, Workflow Graphs, WorkFlow Nets, and an approach based on Workflow Evolution are described as dedicated workflow modelling approaches. In addition, the Unified Modelling Language as the de facto standard in object-oriented modelling is also investigated. These approaches are discussed with respect to coverage of workflow perspectives and support for flexibility and analysis issues in workflow management, which are today seen as two major areas for advanced workflow support. Given the different goals and backgrounds of the approaches mentioned, it is not surprising that each approach has its specific strengths and weaknesses. 
We clearly identify these strengths and weaknesses, and we conclude with ideas for combining their best features.", "title": "" }, { "docid": "d93795318775df2c451eaf8c04a764cf", "text": "The queries issued to search engines are often ambiguous or multifaceted, which requires search engines to return diverse results that can fulfill as many different information needs as possible; this is called search result diversification. Recently, the relational learning to rank model, which designs a learnable ranking function following the criterion of maximal marginal relevance, has shown effectiveness in search result diversification [Zhu et al. 2014]. The goodness of a diverse ranking model is usually evaluated with diversity evaluation measures such as α-NDCG [Clarke et al. 2008], ERR-IA [Chapelle et al. 2009], and D#-NDCG [Sakai and Song 2011]. Ideally the learning algorithm would train a ranking model that could directly optimize the diversity evaluation measures with respect to the training data. Existing relational learning to rank algorithms, however, only train the ranking models by optimizing loss functions that loosely relate to the evaluation measures. To deal with the problem, we propose a general framework for learning relational ranking models via directly optimizing any diversity evaluation measure. In learning, the loss function upper-bounding the basic loss function defined on a diverse ranking measure is minimized. We can derive new diverse ranking algorithms under the framework, and several diverse ranking algorithms are created based on different upper bounds over the basic loss function. We conducted comparisons of the proposed algorithms with conventional diverse ranking methods using the TREC benchmark datasets. 
Experimental results show that the algorithms derived under the diverse learning to rank framework always significantly outperform the state-of-the-art baselines.", "title": "" }, { "docid": "8b71cb1b7cdaa434ac4b238b97a30e66", "text": "Research on interoperability of technology-enhanced learning (TEL) repositories throughout the last decade has led to a fragmented landscape of competing approaches, such as metadata schemas and interface mechanisms. However, so far Web-scale integration of resources is not facilitated, mainly due to the lack of take-up of shared principles, datasets and schemas. On the other hand, the Linked Data approach has emerged as the de-facto standard for sharing data on the Web and offers a large potential to solve interoperability issues in the field of TEL. In this paper, we describe a general approach to exploit the wealth of already existing TEL data on the Web by allowing its exposure as Linked Data and by taking into account automated enrichment and interlinking techniques to provide rich and well-interlinked data for the educational domain. This approach has been implemented in the context of the mEducator project where data from a number of open TEL data repositories has been integrated, exposed and enriched by following Linked Data principles.", "title": "" }, { "docid": "61e8deaaa02297ba3edb2eb14ffb7f26", "text": "Given an edge-weighted graph G and two distinct vertices s and t of G, the next-to-shortest path problem asks for a path from s to t of minimum length among all paths from s to t except the shortest ones. In this article, we consider the version where G is directed and all edge weights are positive. Some properties of the requested path are derived when G is an arbitrary digraph. In addition, if G is planar, an O(n³)-time algorithm is proposed, where n is the number of vertices of G. © 2015 Wiley Periodicals, Inc. NETWORKS.", "title": "" },
{ "docid": "07e2b3550183fd4d2a42591a9726f77c", "text": "Modern cryptocurrency systems, such as Ethereum, permit complex financial transactions through scripts called smart contracts. These smart contracts are executed many, many times, always without real concurrency. First, all smart contracts are serially executed by miners before appending them to the blockchain. Later, those contracts are serially re-executed by validators to verify that the smart contracts were executed correctly by miners. Serial execution limits system throughput and fails to exploit today's concurrent multicore and cluster architectures. Nevertheless, serial execution appears to be required: contracts share state, and contract programming languages have a serial semantics.\n This paper presents a novel way to permit miners and validators to execute smart contracts in parallel, based on techniques adapted from software transactional memory. Miners execute smart contracts speculatively in parallel, allowing non-conflicting contracts to proceed concurrently, and \"discovering\" a serializable concurrent schedule for a block's transactions. This schedule is captured and encoded as a deterministic fork-join program used by validators to re-execute the miner's parallel schedule deterministically but concurrently.\n Smart contract benchmarks run on a JVM with ScalaSTM show that a speedup of 1.33x can be obtained for miners and 1.69x for validators with just three concurrent threads.", "title": "" }, { "docid": "7c5ce3005c4529e0c34220c538412a26", "text": "Six studies investigate whether and how distant future time perspective facilitates abstract thinking and impedes concrete thinking by altering the level at which mental representations are construed. In Experiments 1-3, participants who envisioned their lives and imagined themselves engaging in a task 1 year later as opposed to the next day subsequently performed better on a series of insight tasks. 
In Experiments 4 and 5 a distal perspective was found to improve creative generation of abstract solutions. Moreover, Experiment 5 demonstrated a similar effect with temporal distance manipulated indirectly, by making participants imagine their lives in general a year from now versus tomorrow prior to performance. In Experiment 6, distant time perspective undermined rather than enhanced analytical problem solving.", "title": "" }, { "docid": "ce384939966654196aabbb076326c779", "text": "We address the problem of detecting duplicate questions in forums, which is an important step towards automating the process of answering new questions. As finding and annotating such potential duplicates manually is very tedious and costly, automatic methods based on machine learning are a viable alternative. However, many forums do not have annotated data, i.e., questions labeled by experts as duplicates, and thus a promising solution is to use domain adaptation from another forum that has such annotations. Here we focus on adversarial domain adaptation, deriving important findings about when it performs well and what properties of the domains are important in this regard. Our experiments with StackExchange data show an average improvement of 5.6% over the best baseline across multiple pairs of domains.", "title": "" }, { "docid": "f33ca4cfba0aab107eb8bd6d3d041b74", "text": "Deep neural networks (DNNs) require very large amounts of computation both for training and for inference when deployed in the field. A common approach to implementing DNNs is to recast the most computationally expensive operations as general matrix multiplication (GEMM). However, as we demonstrate in this paper, there are a great many different ways to express DNN convolution operations using GEMM. Although different approaches all perform the same number of operations, the size of temporary data structures differs significantly. 
Convolution of an input matrix with dimensions C × H × W requires O(K²CHW) additional space using the classical im2col approach. More recently, memory-efficient approaches requiring just O(KCHW) auxiliary space have been proposed. We present two novel GEMM-based algorithms that require just O(MHW) and O(KW) additional space respectively, where M is the number of channels in the result of the convolution. These algorithms dramatically reduce the space overhead of DNN convolution, making it much more suitable for memory-limited embedded systems. Experimental evaluation shows that our low-memory algorithms are just as fast as the best patch-building approaches despite requiring just a fraction of the amount of additional memory. Our low-memory algorithms have excellent data locality which gives them a further edge over patch-building algorithms when multiple cores are used. As a result, our low-memory algorithms often outperform the best patch-building algorithms using multiple threads.", "title": "" }, { "docid": "c6e6099599be3cd2d1d87c05635f4248", "text": "PURPOSE\nThe Food Cravings Questionnaires are among the most often used measures for assessing the frequency and intensity of food craving experiences. 
However, there is a lack of studies that have examined specific cut-off scores that may indicate pathologically elevated levels of food cravings.\n\n\nMETHODS\nReceiver-Operating-Characteristic analysis was used to determine sensitivity and specificity of scores on the Food Cravings Questionnaire-Trait-reduced (FCQ-T-r) for discriminating between individuals with (n = 43) and without (n = 389) \"food addiction\" as assessed with the Yale Food Addiction Scale 2.0.\n\n\nRESULTS\nA cut-off score of 50 on the FCQ-T-r discriminated between individuals with and without \"food addiction\" with high sensitivity (85%) and specificity (93%).\n\n\nCONCLUSIONS\nFCQ-T-r scores of 50 and higher may indicate clinically relevant levels of trait food craving.\n\n\nLEVEL OF EVIDENCE\nLevel V, descriptive study.", "title": "" }, { "docid": "104c71324594c907f87d483c8c222f0f", "text": "Operational controls are designed to support the integration of wind and solar power within microgrids. An aggregated model of renewable wind and solar power generation forecast is proposed to support the quantification of the operational reserve for day-ahead and real-time scheduling. Then, a droop control for power electronic converters connected to battery storage is developed and tested. Compared with the existing droop controls, it is distinguished in that the droop curves are set as a function of the storage state-of-charge (SOC) and can become asymmetric. The adaptation of the slopes ensures that the power output supports the terminal voltage while at the same time keeping the SOC within a target range of desired operational reserve. This is shown to maintain the equilibrium of the microgrid's real-time supply and demand. The controls are implemented for the special case of a dc microgrid that is vertically integrated within a high-rise host building of an urban area. 
Previously untapped wind and solar power are harvested on the roof and sides of a tower, thereby supporting delivery to electric vehicles on the ground. The microgrid vertically integrates with the host building without creating a large footprint.", "title": "" }, { "docid": "fd576b16a55c8f6bc4922561ef0d80bd", "text": "Abstract - This paper presents all controllers for the general H∞ control problem (with no assumptions on the plant matrices). Necessary and sufficient conditions for the existence of an H∞ controller of any order are given in terms of three Linear Matrix Inequalities (LMIs). Our existence conditions are equivalent to Scherer's results, but with a more elementary derivation. Furthermore, we provide the set of all H∞ controllers explicitly parametrized in the state space using the positive definite solutions to the LMIs. Even under standard assumptions (full rank, etc.), our controller parametrization has an advantage over the Q-parametrization. The freedom Q (a real-rational stable transfer matrix with the H∞ norm bounded above by a specified number) is replaced by a constant matrix L of fixed dimension with a norm bound, and the solutions (X, Y) to the LMIs. The inequality formulation converts the existence conditions to a convex feasibility problem, and also the free matrix L and the pair (X, Y) define a finite dimensional design space, as opposed to the infinite dimensional space associated with the Q-parametrization.", "title": "" }, { "docid": "e92ab865f33c7548c21ba99785912d03", "text": "Given a query graph q and a data graph g, the subgraph isomorphism search finds all occurrences of q in g and is considered one of the most fundamental query types for many real applications. While this problem is NP-hard, many algorithms have been proposed to solve it in a reasonable time for real datasets. 
However, a recent study has shown, through an extensive benchmark with various real datasets, that all existing algorithms have serious problems in their matching order selection. Furthermore, all algorithms blindly permutate all possible mappings for query vertices, often leading to useless computations. In this paper, we present an efficient and robust subgraph search solution, called TurboISO, which is turbo-charged with two novel concepts, candidate region exploration and the combine and permute strategy (in short, Comb/Perm). The candidate region exploration identifies on-the-fly candidate subgraphs (i.e, candidate regions), which contain embeddings, and computes a robust matching order for each candidate region explored. The Comb/Perm strategy exploits the novel concept of the neighborhood equivalence class (NEC). Each query vertex in the same NEC has identically matching data vertices. During subgraph isomorphism search, Comb/Perm generates only combinations for each NEC instead of permutating all possible enumerations. Thus, if a chosen combination is determined to not contribute to a complete solution, all possible permutations for that combination will be safely pruned. Extensive experiments with many real datasets show that TurboISO consistently and significantly outperforms all competitors by up to several orders of magnitude.", "title": "" }, { "docid": "3f00cb229ea1f64e8b60bebaff0d99fe", "text": "It is widely known that in wireless sensor networks (WSN), energy efficiency is of utmost importance. WSN need to be energy efficient but also need to provide better performance, particularly latency. A common protocol design guideline has been to trade off some performance metrics such as throughput and delay for energy. 
This paper presents EX-MAC (Express Energy Efficient Media Access Control), a novel protocol that not only preserves the energy efficiency of current alternatives but also coordinates the transfer of packets from source to destination in such a way that latency and jitter are improved considerably. Our simulations show how EX-MAC outperforms the well-known S-MAC protocol in several performance metrics.", "title": "" }, { "docid": "2ba1321f64fc8567fd70c030ea49b9e0", "text": "Datasets originating from social networks are very valuable to many fields such as sociology and psychology. However, support from the technical perspective is far from sufficient, and specific approaches are urgently needed. This paper applies data mining to the psychology area for detecting depressed users in social network services. Firstly, a sentiment analysis method is proposed utilizing vocabulary and man-made rules to calculate the depression inclination of each micro-blog. Secondly, a depression detection model is constructed based on the proposed method and 10 features of depressed users derived from psychological research. Then 180 users and 3 kinds of classifiers are used to verify the model, whose precisions are all around 80%. Also, the significance of each feature is analyzed. Lastly, an application is developed within the proposed model for mental health monitoring online. This study is supported by some psychologists, and in turn facilitates them in data-centric aspects.", "title": "" }, { "docid": "7edddf437e1759b8b13821670f52f4ba", "text": "This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. 
The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory by comparing the displacement estimated with the internal odometry of the motors and the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the positions estimated with SLAM and odometry, and a difference in the angular orientation of the mobile robot lower than 5° in absolute displacements up to 1000 mm.", "title": "" } ]
scidocsrr
a94558043aadec25b546b7c275f808ed
Deformable Pose Traversal Convolution for 3D Action and Gesture Recognition
[ { "docid": "1d6e23fedc5fa51b5125b984e4741529", "text": "Human action recognition from well-segmented 3D skeleton data has been intensively studied and attracting an increasing attention. Online action detection goes one step further and is more challenging, which identifies the action type and localizes the action positions on the fly from the untrimmed stream. In this paper, we study the problem of online action detection from the streaming skeleton data. We propose a multi-task end-to-end Joint Classification-Regression Recurrent Neural Network to better explore the action type and temporal localization information. By employing a joint classification and regression optimization objective, this network is capable of automatically localizing the start and end points of actions more accurately. Specifically, by leveraging the merits of the deep Long Short-Term Memory (LSTM) subnetwork, the proposed model automatically captures the complex long-range temporal dynamics, which naturally avoids the typical sliding window design and thus ensures high computational efficiency. Furthermore, the subtask of regression optimization provides the ability to forecast the action prior to its occurrence. To evaluate our proposed model, we build a large streaming video dataset with annotations. Experimental results on our dataset and the public G3D dataset both demonstrate very promising performance of our scheme.", "title": "" }, { "docid": "401b2494b8b032751c219726671cb48e", "text": "Current state-of-the-art approaches to skeleton-based action recognition are mostly based on recurrent neural networks (RNN). In this paper, we propose a novel convolutional neural networks (CNN) based framework for both action classification and detection. Raw skeleton coordinates as well as skeleton motion are fed directly into CNN for label prediction. A novel skeleton transformer module is designed to rearrange and select important skeleton joints automatically. 
With a simple 7-layer network, we obtain 89.3% accuracy on validation set of the NTU RGB+D dataset. For action detection in untrimmed videos, we develop a window proposal network to extract temporal segment proposals, which are further classified within the same network. On the recent PKU-MMD dataset, we achieve 93.7% mAP, surpassing the baseline by a large margin.", "title": "" } ]
[ { "docid": "901174e2dd911afada2e8ccf245d25f3", "text": "This article presents the state of the art in passive devices for enhancing limb movement in people with neuromuscular disabilities. Both upper- and lower-limb projects and devices are described. Special emphasis is placed on a passive functional upper-limb orthosis called the Wilmington Robotic Exoskeleton (WREX). The development and testing of the WREX with children with limited arm strength are described. The exoskeleton has two links and 4 degrees of freedom. It uses linear elastic elements that balance the effects of gravity in three dimensions. The experiences of five children with arthrogryposis who used the WREX are described.", "title": "" }, { "docid": "11557714ac3bbd9fc9618a590722212e", "text": "In Taobao, the largest e-commerce platform in China, billions of items are provided and typically displayed with their images.For better user experience and business effectiveness, Click Through Rate (CTR) prediction in online advertising system exploits abundant user historical behaviors to identify whether a user is interested in a candidate ad. Enhancing behavior representations with user behavior images will help understand user's visual preference and improve the accuracy of CTR prediction greatly. So we propose to model user preference jointly with user behavior ID features and behavior images. However, training with user behavior images brings tens to hundreds of images in one sample, giving rise to a great challenge in both communication and computation. To handle these challenges, we propose a novel and efficient distributed machine learning paradigm called Advanced Model Server (AMS). With the well-known Parameter Server (PS) framework, each server node handles a separate part of parameters and updates them independently. 
AMS goes beyond this and is designed to be capable of learning a unified image descriptor model shared by all server nodes which embeds large images into low dimensional high level features before transmitting images to worker nodes. AMS thus dramatically reduces the communication load and enables the arduous joint training process. Based on AMS, the methods of effectively combining the images and ID features are carefully studied, and then we propose a Deep Image CTR Model. Our approach is shown to achieve significant improvements in both online and offline evaluations, and has been deployed in Taobao display advertising system serving the main traffic.", "title": "" }, { "docid": "8994470e355b5db188090be731ee4fe9", "text": "A system that allows museums to build and manage Virtual and Augmented Reality exhibitions based on 3D models of artifacts is presented. Dynamic content creation based on pre-designed visualization templates allows content designers to create virtual exhibitions very efficiently. Virtual Reality exhibitions can be presented both inside museums, e.g. on touch-screen displays installed inside galleries and, at the same time, on the Internet. Additionally, the presentation based on Augmented Reality technologies allows museum visitors to interact with the content in an intuitive and exciting manner.", "title": "" }, { "docid": "557451621286ecd4fbf21909ff88450f", "text": "BACKGROUND\nMany studies have demonstrated that honey has antibacterial activity in vitro, and a small number of clinical case studies have shown that application of honey to severely infected cutaneous wounds is capable of clearing infection from the wound and improving tissue healing. Research has also indicated that honey may possess anti-inflammatory activity and stimulate immune responses within a wound. The overall effect is to reduce infection and to enhance wound healing in burns, ulcers, and other cutaneous wounds. 
The objective of the study was to evaluate the results of topical wound dressing with natural honey in diabetic wounds.\n\n\nMETHODS\nThe study was conducted at the department of Orthopaedics, Unit-1, Liaquat University of Medical and Health Sciences, Jamshoro from July 2006 to June 2007. Study design was experimental. The inclusion criteria were patients of either gender in any age group having diabetic foot Wagner type I, II, III and IV. The exclusion criteria were patients not willing for studies and who needed urgent amputation due to deteriorating illness. Initially all wounds were washed thoroughly and necrotic tissues removed, and dressings with honey were applied and continued up to healing of wounds.\n\n\nRESULTS\nTotal number of patients was 12 (14 feet). There were 8 males (66.67%) and 4 females (33.33%); 2 cases (16.67%) presented with bilateral diabetic feet. The age range was 35 to 65 years (46 +/- 9.07 years). Amputations of the big toe in 3 patients (25%), of the second and third toe rays in 2 patients (16.67%) and of the fourth and fifth toes at the level of the metatarsophalangeal joints were done in 3 patients (25%). One patient (8.33%) had below-knee amputation.\n\n\nCONCLUSION\nIn our study we observed excellent results in treating diabetic wounds with dressings soaked with natural honey. The disability of diabetic foot patients was minimized by decreasing the rate of leg or foot amputations and thus enhancing the quality and productivity of individual life.", "title": "" }, { "docid": "b24f07add0da3931b23f4a13ea6983b9", "text": "Recently, with the development of artificial intelligence technologies and the popularity of mobile devices, walking detection and step counting have gained much attention since they play an important role in the fields of equipment positioning, saving energy, behavior recognition, etc. 
In this paper, a novel algorithm is proposed to simultaneously detect walking motion and count steps through unconstrained smartphones in the sense that the smartphone placement is not only arbitrary but also alterable. On account of the periodicity of the walking motion and sensitivity of gyroscopes, the proposed algorithm extracts the frequency domain features from three-dimensional (3D) angular velocities of a smartphone through FFT (fast Fourier transform) and identifies whether its holder is walking or not irrespective of its placement. Furthermore, the corresponding step frequency is recursively updated to evaluate the step count in real time. Extensive experiments are conducted by involving eight subjects and different walking scenarios in a realistic environment. It is shown that the proposed method achieves the precision of 93.76 % and recall of 93.65 % for walking detection, and its overall performance is significantly better than other well-known methods. Moreover, the accuracy of step counting by the proposed method is 95.74 % , and is better than both of the several well-known counterparts and commercial products.", "title": "" }, { "docid": "e464cde1434026c17b06716c6a416b7a", "text": "Three experiments supported the hypothesis that people are more willing to express attitudes that could be viewed as prejudiced when their past behavior has established their credentials as nonprejudiced persons. In Study 1, participants given the opportunity to disagree with blatantly sexist statements were later more willing to favor a man for a stereotypically male job. In Study 2, participants who first had the opportunity to select a member of a stereotyped group (a woman or an African American) for a category-neutral job were more likely to reject a member of that group for a job stereotypically suited for majority members. 
In Study 3, participants who had established credentials as nonprejudiced persons revealed a greater willingness to express a politically incorrect opinion even when the audience was unaware of their credentials. The general conditions under which people feel licensed to act on illicit motives are discussed.", "title": "" }, { "docid": "4d42e42469fcead51969f3e642920abc", "text": "In this paper, we present a dual-band antenna for Long Term Evolution (LTE) handsets. The proposed antenna is composed of a meandered monopole operating in the 700 MHz band and a parasitic element which radiates in the 2.5–2.7 GHz band. Two identical antennas are then closely positioned on the same 120×50 mm2 ground plane (Printed Circuit Board) which represents a modern-size PDA-mobile phone. To enhance the port-to-port isolation of the antennas, a neutralization technique is implemented between them. Scattering parameters, radiations patterns and total efficiencies are presented to illustrate the performance of the antenna-system.", "title": "" }, { "docid": "fff89d9e97dbb5a13febe48c35d08c94", "text": "The positive effects of social popularity (i.e., information based on other consumers’ behaviors) and deal scarcity (i.e., information provided by product vendors) on consumers’ consumption behaviors are well recognized. However, few studies have investigated their potential joint and interaction effects and how such effects may differ at different timing of a shopping process. This study examines the individual and interaction effects of social popularity and deal scarcity as well as how such effects change as consumers’ shopping goals become more concrete. 
The results of a laboratory experiment show that in the initial shopping stage when consumers do not have specific shopping goals, social popularity and deal scarcity information weaken each other’s effects; whereas in the later shopping stage when consumers have constructed concrete shopping goals, these two information cues reinforce each other’s effects. Implications on theory and practice are discussed.", "title": "" }, { "docid": "d0e977ab137cd004420bda28bd0b11be", "text": "This study investigates the roles of cohesion and coherence in evaluations of essay quality. Cohesion generally has a facilitative effect on text comprehension and is assumed to be related to essay coherence. By contrast, recent studies of essay writing have demonstrated that computational indices of cohesion are not predictive of evaluations of writing quality. This study investigates expert ratings of individual text features, including coherence, in order to examine their relation to evaluations of holistic essay quality. The results suggest that coherence is an important attribute of overall essay quality, but that expert raters evaluate coherence based on the absence of cohesive cues in the essays rather than their presence. This finding has important implications for text understanding and the role of coherence in writing quality.", "title": "" }, { "docid": "733f5029329072adf5635f0b4d0ad1cb", "text": "We present a new approach to scalable training of deep learning machines by incremental block training with intra-block parallel optimization to leverage data parallelism and blockwise model-update filtering to stabilize learning process. 
By using an implementation on a distributed GPU cluster with an MPI-based HPC machine learning framework to coordinate parallel job scheduling and collective communication, we have successfully trained deep bidirectional long short-term memory (LSTM) recurrent neural networks (RNNs) and fully-connected feed-forward deep neural networks (DNNs) for large vocabulary continuous speech recognition on two benchmark tasks, namely the 309-hour Switchboard-I task and the 1,860-hour \"Switchboard+Fisher\" task. We achieve almost linear speedup up to 16 GPU cards on the LSTM task and 64 GPU cards on the DNN task, with either no degradation or improved recognition accuracy in comparison with that of running a traditional mini-batch based stochastic gradient descent training on a single GPU.", "title": "" }, { "docid": "08353c7d40a0df4909b09f2d3e5ab4fe", "text": "Object detection has made great progress in the past few years along with the development of deep learning. However, most current object detection methods are resource hungry, which hinders their wide deployment to many resource-restricted usages such as usages on always-on devices, battery-powered low-end devices, etc. This paper considers the resource and accuracy trade-off for resource-restricted usages when designing the whole object detection framework. Based on the deeply supervised object detection (DSOD) framework, we propose Tiny-DSOD, dedicated to resource-restricted usages. Tiny-DSOD introduces two innovative and ultra-efficient architecture blocks: depthwise dense block (DDB) based backbone and depthwise feature-pyramid-network (D-FPN) based front-end. We conduct extensive experiments on three famous benchmarks (PASCAL VOC 2007, KITTI, and COCO), and compare Tiny-DSOD to the state-of-the-art ultra-efficient object detection solutions such as Tiny-YOLO, MobileNet-SSD (v1 & v2), SqueezeDet, Pelee, etc. 
Results show that Tiny-DSOD outperforms these solutions in all three metrics (parameter-size, FLOPs, accuracy) in each comparison. For instance, Tiny-DSOD achieves 72.1% mAP with only 0.95M parameters and 1.06B FLOPs, which is by far the state-of-the-art result with such a low resource requirement.", "title": "" }, { "docid": "2665314258f4b7f59a55702166f59fcc", "text": "In this paper, a wireless power transfer system with magnetically coupled resonators is studied. The idea to use metamaterials to enhance the coupling coefficient and the transfer efficiency is proposed and analyzed. With numerical calculations of a system with and without metamaterials, we show that the transfer efficiency can be improved with metamaterials.", "title": "" }, { "docid": "be1c50de2963341423960ba0f59fbc1f", "text": "Deep neural networks have been shown to be very successful at learning feature hierarchies in supervised learning tasks. Generative models, on the other hand, have benefited less from hierarchical models with multiple layers of latent variables. In this paper, we prove that hierarchical latent variable models do not take advantage of the hierarchical structure when trained with some existing variational methods, and provide some limitations on the kind of features existing models can learn. Finally we propose an alternative architecture that does not suffer from these limitations. Our model is able to learn highly interpretable and disentangled hierarchical features on several natural image datasets with no task-specific regularization.", "title": "" }, { "docid": "00602badbfba6bc97dffbdd6c5a2ae2d", "text": "Accurately drawing 3D objects is difficult for untrained individuals, as it requires an understanding of perspective and its effects on geometry and proportions. Step-by-step tutorials break the complex task of sketching an entire object down into easy-to-follow steps that even a novice can follow. 
However, creating such tutorials requires expert knowledge and is time-consuming. As a result, the availability of tutorials for a given object or viewpoint is limited. How2Sketch (H2S) addresses this problem by automatically generating easy-to-follow tutorials for arbitrary 3D objects. Given a segmented 3D model and a camera viewpoint, H2S computes a sequence of steps for constructing a drawing scaffold comprised of geometric primitives, which helps the user draw the final contours in correct perspective and proportion. To make the drawing scaffold easy to construct, the algorithm solves for an ordering among the scaffolding primitives and explicitly makes small geometric modifications to the size and location of the object parts to simplify relative positioning. Technically, we formulate this scaffold construction as a single selection problem that simultaneously solves for the ordering and geometric changes of the primitives. We generate different tutorials on man-made objects using our method and evaluate how easily the tutorials can be followed with a user study.", "title": "" }, { "docid": "efec2ff9384e17a698c88e742e41bcc9", "text": "— A new versatile Hydraulically-powered Quadruped robot (HyQ) has been developed to serve as a platform to study not only highly dynamic motions such as running and jumping, but also careful navigation over very rough terrain. HyQ stands 1 meter tall, weighs roughly 90kg and features 12 torque-controlled joints powered by a combination of hydraulic and electric actuators. The hydraulic actuation permits the robot to perform powerful and dynamic motions that are hard to achieve with more traditional electrically actuated robots. This paper describes design and specifications of the robot and presents details on the hardware of the quadruped platform, such as the mechanical design of the four articulated legs and of the torso frame, and the configuration of the hydraulic power system. 
Results from the first walking experiments are presented along with test studies using a previously built prototype leg.\n\n1 INTRODUCTION\nThe development of mobile robotic platforms is an important and active area of research. Within this domain, the major focus has been to develop wheeled or tracked systems that cope very effectively with flat and well-structured solid surfaces (e.g. laboratories and roads). In recent years, there has been considerable success with robotic vehicles even for off-road conditions [1]. However, wheeled robots still have major limitations and difficulties in navigating uneven and rough terrain. These limitations and the capabilities of legged animals encouraged researchers for the past decades to focus on the construction of biologically inspired legged machines. These robots have the potential to outperform the more traditional designs with wheels and tracks in terms of mobility and versatility. The vast majority of the existing legged robots have been, and continue to be, actuated by electric motors with high gear-ratio reduction drives, which are popular because of their size, price, ease of use and accuracy of control. However, electric motors produce small torques relative to their size and weight, thereby making reduction drives with high ratios essential to convert velocity into torque. Unfortunately, this approach results in systems with reduced speed capability and limited passive back-driveability and therefore not very suitable for highly dynamic motions and interactions with unforeseen terrain variance. Significant examples of such legged robots are: the biped series of HRP robots [2], Toyota humanoid robot [3], and Honda's Asimo [4]; and the quadruped robot series of Hirose et al. [5], Sony's AIBO [6] and Little Dog [7]. In combination with high position gain control and …", "title": "" }, { "docid": "01295570af41ff14f0b55d6fe7139c9d", "text": "YES is a simplified stroke-based method for sorting Chinese characters. 
It is free from stroke counting and grouping, and thus much faster and more accurate than the traditional method. This paper presents a collation element table built in YES for a large joint Chinese character set covering (a) all 20,902 characters of Unicode CJK Unified Ideographs, (b) all 11,408 characters in the Complete List of Chinese Characters Used by the Media in 2013, (c) all 13,000 plus characters in the latest versions of Xinhua Dictionary(v11) and Contemporary Chinese Dictionary(v6). Of the 20,902 Chinese characters in Unicode, 97.23% have one-to-one relationship with their stroke order codes in YES, comparing with 90.69% of the traditional method. Enhanced with the secondary and tertiary sorting levels of stroke layout and Unicode value, there is a guarantee of one-to-one relationship between the characters and collation elements. The collation element table has been successfully applied to sorting CC-CEDICT, a Chinese-English dictionary of over 112,000 word entries.", "title": "" }, { "docid": "dbe0b895c78dd90c69cc1a1f8289aadf", "text": "This paper presents the design procedure of monolithic microwave integrated circuit (MMIC) high-power amplifiers (HPAs) as well as implementation of high-efficiency and compact-size HPAs in a 0.25- μm AlGaAs-InGaAs pHEMT technology. Presented design techniques used to extend bandwidth, improve efficiency, and reduce chip area of the HPAs are described in detail. The first HPA delivers 5 W of output power with 40% power-added efficiency (PAE) in the frequency band of 8.5-12.5 GHz, while providing 20 dB of small-signal gain. The second HPA delivers 8 W of output power with 35% PAE in the frequency band of 7.5-12 GHz, while maintaining a small-signal gain of 17.5 dB. The 8-W HPA chip area is 8.8 mm2, which leads to the maximum power/area ratio of 1.14 W/mm2. 
These are the lowest area and highest power/area ratio reported in GaAs HPAs operating within the same frequency band.", "title": "" }, { "docid": "e8ef5dfb9aafb4a2b453ebdda6e923ea", "text": "This paper addresses the problem of vegetation detection from laser measurements. The ability to detect vegetation is important for robots operating outdoors, since it enables a robot to navigate more efficiently and safely in such environments. In this paper, we propose a novel approach for detecting low, grass-like vegetation using laser remission values. In our algorithm, the laser remission is modeled as a function of distance, incidence angle, and material. We classify surface terrain based on 3D scans of the surroundings of the robot. The model is learned in a self-supervised way using vibration-based terrain classification. In all real world experiments we carried out, our approach yields a classification accuracy of over 99%. We furthermore illustrate how the learned classifier can improve the autonomous navigation capabilities of mobile robots.", "title": "" }, { "docid": "2793f528a9b29345b1ee8ce1202933e3", "text": "Neural networks are prevalent in today's NLP research. Despite their success on different tasks, training time is relatively long. We use Hogwild! to counteract this phenomenon and show that it is a suitable method to speed up the training of neural networks of different architectures and complexity. For POS tagging and translation we report considerable speedups of training, especially for the latter. We show that Hogwild! can be an important tool for training complex NLP architectures.", "title": "" }, { "docid": "884281b32a82a1d1f9811acc73257387", "text": "Low power wide area network (LPWAN) technologies, which are now embracing a booming era with the development of the Internet of Things (IoT), may offer a brand new solution for current smart grid communications due to their excellent features of low power, long range, and high capacity. 
The mission-critical smart grid communications require secure and reliable connections between the utilities and the devices with high quality of service (QoS). This is difficult to achieve for unlicensed LPWAN technologies due to the crowded license-free band. Narrowband IoT (NB-IoT), as a licensed LPWAN technology, is developed based on the existing long-term evolution specifications and facilities. Thus, it is able to provide cellular-level QoS, and henceforth can be viewed as a promising candidate for smart grid communications. In this paper, we introduce NB-IoT to the smart grid and compare it with the existing representative communication technologies in the context of smart grid communications in terms of data rate, latency, range, etc. The overall requirements of communications in the smart grid from both quantitative and qualitative perspectives are comprehensively investigated and each of them is carefully examined for NB-IoT. We further explore the representative applications in the smart grid and analyze the corresponding feasibility of NB-IoT. Moreover, the performance of NB-IoT in typical scenarios of the smart grid communication environments, such as urban and rural areas, is carefully evaluated via Monte Carlo simulations.", "title": "" } ]
scidocsrr
dbd0d01702a50dcaab924ba4033ab378
An information theoretical approach to prefrontal executive function
[ { "docid": "5dde27787ee92c2e56729b25b9ca4311", "text": "The prefrontal cortex (PFC) subserves cognitive control: the ability to coordinate thoughts or actions in relation to internal goals. Its functional architecture, however, remains poorly understood. Using brain imaging in humans, we showed that the lateral PFC is organized as a cascade of executive processes from premotor to anterior PFC regions that control behavior according to stimuli, the present perceptual context, and the temporal episode in which stimuli occur, respectively. The results support a unified modular model of cognitive control that describes the overall functional organization of the human lateral PFC and has basic methodological and theoretical implications.", "title": "" } ]
[ { "docid": "594bbdf08b7c3d0a31b2b0f60e50bae3", "text": "This paper concerns the behavior of spatially extended dynamical systems —that is, systems with both temporal and spatial degrees of freedom. Such systems are common in physics, biology, and even social sciences such as economics. Despite their abundance, there is little understanding of the spatiotemporal evolution of these complex systems. Seemingly disconnected from this problem are two widely occurring phenomena whose very generality requires some unifying underlying explanation. The first is a temporal effect known as 1/f noise or flicker noise; the second concerns the evolution of a spatial structure with scale-invariant, self-similar (fractal) properties. Here we report the discovery of a general organizing principle governing a class of dissipative coupled systems. Remarkably, the systems evolve naturally toward a critical state, with no intrinsic time or length scale. The emergence of the self-organized critical state provides a connection between nonlinear dynamics, the appearance of spatial self-similarity, and 1/f noise in a natural and robust way. A short account of some of these results has been published previously. The usual strategy in physics is to reduce a given problem to one or a few important degrees of freedom. The effect of coupling between the individual degrees of freedom is usually dealt with in a perturbative manner —or in a \"mean-field manner\" where the surroundings act on a given degree of freedom as an external field —thus again reducing the problem to a one-body one. In dynamics theory one sometimes finds that complicated systems reduce to a few collective degrees of freedom. This \"dimensional reduction\" has been termed \"self-organization,\" or the so-called \"slaving principle,\" and much insight into the behavior of dynamical systems has been achieved by studying the behavior of low-dimensional attractors.
On the other hand, it is well known that some dynamical systems act in a more concerted way, where the individual degrees of freedom keep each other in a more or less stable balance, which cannot be described as a \"perturbation\" of some decoupled state, nor in terms of a few collective degrees of freedom. For instance, ecological systems are organized such that the different species \"support\" each other in a way which cannot be understood by studying the individual constituents in isolation. The same interdependence of species also makes the ecosystem very susceptible to small changes or \"noise.\" However, the system cannot be too sensitive since then it could not have evolved into its present state in the first place. Owing to this balance we may say that such a system is \"critical.\" We shall see that this qualitative concept of criticality can be put on a firm quantitative basis. Such critical systems are abundant in nature. We shall see that the dynamics of a critical state has a specific temporal fingerprint, namely \"flicker noise,\" in which the power spectrum S(f) scales as 1/f at low frequencies. Flicker noise is characterized by correlations extended over a wide range of time scales, a clear indication of some sort of cooperative effect. Flicker noise has been observed, for example, in the light from quasars, the intensity of sunspots, the current through resistors, the sand flow in an hourglass, the flow of rivers such as the Nile, and even stock exchange price indices. All of these may be considered to be extended dynamical systems. Despite the ubiquity of flicker noise, its origin is not well understood. Indeed, one may say that because of its ubiquity, no proposed mechanism to date can lay claim as the single general underlying root of 1/f noise. We shall argue that flicker noise is in fact not noise but reflects the intrinsic dynamics of self-organized critical systems. Another signature of criticality is spatial self-similarity.
It has been pointed out that nature is full of self-similar \"fractal\" structures, though the physical reason for this is not understood. Most notably, the whole universe is an extended dynamical system where a self-similar cosmic string structure has been claimed. Turbulence is a phenomenon where self-similarity is believed to occur in both space and time. Cooperative critical phenomena are well known in the context of phase transitions in equilibrium statistical mechanics. At the transition point, spatial self-similarity occurs, and the dynamical response function has a characteristic power-law \"1/f\" behavior. (We use quotes because often flicker noise involves frequency spectra with dependence f^(-β) with β only roughly equal to 1.0.) Low-dimensional nonequilibrium dynamical systems also undergo phase transitions (bifurcations, mode locking, intermittency, etc.) where the properties of the attractors change. However, the critical point can be reached only by fine tuning a parameter (e.g., temperature), and so may occur only accidentally in nature: It", "title": "" }, { "docid": "3fcce3664db5812689c121138e2af280", "text": "We examine and compare simulation-based algorithms for solving the agent scheduling problem in a multiskill call center. This problem consists in minimizing the total costs of agents under constraints on the expected service level per call type, per period, and aggregated. We propose a solution approach that combines simulation with integer or linear programming, with cut generation. In our numerical experiments with realistic problem instances, this approach performs better than all other methods proposed previously for this problem. We also show that the two-step approach, which is the standard method for solving this problem, sometimes yields solutions that are highly suboptimal and inferior to those obtained by our proposed method.
", "title": "" }, { "docid": "63c2662fdac3258587c5b1baa2133df9", "text": "Automatic design via Bayesian optimization holds great promise given the constant increase of available data across domains. However, it faces difficulties from high-dimensional, potentially discrete, search spaces. We propose to probabilistically embed inputs into a lower dimensional, continuous latent space, where we perform gradient-based optimization guided by a Gaussian process. Building on variational autoencoders, we use both labeled and unlabeled data to guide the encoding and increase its accuracy. In addition, we propose an adversarial extension to render the latent representation invariant with respect to specific design attributes, which allows us to transfer these attributes across structures. We apply the framework both to a functional-protein dataset and to perform optimization of drag coefficients directly over high-dimensional shapes without incorporating domain knowledge or handcrafted features.", "title": "" }, { "docid": "072b17732d8b628d3536e7045cd0047d", "text": "In this paper, we propose a high-speed parallel 128-bit multiplier for the GHASH function in conjunction with its FPGA implementation. Through the use of Verilog, the designs are evaluated using a Xilinx Virtex-5 with 65 nm technology and 30,000 logic cells. The highest throughput of 30.764 Gbps can be achieved on the Virtex-5 with a consumption of 8864 slice LUTs. The proposed design of the multiplier can be utilized as a design IP core for the implementation of the GHASH function. The architecture of the multiplier can also be applied to more general polynomial bases. Moreover, it can be used as an arithmetic module in other encryption fields.", "title": "" }, { "docid": "561b37c506657693d27fa65341faf51e", "text": "Currently, much of machine learning is opaque, just like a “black box”.
However, in order for humans to understand, trust and effectively manage the emerging AI systems, an AI needs to be able to explain its decisions and conclusions. In this paper, I propose an argumentation-based approach to explainable AI, which has the potential to generate more comprehensive explanations than existing approaches.", "title": "" }, { "docid": "f8e3b21fd5481137a80063e04e9b5488", "text": "On the basis of the notion that the ability to exert self-control is critical to the regulation of aggressive behaviors, we suggest that mindfulness, an aspect of the self-control process, plays a key role in curbing workplace aggression. In particular, we note the conceptual and empirical distinctions between dimensions of mindfulness (i.e., mindful awareness and mindful acceptance) and investigate their respective abilities to regulate workplace aggression. In an experimental study (Study 1), a multiwave field study (Study 2a), and a daily diary study (Study 2b), we established that the awareness dimension, rather than the acceptance dimension, of mindfulness plays a more critical role in attenuating the association between hostility and aggression. In a second multiwave field study (Study 3), we found that mindful awareness moderates the association between hostility and aggression by reducing the extent to which individuals use dysfunctional emotion regulation strategies (i.e., surface acting), rather than by reducing the extent to which individuals engage in dysfunctional thought processes (i.e., rumination). The findings are discussed in terms of the implications of differentiating the dimensions and mechanisms of mindfulness for regulating workplace aggression.", "title": "" }, { "docid": "4502ba935124c2daa9a49fc24ec5865b", "text": "Medical image processing is the most challenging and emerging field nowadays.
In this field, detection of brain tumors from MRI brain scans has become one of the most challenging problems, due to the complex structure of the brain. The quantitative analysis of MRI brain tumors allows obtaining useful key indicators of disease progression. A computer-aided diagnostic system has been proposed here for detecting the tumor texture in biological study. This work describes the proposed strategy for tumor detection using segmentation techniques in MATLAB, incorporating preprocessing stages of noise removal, image enhancement and edge detection. The processing stage includes intensity- and watershed-based segmentation, and thresholding to extract the area of unwanted cells from the whole image. Algorithms are proposed to calculate the area and percentage of the tumor. Keywords— MRI, FCM, MKFCM, SVM, Otsu, threshold, fudge factor", "title": "" }, { "docid": "11c245ca7bc133155ff761374dfdea6e", "text": "Received Nov 12, 2017 Revised Jan 20, 2018 Accepted Feb 11, 2018 In this paper, a modification of the PVD (Pixel Value Differencing) algorithm is used for image steganography in the spatial domain. It normalizes the secret data value by an encoding method to keep the new pixel edge difference small among three neighbors (horizontal, vertical and diagonal), and embeds data only in areas or regions with small pixel-intensity differences. The proposed algorithm shows a good improvement for both color and gray-scale images compared to other algorithms. Performance on color images is better than on gray images. However, in this work the focus is mainly on gray images. The strength of this scheme is that random hidden/secret data do not make any noticeable difference to the steg-image compared to the original image. Bit-plane slicing is used to analyze the maximum payload that has been embedded into the cover image securely.
The simulation results show that the proposed algorithm performs better, giving consistent PSNR and MSE values across images, and is robust against steganalysis attacks.", "title": "" }, { "docid": "05b1be7a90432eff4b62675826b77e09", "text": "People invest time, attention, and emotion while engaging in various activities in the real world, for either purposes of awareness or participation. Social media platforms such as Twitter offer tremendous opportunities for people to become engaged in such real-world events through information sharing and communicating about these events. However, little is understood about the factors that affect people’s Twitter engagement in such real-world events. In this paper, we address this question by first operationalizing a person’s Twitter engagement in real-world events such as posting, retweeting, or replying to tweets about such events. Next, we construct statistical models that examine multiple predictive factors associated with four different perspectives of users’ Twitter engagement, and quantify their potential influence on predicting the (i) presence and (ii) degree of the user’s engagement with 643 real-world events. We also consider the effect of these factors with respect to a finer granularity of the different categories of events. We find that the measures of people’s prior Twitter activities, topical interests, geolocation, and social network structures are all variously correlated to their engagement with real-world events.", "title": "" }, { "docid": "d6f322f4dd7daa9525f778ead18c8b5e", "text": "Face perception, perhaps the most highly developed visual skill in humans, is mediated by a distributed neural system in humans that is comprised of multiple, bilateral regions. We propose a model for the organization of this system that emphasizes a distinction between the representation of invariant and changeable aspects of faces.
The representation of invariant aspects of faces underlies the recognition of individuals, whereas the representation of changeable aspects of faces, such as eye gaze, expression, and lip movement, underlies the perception of information that facilitates social communication. The model is also hierarchical insofar as it is divided into a core system and an extended system. The core system is comprised of occipitotemporal regions in extrastriate visual cortex that mediate the visual analysis of faces. In the core system, the representation of invariant aspects is mediated more by the face-responsive region in the fusiform gyrus, whereas the representation of changeable aspects is mediated more by the face-responsive region in the superior temporal sulcus. The extended system is comprised of regions from neural systems for other cognitive functions that can be recruited to act in concert with the regions in the core system to extract meaning from faces.", "title": "" }, { "docid": "8a1e94245d8fbdaf97402923d4dbc213", "text": "This is the first study to measure the 'sense of community' reportedly offered by the CrossFit gym model. A cross-sectional study adapted Social Capital and General Belongingness scales to compare perceptions of a CrossFit gym and a traditional gym. CrossFit gym members reported significantly higher levels of social capital (both bridging and bonding) and community belongingness compared with traditional gym members. However, regression analysis showed neither social capital, community belongingness, nor gym type was an independent predictor of gym attendance. Exercise and health professionals may benefit from evaluating further the 'sense of community' offered by gym-based exercise programmes.", "title": "" }, { "docid": "840d4b26eec402038b9b3462fc0a98ac", "text": "A bench model of the new generation intelligent universal transformer (IUT) has been recently developed for distribution applications. 
The distribution IUT employs high-voltage semiconductor device technologies along with multilevel converter circuits for medium-voltage grid connection. This paper briefly describes the basic operation of the IUT and its experimental setup. Performance under source and load disturbances is characterized with extensive tests using a voltage sag generator and various linear and nonlinear loads. Experimental results demonstrate that the IUT input and output can avoid direct impact from disturbances on the opposite side. The output voltage is well regulated when the voltage sag is applied to the input. The input voltage and current remain cleanly sinusoidal at unity power factor when the output is a nonlinear load. Under load transients, the input and output voltages remain well regulated. These key features prove that the power quality performance of the IUT is far superior to that of conventional copper-and-iron based transformers", "title": "" }, { "docid": "e6dba9e9ad2db632caed6b19b9f5a010", "text": "Efficient and accurate similarity searching on a large time series data set is an important but nontrivial problem. In this work, we propose a new approach to improve the quality of similarity search on time series data by combining symbolic aggregate approximation (SAX) and piecewise linear approximation. The approach consists of three steps: transforming real valued time series sequences to symbolic strings via SAX, pattern matching on the symbolic strings and a post-processing via Piecewise Linear Approximation.", "title": "" }, { "docid": "d6cf367f29ed1c58fb8fd0b7edf69458", "text": "Diabetes mellitus is a chronic disease that leads to complications including heart disease, stroke, kidney failure, blindness and nerve damage. Type 2 diabetes, characterized by target-tissue resistance to insulin, is epidemic in industrialized societies and is strongly associated with obesity; however, the mechanism by which increased adiposity causes insulin resistance is unclear.
Here we show that adipocytes secrete a unique signalling molecule, which we have named resistin (for resistance to insulin). Circulating resistin levels are decreased by the anti-diabetic drug rosiglitazone, and increased in diet-induced and genetic forms of obesity. Administration of anti-resistin antibody improves blood sugar and insulin action in mice with diet-induced obesity. Moreover, treatment of normal mice with recombinant resistin impairs glucose tolerance and insulin action. Insulin-stimulated glucose uptake by adipocytes is enhanced by neutralization of resistin and is reduced by resistin treatment. Resistin is thus a hormone that potentially links obesity to diabetes.", "title": "" }, { "docid": "641d09ff15b731b679dbe3e9004c1578", "text": "In recent years, geological disposal of radioactive waste has focused on placement of highand intermediate-level wastes in mined underground caverns at depths of 500–800 m. Notwithstanding the billions of dollars spent to date on this approach, the difficulty of finding suitable sites and demonstrating to the public and regulators that a robust safety case can be developed has frustrated attempts to implement disposal programmes in several countries, and no disposal facility for spent nuclear fuel exists anywhere. The concept of deep borehole disposal was first considered in the 1950s, but was rejected as it was believed to be beyond existing drilling capabilities. Improvements in drilling and associated technologies and advances in sealing methods have prompted a re-examination of this option for the disposal of high-level radioactive wastes, including spent fuel and plutonium. Since the 1950s, studies of deep boreholes have involved minimal investment. However, deep borehole disposal offers a potentially safer, more secure, cost-effective and environmentally sound solution for the long-term management of high-level radioactive waste than mined repositories. 
Potentially it could accommodate most of the world’s spent fuel inventory. This paper discusses the concept, the status of existing supporting equipment and technologies and the challenges that remain.", "title": "" }, { "docid": "ab677299ffa1e6ae0f65daf5de75d66c", "text": "This paper proposes a new theory of the relationship between the sentence processing mechanism and the available computational resources. This theory--the Syntactic Prediction Locality Theory (SPLT)--has two components: an integration cost component and a component for the memory cost associated with keeping track of obligatory syntactic requirements. Memory cost is hypothesized to be quantified in terms of the number of syntactic categories that are necessary to complete the current input string as a grammatical sentence. Furthermore, in accordance with results from the working memory literature, both memory cost and integration cost are hypothesized to be heavily influenced by locality: (1) the longer a predicted category must be kept in memory before the prediction is satisfied, the greater is the cost for maintaining that prediction; and (2) the greater the distance between an incoming word and the most local head or dependent to which it attaches, the greater the integration cost.
The SPLT is shown to explain a wide range of processing complexity phenomena not previously accounted for under a single theory, including (1) the lower complexity of subject-extracted relative clauses compared to object-extracted relative clauses, (2) numerous processing overload effects across languages, including the unacceptability of multiply center-embedded structures, (3) the lower complexity of cross-serial dependencies relative to center-embedded dependencies, (4) heaviness effects, such that sentences are easier to understand when larger phrases are placed later, and (5) numerous ambiguity effects, such as those which have been argued to be evidence for the Active Filler Hypothesis.", "title": "" }, { "docid": "e7f91b90eab54dfd7f115a3a0225b673", "text": "The recent trend of outsourcing network functions, aka middleboxes, raises confidentiality and integrity concerns over redirected packets, runtime state, and processing results. The outsourced middleboxes must be protected against cyber attacks and malicious service providers. It is challenging to simultaneously achieve strong security, practical performance, complete functionality and compatibility. Prior software-centric approaches relying on customized cryptographic primitives fall short of fulfilling one or more desired requirements. In this paper, after systematically addressing key challenges brought to the fore, we design and build a secure SGX-assisted system, LightBox, which supports secure and generic middlebox functions, efficient networking, and, most notably, low-overhead stateful processing. LightBox protects the middlebox from a powerful adversary, and it allows stateful network functions to run at nearly native speed: it adds only 3 μs of packet processing delay even when tracking 1.5M concurrent flows.", "title": "" }, { "docid": "684b9d64f4476a6b9dd3df1bd18bcb1d", "text": "We present the cases of three children with patent ductus arteriosus (PDA), pulmonary arterial hypertension (PAH), and desaturation.
One of them had desaturation associated with atrial septal defect (ASD). His ASD, PAH, and desaturation improved after successful device closure of the PDA. The other two had desaturation associated with Down syndrome. One had desaturation only on room air (21% oxygen) but was well saturated with 100% oxygen, and subsequently underwent successful device closure of the PDA. The other had experienced desaturation at a younger age but spontaneously recovered when he was older, following attempted device closure of the PDA, with late embolization of the device.", "title": "" }, { "docid": "527e750a6047100cba1f78a3036acb9b", "text": "This paper presents a Generative Adversarial Network (GAN) to model multi-turn dialogue generation, which trains a latent hierarchical recurrent encoder-decoder simultaneously with a discriminative classifier that makes the prior approximate the posterior. Experiments show that our model achieves better results.", "title": "" }, { "docid": "27ddea786e06ffe20b4f526875cdd76b", "text": "It is generally unrecognized that Sigmund Freud's contribution to the scientific understanding of dreams derived from a radical reorientation to the dream experience. During the nineteenth century, before publication of The Interpretation of Dreams, the presence of dreaming was considered by the scientific community as a manifestation of mental activity during sleep. The state of sleep was given prominence as a factor accounting for the seeming lack of organization and meaning to the dream experience. Thus, the assumed relatively nonpsychological sleep state set the scientific stage for viewing the nature of the dream. Freud radically shifted the context. He recognized, as myth, folklore, and common sense had long understood, that dreams were also linked with the psychology of waking life. This shift in orientation has proved essential for our modern view of dreams and dreaming.
Dreams are no longer dismissed as senseless notes hit at random on a piano keyboard by an untrained player. Dreams are now recognized as psychologically significant and meaningful expressions of the life of the dreamer, albeit expressed in disguised and concealed forms. (For a contrasting view, see ACTIVATION-SYNTHESIS HYPOTHESIS.) Contemporary Dream Research During the past quarter-century, there has been increasing scientific interest in the process of dreaming. A regular sleep-wakefulness cycle has been discovered, and if experimental subjects are awakened during periods of rapid eye movements (REM periods), they will frequently report dreams. In a typical night, four or five dreams occur during REM periods, accompanied by other signs of physiological activation, such as increased respiratory rate, heart rate, and penile and clitoral erection. Dreams usually last for the duration of the eye movements, from about 10 to 25 minutes. Although dreaming usually occurs in such regular cycles, dreaming may occur at other times during sleep, as well as during hypnagogic (falling asleep) or hypnopompic (waking up) states, when REMs are not present. The above findings are discoveries made since the monumental work of Freud reported in The Interpretation of Dreams, and although of great interest to the study of the mind-body problem, these findings as yet bear only a peripheral relationship to the central concerns of the psychology of dream formation, the meaning of dream content, the dream as an approach to a deeper understanding of emotional life, and the use of the dream in psychoanalytic treatment.", "title": "" } ]
scidocsrr
7b2ed986ed98f67cdc3456f543a73f54
In-DBMS Sampling-based Sub-trajectory Clustering
[ { "docid": "03aba9a44f1ee13cc7f16aadbebb7165", "text": "The increasing pervasiveness of location-acquisition technologies has enabled collection of huge amount of trajectories for almost any kind of moving objects. Discovering useful patterns from their movement behaviors can convey valuable knowledge to a variety of critical applications. In this light, we propose a novel concept, called gathering, which is a trajectory pattern modeling various group incidents such as celebrations, parades, protests, traffic jams and so on. A key observation is that these incidents typically involve large congregations of individuals, which form durable and stable areas with high density. In this work, we first develop a set of novel techniques to tackle the challenge of efficient discovery of gathering patterns on archived trajectory dataset. Afterwards, since trajectory databases are inherently dynamic in many real-world scenarios such as traffic monitoring, fleet management and battlefield surveillance, we further propose an online discovery solution by applying a series of optimization schemes, which can keep track of gathering patterns while new trajectory data arrive. Finally, the effectiveness of the proposed concepts and the efficiency of the approaches are validated by extensive experiments based on a real taxicab trajectory dataset.", "title": "" } ]
[ { "docid": "2089f931cf6fca595898959cbfbca28a", "text": "Continuum robotic manipulators articulate due to their inherent compliance. Tendon actuation leads to compression of the manipulator, extension of the actuators, and is limited by the practical constraint that tendons cannot support compression. In light of these observations, we present a new linear model for transforming desired beam configuration to tendon displacements and vice versa. We begin from first principles in solid mechanics by analyzing the effects of geometrically nonlinear tendon loads. These loads act both distally at the termination point and proximally along the conduit contact interface. The resulting model simplifies to a linear system including only the bending and axial modes of the manipulator as well as the actuator compliance. The model is then manipulated to form a concise mapping from beam configuration-space parameters to n redundant tendon displacements via the internal loads and strains experienced by the system. We demonstrate the utility of this model by implementing an optimal feasible controller. The controller regulates axial strain to a constant value while guaranteeing positive tendon forces and minimizing their magnitudes over a range of articulations. The mechanics-based model from this study provides insight as well as performance gains for this increasingly ubiquitous class of manipulators.", "title": "" }, { "docid": "c551e19208e367cc5546a3d46f7534c8", "text": "We propose a novel approach for solving the approximate nearest neighbor search problem in arbitrary metric spaces. The distinctive feature of our approach is that we can incrementally build a non-hierarchical distributed structure for given metric space data with a logarithmic complexity scaling on the size of the structure and adjustable accuracy probabilistic nearest neighbor queries. 
The structure is based on a small world graph with vertices corresponding to the stored elements, edges for links between them and the greedy algorithm as base algorithm for searching. Both search and addition algorithms require only local information from the structure. The performed simulation for data in the Euclidean space shows that the structure built using the proposed algorithm has navigable small world properties with logarithmic search complexity at fixed accuracy and has weak (power law) scalability with the dimensionality of the stored data.", "title": "" }, { "docid": "880aa3de3b839739927cbd82b7abcf8a", "text": "Can parents burn out? The aim of this research was to examine the construct validity of the concept of parental burnout and to provide researchers with an instrument to measure it. We conducted two successive questionnaire-based online studies, the first with a community sample of 379 parents using principal component analyses and the second with a community sample of 1,723 parents using both principal component analyses and confirmatory factor analyses. We investigated whether the tridimensional structure of the burnout syndrome (i.e., exhaustion, inefficacy, and depersonalization) held in the parental context. We then examined the specificity of parental burnout vis-à-vis professional burnout assessed with the Maslach Burnout Inventory, parental stress assessed with the Parental Stress Questionnaire and depression assessed with the Beck Depression Inventory. The results support the validity of a tri-dimensional burnout syndrome including exhaustion, inefficacy and emotional distancing with, respectively, 53.96 and 55.76% variance explained in study 1 and study 2, and reliability ranging from 0.89 to 0.94. The final version of the Parental Burnout Inventory (PBI) consists of 22 items and displays strong psychometric properties (CFI = 0.95, RMSEA = 0.06).
Low to moderate correlations between parental burnout and professional burnout, parental stress and depression suggest that parental burnout is not just burnout, stress or depression. The prevalence of parental burnout confirms that some parents are so exhausted that the term \"burnout\" is appropriate. The proportion of burned-out parents lies somewhere between 2% and 12%. The results are discussed in light of their implications at the micro-, meso- and macro-levels.", "title": "" }, { "docid": "9441113599194d172b6f618058b2ba88", "text": "Vegetable quality frequently refers to size, shape, mass, firmness, color and bruises, from which fruits can be classified and sorted. However, technological implementation by small and middle producers to assess this quality is unfeasible, due to the high costs of software and equipment as well as operational costs. Based on these considerations, the proposal of this research is to evaluate new open software that enables a classification system by recognizing fruit shape, volume, color and possibly bruises at a single glance. The software named ImageJ, compatible with Windows, Linux and MAC/OS, is quite popular in medical research and practices, and offers algorithms to obtain the above mentioned parameters. The software allows calculation of volume, area, averages, border detection, image improvement and morphological operations in a variety of image archive formats as well as extensions by means of “plugins” written in Java.", "title": "" }, { "docid": "997a1ec16394a20b3a7f2889a583b09d", "text": "This second article of our series looks at the process of designing a survey. The design process begins with reviewing the objectives, examining the target population identified by the objectives, and deciding how best to obtain the information needed to address those objectives. 
However, we also need to consider factors such as determining the appropriate sample size and ensuring the largest possible response rate. To illustrate our ideas, we use the three surveys described in Part 1 of this series to suggest good and bad practice in software engineering survey research.", "title": "" }, { "docid": "2e3c1fc6daa33ee3a4dc3fe1e11a3c21", "text": "Cloud computing technologies have matured enough that the service providers are compelled to migrate their services to virtualized infrastructure in cloud data centers. However, moving the computation and network to shared physical infrastructure poses a multitude of questions, both for service providers and for data center owners. In this work, we propose HyViDE - a framework for optimal placement of multiple virtual data center networks on a physical data center network. HyViDE preselects a subset of virtual data center network requests and uses a hybrid strategy for embedding them on the physical data center. Coordinated static and dynamic embedding algorithms are used in this hybrid framework to minimize the rejection of requests and fulfill QoS demands of the embedded networks. HyViDE can employ suitable static and dynamic strategies to meet the objectives of data center owners and customers. Experimental evaluation of our algorithms on HyViDE shows that the acceptance rate is high with faster servicing of requests.", "title": "" }, { "docid": "1583d8c41b15fb77787deef955ace886", "text": "The primary focus of autonomous driving research is to improve driving accuracy. While great progress has been made, state-of-the-art algorithms still fail at times. Such failures may have catastrophic consequences. It is therefore important that automated cars foresee problems ahead as early as possible. This is also of paramount importance if the driver will be asked to take over. We conjecture that failures do not occur randomly. 
For instance, driving models are more likely to fail at places with heavy traffic, at complex intersections, and/or under adverse weather/illumination conditions. This work presents a method to learn to predict the occurrence of these failures, i.e., to assess how difficult a scene is to a given driving model and to possibly give the human driver an early heads-up. A camera-based driving model is developed and trained over real driving datasets. The discrepancies between the model's predictions and the human 'ground-truth' maneuvers were then recorded, to yield the 'failure' scores. Experimental results show that the failure score can indeed be learned and predicted. Thus, our prediction method is able to improve the overall safety of an automated driving model by alerting the human driver in a timely manner, leading to better human-vehicle collaborative driving.", "title": "" }, { "docid": "f81059b5ff3d621dfa9babc8e68bc0ab", "text": "A zero voltage switching (ZVS) isolated Sepic converter with active clamp topology is presented. The buck-boost type of active clamp is connected in parallel with the primary side of the transformer to absorb all the energy stored in the transformer leakage inductance and to limit the peak voltage on the switching device. During the transition interval between the main and auxiliary switches, the resonance based on the output capacitor of the switch and the transformer leakage inductor can achieve ZVS for both switches. The operational principle, steady state analysis and design considerations of the proposed converter are presented. Finally, the proposed converter is verified by the experimental results based on a 180 W prototype circuit.", "title": "" }, { "docid": "c57c69fd1858b50998ec9706e34f6c46", "text": "Hashing has recently attracted considerable attention for large scale similarity search. However, learning compact codes with good performance is still a challenge. 
In many cases, the real-world data lies on a low-dimensional manifold embedded in high-dimensional ambient space. To capture meaningful neighbors, a compact hashing representation should be able to uncover the intrinsic geometric structure of the manifold, e.g., the neighborhood relationships between subregions. Most existing hashing methods only consider this issue while mapping data points into certain projected dimensions. When generating the binary codes, they either directly quantize the projected values with a threshold, or use an orthogonal matrix to refine the initial projection matrix; both approaches treat projection and quantization separately and thus do not preserve the locality structure well throughout the learning process. In this paper, we propose a novel hashing algorithm called Locality Preserving Hashing to effectively solve the above problems. Specifically, we learn a set of locality preserving projections with a joint optimization framework, which minimizes the average projection distance and quantization loss simultaneously. Experimental comparisons with other state-of-the-art methods on two large scale datasets demonstrate the effectiveness and efficiency of our method.", "title": "" }, { "docid": "fd32f2117ae01049314a0c1cfb565724", "text": "Smart phones, tablets, and the rise of the Internet of Things are driving an insatiable demand for wireless capacity. This demand requires networking and Internet infrastructures to evolve to meet the needs of current and future multimedia applications. Wireless HetNets will play an important role toward the goal of using a diverse spectrum to provide high quality-of-service, especially in indoor environments where most data are consumed. An additional tier in the wireless HetNets concept is envisioned using indoor gigabit small-cells to offer additional wireless capacity where it is needed the most. The use of light as a new mobile access medium is considered promising. 
In this article, we describe the general characteristics of WiFi and VLC (or LiFi) and demonstrate a practical framework for both technologies to coexist. We explore the existing research activity in this area and articulate current and future research challenges based on our experience in building a proof-of-concept prototype VLC HetNet.", "title": "" }, { "docid": "638c9e4ba1c3d35fdb766c17b188529d", "text": "Association football is a popular sport, but it is also a big business. From a managerial perspective, the most important decisions that team managers make concern player transfers, so issues related to player valuation, especially the determination of transfer fees and market values, are of major concern. Market values can be understood as estimates of transfer fees—that is, prices that could be paid for a player on the football market—so they play an important role in transfer negotiations. These values have traditionally been estimated by football experts, but crowdsourcing has emerged as an increasingly popular approach to estimating market value. While researchers have found high correlations between crowdsourced market values and actual transfer fees, the process behind crowd judgments is not transparent, crowd estimates are not replicable, and they are updated infrequently because they require the participation of many users. Data analytics may thus provide a sound alternative or a complementary approach to crowd-based estimations of market value. Based on a unique data set that is comprised of 4217 players from the top five European leagues and a period of six playing seasons, we estimate players’ market values using multilevel regression analysis. The regression results suggest that data-driven estimates of market value can overcome several of the crowd’s practical limitations while producing comparably accurate numbers. 
Our results have important implications for football managers and scouts, as data analytics facilitates precise, objective, and reliable estimates of market value that can be updated at any time. © 2017 The Author(s). Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).", "title": "" }, { "docid": "5dda89fbe7f5757588b5dff0e6c2565d", "text": "Introductory psychology students (120 females and 120 males) rated attractiveness and fecundity of one of six computer-altered female figures representing three body-weight categories (underweight, normal weight and overweight) and two levels of waist-to-hip ratio (WHR), one in the ideal range (0.72) and one in the non-ideal range (0.86). Both females and males judged underweight figures to be more attractive than normal or overweight figures, regardless of WHR. The female figure with the high WHR (0.86) was judged to be more attractive than the figure with the low WHR (0.72) across all body-weight conditions. Analyses of fecundity ratings revealed an interaction between weight and WHR such that the models did not differ in the normal weight category, but did differ in the underweight (model with WHR of 0.72 was less fecund) and overweight (model with WHR of 0.86 was more fecund) categories. These findings lend stronger support to sociocultural rather than evolutionary hypotheses.", "title": "" }, { "docid": "a492dcdbb9ec095cdfdab797c4b4e659", "text": "We present a new class of methods for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. SpAM is essentially a functional version of the grouped lasso of Yuan and Lin (2006). 
SpAM is also closely related to the COSSO model of Lin and Zhang (2006), but decouples smoothing and sparsity, enabling the use of arbitrary nonparametric smoothers. We give an analysis of the theoretical properties of sparse additive models, and present empirical results on synthetic and real data, showing that SpAM can be effective in fitting sparse nonparametric models in high dimensional data.", "title": "" }, { "docid": "813b4607e9675ad4811ba181a912bbe9", "text": "The end-Permian mass extinction was the most severe biodiversity crisis in Earth history. To better constrain the timing, and ultimately the causes of this event, we collected a suite of geochronologic, isotopic, and biostratigraphic data on several well-preserved sedimentary sections in South China. High-precision U-Pb dating reveals that the extinction peak occurred just before 252.28 ± 0.08 million years ago, after a decline of 2 per mil (‰) in δ(13)C over 90,000 years, and coincided with a δ(13)C excursion of -5‰ that is estimated to have lasted ≤20,000 years. The extinction interval was less than 200,000 years and synchronous in marine and terrestrial realms; associated charcoal-rich and soot-bearing layers indicate widespread wildfires on land. A massive release of thermogenic carbon dioxide and/or methane may have caused the catastrophic extinction.", "title": "" }, { "docid": "fe94febc520eab11318b49391d46476b", "text": "BACKGROUND\nDiabetes is a chronic disease, with high prevalence across many nations, which is characterized by elevated levels of blood glucose and risk of acute and chronic complications. The Kingdom of Saudi Arabia (KSA) has one of the highest levels of diabetes prevalence globally. It is well-known that the treatment of diabetes is a complex process and requires both lifestyle change and a clear pharmacologic treatment plan. 
To avoid complications from diabetes, effective behavioural change together with extensive education and self-management is one of the key approaches to alleviating such complications. However, this process is lengthy and expensive. Recent studies on the use of smart phone technologies for diabetes self-management have shown them to be an effective tool in controlling hemoglobin (HbA1c) levels, especially in type-2 diabetic (T2D) patients. However, to date no reported study has addressed the effectiveness of this approach in Saudi patients. This study investigates the impact of using mobile health technologies for the self-management of diabetes in Saudi Arabia.\n\n\nMETHODS\nIn this study, an intelligent mobile diabetes management system (SAED), tailored for T2D patients in KSA, was developed. A pilot study of the SAED system was conducted in Saudi Arabia with 20 diabetic patients for a duration of 6 months. The patients were randomly categorized into a control group, who did not use the SAED system, and an intervention group, who used the SAED system for their diabetes management during this period. At the end of the follow-up period, the HbA1c levels of the patients in both groups were measured, and a diabetes knowledge test was conducted to assess the diabetes awareness of the patients.\n\n\nRESULTS\nThe results of the SAED pilot study showed that the patients in the intervention group were able to significantly decrease their HbA1c levels compared to the control group. The SAED system also enhanced the diabetes awareness amongst the patients in the intervention group during the trial period. These outcomes confirm the global studies on the effectiveness of smart phone technologies in diabetes management. 
The significance of the study is that this was one of the first such studies conducted on Saudi patients and of their acceptance of such technology in their diabetes self-management treatment plans.\n\n\nCONCLUSIONS\nThe pilot study of the SAED system showed that mobile health technology can significantly improve HbA1c levels among Saudi diabetics and improve their disease management plans. The SAED system can also be an effective and low-cost solution in improving the quality of life of diabetic patients in the Kingdom, considering the high level of prevalence and the increasing economic burden of this disease.", "title": "" }, { "docid": "98d40e5a6df5b6a3ab39a04bf04c6a65", "text": "The Internet has increased the flexibility of retailers, allowing them to operate an online arm in addition to their physical stores. The online channel offers potential benefits in selling to customer segments that value the convenience of online shopping, but it also raises new challenges. These include the higher likelihood of costly product returns when customers’ ability to “touch and feel” products is important in determining fit. We study competing retailers that can operate dual channels (“bricks and clicks”) and examine how pricing strategies and physical store assistance levels change as a result of the additional Internet outlet. A central result we obtain is that when differentiation among competing retailers is not too high, having an online channel can actually increase investment in store assistance levels (e.g., greater shelf display, more-qualified sales staff, floor samples) and decrease profits. Consequently, when the decision to open an Internet channel is endogenized, there can exist an asymmetric equilibrium where only one retailer elects to operate an online arm but earns lower profits than its bricks-only rival. 
We also characterize equilibria where firms open an online channel, even though consumers only use it for research and learning purposes but buy in stores. A number of extensions are discussed, including retail settings where firms carry multiple product categories, shipping and handling costs, and the role of store assistance in impacting consumer perceived benefits.", "title": "" }, { "docid": "ecd7fca4f2ea0207582755a2b9733419", "text": "This work introduces a novel framework for quantifying the presence and strength of recurrent dynamics in video data. Specifically, we provide continuous measures of periodicity (perfect repetition) and quasiperiodicity (superposition of periodic modes with non-commensurate periods), in a way which does not require segmentation, training, object tracking or 1-dimensional surrogate signals. Our methodology operates directly on video data. The approach combines ideas from nonlinear time series analysis (delay embeddings) and computational topology (persistent homology), by translating the problem of finding recurrent dynamics in video data, into the problem of determining the circularity or toroidality of an associated geometric space. Through extensive testing, we show the robustness of our scores with respect to several noise models/levels; we show that our periodicity score is superior to other methods when compared to human-generated periodicity rankings; and furthermore, we show that our quasiperiodicity score clearly indicates the presence of biphonation in videos of vibrating vocal folds, which has never before been accomplished end to end quantitatively.", "title": "" }, { "docid": "2a89fb135d7c53bda9b1e3b8598663a5", "text": "We propose a new equilibrium enforcing method paired with a loss derived from the Wasserstein distance for training auto-encoder based Generative Adversarial Networks. This method balances the generator and discriminator during training. 
Additionally, it provides a new approximate convergence measure, fast and stable training and high visual quality. We also derive a way of controlling the trade-off between image diversity and visual quality. We focus on the image generation task, setting a new milestone in visual quality, even at higher resolutions. This is achieved while using a relatively simple model architecture and a standard training procedure.", "title": "" }, { "docid": "850a7daa56011e6c53b5f2f3e33d4c49", "text": "Multi-objective evolutionary algorithms (MOEAs) have achieved great progress in recent decades, but most of them are designed to solve unconstrained multi-objective optimization problems. In fact, many real-world multi-objective problems usually contain a number of constraints. To promote the research of constrained multi-objective optimization, we first propose three primary types of difficulty, which reflect the challenges in real-world optimization problems, to characterize the constraint functions in CMOPs, including feasibility-hardness, convergence-hardness and diversity-hardness. We then develop a general toolkit to construct difficulty adjustable and scalable constrained multi-objective optimization problems (CMOPs) with three types of parameterized constraint functions according to the proposed three primary types of difficulty. In fact, combining the three primary constraint functions with different parameters can lead to a large variety of CMOPs, whose difficulty can be uniquely defined by a triplet with each of its parameters specifying the level of one primary difficulty type. Furthermore, the number of objectives in this toolkit is able to scale to more than two. Based on this toolkit, we suggest nine difficulty adjustable and scalable CMOPs named DAS-CMOP1-9. To evaluate the proposed test problems, two popular CMOEAs, MOEA/D-CDP and NSGA-II-CDP, are adopted to test their performances on DAS-CMOP1-9 with different difficulty triplets. 
The experimental results demonstrate that none of them can solve these problems efficiently, which stimulates us to develop new constrained MOEAs to solve the suggested DAS-CMOPs.", "title": "" }, { "docid": "dc54b73eb740bc1bbdf1b834a7c40127", "text": "This paper discusses the design and evaluation of an online social network used within twenty-two established after school programs across three major urban areas in the Northeastern United States. The overall goal of this initiative is to empower students in grades K-8 to prevent obesity through healthy eating and exercise. The online social network was designed to support communication between program participants. Results from the related evaluation indicate that the online social network has potential for advancing awareness and community action around health related issues; however, greater attention is needed to professional development programs for program facilitators, and design features could better support critical thinking, social presence, and social activity.", "title": "" } ]
scidocsrr
b0fe005c63685b8e6c294dd475fc55e9
BilBOWA: Fast Bilingual Distributed Representations without Word Alignments
[ { "docid": "3bb905351ce1ea2150f37059ed256a90", "text": "A major assumption in many machine learning and data mining algorithms is that the training and future data must be in the same feature space and have the same distribution. However, in many real-world applications, this assumption may not hold. For example, we sometimes have a classification task in one domain of interest, but we only have sufficient training data in another domain of interest, where the latter data may be in a different feature space or follow a different data distribution. In such cases, knowledge transfer, if done successfully, would greatly improve the performance of learning by avoiding much expensive data-labeling efforts. In recent years, transfer learning has emerged as a new learning framework to address this problem. This survey focuses on categorizing and reviewing the current progress on transfer learning for classification, regression, and clustering problems. In this survey, we discuss the relationship between transfer learning and other related machine learning techniques such as domain adaptation, multitask learning and sample selection bias, as well as covariate shift. We also explore some potential future issues in transfer learning research.", "title": "" }, { "docid": "09df260d26638f84ec3bd309786a8080", "text": "If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines. We find further improvements by combining different word representations. You can download our word features, for off-the-shelf use in existing NLP systems, as well as our code, here: http://metaoptimize.com/projects/wordreprs/", "title": "" },
{ "docid": "8acd410ff0757423d09928093e7e8f63", "text": "We present a simple log-linear reparameterization of IBM Model 2 that overcomes problems arising from Model 1’s strong assumptions and Model 2’s overparameterization. Efficient inference, likelihood evaluation, and parameter estimation algorithms are provided. Training the model is consistently ten times faster than Model 4. On three large-scale translation tasks, systems built using our alignment model outperform IBM Model 4. An open-source implementation of the alignment model described in this paper is available from http://github.com/clab/fast_align.", "title": "" } ]
[ { "docid": "2639c6ed94ad68f5e0c4579f84f52f35", "text": "This article introduces the Swiss Army Menu (SAM), a radial menu that enables a very large number of functions on a single small tactile screen. The design of SAM relies on four different kinds of items, support for navigating in hierarchies of items and a control based on small thumb movements. SAM can thus offer a set of functions so large that it would typically have required a number of widgets that could not have been displayed in a single viewport at the same time.", "title": "" }, { "docid": "feca1bd8b881f3d550f0f0912913081f", "text": "There is an ever-increasing interest in the development of automatic medical diagnosis systems, due to advancements in computing technology and the desire to improve the service provided by the medical community. Knowledge about health and disease is required for reliable and accurate medical diagnosis. Diabetic Retinopathy (DR) is one of the most common causes of blindness, and it can be prevented if detected and treated early. DR has different signs; the most distinctive are microaneurysms and haemorrhages, which are dark lesions, and hard exudates and cotton wool spots, which are bright lesions. The location and structure of blood vessels and the optic disk play an important role in the accurate detection and classification of dark and bright lesions for early detection of DR. In this article, we propose a computer aided system for the early detection of DR. The article presents algorithms for retinal image preprocessing, blood vessel enhancement and segmentation, and optic disk localization and detection, which eventually lead to the detection of different DR lesions using the proposed hybrid fuzzy classifier. The developed methods are tested on four different publicly available databases. 
The presented methods are compared with recently published methods, and the results show that the presented methods outperform all others.", "title": "" }, { "docid": "ea048488791219be809072862a061444", "text": "Our object oriented programming approach has great ability to improve programming behavior for modern system and software engineering, but it does not give a proper interaction with the real world. In the real world, programming requires powerful interlinking among the properties and characteristics of various objects. Basically, this approach of programming gives a better presentation of objects with the real world and provides better relationships among the objects. I have explained the new concept of my neuro object oriented approach. This approach contains many new features, such as originty, a new concept of inheritance, a new concept of encapsulation, object relations with dimensions, originty relations with dimensions and time, categories of NOOPA such as high order thinking objects and low order thinking objects, a differentiation model for achieving the various requirements from the user, and a rotational model.", "title": "" }, { "docid": "24632f6891d12600619e4bf7f9a444d1", "text": "Product recommender systems are often deployed by e-commerce websites to improve user experience and increase sales. However, recommendation is limited by the product information hosted in those e-commerce sites and is only triggered when users are performing e-commerce activities. In this paper, we develop a novel product recommender system called METIS, a MErchanT Intelligence recommender System, which detects users' purchase intents from their microblogs in near real-time and makes product recommendations based on matching the users' demographic information extracted from their public profiles with product demographics learned from microblogs and online reviews. 
METIS distinguishes itself from traditional product recommender systems in the following aspects: 1) METIS was developed based on a microblogging service platform. As such, it is not limited by the information available in any specific e-commerce website. In addition, METIS is able to track users' purchase intents in near real-time and make recommendations accordingly. 2) In METIS, product recommendation is framed as a learning to rank problem. Users' characteristics extracted from their public profiles in microblogs and products' demographics learned from both online product reviews and microblogs are fed into learning to rank algorithms for product recommendation. We have evaluated our system in a large dataset crawled from Sina Weibo. The experimental results have verified the feasibility and effectiveness of our system. We have also made a demo version of our system publicly available and have implemented a live system which allows registered users to receive recommendations in real time.", "title": "" }, { "docid": "9817009ca281ae09baf45b5f8bdef87d", "text": "The rise of graph-structured data such as social networks, regulatory networks, citation graphs, and functional brain networks, in combination with resounding success of deep learning in various applications, has brought the interest in generalizing deep learning models to non-Euclidean domains. In this paper, we introduce a new spectral domain convolutional architecture for deep learning on graphs. The core ingredient of our model is a new class of parametric rational complex functions (Cayley polynomials) allowing to efficiently compute spectral filters on graphs that specialize on frequency bands of interest. Our model generates rich spectral filters that are localized in space, scales linearly with the size of the input data for sparsely-connected graphs, and can handle different constructions of Laplacian operators. 
Extensive experimental results show the superior performance of our approach on spectral image classification, community detection, vertex classification and matrix completion tasks.", "title": "" }, { "docid": "4290b4ba8000aeaf24cd7fb8640b4570", "text": "Drawing on semi-structured interviews and cognitive mapping with 14 craftspeople, this paper analyzes the socio-technical arrangements of people and tools in the context of workspaces and productivity. Using actor-network theory and the concept of companionability, both of which emphasize the role of human and non-human actants in the socio-technical fabrics of everyday life, I analyze the relationships between people, productivity and technology through the following themes: embodiment, provenance, insecurity, flow and companionability. The discussion section develops these themes further through comparison with rhetoric surrounding the Internet of Things (IoT). By putting the experiences of craftspeople in conversation with IoT rhetoric, I suggest several policy interventions for understanding connectivity and inter-device operability as material, flexible and respectful of human agency.", "title": "" }, { "docid": "4782e5fb1044fa5f6a54cf8130f8f6fb", "text": "Despite significant progress in object categorization in recent years, a number of important challenges remain, mainly the ability to learn from limited labeled data and the ability to recognize object classes within a large, potentially open, set of labels. Zero-shot learning is one way of addressing these challenges, but it has only been shown to work with limited-size class vocabularies and typically requires separation between supervised and unsupervised classes, allowing the former to inform the latter but not vice versa. We propose the notion of semi-supervised vocabulary-informed learning to alleviate the above mentioned challenges and address problems of supervised, zero-shot and open set recognition using a unified framework. 
Specifically, we propose a maximum margin framework for semantic manifold-based recognition that incorporates distance constraints from (both supervised and unsupervised) vocabulary atoms, ensuring that labeled samples are projected closer to their correct prototypes, in the embedding space, than to others. We show that the resulting model yields improvements in supervised, zero-shot, and large open set recognition, with up to 310K class vocabulary on AwA and ImageNet datasets.", "title": "" }, { "docid": "48703205408e6ebd8f8fc357560acc41", "text": "Two experiments found that when asked to perform the physically exerting tasks of clapping and shouting, people exhibit a sizable decrease in individual effort when performing in groups as compared to when they perform alone. This decrease, which we call social loafing, is in addition to losses due to faulty coordination of group efforts. Social loafing is discussed in terms of its experimental generality and theoretical importance. The widespread occurrence, the negative consequences for society, and some conditions that can minimize social loafing are also explored.", "title": "" }, { "docid": "8b3ab5df68f71ff4be4d3902c81e35be", "text": "When learning to program, frustrating experiences contribute to negative learning outcomes and poor retention in the field. Defining a common framework that explains why these experiences occur can lead to better interventions and learning mechanisms. To begin constructing such a framework, we asked 45 software developers about the severity of their frustration and to recall their most recent frustrating programming experience. As a result, 67% considered their frustration to be severe. Further, we distilled the reported experiences into 11 categories, which include issues with mapping behaviors to code and broken programming tools. 
Finally, we discuss future directions for defining our framework and designing future interventions.", "title": "" }, { "docid": "05ea7a05b620c0dc0a0275f55becfbc3", "text": "Automated story generation is the problem of automatically selecting a sequence of events, actions, or words that can be told as a story. We seek to develop a system that can generate stories by learning everything it needs to know from textual story corpora. To date, recurrent neural networks that learn language models at character, word, or sentence levels have had little success generating coherent stories. We explore the question of event representations that provide a midlevel of abstraction between words and sentences in order to retain the semantic information of the original data while minimizing event sparsity. We present a technique for preprocessing textual story data into event sequences. We then present a technique for automated story generation whereby we decompose the problem into the generation of successive events (event2event) and the generation of natural language sentences from events (event2sentence). We give empirical results comparing different event representations and their effects on event successor generation and the translation of events to natural language.", "title": "" }, { "docid": "a81e4507632505b64f4839a1a23fa440", "text": "Pro Unity Game Development with C#, Alan Thorn. In Pro Unity Game Development with C#, Alan Thorn, author of Learn Unity for 2D Game Development and experienced game developer, takes you through the complete C# workflow for developing a cross-platform first person shooter in Unity. C# is the most popular programming language for experienced Unity developers, helping them get the most out of what Unity offers. If you're already using C# with Unity and you want to take the next step in becoming an experienced, professional-level game developer, this is the book you need. 
Whether you are a student, an indie developer, or a seasoned game dev professional, you’ll find helpful C# examples of how to build intelligent enemies, create event systems and GUIs, develop save-game states, and lots more. You’ll understand and apply powerful programming concepts such as singleton classes, component based design, resolution independence, delegates, and event driven programming.", "title": "" }, { "docid": "c6a7c67fa77d2a5341b8e01c04677058", "text": "Human brain imaging studies have shown that greater amygdala activation to emotional relative to neutral events leads to enhanced episodic memory. Other studies have shown that fearful faces also elicit greater amygdala activation relative to neutral faces. To the extent that amygdala recruitment is sufficient to enhance recollection, these separate lines of evidence predict that recognition memory should be greater for fearful relative to neutral faces. Experiment 1 demonstrated enhanced memory for emotionally negative relative to neutral scenes; however, fearful faces were not subject to enhanced recognition across a variety of delays (15 min to 2 wk). Experiment 2 demonstrated that enhanced delayed recognition for emotional scenes was associated with increased sympathetic autonomic arousal, indexed by the galvanic skin response, relative to fearful faces. These results suggest that while amygdala activation may be necessary, it alone is insufficient to enhance episodic memory formation. It is proposed that a sufficient level of systemic arousal is required to alter memory consolidation resulting in enhanced recollection of emotional events.", "title": "" }, { "docid": "0f20cfce49eaa9f447fc45b1d4c04be0", "text": "Face recognition is a widely used technology with numerous large-scale applications, such as surveillance, social media and law enforcement. 
There has been tremendous progress in face recognition accuracy over the past few decades, much of which can be attributed to deep learning based approaches during the last five years. Indeed, automated face recognition systems are now believed to surpass human performance in some scenarios. Despite this progress, a crucial question still remains unanswered: given a face representation, how many identities can it resolve? In other words, what is the capacity of the face representation? A scientific basis for estimating the capacity of a given face representation will not only benefit the evaluation and comparison of different face representation methods, but will also establish an upper bound on the scalability of an automatic face recognition system. We cast the face capacity estimation problem under the information theoretic framework of capacity of a Gaussian noise channel. By explicitly accounting for two sources of representational noise: epistemic (model) uncertainty and aleatoric (data) variability, our approach is able to estimate the capacity of any given face representation. To demonstrate the efficacy of our approach, we estimate the capacity of a 128-dimensional deep neural network based face representation, FaceNet [1], and that of the classical Eigenfaces [2] representation of the same dimensionality. 
Our numerical experiments on unconstrained faces indicate that, (a) our capacity estimation model yields a capacity upper bound of 5.8×10^8 for FaceNet and 1×10^0 for Eigenface representation at a false acceptance rate (FAR) of 1%, (b) the capacity of the face representation reduces drastically as you lower the desired FAR (for FaceNet representation; the capacity at FAR of 0.1% and 0.001% is 2.4×10^6 and 7.0×10^2, respectively), and (c) the empirical performance of the FaceNet representation is significantly below the theoretical limit.", "title": "" }, { "docid": "152122f523efc9150033dbf5798c650f", "text": "Nowadays, computer systems are present in almost all types of human activity and they support any kind of industry as well. Most of these systems are distributed, where the communication between nodes is based on computer networks of any kind. Connectivity between system components is the key issue when designing distributed systems, especially systems of industrial informatics. The industrial area requires a wide range of computer communication means, particularly time-constrained and safety-enhancing ones. From fieldbus and industrial Ethernet technologies through wireless and internet-working solutions to standardization issues, there are many aspects of computer network use and many interesting research domains. Lots of them are quite sophisticated or even unique. The main goal of this paper is to present a survey of the latest trends in the communication domain of industrial distributed systems and to emphasize important questions such as dependability and standardization. Finally, a general assessment and estimation of the future development is provided. The presentation is based on the abstract description of dataflow within a system.", "title": "" }, { "docid": "90c2121fc04c0c8d9c4e3d8ee7b8ecc0", "text": "Measuring similarity between two data objects is a more challenging problem for data mining and knowledge discovery tasks. 
Traditional clustering algorithms have mainly focused on numerical data, whose implicit properties can be exploited to define a distance function between data points as a similarity measure. The problem of similarity becomes more complex when the data is categorical, where values have no natural ordering; such attributes can be called non-geometrical. Clustering relational data sets in which the majority of attributes are categorical raises interesting questions. No earlier work has been done on clustering categorical attributes of relational data sets using the property of functional dependency as a parameter to measure similarity. This paper is an extension of earlier work on clustering relational data sets where domains are unique and similarity is context based, and it introduces a new notion of similarity based on the dependency of an attribute on other attributes prevalent in the relational data set. This paper also gives a brief overview of popular similarity measures for categorical attributes. This novel similarity measure can be applied to tuples and their respective values. An important property of categorical domains is that they have a small number of attribute values. The similarity measure for relational data sets can then be applied to these smaller data sets for efficient results.", "title": "" }, { "docid": "28e1c4c2622353fc87d3d8a971b9e874", "text": "In-memory key/value store (KV-store) is a key building block for many systems like databases and large websites. Two key requirements for such systems are efficiency and availability, which demand a KV-store to continuously handle millions of requests per second. A common approach to availability is using replication, such as primary-backup (PBR), which, however, requires M+1 times memory to tolerate M failures. 
This renders scarce memory unable to handle useful user jobs. This article makes the first case for building a highly available in-memory KV-store by integrating erasure coding to achieve memory efficiency, while not notably degrading performance. A main challenge is that an in-memory KV-store has much scattered metadata. A single KV put may cause excessive coding operations and parity updates due to excessive small updates to metadata. Our approach, namely Cocytus, addresses this challenge by using a hybrid scheme that leverages PBR for small-sized and scattered data (e.g., metadata and key), while only applying erasure coding to relatively large data (e.g., value). To mitigate well-known issues like lengthy recovery of erasure coding, Cocytus uses an online recovery scheme by leveraging the replicated metadata information to continuously serve KV requests. To further demonstrate the usefulness of Cocytus, we have built a transaction layer by using Cocytus as a fast and reliable storage layer to store database records and transaction logs. We have integrated the design of Cocytus into Memcached and extended it to support in-memory transactions. Evaluation using YCSB with different KV configurations shows that Cocytus incurs low overhead for latency and throughput, can tolerate node failures with fast online recovery, while saving 33% to 46% memory compared to PBR when tolerating two failures. A further evaluation using the SmallBank OLTP benchmark shows that in-memory transactions can run atop Cocytus with high throughput, low latency, and a low abort rate, and can recover quickly from consecutive failures.", "title": "" }, { "docid": "fee1419f689259bc5fe7e4bfd8f0242c", "text": "One of the challenges in computer vision is how to learn an accurate classifier for a new domain by using labeled images from an old domain when there are no labeled images available in the new domain. 
Domain adaptation is an outstanding solution that tackles this challenge by employing available source-labeled datasets, even with significant difference in distribution and properties. However, most prior methods only reduce the difference in subspace marginal or conditional distributions across domains while completely ignoring the source data label dependence information in a subspace. In this paper, we put forward a novel domain adaptation approach, referred to as Enhanced Subspace Distribution Matching. Specifically, it aims to jointly match the marginal and conditional distributions in a kernel principal dimensionality reduction procedure while maximizing the source label dependence in a subspace, thus raising the subspace distribution matching degree. Extensive experiments verify that it can significantly outperform several state-of-the-art methods for cross-domain image classification problems.", "title": "" }, { "docid": "2d6ea84dcdae28291c5fdca01495d51f", "text": "This paper presents how to generate questions from given passages using neural networks, where large scale QA pairs are automatically crawled and processed from Community-QA website, and used as training data. The contribution of the paper is 2-fold: First, two types of question generation approaches are proposed, one is a retrieval-based method using convolution neural network (CNN), the other is a generation-based method using recurrent neural network (RNN); Second, we show how to leverage the generated questions to improve existing question answering systems. We evaluate our question generation method for the answer sentence selection task on three benchmark datasets, including SQuAD, MS MARCO, and WikiQA. 
Experimental results show that, by using generated questions as an extra signal, significant QA improvement can be achieved.", "title": "" }, { "docid": "0a35370e6c99e122b8051a977029d77a", "text": "To truly understand the visual world our models should be able not only to recognize images but also generate them. To this end, there has been exciting recent progress on generating images from natural language descriptions. These methods give stunning results on limited domains such as descriptions of birds or flowers, but struggle to faithfully reproduce complex sentences with many objects and relationships. To overcome this limitation we propose a method for generating images from scene graphs, enabling explicitly reasoning about objects and their relationships. Our model uses graph convolution to process input graphs, computes a scene layout by predicting bounding boxes and segmentation masks for objects, and converts the layout to an image with a cascaded refinement network. The network is trained adversarially against a pair of discriminators to ensure realistic outputs. We validate our approach on Visual Genome and COCO-Stuff, where qualitative results, ablations, and user studies demonstrate our method's ability to generate complex images with multiple objects.", "title": "" }, { "docid": "a30de4a213fe05c606fb16d204b9b170", "text": "– The recent work on cross-country regressions can be compared to looking at “a black cat in a dark room”. Whether or not all this work has accomplished anything on the substantive economic issues is a moot question. But the search for “a black cat ” has led to some progress on the econometric front. The purpose of this paper is to comment on this progress. We discuss the problems with the use of cross-country panel data in the context of two problems: The analysis of economic growth and that of the purchasing power parity (PPP) theory. A propos de l’emploi des méthodes de panel sur des données inter-pays RÉSUMÉ. 
– Recent work using cross-country regressions can be compared to the search for "a black cat in a dark room". Whether this work has contributed anything significant to economic knowledge is fairly controversial. But the search for the "black cat" has led to some progress in econometrics. The purpose of this article is to discuss that progress. The problems raised by the use of country panels are discussed in two contexts: that of economic growth and convergence on the one hand, and that of purchasing power parity theory on the other. * G.S. MADDALA: Department of Economics, The Ohio State University. I would like to thank M. NERLOVE, P. SEVESTRE and an anonymous referee for helpful comments. Responsibility for the omissions and any errors is my own. ANNALES D'ÉCONOMIE ET DE STATISTIQUE. – N° 55-56 – 1999 « The Gods love the obscure and hate the obvious » BRIHADARANYAKA UPANISHAD", "title": "" } ]
scidocsrr
519cad491c492024d286bfcba25e17a6
A Heuristics Approach for Fast Detecting Suspicious Money Laundering Cases in an Investment Bank
[ { "docid": "e67dc912381ebbae34d16aad0d3e7d92", "text": "In this paper, we study the problem of applying data mining to facilitate the investigation of money laundering crimes (MLCs). We have identified a new paradigm of problems --- that of automatic community generation based on uni-party data, the data in which there is no direct or explicit link information available. Consequently, we have proposed a new methodology for Link Discovery based on Correlation Analysis (LDCA). We have used MLC group model generation as an exemplary application of this problem paradigm, and have focused on this application to develop a specific method of automatic MLC group model generation based on timeline analysis using the LDCA methodology, called CORAL. A prototype of CORAL method has been implemented, and preliminary testing and evaluations based on a real MLC case data are reported. The contributions of this work are: (1) identification of the uni-party data community generation problem paradigm, (2) proposal of a new methodology LDCA to solve for problems in this paradigm, (3) formulation of the MLC group model generation problem as an example of this paradigm, (4) application of the LDCA methodology in developing a specific solution (CORAL) to the MLC group model generation problem, and (5) development, evaluation, and testing of the CORAL prototype in a real MLC case data.", "title": "" }, { "docid": "0a0f4f5fc904c12cacb95e87f62005d0", "text": "This text is intended to provide a balanced introduction to machine vision. Basic concepts are introduced with only essential mathematical elements. The details to allow implementation and use of vision algorithm in practical application are provided, and engineering aspects of techniques are emphasized. This text intentionally omits theories of machine vision that do not have sufficient practical applications at the time.", "title": "" } ]
[ { "docid": "5666b1a6289f4eac05531b8ff78755cb", "text": "Neural text generation models are often autoregressive language models or seq2seq models. These models generate text by sampling words sequentially, with each word conditioned on the previous word, and are state-of-the-art for several machine translation and summarization benchmarks. These benchmarks are often defined by validation perplexity even though this is not a direct measure of the quality of the generated text. Additionally, these models are typically trained via maximum likelihood and teacher forcing. These methods are well-suited to optimizing perplexity but can result in poor sample quality since generating text requires conditioning on sequences of words that may have never been observed at training time. We propose to improve sample quality using Generative Adversarial Networks (GANs), which explicitly train the generator to produce high quality samples and have shown a lot of success in image generation. GANs were originally designed to output differentiable values, so discrete language generation is challenging for them. We claim that validation perplexity alone is not indicative of the quality of text generated by a model. We introduce an actor-critic conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitatively and quantitatively, evidence that this produces more realistic conditional and unconditional text samples compared to a maximum likelihood trained model.", "title": "" }, { "docid": "bfa178f35027a55e8fd35d1c87789808", "text": "We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. 
We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional regularities that are salient in the data.", "title": "" }, { "docid": "56cf91a279fdcee59841cb9b8c866626", "text": "This paper describes a new maximum-power-point-tracking method for a photovoltaic system based on the Lagrange Interpolation Formula and proposes the particle swarm optimization method. The proposed control scheme eliminates the problems of conventional methods by using only a simple numerical calculation to initialize the particles around the global maximum power point. Hence, the suggested control scheme requires fewer iterations to reach the maximum power point. Simulation study is carried out using MATLAB/SIMULINK and compared with the Perturb and Observe method, the Incremental Conductance method, and the conventional Particle Swarm Optimization algorithm. The proposed algorithm is verified with the OPAL-RT real-time simulator. The simulation results confirm that the proposed algorithm can effectively enhance the stability and the fast tracking capability under abnormal insolation conditions.", "title": "" }, { "docid": "70d7c838e7b5c4318e8764edb5a70555", "text": "This research developed and tested a model of turnover contagion in which the job embeddedness and job search behaviors of coworkers influence employees' decisions to quit. In a sample of 45 branches of a regional bank and 1,038 departments of a national hospitality firm, multilevel analysis revealed that coworkers' job embeddedness and job search behaviors explain variance in individual "voluntary turnover" over and above that explained by other individual and group-level predictors. Broadly speaking, these results suggest that coworkers' job embeddedness and job search behaviors play critical roles in explaining why people quit their jobs. 
Implications are discussed.", "title": "" }, { "docid": "9fab400cba6d9c91aba707c6952889f8", "text": "Deep Neural Networks (DNNs) have recently been shown to be vulnerable against adversarial examples, which are carefully crafted instances that can mislead DNNs to make errors during prediction. To better understand such attacks, a characterization is needed of the properties of regions (the so-called ‘adversarial subspaces’) in which adversarial examples lie. We tackle this challenge by characterizing the dimensional properties of adversarial regions, via the use of Local Intrinsic Dimensionality (LID). LID assesses the space-filling capability of the region surrounding a reference example, based on the distance distribution of the example to its neighbors. We first provide explanations about how adversarial perturbation can affect the LID characteristic of adversarial regions, and then show empirically that LID characteristics can facilitate the distinction of adversarial examples generated using state-of-the-art attacks. As a proof-of-concept, we show that a potential application of LID is to distinguish adversarial examples, and the preliminary results show that it can outperform several state-of-the-art detection measures by large margins for five attack strategies considered in this paper across three benchmark datasets . Our analysis of the LID characteristic for adversarial regions not only motivates new directions of effective adversarial defense, but also opens up more challenges for developing new attacks to better understand the vulnerabilities of DNNs.", "title": "" }, { "docid": "db1d87d3e5ab39ef639d7c53a740340a", "text": "Plants are natural producers of chemical substances, providing potential treatment of human ailments since ancient times. 
Some herbal chemicals in medicinal plants of traditional and modern medicine carry the risk of herb induced liver injury (HILI) with a severe or potentially lethal clinical course, and the requirement of a liver transplant. Discontinuation of herbal use is mandatory in time when HILI is first suspected as diagnosis. Although, herbal hepatotoxicity is of utmost clinical and regulatory importance, lack of a stringent causality assessment remains a major issue for patients with suspected HILI, while this problem is best overcome by the use of the hepatotoxicity specific CIOMS (Council for International Organizations of Medical Sciences) scale and the evaluation of unintentional reexposure test results. Sixty five different commonly used herbs, herbal drugs, and herbal supplements and 111 different herbs or herbal mixtures of the traditional Chinese medicine (TCM) are reported causative for liver disease, with levels of causality proof that appear rarely conclusive. Encouraging steps in the field of herbal hepatotoxicity focus on introducing analytical methods that identify cases of intrinsic hepatotoxicity caused by pyrrolizidine alkaloids, and on omics technologies, including genomics, proteomics, metabolomics, and assessing circulating micro-RNA in the serum of some patients with intrinsic hepatotoxicity. It remains to be established whether these new technologies can identify idiosyncratic HILI cases. 
To enhance its globalization, herbal medicine should universally be marketed as herbal drugs under strict regulatory surveillance in analogy to regulatory approved chemical drugs, proving a positive risk/benefit profile by enforcing evidence based clinical trials and excellent herbal drug quality.", "title": "" }, { "docid": "57290d8e0a236205c4f0ce887ffed3ab", "text": "We propose a novel, projection based way to incorporate the conditional information into the discriminator of GANs that respects the role of the conditional information in the underlining probabilistic model. This approach is in contrast with most frameworks of conditional GANs used in application today, which use the conditional information by concatenating the (embedded) conditional vector to the feature vectors. With this modification, we were able to significantly improve the quality of the class conditional image generation on ILSVRC2012 (ImageNet) 1000-class image dataset from the current state-of-the-art result, and we achieved this with a single pair of a discriminator and a generator. We were also able to extend the application to super-resolution and succeeded in producing highly discriminative super-resolution images. This new structure also enabled high quality category transformation based on parametric functional transformation of conditional batch normalization layers in the generator. The code with Chainer (Tokui et al., 2015), generated images and pretrained models are available at https://github.com/pfnet-research/sngan_projection.", "title": "" }, { "docid": "a6e2652aa074719ac2ca6e94d12fed03", "text": "■ Lincoln Laboratory led the nation in the development of high-power wideband radar with a unique capability for resolving target scattering centers and producing three-dimensional images of individual targets. The Laboratory fielded the first wideband radar, called ALCOR, in 1970 at Kwajalein Atoll. 
Since 1970 the Laboratory has developed and fielded several other wideband radars for use in ballistic-missile-defense research and space-object identification. In parallel with these radar systems, the Laboratory has developed high-capacity, high-speed signal and data processing techniques and algorithms that permit generation of target images and derivation of other target features in near real time. It has also pioneered new ways to realize improved resolution and scatterer-feature identification in wideband radars by the development and application of advanced signal processing techniques. Through the analysis of dynamic target images and other wideband observables, we can acquire knowledge of target form, structure, materials, motion, mass distribution, identifying features, and function. Such capability is of great benefit in ballistic missile decoy discrimination and in space-object identification.", "title": "" }, { "docid": "e82cd7c22668b0c9ed62b4afdf49d1f4", "text": "This paper presents a tutorial on delta-sigma fractional-N PLLs for frequency synthesis. The presentation assumes the reader has a working knowledge of integer-N PLLs. It builds on this knowledge by introducing the additional concepts required to understand ΔΣ fractional-N PLLs. After explaining the limitations of integerN PLLs with respect to tuning resolution, the paper introduces the delta-sigma fractional-N PLL as a means of avoiding these limitations. It then presents a selfcontained explanation of the relevant aspects of deltasigma modulation, an extension of the well known integerN PLL linearized model to delta-sigma fractional-N PLLs, a design example, and techniques for wideband digital modulation of the VCO within a delta-sigma fractional-N PLL.", "title": "" }, { "docid": "10d9758469a1843d426f56a379c2fecb", "text": "A novel compact-size branch-line coupler using composite right/left-handed transmission lines is proposed in this paper. 
In order to obtain miniaturization, composite right/left-handed transmission lines with novel complementary split single ring resonators, which are realized by loading a pair of meander-shaped slots in the split of the ring, are designed. This novel coupler occupies only 22.8% of the area of the conventional approach at 0.7 GHz. The proposed coupler can be implemented by using the standard printed-circuit-board etching processes without any implementation of lumped elements and via-holes, making it very useful for wireless communication systems. The agreement between measured and simulated results validates the feasible configuration of the proposed coupler.", "title": "" }, { "docid": "58858f0cd3561614f1742fe7b0380861", "text": "This study focuses on how technology can encourage and ease awkwardness-free communications between people in real-world scenarios. We propose a device, The Wearable Aura, able to project a personalized animation onto one's Personal Distance zone. This projection, as an extension of one-self is reactive to user's cognitive status, aware of its environment, context and user's activity. Our user study supports the idea that an interactive projection around an individual can indeed benefit the communications with other individuals.", "title": "" }, { "docid": "e5539337c36ec7a03bf327069156ea2c", "text": "An approach is proposed to estimate the location, velocity, and acceleration of a target vehicle to avoid a possible collision. Radial distance, velocity, and acceleration are extracted from the hybrid linear frequency modulation (LFM)/frequency-shift keying (FSK) echoed signals and then processed using the Kalman filter and the trilateration process. This approach proves to converge fast with good accuracy. Two other approaches, i.e., an extended Kalman filter (EKF) and a two-stage Kalman filter (TSKF), are used as benchmarks for comparison. 
Several scenarios of vehicle movement are also presented to demonstrate the effectiveness of this approach.", "title": "" },
{ "docid": "1ad353e3d7765e1681c062c777087be7", "text": "The cyber world provides an anonymous environment for criminals to conduct malicious activities such as spamming, sending ransom e-mails, and spreading botnet malware. Often, these activities involve textual communication between a criminal and a victim, or between criminals themselves. The forensic analysis of online textual documents for addressing the anonymity problem called authorship analysis is the focus of most cybercrime investigations. Authorship analysis is the statistical study of linguistic and computational characteristics of the written documents of individuals. This paper is the first work that presents a unified data mining solution to address authorship analysis problems based on the concept of frequent pattern-based writeprint. Extensive experiments on real-life data suggest that our proposed solution can precisely capture the writing styles of individuals. Furthermore, the writeprint is effective to identify the author of an anonymous text from a group of suspects and to infer sociolinguistic characteristics of the author.", "title": "" },
{ "docid": "fb6494dcf01a927597ff784a3323e8c2", "text": "Detection of defects in induction machine rotor bars for unassembled motors is required to evaluate machines considered for repair as well as fulfilling incremental quality assurance checks in the manufacture of new machines. Detection of rotor bar defects prior to motor assembly is critical in increasing repair efficiency and assuring the quality of newly manufactured machines.
Many methods of detecting rotor bar defects in unassembled motors lack the sensitivity to find both major and minor defects in cast and fabricated rotors; they also suffer from a lack of quantifiable test results and from arc-flash safety hazards. A process of direct magnetic field analysis can examine measurements from induced currents in a rotor separated from its stator yielding a high-resolution fingerprint of a rotor's magnetic field. This process identifies both major and minor rotor bar defects in a repeatable and quantifiable manner appropriate for numerical evaluation without arc-flash safety hazards.", "title": "" },
{ "docid": "d0e5ddcc0aa85ba6a3a18796c335dcd2", "text": "A novel planar end-fire circularly polarized (CP) complementary Yagi array antenna is proposed. The antenna has a compact and complementary structure, and exhibits excellent properties (low profile, single feed, broadband, high gain, and CP radiation). It is based on a compact combination of a pair of complementary Yagi arrays with a common driven element. In the complementary structure, the vertical polarization is contributed by a microstrip patch Yagi array, while the horizontal polarization is yielded by a strip dipole Yagi array. With the combination of the two orthogonally polarized Yagi arrays, a CP antenna with high gain and wide bandwidth is obtained. With a profile of $0.05\lambda_{0}$ (3 mm), the antenna has a gain of about 8 dBic, an impedance bandwidth ($\vert S_{11}\vert < -10$ dB) of 13.09% (4.57–5.21 GHz) and a 3-dB axial-ratio bandwidth of 10.51% (4.69–5.21 GHz).", "title": "" },
{ "docid": "70c6da9da15ad40b4f64386b890ccf51", "text": "In this paper, we describe a positioning control for a SCARA robot using a recurrent neural network.
The simultaneous perturbation optimization method is used for the learning rule of the recurrent neural network. Then the recurrent neural network learns the inverse dynamics of the SCARA robot. We present details of the control scheme using the simultaneous perturbation. Moreover, we consider an example with two target positions using an actual SCARA robot. The result is shown.", "title": "" },
{ "docid": "0fb45311d5e6a7348917eaa12ffeab46", "text": "Question Answering is a task which requires building models capable of providing answers to questions expressed in human language. Full question answering involves some form of reasoning ability. We introduce a neural network architecture for this task, which is a form of Memory Network that recognizes entities and their relations to answers through a focus attention mechanism. Our model is named Question Dependent Recurrent Entity Network and extends Recurrent Entity Network by exploiting aspects of the question during the memorization process. We validate the model on both synthetic and real datasets: the bAbI question answering dataset and the CNN & Daily News reading comprehension dataset. In our experiments, the models achieved state-of-the-art results in the former and competitive results in the latter.", "title": "" },
{ "docid": "decbbd09bcf7a36a3886d52864e9a08c", "text": "INTRODUCTION\nBirth preparedness and complication readiness (BPCR) is a strategy to promote timely use of skilled maternal and neonatal care during childbirth. According to World Health Organization, BPCR should be a key component of focused antenatal care. Dakshina Kannada, a coastal district of Karnataka state, is categorized as a high-performing district (institutional delivery rate >25%) under the National Rural Health Mission. However, a substantial proportion of women in the district experience complications during pregnancy (58.3%), childbirth (45.7%), and postnatal (17.4%) period.
There is a paucity of data on BPCR practice and the factors associated with it in the district. Exploring this would be of great use in the evidence-based fine-tuning of ongoing maternal and child health interventions.\n\n\nOBJECTIVE\nTo assess BPCR practice and the factors associated with it among the beneficiaries of two rural Primary Health Centers (PHCs) of Dakshina Kannada district, Karnataka, India.\n\n\nMETHODS\nA facility-based cross-sectional study was conducted among 217 pregnant (>28 weeks of gestation) and recently delivered (in the last 6 months) women in two randomly selected PHCs from June -September 2013. Exit interviews were conducted using a pre-designed semi-structured interview schedule. Information regarding socio-demographic profile, obstetric variables, and knowledge of key danger signs was collected. BPCR included information on five key components: identified the place of delivery, saved money to pay for expenses, mode of transport identified, identified a birth companion, and arranged a blood donor if the need arises. In this study, a woman who recalled at least two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (total six) was considered as knowledgeable on key danger signs. Optimal BPCR practice was defined as following at least three out of five key components of BPCR.\n\n\nOUTCOME MEASURES\nProportion, Odds ratio, and adjusted Odds ratio (adj OR) for optimal BPCR practice.\n\n\nRESULTS\nA total of 184 women completed the exit interview (mean age: 26.9±3.9 years). Optimal BPCR practice was observed in 79.3% (95% CI: 73.5-85.2%) of the women. 
Multivariate logistic regression revealed that age >26 years (adj OR = 2.97; 95%CI: 1.15-7.7), economic status of above poverty line (adj OR = 4.3; 95%CI: 1.12-16.5), awareness of minimum two key danger signs in each of the three phases, i.e., pregnancy, childbirth, and postpartum (adj OR = 3.98; 95%CI: 1.4-11.1), preference to private health sector for antenatal care/delivery (adj OR = 2.9; 95%CI: 1.1-8.01), and woman's discussion about the BPCR with her family members (adj OR = 3.4; 95%CI: 1.1-10.4) as the significant factors associated with optimal BPCR practice.\n\n\nCONCLUSION\nIn this study population, BPCR practice was better than other studies reported from India. Healthcare workers at the grassroots should be encouraged to involve women's family members while explaining BPCR and key danger signs with a special emphasis on young (<26 years) and economically poor women. Ensuring a reinforcing discussion between woman and her family members may further enhance the BPCR practice.", "title": "" }, { "docid": "91eaef6e482601533656ca4786b7a023", "text": "Budget optimization is one of the primary decision-making issues faced by advertisers in search auctions. A quality budget optimization strategy can significantly improve the effectiveness of search advertising campaigns, thus helping advertisers to succeed in the fierce competition of online marketing. This paper investigates budget optimization problems in search advertisements and proposes a novel hierarchical budget optimization framework (BOF), with consideration of the entire life cycle of advertising campaigns. Then, we formulated our BOF framework, made some mathematical analysis on some desirable properties, and presented an effective solution algorithm. Moreover, we established a simple but illustrative instantiation of our BOF framework which can help advertisers to allocate and adjust the budget of search advertising campaigns. 
Our BOF framework provides an open testbed environment for various strategies of budget allocation and adjustment across search advertising markets. With field reports and logs from real-world search advertising campaigns, we designed some experiments to evaluate the effectiveness of our BOF framework and instantiated strategies. Experimental results are quite promising, where our BOF framework and instantiated strategies perform better than two baseline budget strategies commonly used in practical advertising campaigns.", "title": "" }, { "docid": "bba4d637cf40e81ea89e61e875d3c425", "text": "Recent years have witnessed the fast development of UAVs (unmanned aerial vehicles). As an alternative to traditional image acquisition methods, UAVs bridge the gap between terrestrial and airborne photogrammetry and enable flexible acquisition of high resolution images. However, the georeferencing accuracy of UAVs is still limited by the low-performance on-board GNSS and INS. This paper investigates automatic geo-registration of an individual UAV image or UAV image blocks by matching the UAV image(s) with a previously taken georeferenced image, such as an individual aerial or satellite image with a height map attached or an aerial orthophoto with a DSM (digital surface model) attached. As the biggest challenge for matching UAV and aerial images is in the large differences in scale and rotation, we propose a novel feature matching method for nadir or slightly tilted images. The method is comprised of a dense feature detection scheme, a one-to-many matching strategy and a global geometric verification scheme. The proposed method is able to find thousands of valid matches in cases where SIFT and ASIFT fail. Those matches can be used to geo-register the whole UAV image block towards the reference image data. When the reference images offer high georeferencing accuracy, the UAV images can also be geolocalized in a global coordinate system. 
A series of experiments involving different scenarios was conducted to validate the proposed method. The results demonstrate that our approach achieves not only decimeter-level registration accuracy, but also comparable global accuracy as the reference images.", "title": "" } ]
scidocsrr
d85aa425e7c3ca40f0275b09af8446bf
A fuzzy spatial coherence-based approach to background/foreground separation for moving object detection
[ { "docid": "00e8c142e7f059c10cd9eabdb78e0120", "text": "Running average method and its modified version are two simple and fast methods for background modeling. In this paper, some weaknesses of the running average method and standard background subtraction are mentioned. Then, a fuzzy approach for background modeling and background subtraction is proposed. For fuzzy background modeling, fuzzy running average is suggested. Background modeling and background subtraction algorithms are very commonly used in vehicle detection systems. To demonstrate the advantages of fuzzy running average and fuzzy background subtraction, these methods and their standard versions are compared in a vehicle detection application. Experimental results show that the fuzzy approach is relatively more accurate than the classical approach.", "title": "" } ]
[ { "docid": "4c5dd43f350955b283f1a04ddab52d41", "text": "This thesis deals with interaction design for a class of upcoming computer technologies for human use characterized by being different from traditional desktop computers in their physical appearance and the contexts in which they are used. These are typically referred to as emerging technologies. Emerging technologies often imply interaction dissimilar from how computers are usually operated. This challenges the scope and applicability of existing knowledge about human-computer interaction design. The thesis focuses on three specific technologies: virtual reality, augmented reality and mobile computer systems. For these technologies, five themes are addressed: current focus of research, concepts, interaction styles, methods and tools. These themes inform three research questions, which guide the conducted research. The thesis consists of five published research papers and a summary. In the summary, current focus of research is addressed from the perspective of research methods and research purpose. Furthermore, the notions of human-computer interaction design and emerging technologies are discussed and two central distinctions are introduced. Firstly, interaction design is divided into two categories with focus on systems and processes respectively. Secondly, the three studied emerging technologies are viewed in relation to immersion into virtual space and mobility in physical space. These distinctions are used to relate the five paper contributions, each addressing one of the three studied technologies with focus on properties of systems or the process of creating them respectively. Three empirical sources contribute to the results. Experiments with interaction design inform the development of concepts and interaction styles suitable for virtual reality, augmented reality and mobile computer systems. 
Experiments with designing interaction inform understanding of how methods and tools support design processes for these technologies. Finally, a literature survey informs a review of existing research, and identifies current focus, limitations and opportunities for future research. The primary results of the thesis are: 1) Current research within human-computer interaction design for the studied emerging technologies focuses on building systems ad-hoc and evaluating them in artificial settings. This limits the generation of cumulative theoretical knowledge. 2) Interaction design for the emerging technologies studied requires the development of new suitable concepts and interaction styles. Suitable concepts describe unique properties and challenges of a technology. Suitable interaction styles respond to these challenges by exploiting the technology’s unique properties. 3) Designing interaction for the studied emerging technologies involves new use situations, a distance between development and target platforms and complex programming. Elements of methods exist, which are useful for supporting the design of interaction, but they are fragmented and do not support the process as a whole. The studied tools do not support the design process as a whole either but support aspects of interaction design by bridging the gulf between development and target platforms and providing advanced programming environments. Human-Machine Interaction Design for Emerging Technologies: Virtual Reality, Augmented Reality and Mobile Computer Systems", "title": "" },
{ "docid": "b04ba2e942121b7a32451f0b0f690553", "text": "Due to the growing number of vehicles on the roads worldwide, road traffic accidents are currently recognized as a major public safety problem. In this context, connected vehicles are considered as the key enabling technology to improve road safety and to foster the emergence of next generation cooperative intelligent transport systems (ITS).
Through the use of wireless communication technologies, the deployment of ITS will enable vehicles to autonomously communicate with other nearby vehicles and roadside infrastructures and will open the door for a wide range of novel road safety and driver assistive applications. However, connecting wireless-enabled vehicles to external entities can make ITS applications vulnerable to various security threats, thus impacting the safety of drivers. This article reviews the current research challenges and opportunities related to the development of secure and safe ITS applications. It first explores the architecture and main characteristics of ITS systems and surveys the key enabling standards and projects. Then, various ITS security threats are analyzed and classified, along with their corresponding cryptographic countermeasures. Finally, a detailed ITS safety application case study is analyzed and evaluated in light of the European ETSI TC ITS standard. An experimental test-bed is presented, and several elliptic curve digital signature algorithms (ECDSA) are benchmarked for signing and verifying ITS safety messages. To conclude, lessons learned, open research challenges and opportunities are discussed. Electronics 2015, 4 381", "title": "" },
{ "docid": "9aa24f6e014ac5104c5b9ff68dc45576", "text": "The development of social networks has made it easy for the general public to communicate with each other rapidly at any time. Such services enable the quick transmission of information, which is their positive side; their negative side, however, must be kept in mind, since misinformation can spread just as quickly. Nowadays, in this era of digitalization, the validation of such information has become a real challenge due to the lack of an information authentication method. In this paper, we design a framework for rumor detection from Facebook events data, which is based on inquiry comments.
The proposed Inquiry Comments Detection Model (ICDM) identifies inquiry comments utilizing a rule-based approach which entails regular expressions to categorize the sentences as an inquiry into those starting with an intransitive verb (like is, am, was, will, would and so on) and also those sentences ending with a question mark. We set the threshold value to compare with the ratio of Inquiry to English comments and identify the rumors. We verified the proposed ICDM on labeled data, collected from snopes.com. Our experiments revealed that the proposed method performed considerably well in comparison to the existing machine learning techniques. The proposed ICDM approach attained better results of 89% precision, 77% recall, and 82% F-measure. We are of the opinion that our experimental findings of this study will be useful for worldwide adoption. Keywords—Social networks; rumors; inquiry comments; question identification", "title": "" },
{ "docid": "153f452486e2eacb9dc1cf95275dd015", "text": "This paper presents a Fuzzy Neural Network (FNN) control system for a traveling-wave ultrasonic motor (TWUSM) driven by a dual mode modulation non-resonant driving circuit. First, the motor configuration and the proposed driving circuit of a TWUSM are introduced. To drive a TWUSM effectively, a novel driving circuit, that simultaneously employs both the driving frequency and phase modulation control scheme, is proposed to provide two-phase balance voltage for a TWUSM. Since the dynamic characteristics and motor parameters of the TWUSM are highly nonlinear and time-varying, a FNN control system is therefore investigated to achieve high-precision speed control. The proposed FNN control system incorporates neuro-fuzzy control and the driving frequency and phase modulation to solve the problem of nonlinearities and variations.
The proposed control system is digitally implemented by a low-cost digital signal processor based microcontroller, hence reducing the system hardware size and cost. The effectiveness of the proposed driving circuit and control system is verified with hardware experiments under the occurrence of uncertainties. In addition, the advantages of the proposed control scheme are indicated in comparison with a conventional proportional-integral control system.", "title": "" }, { "docid": "096bc66bb6f4c04109cf26d9d474421c", "text": "A statistical analysis of full text downloads of articles in Elsevier's ScienceDirect covering all disciplines reveals large differences in download frequencies, their skewness, and their correlation with Scopus-based citation counts, between disciplines, journals, and document types. Download counts tend to be two orders of magnitude higher and less skewedly distributed than citations. A mathematical model based on the sum of two exponentials does not adequately capture monthly download counts. The degree of correlation at the article level within a journal is similar to that at the journal level in the discipline covered by that journal, suggesting that the differences between journals are to a large extent discipline-specific. Despite the fact that in all study journals download and citation counts per article positively correlate, little overlap may exist between the set of articles appearing in the top of the citation distribution and that with the most frequently downloaded ones. 
Usage and citation leaks, bulk downloading, differences between reader and author populations in a subject field, the type of document or its content, differences in obsolescence patterns between downloads and citations, different functions of reading and citing in the research process, all provide possible explanations of differences between download and citation distributions.", "title": "" }, { "docid": "9728b73d9b5075b5b0ee878ddfc9379a", "text": "The security research community has invested significant effort in improving the security of Android applications over the past half decade. This effort has addressed a wide range of problems and resulted in the creation of many tools for application analysis. In this article, we perform the first systematization of Android security research that analyzes applications, characterizing the work published in more than 17 top venues since 2010. We categorize each paper by the types of problems they solve, highlight areas that have received the most attention, and note whether tools were ever publicly released for each effort. Of the released tools, we then evaluate a representative sample to determine how well application developers can apply the results of our community’s efforts to improve their products. We find not only that significant work remains to be done in terms of research coverage but also that the tools suffer from significant issues ranging from lack of maintenance to the inability to produce functional output for applications with known vulnerabilities. We close by offering suggestions on how the community can more successfully move forward.", "title": "" }, { "docid": "1585d7e1f1e6950949dc954c2d0bba51", "text": "The state-of-the-art techniques for aspect-level sentiment analysis focus on feature modeling using a variety of deep neural networks (DNN). Unfortunately, their practical performance may fall short of expectations due to semantic complexity of natural languages. 
Motivated by the observation that linguistic hints (e.g. explicit sentiment words and shift words) can be strong indicators of sentiment, we present a joint framework, SenHint, which integrates the output of deep neural networks and the implication of linguistic hints into a coherent reasoning model based on Markov Logic Network (MLN). In SenHint, linguistic hints are used in two ways: (1) to identify easy instances, whose sentiment can be automatically determined by machine with high accuracy; (2) to capture implicit relations between aspect polarities. We also empirically evaluate the performance of SenHint on both English and Chinese benchmark datasets. Our experimental results show that SenHint can effectively improve accuracy compared with the state-of-the-art alternatives.", "title": "" }, { "docid": "bf445955186e2f69f4ef182850090ffc", "text": "The majority of online display ads are served through real-time bidding (RTB) --- each ad display impression is auctioned off in real-time when it is just being generated from a user visit. To place an ad automatically and optimally, it is critical for advertisers to devise a learning algorithm to cleverly bid an ad impression in real-time. Most previous works consider the bid decision as a static optimization problem of either treating the value of each impression independently or setting a bid price to each segment of ad volume. However, the bidding for a given ad campaign would repeatedly happen during its life span before the budget runs out. As such, each bid is strategically correlated by the constrained budget and the overall effectiveness of the campaign (e.g., the rewards from generated clicks), which is only observed after the campaign has completed. Thus, it is of great interest to devise an optimal bidding strategy sequentially so that the campaign budget can be dynamically allocated across all the available impressions on the basis of both the immediate and future rewards. 
In this paper, we formulate the bid decision process as a reinforcement learning problem, where the state space is represented by the auction information and the campaign's real-time parameters, while an action is the bid price to set. By modeling the state transition via auction competition, we build a Markov Decision Process framework for learning the optimal bidding policy to optimize the advertising performance in the dynamic real-time bidding environment. Furthermore, the scalability problem from the large real-world auction volume and campaign budget is well handled by state value approximation using neural networks. The empirical study on two large-scale real-world datasets and the live A/B testing on a commercial platform have demonstrated the superior performance and high efficiency compared to state-of-the-art methods.", "title": "" }, { "docid": "63dcb42d456ab4b6512c47437e354f7b", "text": "The deep learning revolution brought us an extensive array of neural network architectures that achieve state-of-the-art performance in a wide variety of Computer Vision tasks including among others classification, detection and segmentation. In parallel, we have also been observing an unprecedented demand in computational and memory requirements, rendering the efficient use of neural networks in low-powered devices virtually unattainable. Towards this end, we propose a threestage compression and acceleration pipeline that sparsifies, quantizes and entropy encodes activation maps of Convolutional Neural Networks. Sparsification increases the representational power of activation maps leading to both acceleration of inference and higher model accuracy. Inception-V3 and MobileNet-V1 can be accelerated by as much as 1.6× with an increase in accuracy of 0.38% and 0.54% on the ImageNet and CIFAR-10 datasets respectively. Quantizing and entropy coding the sparser activation maps lead to higher compression over the baseline, reducing the memory cost of the network execution. 
Inception-V3 and MobileNet-V1 activation maps, quantized to 16 bits, are compressed by as much as 6× with an increase in accuracy of 0.36% and 0.55% respectively.", "title": "" },
{ "docid": "023fa0ac94b2ea1740f1bbeb8de64734", "text": "The establishment of an endosymbiotic relationship typically seems to be driven through complementation of the host's limited metabolic capabilities by the biochemical versatility of the endosymbiont. The most significant examples of endosymbiosis are represented by the endosymbiotic acquisition of plastids and mitochondria, introducing photosynthesis and respiration to eukaryotes. However, there are numerous other endosymbioses that evolved more recently and repeatedly across the tree of life. Recent advances in genome sequencing technology have led to a better understanding of the physiological basis of many endosymbiotic associations. This review focuses on endosymbionts in protists (unicellular eukaryotes). Selected examples illustrate the incorporation of various new biochemical functions, such as photosynthesis, nitrogen fixation and recycling, and methanogenesis, into protist hosts by prokaryotic endosymbionts. Furthermore, photosynthetic eukaryotic endosymbionts display a great diversity of modes of integration into different protist hosts. In conclusion, endosymbiosis seems to represent a general evolutionary strategy of protists to acquire novel biochemical functions and is thus an important source of genetic innovation.", "title": "" },
{ "docid": "2d492d66d0abee5d5dd41cf73a83e943", "text": "Using a novel replacement gate SOI FinFET device structure, we have fabricated FinFETs with fin width (D_Fin) of 4nm, fin pitch (FP) of 40nm, and gate length (L_G) of 20nm. With this structure, we have achieved arrays of thousands of fins for D_Fin down to 4nm with robust yield and structural integrity.
We observe performance degradation, increased variability, and V_T shift as D_Fin is reduced. Capacitance measurements agree with quantum confinement behavior which has been predicted to pose a fundamental limit to scaling FinFETs below 10nm L_G.", "title": "" },
{ "docid": "b3a775719d87c3837de671001c77568b", "text": "Regularization of Deep Neural Networks (DNNs) for the sake of improving their generalization capability is important and challenging. The development in this line benefits the theoretical foundation of DNNs and promotes their usability in different areas of artificial intelligence. In this paper, we investigate the role of Rademacher complexity in improving generalization of DNNs and propose a novel regularizer rooted in Local Rademacher Complexity (LRC). While Rademacher complexity is well known as a distribution-free complexity measure of function class that helps boost generalization of statistical learning methods, extensive study shows that LRC, its counterpart focusing on a restricted function class, leads to sharper convergence rates and potentially better generalization given a finite training sample. Our LRC based regularizer is developed by estimating the complexity of the function class centered at the minimizer of the empirical loss of DNNs. Experiments on various types of network architecture demonstrate the effectiveness of LRC regularization in improving generalization. Moreover, our method features the state-of-the-art result on the CIFAR-10 dataset with network architecture found by neural architecture search.", "title": "" },
{ "docid": "c41038d0e3cf34e8a1dcba07a86cce9a", "text": "Alzheimer's disease (AD) is a major neurodegenerative disease and is one of the most common causes of dementia in older adults. Among several factors, neuroinflammation is known to play a critical role in the pathogenesis of chronic neurodegenerative diseases.
In particular, studies of brains affected by AD show a clear involvement of several inflammatory pathways. Furthermore, depending on the brain regions affected by the disease, the nature and the effect of inflammation can vary. Here, in order to shed more light on distinct and common features of inflammation in different brain regions affected by AD, we employed a computational approach to analyze gene expression data of six site-specific neuronal populations from AD patients. Our network based computational approach is driven by the concept that a sustained inflammatory environment could result in neurotoxicity leading to the disease. Thus, our method aims to infer intracellular signaling pathways/networks that are likely to be constantly activated or inhibited due to persistent inflammatory conditions. The computational analysis identified several inflammatory mediators, such as tumor necrosis factor alpha (TNF-a)-associated pathway, as key upstream receptors/ligands that are likely to transmit sustained inflammatory signals. Further, the analysis revealed that several inflammatory mediators were mainly region specific with few commonalities across different brain regions. Taken together, our results show that our integrative approach aids identification of inflammation-related signaling pathways that could be responsible for the onset or the progression of AD and can be applied to study other neurodegenerative diseases. Furthermore, such computational approaches can enable the translation of clinical omics data toward the development of novel therapeutic strategies for neurodegenerative diseases.", "title": "" }, { "docid": "4cbec8031ea32380675b1d8dff107cab", "text": "Quorum-sensing bacteria communicate with extracellular signal molecules called autoinducers. This process allows community-wide synchronization of gene expression. A screen for additional components of the Vibrio harveyi and Vibrio cholerae quorum-sensing circuits revealed the protein Hfq. 
Hfq mediates interactions between small, regulatory RNAs (sRNAs) and specific messenger RNA (mRNA) targets. These interactions typically alter the stability of the target transcripts. We show that Hfq mediates the destabilization of the mRNA encoding the quorum-sensing master regulators LuxR (V. harveyi) and HapR (V. cholerae), implicating an sRNA in the circuit. Using a bioinformatics approach to identify putative sRNAs, we identified four candidate sRNAs in V. cholerae. The simultaneous deletion of all four sRNAs is required to stabilize hapR mRNA. We propose that Hfq, together with these sRNAs, creates an ultrasensitive regulatory switch that controls the critical transition into the high cell density, quorum-sensing mode.", "title": "" }, { "docid": "329487a07d4f71e30b64da5da1c6684a", "text": "The purpose was to investigate the effect of 25 weeks heavy strength training in young elite cyclists. Nine cyclists performed endurance training and heavy strength training (ES) while seven cyclists performed endurance training only (E). ES, but not E, resulted in increases in isometric half squat performance, lean lower body mass, peak power output during Wingate test, peak aerobic power output (W(max)), power output at 4 mmol L(-1)[la(-)], mean power output during 40-min all-out trial, and earlier occurrence of peak torque during the pedal stroke (P < 0.05). ES achieved superior improvements in W(max) and mean power output during 40-min all-out trial compared with E (P < 0.05). The improvement in 40-min all-out performance was associated with the change toward achieving peak torque earlier in the pedal stroke (r = 0.66, P < 0.01). Neither of the groups displayed alterations in VO2max or cycling economy. 
In conclusion, heavy strength training leads to improved cycling performance in elite cyclists as evidenced by a superior effect size of ES training vs E training on relative improvements in power output at 4 mmol L(-1)[la(-)], peak power output during 30-s Wingate test, W(max), and mean power output during 40-min all-out trial.", "title": "" }, { "docid": "a059fc50eb0e4cab21b04a75221b3160", "text": "This paper presents the design of an X-band active antenna self-oscillating down-converter mixer in substrate integrated waveguide technology (SIW). Electromagnetic analysis is used to design a SIW cavity backed patch antenna with resonance at 9.9 GHz used as the receiving antenna, and subsequently harmonic balance analysis combined with optimization techniques are used to synthesize a self-oscillating mixer with oscillating frequency of 6.525 GHz. The conversion gain is optimized for the mixing product involving the second harmonic of the oscillator and the RF input signal, generating an IF frequency of 3.15 GHz to have conversion gain in at least 600 MHz bandwidth around the IF frequency. The active antenna circuit finds application in compact receiver front-end modules as well as active self-oscillating mixer arrays.", "title": "" }, { "docid": "d5e5d79b8a06d4944ee0c3ddcd84ce4c", "text": "Recent years have observed a significant progress in information retrieval and natural language processing with deep learning technologies being successfully applied into almost all of their major tasks. The key to the success of deep learning is its capability of accurately learning distributed representations (vector representations or structured arrangement of them) of natural language expressions such as sentences, and effectively utilizing the representations in the tasks. 
This tutorial aims at summarizing and introducing the results of recent research on deep learning for information retrieval, in order to stimulate and foster more significant research and development work on the topic in the future.\n The tutorial mainly consists of three parts. In the first part, we introduce the fundamental techniques of deep learning for natural language processing and information retrieval, such as word embedding, recurrent neural networks, and convolutional neural networks. In the second part, we explain how deep learning, particularly representation learning techniques, can be utilized in fundamental NLP and IR problems, including matching, translation, classification, and structured prediction. In the third part, we describe how deep learning can be used in specific application tasks in details. The tasks are search, question answering (from either documents, database, or knowledge base), and image retrieval.", "title": "" }, { "docid": "094906bcd076ae3207ba04755851c73a", "text": "The paper describes our approach for SemEval-2018 Task 1: Affect Detection in Tweets. We perform experiments with manually compelled sentiment lexicons and word embeddings. We test their performance on twitter affect detection task to determine which features produce the most informative representation of a sentence. We demonstrate that general-purpose word embeddings produces more informative sentence representation than lexicon features. However, combining lexicon features with embeddings yields higher performance than embeddings alone.", "title": "" }, { "docid": "598744a94cbff466c42e6788d5e23a79", "text": "The energy consumption of DRAM is a critical concern in modern computing systems. Improvements in manufacturing process technology have allowed DRAM vendors to lower the DRAM supply voltage conservatively, which reduces some of the DRAM energy consumption. We would like to reduce the DRAM supply voltage more aggressively, to further reduce energy. 
Aggressive supply voltage reduction requires a thorough understanding of the effect voltage scaling has on DRAM access latency and DRAM reliability.\n In this paper, we take a comprehensive approach to understanding and exploiting the latency and reliability characteristics of modern DRAM when the supply voltage is lowered below the nominal voltage level specified by DRAM standards. Using an FPGA-based testing platform, we perform an experimental study of 124 real DDR3L (low-voltage) DRAM chips manufactured recently by three major DRAM vendors. We find that reducing the supply voltage below a certain point introduces bit errors in the data, and we comprehensively characterize the behavior of these errors. We discover that these errors can be avoided by increasing the latency of three major DRAM operations (activation, restoration, and precharge). We perform detailed DRAM circuit simulations to validate and explain our experimental findings. We also characterize the various relationships between reduced supply voltage and error locations, stored data patterns, DRAM temperature, and data retention.\n Based on our observations, we propose a new DRAM energy reduction mechanism, called Voltron. The key idea of Voltron is to use a performance model to determine by how much we can reduce the supply voltage without introducing errors and without exceeding a user-specified threshold for performance loss. Our evaluations show that Voltron reduces the average DRAM and system energy consumption by 10.5% and 7.3%, respectively, while limiting the average system performance loss to only 1.8%, for a variety of memory-intensive quad-core workloads. We also show that Voltron significantly outperforms prior dynamic voltage and frequency scaling mechanisms for DRAM.", "title": "" } ]
scidocsrr
0bd9c78ab4332552b8a0deee10c732db
Programming models for sensor networks: A survey
[ { "docid": "f3574f1e3f0ef3a5e1d20cb15b040105", "text": "Composed of tens of thousands of tiny devices with very limited resources (\"motes\"), sensor networks are subject to novel systems problems and constraints. The large number of motes in a sensor network means that there will often be some failing nodes; networks must be easy to repopulate. Often there is no feasible method to recharge motes, so energy is a precious resource. Once deployed, a network must be reprogrammable although physically unreachable, and this reprogramming can be a significant energy cost.We present Maté, a tiny communication-centric virtual machine designed for sensor networks. Maté's high-level interface allows complex programs to be very short (under 100 bytes), reducing the energy cost of transmitting new programs. Code is broken up into small capsules of 24 instructions, which can self-replicate through the network. Packet sending and reception capsules enable the deployment of ad-hoc routing and data aggregation algorithms. Maté's concise, high-level program representation simplifies programming and allows large networks to be frequently reprogrammed in an energy-efficient manner; in addition, its safe execution environment suggests a use of virtual machines to provide the user/kernel boundary on motes that have no hardware protection mechanisms.", "title": "" } ]
[ { "docid": "0f3cad05c9c267f11c4cebd634a12c59", "text": "The recent, exponential rise in adoption of the most disparate Internet of Things (IoT) devices and technologies has also reached Agriculture and Food (Agri-Food) supply chains, drumming up substantial research and innovation interest towards developing reliable, auditable and transparent traceability systems. Current IoT-based traceability and provenance systems for Agri-Food supply chains are built on top of centralized infrastructures and this leaves room for unsolved issues and major concerns, including data integrity, tampering and single points of failure. Blockchains, the distributed ledger technology underpinning cryptocurrencies such as Bitcoin, represent a new and innovative technological approach to realizing decentralized trustless systems. Indeed, the inherent properties of this digital technology provide fault-tolerance, immutability, transparency and full traceability of the stored transaction records, as well as coherent digital representations of physical assets and autonomous transaction executions. This paper presents AgriBlockIoT, a fully decentralized, blockchain-based traceability solution for Agri-Food supply chain management, able to seamlessly integrate IoT devices producing and consuming digital data along the chain. To effectively assess AgriBlockIoT, first, we defined a classical use case within the given vertical domain, namely from-farm-to-fork. Then, we developed and deployed this use case, achieving traceability using two different blockchain implementations, namely Ethereum and Hyperledger Sawtooth. Finally, we evaluated and compared the performance of both the deployments, in terms of latency, CPU, and network usage, also highlighting their main pros and cons.", "title": "" }, { "docid": "49fa638e44d13695217c7f1bbb3f6ebd", "text": "Kernel methods enable the direct usage of structured representations of textual data during language learning and inference tasks. 
Expressive kernels, such as Tree Kernels, achieve excellent performance in NLP. On the other side, deep neural networks have been demonstrated effective in automatically learning feature representations during training. However, their input is tensor data, i.e., they cannot manage rich structured information. In this paper, we show that expressive kernels and deep neural networks can be combined in a common framework in order to (i) explicitly model structured information and (ii) learn non-linear decision functions. We show that the input layer of a deep architecture can be pre-trained through the application of the Nyström low-rank approximation of kernel spaces. The resulting “kernelized” neural network achieves state-of-the-art accuracy in three different tasks.", "title": "" }, { "docid": "4b68d3c94ef785f80eac9c4c6ca28cfe", "text": "We address the problem of recovering a common set of covariates that are relevant simultaneously to several classification problems. By penalizing the sum of l2-norms of the blocks of coefficients associated with each covariate across different classification problems, similar sparsity patterns in all models are encouraged. To take computational advantage of the sparsity of solutions at high regularization levels, we propose a blockwise path-following scheme that approximately traces the regularization path. As the regularization coefficient decreases, the algorithm maintains and updates concurrently a growing set of covariates that are simultaneously active for all problems. We also show how to use random projections to extend this approach to the problem of joint subspace selection, where multiple predictors are found in a common low-dimensional subspace. We present theoretical results showing that this random projection approach converges to the solution yielded by trace-norm regularization. 
Finally, we present a variety of experimental results exploring joint covariate selection and joint subspace selection, comparing the path-following approach to competing algorithms in terms of prediction accuracy and running time.", "title": "" }, { "docid": "54b43b5e3545710dfe37f55b93084e34", "text": "Cloud computing is a model for delivering information technology services, wherein resources are retrieved from the Internet through web-based tools and applications instead of a direct connection to a server. The capability to provision and release cloud computing resources with minimal management effort or service provider interaction led to the rapid increase of the use of cloud computing. Therefore, balancing cloud computing resources to provide better performance and services to end users is important. Load balancing in cloud computing means balancing three important stages through which a request is processed. The three stages are data center selection, virtual machine scheduling, and task scheduling at a selected data center. User task scheduling plays a significant role in improving the performance of cloud services. This paper presents a review of various energy-efficient task scheduling methods in a cloud environment. A brief analysis of various scheduling parameters considered in these methods is also presented. The results show that the best power-saving percentage level can be achieved by using both DVFS and DNS.", "title": "" }, { "docid": "ca8bb290339946e2d3d3e14c01023aa5", "text": "OBJECTIVE\nTo establish a centile chart of cervical length between 18 and 32 weeks of gestation in a low-risk population of women.\n\n\nMETHODS\nA prospective longitudinal cohort study of women with a low risk, singleton pregnancy using public healthcare facilities in Cape Town, South Africa. Transvaginal measurement of cervical length was performed between 16 and 32 weeks of gestation and used to construct centile charts. 
The distribution of cervical length was determined for gestational ages and was used to establish estimates of longitudinal percentiles. Centile charts were constructed for nulliparous and multiparous women together and separately.\n\n\nRESULTS\nCentile estimation was based on data from 344 women. Percentiles showed progressive cervical shortening with increasing gestational age. Averaged over the entire follow-up period, mean cervical length was 1.5 mm shorter in nulliparous women compared with multiparous women (95% CI, 0.4-2.6).\n\n\nCONCLUSIONS\nEstablishment of longitudinal reference values of cervical length in a low-risk population will contribute toward a better understanding of cervical length in women at risk for preterm labor.", "title": "" }, { "docid": "2d0cc17115692f1e72114c636ba74811", "text": "A new inline coupling topology for narrowband helical resonator filters is proposed that allows to introduce selectively located transmission zeros (TZs) in the stopband. We show that a pair of helical resonators arranged in an interdigital configuration can realize a large range of in-band coupling coefficient values and also selectively position a TZ in the stopband. The proposed technique dispenses the need for auxiliary elements, so that the size, complexity, power handling and insertion loss of the filter are not compromised. A second order prototype filter with dimensions of the order of 0.05λ, power handling capability up to 90 W, measured insertion loss of 0.18 dB and improved selectivity is presented.", "title": "" }, { "docid": "b5d3c7822f2ba9ca89d474dda5f180b6", "text": "We consider a class of a nested optimization problems involving inner and outer objectives. We observe that by taking into explicit account the optimization dynamics for the inner objective it is possible to derive a general framework that unifies gradient-based hyperparameter optimization and meta-learning (or learning-to-learn). 
Depending on the specific setting, the variables of the outer objective take either the meaning of hyperparameters in a supervised learning problem or parameters of a meta-learner. We show that some recently proposed methods in the latter setting can be instantiated in our framework and tackled with the same gradient-based algorithms. Finally, we discuss possible design patterns for learning-to-learn and present encouraging preliminary experiments for few-shot learning.", "title": "" }, { "docid": "d8752c40782d8189d454682d1d30738e", "text": "This article reviews the empirical literature on personality, leadership, and organizational effectiveness to make 3 major points. First, leadership is a real and vastly consequential phenomenon, perhaps the single most important issue in the human sciences. Second, leadership is about the performance of teams, groups, and organizations. Good leadership promotes effective team and group performance, which in turn enhances the well-being of the incumbents; bad leadership degrades the quality of life for everyone associated with it. Third, personality predicts leadership—who we are is how we lead—and this information can be used to select future leaders or improve the performance of current incumbents.", "title": "" }, { "docid": "1461157186183f11d7270d89eecd926a", "text": "This review analyzes trends and commonalities among prominent theories of media effects. On the basis of exemplary meta-analyses of media effects and bibliometric studies of well-cited theories, we identify and discuss five features of media effects theories as well as their empirical support. Each of these features specifies the conditions under which media may produce effects on certain types of individuals. Our review ends with a discussion of media effects in newer media environments. 
This includes theories of computer-mediated communication, the development of which appears to share a similar pattern of reformulation from unidirectional, receiver-oriented views, to theories that recognize the transactional nature of communication. We conclude by outlining challenges and promising avenues for future research.", "title": "" }, { "docid": "1a69b777e03d2d2589dd9efb9cda2a10", "text": "Three-dimensional measurement of joint motion is a promising tool for clinical evaluation and therapeutic treatment comparisons. Although many devices exist for joint kinematics assessment, there is a need for a system that could be used in routine practice. Such a system should be accurate, ambulatory, and easy to use. The combination of gyroscopes and accelerometers (i.e., inertial measurement unit) has proven to be suitable for unrestrained measurement of orientation during a short period of time (i.e., few minutes). However, due to their inability to detect horizontal reference, inertial-based systems generally fail to measure differential orientation, a prerequisite for computing the three-dimensional knee joint angle recommended by the International Society of Biomechanics (ISB). A simple method based on a leg movement is proposed here to align two inertial measurement units fixed on the thigh and shank segments. Based on the combination of the former alignment and a fusion algorithm, the three-dimensional knee joint angle is measured and compared with a magnetic motion capture system during walking. The proposed system is suitable to measure the absolute knee flexion/extension and abduction/adduction angles with mean (SD) offset errors of -1 degree (1 degree ) and 0 degrees (0.6 degrees ) and mean (SD) root mean square (RMS) errors of 1.5 degrees (0.4 degrees ) and 1.7 degrees (0.5 degrees ). 
The system is also suitable for the relative measurement of knee internal/external rotation (mean (SD) offset error of 3.4 degrees (2.7 degrees )) with a mean (SD) RMS error of 1.6 degrees (0.5 degrees ). The method described in this paper can be easily adapted in order to measure other joint angular displacements such as elbow or ankle.", "title": "" }, { "docid": "88def96b7287ce217f1abf8fb1b413a5", "text": "Designing a metric manually for unsupervised sequence generation tasks, such as text generation, is essentially difficult. In a such situation, learning a metric of a sequence from data is one possible solution. The previous study, SeqGAN, proposed the framework for unsupervised sequence generation, in which a metric is learned from data, and a generator is optimized with regard to the learned metric with policy gradient, inspired by generative adversarial nets (GANs) and reinforcement learning. In this paper, we make two proposals to learn better metric than SeqGAN’s: partial reward function and expert-based reward function training. The partial reward function is a reward function for a partial sequence of a certain length. SeqGAN employs a reward function for completed sequence only. By combining long-scale and short-scale partial reward functions, we expect a learned metric to be able to evaluate a partial correctness as well as a coherence of a sequence, as a whole. In expert-based reward function training, a reward function is trained to discriminate between an expert (or true) sequence and a fake sequence that is produced by editing an expert sequence. Expert-based reward function training is not a kind of GAN frameworks. This makes the optimization of the generator easier. We examine the effect of the partial reward function and expert-based reward function training on synthetic data and real text data, and show improvements over SeqGAN and the model trained with MLE. 
Specifically, whereas SeqGAN gains 0.42 improvement of NLL over MLE on synthetic data, our best model gains 3.02 improvement, and whereas SeqGAN gains 0.029 improvement of BLEU over MLE, our best model gains 0.250 improvement.", "title": "" }, { "docid": "2de3078c249eb87b041a2a74b6efcfdf", "text": "To lay the groundwork for devising, improving and implementing strategies to prevent or delay the onset of disability in the elderly, we conducted a systematic literature review of longitudinal studies published between 1985 and 1997 that reported statistical associations between individual base-line risk factors and subsequent functional status in community-living older persons. Functional status decline was defined as disability or physical function limitation. We used MEDLINE, PSYCINFO, SOCA, EMBASE, bibliographies and expert consultation to select the articles, 78 of which met the selection criteria. Risk factors were categorized into 14 domains and coded by two independent abstractors. Based on the methodological quality of the statistical analyses between risk factors and functional outcomes (e.g. control for base-line functional status, control for confounding, attrition rate), the strength of evidence was derived for each risk factor. The association of functional decline with medical findings was also analyzed. The highest strength of evidence for an increased risk in functional status decline was found for (alphabetical order) cognitive impairment, depression, disease burden (comorbidity), increased and decreased body mass index, lower extremity functional limitation, low frequency of social contacts, low level of physical activity, no alcohol use compared to moderate use, poor self-perceived health, smoking and vision impairment. The review revealed that some risk factors (e.g. nutrition, physical environment) have been neglected in past research. 
This review will help investigators set priorities for future research of the Disablement Process, plan health and social services for elderly persons and develop more cost-effective programs for preventing disability among them.", "title": "" }, { "docid": "96af2e34acf9f1e9c0c57cc24795d0f9", "text": "Poker games provide a useful testbed for modern Artificial Intelligence techniques. Unlike many classical game domains such as chess and checkers, poker includes elements of imperfect information, stochastic events, and one or more adversarial agents to interact with. Furthermore, in poker it is possible to win or lose by varying degrees. Therefore, it can be advantageous to adapt one's strategy to exploit a weak opponent. A poker agent must address these challenges, acting in uncertain environments and exploiting other agents, in order to be highly successful. Arguably, poker games more closely resemble many real world problems than games with perfect information. In this brief paper, we outline Polaris, a Texas Hold'em poker program. Polaris recently defeated top human professionals at the Man vs. Machine Poker Championship and it is currently the reigning AAAI Computer Poker Competition winner in the limit equilibrium and no-limit events.", "title": "" }, { "docid": "80c9f1d983bc3ddfd73cdf2abc936600", "text": "Jazz guitar solos are improvised melody lines played on one instrument on top of a chordal accompaniment (comping). As the improvisation happens spontaneously, a reference score is non-existent, only a lead sheet. There are situations, however, when one would like to have the original melody lines in the form of notated music, see the Real Book. The motivation is either for the purpose of practice and imitation or for musical analysis. In this work, an automatic transcriber for jazz guitar solos is developed. It resorts to a very intuitive representation of tonal music signals: the pitchgram. 
No instrument-specific modeling is involved, so the transcriber should be applicable to other pitched instruments as well. Neither is there the need to learn any note profiles prior to or during the transcription. Essentially, the proposed transcriber is a decision tree, thus a classifier, with a depth of 3. It has a (very) low computational complexity and can be run on-line. The decision rules can be refined or extended with no or little musical education. The transcriber’s performance is evaluated on a set of ten jazz solo excerpts and compared with a state-of-the-art transcription system for the guitar plus PYIN. We achieve an improvement of 34 % w.r.t. the reference system and 19 % w.r.t. PYIN in terms of the F-measure. Another measure of accuracy, the error score, attests that the number of erroneous pitch detections is reduced by more than 50 % w.r.t. the reference system and by 45 % w.r.t. PYIN.", "title": "" }, { "docid": "c0cbea5f38a04e0d123fc51af30d08c0", "text": "This brief presents a high-efficiency current-regulated charge pump for a white light-emitting diode driver. The charge pump incorporates no series current regulator, unlike conventional voltage charge pump circuits. Output current regulation is accomplished by the proposed pumping current control. The experimental system, with two 1-muF flying and load capacitors, delivers a regulated 20-mA current from an input supply voltage of 2.8-4.2 V. The measured variation is less than 0.6% at a pumping frequency of 200 kHz. The active area of the designed chip is 0.43 mm2 in a 0.5-mum CMOS process.", "title": "" }, { "docid": "334e97a1f50b5081ac08651c1d7ed943", "text": "Veterans of all war eras have a high rate of chronic disease, mental health disorders, and chronic multi-symptom illnesses (CMI).(1-3) Many veterans report symptoms that affect multiple biological systems as opposed to isolated disease states. 
Standard medical treatments often target isolated disease states such as headaches, insomnia, or back pain and at times may miss the more complex, multisystem dysfunction that has been documented in the veteran population. Research has shown that veterans have complex symptomatology involving physical, cognitive, psychological, and behavioral disturbances, such as difficult to diagnose pain patterns, irritable bowel syndrome, chronic fatigue, anxiety, depression, sleep disturbance, or neurocognitive dysfunction.(2-4) Meditation and acupuncture are each broad-spectrum treatments designed to target multiple biological systems simultaneously, and thus, may be well suited for these complex chronic illnesses. The emerging literature indicates that complementary and integrative medicine (CIM) approaches augment standard medical treatments to enhance positive outcomes for those with chronic disease, mental health disorders, and CMI.(5-12.)", "title": "" }, { "docid": "a6a98d0599c1339c1f2c6a6c7525b843", "text": "We consider a generalized version of the Steiner problem in graphs, motivated by the wire routing phase in physical VLSI design: given a connected, undirected distance graph with required classes of vertices and Steiner vertices, find a shortest connected subgraph containing at least one vertex of each required class. We show that this problem is NP-hard, even if there are no Steiner vertices and the graph is a tree. Moreover, the same complexity result holds if the input class Steiner graph additionally is embedded in a unit grid, if each vertex has degree at most three, and each class consists of no more than three vertices. For similar restricted versions, we prove MAX SNP-hardness and we show that there exists no polynomial-time approximation algorithm with a constant bound on the relative error, unless P = NP. 
We propose two efficient heuristics computing different approximate solutions in time O(|E| + |V| log |V|) and in time O(c(|E| + |V| log |V|)), respectively, where E is the set of edges in the given graph, V is the set of vertices, and c is the number of classes. We present some promising implementation results.", "title": "" }, { "docid": "c9f2fd6bdcca5e55c5c895f65768e533", "text": "We implemented live-textured geometry model creation with immediate coverage feedback visualizations in AR on the Microsoft HoloLens. A user walking and looking around a physical space can create a textured model of the space, ready for remote exploration and AR collaboration. Out of the box, a HoloLens builds a triangle mesh of the environment while scanning and being tracked in a new environment. The mesh contains vertices, triangles, and normals, but not color. We take the video stream from the color camera and use it to color a UV texture to be mapped to the mesh. Due to the limited graphics memory of the HoloLens, we use a fixed-size texture. Since the mesh generation dynamically changes in real time, we use an adaptive mapping scheme that evenly distributes every triangle of the dynamic mesh onto the fixed-size texture and adapts to new geometry without compromising existing color data. Occlusion is also considered. The user can walk around their environment and continuously fill in the texture while growing the mesh in real-time. We describe our texture generation algorithm and illustrate benefits and limitations of our system with example modeling sessions. 
Having first-person immediate AR feedback on the quality of modeled physical infrastructure, both in terms of mesh resolution and texture quality, helps the creation of high-quality colored meshes with this standalone wireless device and a fixed memory footprint in real-time.", "title": "" }, { "docid": "160726aa34ba677292a2ae14666727e8", "text": "Child sex tourism is an obscure industry where the tourist‟s primary purpose is to engage in a sexual experience with a child. Under international legislation, tourism with the intent of having sexual relations with a minor is in violation of the UN Convention of the Rights of a Child. The intent and act is a crime and in violation of human rights. This paper examines child sex tourism in the Philippines, a major destination country for the purposes of child prostitution. The purpose is to bring attention to the atrocities that occur under the guise of tourism. It offers a definition of the crisis, a description of the victims and perpetrators, and a discussion of the social and cultural factors that perpetuate the problem. Research articles and reports from non-government organizations, advocacy groups, governments and educators were examined. Although definitional challenges did emerge, it was found that several of the articles and reports varied little in their definitions of child sex tourism and in the descriptions of the victims and perpetrators. A number of differences emerged that identified the social and cultural factors responsible for the creation and perpetuation of the problem.", "title": "" } ]
scidocsrr