query_id (stringlengths 32) | query (stringlengths 5–5.38k) | positive_passages (listlengths 1–26) | negative_passages (listlengths 7–100) | subset (stringclasses 7 values) |
---|---|---|---|---|
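The rows below follow the column schema above; each passage entry is an object with docid, text, and title fields. As a rough illustration only (the file name and the JSON Lines layout are assumptions, not part of this dump), rows with this schema could be loaded and inspected in Python like so:

```python
import json

# Hypothetical path: assumes each table row has been exported as one JSON object
# per line, with the field names from the header above: query_id, query,
# positive_passages, negative_passages, subset. Each passage is a dict with
# docid, text, and title keys.
def iter_rows(path="rows.jsonl"):
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

if __name__ == "__main__":
    for row in iter_rows():
        print(row["query_id"], "|", row["subset"])
        print("query:", row["query"][:80])
        print("positives:", len(row["positive_passages"]),
              "negatives:", len(row["negative_passages"]))
        break  # inspect only the first row
```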
51f3961336efb81b85462a9fd239944b | A model for improved association of radar and camera objects in an indoor environment | [
{
"docid": "8e18fa3850177d016a85249555621723",
"text": "Obstacle fusion algorithms usually perform obstacle association and gating in order to improve the obstacle position if it was detected by multiple sensors. However, this strategy is not common in multi sensor occupancy grid fusion. Thus, the quality of the fused grid, in terms of obstacle position accuracy, largely depends on the sensor with the lowest accuracy. In this paper an efficient method to associate obstacles across sensor grids is proposed. Imprecise sensors are discounted locally in cells where a more accurate sensor, that detected the same obstacle, derived free space. Furthermore, fixed discount factors to optimize false negative and false positive rates are used. Because of its generic formulation with the covariance of each sensor grid, the method is scalable to any sensor setup. The quantitative evaluation with a highly precise navigation map shows an increased obstacle position accuracy compared to standard evidential occupancy grid fusion.",
"title": ""
}
] | [
{
"docid": "00eeceba7118e7a8a2f68deadc612f14",
"text": "I n the growing fields of wearable robotics, rehabilitation robotics, prosthetics, and walking robots, variable stiffness actuators (VSAs) or adjustable compliant actuators are being designed and implemented because of their ability to minimize large forces due to shocks, to safely interact with the user, and their ability to store and release energy in passive elastic elements. This review article describes the state of the art in the design of actuators with adaptable passive compliance. This new type of actuator is not preferred for classical position-controlled applications such as pick and place operations but is preferred in novel robots where safe human– robot interaction is required or in applications where energy efficiency must be increased by adapting the actuator’s resonance frequency. The working principles of the different existing designs are explained and compared. The designs are divided into four groups: equilibrium-controlled stiffness, antagonistic-controlled stiffness, structure-controlled stiffness (SCS), and mechanically controlled stiffness. In classical robotic applications, actuators are preferred to be as stiff as possible to make precise position movements or trajectory tracking control easier (faster systems with high bandwidth). The biological counterpart is the muscle that has superior functional performance and a neuromechanical control system that is much more advanced at adapting and tuning its parameters. The superior power-to-weight ratio, force-toweight ratio, compliance, and control of muscle, when compared with traditional robotic actuators, are the main barriers for the development of machines that can match the motion, safety, and energy efficiency of human or other animals. One of the key differences of these systems is the compliance or springlike behavior found in biological systems [1]. Although such compliant",
"title": ""
},
{
"docid": "b910de28ecbfa82713b30f5918eaae80",
"text": "Raman microscopy is a non-destructive technique requiring minimal sample preparation that can be used to measure the chemical properties of the mineral and collagen parts of bone simultaneously. Modern Raman instruments contain the necessary components and software to acquire the standard information required in most bone studies. The spatial resolution of the technique is about a micron. As it is non-destructive and small samples can be used, it forms a useful part of a bone characterisation toolbox.",
"title": ""
},
{
"docid": "a84ee8a0f06e07abd53605bf5b542519",
"text": "Abeta peptide accumulation is thought to be the primary event in the pathogenesis of Alzheimer's disease (AD), with downstream neurotoxic effects including the hyperphosphorylation of tau protein. Glycogen synthase kinase-3 (GSK-3) is increasingly implicated as playing a pivotal role in this amyloid cascade. We have developed an adult-onset Drosophila model of AD, using an inducible gene expression system to express Arctic mutant Abeta42 specifically in adult neurons, to avoid developmental effects. Abeta42 accumulated with age in these flies and they displayed increased mortality together with progressive neuronal dysfunction, but in the apparent absence of neuronal loss. This fly model can thus be used to examine the role of events during adulthood and early AD aetiology. Expression of Abeta42 in adult neurons increased GSK-3 activity, and inhibition of GSK-3 (either genetically or pharmacologically by lithium treatment) rescued Abeta42 toxicity. Abeta42 pathogenesis was also reduced by removal of endogenous fly tau; but, within the limits of detection of available methods, tau phosphorylation did not appear to be altered in flies expressing Abeta42. The GSK-3-mediated effects on Abeta42 toxicity appear to be at least in part mediated by tau-independent mechanisms, because the protective effect of lithium alone was greater than that of the removal of tau alone. Finally, Abeta42 levels were reduced upon GSK-3 inhibition, pointing to a direct role of GSK-3 in the regulation of Abeta42 peptide level, in the absence of APP processing. Our study points to the need both to identify the mechanisms by which GSK-3 modulates Abeta42 levels in the fly and to determine if similar mechanisms are present in mammals, and it supports the potential therapeutic use of GSK-3 inhibitors in AD.",
"title": ""
},
{
"docid": "ceb270c07d26caec5bc20e7117690f9f",
"text": "Pesticides including insecticides and miticides are primarily used to regulate arthropod (insect and mite) pest populations in agricultural and horticultural crop production systems. However, continual reliance on pesticides may eventually result in a number of potential ecological problems including resistance, secondary pest outbreaks, and/or target pest resurgence [1,2]. Therefore, implementation of alternative management strategies is justified in order to preserve existing pesticides and produce crops with minimal damage from arthropod pests. One option that has gained interest by producers is integrating pesticides with biological control agents or natural enemies including parasitoids and predators [3]. This is often referred to as ‘compatibility,’ which is the ability to integrate or combine natural enemies with pesticides so as to regulate arthropod pest populations without directly or indirectly affecting the life history parameters or population dynamics of natural enemies [2,4]. This may also refer to pesticides being effective against targeted arthropod pests but relatively non-harmful to natural enemies [5,6].",
"title": ""
},
{
"docid": "16f75bcd060ae7a7b6f7c9c8412ca479",
"text": "Deep neural networks (DNNs) are powerful machine learning models and have succeeded in various artificial intelligence tasks. Although various architectures and modules for the DNNs have been proposed, selecting and designing the appropriate network structure for a target problem is a challenging task. In this paper, we propose a method to simultaneously optimize the network structure and weight parameters during neural network training. We consider a probability distribution that generates network structures, and optimize the parameters of the distribution instead of directly optimizing the network structure. The proposed method can apply to the various network structure optimization problems under the same framework. We apply the proposed method to several structure optimization problems such as selection of layers, selection of unit types, and selection of connections using the MNIST, CIFAR-10, and CIFAR-100 datasets. The experimental results show that the proposed method can find the appropriate and competitive network structures.",
"title": ""
},
{
"docid": "ac9f71a97f6af0718587ffd0ea92d31d",
"text": "Modern cyber-physical systems are complex networked computing systems that electronically control physical systems. Autonomous road vehicles are an important and increasingly ubiquitous instance. Unfortunately, their increasing complexity often leads to security vulnerabilities. Network connectivity exposes these vulnerable systems to remote software attacks that can result in real-world physical damage, including vehicle crashes and loss of control authority. We introduce an integrated architecture to provide provable security and safety assurance for cyber-physical systems by ensuring that safety-critical operations and control cannot be unintentionally affected by potentially malicious parts of the system. Finegrained information flow control is used to design both hardware and software, determining how low-integrity information can affect high-integrity control decisions. This security assurance is used to improve end-to-end security across the entire cyber-physical system. We demonstrate this integrated approach by developing a mobile robotic testbed modeling a self-driving system and testing it with a malicious attack. ACM Reference Format: Jed Liu, Joe Corbett-Davies, Andrew Ferraiuolo, Alexander Ivanov, Mulong Luo, G. Edward Suh, Andrew C. Myers, and Mark Campbell. 2018. Secure Autonomous Cyber-Physical Systems Through Verifiable Information Flow Control. InWorkshop on Cyber-Physical Systems Security & Privacy (CPS-SPC ’18), October 19, 2018, Toronto, ON, Canada. ACM, New York, NY, USA, 12 pages. https://doi.org/10.1145/3264888.3264889",
"title": ""
},
{
"docid": "0afd0f70859772054e589a2256efeba4",
"text": "Hair is typically modeled and rendered using either explicitly defined hair strand geometry or a volume texture of hair densities. Taken each on their own, these two hair representations have difficulties in the case of animal fur as it consists of very dense and thin undercoat hairs in combination with coarse guard hairs. Explicit hair strand geometry is not well-suited for the undercoat hairs, while volume textures are not well-suited for the guard hairs. To efficiently model and render both guard hairs and undercoat hairs, we present a hybrid technique that combines rasterization of explicitly defined guard hairs with ray marching of a prismatic shell volume with dynamic resolution. The latter is the key to practical combination of the two techniques, and it also enables a high degree of detail in the undercoat. We demonstrate that our hybrid technique creates a more detailed and soft fur appearance as compared with renderings that only use explicitly defined hair strands. Finally, our rasterization approach is based on order-independent transparency and renders high-quality fur images in seconds.",
"title": ""
},
{
"docid": "ab70c8814c0e15695c8142ce8aad69bc",
"text": "Domain-oriented dialogue systems are often faced with users that try to cross the limits of their knowledge, by unawareness of its domain limitations or simply to test their capacity. These interactions are considered to be Out-Of-Domain and several strategies can be found in the literature to deal with some specific situations. Since if a certain input appears once, it has a non-zero probability of being entered later, the idea of taking advantage of real human interactions to feed these dialogue systems emerges, thus, naturally. In this paper, we introduce the SubTle Corpus, a corpus of Interaction-Response pairs extracted from subtitles files, created to help dialogue systems to deal with Out-of-Domain interactions.",
"title": ""
},
{
"docid": "d75ebc4041927b525d8f4937c760518e",
"text": "Most current term frequency normalization approaches for information retrieval involve the use of parameters. The tuning of these parameters has an important impact on the overall performance of the information retrieval system. Indeed, a small variation in the involved parameter(s) could lead to an important variation in the precision/recall values. Most current tuning approaches are dependent on the document collections. As a consequence, the effective parameter value cannot be obtained for a given new collection without extensive training data. In this paper, we propose a novel and robust method for the tuning of term frequency normalization parameter(s), by measuring the normalization effect on the within document frequency of the query terms. As an illustration, we apply our method on Amati \\& Van Rijsbergen's so-called normalization 2. The experiments for the ad-hoc TREC-6,7,8 tasks and TREC-8,9,10 Web tracks show that the new method is independent of the collections and able to provide reliable and good performance.",
"title": ""
},
{
"docid": "ee82b52d5a0bc28a0a8e78e09da09340",
"text": "AIMS\nExcessive internet use is becoming a concern, and some have proposed that it may involve addiction. We evaluated the dimensions assessed by, and psychometric properties of, a range of questionnaires purporting to assess internet addiction.\n\n\nMETHODS\nFourteen questionnaires were identified purporting to assess internet addiction among adolescents and adults published between January 1993 and October 2011. Their reported dimensional structure, construct, discriminant and convergent validity and reliability were assessed, as well as the methods used to derive these.\n\n\nRESULTS\nMethods used to evaluate internet addiction questionnaires varied considerably. Three dimensions of addiction predominated: compulsive use (79%), negative outcomes (86%) and salience (71%). Less common were escapism (21%), withdrawal symptoms (36%) and other dimensions. Measures of validity and reliability were found to be within normally acceptable limits.\n\n\nCONCLUSIONS\nThere is a broad convergence of questionnaires purporting to assess internet addiction suggesting that compulsive use, negative outcome and salience should be covered and the questionnaires show adequate psychometric properties. However, the methods used to evaluate the questionnaires vary widely and possible factors contributing to excessive use such as social motivation do not appear to be covered.",
"title": ""
},
{
"docid": "ad8a727d0e3bd11cd972373451b90fe7",
"text": "The loss functions of deep neural networks are complex and their geometric properties are not well understood. We show that the optima of these complex loss functions are in fact connected by simple curves over which training and test accuracy are nearly constant. We introduce a training procedure to discover these high-accuracy pathways between modes. Inspired by this new geometric insight, we also propose a new ensembling method entitled Fast Geometric Ensembling (FGE). Using FGE we can train high-performing ensembles in the time required to train a single model. We achieve improved performance compared to the recent state-of-the-art Snapshot Ensembles, on CIFAR-10, CIFAR-100, and ImageNet.",
"title": ""
},
{
"docid": "b160d69d87ad113286ee432239b090d7",
"text": "Isogeometric analysis has been proposed as a methodology for bridging the gap between computer aided design (CAD) and finite element analysis (FEA). Although both the traditional and isogeometric pipelines rely upon the same conceptualization to solid model steps, they drastically differ in how they bring the solid model both to and through the analysis process. The isogeometric analysis process circumvents many of the meshing pitfalls experienced by the traditional pipeline by working directly within the approximation spaces used by the model representation. In this paper, we demonstrate that in a similar way as how mesh quality is used in traditional FEA to help characterize the impact of the mesh on analysis, an analogous concept of model quality exists within isogeometric analysis. The consequence of these observations is the need for a new area within modeling – analysis-aware modeling – in which model properties and parameters are selected to facilitate isogeometric analysis. ! 2009 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "dfbf5c12d8e5a8e5e81de5d51f382185",
"text": "Demand response (DR) is very important in the future smart grid, aiming to encourage consumers to reduce their demand during peak load hours. However, if binary decision variables are needed to specify start-up time of a particular appliance, the resulting mixed integer combinatorial problem is in general difficult to solve. In this paper, we study a versatile convex programming (CP) DR optimization framework for the automatic load management of various household appliances in a smart home. In particular, an L1 regularization technique is proposed to deal with schedule-based appliances (SAs), for which their on/off statuses are governed by binary decision variables. By relaxing these variables from integer to continuous values, the problem is reformulated as a new CP problem with an additional L1 regularization term in the objective. This allows us to transform the original mixed integer problem into a standard CP problem. Its major advantage is that the overall DR optimization problem remains to be convex and therefore the solution can be found efficiently. Moreover, a wide variety of appliances with different characteristics can be flexibly incorporated. Simulation result shows that the energy scheduling of SAs and other appliances can be determined simultaneously using the proposed CP formulation.",
"title": ""
},
{
"docid": "750c67fe63611248e8d8798a42ac282c",
"text": "Chaos and its drive-response synchronization for a fractional-order cellular neural networks (CNN) are studied. It is found that chaos exists in the fractional-order system with six-cell. The phase synchronisation of drive and response chaotic trajectories is investigated after that. These works based on Lyapunov exponents (LE), Lyapunov stability theory and numerical solving fractional-order system in Matlab environment.",
"title": ""
},
{
"docid": "cfaf2c04cd06103489ac60d00a70cd2c",
"text": "BACKGROUND\nΔ(9)-Tetrahydrocannabinol (THC), 11-nor-9-carboxy-THC (THCCOOH), and cannabinol (CBN) were measured in breath following controlled cannabis smoking to characterize the time course and windows of detection of breath cannabinoids.\n\n\nMETHODS\nExhaled breath was collected from chronic (≥4 times per week) and occasional (<twice per week) smokers before and after smoking a 6.8% THC cigarette. Sample analysis included methanol extraction from breath pads, solid-phase extraction, and liquid chromatography-tandem mass spectrometry quantification.\n\n\nRESULTS\nTHC was the major cannabinoid in breath; no sample contained THCCOOH and only 1 contained CBN. Among chronic smokers (n = 13), all breath samples were positive for THC at 0.89 h, 76.9% at 1.38 h, and 53.8% at 2.38 h, and only 1 sample was positive at 4.2 h after smoking. Among occasional smokers (n = 11), 90.9% of breath samples were THC-positive at 0.95 h and 63.6% at 1.49 h. One occasional smoker had no detectable THC. Analyte recovery from breath pads by methanolic extraction was 84.2%-97.4%. Limits of quantification were 50 pg/pad for THC and CBN and 100 pg/pad for THCCOOH. Solid-phase extraction efficiency was 46.6%-52.1% (THC) and 76.3%-83.8% (THCCOOH, CBN). Matrix effects were -34.6% to 12.3%. Cannabinoids fortified onto breath pads were stable (≤18.2% concentration change) for 8 h at room temperature and -20°C storage for 6 months.\n\n\nCONCLUSIONS\nBreath may offer an alternative matrix for identifying recent driving under the influence of cannabis, but currently sensitivity is limited to a short detection window (0.5-2 h).",
"title": ""
},
{
"docid": "599c2f4205f3a0978d0567658daf8be6",
"text": "With increasing audio/video service consumption through unmanaged IP networks, HTTP adaptive streaming techniques have emerged to handle bandwidth limitations and variations. But while it is becoming common to serve multiple clients in one home network, these solutions do not adequately address fine tuned quality arbitration between the multiple streams. While clients compete for bandwidth, the video suffers unstable conditions and/or inappropriate bit-rate levels.\n We hereby experiment a mechanism based on traffic chapping that allow bandwidth arbitration to be implemented in the home gateway, first determining desirable target bit-rates to be reached by each stream and then constraining the clients to stay within their limits. This enables the delivery of optimal quality of experience to the maximum number of users. This approach is validated through experimentation, and results are shown through a set of objective measurement criteria.",
"title": ""
},
{
"docid": "7f73952f3dfb445fd700d951a013595e",
"text": "Although parallel and convergent evolution are discussed extensively in technical articles and textbooks, their meaning can be overlapping, imprecise, and contradictory. The meaning of parallel evolution in much of the evolutionary literature grapples with two separate hypotheses in relation to phenotype and genotype, but often these two hypotheses have been inferred from only one hypothesis, and a number of subsidiary but problematic criteria, in relation to the phenotype. However, examples of parallel evolution of genetic traits that underpin or are at least associated with convergent phenotypes are now emerging. Four criteria for distinguishing parallelism from convergence are reviewed. All are found to be incompatible with any single proposition of homoplasy. Therefore, all homoplasy is equivalent to a broad view of convergence. Based on this concept, all phenotypic homoplasy can be described as convergence and all genotypic homoplasy as parallelism, which can be viewed as the equivalent concept of convergence for molecular data. Parallel changes of molecular traits may or may not be associated with convergent phenotypes but if so describe homoplasy at two biological levels-genotype and phenotype. Parallelism is not an alternative to convergence, but rather it entails homoplastic genetics that can be associated with and potentially explain, at the molecular level, how convergent phenotypes evolve.",
"title": ""
},
{
"docid": "d59d1ac7b3833ee1e60f7179a4a9af99",
"text": "s Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. GJCST Classification : C.1.4, C.2.1 Research Issues in Cloud Computing Strictly as per the compliance and regulations of: Research Issues in Cloud Computing V. Krishna Reddy , B. Thirumala Rao , Dr. L.S.S. Reddy , P. Sai Kiran ABSTRACT : Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges. Cloud computing moved away from personal computers and the individual enterprise application server to services provided by the cloud of computers. The emergence of cloud computing has made a tremendous impact on the Information Technology (IT) industry over the past few years. Currently IT industry needs Cloud computing services to provide best opportunities to real world. Cloud computing is in initial stages, with many issues still to be addressed. The objective of this paper is to explore the different issues of cloud computing and identify important research opportunities in this increasingly important area. We present different design challenges categorized under security challenges, Data Challenges, Performance challenges and other Design Challenges.",
"title": ""
},
{
"docid": "b3d1780cb8187e5993c5adbb7959b7a6",
"text": "We present impacto, a device designed to render the haptic sensation of hitting or being hit in virtual reality. The key idea that allows the small and light impacto device to simulate a strong hit is that it decomposes the stimulus: it renders the tactile aspect of being hit by tapping the skin using a solenoid; it adds impact to the hit by thrusting the user's arm backwards using electrical muscle stimulation. The device is self-contained, wireless, and small enough for wearable use, thus leaves the user unencumbered and able to walk around freely in a virtual environment. The device is of generic shape, allowing it to also be worn on legs, so as to enhance the experience of kicking, or merged into props, such as a baseball bat. We demonstrate how to assemble multiple impacto units into a simple haptic suit. Participants of our study rated impact simulated using impacto's combination of solenoid hit and electrical muscle stimulation as more realistic than either technique in isolation.",
"title": ""
},
{
"docid": "c7b7ca49ea887c25b05485e346b5b537",
"text": "I n our last article 1 we described the external features which characterize the cranial and facial structures of the cranial strains known as hyperflexion and hyperextension. To understand how these strains develop we have to examine the anatomical relations underlying all cranial patterns. Each strain represent a variation on a theme. By studying the features in common, it is possible to account for the facial and dental consequences of these variations. The key is the spheno-basilar symphysis and the displacements which can take place between the occiput and the sphenoid at that suture. In hyperflexion there is shortening of the cranium in an antero-posterior direction with a subsequent upward buckling of the spheno-basilar symphysis (Figure 1). In children, where the cartilage of the joint has not ossified, a v-shaped wedge can be seen occasionally on the lateral skull radiograph (Figure 2). Figure (3a) is of the cranial base seen from a vertex viewpoint. By leaving out the temporal bones the connection between the centrally placed spheno-basilar symphysis and the peripheral structures of the cranium can be seen more easily. Sutherland realized that the cranium could be divided into quadrants (Figure 3b) centered on the spheno-basilar symphysis and that what happens in each quadrant is directly influenced by the spheno-basilar symphysis. He noted that accompanying the vertical changes at the symphysis there are various lateral displacements. As the peripheral structures move laterally, this is known as external rotation. If they move closer to the midline, this is called internal rotation. It is not unusual to have one side of the face externally rotated and the other side internally rotated (Figure 4a). This can have a significant effect in the mouth, giving rise to asymmetries (Figure 4b). This shows a palatal view of the maxilla with the left posterior dentition externally rotated and the right buccal posterior segment internally rotated, reflecting the internal rotation of the whole right side of the face. This can be seen in hyperflexion but also other strains. With this background, it is now appropriate to examine in detail the cranial strain known as hyperflexion. As its name implies, it is brought about by an exaggeration of the flexion/ extension movement of the cranium into flexion. Rhythmic movement of the cranium continues despite the displacement into flexion, but it does so more readily into flexion than extension. As the skull is shortened in an antero-posterior plane, it is widened laterally. Figures 3a and 3b. 3a: cranial base from a vertex view (temporal bones left out). 3b: Sutherland’s quadrants imposed on cranial base. Figure 2. Lateral Skull Radiograph of Hyperflexion patient. Note V-shaped wedge at superior border of the spheno-basillar symphysis. Figure 1. Movement of Occiput and Sphenold in Hyperflexion. Reprinted from Orthopedic Gnathology, Hockel, J., Ed. 1983. With permission from Quintessence Publishing Co.",
"title": ""
}
] | scidocsrr |
bf196c07caa42433785f19ffcfa75c80 | Artificial Neural Networks ’ Applications in Management | [
{
"docid": "267f3d176f849bf24dfab7e78d93b153",
"text": "The long-running debate between the ‘rational design’ and ‘emergent process’ schools of strategy formation has involved caricatures of firms’ strategic planning processes, but little empirical evidence of whether and how companies plan. Despite the presumption that environmental turbulence renders conventional strategic planning all but impossible, the evidence from the corporate sector suggests that reports of the demise of strategic planning are greatly exaggerated. The goal of this paper is to fill this empirical gap by describing the characteristics of the strategic planning systems of multinational, multibusiness companies faced with volatile, unpredictable business environments. In-depth case studies of the planning systems of eight of the world’s largest oil companies identified fundamental changes in the nature and role of strategic planning since the end of the 1970s. The findings point to a possible reconciliation of ‘design’ and ‘process’ approaches to strategy formulation. The study pointed to a process of planned emergence in which strategic planning systems provided a mechanism for coordinating decentralized strategy formulation within a structure of demanding performance targets and clear corporate guidelines. The study shows that these planning systems fostered adaptation and responsiveness, but showed limited innovation and analytical sophistication. Copyright 2003 John Wiley & Sons, Ltd.",
"title": ""
}
] | [
{
"docid": "7456842efeebb480c21974f78aea2a9f",
"text": "Connectionist networks that have learned one task can be reused on related tasks in a process that is called \"transfer\". This paper surveys recent work on transfer. A number of distinctions between kinds of transfer are identified, and future directions for research are explored. The study of transfer has a long history in cognitive science. Discoveries about transfer in human cognition can inform applied efforts. Advances in applications can also inform cognitive studies.",
"title": ""
},
{
"docid": "b1202b110ae83980a71b14d9d6fd65cb",
"text": "In modern daily life people need to move, whether in business or leisure, sightseeing or addressing a meeting. Often this is done in familiar environments, but in some cases we need to find our way in unfamiliar scenarios. Visual impairment is a factor that greatly reduces mobility. Currently, the most widespread and used means by the visually impaired people are the white stick and the guide dog; however both present some limitations. With the recent advances in inclusive technology it is possible to extend the support given to people with visual impairment during their mobility. In this context we propose a system, named SmartVision, whose global objective is to give blind users the ability to move around in unfamiliar environments, whether indoor or outdoor, through a user friendly interface that is fed by a geographic information system (GIS). In this paper we propose the development of an electronic white cane that helps moving around, in both indoor and outdoor environments, providing contextualized geographical information using RFID technology.",
"title": ""
},
{
"docid": "c020a3ba9a2615cb5ed9a7e9d5aa3ce0",
"text": "Neural network approaches to Named-Entity Recognition reduce the need for carefully handcrafted features. While some features do remain in state-of-the-art systems, lexical features have been mostly discarded, with the exception of gazetteers. In this work, we show that this is unfair: lexical features are actually quite useful. We propose to embed words and entity types into a lowdimensional vector space we train from annotated data produced by distant supervision thanks to Wikipedia. From this, we compute — offline — a feature vector representing each word. When used with a vanilla recurrent neural network model, this representation yields substantial improvements. We establish a new state-of-the-art F1 score of 87.95 on ONTONOTES 5.0, while matching state-of-the-art performance with a F1 score of 91.73 on the over-studied CONLL-2003 dataset.",
"title": ""
},
{
"docid": "5d154a62b22415cbedd165002853315b",
"text": "Unaccompanied immigrant children are a highly vulnerable population, but research into their mental health and psychosocial context remains limited. This study elicited lawyers’ perceptions of the mental health needs of unaccompanied children in U.S. deportation proceedings and their mental health referral practices with this population. A convenience sample of 26 lawyers who work with unaccompanied children completed a semi-structured, online survey. Lawyers surveyed frequently had mental health concerns about their unaccompanied child clients, used clinical and lay terminology to describe symptoms, referred for both expert testimony and treatment purposes, frequently encountered barriers to accessing appropriate services, and expressed interest in mental health training. The results of this study suggest a complex intersection between the legal and mental health needs of unaccompanied children, and the need for further research and improved service provision in support of their wellbeing.",
"title": ""
},
{
"docid": "5bb63d07c8d7c743c505e6fd7df3dc4f",
"text": "XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for i) discovering the structural commonalities between sub-trees, ii) identifying sub-tree semantic resemblances, iii) computing tree-based edit operations costs, and iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance. © 2002 Elsevier Science. All rights reserved.",
"title": ""
},
{
"docid": "5eea47089f84c915005c40547712c617",
"text": "Current views on the neurobiological underpinnings of language are discussed that deviate in a number of ways from the classical Wernicke-Lichtheim-Geschwind model. More areas than Broca's and Wernicke's region are involved in language. Moreover, a division along the axis of language production and language comprehension does not seem to be warranted. Instead, for central aspects of language processing neural infrastructure is shared between production and comprehension. Three different accounts of the role of Broca's area in language are discussed. Arguments are presented in favor of a dynamic network view, in which the functionality of a region is co-determined by the network of regions in which it is embedded at particular moments in time. Finally, core regions of language processing need to interact with other networks (e.g. the attentional networks and the ToM network) to establish full functionality of language and communication.",
"title": ""
},
{
"docid": "d2d16580335dcff2f0d05ca8a43438ef",
"text": "Evolutionary adaptation can be rapid and potentially help species counter stressful conditions or realize ecological opportunities arising from climate change. The challenges are to understand when evolution will occur and to identify potential evolutionary winners as well as losers, such as species lacking adaptive capacity living near physiological limits. Evolutionary processes also need to be incorporated into management programmes designed to minimize biodiversity loss under rapid climate change. These challenges can be met through realistic models of evolutionary change linked to experimental data across a range of taxa.",
"title": ""
},
{
"docid": "7304805b7f5f8d22ef9f3ce02f8954e6",
"text": "A novel inductor switching technique is used to design and implement a wideband LC voltage controlled oscillator (VCO) in 0.13µm CMOS. The VCO has a tuning range of 87.2% between 3.3 and 8.4 GHz with phase noise ranging from −122 to −117.2 dBc/Hz at 1MHz offset. The power varies between 6.5 and 15.4 mW over the tuning range. This results in a Power-Frequency-Tuning Normalized figure of merit (PFTN) between 6.6 and 10.2 dB which is one of the best reported to date.",
"title": ""
},
{
"docid": "c1ee5f717481652d91431f647401d6d2",
"text": "Cluster ensembles have recently emerged as a powerful alternative to standard cluster analysis, aggregating several input data clusterings to generate a single output clustering, with improved robustness and stability. From the early work, these techniques held great promise; however, most of them generate the final solution based on incomplete information of a cluster ensemble. The underlying ensemble-information matrix reflects only cluster-data point relations, while those among clusters are generally overlooked. This paper presents a new link-based approach to improve the conventional matrix. It achieves this using the similarity between clusters that are estimated from a link network model of the ensemble. In particular, three new link-based algorithms are proposed for the underlying similarity assessment. The final clustering result is generated from the refined matrix using two different consensus functions of feature-based and graph-based partitioning. This approach is the first to address and explicitly employ the relationship between input partitions, which has not been emphasized by recent studies of matrix refinement. The effectiveness of the link-based approach is empirically demonstrated over 10 data sets (synthetic and real) and three benchmark evaluation measures. The results suggest the new approach is able to efficiently extract information embedded in the input clusterings, and regularly illustrate higher clustering quality in comparison to several state-of-the-art techniques.",
"title": ""
},
{
"docid": "c435c4106b1b5c90fe3ff607bc0d5f00",
"text": "In recent years, we have witnessed a significant growth of “social computing” services, or online communities where users contribute content in various forms, including images, text or video. Content contribution from members is critical to the viability of these online communities. It is therefore important to understand what drives users to share content with others in such settings. We extend previous literature on user contribution by studying the factors that are associated with users’ photo sharing in an online community, drawing on motivation theories as well as on analysis of basic structural properties. Our results indicate that photo sharing declines in respect to the users’ tenure in the community. We also show that users with higher commitment to the community and greater “structural embeddedness” tend to share more content. We demonstrate that the motivation of self-development is negatively related to photo sharing, and that tenure in the community moderates the effect of self-development on photo sharing. Directions for future research, as well as implications for theory and practice are discussed.",
"title": ""
},
{
"docid": "7e97f234801829afff4d11686428f59f",
"text": "Prior research has linked mindfulness to improvements in attention, and suggested that the effects of mindfulness are particularly pronounced when individuals are cognitively depleted or stressed. Yet, no studies have tested whether mindfulness improves declarative awareness of unexpected stimuli in goal-directed tasks. Participants (N=794) were either depleted (or not) and subsequently underwent a brief mindfulness induction (or not). They then completed an inattentional blindness task during which an unexpected distractor appeared on the computer monitor. This task was used to assess declarative conscious awareness of the unexpected distractor's presence and the extent to which its perceptual properties were encoded. Mindfulness increased awareness of the unexpected distractor (i.e., reduced rates of inattentional blindness). Contrary to predictions, no mindfulness×depletion interaction emerged. Depletion however, increased perceptual encoding of the distractor. These results suggest that mindfulness may foster awareness of unexpected stimuli (i.e., reduce inattentional blindness).",
"title": ""
},
{
"docid": "c721f79d7c20210b4ee388ecb75f241f",
"text": "The noble aim behind this project is to study and capture the Natural Eye movement detection and trying to apply it as assisting application for paralyzed patients those who cannot speak or use hands such disease as amyotrophic lateral sclerosis (ALS), Guillain-Barre Syndrome, quadriplegia & heniiparesis. Using electrophySiological genereted by the voluntary contradictions of the muscles around the eye. The proposed system which is based on the design and application of an electrooculogram (EOG) based an efficient human–computer interface (HCI). Establishing an alternative channel without speaking and hand movements is important in increasing the quality of life for the handicapped. EOG-based systems are more efficient than electroencephalogram (EEG)-based systems as easy acquisition, higher amplitude, and also easily classified. By using a realized virtual keyboard like graphical user interface, it is possible to notify in writing the needs of the patient in a relatively short time. Considering the bio potential measurement pitfalls, the novel EOG-based HCI system allows people to successfully communicate with their environment by using only eye movements. [1] Classifying horizontal and vertical EOG channel signals in an efficient interface is realized in this study. The nearest neighbourhood algorithm will be use to classify the signals. The novel EOG-based HCI system allows people to successfully and economically communicate with their environment by using only eye movements. [2] An Electrooculography is a method of tracking the ocular movement, based on the voltage changes that occur due to the medications on the special orientation of the eye dipole. The resulting signal has a myriad of possible applications. [2] In this dissertation phase one, the goal was to study the Eye movements and respective signal generation, EOG signal acquisition and also study of a Man-Machine Interface that made use of this signal. As per our goal we studied eye movements and design simple EOG acquisition circuit. We got efficient signal output in oscilloscope. I sure that result up to present stage will definitely leads us towards designing of novel assisting device for paralyzed patients. Thus, we set out to create an interface will be use by mobility impaired patients, allowing them to use their eyes to call nurse or attended person and some other requests. Keywords— Electro Oculogram, Natural Eye movement Detection, EOG acquisition & signal conditioning, Eye based Computer interface GUI, Paralysed assisting device, Eye movement recognization",
"title": ""
},
{
"docid": "67c8047fbb9e027f92910c4a4f93347a",
"text": "Mastocytosis is a rare, heterogeneous disease of complex etiology, characterized by a marked increase in mast cell density in the skin, bone marrow, liver, spleen, gastrointestinal mucosa and lymph nodes. The most frequent site of organ involvement is the skin. Cutaneous lesions include urticaria pigmentosa, mastocytoma, diffuse and erythematous cutaneous mastocytosis, and telangiectasia macularis eruptiva perstans. Human mast cells originate from CD34 progenitors, under the influence of stem cell factor (SCF); a substantial number of patients exhibit activating mutations in c-kit, the receptor for SCF. Mast cells can synthesize a variety of cytokines that could affect the skeletal system, increasing perforating bone resorption and leading to osteoporosis. The coexistence of hematologic disorders, such as myeloproliferative or myelodysplastic syndromes, or of lymphoreticular malignancies, is common. Compared with radiographs, Tc-99m methylenediphosphonate (MDP) scintigraphy is better able to show the widespread skeletal involvement in patients with diffuse disease. T1-weighted MR imaging is a sensitive technique for detecting marrow abnormalities in patients with systemic mastocytosis, showing several different patterns of marrow involvement. We report the imaging findings a 36-year old male with well-documented urticaria pigmentosa. In order to evaluate mastocytic bone marrow involvement, 99mTc-MDP scintigraphy, T1-weighted spin echo and short tau inversion recovery MRI at 1.0 T, were performed. Both scan findings were consistent with marrow hyperactivity. Thus, the combined use of bone scan and MRI may be useful in order to recognize marrow involvement in suspected systemic mastocytosis, perhaps avoiding bone biopsy.",
"title": ""
},
{
"docid": "6a3cc8319b7a195ce7ec05a70ad48c7a",
"text": "Image caption generation is the problem of generating a descriptive sentence of an image. Automatically describing the content of an image is a fundamental problem in artificial intelligence that connects computer vision and natural language processing. This paper presents a brief survey of some technical aspects and methods for description-generation of images. As there has been great interest in research community, to come up with automatic ways to retrieve images based on content. There are numbers of techniques, that, have been used to solve this problem, and purpose of this paper is to have an overview of many of these approaches and databases used for description generation purpose. Finally, we discuss open challenges and future directions for upcoming researchers.",
"title": ""
},
{
"docid": "85cf0bddbedc5836f41033a16274c1e2",
"text": "Intuitively, for a training sample xi with its associated label yi, a deep model is getting closer to the correct answer in the higher layers. It starts with the difficult job of classifying xi, which becomes easier as the higher layers distill xi into a representation that is easier to classify. One might be tempted to say that this means that the higher layers have more information about the ground truth, but this would be incorrect.",
"title": ""
},
{
"docid": "6f0faf1a90d9f9b19fb2e122a26a0f77",
"text": "Social media shatters the barrier to communicate anytime anywhere for people of all walks of life. The publicly available, virtually free information in social media poses a new challenge to consumers who have to discern whether a piece of information published in social media is reliable. For example, it can be difficult to understand the motivations behind a statement passed from one user to another, without knowing the person who originated the message. Additionally, false information can be propagated through social media, resulting in embarrassment or irreversible damages. Provenance data associated with a social media statement can help dispel rumors, clarify opinions, and confirm facts. However, provenance data about social media statements is not readily available to users today. Currently, providing this data to users requires changing the social media infrastructure or offering subscription services. Taking advantage of social media features, research in this nascent field spearheads the search for a way to provide provenance data to social media users, thus leveraging social media itself by mining it for the provenance data. Searching for provenance data reveals an interesting problem space requiring the development and application of new metrics in order to provide meaningful provenance data to social media users. This lecture reviews the current research on information provenance, explores exciting research opportunities to address pressing needs, and shows how data mining can enable a social media user to make informed judgements about statements published in social media.",
"title": ""
},
{
"docid": "3a18976245cfc4b50e97aadf304ef913",
"text": "Key-Value Stores (KVS) are becoming increasingly popular because they scale up and down elastically, sustain high throughputs for get/put workloads and have low latencies. KVS owe these advantages to their simplicity. This simplicity, however, comes at a cost: It is expensive to process complex, analytical queries on top of a KVS because today’s generation of KVS does not support an efficient way to scan the data. The problem is that there are conflicting goals when designing a KVS for analytical queries and for simple get/put workloads: Analytical queries require high locality and a compact representation of data whereas elastic get/put workloads require sparse indexes. This paper shows that it is possible to have it all, with reasonable compromises. We studied the KVS design space and built TellStore, a distributed KVS, that performs almost as well as state-of-the-art KVS for get/put workloads and orders of magnitude better for analytical and mixed workloads. This paper presents the results of comprehensive experiments with an extended version of the YCSB benchmark and a workload from the telecommunication industry.",
"title": ""
},
{
"docid": "5d35e34a5db727917e5105f857c174be",
"text": "Human face feature extraction using digital images is a vital element for several applications such as: identification and facial recognition, medical application, video games, cosmetology, etc. The skin pores are very important element of the structure of the skin. A novelty method is proposed allowing decomposing an photography of human face from digital image (RGB) in two layers, melanin and hemoglobin. From melanin layer, the main pores from the face can be obtained, as well as the centroids of each of them. It has been found that the pore configuration of the skin is invariant and unique for each individual. Therefore, from the localization of the pores of a human face, it is a possibility to use them for diverse application in the fields of pattern",
"title": ""
},
{
"docid": "9779a5ac2ada20f0ccd5751b0784e9cc",
"text": "Early-stage romantic love can induce euphoria, is a cross-cultural phenomenon, and is possibly a developed form of a mammalian drive to pursue preferred mates. It has an important influence on social behaviors that have reproductive and genetic consequences. To determine which reward and motivation systems may be involved, we used functional magnetic resonance imaging and studied 10 women and 7 men who were intensely \"in love\" from 1 to 17 mo. Participants alternately viewed a photograph of their beloved and a photograph of a familiar individual, interspersed with a distraction-attention task. Group activation specific to the beloved under the two control conditions occurred in dopamine-rich areas associated with mammalian reward and motivation, namely the right ventral tegmental area and the right postero-dorsal body and medial caudate nucleus. Activation in the left ventral tegmental area was correlated with facial attractiveness scores. Activation in the right anteromedial caudate was correlated with questionnaire scores that quantified intensity of romantic passion. In the left insula-putamen-globus pallidus, activation correlated with trait affect intensity. The results suggest that romantic love uses subcortical reward and motivation systems to focus on a specific individual, that limbic cortical regions process individual emotion factors, and that there is localization heterogeneity for reward functions in the human brain.",
"title": ""
},
{
"docid": "503756888df43d745e4fb5051f8855fb",
"text": "The widespread use of email has raised serious privacy concerns. A critical issue is how to prevent email information leaks, i.e., when a message is accidentally addressed to non-desired recipients. This is an increasingly common problem that can severely harm individuals and corporations — for instance, a single email leak can potentially cause expensive law suits, brand reputation damage, negotiation setbacks and severe financial losses. In this paper we present the first attempt to solve this problem. We begin by redefining it as an outlier detection task, where the unintended recipients are the outliers. Then we combine real email examples (from the Enron Corpus) with carefully simulated leak-recipients to learn textual and network patterns associated with email leaks. This method was able to detect email leaks in almost 82% of the test cases, significantly outperforming all other baselines. More importantly, in a separate set of experiments we applied the proposed method to the task of finding real cases of email leaks. The result was encouraging: a variation of the proposed technique was consistently successful in finding two real cases of email leaks. Not only does this paper introduce the important problem of email leak detection, but also presents an effective solution that can be easily implemented in any email client — with no changes in the email server side.",
"title": ""
}
] | scidocsrr |
a98cccbdc5cbdfc539a8746fcb96cdf7 | Radar Cross Section Reduction of a Microstrip Antenna Based on Polarization Conversion Metamaterial | [
{
"docid": "6545ea7d281be5528d9217f3b891a5da",
"text": "In this paper, a novel metamaterial absorber working in the C band frequency range has been proposed to reduce the in-band Radar Cross Section (RCS) of a typical planar antenna. The absorber is first designed in the shape of a hexagonal ring structure having dipoles at the corresponding arms of the rings. The various geometrical parameters of the proposed metamaterial structure have first been optimized using the numerical simulator, and the structure is fabricated and tested. In the second step, the metamaterial absorber is loaded on a microstrip patch antenna working in the same frequency band as that of the metamaterial absorber to reduce the in-band Radar Cross Section (RCS) of the antenna. The prototype is simulated, fabricated and tested. The simulated results show the 99% absorption of the absorber at 6.35 GHz which is in accordance with the measured data. A close agreement between the simulated and the measured results shows that the proposed absorber can be used for the RCS reduction of the planar antenna in order to improve its in-band stealth performance.",
"title": ""
}
] | [
{
"docid": "543dc9543221b507746ebf1fe8d14928",
"text": "Mixture modeling is a widely applied data analysis technique used to identify unobserved heterogeneity in a population. Despite mixture models’ usefulness in practice, one unresolved issue in the application of mixture models is that there is not one commonly accepted statistical indicator for deciding on the number of classes in a study population. This article presents the results of a simulation study that examines the performance of likelihood-based tests and the traditionally used Information Criterion (ICs) used for determining the number of classes in mixture modeling. We look at the performance of these tests and indexes for 3 types of mixture models: latent class analysis (LCA), a factor mixture model (FMA), and a growth mixture models (GMM). We evaluate the ability of the tests and indexes to correctly identify the number of classes at three different sample sizes (n D 200, 500, 1,000). Whereas the Bayesian Information Criterion performed the best of the ICs, the bootstrap likelihood ratio test proved to be a very consistent indicator of classes across all of the models considered.",
"title": ""
},
{
"docid": "ee223b75a3a99f15941e4725d261355e",
"text": "BACKGROUND\nIn Mexico, stunting and anemia have declined but are still high in some regions and subpopulations, whereas overweight and obesity have increased at alarming rates in all age and socioeconomic groups.\n\n\nOBJECTIVE\nThe objective was to describe the coexistence of stunting, anemia, and overweight and obesity at the national, household, and individual levels.\n\n\nDESIGN\nWe estimated national prevalences of and trends for stunting, anemia, and overweight and obesity in children aged <5 y and in school-aged children (5-11 y old) and anemia and overweight and obesity in women aged 20-49 y by using the National Health and Nutrition Surveys conducted in 1988, 1999, 2006, and 2012. With the use of the most recent data (2012), the double burden of malnutrition at the household level was estimated and defined as the coexistence of stunting in children aged <5 y and overweight or obesity in the mother. At the individual level, double burden was defined as concurrent stunting and overweight and obesity in children aged 5-11 y and concurrent anemia and overweight or obesity in children aged 5-11 y and in women. We also tested if the coexistence of the conditions corresponded to expected values, under the assumption of independent distributions of each condition.\n\n\nRESULTS\nAt the household level, the prevalence of concurrent stunting in children aged <5 y and overweight and obesity in mothers was 8.4%; at the individual level, prevalences were 1% for stunting and overweight or obesity and 2.9% for anemia and overweight or obesity in children aged 5-11 y and 7.6% for anemia and overweight or obesity in women. At the household and individual levels in children aged 5-11 y, prevalences of double burden were significantly lower than expected, whereas anemia and the prevalence of overweight or obesity in women were not different from that expected.\n\n\nCONCLUSIONS\nAlthough some prevalences of double burden were lower than expected, assuming independent distributions of the 2 conditions, the coexistence of stunting, overweight or obesity, and anemia at the national, household, and intraindividual levels in Mexico calls for policies and programs to prevent the 3 conditions.",
"title": ""
},
{
"docid": "8e10d20723be23d699c0c581c529ee19",
"text": "Insect-scale legged robots have the potential to locomote on rough terrain, crawl through confined spaces, and scale vertical and inverted surfaces. However, small scale implies that such robots are unable to carry large payloads. Limited payload capacity forces miniature robots to utilize simple control methods that can be implemented on a simple onboard microprocessor. In this study, the design of a new version of the biologically-inspired Harvard Ambulatory MicroRobot (HAMR) is presented. In order to find the most suitable control inputs for HAMR, maneuverability experiments are conducted for several drive parameters. Ideal input candidates for orientation and lateral velocity control are identified as a result of the maneuverability experiments. Using these control inputs, two simple feedback controllers are implemented to control the orientation and the lateral velocity of the robot. The controllers are used to force the robot to track trajectories with a minimum turning radius of 55 mm and a maximum lateral to normal velocity ratio of 0.8. Due to their simplicity, the controllers presented in this work are ideal for implementation with on-board computation for future HAMR prototypes.",
"title": ""
},
{
"docid": "3d0e5f0dbca6406b8b8eda4447ee6474",
"text": "We describe a watermarking scheme for ownership verification and authentication. Depending on the desire of the user, the watermark can be either visible or invisible. The scheme can detect any modification made to the image and indicate the specific locations that have been modified. If the correct key is specified in the watermark extraction procedure, then an output image is returned showing a proper watermark, indicating the image is authentic and has not been changed since the insertion of the watermark. Any modification would be reflected in a corresponding error in the watermark. If the key is incorrect, or if the image was not watermarked, or if the watermarked image is cropped, the watermark extraction algorithm will return an image that resembles random noise. Since it requires a user key during both the insertion and the extraction procedures, it is not possible for an unauthorized user to insert a new watermark or alter the existing watermark so that the resulting image will pass the test. We present secret key and public key versions of the technique.",
"title": ""
},
{
"docid": "a2688a1169babed7e35a52fa875505d4",
"text": "Crowdsourcing label generation has been a crucial component for many real-world machine learning applications. In this paper, we provide finite-sample exponential bounds on the error rate (in probability and in expectation) of hyperplane binary labeling rules for the Dawid-Skene (and Symmetric DawidSkene ) crowdsourcing model. The bounds can be applied to analyze many commonly used prediction methods, including the majority voting, weighted majority voting and maximum a posteriori (MAP) rules. These bound results can be used to control the error rate and design better algorithms. In particular, under the Symmetric Dawid-Skene model we use simulation to demonstrate that the data-driven EM-MAP rule is a good approximation to the oracle MAP rule which approximately optimizes our upper bound on the mean error rate for any hyperplane binary labeling rule. Meanwhile, the average error rate of the EM-MAP rule is bounded well by the upper bound on the mean error rate of the oracle MAP rule in the simulation.",
"title": ""
},
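The abstract above reasons about majority voting, weighted majority voting and MAP rules under the (Symmetric) Dawid-Skene model. The sketch below is a minimal simulation of that setup, not the paper's method or its EM-MAP procedure: the worker count, the accuracy range, the sample size, and the use of the true accuracies as log-odds weights are all illustrative assumptions.

```python
# Illustrative simulation: plain majority voting vs. a log-odds weighted vote
# under a symmetric Dawid-Skene-style worker model. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_workers = 2000, 7
accuracy = rng.uniform(0.55, 0.9, size=n_workers)      # P(worker label == truth)

truth = rng.integers(0, 2, size=n_items)                # true binary labels in {0, 1}
correct = rng.random((n_items, n_workers)) < accuracy   # which votes are correct
votes = np.where(correct, truth[:, None], 1 - truth[:, None])

# Plain majority vote over the {0, 1} labels (n_workers is odd, so no ties).
majority = (votes.mean(axis=1) > 0.5).astype(int)

# Weighted vote: each worker weighted by log(p / (1 - p)), the MAP weight when
# accuracies are known; votes are mapped to {-1, +1} first.
weights = np.log(accuracy / (1 - accuracy))
signed = 2 * votes - 1
weighted = (signed @ weights > 0).astype(int)

print("majority vote error:", np.mean(majority != truth))
print("weighted vote error:", np.mean(weighted != truth))
```

With made-up accuracies like these, the weighted rule typically shows a lower error rate than plain majority voting, which is the kind of gap the bounds discussed in the abstract are meant to control.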
{
"docid": "7ca0ceb19e47f9848db1a5946c19d561",
"text": "This thesis performs an empirical analysis of Word2Vec by comparing its output to WordNet, a well-known, human-curated lexical database. It finds that Word2Vec tends to uncover more of certain types of semantic relations than others – with Word2Vec returning more hypernyms, synonomyns and hyponyms than hyponyms or holonyms. It also shows the probability that neighbors separated by a given cosine distance in Word2Vec are semantically related in WordNet. This result both adds to our understanding of the stillunknown Word2Vec and helps to benchmark new semantic tools built from word vectors. Word2Vec, Natural Language Processing, WordNet, Distributional Semantics",
"title": ""
},
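The Word2Vec/WordNet comparison described in the abstract above can be approximated with off-the-shelf tools. The snippet below is an illustrative sketch, not the thesis' pipeline: it assumes gensim and NLTK (with the WordNet corpus downloaded) are installed, and the vector file path, the probe word, and the particular set of WordNet relations checked are placeholders chosen for the example.

```python
# Sketch: take a word's nearest Word2Vec neighbours and check whether each
# neighbour is related to it in WordNet. "vectors.bin" and the probe word are
# placeholders; a pretrained word2vec-format file must be available locally.
from gensim.models import KeyedVectors
from nltk.corpus import wordnet as wn

def wordnet_related(a: str, b: str) -> bool:
    """True if some synset pair for a and b is linked by synonymy,
    hyper/hyponymy, or holonymy/meronymy."""
    for sa in wn.synsets(a):
        for sb in wn.synsets(b):
            if sa == sb or b in sa.lemma_names():
                return True          # shared synset or lemma (synonyms)
            if sb in sa.hypernyms() or sb in sa.hyponyms():
                return True          # hypernym / hyponym
            if sb in (sa.member_holonyms() + sa.part_holonyms()
                      + sa.member_meronyms() + sa.part_meronyms()):
                return True          # holonym / meronym
    return False

vectors = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)
probe = "dog"
for neighbour, cosine_sim in vectors.most_similar(probe, topn=10):
    print(f"{neighbour:>15s}  sim={cosine_sim:.3f}  "
          f"wordnet_related={wordnet_related(probe, neighbour)}")
```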
{
"docid": "31c62f403e6d7f06ff2ab028894346ff",
"text": "Automated text summarization is important to for humans to better manage the massive information explosion. Several machine learning approaches could be successfully used to handle the problem. This paper reports the results of our study to compare the performance between neural networks and support vector machines for text summarization. Both models have the ability to discover non-linear data and are effective model when dealing with large datasets.",
"title": ""
},
{
"docid": "c9284c30e686c1fe1b905b776b520e0e",
"text": "Two decades since the idea of using software diversity for security was put forward, ASLR is the only technique to see widespread deployment. This is puzzling since academic security researchers have published scores of papers claiming to advance the state of the art in the area of code randomization. Unfortunately, these improved diversity techniques are generally less deployable than integrity-based techniques, such as control-flow integrity, due to their limited compatibility with existing optimization, development, and distribution practices. This paper contributes yet another diversity technique called pagerando. Rather than trading off practicality for security, we first and foremost aim for deployability and interoperability. Most code randomization techniques interfere with memory sharing and deduplication optimization across processes and virtual machines, ours does not. We randomize at the granularity of individual code pages but never rewrite page contents. This also avoids incompatibilities with code integrity mechanisms that only allow signed code to be mapped into memory and prevent any subsequent changes. On Android, pagerando fully adheres to the default SELinux policies. All practical mitigations must interoperate with unprotected legacy code, our implementation transparently interoperates with unmodified applications and libraries. To support our claims of practicality, we demonstrate that our technique can be integrated into and protect all shared libraries shipped with stock Android 6.0. We also consider hardening of non-shared libraries and executables and other concerns that must be addressed to put software diversity defenses on par with integrity-based mitigations such as CFI.",
"title": ""
},
{
"docid": "88e4c785587b5b195758034119955474",
"text": "We consider adaptive meshless discretisation of the Dirichlet problem for Poisson equation based on numerical differentiation stencils obtained with the help of radial basis functions. New meshless stencil selection and adaptive refinement algorithms are proposed in 2D. Numerical experiments show that the accuracy of the solution is comparable with, and often better than that achieved by the mesh-based adaptive finite element method.",
"title": ""
},
{
"docid": "e5ddbe32d1beed6de2e342c5d5fea274",
"text": "Link prediction appears as a central problem of network science, as it calls for unfolding the mechanisms that govern the micro-dynamics of the network. In this work, we are interested in ego-networks, that is the mere information of interactions of a node to its neighbors, in the context of social relationships. As the structural information is very poor, we rely on another source of information to predict links among egos’ neighbors: the timing of interactions. We define several features to capture different kinds of temporal information and apply machine learning methods to combine these various features and improve the quality of the prediction. We demonstrate the efficiency of this temporal approach on a cellphone interaction dataset, pointing out features which prove themselves to perform well in this context, in particular the temporal profile of interactions and elapsed time between contacts.",
"title": ""
},
{
"docid": "f77107a84778699e088b94c1a75bfd78",
"text": "Nathaniel Kleitman was the first to observe that sleep deprivation in humans did not eliminate the ability to perform neurobehavioral functions, but it did make it difficult to maintain stable performance for more than a few minutes. To investigate variability in performance as a function of sleep deprivation, n = 13 subjects were tested every 2 hours on a 10-minute, sustained-attention, psychomotor vigilance task (PVT) throughout 88 hours of total sleep deprivation (TSD condition), and compared to a control group of n = 15 subjects who were permitted a 2-hour nap every 12 hours (NAP condition) throughout the 88-hour period. PVT reaction time means and standard deviations increased markedly among subjects and within each individual subject in the TSD condition relative to the NAP condition. TSD subjects also had increasingly greater performance variability as a function of time on task after 18 hours of wakefulness. During sleep deprivation, variability in PVT performance reflected a combination of normal timely responses, errors of omission (i.e., lapses), and errors of commission (i.e., responding when no stimulus was present). Errors of omission and errors of commission were highly intercorrelated across deprivation in the TSD condition (r = 0.85, p = 0.0001), suggesting that performance instability is more likely to include compensatory effort than a lack of motivation. The marked increases in PVT performance variability as sleep loss continued supports the \"state instability\" hypothesis, which posits that performance during sleep deprivation is increasingly variable due to the influence of sleep initiating mechanisms on the endogenous capacity to maintain attention and alertness, thereby creating an unstable state that fluctuates within seconds and that cannot be characterized as either fully awake or asleep.",
"title": ""
},
{
"docid": "1f121c30e686d25f44363f44dc71b495",
"text": "In this paper we show that the Euler number of the compactified Jacobian of a rational curve C with locally planar singularities is equal to the multiplicity of the δ-constant stratum in the base of a semi-universal deformation of C. In particular, the multiplicity assigned by Yau, Zaslow and Beauville to a rational curve on a K3 surface S coincides with the multiplicity of the normalisation map in the moduli space of stable maps to S. Introduction Let C be a reduced and irreducible projective curve with singular set Σ ⊂ C and let n : C̃ −→ C be its normalisation. The generalised Jacobian JC of C is an extension of JC̃ by an affine commutative group of dimension δ := dimH0(n∗(OC̃)/OC) = ∑",
"title": ""
},
{
"docid": "8f183ac262aac98c563bf9dcc69b1bf5",
"text": "Functional infrared thermal imaging (fITI) is considered a promising method to measure emotional autonomic responses through facial cutaneous thermal variations. However, the facial thermal response to emotions still needs to be investigated within the framework of the dimensional approach to emotions. The main aim of this study was to assess how the facial thermal variations index the emotional arousal and valence dimensions of visual stimuli. Twenty-four participants were presented with three groups of standardized emotional pictures (unpleasant, neutral and pleasant) from the International Affective Picture System. Facial temperature was recorded at the nose tip, an important region of interest for facial thermal variations, and compared to electrodermal responses, a robust index of emotional arousal. Both types of responses were also compared to subjective ratings of pictures. An emotional arousal effect was found on the amplitude and latency of thermal responses and on the amplitude and frequency of electrodermal responses. The participants showed greater thermal and dermal responses to emotional than to neutral pictures with no difference between pleasant and unpleasant ones. Thermal responses correlated and the dermal ones tended to correlate with subjective ratings. Finally, in the emotional conditions compared to the neutral one, the frequency of simultaneous thermal and dermal responses increased while both thermal or dermal isolated responses decreased. Overall, this study brings convergent arguments to consider fITI as a promising method reflecting the arousal dimension of emotional stimulation and, consequently, as a credible alternative to the classical recording of electrodermal activity. The present research provides an original way to unveil autonomic implication in emotional processes and opens new perspectives to measure them in touchless conditions.",
"title": ""
},
{
"docid": "a42e6ef132c872c72de49bf47b5ff56f",
"text": "A compact dual-band bandstop filter (BSF) is presented. It combines a conventional open-stub BSF and three spurlines. This filter generates two stopbands at 2.0 GHz and 3.0 GHz with the same circuit size as the conventional BSF.",
"title": ""
},
{
"docid": "b27dd00e5ef38d678959b3922af8ae0a",
"text": "0167-8655/$ see front matter 2013 Elsevier B.V. All rights reserved. http://dx.doi.org/10.1016/j.patrec.2013.07.007 ⇑ Corresponding author at: Department of Computer Science, Triangle Research & Development Center, Kafr Qarea, Israel. Fax: +972 4 6356168. E-mail addresses: [email protected] (R. Saabni), [email protected] (A. Asi), [email protected] (J. El-Sana). 1 These authors contributed equally to this work. Raid Saabni a,b,⇑,1, Abedelkadir Asi , Jihad El-Sana c",
"title": ""
},
{
"docid": "cbf32934e275e8d95a584762b270a5c2",
"text": "Online telemedicine systems are useful due to the possibility of timely and efficient healthcare services. These systems are based on advanced wireless and wearable sensor technologies. The rapid growth in technology has remarkably enhanced the scope of remote health monitoring systems. In this paper, a real-time heart monitoring system is developed considering the cost, ease of application, accuracy, and data security. The system is conceptualized to provide an interface between the doctor and the patients for two-way communication. The main purpose of this study is to facilitate the remote cardiac patients in getting latest healthcare services which might not be possible otherwise due to low doctor-to-patient ratio. The developed monitoring system is then evaluated for 40 individuals (aged between 18 and 66 years) using wearable sensors while holding an Android device (i.e., smartphone under supervision of the experts). The performance analysis shows that the proposed system is reliable and helpful due to high speed. The analyses showed that the proposed system is convenient and reliable and ensures data security at low cost. In addition, the developed system is equipped to generate warning messages to the doctor and patient under critical circumstances.",
"title": ""
},
{
"docid": "77214b0522c0cb7772e094351b5bfa82",
"text": "One of the key aspects in the implementation of reactive behaviour in the Web and, most importantly, in the semantic Web is the development of event detection engines. An event engine detects events occurring in a system and notifies their occurrences to its clients. Although primitive events are useful for modelling a good number of applications, certain other applications require the combination of primitive events in order to support reactive behaviour. This paper presents the implementation of an event detection engine that detects composite events specified by expressions of an illustrative sublanguage of the SNOOP event algebra",
"title": ""
},
{
"docid": "13cb793ca9cdf926da86bb6fc630800a",
"text": "In this paper, we present the first formal study of how mothers of young children (aged three and under) use social networking sites, particularly Facebook and Twitter, including mothers' perceptions of which SNSes are appropriate for sharing information about their children, changes in post style and frequency after birth, and the volume and nature of child-related content shared in these venues. Our findings have implications for improving the utility and usability of SNS tools for mothers of young children, as well as for creating and improving sociotechnical systems related to maternal and child health.",
"title": ""
},
{
"docid": "19863150313643b977f72452bb5a8a69",
"text": "Important research effort has been devoted to the topic of optimal planning of distribution systems. However, in general it has been mostly referred to the design of the primary network, with very modest considerations to the effect of the secondary network in the planning and future operation of the complete grid. Relatively little attention has been paid to the optimization of the secondary grid and to its effect on the optimality of the design of the complete electrical system, although the investment and operation costs of the secondary grid represent an important portion of the total costs. Appropriate design procedures have been proposed separately for both the primary and the secondary grid; however, in general, both planning problems have been presented and treated as different-almost isolated-problems, setting aside with this approximation some important factors that couple both problems, such as the fact that they may share the right of way, use the same poles, etc., among other factors that strongly affect the calculation of the investment costs. The main purpose of this work is the development and initial testing of a model for the optimal planning of a distribution system that includes both the primary and the secondary grids, so that a single optimization problem is stated for the design of the integral primary-secondary distribution system that overcomes these simplifications. The mathematical model incorporates the variables that define both the primary as well as the secondary planning problems and consists of a mixed integer-linear programming problem that may be solved by means of any suitable algorithm. Results are presented of the application of the proposed integral design procedure using conventional mixed integer-linear programming techniques to a real case of a residential primary-secondary distribution system consisting of 75 electrical nodes.",
"title": ""
}
] | scidocsrr |
f3b6384ba243589c11a67aedbce697b3 | Unsupervised CNN for Single View Depth Estimation: Geometry to the Rescue | [
{
"docid": "d15e7e655e7afc86e30e977516de7720",
"text": "We propose a new learning-based method for estimating 2D human pose from a single image, using Dual-Source Deep Convolutional Neural Networks (DS-CNN). Recently, many methods have been developed to estimate human pose by using pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective. In this paper, we propose to integrate both the local (body) part appearance and the holistic view of each local part for more accurate human pose estimation. Specifically, the proposed DS-CNN takes a set of image patches (category-independent object proposals for training and multi-scale sliding windows for testing) as the input and then learns the appearance of each local part by considering their holistic views in the full body. Using DS-CNN, we achieve both joint detection, which determines whether an image patch contains a body joint, and joint localization, which finds the exact location of the joint in the image patch. Finally, we develop an algorithm to combine these joint detection/localization results from all the image patches for estimating the human pose. The experimental results show the effectiveness of the proposed method by comparing to the state-of-the-art human-pose estimation methods based on pose priors that are estimated from physiologically inspired graphical models or learned from a holistic perspective.",
"title": ""
},
{
"docid": "d4fb664caa02b81909bc51291d3fafd7",
"text": "This paper offers the first variational approach to the problem of dense 3D reconstruction of non-rigid surfaces from a monocular video sequence. We formulate non-rigid structure from motion (nrsfm) as a global variational energy minimization problem to estimate dense low-rank smooth 3D shapes for every frame along with the camera motion matrices, given dense 2D correspondences. Unlike traditional factorization based approaches to nrsfm, which model the low-rank non-rigid shape using a fixed number of basis shapes and corresponding coefficients, we minimize the rank of the matrix of time-varying shapes directly via trace norm minimization. In conjunction with this low-rank constraint, we use an edge preserving total-variation regularization term to obtain spatially smooth shapes for every frame. Thanks to proximal splitting techniques the optimization problem can be decomposed into many point-wise sub-problems and simple linear systems which can be easily solved on GPU hardware. We show results on real sequences of different objects (face, torso, beating heart) where, despite challenges in tracking, illumination changes and occlusions, our method reconstructs highly deforming smooth surfaces densely and accurately directly from video, without the need for any prior models or shape templates.",
"title": ""
},
{
"docid": "9dbf1ae31558c80aff4edf94c446b69e",
"text": "This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via Convolutional Neural Network on a large set of stereo images with ground truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space to measure pixel dissimilarities. Experimental results on KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we prove that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model is able to make correct predictions for unseen data which are outside of its labeled training set.",
"title": ""
}
] | [
{
"docid": "26699915946647c1c582c1a0ab63b963",
"text": "In computer vision problems such as pair matching, only binary information ‘same’ or ‘different’ label for pairs of images is given during training. This is in contrast to classification problems, where the category labels of training images are provided. We propose a unified discriminative dictionary learning approach for both pair matching and multiclass classification tasks. More specifically, we introduce a new discriminative term called ‘pairwise sparse code error’ for the discriminativeness in sparse representation of pairs of signals, and then combine it with the classification error for discriminativeness in classifier construction to form a unified objective function. The solution to the new objective function is achieved by employing the efficient feature-sign search algorithm. The learned dictionary encourages feature points from a similar pair (or the same class) to have similar sparse codes. We validate the effectiveness of our approach through a series of experiments on face verification and recognition problems.",
"title": ""
},
{
"docid": "c3c58760970768b9a839184f9e0c5b29",
"text": "The anatomic structures in the female that prevent incontinence and genital organ prolapse on increases in abdominal pressure during daily activities include sphincteric and supportive systems. In the urethra, the action of the vesical neck and urethral sphincteric mechanisms maintains urethral closure pressure above bladder pressure. Decreases in the number of striated muscle fibers of the sphincter occur with age and parity. A supportive hammock under the urethra and vesical neck provides a firm backstop against which the urethra is compressed during increases in abdominal pressure to maintain urethral closure pressures above the rapidly increasing bladder pressure. This supporting layer consists of the anterior vaginal wall and the connective tissue that attaches it to the pelvic bones through the pubovaginal portion of the levator ani muscle, and the uterosacral and cardinal ligaments comprising the tendinous arch of the pelvic fascia. At rest the levator ani maintains closure of the urogenital hiatus. They are additionally recruited to maintain hiatal closure in the face of inertial loads related to visceral accelerations as well as abdominal pressurization in daily activities involving recruitment of the abdominal wall musculature and diaphragm. Vaginal birth is associated with an increased risk of levator ani defects, as well as genital organ prolapse and urinary incontinence. Computer models indicate that vaginal birth places the levator ani under tissue stretch ratios of up to 3.3 and the pudendal nerve under strains of up to 33%, respectively. Research is needed to better identify the pathomechanics of these conditions.",
"title": ""
},
{
"docid": "f7d36b012ac92e7a0e3ff26a3b596178",
"text": "The purpose of the present text is to present the theory and techniques behind the Gray Level Coocurrence Matrix (GLCM) method, and the stateof-the-art of the field, as applied to two dimensional images. It does not present a survey of practical results. 1 Gray Level Coocurrence Matrices In statistical texture analysis, texture features are computed from the statistical distribution of observed combinations of intensities at specified positions relative to each other in the image. According to the number of intensity points (pixels) in each combination, statistics are classified into first-order, second-order and higher-order statistics. The Gray Level Coocurrence Matrix (GLCM) method is a way of extracting second order statistical texture features. The approach has been used in a number of applications, e.g. [5],[6],[14],[5],[7],[12],[2],[8],[10],[1]. A GLCM is a matrix where the number of rows and colums is equal to the number of gray levels, G, in the image. The matrix element P (i, j | ∆x, ∆y) is the relative frequency with which two pixels, separated by a pixel distance (∆x, ∆y), occur within a given neighborhood, one with intensity i and the other with intensity j. One may also say that the matrix element P (i, j | d, θ) contains the second order 1 Albregtsen : Texture Measures Computed from GLCM-Matrices 2 statistical probability values for changes between gray levels i and j at a particular displacement distance d and at a particular angle (θ). Given an M ×N neighborhood of an input image containing G gray levels from 0 to G − 1, let f(m, n) be the intensity at sample m, line n of the neighborhood. Then P (i, j | ∆x, ∆y) = WQ(i, j | ∆x, ∆y) (1) where W = 1 (M − ∆x)(N − ∆y) Q(i, j | ∆x, ∆y) = N−∆y ∑",
"title": ""
},
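The GLCM definition quoted in the abstract above maps directly to a few lines of code. The following is a minimal sketch of that computation, with a made-up 4x4 test image and an arbitrary offset; it is not code from the tutorial itself.

```python
# Minimal sketch of the GLCM definition above: count co-occurring gray-level
# pairs at offset (dx, dy) and normalise by the number of pairs.
import numpy as np

def glcm(image: np.ndarray, levels: int, dx: int, dy: int) -> np.ndarray:
    rows, cols = image.shape                   # rows = N lines, cols = M samples
    counts = np.zeros((levels, levels))
    for n in range(rows - dy):
        for m in range(cols - dx):
            i = image[n, m]                    # reference pixel intensity
            j = image[n + dy, m + dx]          # neighbour at offset (dx, dy)
            counts[i, j] += 1
    return counts / counts.sum()               # W = 1 / ((M - dx)(N - dy))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4, dx=1, dy=0)            # horizontal co-occurrences
print(P)
```

Libraries such as scikit-image ship an equivalent, much faster routine; the explicit loop above is only meant to mirror the formula in the text.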
{
"docid": "ca4beef505d8a93f399a4b5371816205",
"text": "A systematic review of the literature related to effective occupational therapy interventions in rehabilitation of individuals with work-related low back injuries and illnesses was carried out as part of the Evidence-Based Literature Review Project of the American Occupational Therapy Association. This review evaluated research on a broad range of occupational therapy-related intervention procedures and approaches. Findings from the review indicate that the evidence is insufficient to support or refute the effectiveness of exercise therapy and other conservative treatments for subacute and chronic low back injuries. The research reviewed strongly suggests that for interventions to be effective, occupational therapy practitioners should use a holistic, client-centered approach. The research supports the need for occupational therapy practitioners to consider multiple strategies for addressing clients' needs. Specifically, interventions for individuals with low back injuries and illnesses should incorporate a biopsychosocial, client-centered approach that includes actively involving the client in the rehabilitation process at the beginning of the intervention process and addressing the client's psychosocial needs in addition to his or her physical impairments. The implications for occupational therapy practice, research, and education are also discussed.",
"title": ""
},
{
"docid": "a4aa085507cc018af3735b5a848446da",
"text": "Domain Name System (DNS) is ubiquitous in any network. DNS tunnelling is a technique to transfer data, convey messages or conduct TCP activities over DNS protocol that is typically not blocked or watched by security enforcement such as firewalls. As a technique, it can be utilized in many malicious ways which can compromise the security of a network by the activities of data exfiltration, cyber-espionage, and command and control. On the other side, it can also be used by legitimate users. The traditional methods may not be able to distinguish between legitimate and malicious uses even if they can detect the DNS tunnelling activities. We propose a behaviour analysis based method that can not only detect the DNS tunnelling, but also classify the activities in order to catch and block the malicious tunnelling traffic. The proposed method can achieve the scale of real-time detection on fast and large DNS data with the use of big data technologies in offline training and online detection systems.",
"title": ""
},
{
"docid": "9d0a383122a7aa73053cededb64b418d",
"text": "With the explosive growth of Internet of Things devices and massive data produced at the edge of the network, the traditional centralized cloud computing model has come to a bottleneck due to the bandwidth limitation and resources constraint. Therefore, edge computing, which enables storing and processing data at the edge of the network, has emerged as a promising technology in recent years. However, the unique features of edge computing, such as content perception, real-time computing, and parallel processing, has also introduced several new challenges in the field of data security and privacy-preserving, which are also the key concerns of the other prevailing computing paradigms, such as cloud computing, mobile cloud computing, and fog computing. Despites its importance, there still lacks a survey on the recent research advance of data security and privacy-preserving in the field of edge computing. In this paper, we present a comprehensive analysis of the data security and privacy threats, protection technologies, and countermeasures inherent in edge computing. Specifically, we first make an overview of edge computing, including forming factors, definition, architecture, and several essential applications. Next, a detailed analysis of data security and privacy requirements, challenges, and mechanisms in edge computing are presented. Then, the cryptography-based technologies for solving data security and privacy issues are summarized. The state-of-the-art data security and privacy solutions in edge-related paradigms are also surveyed. Finally, we propose several open research directions of data security in the field of edge computing.",
"title": ""
},
{
"docid": "8f73870d5e999c0269059c73bb85e05c",
"text": "Placing the DRAM in the same package as a processor enables several times higher memory bandwidth than conventional off-package DRAM. Yet, the latency of in-package DRAM is not appreciably lower than that of off-package DRAM. A promising use of in-package DRAM is as a large cache. Unfortunately, most previous DRAM cache designs optimize mainly for cache hit latency and do not consider bandwidth efficiency as a first-class design constraint. Hence, as we show in this paper, these designs are suboptimal for use with in-package DRAM.\n We propose a new DRAM cache design, Banshee, that optimizes for both in-package and off-package DRAM bandwidth efficiency without degrading access latency. Banshee is based on two key ideas. First, it eliminates the tag lookup overhead by tracking the contents of the DRAM cache using TLBs and page table entries, which is efficiently enabled by a new lightweight TLB coherence protocol we introduce. Second, it reduces unnecessary DRAM cache replacement traffic with a new bandwidth-aware frequency-based replacement policy. Our evaluations show that Banshee significantly improves performance (15% on average) and reduces DRAM traffic (35.8% on average) over the best-previous latency-optimized DRAM cache design.",
"title": ""
},
{
"docid": "cde4d7457b949420ab90bdc894f40eb0",
"text": "We study the problem of named entity recognition (NER) from electronic medical records, which is one of the most fundamental and critical problems for medical text mining. Medical records which are written by clinicians from different specialties usually contain quite different terminologies and writing styles. The difference of specialties and the cost of human annotation makes it particularly difficult to train a universal medical NER system. In this paper, we propose a labelaware double transfer learning framework (LaDTL) for cross-specialty NER, so that a medical NER system designed for one specialty could be conveniently applied to another one with minimal annotation efforts. The transferability is guaranteed by two components: (i) we propose label-aware MMD for feature representation transfer, and (ii) we perform parameter transfer with a theoretical upper bound which is also label aware. We conduct extensive experiments on 12 cross-specialty NER tasks. The experimental results demonstrate that La-DTL provides consistent accuracy improvement over strong baselines. Besides, the promising experimental results on non-medical NER scenarios indicate that LaDTL is potential to be seamlessly adapted to a wide range of NER tasks.",
"title": ""
},
{
"docid": "a363b4cec11d5328012a1cd0f13ba747",
"text": "Techniques for partitioning objects into optimally homogeneous groups on the basis of empirical measures of similarity among those objects have received increasing attention in several different fields. This paper develops a useful correspondence between any hierarchical system of such clusters, and a particular type of distance measure. The correspondence gives rise to two methods of clustering that are computationally rapid and invariant under monotonic transformations of the data. In an explicitly defined sense, one method forms clusters that are optimally \"connected,\" while the other forms clusters that are optimally \"compact.\"",
"title": ""
},
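The two strategies described in the abstract above, clusters that are optimally "connected" and optimally "compact", are conventionally realized as single-link and complete-link agglomerative clustering. The sketch below shows both on made-up toy points using SciPy's standard implementations; it illustrates the idea rather than reproducing the paper's original procedure.

```python
# Small illustration of the two agglomerative strategies: 'single' linkage
# grows maximally "connected" clusters, 'complete' linkage grows maximally
# "compact" ones. The five 2-D points are arbitrary toy data.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

points = np.array([[0.0, 0.0], [0.2, 0.1], [0.3, 0.0],
                   [5.0, 5.0], [5.1, 4.8]])
dists = pdist(points)                     # condensed pairwise distance matrix

for method in ("single", "complete"):
    tree = linkage(dists, method=method)  # hierarchical merge tree
    labels = fcluster(tree, t=2, criterion="maxclust")
    print(method, labels)
```

On well-separated points like these, both criteria recover the same two groups; the difference between "connected" and "compact" clusters only becomes visible on chained or elongated data. Both criteria also depend only on the rank order of the pairwise distances, which is the invariance under monotonic transformations that the abstract mentions.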
{
"docid": "d8dd68593fd7bd4bdc868634deb9661a",
"text": "We present a low-cost IoT based system able to monitor acoustic, olfactory, visual and thermal comfort levels. The system is provided with different ambient sensors, computing, control and connectivity features. The integration of the device with a smartwatch makes it possible the analysis of the personal comfort parameters.",
"title": ""
},
{
"docid": "ccce159596bf45910117a80ee54090a5",
"text": "The parietal lobe plays a major role in sensorimotor integration and action. Recent neuroimaging studies have revealed more than 40 retinotopic areas distributed across five visual streams in the human brain, two of which enter the parietal lobe. A series of retinotopic areas occupy the length of the intraparietal sulcus and continue into the postcentral sulcus. On themedial wall, retinotopy extends across the parieto-occipital sulcus into the precuneus and reaches the cingulate sulcus. Full-body tactile stimulation revealed a multisensory homunculus lying along the postcentral sulcus just posterior to primary somatosensory cortical areas and overlapping with the anteriormost retinotopic maps. These topologically organized higher-level maps lay the foundation for actions in peripersonal space (e.g., reaching and grasping) aswell as navigation through space. A preliminary yet comprehensive multilayer functional atlas was constructed to specify the relative locations of cortical unisensory, multisensory, and action representations. We expect that those areal and functional definitions will be refined by future studies using more sophisticated stimuli and tasks tailored to regions with different specificity. The long-term goal is to construct an online surface-based atlas containing layered maps of multiple modalities that can be used as a reference to understand the functions and disorders of the parietal lobe.",
"title": ""
},
{
"docid": "8f73a521d7703fa00bbaf7b68e470c55",
"text": "Purpose – The purpose of this paper is to introduce the concept of strategic integration of knowledge management (KM ) and customer relationship management (CRM). The integration is a strategic issue that has strong ramifications in the long-term competitiveness of organizations. It is not limited to CRM; the concept can also be applied to supply chain management (SCM), product development management (PDM), eterprise resource planning (ERP) and retail network management (RNM) that offer different perspectives into knowledge management adoption. Design/methodology/approach – Through literature review and establishing new perspectives with examples, the components of knowledge management, customer relationship management, and strategic planning are amalgamated. Findings – Findings include crucial details in the various components of knowledge management, customer relationship management, and strategic planning, i.e. strategic planning process, value formula, intellectual capital measure, different levels of CRM and their core competencies. Practical implications – Although the strategic integration of knowledge management and customer relationship management is highly conceptual, a case example has been provided where the concept is applied. The same concept could also be applied to other industries that focus on customer service. Originality/value – The concept of strategic integration of knowledge management and customer relationship management is new. There are other areas, yet to be explored in terms of additional integration such as SCM, PDM, ERP, and RNM. The concept of integration would be useful for future research as well as for KM and CRM practitioners.",
"title": ""
},
{
"docid": "236dc9aa7d8c78698cbff770184db32b",
"text": "The prevalence of diet-related chronic diseases strongly impacts global health and health services. Currently, it takes training and strong personal involvement to manage or treat these diseases. One way to assist with dietary assessment is through computer vision systems that can recognize foods and their portion sizes from images and output the corresponding nutritional information. When multiple food items may exist, a food segmentation stage should also be applied before recognition. In this study, we propose a method to detect and segment the food of already detected dishes in an image. The method combines region growing/merging techniques with a deep CNN-based food border detection. A semi-automatic version of the method is also presented that improves the result with minimal user input. The proposed methods are trained and tested on non-overlapping subsets of a food image database including 821 images, taken under challenging conditions and annotated manually. The automatic and semi-automatic dish segmentation methods reached average accuracies of 88% and 92%, respectively, in roughly 0.5 seconds per image.",
"title": ""
},
{
"docid": "a402ac37db42996e6fccca9d2da056ee",
"text": "This article presents an up-to-date review of the several extraction methods commonly used to determine the value of the threshold voltage of MOSFETs. It includes the different methods that extract this quantity from the drain current versus gate voltage transfer characteristics measured under linear operation conditions for crystalline and non-crystalline MOSFETs. The various methods presented for the linear region are adapted to the saturation region and tested as a function of drain voltage whenever possible. The implementation of the extraction methods is discussed and tested by applying them to real state-ofthe-art devices in order to compare their performance. The validity of the different methods with respect to the presence of parasitic series resistance is also evaluated using 2-D simulations. 2012 Elsevier Ltd. All rights reserved.",
"title": ""
},
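One of the linear-region extraction methods such reviews typically cover is the linear-extrapolation (maximum-transconductance) method. The sketch below applies it to a synthetic transfer curve; the device parameters, the simple linear-region current model Id ≈ k(Vgs − Vth)Vds, and the Vds/2 refinement shown at the end are illustrative assumptions, not taken from the article.

```python
# Sketch of the linear-extrapolation Vth method: the transfer curve Id(Vgs)
# measured at small Vds is extrapolated to Id = 0 from the point of maximum
# transconductance gm = dId/dVgs. The synthetic device (Vth = 0.7 V) is made up.
import numpy as np

vth_true, k, vds = 0.7, 2e-3, 0.05                   # invented device parameters
vgs = np.linspace(0.0, 2.0, 201)
ids = np.where(vgs > vth_true, k * (vgs - vth_true) * vds, 0.0)

gm = np.gradient(ids, vgs)                           # transconductance
i_max = int(np.argmax(gm))                           # max-gm bias point

# Tangent at the max-gm point, extrapolated to Id = 0:
vth_extracted = vgs[i_max] - ids[i_max] / gm[i_max]
# A common refinement subtracts Vds/2 to account for the linear-region bias:
vth_corrected = vth_extracted - vds / 2

print(f"extrapolated Vth = {vth_extracted:.3f} V (true {vth_true} V)")
print(f"with Vds/2 correction = {vth_corrected:.3f} V")
```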
{
"docid": "2a5339fdb6b4f8a9a28af908da7b168d",
"text": "In this paper we propose a human interface device that converts the mechanism of hand sign language into alphanumerical characters. This device is in the form of a portable right hand glove. We propose this device in concurrence with assistive engineering to help the underprivileged. Our main goal is to identify 26 alphabets and 10 numbers of American Sign Language and display it on the LCD. Once the text is obtained on the LCD, text to speech conversion operation is carried out and a voice output is obtained. Further, the text obtained can also be viewed on a PC or any portable hand held device. People with hearing disability find it difficult to communicate with others using their Universal Sign Language, as a normal person doesn't understand these sign languages. Our main objective is to set an interface between the Deaf/Dumb and normal person to improve the communication capabilities so that they can communicate easily with others. We mount dual axis accelerometers on the glove and propose and efficient methodology to convert these sign languages.",
"title": ""
},
{
"docid": "354041896b7375aeedf1018f3d9bb380",
"text": "More than 60 percent of the population in the India, agriculture as the primary sector occupation. In recent years, due increase in labor shortage interest has grown for the development of the autonomous vehicles like robots in the agriculture. An robot called agribot have been designed for agricultural purposes. It is designed to minimize the labor of farmers in addition to increasing the speed and accuracy of the work. It performs the elementary functions involved in farming i.e. spraying of pesticide, sowing of seeds, and so on. Spraying pesticides especially important for the workers in the area of potentially harmful for the safety and health of the workers. This is especially important for the workers in the area of potentially harmful for the safety and health of the workers. The Proposed system aims at designing multipurpose autonomous agricultural robotic vehicle which can be controlled through IoT for seeding and spraying of pesticides. These robots are used to reduce human intervention, ensuring high yield and efficient utilization of resources. KeywordsIoT, Agribot, Sprayer, Pesticides",
"title": ""
},
{
"docid": "b63635129ab0663efa374b83f2b77944",
"text": "Cannabis sativa L. is an important herbaceous species originating from Central Asia, which has been used in folk medicine and as a source of textile fiber since the dawn of times. This fast-growing plant has recently seen a resurgence of interest because of its multi-purpose applications: it is indeed a treasure trove of phytochemicals and a rich source of both cellulosic and woody fibers. Equally highly interested in this plant are the pharmaceutical and construction sectors, since its metabolites show potent bioactivities on human health and its outer and inner stem tissues can be used to make bioplastics and concrete-like material, respectively. In this review, the rich spectrum of hemp phytochemicals is discussed by putting a special emphasis on molecules of industrial interest, including cannabinoids, terpenes and phenolic compounds, and their biosynthetic routes. Cannabinoids represent the most studied group of compounds, mainly due to their wide range of pharmaceutical effects in humans, including psychotropic activities. The therapeutic and commercial interests of some terpenes and phenolic compounds, and in particular stilbenoids and lignans, are also highlighted in view of the most recent literature data. Biotechnological avenues to enhance the production and bioactivity of hemp secondary metabolites are proposed by discussing the power of plant genetic engineering and tissue culture. In particular two systems are reviewed, i.e., cell suspension and hairy root cultures. Additionally, an entire section is devoted to hemp trichomes, in the light of their importance as phytochemical factories. Ultimately, prospects on the benefits linked to the use of the -omics technologies, such as metabolomics and transcriptomics to speed up the identification and the large-scale production of lead agents from bioengineered Cannabis cell culture, are presented.",
"title": ""
},
{
"docid": "486e3f5614f69f60d8703d8641c73416",
"text": "The Great East Japan Earthquake and Tsunami drastically changed Japanese society, and the requirements for ICT was completely redefined. After the disaster, it was impossible for disaster victims to utilize their communication devices, such as cellular phones, tablet computers, or laptop computers, to notify their families and friends of their safety and confirm the safety of their loved ones since the communication infrastructures were physically damaged or lacked the energy necessary to operate. Due to this drastic event, we have come to realize the importance of device-to-device communications. With the recent increase in popularity of D2D communications, many research works are focusing their attention on a centralized network operated by network operators and neglect the importance of decentralized infrastructureless multihop communication, which is essential for disaster relief applications. In this article, we propose the concept of multihop D2D communication network systems that are applicable to many different wireless technologies, and clarify requirements along with introducing open issues in such systems. The first generation prototype of relay by smartphone can deliver messages using only users' mobile devices, allowing us to send out emergency messages from disconnected areas as well as information sharing among people gathered in evacuation centers. The success of field experiments demonstrates steady advancement toward realizing user-driven networking powered by communication devices independent of operator networks.",
"title": ""
},
{
"docid": "924f23fa4a8b2140445755ed0a63676f",
"text": "This article examined the relationships and outcomes of behaviors falling at the interface of general and sexual forms of interpersonal mistreatment in the workplace. Data were collected with surveys of two different female populations (Ns = 833 and 1,425) working within a large public-sector organization. Findings revealed that general incivility and sexual harassment were related constructs, with gender harassment bridging the two. Moreover, these behaviors tended to co-occur in organizations, and employee well-being declined with the addition of each type of mistreatment to the workplace experience. This behavior type (or behavior combination) effect remained significant even after controlling for behavior frequency. The findings are interpreted from perspectives on sexual aggression, social power, and multiple victimization.",
"title": ""
},
{
"docid": "be426354d0338b2b5a17503d30c9665c",
"text": "0141-9331/$ see front matter 2011 Elsevier B.V. A doi:10.1016/j.micpro.2011.06.002 ⇑ Corresponding author. E-mail address: [email protected] (J. M In this paper, Texas Instruments TMS320C6713 DSP based real-time speech recognition system using Modified One Against All Support Vector Machine (SVM) classifier is proposed. The major contributions of this paper are: the study and evaluation of the performance of the classifier using three feature extraction techniques and proposal for minimizing the computation time for the classifier. From this study, it is found that the recognition accuracies of 93.33%, 98.67% and 96.67% are achieved for the classifier using Mel Frequency Cepstral Coefficients (MFCC) features, zerocrossing (ZC) and zerocrossing with peak amplitude (ZCPA) features respectively. To reduce the computation time required for the systems, two techniques – one using optimum threshold technique for the SVM classifier and another using linear assembly are proposed. The ZC based system requires the least computation time and the above techniques reduce the execution time by a factor of 6.56 and 5.95 respectively. For the purpose of comparison, the speech recognition system is also implemented using Altera Cyclone II FPGA with Nios II soft processor and custom instructions. Of the two approaches, the DSP approach requires 87.40% less number of clock cycles. Custom design of the recognition system on the FPGA without using the soft-core processor would have resulted in less computational complexity. The proposed classifier is also found to reduce the number of support vectors by a factor of 1.12–3.73 when applied to speaker identification and isolated letter recognition problems. The techniques proposed here can be adapted for various other SVM based pattern recognition systems. 2011 Elsevier B.V. All rights reserved.",
"title": ""
}
] | scidocsrr |
2379575cd8f94486a085e9a1bf85a0a4 | Multi- and Cross-Modal Semantics Beyond Vision: Grounding in Auditory Perception | [
{
"docid": "6d15f9766e35b2c78ce5402ed44cdf57",
"text": "Models that acquire semantic representations from both linguistic and perceptual input are of interest to researchers in NLP because of the obvious parallels with human language learning. Performance advantages of the multi-modal approach over language-only models have been clearly established when models are required to learn concrete noun concepts. However, such concepts are comparatively rare in everyday language. In this work, we present a new means of extending the scope of multi-modal models to more commonly-occurring abstract lexical concepts via an approach that learns multimodal embeddings. Our architecture outperforms previous approaches in combining input from distinct modalities, and propagates perceptual information on concrete concepts to abstract concepts more effectively than alternatives. We discuss the implications of our results both for optimizing the performance of multi-modal models and for theories of abstract conceptual representation.",
"title": ""
}
] | [
{
"docid": "b57377a695ce7c5114d61bbe4f29e7a1",
"text": "Referring to existing illustrations helps novice drawers to realize their ideas. To find such helpful references from a large image collection, we first build a semantic vector representation of illustrations by training convolutional neural networks. As the proposed vector space correctly reflects the semantic meanings of illustrations, users can efficiently search for references with similar attributes. Besides the search with a single query, a semantic morphing algorithm that searches the intermediate illustrations that gradually connect two queries is proposed. Several experiments were conducted to demonstrate the effectiveness of our methods.",
"title": ""
},
{
"docid": "bf2c7b1d93b6dee024336506fb5a2b32",
"text": "In this paper we present the first public, online demonstration of MaxTract; a tool that converts PDF files containing mathematics into multiple formats including LTEX, HTML with embedded MathML, and plain text. Using a bespoke PDF parser and image analyser, we directly extract character and font information to use as input for a linear grammar which, in conjunction with specialised drivers, can accurately recognise and reproduce both the two dimensional relationships between symbols in mathematical formulae and the one dimensional relationships present in standard text. The main goals of MaxTract are to provide translation services into standard mathematical markup languages and to add accessibility to mathematical documents on multiple levels. This includes both accessibility in the narrow sense of providing access to content for print impaired users, such as those with visual impairments, dyslexia or dyspraxia, as well as more generally to enable any user access to the mathematical content at more re-usable levels than merely visual. MaxTract produces output compatible with web browsers, screen readers, and tools such as copy and paste, which is achieved by enriching the regular text with mathematical markup. The output can also be used directly, within the limits of the presentation MathML produced, as machine readable mathematical input to software systems such as Mathematica or Maple.",
"title": ""
},
{
"docid": "783d7251658f9077e05a7b1b9bd60835",
"text": "A method is presented for the representation of (pictures of) faces. Within a specified framework the representation is ideal. This results in the characterization of a face, to within an error bound, by a relatively low-dimensional vector. The method is illustrated in detail by the use of an ensemble of pictures taken for this purpose.",
"title": ""
},
{
"docid": "16995051681cebf1e2dba1484a3f85bf",
"text": "A core problem in learning semantic parsers from denotations is picking out consistent logical forms—those that yield the correct denotation—from a combinatorially large space. To control the search space, previous work relied on restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WIKITABLEQUESTIONS dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.",
"title": ""
},
{
"docid": "8201ba18da15b1acb1e399e99d1fc586",
"text": "Articles in the financial press suggest that institutional investors are overly focused on short-term profitability leading mangers to manipulate earnings fearing that a short-term profit disappointment will lead institutions to liquidate their holdings. This paper shows, however, that the absolute value of discretionary accruals declines with institutional ownership. The result is consistent with managers recognizing that institutional owners are better informed than individual investors, which reduces the perceived benefit of managing accruals. We also find that as institutional ownership increases, stock prices tend to reflect a greater proportion of the information in future earnings relative to current earnings. This result is consistent with institutional investors looking beyond current earnings compared to individual investors. Collectively, the results offer strong evidence that managers do not manipulate earnings due to pressure from institutional investors who are overly focused on short-term profitability.",
"title": ""
},
{
"docid": "2ebb00579fbfbadb07331bd297e658e9",
"text": "There is risk involved in any construction project. A contractor’s quality assurance system is essential in preventing problems and the reoccurrence of problems. This system ensures consistent quality for the contractor’s clients. An evaluation of the quality systems of 15 construction contractors in Saudi Arabia is discussed here. The evaluation was performed against the ISO 9000 standard. The contractors’ quality systems vary in complexity, ranging from an informal inspection and test system to a comprehensive system. The ISO 9000 clauses most often complied with are those dealing with (1) inspection and test status; (2) inspection and testing; (3) control of nonconformance product; and (4) handling, storage, and preservation. The clauses least complied with concern (1) design control; (2) internal auditing; (3) training; and (4) statistical techniques. Documentation of a quality system is scarce for the majority of the contractors.",
"title": ""
},
{
"docid": "2937b605179b3a0f7657f7ddf5dbcf1a",
"text": "This article presents a survey on crowd analysis using computer vision techniques, covering different aspects such as people tracking, crowd density estimation, event detection, validation, and simulation. It also reports how related the areas of computer vision and computer graphics should be to deal with current challenges in crowd analysis.",
"title": ""
},
{
"docid": "ef15ffc5609653488c68364d2ba77149",
"text": "BACKGROUND\nBeneficial effects of probiotics have never been analyzed in an animal shelter.\n\n\nHYPOTHESIS\nDogs and cats housed in an animal shelter and administered a probiotic are less likely to have diarrhea of ≥2 days duration than untreated controls.\n\n\nANIMALS\nTwo hundred and seventeen cats and 182 dogs.\n\n\nMETHODS\nDouble blinded and placebo controlled. Shelter dogs and cats were housed in 2 separate rooms for each species. For 4 weeks, animals in 1 room for each species was fed Enterococcus faecium SF68 while animals in the other room were fed a placebo. After a 1-week washout period, the treatments by room were switched and the study continued an additional 4 weeks. A standardized fecal score system was applied to feces from each animal every day by a blinded individual. Feces of animals with and without diarrhea were evaluated for enteric parasites. Data were analyzed by a generalized linear mixed model using a binomial distribution with treatment being a fixed effect and the room being a random effect.\n\n\nRESULTS\nThe percentage of cats with diarrhea ≥2 days was significantly lower (P = .0297) in the probiotic group (7.4%) when compared with the placebo group (20.7%). Statistical differences between groups of dogs were not detected but diarrhea was uncommon in both groups of dogs during the study.\n\n\nCONCLUSION AND CLINICAL IMPORTANCE\nCats fed SF68 had fewer episodes of diarrhea of ≥2 days when compared with controls suggests the probiotic may have beneficial effects on the gastrointestinal tract.",
"title": ""
},
{
"docid": "bb86cae865113f2907a4cecb5f89453f",
"text": "In this paper, we study the problem of learning from weakly labeled data, where labels of the training examples are incomplete. This includes, for example, (i) semi-supervised learning where labels are partially known; (ii) multi-instance learning where labels are implicitly known; and (iii) clustering where labels are completely unknown. Unlike supervised learning, learning with weak labels involves a difficult Mixed-Integer Programming (MIP) problem. Therefore, it can suffer from poor scalability and may also get stuck in local minimum. In this paper, we focus on SVMs and propose the WellSVM via a novel label generation strategy. This leads to a convex relaxation of the original MIP, which is at least as tight as existing convex Semi-Definite Programming (SDP) relaxations. Moreover, the WellSVM can be solved via a sequence of SVM subproblems that are much more scalable than previous convex SDP relaxations. Experiments on three weakly labeled learning tasks, namely, (i) semi-supervised learning; (ii) multi-instance learning for locating regions of interest in content-based information retrieval; and (iii) clustering, clearly demonstrate improved performance, and WellSVM is also readily applicable on large data sets.",
"title": ""
},
{
"docid": "df997cfc15654a0c9886d52c4166f649",
"text": "Network embedding aims to represent each node in a network as a low-dimensional feature vector that summarizes the given node’s (extended) network neighborhood. The nodes’ feature vectors can then be used in various downstream machine learning tasks. Recently, many embedding methods that automatically learn the features of nodes have emerged, such as node2vec and struc2vec, which have been used in tasks such as node classification, link prediction, and node clustering, mainly in the social network domain. There are also other embedding methods that explicitly look at the connections between nodes, i.e., the nodes’ network neighborhoods, such as graphlets. Graphlets have been used in many tasks such as network comparison, link prediction, and network clustering, mainly in the computational biology domain. Even though the two types of embedding methods (node2vec/struct2vec versus graphlets) have a similar goal – to represent nodes as features vectors, no comparisons have been made between them, possibly because they have originated in the different domains. Therefore, in this study, we compare graphlets to node2vec and struc2vec, and we do so in the task of network alignment. In evaluations on synthetic and real-world biological networks, we find that graphlets are both more accurate and faster than node2vec and struc2vec.",
"title": ""
},
{
"docid": "e69dd688041be302ce973e22457622f9",
"text": "In the last decade, supervised deep learning approaches have been extensively employed in visual odometry (VO) applications, which is not feasible in environments where labelled data is not abundant. On the other hand, unsupervised deep learning approaches for localization and mapping in unknown environments from unlabelled data have received comparatively less attention in VO research. In this study, we propose a generative unsupervised learning framework that predicts 6-DoF pose camera motion and monocular depth map of the scene from unlabelled RGB image sequences, using deep convolutional Generative Adversarial Networks (GANs). We create a supervisory signal by warping view sequences and assigning the re-projection minimization to the objective loss function that is adopted in multi-view pose estimation and single-view depth generation network. Detailed quantitative and qualitative evaluations of the proposed framework on the KITTI [1] and Cityscapes [2] datasets show that the proposed method outperforms both existing traditional and unsupervised deep VO methods providing better results for both pose estimation and depth recovery.",
"title": ""
},
{
"docid": "0a43496b7fbfeb54a6283fcac438d5dc",
"text": "Enterprise Resource Planning (ERP) has come to mean many things over the last several decades. Divergent applications by practitioners and academics, as well as by researchers in alternative fields of study, has allowed for both considerable proliferation of information on the topic but also for a considerable amount of confusion regarding the meaning of the term. In reviewing ERP research two distinct research streams emerge. The first focuses on the fundamental corporate capabilities driving ERP as a strategic concept. A second stream focuses on the details associated with implementing information systems and their relative success and cost. This paper briefly discusses these research streams and suggests some ideas for related future research. Published in the European Journal of Operational Research 146(2), 2003",
"title": ""
},
{
"docid": "893e1e17570e5daa83827d91b1503185",
"text": "We introduce a similarity-based machine learning approach for detecting non-market, adversarial, malicious Android apps. By adversarial, we mean those apps designed to avoid detection. Our approach relies on identifying the Android applications that are similar to an adversarial known Android malware. In our approach, similarity is detected statically by computing the similarity score between two apps based on their methods similarity. The similarity between methods is computed using the normalized compression distance (NCD) in dependence of either zlib or bz2 compressors. The NCD calculates the semantic similarity between pair of methods in two compared apps. The first app is one of the sample apps in the input dataset, while the second app is one of malicious apps stored in a malware database. Later all the computed similarity scores are used as features for training a supervised learning classifier to detect suspicious apps with high similarity score to the malicious ones in the database.",
"title": ""
},
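The passage above describes method-level similarity scoring with the normalized compression distance. The sketch below is an editorial illustration, not part of the dataset entry or the cited paper: the aggregation of method scores into an app-level score and the toy byte strings are assumptions made only for demonstration.

```python
# Minimal sketch of NCD-based similarity between sets of method bodies,
# assuming each method has already been extracted as a byte string.
import zlib
import bz2

def ncd(x: bytes, y: bytes, compress=zlib.compress) -> float:
    """Normalized compression distance: ~0 for near-identical inputs, ~1 for unrelated ones."""
    cx = len(compress(x))
    cy = len(compress(y))
    cxy = len(compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

def app_similarity(app_methods, malware_methods, compress=bz2.compress) -> float:
    """Average best-match similarity (1 - NCD) of each app method against a malware sample's methods."""
    scores = []
    for m in app_methods:
        best_distance = min(ncd(m, n, compress) for n in malware_methods)
        scores.append(1.0 - best_distance)
    return sum(scores) / len(scores) if scores else 0.0

if __name__ == "__main__":
    # Toy byte strings standing in for decompiled method bodies (illustrative only).
    candidate = [b"invoke-virtual getDeviceId; invoke-static sendTextMessage"]
    known_bad = [b"invoke-virtual getDeviceId; invoke-static sendTextMessage; const-string url"]
    print(app_similarity(candidate, known_bad))
```

In practice the resulting scores would be collected as features for a supervised classifier, as the abstract describes; the choice of compressor mainly trades speed against compression quality.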
{
"docid": "c51cb80a1a5afe25b16a5772ccee0e6b",
"text": "Face perception relies on computations carried out in face-selective cortical areas. These areas have been intensively investigated for two decades, and this work has been guided by an influential neural model suggested by Haxby and colleagues in 2000. Here, we review new findings about face-selective areas that suggest the need for modifications and additions to the Haxby model. We suggest a revised framework based on (a) evidence for multiple routes from early visual areas into the face-processing system, (b) information about the temporal characteristics of these areas, (c) indications that the fusiform face area contributes to the perception of changeable aspects of faces, (d) the greatly elevated responses to dynamic compared with static faces in dorsal face-selective brain areas, and (e) the identification of three new anterior face-selective areas. Together, these findings lead us to suggest that face perception depends on two separate pathways: a ventral stream that represents form information and a dorsal stream driven by motion and form information.",
"title": ""
},
{
"docid": "7d4707e90adb42c75b4f84b10fce65c3",
"text": "Sleep is a complex phenomenon that could be understood and assessed at many levels. Sleep could be described at the behavioral level (relative lack of movements and awareness and responsiveness) and at the brain level (based on EEG activity). Sleep could be characterized by its duration, by its distribution during the 24-hr day period, and by its quality (e.g., consolidated versus fragmented). Different methods have been developed to assess various aspects of sleep. This chapter covers the most established and common methods used to assess sleep in infants and children. These methods include polysomnography, videosomnography, actigraphy, direct observations, sleep diaries, and questionnaires. The advantages and disadvantages of each method are highlighted.",
"title": ""
},
{
"docid": "b8377cba1fe8bca54e12b3c707d3cbaf",
"text": "The structure of foot-and-mouth disease virus has been determined at close to atomic resolution by X-ray diffraction without experimental phase information. The virus shows similarities with other picornaviruses but also several unique features. The canyon or pit found in other picornaviruses is absent; this has important implications for cell attachment. The most immunogenic portion of the capsid, which acts as a potent peptide vaccine, forms a disordered protrusion on the virus surface.",
"title": ""
},
{
"docid": "af0a1a8af70423ec09e0bb1e47f2e3f6",
"text": "Infants are experts at playing, with an amazing ability to generate novel structured behaviors in unstructured environments that lack clear extrinsic reward signals. We seek to replicate some of these abilities with a neural network that implements curiosity-driven intrinsic motivation. Using a simple but ecologically naturalistic simulated environment in which the agent can move and interact with objects it sees, the agent learns a world model predicting the dynamic consequences of its actions. Simultaneously, the agent learns to take actions that adversarially challenge the developing world model, pushing the agent to explore novel and informative interactions with its environment. We demonstrate that this policy leads to the self-supervised emergence of a spectrum of complex behaviors, including ego motion prediction, object attention, and object gathering. Moreover, the world model that the agent learns supports improved performance on object dynamics prediction and localization tasks. Our results are a proof-of-principle that computational models of intrinsic motivation might account for key features of developmental visuomotor learning in infants.",
"title": ""
},
{
"docid": "f81430ff3be528c891262ddb8a730699",
"text": "Clustering validation has long been recognized as one of the vital issues essential to the success of clustering applications. In general, clustering validation can be categorized into two classes, external clustering validation and internal clustering validation. In this paper, we focus on internal clustering validation and present a study of 11 widely used internal clustering validation measures for crisp clustering. The results of this study indicate that these existing measures have certain limitations in different application scenarios. As an alternative choice, we propose a new internal clustering validation measure, named clustering validation index based on nearest neighbors (CVNN), which is based on the notion of nearest neighbors. This measure can dynamically select multiple objects as representatives for different clusters in different situations. Experimental results show that CVNN outperforms the existing measures on both synthetic data and real-world data in different application scenarios.",
"title": ""
},
{
"docid": "88c1ab7e817118ee01fb28bf32ed2e23",
"text": "Field experiment was conducted on fodder maize to explore the potential of integrated use of chemical, organic and biofertilizers for improving maize growth, beneficial microflora in the rhizosphere and the economic returns. The treatments were designed to make comparison of NPK fertilizer with different combinations of half dose of NP with organic and biofertilizers viz. biological potassium fertilizer (BPF), Biopower, effective microorganisms (EM) and green force compost (GFC). Data reflected maximum crop growth in terms of plant height, leaf area and fresh biomass with the treatment of full NPK; and it was followed by BPF+full NP. The highest uptake of NPK nutrients by crop was recorded as: N under half NP+Biopower; P in BPF+full NP; and K from full NPK. The rhizosphere microflora enumeration revealed that Biopower+EM applied along with half dose of GFC soil conditioner (SC) or NP fertilizer gave the highest count of N-fixing bacteria (Azotobacter, Azospirillum, Azoarcus andZoogloea). Regarding the P-solubilizing bacteria,Bacillus was having maximum population with Biopower+BPF+half NP, andPseudomonas under Biopower+EM+half NP treatment. It was concluded that integration of half dose of NP fertilizer with Biopower+BPF / EM can give similar crop yield as with full rate of NP fertilizer; and through reduced use of fertilizers the production cost is minimized and the net return maximized. However, the integration of half dose of NP fertilizer with biofertilizers and compost did not give maize fodder growth and yield comparable to that from full dose of NPK fertilizers.",
"title": ""
}
] | scidocsrr |
c7b3a675e2e93e6900bfba1fea945c7f | Grab 'n Run: Secure and Practical Dynamic Code Loading for Android Applications | [
{
"docid": "6ee601387e550e896b3a3938016b03f7",
"text": "Android phone manufacturers are under the perpetual pressure to move quickly on their new models, continuously customizing Android to fit their hardware. However, the security implications of this practice are less known, particularly when it comes to the changes made to Android's Linux device drivers, e.g., those for camera, GPS, NFC etc. In this paper, we report the first study aimed at a better understanding of the security risks in this customization process. Our study is based on ADDICTED, a new tool we built for automatically detecting some types of flaws in customized driver protection. Specifically, on a customized phone, ADDICTED performs dynamic analysis to correlate the operations on a security-sensitive device to its related Linux files, and then determines whether those files are under-protected on the Linux layer by comparing them with their counterparts on an official Android OS. In this way, we can detect a set of likely security flaws on the phone. Using the tool, we analyzed three popular phones from Samsung, identified their likely flaws and built end-to-end attacks that allow an unprivileged app to take pictures and screenshots, and even log the keys the user enters through touch screen. Some of those flaws are found to exist on over a hundred phone models and affect millions of users. We reported the flaws and helped the manufacturers fix those problems. We further studied the security settings of device files on 2423 factory images from major phone manufacturers, discovered over 1,000 vulnerable images and also gained insights about how they are distributed across different Android versions, carriers and countries.",
"title": ""
},
{
"docid": "6eb2c0e22ecc0816cb5f83292902d799",
"text": "In this paper, we demonstrate that Android malware can bypass all automated analysis systems, including AV solutions, mobile sandboxes, and the Google Bouncer. We propose a tool called Sand-Finger for the fingerprinting of Android-based analysis systems. By analyzing the fingerprints of ten unique analysis environments from different vendors, we were able to find characteristics in which all tested environments differ from actual hardware. Depending on the availability of an analysis system, malware can either behave benignly or load malicious code at runtime. We classify this group of malware as Divide-and-Conquer attacks that are efficiently obfuscated by a combination of fingerprinting and dynamic code loading. In this group, we aggregate attacks that work against dynamic as well as static analysis. To demonstrate our approach, we create proof-of-concept malware that surpasses up-to-date malware scanners for Android. We also prove that known malware samples can enter the Google Play Store by modifying them only slightly. Due to Android's lack of an API for malware scanning at runtime, it is impossible for AV solutions to secure Android devices against these attacks.",
"title": ""
}
] | [
{
"docid": "328a3e05fac7d118a99afd6197dac918",
"text": "Neural networks have recently had a lot of success for many tasks. However, neural network architectures that perform well are still typically designed manually by experts in a cumbersome trial-and-error process. We propose a new method to automatically search for well-performing CNN architectures based on a simple hill climbing procedure whose operators apply network morphisms, followed by short optimization runs by cosine annealing. Surprisingly, this simple method yields competitive results, despite only requiring resources in the same order of magnitude as training a single network. E.g., on CIFAR-10, our method designs and trains networks with an error rate below 6% in only 12 hours on a single GPU; training for one day reduces this error further, to almost 5%.",
"title": ""
},
{
"docid": "01e6823392427274c4bd50cc1bf6bf6c",
"text": "The neocortex has a high capacity for plasticity. To understand the full scope of this capacity, it is essential to know how neurons choose particular partners to form synaptic connections. By using multineuron whole-cell recordings and confocal microscopy we found that axons of layer V neocortical pyramidal neurons do not preferentially project toward the dendrites of particular neighboring pyramidal neurons; instead, axons promiscuously touch all neighboring dendrites without any bias. Functional synaptic coupling of a small fraction of these neurons is, however, correlated with the existence of synaptic boutons at existing touch sites. These data provide the first direct experimental evidence for a tabula rasa-like structural matrix between neocortical pyramidal neurons and suggests that pre- and postsynaptic interactions shape the conversion between touches and synapses to form specific functional microcircuits. These data also indicate that the local neocortical microcircuit has the potential to be differently rewired without the need for remodeling axonal or dendritic arbors.",
"title": ""
},
{
"docid": "490df7bfea3338d98cbc0bd945463606",
"text": "This study examined perceived coping (perceived problem-solving ability and progress in coping with problems) as a mediator between adult attachment (anxiety and avoidance) and psychological distress (depression, hopelessness, anxiety, anger, and interpersonal problems). Survey data from 515 undergraduate students were analyzed using structural equation modeling. Results indicated that perceived coping fully mediated the relationship between attachment anxiety and psychological distress and partially mediated the relationship between attachment avoidance and psychological distress. These findings suggest not only that it is important to consider attachment anxiety or avoidance in understanding distress but also that perceived coping plays an important role in these relationships. Implications for these more complex relations are discussed for both counseling interventions and further research.",
"title": ""
},
{
"docid": "588a4eccb49bf0edf45456319b6d8ee4",
"text": "The VIENNA rectifiers have advantages of high efficiency as well as low output harmonics and are widely utilized in power conversion system when dc power sources are needed for supplying dc loads. VIENNA rectifiers based on three-phase/level can provide two voltage outputs with a neutral line at relatively low costs. However, total harmonic distortion (THD) of input current deteriorates seriously when unbalanced voltages occur. In addition, voltage outputs depend on system parameters, especially multiple loads. Therefore, unbalance output voltage controller and modified carrier-based pulse-width modulation (CBPWM) are proposed in this paper to solve the above problems. Unbalanced output voltage controller is designed based on average model considering independent output voltage and loads conditions. Meanwhile, reference voltages are modified according to different neutral point voltage conditions. The simulation and experimental results are presented to verify the proposed method.",
"title": ""
},
{
"docid": "2ed43c3b8ea0997d334f48e012a357c9",
"text": "While recognized as a theoretical and practical concept for over 20 years, only now ransomware has taken centerstage as one of the most prevalent cybercrimes. Various reports demonstrate the enormous burden placed on companies, which have to grapple with the ongoing attack waves. At the same time, our strategic understanding of the threat and the adversarial interaction between organizations and cybercriminals perpetrating ransomware attacks is lacking. In this paper, we develop, to the best of our knowledge, the first gametheoretic model of the ransomware ecosystem. Our model captures a multi-stage scenario involving organizations from different industry sectors facing a sophisticated ransomware attacker. We place particular emphasis on the decision of companies to invest in backup technologies as part of a contingency plan, and the economic incentives to pay a ransom if impacted by an attack. We further study to which degree comprehensive industry-wide backup investments can serve as a deterrent for ongoing attacks.",
"title": ""
},
{
"docid": "1ae161787669032d143226b41a380a66",
"text": "Automatic judgment prediction aims to predict the judicial results based on case materials. It has been studied for several decades mainly by lawyers and judges, considered as a novel and prospective application of artificial intelligence techniques in the legal field. Most existing methods follow the text classification framework, which fails to model the complex interactions among complementary case materials. To address this issue, we formalize the task as Legal Reading Comprehension according to the legal scenario. Following the working protocol of human judges, LRC predicts the final judgment results based on three types of information, including fact description, plaintiffs’ pleas, and law articles. Moreover, we propose a novel LRC model, AutoJudge, which captures the complex semantic interactions among facts, pleas, and laws. In experiments, we construct a real-world civil case dataset for LRC. Experimental results on this dataset demonstrate that our model achieves significant improvement over stateof-the-art models. We will publish all source codes and datasets of this work on github. com for further research.",
"title": ""
},
{
"docid": "8bb30efa3f14fa0860d1e5bc1265c988",
"text": "The introduction of microgrids in distribution networks based on power electronics facilitates the use of renewable energy resources, distributed generation (DG) and storage systems while improving the quality of electric power and reducing losses thus increasing the performance and reliability of the electrical system, opens new horizons for microgrid applications integrated into electrical power systems. The hierarchical control structure consists of primary, secondary, and tertiary levels for microgrids that mimic the behavior of the mains grid is reviewed. The main objective of this paper is to give a description of state of the art for the distributed power generation systems (DPGS) based on renewable energy and explores the power converter connected in parallel to the grid which are distinguished by their contribution to the formation of the grid voltage and frequency and are accordingly classified in three classes. This analysis is extended focusing mainly on the three classes of configurations grid-forming, grid-feeding, and gridsupporting. The paper ends up with an overview and a discussion of the control structures and strategies to control distribution power generation system (DPGS) units connected to the network. Keywords— Distributed power generation system (DPGS); hierarchical control; grid-forming; grid-feeding; grid-supporting. Nomenclature Symbols id − iq Vd − Vq P Q ω E f U",
"title": ""
},
{
"docid": "9cddaea30d7dda82537c273e97bff008",
"text": "A low-offset latched comparator using new dynamic offset cancellation technique is proposed. The new technique achieves low offset voltage without pre-amplifier and quiescent current. Furthermore the overdrive voltage of the input transistor can be optimized to reduce the offset voltage of the comparator independent of the input common mode voltage. A prototype comparator has been fabricated in 90 nm 9M1P CMOS technology with 152 µm2. Experimental results show that the comparator achieves 3.8 mV offset at 1 sigma at 500 MHz operating, while dissipating 39 μW from a 1.2 V supply.",
"title": ""
},
{
"docid": "f47019a78ee833dcb8c5d15a4762ccf9",
"text": "It has recently been shown that Bondi-van der Burg-Metzner-Sachs supertranslation symmetries imply an infinite number of conservation laws for all gravitational theories in asymptotically Minkowskian spacetimes. These laws require black holes to carry a large amount of soft (i.e., zero-energy) supertranslation hair. The presence of a Maxwell field similarly implies soft electric hair. This Letter gives an explicit description of soft hair in terms of soft gravitons or photons on the black hole horizon, and shows that complete information about their quantum state is stored on a holographic plate at the future boundary of the horizon. Charge conservation is used to give an infinite number of exact relations between the evaporation products of black holes which have different soft hair but are otherwise identical. It is further argued that soft hair which is spatially localized to much less than a Planck length cannot be excited in a physically realizable process, giving an effective number of soft degrees of freedom proportional to the horizon area in Planck units.",
"title": ""
},
{
"docid": "6514ddb39c465a8ca207e24e60071e7f",
"text": "The psychometric properties and clinical utility of the Separation Anxiety Avoidance Inventory, child and parent version (SAAI-C/P) were examined in two studies. The aim of the SAAI, a self- and parent-report measure, is to evaluate the avoidance relating to separation anxiety disorder (SAD) situations. In the first study, a school sample of 384 children and their parents (n = 279) participated. In the second study, 102 children with SAD and 35 children with other anxiety disorders (AD) were investigated. In addition, 93 parents of children with SAD, and 35 parents of children with other AD participated. A two-factor structure was confirmed by confirmatory factor analysis. The SAAI-C and SAAI-P demonstrated good internal consistency, test-retest reliability, as well as construct and discriminant validity. Furthermore, the SAAI was sensitive to treatment change. The parent-child agreement was substantial. Overall, these results provide support for the use of the SAAI-C/P version in clinical and research settings.",
"title": ""
},
{
"docid": "ad3147f3a633ec8612dc25dfde4a4f0c",
"text": "A half-bridge integrated zero-voltage-switching (ZVS) full-bridge converter with reduced conduction loss for battery on-board chargers in electric vehicles (EVs) or plug-in hybrid electric vehicles (PHEVs) is proposed in this paper. The proposed converter features a reduction in primary-conduction loss and a lower secondary-voltage stress. In addition, the proposed converter has the most favorable characteristics as battery chargers as follows: a full ZVS capability and a significantly reduced output filter size due to the improved output waveform. In this paper, the circuit configuration, operation principle, and relevant analysis results of the proposed converter are described, followed by the experimental results on a prototype converter realized with a scale-downed 2-kW battery charger for EVs or PHEVs. The experimental results validate the theoretical analysis and show the effectiveness of the proposed converter as battery on-board chargers for EVs or PHEVs.",
"title": ""
},
{
"docid": "bad98c6d356f2dd49ec50365276f0247",
"text": "In this paper we investigate the co-authorship graph obtained from all papers published at SIGMOD between 1975 and 2002. We find some interesting facts, for instance, the identity of the authors who, on average, are \"closest\" to all other authors at a given time. We also show that SIGMOD's co-authorship graph is yet another example of a small world---a graph topology which has received a lot of attention recently. A companion web site for this paper can be found at http://db.cs.ualberta.ca/coauthorship.",
"title": ""
},
{
"docid": "a4aab340255c068137d3b3a1daaf97b5",
"text": "We present here SEMILAR, a SEMantic simILARity toolkit. SEMILAR implements a number of algorithms for assessing the semantic similarity between two texts. It is available as a Java library and as a Java standalone application offering GUI-based access to the implemented semantic similarity methods. Furthermore, it offers facilities for manual semantic similarity annotation by experts through its component SEMILAT (a SEMantic simILarity Annotation Tool).",
"title": ""
},
{
"docid": "1e46143d47f5f221094d0bb09505be80",
"text": "Clinical Scenario: Patients who experience prolonged concussion symptoms can be diagnosed with postconcussion syndrome (PCS) when those symptoms persist longer than 4 weeks. Aerobic exercise protocols have been shown to be effective in improving physical and mental aspects of health. Emerging research suggests that aerobic exercise may be useful as a treatment for PCS, where exercise allows patients to feel less isolated and more active during the recovery process.\n\n\nCLINICAL QUESTION\nIs aerobic exercise more beneficial in reducing symptoms than current standard care in patients with prolonged symptoms or PCS lasting longer than 4 weeks? Summary of Key Findings: After a thorough literature search, 4 studies relevant to the clinical question were selected. Of the 4 studies, 1 study was a randomized control trial and 3 studies were case series. All 4 studies investigated aerobic exercise protocol as treatment for PCS. Three studies demonstrated a greater rate of symptom improvement from baseline assessment to follow-up after a controlled subsymptomatic aerobic exercise program. One study showed a decrease in symptoms in the aerobic exercise group compared with the full-body stretching group. Clinical Bottom Line: There is moderate evidence to support subsymptomatic aerobic exercise as a treatment of PCS; therefore, it should be considered as a clinical option for reducing PCS and prolonged concussion symptoms. A previously validated protocol, such as the Buffalo Concussion Treadmill test, Balke protocol, or rating of perceived exertion, as mentioned in this critically appraised topic, should be used to measure baseline values and treatment progression. Strength of Recommendation: Level C evidence exists that the aerobic exercise protocol is more effective than the current standard of care in treating PCS.",
"title": ""
},
{
"docid": "5c97711d149d6744e3ea6d070016cd39",
"text": "This paper presents a clock generator for a MIPI M-PHY serial link transmitter, which includes an ADPLL, a digitally controlled oscillator (DCO), a programmable multiplier, and the actual serial driver. The paper focuses on the design of a DCO and how to enhance the frequency resolution to diminish the quantization noise introduced by the frequency discretization. As a result, a 17-kHz DCO frequency tuning resolution is demonstrated. Furthermore, implementation details of a low-power programmable 1-to-2-or-4 frequency multiplier are elaborated. The design has been implemented in a 40-nm CMOS process. The measurement results verify that the circuit provides the MIPI clock data rates from 1.248 GHz to 5.83 GHz. The DCO and multiplier unit dissipates a maximum of 3.9 mW from a 1.1 V supply and covers a small die area of 0.012 mm2.",
"title": ""
},
{
"docid": "9a98e97bb786a0c57a68e4cf8e4fb7a8",
"text": "The application of frequent patterns in classification has demonstrated its power in recent studies. It often adopts a two-step approach: frequent pattern (or classification rule) mining followed by feature selection (or rule ranking). However, this two-step process could be computationally expensive, especially when the problem scale is large or the minimum support is low. It was observed that frequent pattern mining usually produces a huge number of \"patterns\" that could not only slow down the mining process but also make feature selection hard to complete. In this paper, we propose a direct discriminative pattern mining approach, DDPMine, to tackle the efficiency issue arising from the two-step approach. DDPMine performs a branch-and-bound search for directly mining discriminative patterns without generating the complete pattern set. Instead of selecting best patterns in a batch, we introduce a \"feature-centered\" mining approach that generates discriminative patterns sequentially on a progressively shrinking FP-tree by incrementally eliminating training instances. The instance elimination effectively reduces the problem size iteratively and expedites the mining process. Empirical results show that DDPMine achieves orders of magnitude speedup without any downgrade of classification accuracy. It outperforms the state-of-the-art associative classification methods in terms of both accuracy and efficiency.",
"title": ""
},
{
"docid": "9809521909e01140c367dbfbf3a4aacd",
"text": "Understanding how housing values evolve over time is important to policy makers, consumers and real estate professionals. Existing methods for constructing housing indices are computed at a coarse spatial granularity, such as metropolitan regions, which can mask or distort price dynamics apparent in local markets, such as neighborhoods and census tracts. A challenge in moving to estimates at, for example, the census tract level is the scarcity of spatiotemporally localized house sales observations. Our work aims to address this challenge by leveraging observations from multiple census tracts discovered to have correlated valuation dynamics. Our proposed Bayesian nonparametric approach builds on the framework of latent factor models to enable a flexible, data-driven method for inferring the clustering of correlated census tracts. We explore methods for scalability and parallelizability of computations, yielding a housing valuation index at the level of census tract rather than zip code, and on a monthly basis rather than quarterly. Our analysis is provided on a large Seattle metropolitan housing dataset.",
"title": ""
},
{
"docid": "a0f8af71421d484cbebb550a0bf59a6d",
"text": "researchers and practitioners doing work in these three related areas. Risk management, fraud detection, and intrusion detection all involve monitoring the behavior of populations of users (or their accounts) to estimate, plan for, avoid, or detect risk. In his paper, Til Schuermann (Oliver, Wyman, and Company) categorizes risk into market risk, credit risk, and operating risk (or fraud). Similarly, Barry Glasgow (Metropolitan Life Insurance Co.) discusses inherent risk versus fraud. This workshop focused primarily on what might loosely be termed “improper behavior,” which includes fraud, intrusion, delinquency, and account defaulting. However, Glasgow does discuss the estimation of “inherent risk,” which is the bread and butter of insurance firms. Problems of predicting, preventing, and detecting improper behavior share characteristics that complicate the application of existing AI and machine-learning technologies. In particular, these problems often have or require more than one of the following that complicate the technical problem of automatically learning predictive models: large volumes of (historical) data, highly skewed distributions (“improper behavior” occurs far less frequently than “proper behavior”), changing distributions (behaviors change over time), widely varying error costs (in certain contexts, false positive errors are far more costly than false negatives), costs that change over time, adaptation of undesirable behavior to detection techniques, changing patterns of legitimate behavior, the trad■ The 1997 AAAI Workshop on AI Approaches to Fraud Detection and Risk Management brought together over 50 researchers and practitioners to discuss problems of fraud detection, computer intrusion detection, and risk scoring. This article presents highlights, including discussions of problematic issues that are common to these application domains, and proposed solutions that apply a variety of AI techniques.",
"title": ""
},
{
"docid": "4765cc56ea91dc8835be233bc227ec62",
"text": "Recognizing plants is a vital problem especially for biologists, chemists, and environmentalists. Plant recognition can be performed by human experts manually but it is a time consuming and low-efficiency process. Automation of plant recognition is an important process for the fields working with plants. This paper presents an approach for plant recognition using leaf images. Shape and color features extracted from leaf images are used with k-Nearest Neighbor, Support Vector Machines, Naive Bayes, and Random Forest classification algorithms to recognize plant types. The presented approach is tested on 1897 leaf images and 32 kinds of leaves. The results demonstrated that success rate of plant recognition can be improved up to 96% with Random Forest method when both shape and color features are used.",
"title": ""
},
{
"docid": "44c0da7556c3fd5faacc7faf0d3692cf",
"text": "The study examined the etiology of individual differences in early drawing and of its longitudinal association with school mathematics. Participants (N = 14,760), members of the Twins Early Development Study, were assessed on their ability to draw a human figure, including number of features, symmetry, and proportionality. Human figure drawing was moderately stable across 6 months (average r = .40). Individual differences in drawing at age 4½ were influenced by genetic (.21), shared environmental (.30), and nonshared environmental (.49) factors. Drawing was related to later (age 12) mathematical ability (average r = .24). This association was explained by genetic and shared environmental factors that also influenced general intelligence. Some genetic factors, unrelated to intelligence, also contributed to individual differences in drawing.",
"title": ""
}
] | scidocsrr |
9432e1f552681e034a3e8875c681fa59 | A Retrieve-and-Edit Framework for Predicting Structured Outputs | [
{
"docid": "8ac8ad61dc5357f3dc3ab1020db8bada",
"text": "We show how to learn many layers of features on color images and we use these features to initialize deep autoencoders. We then use the autoencoders to map images to short binary codes. Using semantic hashing [6], 28-bit codes can be used to retrieve images that are similar to a query image in a time that is independent of the size of the database. This extremely fast retrieval makes it possible to search using multiple di erent transformations of the query image. 256-bit binary codes allow much more accurate matching and can be used to prune the set of images found using the 28-bit codes.",
"title": ""
},
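The passage above describes retrieval with short binary codes produced by deep autoencoders. The sketch below is an editorial illustration, not part of the dataset entry or the cited paper: it assumes the 28-bit and 256-bit codes have already been produced (e.g., by thresholding an autoencoder's code layer), and the bucket-probing radius and names are illustrative choices.

```python
# Minimal sketch of semantic-hashing-style retrieval: 28-bit codes address buckets,
# 256-bit codes rerank candidates by Hamming distance.
from itertools import combinations

def hamming(a: int, b: int) -> int:
    """Hamming distance between two integer-packed binary codes."""
    return bin(a ^ b).count("1")

def build_index(codes_28bit):
    """Bucket item ids by their exact 28-bit code (the code acts as an address)."""
    index = {}
    for item_id, code in enumerate(codes_28bit):
        index.setdefault(code, []).append(item_id)
    return index

def probe_codes(query_28: int, n_bits: int = 28, radius: int = 2):
    """Enumerate every code within the given Hamming radius of the query.
    The number of probes depends only on n_bits and radius, not on database size."""
    for r in range(radius + 1):
        for positions in combinations(range(n_bits), r):
            code = query_28
            for p in positions:
                code ^= 1 << p
            yield code

def search(query_28, query_256, index, codes_256, radius=2, top_k=5):
    """Probe nearby 28-bit buckets, then rerank the candidates with 256-bit codes."""
    candidates = []
    for code in probe_codes(query_28, radius=radius):
        candidates.extend(index.get(code, []))
    candidates.sort(key=lambda i: hamming(codes_256[i], query_256))
    return candidates[:top_k]
```

The key design point, as in the abstract, is that the coarse codes give lookup cost independent of database size while the longer codes restore matching accuracy on the short candidate list.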
{
"docid": "121daac04555fd294eef0af9d0fb2185",
"text": "In this paper, we apply a general deep learning (DL) framework for the answer selection task, which does not depend on manually defined features or linguistic tools. The basic framework is to build the embeddings of questions and answers based on bidirectional long short-term memory (biLSTM) models, and measure their closeness by cosine similarity. We further extend this basic model in two directions. One direction is to define a more composite representation for questions and answers by combining convolutional neural network with the basic framework. The other direction is to utilize a simple but efficient attention mechanism in order to generate the answer representation according to the question context. Several variations of models are provided. The models are examined by two datasets, including TREC-QA and InsuranceQA. Experimental results demonstrate that the proposed models substantially outperform several strong baselines.",
"title": ""
},
{
"docid": "1a6ece40fa87e787f218902eba9b89f7",
"text": "Learning a similarity function between pairs of objects is at the core of learning to rank approaches. In information retrieval tasks we typically deal with query-document pairs, in question answering -- question-answer pairs. However, before learning can take place, such pairs needs to be mapped from the original space of symbolic words into some feature space encoding various aspects of their relatedness, e.g. lexical, syntactic and semantic. Feature engineering is often a laborious task and may require external knowledge sources that are not always available or difficult to obtain. Recently, deep learning approaches have gained a lot of attention from the research community and industry for their ability to automatically learn optimal feature representation for a given task, while claiming state-of-the-art performance in many tasks in computer vision, speech recognition and natural language processing. In this paper, we present a convolutional neural network architecture for reranking pairs of short texts, where we learn the optimal representation of text pairs and a similarity function to relate them in a supervised way from the available training data. Our network takes only words in the input, thus requiring minimal preprocessing. In particular, we consider the task of reranking short text pairs where elements of the pair are sentences. We test our deep learning system on two popular retrieval tasks from TREC: Question Answering and Microblog Retrieval. Our model demonstrates strong performance on the first task beating previous state-of-the-art systems by about 3\\% absolute points in both MAP and MRR and shows comparable results on tweet reranking, while enjoying the benefits of no manual feature engineering and no additional syntactic parsers.",
"title": ""
}
] | [
{
"docid": "2cddde920b40a245a5e1b4b1abb2e92b",
"text": "The aim of this research was to understand what affects people's privacy preferences in smartphone apps. We ran a four-week study in the wild with 34 participants. Participants were asked to answer questions, which were used to gather their personal context and to measure their privacy preferences by varying app name and purpose of data collection. Our results show that participants shared the most when no information about data access or purpose was given, and shared the least when both of these details were specified. When just one of either purpose or the requesting app was shown, participants shared less when just the purpose was specified than when just the app name was given. We found that the purpose for data access was the predominant factor affecting users' choices. In our study the purpose condition vary from being not specified, to vague to be very specific. Participants were more willing to disclose data when no purpose was specified. When a vague purpose was shown, participants became more privacy-aware and were less willing to disclose their information. When specific purposes were shown participants were more willing to disclose when the purpose for requesting the information appeared to be beneficial to them, and shared the least when the purpose for data access was solely beneficial to developers.",
"title": ""
},
{
"docid": "38cbdd5d5cea74dfe381547dee53d0aa",
"text": "Type confusion, often combined with use-after-free, is the main attack vector to compromise modern C++ software like browsers or virtual machines. Typecasting is a core principle that enables modularity in C++. For performance, most typecasts are only checked statically, i.e., the check only tests if a cast is allowed for the given type hierarchy, ignoring the actual runtime type of the object. Using an object of an incompatible base type instead of a derived type results in type confusion. Attackers abuse such type confusion issues to attack popular software products including Adobe Flash, PHP, Google Chrome, or Firefox. We propose to make all type checks explicit, replacing static checks with full runtime type checks. To minimize the performance impact of our mechanism HexType, we develop both low-overhead data structures and compiler optimizations. To maximize detection coverage, we handle specific object allocation patterns, e.g., placement new or reinterpret_cast which are not handled by other mechanisms. Our prototype results show that, compared to prior work, HexType has at least 1.1 -- 6.1 times higher coverage on Firefox benchmarks. For SPEC CPU2006 benchmarks with overhead, we show a 2 -- 33.4 times reduction in overhead. In addition, HexType discovered 4 new type confusion bugs in Qt and Apache Xerces-C++.",
"title": ""
},
{
"docid": "a93969b08efbc81c80129790d93e39de",
"text": "Text simplification aims to rewrite text into simpler versions, and thus make information accessible to a broader audience. Most previous work simplifies sentences using handcrafted rules aimed at splitting long sentences, or substitutes difficult words using a predefined dictionary. This paper presents a datadriven model based on quasi-synchronous grammar, a formalism that can naturally capture structural mismatches and complex rewrite operations. We describe how such a grammar can be induced from Wikipedia and propose an integer linear programming model for selecting the most appropriate simplification from the space of possible rewrites generated by the grammar. We show experimentally that our method creates simplifications that significantly reduce the reading difficulty of the input, while maintaining grammaticality and preserving its meaning.",
"title": ""
},
{
"docid": "94a35547a45c06a90f5f50246968b77e",
"text": "In this paper we present a process called color transfer which can borrow one image's color characteristics from another. Recently Reinhard and his colleagues reported a pioneering work of color transfer. Their technology can produce very believable results, but has to transform pixel values from RGB to lαβ. Inspired by their work, we advise an approach which can directly deal with the color transfer in any 3D space.From the view of statistics, we consider pixel's value as a three-dimension stochastic variable and an image as a set of samples, so the correlations between three components can be measured by covariance. Our method imports covariance between three components of pixel values while calculate the mean along each of the three axes. Then we decompose the covariance matrix using SVD algorithm and get a rotation matrix. Finally we can scale, rotate and shift pixel data of target image to fit data points' cluster of source image in the current color space and get resultant image which takes on source image's look and feel. Besides the global processing, a swatch-based method is introduced in order to manipulate images' color more elaborately. Experimental results confirm the validity and usefulness of our method.",
"title": ""
},
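The passage above outlines a mean-and-covariance matching color transfer using SVD. The sketch below is an editorial illustration, not part of the dataset entry: it shows one common formulation of such a linear transfer, and the exact composition of rotation and scaling in the cited paper may differ; the uint8 RGB assumption and clipping are also illustrative choices.

```python
# Minimal sketch of SVD-based linear colour transfer: the target image's pixel
# cluster is rotated, rescaled and shifted to match the source image's statistics.
import numpy as np

def stats(img: np.ndarray):
    """Mean (3,) and covariance (3, 3) of an H x W x 3 image."""
    px = img.reshape(-1, 3).astype(np.float64)
    return px.mean(axis=0), np.cov(px, rowvar=False)

def color_transfer(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Recolour `target` so its pixel distribution matches that of `source`."""
    mu_s, cov_s = stats(source)
    mu_t, cov_t = stats(target)

    # SVD of each covariance gives a rotation (U) and per-axis variances (S).
    U_s, S_s, _ = np.linalg.svd(cov_s)
    U_t, S_t, _ = np.linalg.svd(cov_t)

    # Rotate into the target's principal axes, rescale axis by axis,
    # then rotate into the source's principal axes (whiten, then colour).
    eps = 1e-12
    A = U_s @ np.diag(np.sqrt(S_s)) @ np.diag(1.0 / np.sqrt(S_t + eps)) @ U_t.T

    px = target.reshape(-1, 3).astype(np.float64)
    out = (px - mu_t) @ A.T + mu_s
    return out.reshape(target.shape).clip(0, 255).astype(np.uint8)

# Usage: result = color_transfer(source_rgb, target_rgb) on uint8 arrays in any 3D colour space.
```

After the transform, the output's mean and covariance equal those of the source by construction, which is the statistical sense in which the target "takes on the source image's look and feel".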
{
"docid": "47fb3483c8f4a5c0284fec3d3a309c09",
"text": "The Knowledge Base Population (KBP) track at the Text Analysis Conference 2010 marks the second year of this important information extraction evaluation. This paper describes the design and implementation of LCC’s systems which participated in the tasks of Entity Linking, Slot Filling, and the new task of Surprise Slot Filling. For the entity linking task, our top score was achieved through a robust context modeling approach which incorporates topical evidence. For slot filling, we used the output of the entity linking system together with a combination of different types of relation extractors. For surprise slot filling, our customizable extraction system was extremely useful due to the time sensitive nature of the task.",
"title": ""
},
{
"docid": "ea33654bb04b06bae122fbded4b8df49",
"text": "The volume, veracity, variability, and velocity of data produced from the ever increasing network of sensors connected to Internet pose challenges for power management, scalability, and sustainability of cloud computing infrastructure. Increasing the data processing capability of edge computing devices at lower power requirements can reduce several overheads for cloud computing solutions. This paper provides the review of neuromorphic CMOS-memristive architectures that can be integrated into edge computing devices. We discuss why the neuromorphic architectures are useful for edge devices and show the advantages, drawbacks, and open problems in the field of neuromemristive circuits for edge computing.",
"title": ""
},
{
"docid": "8e1b10ebb48b86ce151ab44dc0473829",
"text": "─ Cuckoo Search (CS) is a new met heuristic algorithm. It is being used for solving optimization problem. It was developed in 2009 by XinShe Yang and Susah Deb. Uniqueness of this algorithm is the obligatory brood parasitism behavior of some cuckoo species along with the Levy Flight behavior of some birds and fruit flies. Cuckoo Hashing to Modified CS have also been discussed in this paper. CS is also validated using some test functions. After that CS performance is compared with those of GAs and PSO. It has been shown that CS is superior with respect to GAs and PSO. At last, the effect of the experimental results are discussed and proposed for future research. Index terms ─ Cuckoo search, Levy Flight, Obligatory brood parasitism, NP-hard problem, Markov Chain, Hill climbing, Heavy-tailed algorithm.",
"title": ""
},
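The passage above summarizes Cuckoo Search with Levy flights. The sketch below is an editorial illustration, not part of the dataset entry or the cited paper: the population size, abandonment probability pa, step size alpha, and the use of Mantegna's algorithm for Levy steps are illustrative defaults rather than the settings used by the original authors.

```python
# Minimal sketch of Cuckoo Search with Levy flights for continuous minimisation.
import numpy as np
from math import gamma, pi, sin

def levy_step(dim: int, beta: float = 1.5) -> np.ndarray:
    """Mantegna's algorithm for heavy-tailed (Levy-stable) step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0.0, sigma, dim)
    v = np.random.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, lo, hi, n_nests=15, pa=0.25, alpha=0.01, iters=1000):
    nests = np.random.uniform(lo, hi, (n_nests, dim))
    fitness = np.array([f(x) for x in nests])
    for _ in range(iters):
        best = nests[fitness.argmin()]
        # Generate a cuckoo by a Levy flight around a random nest, biased toward the best nest.
        i = np.random.randint(n_nests)
        new = np.clip(nests[i] + alpha * levy_step(dim) * (nests[i] - best), lo, hi)
        fn = f(new)
        j = np.random.randint(n_nests)            # compare against a randomly chosen nest
        if fn < fitness[j]:
            nests[j], fitness[j] = new, fn
        # Abandon a fraction pa of the worst nests and rebuild them at random locations.
        n_abandon = int(pa * n_nests)
        worst = fitness.argsort()[-n_abandon:]
        nests[worst] = np.random.uniform(lo, hi, (n_abandon, dim))
        fitness[worst] = [f(x) for x in nests[worst]]
    return nests[fitness.argmin()], fitness.min()

if __name__ == "__main__":
    # Example: minimise the sphere function in 5 dimensions.
    best_x, best_f = cuckoo_search(lambda x: float(np.sum(x ** 2)), 5, -5.0, 5.0)
    print(best_x, best_f)
```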
{
"docid": "bc85e28da375e2a38e06f0332a18aef0",
"text": "Background: Statistical reviews of the theories of reasoned action (TRA) and planned behavior (TPB) applied to exercise are limited by methodological issues including insufficient sample size and data to examine some moderator associations. Methods: We conducted a meta-analytic review of 111 TRA/TPB and exercise studies and examined the influences of five moderator variables. Results: We found that: a) exercise was most strongly associated with intention and perceived behavioral control; b) intention was most strongly associated with attitude; and c) intention predicted exercise behavior, and attitude and perceived behavioral control predicted intention. Also, the time interval between intention to behavior; scale correspondence; subject age; operationalization of subjective norm, intention, and perceived behavioral control; and publication status moderated the size of the effect. Conclusions: The TRA/TPB effectively explained exercise intention and behavior and moderators of this relationship. Researchers and practitioners are more equipped to design effective interventions by understanding the TRA/TPB constructs.",
"title": ""
},
{
"docid": "499a37563d171054ad0b0d6b8f7007bf",
"text": "For cold-start recommendation, it is important to rapidly profile new users and generate a good initial set of recommendations through an interview process --- users should be queried adaptively in a sequential fashion, and multiple items should be offered for opinion solicitation at each trial. In this work, we propose a novel algorithm that learns to conduct the interview process guided by a decision tree with multiple questions at each split. The splits, represented as sparse weight vectors, are learned through an L_1-constrained optimization framework. The users are directed to child nodes according to the inner product of their responses and the corresponding weight vector. More importantly, to account for the variety of responses coming to a node, a linear regressor is learned within each node using all the previously obtained answers as input to predict item ratings. A user study, preliminary but first in its kind in cold-start recommendation, is conducted to explore the efficient number and format of questions being asked in a recommendation survey to minimize user cognitive efforts. Quantitative experimental validations also show that the proposed algorithm outperforms state-of-the-art approaches in terms of both the prediction accuracy and user cognitive efforts.",
"title": ""
},
{
"docid": "aee91ee5d4cbf51d9ce1344be4e5448c",
"text": "Deep generative models have achieved impressive success in recent years. Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs), as powerful frameworks for deep generative model learning, have largely been considered as two distinct paradigms and received extensive independent studies respectively. This paper aims to establish formal connections between GANs and VAEs through a new formulation of them. We interpret sample generation in GANs as performing posterior inference, and show that GANs and VAEs involve minimizing KL divergences of respective posterior and inference distributions with opposite directions, extending the two learning phases of classic wake-sleep algorithm, respectively. The unified view provides a powerful tool to analyze a diverse set of existing model variants, and enables to transfer techniques across research lines in a principled way. For example, we apply the importance weighting method in VAE literatures for improved GAN learning, and enhance VAEs with an adversarial mechanism that leverages generated samples. Experiments show generality and effectiveness of the transfered techniques.",
"title": ""
},
{
"docid": "5e503aaee94e2dc58f9311959d5a142e",
"text": "The use of the fast Fourier transform in power spectrum analysis is described. Principal advantages of this method are a reduction in the number of computations and in required core storage, and convenient application in nonstationarity tests. The method involves sectioning the record and averaging modified periodograms of the sections. T INTRODLCTION HIS PAPER outlines a method for the application of the fast Fourier transform algorithm to the estimation of power spectra, which involves sectioning the record, taking modified periodograms of these sections, and averaging these modified periodo-grams. In many instances this method involves fewer computations than other methods. Moreover, it involves the transformation of sequences which are shorter than the whole record which is an advantage when computations are to be performed on a machine with limited core storage. Finally, it directly yields a potential resolution in the time dimension which is useful for testing and measuring nonstationarity. As will be pointed out, it is closely related to the method of complex demodulation described Let X(j), j= 0, N-1 be a sample from a stationary , second-order stochastic sequence. Assume for simplicity that E(X) 0. Let X(j) have spectral density Pcf), I f \\ 5%. We take segments, possibly overlapping, of length L with the starting points of these segments D units apart. Let X,(j),j=O, L 1 be the first such segment. Then Xdj) X($ and finally X&) X(j+ (K 1)D) j 0, ,L-1. We suppose we have K such segments; Xl(j), X,($, and that they cover the entire record, Le., that (K-1)DfL N. This segmenting is illustrated in Fig. 1. The method of estimation is as follows. For each segment of length L we calculate a modified periodo-gram. That is, we select a data window W(j), j= 0, L-1, and form the sequences Xl(j)W(j), X,(j) W(j). We then take the finite Fourier transforms A1(n), AK(~) of these sequences. Here ~k(n) xk(j) w(j)e-z~cijnlL 1 L-1 L j-0 and i= Finally, we obtain the K modified periodograms L U Ik(fn) I Ah(%) k 1, 2, K, where f n 0 , o-,L/2 n \" L and 1 Wyj). L j=o The spectral estimate is the average of these periodo",
"title": ""
},
{
"docid": "7f6de1ca650840d1a4fe5dcd8d97541a",
"text": "While child and adolescent physicians are familiar with the treatment of attention-deficit/hyperac-tivity disorder (ADHD), many adult physicians have had little experience with the disorder. It is difficult to develop clinical skills in the management of residual adult manifestations of developmental disorders without clinical experience with their presentation in childhood. Adult patients are increasingly seeking treatment for the symptoms of ADHD, and physicians need practice guidelines. Adult ADHD often presents differently from childhood ADHD. Because adult ADHD can be comorbid with other disorders and has symptoms similar to those of other disorders, it is important to understand differential diagnoses. Physicians should work with patients to provide feedback about their symptoms, to educate them about ADHD, and to set treatment goals. Treatment for ADHD in adults should include a medication trial, restructuring of the patient's environment to make it more compatible with the symptoms of ADHD, and ongoing supportive management to address any residual impairment and to facilitate functional and developmental improvements.",
"title": ""
},
{
"docid": "c718a2f9eb395e3b4a27ddf3208c4233",
"text": "Our objective is to efficiently and accurately estimate the upper body pose of humans in gesture videos. To this end, we build on the recent successful applications of deep convolutional neural networks (ConvNets). Our novelties are: (i) our method is the first to our knowledge to use ConvNets for estimating human pose in videos; (ii) a new network that exploits temporal information from multiple frames, leading to better performance; (iii) showing that pre-segmenting the foreground of the video improves performance; and (iv) demonstrating that even without foreground segmentations, the network learns to abstract away from the background and can estimate the pose even in the presence of a complex, varying background. We evaluate our method on the BBC TV Signing dataset and show that our pose predictions are significantly better, and an order of magnitude faster to compute, than the state of the art [3].",
"title": ""
},
{
"docid": "6b5bde39af1260effa0587d8c6afa418",
"text": "This survey highlights the major issues concerning privacy and security in online social networks. Firstly, we discuss research that aims to protect user data from the various attack vantage points including other users, advertisers, third party application developers, and the online social network provider itself. Next we cover social network inference of user attributes, locating hubs, and link prediction. Because online social networks are so saturated with sensitive information, network inference plays a major privacy role. As a response to the issues brought forth by client-server architectures, distributed social networks are discussed. We then cover the challenges that providers face in maintaining the proper operation of an online social network including minimizing spam messages, and reducing the number of sybil accounts. Finally, we present research in anonymizing social network data. This area is of particular interest in order to continue research in this field both in academia and in industry.",
"title": ""
},
{
"docid": "f5f56d680fbecb94a08d9b8e5925228f",
"text": "Semantic word embeddings represent the meaning of a word via a vector, and are created by diverse methods. Many use nonlinear operations on co-occurrence statistics, and have hand-tuned hyperparameters and reweighting methods. This paper proposes a new generative model, a dynamic version of the log-linear topic model of Mnih and Hinton (2007). The methodological novelty is to use the prior to compute closed form expressions for word statistics. This provides a theoretical justification for nonlinear models like PMI, word2vec, and GloVe, as well as some hyperparameter choices. It also helps explain why low-dimensional semantic embeddings contain linear algebraic structure that allows solution of word analogies, as shown by Mikolov et al. (2013a) and many subsequent papers. Experimental support is provided for the generative model assumptions, the most important of which is that latent word vectors are fairly uniformly dispersed in space.",
"title": ""
},
{
"docid": "fee78b996d88584499f342f7da89addf",
"text": "It has become standard for search engines to augment result lists with document summaries. Each document summary consists of a title, abstract, and a URL. In this work, we focus on the task of selecting relevant sentences for inclusion in the abstract. In particular, we investigate how machine learning-based approaches can effectively be applied to the problem. We analyze and evaluate several learning to rank approaches, such as ranking support vector machines (SVMs), support vector regression (SVR), and gradient boosted decision trees (GBDTs). Our work is the first to evaluate SVR and GBDTs for the sentence selection task. Using standard TREC test collections, we rigorously evaluate various aspects of the sentence selection problem. Our results show that the effectiveness of the machine learning approaches varies across collections with different characteristics. Furthermore, the results show that GBDTs provide a robust and powerful framework for the sentence selection task and significantly outperform SVR and ranking SVMs on several data sets.",
"title": ""
},
{
"docid": "ea5697d417fe154be77d941c19d8a86e",
"text": "The foundations of functional programming languages are examined from both historical and technical perspectives. Their evolution is traced through several critical periods: early work on lambda calculus and combinatory calculus, Lisp, Iswim, FP, ML, and modern functional languages such as Miranda1 and Haskell. The fundamental premises on which the functional programming methodology stands are critically analyzed with respect to philosophical, theoretical, and pragmatic concerns. Particular attention is paid to the main features that characterize modern functional languages: higher-order functions, lazy evaluation, equations and pattern matching, strong static typing and type inference, and data abstraction. In addition, current research areas—such as parallelism, nondeterminism, input/output, and state-oriented computations—are examined with the goal of predicting the future development and application of functional languages.",
"title": ""
},
{
"docid": "2ccae5b48fc5ac10f948b79fc4fb6ff3",
"text": "Hierarchical attention networks have recently achieved remarkable performance for document classification in a given language. However, when multilingual document collections are considered, training such models separately for each language entails linear parameter growth and lack of cross-language transfer. Learning a single multilingual model with fewer parameters is therefore a challenging but potentially beneficial objective. To this end, we propose multilingual hierarchical attention networks for learning document structures, with shared encoders and/or shared attention mechanisms across languages, using multi-task learning and an aligned semantic space as input. We evaluate the proposed models on multilingual document classification with disjoint label sets, on a large dataset which we provide, with 600k news documents in 8 languages, and 5k labels. The multilingual models outperform monolingual ones in low-resource as well as full-resource settings, and use fewer parameters, thus confirming their computational efficiency and the utility of cross-language transfer.",
"title": ""
},
{
"docid": "d464711e6e07b61896ba6efe2bbfa5e4",
"text": "This paper presents a simple model for body-shadowing in off-body and body-to-body channels. The model is based on a body shadowing pattern associated with the on-body antenna, represented by a cosine function whose amplitude parameter is calculated from measurements. This parameter, i.e the maximum body-shadowing loss, is found to be linearly dependent on distance. The model was evaluated against a set of off-body channel measurements at 2.45 GHz in an indoor office environment, showing a good fit. The coefficient of determination obtained for the linear model of the maximum body-shadowing loss is greater than 0.6 in all considered scenarios, being higher than 0.8 for the ones with a static user.",
"title": ""
},
{
"docid": "610922e925ccb52308dcc68ca2e7bc6b",
"text": "In this brief, we introduce an architecture for accelerating convolution stages in convolutional neural networks (CNNs) implemented in embedded vision systems. The purpose of the architecture is to exploit the inherent parallelism in CNNs to reduce the required bandwidth, resource usage, and power consumption of highly computationally complex convolution operations as required by real-time embedded applications. We also implement the proposed architecture using fixed-point arithmetic on a ZC706 evaluation board that features a Xilinx Zynq-7000 system on-chip, where the embedded ARM processor with high clocking speed is used as the main controller to increase the flexibility and speed. The proposed architecture runs under a frequency of 150 MHz, which leads to 19.2 Giga multiply accumulation operations per second while consuming less than 10 W in power. This is done using only 391 DSP48 modules, which shows significant utilization improvement compared to the state-of-the-art architectures.",
"title": ""
}
] | scidocsrr |
ef31e3bb3c357c2731f139175f9f9126 | An active compliance controller for quadruped trotting | [
{
"docid": "a258c6b5abf18cb3880e4bc7a436c887",
"text": "We propose a reactive controller framework for robust quadrupedal locomotion, designed to cope with terrain irregularities, trajectory tracking errors and poor state estimation. The framework comprises two main modules: One related to the generation of elliptic trajectories for the feet and the other for control of the stability of the whole robot. We propose a task space CPG-based trajectory generation that can be modulated according to terrain irregularities and the posture of the robot trunk. To improve the robot's stability, we implemented a null space based attitude control for the trunk and a push recovery algorithm based on the concept of capture points. Simulations and experimental results on the hydraulically actuated quadruped robot HyQ will be presented to demonstrate the effectiveness of our framework.",
"title": ""
},
{
"docid": "1495ed50a24703566b2bda35d7ec4931",
"text": "This paper examines the passive dynamics of quadrupedal bounding. First, an unexpected difference between local and global behavior of the forward speed versus touchdown angle in the selfstabilized Spring Loaded Inverted Pendulum (SLIP) model is exposed and discussed. Next, the stability properties of a simplified sagittal plane model of our Scout II quadrupedal robot are investigated. Despite its simplicity, this model captures the targeted steady state behavior of Scout II without dependence on the fine details of the robot structure. Two variations of the bounding gait, which are observed experimentally in Scout II, are considered. Surprisingly, numerical return map studies reveal that passive generation of a large variety of cyclic bounding motion is possible. Most strikingly, local stability analysis shows that the dynamics of the open loop passive system alone can confer stability to the motion! These results can be used in developing a general control methodology for legged robots, resulting from the synthesis of feedforward and feedback models that take advantage of the mechanical sysPortions of this paper have previously appeared in conference publications Poulakakis, Papadopoulos, and Buehler (2003) and Poulakakis, Smith, and Buehler (2005b). The first and third authors were with the Centre for Intelligent Machines at McGill University when this work was performed. Address all correspondence related to this paper to the first author. The International Journal of Robotics Research Vol. 25, No. 7, July 2006, pp. 669-687 DOI: 10.1177/0278364906066768 ©2006 SAGE Publications Figures appear in color online: http://ijr.sagepub.com tem, and might explain the success of simple, open loop bounding controllers on our experimental robot. KEY WORDS—passive dynamics, bounding gait, dynamic running, quadrupedal robot",
"title": ""
},
{
"docid": "956ffd90cc922e77632b8f9f79f42a98",
"text": "Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism Amir jafari Nikos Tsagarakis Darwin G Caldwell Article information: To cite this document: Amir jafari Nikos Tsagarakis Darwin G Caldwell , (2015),\"Energy efficient actuators with adjustable stiffness: a review on AwAS, AwAS-II and CompACT VSA changing stiffness based on lever mechanism\", Industrial Robot: An International Journal, Vol. 42 Iss 3 pp. Permanent link to this document: http://dx.doi.org/10.1108/IR-12-2014-0433",
"title": ""
}
] | [
{
"docid": "3bc9e621a0cfa7b8791ae3fb94eff738",
"text": "This paper deals with environment perception for automobile applications. Environment perception comprises measuring the surrounding field with onboard sensors such as cameras, radar, lidars, etc., and signal processing to extract relevant information for the planned safety or assistance function. Relevant information is primarily supplied using two well-known methods, namely, object based and grid based. In the introduction, we discuss the advantages and disadvantages of the two methods and subsequently present an approach that combines the two methods to achieve better results. The first part outlines how measurements from stereo sensors can be mapped onto an occupancy grid using an appropriate inverse sensor model. We employ the Dempster-Shafer theory to describe the occupancy grid, which has certain advantages over Bayes' theorem. Furthermore, we generate clusters of grid cells that potentially belong to separate obstacles in the field. These clusters serve as input for an object-tracking framework implemented with an interacting multiple-model estimator. Thereby, moving objects in the field can be identified, and this, in turn, helps update the occupancy grid more effectively. The first experimental results are illustrated, and the next possible research intentions are also discussed.",
"title": ""
},
{
"docid": "78c89f8aec24989737575c10b6bbad90",
"text": "News topics, which are constructed from news stories using the techniques of Topic Detection and Tracking (TDT), bring convenience to users who intend to see what is going on through the Internet. However, it is almost impossible to view all the generated topics, because of the large amount. So it will be helpful if all topics are ranked and the top ones, which are both timely and important, can be viewed with high priority. Generally, topic ranking is determined by two primary factors. One is how frequently and recently a topic is reported by the media; the other is how much attention users pay to it. Both media focus and user attention varies as time goes on, so the effect of time on topic ranking has already been included. However, inconsistency exists between both factors. In this paper, an automatic online news topic ranking algorithm is proposed based on inconsistency analysis between media focus and user attention. News stories are organized into topics, which are ranked in terms of both media focus and user attention. Experiments performed on practical Web datasets show that the topic ranking result reflects the influence of time, the media and users. The main contributions of this paper are as follows. First, we present the quantitative measure of the inconsistency between media focus and user attention, which provides a basis for topic ranking and an experimental evidence to show that there is a gap between what the media provide and what users view. Second, to the best of our knowledge, it is the first attempt to synthesize the two factors into one algorithm for automatic online topic ranking.",
"title": ""
},
{
"docid": "7b44c4ec18d01f46fdd513780ba97963",
"text": "This paper presents a robust approach for road marking detection and recognition from images captured by an embedded camera mounted on a car. Our method is designed to cope with illumination changes, shadows, and harsh meteorological conditions. Furthermore, the algorithm can effectively group complex multi-symbol shapes into an individual road marking. For this purpose, the proposed technique relies on MSER features to obtain candidate regions which are further merged using density-based clustering. Finally, these regions of interest are recognized using machine learning approaches. Worth noting, the algorithm is versatile since it does not utilize any prior information about lane position or road space. The proposed method compares favorably to other existing works through a large number of experiments on an extensive road marking dataset.",
"title": ""
},
{
"docid": "7e422bc9e691d552543c245e7c154cbf",
"text": "Personality assessment and, specifically, the assessment of personality disorders have traditionally been indifferent to computational models. Computational personality is a new field that involves the automatic classification of individuals' personality traits that can be compared against gold-standard labels. In this context, we introduce a new vectorial semantics approach to personality assessment, which involves the construction of vectors representing personality dimensions and disorders, and the automatic measurements of the similarity between these vectors and texts written by human subjects. We evaluated our approach by using a corpus of 2468 essays written by students who were also assessed through the five-factor personality model. To validate our approach, we measured the similarity between the essays and the personality vectors to produce personality disorder scores. These scores and their correspondence with the subjects' classification of the five personality factors reproduce patterns well-documented in the psychological literature. In addition, we show that, based on the personality vectors, we can predict each of the five personality factors with high accuracy.",
"title": ""
},
{
"docid": "f6099a1e6641d0a93c764efef120dd53",
"text": "For the past two decades, the security community has been fighting malicious programs for Windows-based operating systems. However, the recent surge in adoption of embedded devices and the IoT revolution are rapidly changing the malware landscape. Embedded devices are profoundly different than traditional personal computers. In fact, while personal computers run predominantly on x86-flavored architectures, embedded systems rely on a variety of different architectures. In turn, this aspect causes a large number of these systems to run some variants of the Linux operating system, pushing malicious actors to give birth to \"\"Linux malware.\"\" To the best of our knowledge, there is currently no comprehensive study attempting to characterize, analyze, and understand Linux malware. The majority of resources on the topic are available as sparse reports often published as blog posts, while the few systematic studies focused on the analysis of specific families of malware (e.g., the Mirai botnet) mainly by looking at their network-level behavior, thus leaving the main challenges of analyzing Linux malware unaddressed. This work constitutes the first step towards filling this gap. After a systematic exploration of the challenges involved in the process, we present the design and implementation details of the first malware analysis pipeline specifically tailored for Linux malware. We then present the results of the first large-scale measurement study conducted on 10,548 malware samples (collected over a time frame of one year) documenting detailed statistics and insights that can help directing future work in the area.",
"title": ""
},
{
"docid": "abc48ae19e2ea1e1bb296ff0ccd492a2",
"text": "This paper reports the results achieved by Carnegie Mellon University on the Topic Detection and Tracking Project’s secondyear evaluation for the segmentation, detection, and tracking tasks. Additional post-evaluation improvements are also",
"title": ""
},
{
"docid": "62cf2ae97e48e6b57139f305d616ec1b",
"text": "Many analytics applications generate mixed workloads, i.e., workloads comprised of analytical tasks with different processing characteristics including data pre-processing, SQL, and iterative machine learning algorithms. Examples of such mixed workloads can be found in web data analysis, social media analysis, and graph analytics, where they are executed repetitively on large input datasets (e.g., Find the average user time spent on the top 10 most popular web pages on the UK domain web graph.). Scale-out processing engines satisfy the needs of these applications by distributing the data and the processing task efficiently among multiple workers that are first reserved and then used to execute the task in parallel on a cluster of machines. Finding the resource allocation that can complete the workload execution within a given time constraint, and optimizing cluster resource allocations among multiple analytical workloads motivates the need for estimating the runtime of the workload before its actual execution. Predicting runtime of analytical workloads is a challenging problem as runtime depends on a large number of factors that are hard to model a priori execution. These factors can be summarized as workload characteristics (data statistics and processing costs) , the execution configuration (deployment, resource allocation, and software settings), and the cost model that captures the interplay among all of the above parameters. While conventional cost models proposed in the context of query optimization can assess the relative order among alternative SQL query plans, they are not aimed to estimate absolute runtime. Additionally, conventional models are ill-equipped to estimate the runtime of iterative analytics that are executed repetitively until convergence and that of user defined data pre-processing operators which are not “owned” by the underlying data management system. This thesis demonstrates that runtime for data analytics can be predicted accurately by breaking the analytical tasks into multiple processing phases, collecting key input features during a reference execution on a sample of the dataset, and then using the features to build per-phase cost models. We develop prediction models for three categories of data analytics produced by social media applications: iterative machine learning, data pre-processing, and reporting SQL. The prediction framework for iterative analytics, PREDIcT, addresses the challenging problem of estimating the number of iterations, and per-iteration runtime for a class of iterative machine learning algorithms that are run repetitively until convergence. The hybrid prediction models we develop for data pre-processing tasks and for reporting SQL combine the benefits of analytical modeling with that of machine learning-based models. Through a",
"title": ""
},
{
"docid": "bfe76736623dfc3271be4856f5dc2eef",
"text": "Fact-related information contained in fictional narratives may induce substantial changes in readers’ real-world beliefs. Current models of persuasion through fiction assume that these effects occur because readers are psychologically transported into the fictional world of the narrative. Contrary to general dual-process models of persuasion, models of persuasion through fiction also imply that persuasive effects of fictional narratives are persistent and even increase over time (absolute sleeper effect). In an experiment designed to test this prediction, 81 participants read either a fictional story that contained true as well as false assertions about realworld topics or a control story. There were large short-term persuasive effects of false information, and these effects were even larger for a group with a two-week assessment delay. Belief certainty was weakened immediately after reading but returned to baseline level after two weeks, indicating that beliefs acquired by reading fictional narratives are integrated into realworld knowledge.",
"title": ""
},
{
"docid": "03c74ae78bfe862499c4cb1e18a58ae7",
"text": "Age-associated disease and disability are placing a growing burden on society. However, ageing does not affect people uniformly. Hence, markers of the underlying biological ageing process are needed to help identify people at increased risk of age-associated physical and cognitive impairments and ultimately, death. Here, we present such a biomarker, ‘brain-predicted age’, derived using structural neuroimaging. Brain-predicted age was calculated using machine-learning analysis, trained on neuroimaging data from a large healthy reference sample (N=2001), then tested in the Lothian Birth Cohort 1936 (N=669), to determine relationships with age-associated functional measures and mortality. Having a brain-predicted age indicative of an older-appearing brain was associated with: weaker grip strength, poorer lung function, slower walking speed, lower fluid intelligence, higher allostatic load and increased mortality risk. Furthermore, while combining brain-predicted age with grey matter and cerebrospinal fluid volumes (themselves strong predictors) not did improve mortality risk prediction, the combination of brain-predicted age and DNA-methylation-predicted age did. This indicates that neuroimaging and epigenetics measures of ageing can provide complementary data regarding health outcomes. Our study introduces a clinically-relevant neuroimaging ageing biomarker and demonstrates that combining distinct measurements of biological ageing further helps to determine risk of age-related deterioration and death.",
"title": ""
},
{
"docid": "29ce9730d55b55b84e195983a8506e5c",
"text": "In situ Raman spectroscopy is an extremely valuable technique for investigating fundamental reactions that occur inside lithium rechargeable batteries. However, specialized in situ Raman spectroelectrochemical cells must be constructed to perform these experiments. These cells are often quite different from the cells used in normal electrochemical investigations. More importantly, the number of cells is usually limited by construction costs; thus, routine usage of in situ Raman spectroscopy is hampered for most laboratories. This paper describes a modification to industrially available coin cells that facilitates routine in situ Raman spectroelectrochemical measurements of lithium batteries. To test this strategy, in situ Raman spectroelectrochemical measurements are performed on Li//V2O5 cells. Various phases of Li(x)V2O5 could be identified in the modified coin cells with Raman spectroscopy, and the electrochemical cycling performance between in situ and unmodified cells is nearly identical.",
"title": ""
},
{
"docid": "e244cbd076ea62b4d720378c2adf4438",
"text": "This paper introduces flash organizations: crowds structured like organizations to achieve complex and open-ended goals. Microtask workflows, the dominant crowdsourcing structures today, only enable goals that are so simple and modular that their path can be entirely pre-defined. We present a system that organizes crowd workers into computationally-represented structures inspired by those used in organizations - roles, teams, and hierarchies - which support emergent and adaptive coordination toward open-ended goals. Our system introduces two technical contributions: 1) encoding the crowd's division of labor into de-individualized roles, much as movie crews or disaster response teams use roles to support coordination between on-demand workers who have not worked together before; and 2) reconfiguring these structures through a model inspired by version control, enabling continuous adaptation of the work and the division of labor. We report a deployment in which flash organizations successfully carried out open-ended and complex goals previously out of reach for crowdsourcing, including product design, software development, and game production. This research demonstrates digitally networked organizations that flexibly assemble and reassemble themselves from a globally distributed online workforce to accomplish complex work.",
"title": ""
},
{
"docid": "8baddf0d82411d18a77be03759101c82",
"text": "Deep convolutional neural networks (DCNNs) have been successfully used in many computer vision tasks. Previous works on DCNN acceleration usually use a fixed computation pattern for diverse DCNN models, leading to imbalance between power efficiency and performance. We solve this problem by designing a DCNN acceleration architecture called deep neural architecture (DNA), with reconfigurable computation patterns for different models. The computation pattern comprises a data reuse pattern and a convolution mapping method. For massive and different layer sizes, DNA reconfigures its data paths to support a hybrid data reuse pattern, which reduces total energy consumption by 5.9~8.4 times over conventional methods. For various convolution parameters, DNA reconfigures its computing resources to support a highly scalable convolution mapping method, which obtains 93% computing resource utilization on modern DCNNs. Finally, a layer-based scheduling framework is proposed to balance DNA’s power efficiency and performance for different DCNNs. DNA is implemented in the area of 16 mm2 at 65 nm. On the benchmarks, it achieves 194.4 GOPS at 200 MHz and consumes only 479 mW. The system-level power efficiency is 152.9 GOPS/W (considering DRAM access power), which outperforms the state-of-the-art designs by one to two orders.",
"title": ""
},
{
"docid": "4def0dc478dfb5ddb5a0ec59ec7433f5",
"text": "A system that enables continuous slip compensation for a Mars rover has been designed, implemented, and field-tested. This system is composed of several components that allow the rover to accurately and continuously follow a designated path, compensate for slippage, and reach intended goals in high-slip environments. These components include: visual odometry, vehicle kinematics, a Kalman filter pose estimator, and a slip compensation/path follower. Visual odometry tracks distinctive scene features in stereo imagery to estimate rover motion between successively acquired stereo image pairs. The vehicle kinematics for a rocker-bogie suspension system estimates motion by measuring wheel rates, and rocker, bogie, and steering angles. The Kalman filter merges data from an inertial measurement unit (IMU) and visual odometry. This merged estimate is then compared to the kinematic estimate to determine how much slippage has occurred, taking into account estimate uncertainties. If slippage has occurred then a slip vector is calculated by differencing the current Kalman filter estimate from the kinematic estimate. This slip vector is then used to determine the necessary wheel velocities and steering angles to compensate for slip and follow the desired path.",
"title": ""
},
{
"docid": "29f8b647d8f8de484f2b8f164b9e5add",
"text": "is the latest release of a versatile and very well optimized package for molecular simulation. Much effort has been devoted to achieving extremely high performance on both workstations and parallel computers. The design includes an extraction of vi-rial and periodic boundary conditions from the loops over pairwise interactions, and special software routines to enable rapid calculation of x –1/2. Inner loops are generated automatically in C or Fortran at compile time, with optimizations adapted to each architecture. Assembly loops using SSE and 3DNow! Multimedia instructions are provided for x86 processors, resulting in exceptional performance on inexpensive PC workstations. The interface is simple and easy to use (no scripting language), based on standard command line arguments with self-explanatory functionality and integrated documentation. All binary files are independent of hardware endian and can be read by versions of GROMACS compiled using different floating-point precision. A large collection of flexible tools for trajectory analysis is included, with output in the form of finished Xmgr/Grace graphs. A basic trajectory viewer is included, and several external visualization tools can read the GROMACS trajectory format. Starting with version 3.0, GROMACS is available under the GNU General Public License from",
"title": ""
},
{
"docid": "528796e22fc248de78a91cc089467c04",
"text": "Automatic recognition of emotional states from human speech is a current research topic with a wide range. In this paper an attempt has been made to recognize and classify the speech emotion from three language databases, namely, Berlin, Japan and Thai emotion databases. Speech features consisting of Fundamental Frequency (F0), Energy, Zero Crossing Rate (ZCR), Linear Predictive Coding (LPC) and Mel Frequency Cepstral Coefficient (MFCC) from short-time wavelet signals are comprehensively investigated. In this regard, Support Vector Machines (SVM) is utilized as the classification model. Empirical experimentation shows that the combined features of F0, Energy and MFCC provide the highest accuracy on all databases provided using the linear kernel. It gives 89.80%, 93.57% and 98.00% classification accuracy for Berlin, Japan and Thai emotions databases, respectively.",
"title": ""
},
{
"docid": "88cb8c2f7f4fd5cdc95cc8e48faa3cb7",
"text": "Prediction or prognostication is at the core of modern evidence-based medicine. Prediction of overall mortality and cardiovascular disease can be improved by a systematic evaluation of measurements from large-scale epidemiological studies or by using nested sampling designs to discover new markers from omics technologies. In study I, we investigated if prediction measures such as calibration, discrimination and reclassification could be calculated within traditional sampling designs and which of these designs were the most efficient. We found that is possible to calculate prediction measures by using a proper weighting system and that a stratified casecohort design is a reasonable choice both in terms of efficiency and simplicity. In study II, we investigated the clinical utility of several genetic scores for incident coronary heart disease. We found that genetic information could be of clinical value in improving the allocation of patients to correct risk strata and that the assessment of a genetic risk score among intermediate risk subjects could help to prevent about one coronary heart disease event every 318 people screened. In study III, we explored the association between circulating metabolites and incident coronary heart disease. We found four new metabolites associated with coronary heart disease independently of established cardiovascular risk factors and with evidence of clinical utility. By using genetic information we determined a potential causal effect on coronary heart disease of one of these novel metabolites. In study IV, we compared a large number of demographics, health and lifestyle measurements for association with all-cause and cause-specific mortality. By ranking measurements in terms of their predictive abilities we could provide new insights about their relative importance, as well as reveal some unexpected associations. Moreover we developed and validated a prediction score for five-year mortality with good discrimination ability and calibrated it for the entire UK population. In conclusion, we applied a translational approach spanning from the discovery of novel biomarkers to their evaluation in terms of clinical utility. We combined this effort with methodological improvements aimed to expand prediction measures in settings that were not previously explored. We identified promising novel metabolomics markers for cardiovascular disease and supported the potential clinical utility of a genetic score in primary prevention. Our results might fuel future studies aimed to implement these findings in clinical practice.",
"title": ""
},
{
"docid": "5ee410ddc75170aa38c39281a8d86827",
"text": "Research in automotive safety leads to the conclusion that modern vehicle should utilize active and passive sensors for the recognition of the environment surrounding them. Thus, the development of tracking systems utilizing efficient state estimators is very important. In this case, problems such as moving platform carrying the sensor and maneuvering targets could introduce large errors in the state estimation and in some cases can lead to the divergence of the filter. In order to avoid sub-optimal performance, the unscented Kalman filter is chosen, while a new curvilinear model is applied which takes into account both the turn rate of the detected object and its tangential acceleration, leading to a more accurate modeling of its movement. The performance of the unscented filter using the proposed model in the case of automotive applications is proven to be superior compared to the performance of the extended and linear Kalman filter.",
"title": ""
},
{
"docid": "f47fcbd6412384b85ef458fd3e6b27f3",
"text": "In this paper, we consider positioning with observed-time-difference-of-arrival (OTDOA) for a device deployed in long-term-evolution (LTE) based narrow-band Internet-of-things (NB-IoT) systems. We propose an iterative expectation- maximization based successive interference cancellation (EM-SIC) algorithm to jointly consider estimations of residual frequency- offset (FO), fading-channel taps and time-of- arrival (ToA) of the first arrival-path for each of the detected cells. In order to design a low complexity ToA detector and also due to the limits of low-cost analog circuits, we assume an NB-IoT device working at a low-sampling rate such as 1.92 MHz or lower. The proposed EM-SIC algorithm comprises two stages to detect ToA, based on which OTDOA can be calculated. In a first stage, after running the EM-SIC block a predefined number of iterations, a coarse ToA is estimated for each of the detected cells. Then in a second stage, to improve the ToA resolution, a low-pass filter is utilized to interpolate the correlations of time-domain PRS signal evaluated at a low sampling-rate to a high sampling-rate such as 30.72 MHz. To keep low-complexity, only the correlations inside a small search window centered at the coarse ToA estimates are upsampled. Then, the refined ToAs are estimated based on upsampled correlations. If at least three cells are detected, with OTDOA and the locations of detected cell sites, the position of the NB-IoT device can be estimated. We show through numerical simulations that, the proposed EM-SIC based ToA detector is robust against impairments introduced by inter-cell interference, fading-channel and residual FO. Thus significant signal-to-noise (SNR) gains are obtained over traditional ToA detectors that do not consider these impairments when positioning a device.",
"title": ""
},
{
"docid": "36d7f776d7297f67a136825e9628effc",
"text": "Random walks are at the heart of many existing network embedding methods. However, such algorithms have many limitations that arise from the use of random walks, e.g., the features resulting from these methods are unable to transfer to new nodes and graphs as they are tied to vertex identity. In this work, we introduce the Role2Vec framework which uses the flexible notion of attributed random walks, and serves as a basis for generalizing existing methods such as DeepWalk, node2vec, and many others that leverage random walks. Our proposed framework enables these methods to be more widely applicable for both transductive and inductive learning as well as for use on graphs with attributes (if available). This is achieved by learning functions that generalize to new nodes and graphs. We show that our proposed framework is effective with an average AUC improvement of 16.55% while requiring on average 853x less space than existing methods on a variety of graphs.",
"title": ""
}
] | scidocsrr |
2c574cc023094e7773ecd17a6bb84cda | Parallelizing MCMC via Weierstrass Sampler | [
{
"docid": "20deb56f6d004a8e33d1e1a4f579c1ba",
"text": "Hamiltonian dynamics can be used to produce distant proposals for the Metropolis algorithm, thereby avoiding the slow exploration of the state space that results from the diffusive behaviour of simple random-walk proposals. Though originating in physics, Hamiltonian dynamics can be applied to most problems with continuous state spaces by simply introducing fictitious “momentum” variables. A key to its usefulness is that Hamiltonian dynamics preserves volume, and its trajectories can thus be used to define complex mappings without the need to account for a hard-to-compute Jacobian factor — a property that can be exactly maintained even when the dynamics is approximated by discretizing time. In this review, I discuss theoretical and practical aspects of Hamiltonian Monte Carlo, and present some of its variations, including using windows of states for deciding on acceptance or rejection, computing trajectories using fast approximations, tempering during the course of a trajectory to handle isolated modes, and short-cut methods that prevent useless trajectories from taking much computation time.",
"title": ""
}
] | [
{
"docid": "72b93e02049b837a7990225494883708",
"text": "Cloud computing is emerging as a major trend in the ICT industry. While most of the attention of the research community is focused on considering the perspective of the Cloud providers, offering mechanisms to support scaling of resources and interoperability and federation between Clouds, the perspective of developers and operators willing to choose the Cloud without being strictly bound to a specific solution is mostly neglected.\n We argue that Model-Driven Development can be helpful in this context as it would allow developers to design software systems in a cloud-agnostic way and to be supported by model transformation techniques into the process of instantiating the system into specific, possibly, multiple Clouds. The MODAClouds (MOdel-Driven Approach for the design and execution of applications on multiple Clouds) approach we present here is based on these principles and aims at supporting system developers and operators in exploiting multiple Clouds for the same system and in migrating (part of) their systems from Cloud to Cloud as needed. MODAClouds offers a quality-driven design, development and operation method and features a Decision Support System to enable risk analysis for the selection of Cloud providers and for the evaluation of the Cloud adoption impact on internal business processes. Furthermore, MODAClouds offers a run-time environment for observing the system under execution and for enabling a feedback loop with the design environment. This allows system developers to react to performance fluctuations and to re-deploy applications on different Clouds on the long term.",
"title": ""
},
{
"docid": "e118177a0fc9fad704b2be958b01a873",
"text": "Safety stories specify safety requirements, using the EARS (Easy Requirements Specification) format. Software practitioners can use them in agile projects at lower levels of safety criticality to deal effectively with safety concerns.",
"title": ""
},
{
"docid": "c08518b806c93dde1dd04fdf3c9c45bb",
"text": "Purpose – The objectives of this article are to develop a multiple-item scale for measuring e-service quality and to study the influence of perceived quality on consumer satisfaction levels and the level of web site loyalty. Design/methodology/approach – First, there is an explanation of the main attributes of the concepts examined, with special attention being paid to the multi-dimensional nature of the variables and the relationships between them. This is followed by an examination of the validation processes of the measuring instruments. Findings – The validation process of scales suggested that perceived quality is a multidimensional construct: web design, customer service, assurance and order management; that perceived quality influences on satisfaction; and that satisfaction influences on consumer loyalty. Moreover, no differences in these conclusions were observed if the total sample is divided between buyers and information searchers. Practical implications – First, the need to develop user-friendly web sites which ease consumer purchasing and searching, thus creating a suitable framework for the generation of higher satisfaction and loyalty levels. Second, the web site manager should enhance service loyalty, customer sensitivity, personalised service and a quick response to complaints. Third, the web site should uphold sufficient security levels in communications and meet data protection requirements regarding the privacy. Lastly, the need for correct product delivery and product manipulation or service is recommended. Originality/value – Most relevant studies about perceived quality in the internet have focused on web design aspects. Moreover, the existing literature regarding internet consumer behaviour has not fully analysed profits generated by higher perceived quality in terms of user satisfaction and loyalty.",
"title": ""
},
{
"docid": "a6ce059863bc504242dff00025791b01",
"text": "We examined allelic polymorphisms of the serotonin transporter (5-HTT) gene and antidepressant response to 6 weeks' treatment with the selective serotonin reuptake inhibitor (SSRI) drugs fluoxetine or paroxetine. We genotyped 120 patients and 252 normal controls, using polymerase chain reaction of genomic DNA with primers flanking the second intron and promoter regions of the 5-HTT gene. Diagnosis of depression was not associated with 5-HTT polymorphisms. Patients homozygous l/l in intron 2 or homozygous s/s in the promoter region showed better responses than all others (p < 0.0001, p = 0.0074, respectively). Lack of the l/l allele form in intron 2 most powerfully predicted non-response (83.3%). Response to SSRI drugs is related to allelic variation in the 5-HTT gene in depressed Korean patients.",
"title": ""
},
{
"docid": "d3f256c026125f98ccb09fd6403ee5a0",
"text": "Endocytic mechanisms control the lipid and protein composition of the plasma membrane, thereby regulating how cells interact with their environments. Here, we review what is known about mammalian endocytic mechanisms, with focus on the cellular proteins that control these events. We discuss the well-studied clathrin-mediated endocytic mechanisms and dissect endocytic pathways that proceed independently of clathrin. These clathrin-independent pathways include the CLIC/GEEC endocytic pathway, arf6-dependent endocytosis, flotillin-dependent endocytosis, macropinocytosis, circular doral ruffles, phagocytosis, and trans-endocytosis. We also critically review the role of caveolae and caveolin1 in endocytosis. We highlight the roles of lipids, membrane curvature-modulating proteins, small G proteins, actin, and dynamin in endocytic pathways. We discuss the functional relevance of distinct endocytic pathways and emphasize the importance of studying these pathways to understand human disease processes.",
"title": ""
},
{
"docid": "20df8d71b963a432f4a0ea5fc129463a",
"text": "This study provided a comparative analysis of three social network sites, the open-to-all Facebook, the professionally oriented LinkedIn and the exclusive, members-only ASmallWorld.The analysis focused on the underlying structure or architecture of these sites, on the premise that it may set the tone for particular types of interaction.Through this comparative examination, four themes emerged, highlighting the private/public balance present in each social networking site, styles of self-presentation in spaces privately public and publicly private, cultivation of taste performances as a mode of sociocultural identification and organization and the formation of tight or loose social settings. Facebook emerged as the architectural equivalent of a glasshouse, with a publicly open structure, looser behavioral norms and an abundance of tools that members use to leave cues for each other. LinkedIn and ASmallWorld produced tighter spaces, which were consistent with the taste ethos of each network and offered less room for spontaneous interaction and network generation.",
"title": ""
},
{
"docid": "dc6ee3d45fa76aafe45507b0778018d5",
"text": "Traditional endpoint protection will not address the looming cybersecurity crisis because it ignores the source of the problem--the vast online black market buried deep within the Internet.",
"title": ""
},
{
"docid": "c42edb326ec95c257b821cc617e174e6",
"text": "recommendation systems support users and developers of various computer and software systems to overcome information overload, perform information discovery tasks and approximate computation, among others. They have recently become popular and have attracted a wide variety of application scenarios from business process modelling to source code manipulation. Due to this wide variety of application domains, different approaches and metrics have been adopted for their evaluation. In this chapter, we review a range of evaluation metrics and measures as well as some approaches used for evaluating recommendation systems. The metrics presented in this chapter are grouped under sixteen different dimensions, e.g., correctness, novelty, coverage. We review these metrics according to the dimensions to which they correspond. A brief overview of approaches to comprehensive evaluation using collections of recommendation system dimensions and associated metrics is presented. We also provide suggestions for key future research and practice directions. Iman Avazpour Faculty of ICT, Centre for Computing and Engineering Software and Systems (SUCCESS), Swinburne University of Technology, Hawthorn, Victoria 3122, Australia e-mail: iavazpour@swin.",
"title": ""
},
{
"docid": "097cab15476b850df18e625530c25821",
"text": "The Internet of Things (IoT) has been growing in recent years with the improvements in several different applications in the military, marine, intelligent transportation, smart health, smart grid, smart home and smart city domains. Although IoT brings significant advantages over traditional information and communication (ICT) technologies for Intelligent Transportation Systems (ITS), these applications are still very rare. Although there is a continuous improvement in road and vehicle safety, as well as improvements in IoT, the road traffic accidents have been increasing over the last decades. Therefore, it is necessary to find an effective way to reduce the frequency and severity of traffic accidents. Hence, this paper presents an intelligent traffic accident detection system in which vehicles exchange their microscopic vehicle variables with each other. The proposed system uses simulated data collected from vehicular ad-hoc networks (VANETs) based on the speeds and coordinates of the vehicles and then, it sends traffic alerts to the drivers. Furthermore, it shows how machine learning methods can be exploited to detect accidents on freeways in ITS. It is shown that if position and velocity values of every vehicle are given, vehicles' behavior could be analyzed and accidents can be detected easily. Supervised machine learning algorithms such as Artificial Neural Networks (ANN), Support Vector Machine (SVM), and Random Forests (RF) are implemented on traffic data to develop a model to distinguish accident cases from normal cases. The performance of RF algorithm, in terms of its accuracy, was found superior to ANN and SVM algorithms. RF algorithm has showed better performance with 91.56% accuracy than SVM with 88.71% and ANN with 90.02% accuracy.",
"title": ""
},
{
"docid": "a19f4e5f36b04fed7937be1c90ce3581",
"text": "This paper describes a map-matching algorithm designed to support the navigational functions of a real-time vehicle performance and emissions monitoring system currently under development, and other transport telematics applications. The algorithm is used together with the outputs of an extended Kalman filter formulation for the integration of GPS and dead reckoning data, and a spatial digital database of the road network, to provide continuous, accurate and reliable vehicle location on a given road segment. This is irrespective of the constraints of the operational environment, thus alleviating outage and accuracy problems associated with the use of stand-alone location sensors. The map-matching algorithm has been tested using real field data and has been found to be superior to existing algorithms, particularly in how it performs at road intersections.",
"title": ""
},
{
"docid": "42c0f8504f26d46a4cc92d3c19eb900d",
"text": "Research into suicide prevention has been hampered by methodological limitations such as low sample size and recall bias. Recently, Natural Language Processing (NLP) strategies have been used with Electronic Health Records to increase information extraction from free text notes as well as structured fields concerning suicidality and this allows access to much larger cohorts than previously possible. This paper presents two novel NLP approaches – a rule-based approach to classify the presence of suicide ideation and a hybrid machine learning and rule-based approach to identify suicide attempts in a psychiatric clinical database. Good performance of the two classifiers in the evaluation study suggest they can be used to accurately detect mentions of suicide ideation and attempt within free-text documents in this psychiatric database. The novelty of the two approaches lies in the malleability of each classifier if a need to refine performance, or meet alternate classification requirements arises. The algorithms can also be adapted to fit infrastructures of other clinical datasets given sufficient clinical recording practice knowledge, without dependency on medical codes or additional data extraction of known risk factors to predict suicidal behaviour.",
"title": ""
},
{
"docid": "d8780989fc125b69beb456986819d624",
"text": "The particle swarm optimization algorithm is analyzed using standard results from the dynamic system theory. Graphical parameter selection guidelines are derived. The exploration–exploitation tradeoff is discussed and illustrated. Examples of performance on benchmark functions superior to previously published results are given. 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "eec0aecb9b41fa1b2db390bdab2c4c44",
"text": "Wi-Fi Tracking: Fingerprinting Attacks and CounterMeasures The recent spread of everyday-carried Wi-Fi-enabled devices (smartphones, tablets and wearable devices) comes with a privacy threat to their owner, and to society as a whole. These devices continuously emit signals which can be captured by a passive attacker using cheap hardware and basic knowledge. These signals contain a unique identi er, called the MAC address. To mitigate the threat, device vendors are currently deploying a countermeasure on new devices: MAC address randomization. Unfortunately, we show that this mitigation, in its current state, is insu cient to prevent tracking. To do so, we introduce several attacks, based on the content and the timing of emitted signals. In complement, we study implementations of MAC address randomization in some recent devices, and nd a number of shortcomings limiting the e ciency of these implementations at preventing device tracking. At the same time, we perform two real-world studies. The rst one considers the development of actors exploiting this issue to install Wi-Fi tracking systems. We list some real-world installations and discuss their various aspects, including regulation, privacy implications, consent and public acceptance. The second one deals with the spread of MAC address randomization in the devices population. Finally, we present two tools: an experimental Wi-Fi tracking system for testing and public awareness raising purpose, and a tool estimating the uniqueness of a device based on the content of its emitted signals even if the identi er is randomized.",
"title": ""
},
{
"docid": "0d509af77c0bb093d534cd95102b8941",
"text": "A compelling body of evidence indicates that observing a task-irrelevant action makes the execution of that action more likely. However, it remains unclear whether this 'automatic imitation' effect is indeed automatic or whether the imitative action is voluntary. The present study tested the automaticity of automatic imitation by asking whether it occurs in a strategic context where it reduces payoffs. Participants were required to play rock-paper-scissors, with the aim of achieving as many wins as possible, while either one or both players were blindfolded. While the frequency of draws in the blind-blind condition was precisely that expected at chance, the frequency of draws in the blind-sighted condition was significantly elevated. Specifically, the execution of either a rock or scissors gesture by the blind player was predictive of an imitative response by the sighted player. That automatic imitation emerges in a context where imitation reduces payoffs accords with its 'automatic' description, and implies that these effects are more akin to involuntary than to voluntary actions. These data represent the first evidence of automatic imitation in a strategic context, and challenge the abstraction from physical aspects of social interaction typical in economic and game theory.",
"title": ""
},
{
"docid": "83ae128f71bb154177881012dfb6a680",
"text": "Cell imbalance in large battery packs degrades their capacity delivery, especially for cells connected in series where the weakest cell dominates their overall capacity. In this article, we present a case study of exploiting system reconfigurations to mitigate the cell imbalance in battery packs. Specifically, instead of using all the cells in a battery pack to support the load, selectively skipping cells to be discharged may actually enhance the pack’s capacity delivery. Based on this observation, we propose CSR, a Cell Skipping-assisted Reconfiguration algorithm that identifies the system configuration with (near)-optimal capacity delivery. We evaluate CSR using large-scale emulation based on empirically collected discharge traces of 40 lithium-ion cells. CSR achieves close-to-optimal capacity delivery when the cell imbalance in the battery pack is low and improves the capacity delivery by about 20% and up to 1x in the case of a high imbalance.",
"title": ""
},
{
"docid": "d0cbdd5230d97d16b9955013699df5aa",
"text": "There has been a great deal of recent interest in statistical models of 2D landmark data for generating compact deformable models of a given object. This paper extends this work to a class of parametrised shapes where there are no landmarks available. A rigorous statistical framework for the eigenshape model is introduced, which is an extension to the conventional Linear Point Distribution Model. One of the problems associated with landmark free methods is that a large degree of variability in any shape descriptor may be due to the choice of parametrisation. An automated training method is described which utilises an iterative feedback method to overcome this problem. The result is an automatically generated compact linear shape model. The model has been successfully applied to a problem of tracking the outline of a walking pedestrian in real time.",
"title": ""
},
{
"docid": "e7d36dc01a3e20c3fb6d2b5245e46705",
"text": "A gender gap in mathematics achievement persists in some nations but not in others. In light of the underrepresentation of women in careers in science, technology, mathematics, and engineering, increasing research attention is being devoted to understanding gender differences in mathematics achievement, attitudes, and affect. The gender stratification hypothesis maintains that such gender differences are closely related to cultural variations in opportunity structures for girls and women. We meta-analyzed 2 major international data sets, the 2003 Trends in International Mathematics and Science Study and the Programme for International Student Assessment, representing 493,495 students 14-16 years of age, to estimate the magnitude of gender differences in mathematics achievement, attitudes, and affect across 69 nations throughout the world. Consistent with the gender similarities hypothesis, all of the mean effect sizes in mathematics achievement were very small (d < 0.15); however, national effect sizes showed considerable variability (ds = -0.42 to 0.40). Despite gender similarities in achievement, boys reported more positive math attitudes and affect (ds = 0.10 to 0.33); national effect sizes ranged from d = -0.61 to 0.89. In contrast to those of previous tests of the gender stratification hypothesis, our results point to specific domains of gender equity responsible for gender gaps in math. Gender equity in school enrollment, women's share of research jobs, and women's parliamentary representation were the most powerful predictors of cross-national variability in gender gaps in math. Results are situated within the context of existing research demonstrating apparently paradoxical effects of societal gender equity and highlight the significance of increasing girls' and women's agency cross-nationally.",
"title": ""
},
{
"docid": "7c7beabf8bcaa2af706b6c1fd92ee8dd",
"text": "In this paper, two main contributions are presented to manage the power flow between a 11 wind turbine and a solar power system. The first one is to use the fuzzy logic controller as an 12 objective to find the maximum power point tracking, applied to a hybrid wind-solar system, at fixed 13 atmospheric conditions. The second one is to response to real-time control system constraints and 14 to improve the generating system performance. For this, a hardware implementation of the 15 proposed algorithm is performed using the Xilinx system generator. The experimental results show 16 that the suggested system presents high accuracy and acceptable execution time performances. The 17 proposed model and its control strategy offer a proper tool for optimizing the hybrid power system 18 performance which we can use in smart house applications. 19",
"title": ""
},
{
"docid": "12b1f774967739ea12a1ddcfe43f2faf",
"text": "Herbal drug authentication is an important task in traditional medicine; however, it is challenged by the limitations of traditional authentication methods and the lack of trained experts. DNA barcoding is conspicuous in almost all areas of the biological sciences and has already been added to the British pharmacopeia and Chinese pharmacopeia for routine herbal drug authentication. However, DNA barcoding for the Korean pharmacopeia still requires significant improvements. Here, we present a DNA barcode reference library for herbal drugs in the Korean pharmacopeia and developed a species identification engine named KP-IDE to facilitate the adoption of this DNA reference library for the herbal drug authentication. Using taxonomy records, specimen records, sequence records, and reference records, KP-IDE can identify an unknown specimen. Currently, there are 6,777 taxonomy records, 1,054 specimen records, 30,744 sequence records (ITS2 and psbA-trnH) and 285 reference records. Moreover, 27 herbal drug materials were collected from the Seoul Yangnyeongsi herbal medicine market to give an example for real herbal drugs authentications. Our study demonstrates the prospects of the DNA barcode reference library for the Korean pharmacopeia and provides future directions for the use of DNA barcoding for authenticating herbal drugs listed in other modern pharmacopeias.",
"title": ""
},
{
"docid": "2b4b822d722fac299ae7504078d87fd0",
"text": "LETOR is a package of benchmark data sets for research on LEarning TO Rank, which contains standard features, relevance judgments, data partitioning, evaluation tools, and several baselines. Version 1.0 was released in April 2007. Version 2.0 was released in Dec. 2007. Version 3.0 was released in Dec. 2008. This version, 4.0, was released in July 2009. Very different from previous versions (V3.0 is an update based on V2.0 and V2.0 is an update based on V1.0), LETOR4.0 is a totally new release. It uses the Gov2 web page collection (~25M pages) and two query sets from Million Query track of TREC 2007 and TREC 2008. We call the two query sets MQ2007 and MQ2008 for short. There are about 1700 queries in MQ2007 with labeled documents and about 800 queries in MQ2008 with labeled documents. If you have any questions or suggestions about the datasets, please kindly email us ([email protected]). Our goal is to make the dataset reliable and useful for the community.",
"title": ""
}
] | scidocsrr |
47052b6522116f9277c62e67fdf9cc95 | The Reversible Residual Network: Backpropagation Without Storing Activations | [
{
"docid": "7ec6540b44b23a0380dcb848239ccac4",
"text": "There is plenty of theoretical and empirical evidence that depth of neural networks is a crucial ingredient for their success. However, network training becomes more difficult with increasing depth and training of very deep networks remains an open problem. In this extended abstract, we introduce a new architecture designed to ease gradient-based training of very deep networks. We refer to networks with this architecture as highway networks, since they allow unimpeded information flow across several layers on information highways. The architecture is characterized by the use of gating units which learn to regulate the flow of information through a network. Highway networks with hundreds of layers can be trained directly using stochastic gradient descent and with a variety of activation functions, opening up the possibility of studying extremely deep and efficient architectures. Note: A full paper extending this study is available at http://arxiv.org/abs/1507.06228, with additional references, experiments and analysis.",
"title": ""
},
{
"docid": "4d2be7aac363b77c6abd083947bc28c7",
"text": "Scene parsing is challenging for unrestricted open vocabulary and diverse scenes. In this paper, we exploit the capability of global context information by different-region-based context aggregation through our pyramid pooling module together with the proposed pyramid scene parsing network (PSPNet). Our global prior representation is effective to produce good quality results on the scene parsing task, while PSPNet provides a superior framework for pixel-level prediction. The proposed approach achieves state-of-the-art performance on various datasets. It came first in ImageNet scene parsing challenge 2016, PASCAL VOC 2012 benchmark and Cityscapes benchmark. A single PSPNet yields the new record of mIoU accuracy 85.4% on PASCAL VOC 2012 and accuracy 80.2% on Cityscapes.",
"title": ""
},
{
"docid": "b2fc60b400b2b8ed3425658e3a1e9217",
"text": "We propose a systematic approach to reduce the memory consumption of deep neural network training. Specifically, we design an algorithm that costs O( √ n) memory to train a n layer network, with only the computational cost of an extra forward pass per mini-batch. As many of the state-of-the-art models hit the upper bound of the GPU memory, our algorithm allows deeper and more complex models to be explored, and helps advance the innovations in deep learning research. We focus on reducing the memory cost to store the intermediate feature maps and gradients during training. Computation graph analysis is used for automatic in-place operation and memory sharing optimizations. We show that it is possible to trade computation for memory giving a more memory efficient training algorithm with a little extra computation cost. In the extreme case, our analysis also shows that the memory consumption can be reduced to O(logn) with as little as O(n logn) extra cost for forward computation. Our experiments show that we can reduce the memory cost of a 1,000-layer deep residual network from 48G to 7G with only 30% additional running time cost on ImageNet problems. Similarly, significant memory cost reduction is observed in training complex recurrent neural networks on very long sequences.",
"title": ""
},
{
"docid": "b0bd9a0b3e1af93a9ede23674dd74847",
"text": "This paper introduces WaveNet, a deep neural network for generating raw audio waveforms. The model is fully probabilistic and autoregressive, with the predictive distribution for each audio sample conditioned on all previous ones; nonetheless we show that it can be efficiently trained on data with tens of thousands of samples per second of audio. When applied to text-to-speech, it yields state-ofthe-art performance, with human listeners rating it as significantly more natural sounding than the best parametric and concatenative systems for both English and Mandarin. A single WaveNet can capture the characteristics of many different speakers with equal fidelity, and can switch between them by conditioning on the speaker identity. When trained to model music, we find that it generates novel and often highly realistic musical fragments. We also show that it can be employed as a discriminative model, returning promising results for phoneme recognition.",
"title": ""
}
] | [
{
"docid": "79564b938dde94306a2a142240bf30ea",
"text": "Accurately counting maize tassels is important for monitoring the growth status of maize plants. This tedious task, however, is still mainly done by manual efforts. In the context of modern plant phenotyping, automating this task is required to meet the need of large-scale analysis of genotype and phenotype. In recent years, computer vision technologies have experienced a significant breakthrough due to the emergence of large-scale datasets and increased computational resources. Naturally image-based approaches have also received much attention in plant-related studies. Yet a fact is that most image-based systems for plant phenotyping are deployed under controlled laboratory environment. When transferring the application scenario to unconstrained in-field conditions, intrinsic and extrinsic variations in the wild pose great challenges for accurate counting of maize tassels, which goes beyond the ability of conventional image processing techniques. This calls for further robust computer vision approaches to address in-field variations. This paper studies the in-field counting problem of maize tassels. To our knowledge, this is the first time that a plant-related counting problem is considered using computer vision technologies under unconstrained field-based environment. With 361 field images collected in four experimental fields across China between 2010 and 2015 and corresponding manually-labelled dotted annotations, a novel Maize Tassels Counting (MTC) dataset is created and will be released with this paper. To alleviate the in-field challenges, a deep convolutional neural network-based approach termed TasselNet is proposed. TasselNet can achieve good adaptability to in-field variations via modelling the local visual characteristics of field images and regressing the local counts of maize tassels. Extensive results on the MTC dataset demonstrate that TasselNet outperforms other state-of-the-art approaches by large margins and achieves the overall best counting performance, with a mean absolute error of 6.6 and a mean squared error of 9.6 averaged over 8 test sequences. TasselNet can achieve robust in-field counting of maize tassels with a relatively high degree of accuracy. Our experimental evaluations also suggest several good practices for practitioners working on maize-tassel-like counting problems. It is worth noting that, though the counting errors have been greatly reduced by TasselNet, in-field counting of maize tassels remains an open and unsolved problem.",
"title": ""
},
{
"docid": "bbb6b192974542b165d3f7a0d139a8e1",
"text": "While gamification is gaining ground in business, marketing, corporate management, and wellness initiatives, its application in education is still an emerging trend. This article presents a study of the published empirical research on the application of gamification to education. The study is limited to papers that discuss explicitly the effects of using game elements in specific educational contexts. It employs a systematic mapping design. Accordingly, a categorical structure for classifying the research results is proposed based on the extracted topics discussed in the reviewed papers. The categories include gamification design principles, game mechanics, context of applying gamification (type of application, educational level, and academic subject), implementation, and evaluation. By mapping the published works to the classification criteria and analyzing them, the study highlights the directions of the currently conducted empirical research on applying gamification to education. It also indicates some major obstacles and needs, such as the need for proper technological support, for controlled studies demonstrating reliable positive or negative results of using specific game elements in particular educational contexts, etc. Although most of the reviewed papers report promising results, more substantial empirical research is needed to determine whether both extrinsic and intrinsic motivation of the learners can be influenced by gamification.",
"title": ""
},
{
"docid": "072a6a274820e7dea5d811906f81d244",
"text": "Analysis of vascular geometry is important in many medical imaging applications, such as retinal, pulmonary, and cardiac investigations. In order to make reliable judgments for clinical usage, accurate and robust segmentation methods are needed. Due to the high complexity of biological vasculature trees, manual identification is often too time-consuming and tedious to be used in practice. To design an automated and computerized method, a major challenge is that the appearance of vasculatures in medical images has great variance across modalities and subjects. Therefore, most existing approaches are specially designed for a particular task, lacking the flexibility to be adapted to other circumstances. In this paper, we present a generic approach for vascular structure identification from medical images, which can be used for multiple purposes robustly. The proposed method uses the state-of-the-art deep convolutional neural network (CNN) to learn the appearance features of the target. A Principal Component Analysis (PCA)-based nearest neighbor search is then utilized to estimate the local structure distribution, which is further incorporated within the generalized probabilistic tracking framework to extract the entire connected tree. Qualitative and quantitative results over retinal fundus data demonstrate that the proposed framework achieves comparable accuracy as compared with state-of-the-art methods, while efficiently producing more information regarding the candidate tree structure.",
"title": ""
},
{
"docid": "824480b0f5886a37ca1930ce4484800d",
"text": "Conduction loss reduction technique using a small resonant capacitor for a phase shift full bridge converter with clamp diodes is proposed in this paper. The proposed technique can be implemented simply by adding a small resonant capacitor beside the leakage inductor of transformer. Since the voltage across the small resonant capacitor is applied to the small leakage inductor of transformer during freewheeling period, the primary current can be decreased rapidly. This results in the reduced conduction loss on the secondary side of transformer while the proposed technique can still guarantee the wide ZVS ranges. The operational principles and analysis are presented. Experimental results show that the proposed reduction technique of conduction loss can be operated properly.",
"title": ""
},
{
"docid": "ecea52064dd97ee4acdd11cb2c84f8cf",
"text": "Occupational therapists have used activity analysis to ensure the therapeutic use of activities. Recently, they have begun to explore the affective components of activities. This study explores the feelings (affective responses) that chronic psychiatric patients have toward selected activities commonly used in occupational therapy. Twenty-two participating chronic psychiatric patients were randomly assigned to one of three different activity groups: cooking, craft, or sensory awareness. Immediately following participation, each subject was asked to rate the activity by using Osgood's semantic differential, which measures the evaluation, power, and action factors of affective meaning. Data analysis revealed significant differences between the cooking activity and the other two activities on the evaluation factor. The fact that the three activities were rated differently is evidence that different activities can elicit different responses in one of the target populations of occupational therapy. The implications of these findings to occupational therapists are discussed and areas of future research are indicated.",
"title": ""
},
{
"docid": "23ee528e0efe7c4fec7f8cda7e49a8dd",
"text": "The development of reliability-based design criteria for surface ship structures needs to consider the following three components: (1) loads, (2) structural strength, and (3) methods of reliability analysis. A methodology for reliability-based design of ship structures is provided in this document. The methodology consists of the following two approaches: (1) direct reliabilitybased design, and (2) load and resistance factor design (LRFD) rules. According to this methodology, loads can be linearly or nonlinearly treated. Also in assessing structural strength, linear or nonlinear analysis can be used. The reliability assessment and reliability-based design can be performed at several levels of a structural system, such as at the hull-girder, grillage, panel, plate and detail levels. A rational treatment of uncertainty is suggested by considering all its types. Also, failure definitions can have significant effects on the assessed reliability, or resulting reliability-based designs. A method for defining and classifying failures at the system level is provided. The method considers the continuous nature of redundancy in ship structures. A bibliography is provided at the end of this document to facilitate future implementation of the methodology.",
"title": ""
},
{
"docid": "0356445aef8821582d18234683b62194",
"text": "Supervisory control and data acquisition (SCADA) systems are large-scale industrial control systems often spread across geographically dispersed locations that let human operators control entire physical systems, from a single control room. Early multi-site SCADA systems used closed networks and propriety industrial communication protocols like Modbus, DNP3 etc to reach remote sites. But with time it has become more convenient and more cost-effective to connect them to the Internet. However, internet connections to SCADA systems build in new vulnerabilities, as SCADA systems were not designed with internet security in mind. This can become matter of national security if these systems are power plants, water treatment facilities, or other pieces of critical infrastructure. Compared to IT systems, SCADA systems have a higher requirement concerning reliability, latency and uptime, so it is not always feasible to apply IT security measures deployed in IT systems. This paper provides an overview of security issues and threats in SCADA networks. Next, attention is focused on security assessment of the SCADA. This is followed by an overview of relevant SCADA security solutions. Finally we propose our security solution approach which is embedded in bump-in-the-wire is discussed.",
"title": ""
},
{
"docid": "7f54157faf8041436174fa865d0f54a8",
"text": "The goal of robot learning from demonstra tion is to have a robot learn from watching a demonstration of the task to be performed In our approach to learning from demon stration the robot learns a reward function from the demonstration and a task model from repeated attempts to perform the task A policy is computed based on the learned reward function and task model Lessons learned from an implementation on an an thropomorphic robot arm using a pendulum swing up task include simply mimicking demonstrated motions is not adequate to per form this task a task planner can use a learned model and reward function to com pute an appropriate policy this model based planning process supports rapid learn ing both parametric and nonparametric models can be learned and used and in corporating a task level direct learning com ponent which is non model based in addi tion to the model based planner is useful in compensating for structural modeling errors and slow model learning",
"title": ""
},
{
"docid": "013270914bfee85265f122b239c9fc4c",
"text": "Current study is with the aim to identify similarities and distinctions between irony and sarcasm by adopting quantitative sentiment analysis as well as qualitative content analysis. The result of quantitative sentiment analysis shows that sarcastic tweets are used with more positive tweets than ironic tweets. The result of content analysis corresponds to the result of quantitative sentiment analysis in identifying the aggressiveness of sarcasm. On the other hand, from content analysis it shows that irony owns two senses. The first sense of irony is equal to aggressive sarcasm with speaker awareness. Thus, tweets of first sense of irony may attack a specific target, and the speaker may tag his/her tweet irony because the tweet itself is ironic. These tweets though tagged as irony are in fact sarcastic tweets. Different from this, the tweets of second sense of irony is tagged to classify an event to be ironic. However, from the distribution in sentiment analysis and examples in content analysis, irony seems to be more broadly used in its second sense.",
"title": ""
},
{
"docid": "f17a6c34a7b3c6a7bf266f04e819af94",
"text": "BACKGROUND\nPatients with advanced squamous-cell non-small-cell lung cancer (NSCLC) who have disease progression during or after first-line chemotherapy have limited treatment options. This randomized, open-label, international, phase 3 study evaluated the efficacy and safety of nivolumab, a fully human IgG4 programmed death 1 (PD-1) immune-checkpoint-inhibitor antibody, as compared with docetaxel in this patient population.\n\n\nMETHODS\nWe randomly assigned 272 patients to receive nivolumab, at a dose of 3 mg per kilogram of body weight every 2 weeks, or docetaxel, at a dose of 75 mg per square meter of body-surface area every 3 weeks. The primary end point was overall survival.\n\n\nRESULTS\nThe median overall survival was 9.2 months (95% confidence interval [CI], 7.3 to 13.3) with nivolumab versus 6.0 months (95% CI, 5.1 to 7.3) with docetaxel. The risk of death was 41% lower with nivolumab than with docetaxel (hazard ratio, 0.59; 95% CI, 0.44 to 0.79; P<0.001). At 1 year, the overall survival rate was 42% (95% CI, 34 to 50) with nivolumab versus 24% (95% CI, 17 to 31) with docetaxel. The response rate was 20% with nivolumab versus 9% with docetaxel (P=0.008). The median progression-free survival was 3.5 months with nivolumab versus 2.8 months with docetaxel (hazard ratio for death or disease progression, 0.62; 95% CI, 0.47 to 0.81; P<0.001). The expression of the PD-1 ligand (PD-L1) was neither prognostic nor predictive of benefit. Treatment-related adverse events of grade 3 or 4 were reported in 7% of the patients in the nivolumab group as compared with 55% of those in the docetaxel group.\n\n\nCONCLUSIONS\nAmong patients with advanced, previously treated squamous-cell NSCLC, overall survival, response rate, and progression-free survival were significantly better with nivolumab than with docetaxel, regardless of PD-L1 expression level. (Funded by Bristol-Myers Squibb; CheckMate 017 ClinicalTrials.gov number, NCT01642004.).",
"title": ""
},
{
"docid": "6adf612b6a80494f9c9559170ab66670",
"text": "In recent years, Steganography and Steganalysis are two important areas of research that involve a number of applications. These two areas of research are important especially when reliable and secure information exchange is required. Steganography is an art of embedding information in a cover image without causing statistically significant variations to the cover image. Steganalysis is the technology that attempts to defeat Steganography by detecting the hidden information and extracting. In this paper a comparative analysis is made to demonstrate the effectiveness of the proposed methods. The effectiveness of the proposed methods has been estimated by computing Mean square error (MSE) and Peak Signal to Noise Ratio (PSNR), Processing time, security.The analysis shows that the BER and PSNR is improved in the LSB Method but security sake DCT is the best method.",
"title": ""
},
{
"docid": "491bf7103b8540748b58465ff9238fe7",
"text": "We present a new approach for defining groups of populations that are geographically homogeneous and maximally differentiated from each other. As a by-product, it also leads to the identification of genetic barriers between these groups. The method is based on a simulated annealing procedure that aims to maximize the proportion of total genetic variance due to differences between groups of populations (spatial analysis of molecular variance; samova). Monte Carlo simulations were used to study the performance of our approach and, for comparison, the behaviour of the Monmonier algorithm, a procedure commonly used to identify zones of sharp genetic changes in a geographical area. Simulations showed that the samova algorithm indeed finds maximally differentiated groups, which do not always correspond to the simulated group structure in the presence of isolation by distance, especially when data from a single locus are available. In this case, the Monmonier algorithm seems slightly better at finding predefined genetic barriers, but can often lead to the definition of groups of populations not differentiated genetically. The samova algorithm was then applied to a set of European roe deer populations examined for their mitochondrial DNA (mtDNA) HVRI diversity. The inferred genetic structure seemed to confirm the hypothesis that some Italian populations were recently reintroduced from a Balkanic stock, as well as the differentiation of groups of populations possibly due to the postglacial recolonization of Europe or the action of a specific barrier to gene flow.",
"title": ""
},
{
"docid": "aabed671a466730e273225d8ee572f73",
"text": "It is essential to base instruction on a foundation of understanding of children’s thinking, but it is equally important to adopt the longer-term view that is needed to stretch these early competencies into forms of thinking that are complex, multifaceted, and subject to development over years, rather than weeks or months. We pursue this topic through our studies of model-based reasoning. We have identified four forms of models and related modeling practices that show promise for developing model-based reasoning. Models have the fortuitous feature of making forms of student reasoning public and inspectable—not only among the community of modelers, but also to teachers. Modeling provides feedback about student thinking that can guide teaching decisions, an important dividend for improving professional practice.",
"title": ""
},
{
"docid": "fe59d96ddb5a777f154da5cf813c556c",
"text": "For a set $P$ of $n$ points in the plane and an integer $k \\leq n$, consider the problem of finding the smallest circle enclosing at least $k$ points of $P$. We present a randomized algorithm that computes in $O( n k )$ expected time such a circle, improving over previously known algorithms. Further, we present a linear time $\\delta$-approximation algorithm that outputs a circle that contains at least $k$ points of $P$ and has radius less than $(1+\\delta)r_{opt}(P,k)$, where $r_{opt}(P,k)$ is the radius of the minimum circle containing at least $k$ points of $P$. The expected running time of this approximation algorithm is $O(n + n \\cdot\\min((1/k\\delta^3) \\log^2 (1/\\delta), k))$.",
"title": ""
},
{
"docid": "647ba490d8507eeefb50387ab95bf59c",
"text": "This study compares the cradle-to-gate total energy and major emissions for the extraction of raw materials, production, and transportation of the common wood building materials from the CORRIM 2004 reports. A life-cycle inventory produced the raw materials, including fuel resources and emission to air, water, and land for glued-laminated timbers, kiln-dried and green softwood lumber, laminated veneer lumber, softwood plywood, and oriented strandboard. Major findings from these comparisons were that the production of wood products, by the nature of the industry, uses a third of their energy consumption from renewable resources and the remainder from fossil-based, non-renewable resources when the system boundaries consider forest regeneration and harvesting, wood products and resin production, and transportation life-cycle stages. When the system boundaries are reduced to a gate-to-gate (manufacturing life-cycle stage) model for the wood products, the biomass component of the manufacturing energy increases to nearly 50% for most products and as high as 78% for lumber production from the Southeast. The manufacturing life-cycle stage consumed the most energy over all the products when resin is considered part of the production process. Extraction of log resources and transportation of raw materials for production had the least environmental impact.",
"title": ""
},
{
"docid": "734638df47b05b425b0dcaaab11d886e",
"text": "Satisfying the needs of users of online video streaming services requires not only to manage the network Quality of Service (QoS), but also to address the user's Quality of Experience (QoE) expectations. While QoS factors reflect the status of individual networks, they do not comprehensively capture the end-to-end features affecting the quality delivered to the user. In this situation, QoE management is the better option. However, traditionally used QoE management models require human interaction and have stringent requirements in terms of time and complexity. Thus, they fail to achieve successful performance in terms of real-timeliness, accuracy, scalability and adaptability. This dissertation work investigates new methods to bring QoE management to the level required by the real-time management of video services. In this paper, we highlight our main contributions. First, with the aim to perform a combined network-service assessment, we designed an experimental methodology able to map network QoS onto service QoE. Our methodology is meant to provide service and network providers with the means to pinpoint the working boundaries of their video-sets and to predict the effect of network policies on perception. Second, we developed a generic machine learning framework that allows deriving accurate predictive No Reference (NR) assessment metrics, based on simplistic NR QoE methods, that are functionally and computationally viable for real-time QoE evaluation. The tools, methods and conclusions derived from this dissertation conform a solid contribution to QoE management of video streaming services, opening new venues for further research.",
"title": ""
},
{
"docid": "49a9b9bb7a040523378f5ed4363f9fe9",
"text": "Pattern recognition is used to classify the input data into different classes based on extracted key features. Increasing the recognition rate of pattern recognition applications is a challenging task. The spike neural networks inspired from physiological brain architecture, is a neuromorphic hardware implementation of network of neurons. A sample of neuromorphic architecture has two layers of neurons, input and output. The number of input neurons is fixed based on the input data patterns. While the number of outputs neurons can be different. The goal of this paper is performance evaluation of neuromorphic architecture in terms of recognition rates using different numbers of output neurons. For this purpose a simulation environment of N2S3 and MNIST handwritten digits are used. Our simulation results show the recognition rate for various number of output neurons, 20, 30, 50, 100, 200, and 300 is 70%, 74%, 79%, 85%, 89%, and 91%, respectively.",
"title": ""
},
{
"docid": "9973de0dc30f8e8f7234819163a15db2",
"text": "Jennifer L. Docktor, Natalie E. Strand, José P. Mestre, and Brian H. Ross Department of Physics, University of Wisconsin–La Crosse, La Crosse, Wisconsin 54601, USA Department of Physics, University of Illinois, Urbana, Illinois 61801, USA Beckman Institute for Advanced Science and Technology, University of Illinois, Urbana, Illinois 61801, USA Department of Educational Psychology, University of Illinois, Champaign, Illinois 61820, USA Department of Psychology, University of Illinois, Champaign, Illinois 61820, USA (Received 30 April 2015; published 1 September 2015)",
"title": ""
},
{
"docid": "d8d52c5329ed7f187ba7ebfde45b750c",
"text": "Lately enhancing the capability of network services automatically and dynamically through SDN and CDN/CDNi networks has become a recent topic of research. While, in one hand, these systems can be very beneficial to control and optimize the overall network services that studies the topology, traffic paths, packet handling and such others, on the other hand, the servers in such architectures can also be a potential target for DoS and/or DDoS attacks. We, therefore, propose a mechanism for the SDN based CDNi networks to securely deliver services with a multi-defense strategy against DDoS attacks. Addition of ALTO like servers in such architectures enables mapping a very big network to provide a bird's eye view. We propose an additional marking path map in the ALTO server to trace the request packets. The next defense is a protection switch to protect the main servers. A Management Information Base (MIB) is also proposed in the SDN controller to compare and assess the request traffic coming to the protection switches.",
"title": ""
}
] | scidocsrr |
cedfb0244b1ea9b24f594603745167e5 | Dynamic Facet Ordering for Faceted Product Search Engines | [
{
"docid": "0dbad8ca53615294bc25f7a2d8d41d99",
"text": "Faceted search is becoming a popular method to allow users to interactively search and navigate complex information spaces. A faceted search system presents users with key-value metadata that is used for query refinement. While popular in e-commerce and digital libraries, not much research has been conducted on which metadata to present to a user in order to improve the search experience. Nor are there repeatable benchmarks for evaluating a faceted search engine. This paper proposes the use of collaborative filtering and personalization to customize the search interface to each user's behavior. This paper also proposes a utility based framework to evaluate the faceted interface. In order to demonstrate these ideas and better understand personalized faceted search, several faceted search algorithms are proposed and evaluated using the novel evaluation methodology.",
"title": ""
}
] | [
{
"docid": "782396981f9d3fffb74d7e03048cdb6b",
"text": "A high-voltage high-speed gate driver to enable synchronous rectifiers with zero-voltage-switching (ZVS) operation is presented in this paper. A capacitive-coupled level-shifter (CCLS) is developed to achieve negligible propagation delay and static current consumption. With only 1 off-chip capacitor, the proposed gate driver possesses strong driving capability and requires no external floating supply for the high-side driving. A dynamic timing control is also proposed not only to enable ZVS operation in the converter for minimizing the capacitive switching loss, but also to eliminate the converter short-circuit power loss. Implemented in a 0.5μm HV CMOS process, the proposed CCLS of the gate driver can shift up a 5V signal to the 100V DC rail with sub-nanosecond delay, improving the FoM by at least 29 times compared with that of state-of-the-art counterparts. The dynamic dead-time control properly enables ZVS operation in a synchronous buck converter under different input voltages (30V to 100V). The power losses of the high-voltage buck converter are thus greatly reduced under different load currents, achieving a maximum power efficiency improvement of 11.5%.",
"title": ""
},
{
"docid": "cceb05e100fe8c9f9dab9f6525d435db",
"text": "Conventional feedback control methods can solve various types of robot control problems very efficiently by capturing the structure with explicit models, such as rigid body equations of motion. However, many control problems in modern manufacturing deal with contacts and friction, which are difficult to capture with first-order physical modeling. Hence, applying control design methodologies to these kinds of problems often results in brittle and inaccurate controllers, which have to be manually tuned for deployment. Reinforcement learning (RL) methods have been demonstrated to be capable of learning continuous robot controllers from interactions with the environment, even for problems that include friction and contacts. In this paper, we study how we can solve difficult control problems in the real world by decomposing them into a part that is solved efficiently by conventional feedback control methods, and the residual which is solved with RL. The final control policy is a superposition of both control signals. We demonstrate our approach by training an agent to successfully perform a real-world block assembly task involving contacts and unstable objects.",
"title": ""
},
{
"docid": "323abed1a623e49db50bed383ab26a92",
"text": "Robust object detection is a critical skill for robotic applications in complex environments like homes and offices. In this paper we propose a method for using multiple cameras to simultaneously view an object from multiple angles and at high resolutions. We show that our probabilistic method for combining the camera views, which can be used with many choices of single-image object detector, can significantly improve accuracy for detecting objects from many viewpoints. We also present our own single-image object detection method that uses large synthetic datasets for training. Using a distributed, parallel learning algorithm, we train from very large datasets (up to 100 million image patches). The resulting object detector achieves high performance on its own, but also benefits substantially from using multiple camera views. Our experimental results validate our system in realistic conditions and demonstrates significant performance gains over using standard single-image classifiers, raising accuracy from 0.86 area-under-curve to 0.97.",
"title": ""
},
{
"docid": "fca35510714dcf6f2a7a835291db382f",
"text": "This paper considers the state of art real-time detection network single-shot multi-box detector (SSD) for multi-targets detection. It is built on top of a base network VGG16 that ends with some convolution layers. Its base network VGG16, designed for 1000 categories in Imagenet dataset, is obviously over-parametered, when used for 21 categories classification in VOC dataset. In this paper, we visualize the base network VGG16 in SSD network by deconvolution method. We analyze the discriminative feature learned by last layer conv5_3 of VGG16 network due to its semantic property. Redundancy intra-channel can be seen in the form of deconvolution image. Accordingly, we propose a pruning method to obtain a compressed network with high accuracy. Experiments illustrate the efficiency of our method by comparing different fine-tune methods. A reduced SSD network is obtained with even higher mAP than the original one by 2 percent. When only 4% of the original kernels in conv5_3 is remained, mAP is still as high as that of the original network.",
"title": ""
},
{
"docid": "0ab4f0cf03c0a2d72b4e9ed079181a67",
"text": "In this paper, we present a method for estimating articulated human poses in videos. We cast this as an optimization problem defined on body parts with spatio-temporal links between them. The resulting formulation is unfortunately intractable and previous approaches only provide approximate solutions. Although such methods perform well on certain body parts, e.g., head, their performance on lower arms, i.e., elbows and wrists, remains poor. We present a new approximate scheme with two steps dedicated to pose estimation. First, our approach takes into account temporal links with subsequent frames for the less-certain parts, namely elbows and wrists. Second, our method decomposes poses into limbs, generates limb sequences across time, and recomposes poses by mixing these body part sequences. We introduce a new dataset \"Poses in the Wild\", which is more challenging than the existing ones, with sequences containing background clutter, occlusions, and severe camera motion. We experimentally compare our method with recent approaches on this new dataset as well as on two other benchmark datasets, and show significant improvement.",
"title": ""
},
{
"docid": "065e6db1710715ce5637203f1749e6f6",
"text": "Software fault isolation (SFI) is an effective mechanism to confine untrusted modules inside isolated domains to protect their host applications. Since its debut, researchers have proposed different SFI systems for many purposes such as safe execution of untrusted native browser plugins. However, most of these systems focus on the x86 architecture. Inrecent years, ARM has become the dominant architecture for mobile devices and gains in popularity in data centers.Hence there is a compellingneed for an efficient SFI system for the ARM architecture. Unfortunately, existing systems either have prohibitively high performance overhead or place various limitations on the memory layout and instructions of untrusted modules.\n In this paper, we propose ARMlock, a hardware-based fault isolation for ARM. It uniquely leverages the memory domain support in ARM processors to create multiple sandboxes. Memory accesses by the untrusted module (including read, write, and execution) are strictly confined by the hardware,and instructions running inside the sandbox execute at the same speed as those outside it. ARMlock imposes virtually no structural constraints on untrusted modules. For example, they can use self-modifying code, receive exceptions, and make system calls. Moreover, system calls can be interposed by ARMlock to enforce the policies set by the host. We have implemented a prototype of ARMlock for Linux that supports the popular ARMv6 and ARMv7 sub-architecture. Our security assessment and performance measurement show that ARMlock is practical, effective, and efficient.",
"title": ""
},
{
"docid": "b31f5af2510461479d653be1ddadaa22",
"text": "Integrating smart temperature sensors into digital platforms facilitates information to be processed and transmitted, and open up new applications. Furthermore, temperature sensors are crucial components in computing platforms to manage power-efficiency trade-offs reliably under a thermal budget. This paper presents a holistic perspective about smart temperature sensor design from system- to device-level including manufacturing concerns. Through smart sensor design evolutions, we identify some scaling paths and circuit techniques to surmount analog/mixed-signal design challenges in 32-nm and beyond. We close with opportunities to design smarter temperature sensors.",
"title": ""
},
{
"docid": "476e612f4124fc5e9f391e2fa4a49a3b",
"text": "Debugging data processing logic in Data-Intensive Scalable Computing (DISC) systems is a difficult and time consuming effort. Today's DISC systems offer very little tooling for debugging programs, and as a result programmers spend countless hours collecting evidence (e.g., from log files) and performing trial and error debugging. To aid this effort, we built Titian, a library that enables data provenance-tracking data through transformations-in Apache Spark. Data scientists using the Titian Spark extension will be able to quickly identify the input data at the root cause of a potential bug or outlier result. Titian is built directly into the Spark platform and offers data provenance support at interactive speeds-orders-of-magnitude faster than alternative solutions-while minimally impacting Spark job performance; observed overheads for capturing data lineage rarely exceed 30% above the baseline job execution time.",
"title": ""
},
{
"docid": "df9ed642b388f7eac9df492384c81efa",
"text": "The predominantly anaerobic microbiota of the distal ileum and colon contain an extraordinarily complex variety of metabolically active bacteria and fungi that intimately interact with the host's epithelial cells and mucosal immune system. Crohn's disease, ulcerative colitis, and pouchitis are the result of continuous microbial antigenic stimulation of pathogenic immune responses as a consequence of host genetic defects in mucosal barrier function, innate bacterial killing, or immunoregulation. Altered microbial composition and function in inflammatory bowel diseases result in increased immune stimulation, epithelial dysfunction, or enhanced mucosal permeability. Although traditional pathogens probably are not responsible for these disorders, increased virulence of commensal bacterial species, particularly Escherichia coli, enhance their mucosal attachment, invasion, and intracellular persistence, thereby stimulating pathogenic immune responses. Host genetic polymorphisms most likely interact with functional bacterial changes to stimulate aggressive immune responses that lead to chronic tissue injury. Identification of these host and microbial alterations in individual patients should lead to selective targeted interventions that correct underlying abnormalities and induce sustained and predictable therapeutic responses.",
"title": ""
},
{
"docid": "41cfe93db7c4635e106a1d620ea31036",
"text": "Neuroblastoma (NBL) and medulloblastoma (MBL) are tumors of the neuroectoderm that occur in children. NBL and MBL express Trk family tyrosine kinase receptors, which regulate growth, differentiation, and cell death. CEP-751 (KT-6587), an indolocarbazole derivative, is an inhibitor of Trk family tyrosine kinases at nanomolar concentrations. This study was designed to determine the effect of CEP-751 on the growth of NBL and MBL cell lines as xenografts. In vivo studies were conducted on four NBL cell lines (IMR-5, CHP-134, NBL-S, and SY5Y) and three MBL cell lines (D283, D341, and DAOY) using two treatment schedules: (a) treatment was started after the tumors were measurable (therapeutic study); or (b) 4-6 days after inoculation, before tumors were palpable (prevention study). CEP-751 was given at 21 mg/kg/dose administered twice a day, 7 days a week; the carrier vehicle was used as a control. In therapeutic studies, a significant difference in tumor size was seen between treated and control animals with IMR-5 on day 8 (P = 0.01), NBL-S on day 17 (P = 0.016), and CHP-134 on day 15 (P = 0.034). CEP-751 also had a significant growth-inhibitory effect on the MBL line D283 (on day 39, P = 0.031). Inhibition of tumor growth of D341 did not reach statistical significance, and no inhibition was apparent with DAOY. In prevention studies, CEP-751 showed a modest growth-inhibitory effect on IMR5 (P = 0.062) and CHP-134 (P = 0.049). Furthermore, inhibition of growth was greater in the SY5Y cell line transfected with TrkB compared with the untransfected parent cell line expressing no detectable TrkB. Terminal deoxynucleotidyl transferase-mediated nick end labeling studies showed CEP-751 induced apoptosis in the treated CHP-134 tumors, whereas no evidence of apoptosis was seen in the control tumors. Finally, there was no apparent toxicity identified in any of the treated mice. These results suggest that CEP-751 may be a useful therapeutic agent for NBL or MBL.",
"title": ""
},
{
"docid": "0c3387ec7ed161d931bc08151e722d10",
"text": "New updated! The latest book from a very famous author finally comes out. Book of the tower of hanoi myths and maths, as an amazing reference becomes what you need to get. What's for is this book? Are you still thinking for what the book is? Well, this is what you probably will get. You should have made proper choices for your better life. Book, as a source that may involve the facts, opinion, literature, religion, and many others are the great friends to join with.",
"title": ""
},
{
"docid": "52dbfe369d1875c402220692ef985bec",
"text": "Geographically annotated social media is extremely valuable for modern information retrieval. However, when researchers can only access publicly-visible data, one quickly finds that social media users rarely publish location information. In this work, we provide a method which can geolocate the overwhelming majority of active Twitter users, independent of their location sharing preferences, using only publicly-visible Twitter data. Our method infers an unknown user's location by examining their friend's locations. We frame the geotagging problem as an optimization over a social network with a total variation-based objective and provide a scalable and distributed algorithm for its solution. Furthermore, we show how a robust estimate of the geographic dispersion of each user's ego network can be used as a per-user accuracy measure which is effective at removing outlying errors. Leave-many-out evaluation shows that our method is able to infer location for 101, 846, 236 Twitter users at a median error of 6.38 km, allowing us to geotag over 80% of public tweets.",
"title": ""
},
{
"docid": "6f3a5219346e4c6c8dd094e391f93e2f",
"text": "We consider 27 population and community terms used frequently by parasitologists when describing the ecology of parasites. We provide suggestions for various terms in an attempt to foster consistent use and to make terms used in parasite ecology easier to interpret for those who study free-living organisms. We suggest strongly that authors, whether they agree or disagree with us, provide complete and unambiguous definitions for all parameters of their studies.",
"title": ""
},
{
"docid": "5bece01bed7c5a9a2433d95379882a37",
"text": "n The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the e ± m ± parameter. A phenomenological analysis of the permeability m and permittivity e in dispersive media serves to introduce the corresponding magneticand electric-dipole mechanisms of interaction length with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faradayrotation isolator circulator at 35 GHz (l ~ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (l = 1.55 mm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, e rather than m provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.",
"title": ""
},
{
"docid": "26ad79619be484ec239daf5b735ae5a4",
"text": "The placenta is a complex organ, playing multiple roles during fetal development. Very little is known about the association between placental morphological abnormalities and fetal physiology. In this work, we present an open sourced, computationally tractable deep learning pipeline to analyse placenta histology at the level of the cell. By utilising two deep convolutional neural network architectures and transfer learning, we can robustly localise and classify placental cells within five classes with an accuracy of 89%. Furthermore, we learn deep embeddings encoding phenotypic knowledge that is capable of both stratifying five distinct cell populations and learn intraclass phenotypic variance. We envisage that the automation of this pipeline to population scale studies of placenta histology has the potential to improve our understanding of basic cellular placental biology and its variations, particularly its role in predicting adverse birth outcomes.",
"title": ""
},
{
"docid": "ed7826f37cf45f56ba6e7abf98c509e7",
"text": "The progressive ability of a six-strains L. monocytogenes cocktail to form biofilm on stainless steel (SS), under fish-processing simulated conditions, was investigated, together with the biocide tolerance of the developed sessile communities. To do this, the pathogenic bacteria were left to form biofilms on SS coupons incubated at 15°C, for up to 240h, in periodically renewable model fish juice substrate, prepared by aquatic extraction of sea bream flesh, under both mono-species and mixed-culture conditions. In the latter case, L. monocytogenes cells were left to produce biofilms together with either a five-strains cocktail of four Pseudomonas species (fragi, savastanoi, putida and fluorescens), or whole fish indigenous microflora. The biofilm populations of L. monocytogenes, Pseudomonas spp., Enterobacteriaceae, H2S producing and aerobic plate count (APC) bacteria, both before and after disinfection, were enumerated by selective agar plating, following their removal from surfaces through bead vortexing. Scanning electron microscopy was also applied to monitor biofilm formation dynamics and anti-biofilm biocidal actions. Results revealed the clear dominance of Pseudomonas spp. bacteria in all the mixed-culture sessile communities throughout the whole incubation period, with the in parallel sole presence of L. monocytogenes cells to further increase (ca. 10-fold) their sessile growth. With respect to L. monocytogenes and under mono-species conditions, its maximum biofilm population (ca. 6logCFU/cm2) was reached at 192h of incubation, whereas when solely Pseudomonas spp. cells were also present, its biofilm formation was either slightly hindered or favored, depending on the incubation day. However, when all the fish indigenous microflora was present, biofilm formation by the pathogen was greatly hampered and never exceeded 3logCFU/cm2, while under the same conditions, APC biofilm counts had already surpassed 7logCFU/cm2 by the end of the first 96h of incubation. All here tested disinfection treatments, composed of two common food industry biocides gradually applied for 15 to 30min, were insufficient against L. monocytogenes mono-species biofilm communities, with the resistance of the latter to significantly increase from the 3rd to 7th day of incubation. However, all these treatments resulted in no detectable L. monocytogenes cells upon their application against the mixed-culture sessile communities also containing the fish indigenous microflora, something probably associated with the low attached population level of these pathogenic cells before disinfection (<102CFU/cm2) under such mixed-culture conditions. Taken together, all these results expand our knowledge on both the population dynamics and resistance of L. monocytogenes biofilm cells under conditions resembling those encountered within the seafood industry and should be considered upon designing and applying effective anti-biofilm strategies.",
"title": ""
},
{
"docid": "89e36aaa4c4d3ba5ec0326c6a568ebba",
"text": "We demonstrate a MEMS-based display system with a very wide projection angle of up to 120deg. The system utilizes a gimbal-less two-axis micromirror scanner for high-speed laser beam-steering in both axes. The optical scan angle of the micromirrors is up to 16deg on each axis. A custom-designed fisheye lens is utilized to magnify scan angles. The system can display a variety of vector graphics as well as multiframe animations at arbitrary refresh rates, up to the overall bandwidth limit of the MEMS device. It is also possible to operate the scanners in point-to-point scanning, resonant and/or rastering modes. The system is highly adaptable for projection on a variety of surfaces including projection on specially coated transparent surfaces (Fig. 3.) The size of the displayed area, refresh rate, display mode (vector graphic or image raster,) and many other parameters are all adjustable by the user. The small size of the MEMS devices and lens as well as the ultra-low power consumption of the MEMS devices, in the milliwatt range, makes the overall system highly portable and miniaturizable.",
"title": ""
},
{
"docid": "13451c2f433b9d32563012458bb4856c",
"text": "Purpose – The paper’s aim is to explore the factors that affect the online game addiction and the role that online game addiction plays in the relationship between online satisfaction and loyalty. Design/methodology/approach – A web survey of online game players was conducted, with 1,186 valid responses collected. Structure equation modeling – specifically partial least squares – was used to assess the relationships in the proposed research framework. Findings – The results indicate that perceived playfulness and descriptive norms influence online game addiction. Furthermore, descriptive norms indirectly affect online game addiction through perceived playfulness. Addiction also directly contributes to loyalty and attenuates the relationship between satisfaction and loyalty. This finding partially explains why people remain loyal to an online game despite being dissatisfied. Practical implications – Online gaming vendors should strive to create amusing game content and to maintain their online game communities in order to enhance players’ perceptions of playfulness and the effects of social influences. Also, because satisfaction is the most significant indicator of loyalty, vendors can enhance loyalty by providing better services, such as fraud prevention and the detection of cheating behaviors. Originality/value – The value of this study is that it reveals the moderating influences of addiction on the satisfaction-loyalty relationship and factors that contribute to the online game addiction. Moreover, while many past studies focused on addiction’s negative effects and on groups considered particularly vulnerable to Internet addiction, this paper extends previous work by investigating the relationship of addiction to other marketing variables and by using a more general population, mostly young adults, as research subjects.",
"title": ""
},
{
"docid": "4b57b59f475a643b281a1ee5e49c87bd",
"text": "In this paper we present a Model Predictive Control (MPC) approach for combined braking and steering systems in autonomous vehicles. We start from the result presented in (Borrelli et al. (2005)) and (Falcone et al. (2007a)), where a Model Predictive Controller (MPC) for autonomous steering systems has been presented. As in (Borrelli et al. (2005)) and (Falcone et al. (2007a)) we formulate an MPC control problem in order to stabilize a vehicle along a desired path. In the present paper, the control objective is to best follow a given path by controlling the front steering angle and the brakes at the four wheels independently, while fulfilling various physical and design constraints.",
"title": ""
}
] | scidocsrr |
22f2a21ab25e1d20636299564824a389 | What you see is what you set: sustained inattentional blindness and the capture of awareness. | [
{
"docid": "6362adacc0ee3e7f3cf418e8d8ff0cb9",
"text": "Advances in neuroscience implicate reentrant signaling as the predominant form of communication between brain areas. This principle was used in a series of masking experiments that defy explanation by feed-forward theories. The masking occurs when a brief display of target plus mask is continued with the mask alone. Two masking processes were found: an early process affected by physical factors such as adapting luminance and a later process affected by attentional factors such as set size. This later process is called masking by object substitution, because it occurs whenever there is a mismatch between the reentrant visual representation and the ongoing lower level activity. Iterative reentrant processing was formalized in a computational model that provides an excellent fit to the data. The model provides a more comprehensive account of all forms of visual masking than do the long-held feed-forward views based on inhibitory contour interactions.",
"title": ""
}
] | [
{
"docid": "90da5531538f373d7a591d80615d0fb4",
"text": "Re-authenticating users may be necessary for smartphone authentication schemes that leverage user behaviour, device context, or task sensitivity. However, due to the unpredictable nature of re-authentication, users may get annoyed when they have to use the default, non-transparent authentication prompt for re-authentication. We address this concern by proposing several re-authentication configurations with varying levels of screen transparency and an optional time delay before displaying the authentication prompt. We conduct user studies with 30 participants to evaluate the usability and security perceptions of these configurations. We find that participants respond positively to our proposed changes and utilize the time delay while they are anticipating to get an authentication prompt to complete their current task. Though our findings indicate no differences in terms of task performance against these configurations, we find that the participants’ preferences for the configurations are context-based. They generally prefer the reauthentication configuration with a non-transparent background for sensitive applications, such as banking and photo apps, while their preferences are inclined towards convenient, usable configurations for medium and low sensitive apps or while they are using their devices at home. We conclude with suggestions to improve the design of our proposed configurations as well as a discussion of guidelines for future implementations of re-authentication schemes.",
"title": ""
},
{
"docid": "66f17513486e4d25c9be36e71aecbbf8",
"text": "Fuzz testing is an active testing technique which consists in automatically generating and sending malicious inputs to an application in order to hopefully trigger a vulnerability. Fuzzing entails such questions as: Where to fuzz? Which parameter to fuzz? What kind of anomaly to introduce? Where to observe its effects? etc. Different test contexts depending on the degree of knowledge assumed about the target: recompiling the application (white-box), interacting only at the target interface (blackbox), dynamically instrumenting a binary (grey-box). In this paper, we focus on black-box test contest, and specifically address the questions: How to obtain a notion of coverage on unstructured inputs? How to capture human testers intuitions and use it for the fuzzing? How to drive the search in various directions? We specifically address the problems of detecting Memory Corruption in PDF interpreters and Cross Site Scripting (XSS) in web applications. We detail our approaches which use genetic algorithm, inference and anti-random testing. We empirically evaluate our implementations of XSS fuzzer KameleonFuzz and of PDF fuzzer ShiftMonkey.",
"title": ""
},
{
"docid": "227b995313994032ddeddc3cd4093790",
"text": "This paper describes and assesses underwater channel models for optical wireless communication. Models considered are: inherent optical properties; vector radiative transfer theory with the small-angle analytical solution and numerical solutions of the vector radiative transfer equation (Monte Carlo, discrete ordinates and invariant imbedding). Variable composition and refractive index, in addition to background light, are highlighted as aspects of the channel which advanced models must represent effectively. Models are assessed against these aspects in terms of their ability to predict transmitted power and spatial and temporal distributions of light a specified distance from a transmitter. Monte Carlo numerical methods are found to be the most versatile but are compromised by long computational time and greater errors than other methods.",
"title": ""
},
{
"docid": "88af2cee31243eef4e46e357b053b3ae",
"text": "Domestic induction heating (IH) is currently the technology of choice in modern domestic applications due to its advantages regarding fast heating time, efficiency, and improved control. New design trends pursue the implementation of new cost-effective topologies with higher efficiency levels. In order to achieve this aim, a direct ac-ac boost resonant converter is proposed in this paper. The main features of this proposal are the improved efficiency, reduced component count, and proper output power control. A detailed analytical model leading to closed-form expressions of the main magnitudes is presented, and a converter design procedure is proposed. In addition, an experimental prototype has been designed and built to prove the expected converter performance and the accurateness of the analytical model. The experimental results are in good agreement with the analytical ones and prove the feasibility of the proposed converter for the IH application.",
"title": ""
},
{
"docid": "e1f531740891d47387a2fc2ef4f71c46",
"text": "Multi-dimensional arrays, or tensors, are increasingly found in fields such as signal processing and recommender systems. Real-world tensors can be enormous in size and often very sparse. There is a need for efficient, high-performance tools capable of processing the massive sparse tensors of today and the future. This paper introduces SPLATT, a C library with shared-memory parallelism for three-mode tensors. SPLATT contains algorithmic improvements over competing state of the art tools for sparse tensor factorization. SPLATT has a fast, parallel method of multiplying a matricide tensor by a Khatri-Rao product, which is a key kernel in tensor factorization methods. SPLATT uses a novel data structure that exploits the sparsity patterns of tensors. This data structure has a small memory footprint similar to competing methods and allows for the computational improvements featured in our work. We also present a method of finding cache-friendly reordering and utilizing them with a novel form of cache tiling. To our knowledge, this is the first work to investigate reordering and cache tiling in this context. SPLATT averages almost 30x speedup compared to our baseline when using 16 threads and reaches over 80x speedup on NELL-2.",
"title": ""
},
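The kernel named in the preceding abstract — multiplying a matricized sparse tensor by a Khatri-Rao product (MTTKRP) — can be sketched compactly for a three-mode tensor stored in coordinate (COO) format. The NumPy code below is an illustrative, single-threaded reference rather than SPLATT's optimized data structure; the variable names and dimensions are assumptions.

```python
# Hedged sketch: mode-1 MTTKRP for a sparse 3-mode tensor in COO format.
# M[i, :] = sum over nonzeros (i, j, k): val * (B[j, :] * C[k, :])
import numpy as np

def mttkrp_mode1(indices, values, B, C, dim_i):
    """indices: (nnz, 3) int array of (i, j, k); values: (nnz,) floats.
    B: (J, R) and C: (K, R) factor matrices. Returns M of shape (dim_i, R)."""
    rank = B.shape[1]
    M = np.zeros((dim_i, rank))
    i, j, k = indices[:, 0], indices[:, 1], indices[:, 2]
    # Elementwise (Hadamard) product of the rows selected from B and C,
    # scaled by the nonzero values, then scattered into the output rows.
    contrib = values[:, None] * (B[j, :] * C[k, :])
    np.add.at(M, i, contrib)
    return M

# Tiny usage example with assumed dimensions I=4, J=3, K=2 and rank R=2.
rng = np.random.default_rng(0)
idx = np.array([[0, 1, 0], [2, 0, 1], [3, 2, 1]])
vals = np.array([1.0, 2.0, -0.5])
B, C = rng.standard_normal((3, 2)), rng.standard_normal((2, 2))
print(mttkrp_mode1(idx, vals, B, C, dim_i=4))
```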
{
"docid": "5015d853665e2642add922290b28b685",
"text": "What is CRM Customer relationship Management (CRM) appears to be a simple and straightforward concept, but there are many different definitions and implementations of CRM. At present, a number of different conceptual understandings are associated with the term \"Customer Relationship Management (CRM). There understanding range from IT driven programs designed to optimize customer contact to comprehensive approaches for the establishment and design of long-term relationships. The effort to establish a meaningful relationship with the customer is characteristic of this last understanding (Barnes 2003).",
"title": ""
},
{
"docid": "795f5c1085cbdfccb3457adf003faba1",
"text": "Abstract—In this paper, a novel dual-band RF-harvesting RF-DC converter with a frequency limited impedance matching network (M/N) is proposed. The proposed RF-DC converter consists of a dual-band impedance matching network, a rectifier circuit with villard structure, a wideband harmonic suppression low-pass filter (LPF), and a termination load. The proposed dual-band M/N can match two receiving band signals and suppress the out-of-band signals effectively, so the back-scattered nonlinear frequency components from the nonlinear rectifying diodes to the antenna can be blocked. The fabricated circuit provides the maximum RF-DC conversion efficiency of 73.76% and output voltage 7.09 V at 881MHz and 69.05% with 6.86V at 2.4GHz with an individual input signal power of 22 dBm. Moreover, the conversion efficiency of 77.13% and output voltage of 7.25V are obtained when two RF waves with input dual-band signal power of 22 dBm are fed simultaneously.",
"title": ""
},
{
"docid": "41d6fe50d6ef17936d457c801024274f",
"text": "In this article, we quantitatively analyze how the term “fake news” is being shaped in news media in recent years. We study the perception and the conceptualization of this term in the traditional media using eight years of data collected from news outlets based in 20 countries. Our results not only corroborate previous indications of a high increase in the usage of the expression “fake news”, but also show contextual changes around this expression after the United States presidential election of 2016. Among other results, we found changes in the related vocabulary, in the mentioned entities, in the surrounding topics and in the contextual polarity around the term “fake news”, suggesting that this expression underwent a change in perception and conceptualization after 2016. These outcomes expand the understandings on the usage of the term “fake news”, helping to comprehend and more accurately characterize this relevant social phenomenon linked to misinformation and manipulation.",
"title": ""
},
{
"docid": "3831c1b7b1679f6e158d6a17e47df122",
"text": "Social media platforms provide an inexpensive communication medium that allows anyone to quickly reach millions of users. Consequently, in these platforms anyone can publish content and anyone interested in the content can obtain it, representing a transformative revolution in our society. However, this same potential of social media systems brings together an important challenge---these systems provide space for discourses that are harmful to certain groups of people. This challenge manifests itself with a number of variations, including bullying, offensive content, and hate speech. Specifically, authorities of many countries today are rapidly recognizing hate speech as a serious problem, specially because it is hard to create barriers on the Internet to prevent the dissemination of hate across countries or minorities. In this paper, we provide the first of a kind systematic large scale measurement and analysis study of hate speech in online social media. We aim to understand the abundance of hate speech in online social media, the most common hate expressions, the effect of anonymity on hate speech and the most hated groups across regions. In order to achieve our objectives, we gather traces from two social media systems: Whisper and Twitter. We then develop and validate a methodology to identify hate speech on both of these systems. Our results identify hate speech forms and unveil a set of important patterns, providing not only a broader understanding of online hate speech, but also offering directions for detection and prevention approaches.",
"title": ""
},
{
"docid": "f652e66bbc0e6a1ddaec31f16286a332",
"text": "In Rspondin-based 3D cultures, Lgr5 stem cells from multiple organs form ever-expanding epithelial organoids that retain their tissue identity. We report the establishment of tumor organoid cultures from 20 consecutive colorectal carcinoma (CRC) patients. For most, organoids were also generated from adjacent normal tissue. Organoids closely recapitulate several properties of the original tumor. The spectrum of genetic changes within the \"living biobank\" agrees well with previous large-scale mutational analyses of CRC. Gene expression analysis indicates that the major CRC molecular subtypes are represented. Tumor organoids are amenable to high-throughput drug screens allowing detection of gene-drug associations. As an example, a single organoid culture was exquisitely sensitive to Wnt secretion (porcupine) inhibitors and carried a mutation in the negative Wnt feedback regulator RNF43, rather than in APC. Organoid technology may fill the gap between cancer genetics and patient trials, complement cell-line- and xenograft-based drug studies, and allow personalized therapy design. PAPERCLIP.",
"title": ""
},
{
"docid": "567445f68597ea8ff5e89719772819be",
"text": "We have developed an interactive pop-up book called Electronic Popables to explore paper-based computing. Our book integrates traditional pop-up mechanisms with thin, flexible, paper-based electronics and the result is an artifact that looks and functions much like an ordinary pop-up, but has added elements of dynamic interactivity. This paper introduces the book and, through it, a library of paper-based sensors and a suite of paper-electronics construction techniques. We also reflect on the unique and under-explored opportunities that arise from combining material experimentation, artistic design, and engineering.",
"title": ""
},
{
"docid": "6f6ae8ea9237cca449b8053ff5f368e7",
"text": "With the rapid development of Location-based Social Network (LBSN) services, a large number of Point-of-Interests (POIs) have been available, which consequently raises a great demand of building personalized POI recommender systems. A personalized POI recommender system can significantly help users to find their preferred POIs and assist POI owners to attract more customers. However, due to the complexity of users’ checkin decision making process that is influenced by many different factors such as POI distance and region’s prosperity, and the dynamics of user’s preference, POI recommender systems usually suffer from many challenges. Although different latent factor based methods (e.g., probabilistic matrix factorization) have been proposed, most of them do not successfully incorporate both geographical influence and temporal effect together into latent factor models. To this end, in this paper, we propose a new Spatial-Temporal Probabilistic Matrix Factorization (STPMF) model that models a user’s preference for POI as the combination of his geographical preference and other general interest in POI. Furthermore, in addition to static general interest of user, we capture the temporal dynamics of user’s interest as well by modeling checkin data in a unique way. To evaluate the proposed STPMF model, we conduct extensive experiments with many state-of-the-art baseline methods and evaluation metrics on two real-world data sets. The experimental results clearly demonstrate the effectiveness of our proposed STPMF model.",
"title": ""
},
{
"docid": "d1c88428d398caba2dc9a8f79f84a45f",
"text": "In this article, a novel compact reconfigurable antenna based on substrate integrated waveguide (SIW) technology is introduced. The geometry of the proposed antennas is symmetric with respect to the horizontal center line. The electrical shape of the antenna is composed of double H-plane SIW based horn antennas and radio frequency micro electro mechanical system (RF-MEMS) actuators. The RF-MEMS actuators are integrated in the planar structure of the antenna for reconfiguring the radiation pattern by adding nulls to the pattern. The proper activation/deactivation of the switches alters the modes distributed in the structure and changes the radiation pattern. When different combinations of switches are on or off, the radiation patterns have 2, 4, 6, 8, . . . nulls with nearly similar operating frequencies. The attained peak gain of the proposed antenna is higher than 5 dB at any point on the far field radiation pattern except at the null positions. The design procedure and closed form formulation are provided for analytical determination of the antenna parameters. Moreover, the designed antenna with an overall dimensions of only 63.6 × 50 mm2 is fabricated and excited through standard SMA connector and compared with the simulated results. The measured results show that the antenna can clearly alters its beams using the switching components. The proposed antenna retains advantages of low cost, low cross-polarized radiation, and easy integration of configuration.",
"title": ""
},
{
"docid": "901fbd46cdd4403c8398cb21e1c75ba1",
"text": "Hidden Markov Model (HMM) based applications are common in various areas, but the incorporation of HMM's for anomaly detection is still in its infancy. This paper aims at classifying the TCP network traffic as an attack or normal using HMM. The paper's main objective is to build an anomaly detection system, a predictive model capable of discriminating between normal and abnormal behavior of network traffic. In the training phase, special attention is given to the initialization and model selection issues, which makes the training phase particularly effective. For training HMM, 12.195% features out of the total features (5 features out of 41 features) present in the KDD Cup 1999 data set are used. Result of tests on the KDD Cup 1999 data set shows that the proposed system is able to classify network traffic in proportion to the number of features used for training HMM. We are extending our work on a larger data set for building an anomaly detection system.",
"title": ""
},
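The abstract above describes training an HMM on a small subset of KDD Cup 1999 features to separate normal traffic from attacks. As a hedged illustration only, the snippet below fits a Gaussian HMM (via the hmmlearn library) on normal traffic and flags windows whose per-sample log-likelihood falls below a threshold; the five-feature subset, state count, and threshold are placeholders, not the paper's exact configuration.

```python
# Hedged sketch: HMM-based anomaly scoring for network-traffic feature vectors.
# The 5-feature subset, threshold, and state count are illustrative assumptions.
import numpy as np
from hmmlearn import hmm

def fit_normal_model(normal_windows, n_states=4, seed=0):
    """normal_windows: list of (T_i, 5) arrays of per-connection feature vectors."""
    X = np.vstack(normal_windows)
    lengths = [w.shape[0] for w in normal_windows]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=seed)
    model.fit(X, lengths)          # learned only from normal traffic
    return model

def anomaly_score(model, window):
    """Lower (more negative) per-sample log-likelihood => more anomalous."""
    return model.score(window) / window.shape[0]

# Usage with synthetic stand-in data (5 assumed features per record).
rng = np.random.default_rng(0)
normal = [rng.normal(0.0, 1.0, size=(200, 5)) for _ in range(10)]
model = fit_normal_model(normal)
threshold = -8.0                    # assumed; tuned on validation data in practice
test = rng.normal(4.0, 1.0, size=(50, 5))   # shifted distribution ~ "attack"
print("attack" if anomaly_score(model, test) < threshold else "normal")
```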
{
"docid": "c82f4117c7c96d0650eff810f539c424",
"text": "The Stock Market is known for its volatile and unstable nature. A particular stock could be thriving in one period and declining in the next. Stock traders make money from buying equity when they are at their lowest and selling when they are at their highest. The logical question would be: \"What Causes Stock Prices To Change?\". At the most fundamental level, the answer to this would be the demand and supply. In reality, there are many theories as to why stock prices fluctuate, but there is no generic theory that explains all, simply because not all stocks are identical, and one theory that may apply for today, may not necessarily apply for tomorrow. This paper covers various approaches taken to attempt to predict the stock market without extensive prior knowledge or experience in the subject area, highlighting the advantages and limitations of the different techniques such as regression and classification. We formulate both short term and long term predictions. Through experimentation we achieve 81% accuracy for future trend direction using classification, 0.0117 RMSE for next day price and 0.0613 RMSE for next day change in price using regression techniques. The results obtained in this paper are achieved using only historic prices and technical indicators. Various methods, tools and evaluation techniques will be assessed throughout the course of this paper, the result of this contributes as to which techniques will be selected and enhanced in the final artefact of a stock prediction model. Further work will be conducted utilising deep learning techniques to approach the problem. This paper will serve as a preliminary guide to researchers wishing to expose themselves to this area.",
"title": ""
},
{
"docid": "5c3ae59522d549bed4c059a11b9724c6",
"text": "The chemokine receptor CCR7 drives leukocyte migration into and within lymph nodes (LNs). It is activated by chemokines CCL19 and CCL21, which are scavenged by the atypical chemokine receptor ACKR4. CCR7-dependent navigation is determined by the distribution of extracellular CCL19 and CCL21, which form concentration gradients at specific microanatomical locations. The mechanisms underpinning the establishment and regulation of these gradients are poorly understood. In this article, we have incorporated multiple biochemical processes describing the CCL19-CCL21-CCR7-ACKR4 network into our model of LN fluid flow to establish a computational model to investigate intranodal chemokine gradients. Importantly, the model recapitulates CCL21 gradients observed experimentally in B cell follicles and interfollicular regions, building confidence in its ability to accurately predict intranodal chemokine distribution. Parameter variation analysis indicates that the directionality of these gradients is robust, but their magnitude is sensitive to these key parameters: chemokine production, diffusivity, matrix binding site availability, and CCR7 abundance. The model indicates that lymph flow shapes intranodal CCL21 gradients, and that CCL19 is functionally important at the boundary between B cell follicles and the T cell area. It also predicts that ACKR4 in LNs prevents CCL19/CCL21 accumulation in efferent lymph, but does not control intranodal gradients. Instead, it attributes the disrupted interfollicular CCL21 gradients observed in Ackr4-deficient LNs to ACKR4 loss upstream. Our novel approach has therefore generated new testable hypotheses and alternative interpretations of experimental data. Moreover, it acts as a framework to investigate gradients at other locations, including those that cannot be visualized experimentally or involve other chemokines.",
"title": ""
},
{
"docid": "4e182b30dcbc156e2237e7d1d22d5c93",
"text": "A brain-computer interface (BCI) based on real-time functional magnetic resonance imaging (fMRI) is presented which allows human subjects to observe and control changes of their own blood oxygen level-dependent (BOLD) response. This BCI performs data preprocessing (including linear trend removal, 3D motion correction) and statistical analysis on-line. Local BOLD signals are continuously fed back to the subject in the magnetic resonance scanner with a delay of less than 2 s from image acquisition. The mean signal of a region of interest is plotted as a time-series superimposed on color-coded stripes which indicate the task, i.e., to increase or decrease the BOLD signal. We exemplify the presented BCI with one volunteer intending to control the signal of the rostral-ventral and dorsal part of the anterior cingulate cortex (ACC). The subject achieved significant changes of local BOLD responses as revealed by region of interest analysis and statistical parametric maps. The percent signal change increased across fMRI-feedback sessions suggesting a learning effect with training. This methodology of fMRI-feedback can assess voluntary control of circumscribed brain areas. As a further extension, behavioral effects of local self-regulation become accessible as a new field of research.",
"title": ""
},
{
"docid": "c8b9efec71a72a1d0f0fc7170efba61d",
"text": "Microorganisms present in our oral cavity which are called the human micro flora attach to our tooth surfaces and develop biofilms. In maximum organic habitats microorganisms generally prevail as multispecies biolfilms with the help of intercellular interactions and communications among them which are the main keys for their endurance. These biofilms are formed by initial attachment of bacteria to a surface, development of a multi –dimensional complex structure and detachment to progress other site. The best example of biofilm formation is dental plaque. Plaque formation can lead to dental caries and other associated diseases causing tooth loss. Many different bacteria are involved in these processes and one among them is Streptococcus mutans which is the principle and the most important agent. When these infections become severe, during the treatment the bacterium can enter the bloodstream from the oral cavity and cause endocarditis. The oral bacterium S. mutans is greatly skilled in its mechanical modes of carbohydrate absorption. It also synthesizes polysaccharides that are present in dental plaque causing caries. As dental caries is a preventable disease major distinct approaches for its prevention are: carbohydrate diet, sugar substitutes, mechanical cleaning techniques, use of fluorides, antimicrobial agents, fissure sealants, vaccines, probiotics, replacement theory and dairy products and at the same time for tooth remineralization fluorides and casein phosphopeptides are extensively employed. The aim of this review article is to put forth the general features of the bacterium S.mutans and how it is involved in certain diseases like: dental plaque, dental caries and endocarditis.",
"title": ""
},
{
"docid": "8077eb57c4232bc7e502f864f659ee7b",
"text": "Sex based differences in immune responses, affecting both the innate and adaptive immune responses, contribute to differences in the pathogenesis of infectious diseases in males and females, the response to viral vaccines and the prevalence of autoimmune diseases. Indeed, females have a lower burden of bacterial, viral and parasitic infections, most evident during their reproductive years. Conversely, females have a higher prevalence of a number of autoimmune diseases, including Sjogren's syndrome, systemic lupus erythematosus (SLE), scleroderma, rheumatoid arthritis (RA) and multiple sclerosis (MS). These observations suggest that gonadal hormones may have a role in this sex differential. The fundamental differences in the immune systems of males and females are attributed not only to differences in sex hormones, but are related to X chromosome gene contributions and the effects of environmental factors. A comprehensive understanding of the role that sex plays in the immune response is required for therapeutic intervention strategies against infections and the development of appropriate and effective therapies for autoimmune diseases for both males and females. This review will focus on the differences between male and female immune responses in terms of innate and adaptive immunity, and the effects of sex hormones in SLE, MS and RA.",
"title": ""
},
{
"docid": "6ed4d5ae29eef70f5aae76ebed76b8ca",
"text": "Web services that thrive on mining user interaction data such as search engines can currently track clicks and mouse cursor activity on their Web pages. Cursor interaction mining has been shown to assist in user modeling and search result relevance, and is becoming another source of rich information that data scientists and search engineers can tap into. Due to the growing popularity of touch-enabled mobile devices, search systems may turn to tracking touch interactions in place of cursor interactions. However, unlike cursor interactions, touch interactions are difficult to record reliably and their coordinates have not been shown to relate to regions of user interest. A better approach may be to track the viewport coordinates instead, which the user must manipulate to view the content on a mobile device. These recorded viewport coordinates can potentially reveal what regions of the page interest users and to what degree. Using this information, search system can then improve the design of their pages or use this information in click models or learning to rank systems. In this position paper, we discuss some of the challenges faced in mining interaction data for new modes of interaction, and future research directions in this field.",
"title": ""
}
] | scidocsrr |
d7f41168e016d53e714ede27eb6a19ba | Characteristics of knowledge, people engaged in knowledge transfer and knowledge stickiness: evidence from Chinese R&D team | [
{
"docid": "adcaa15fd8f1e7887a05d3cb1cd47183",
"text": "The dynamic capabilities framework analyzes the sources and methods of wealth creation and capture by private enterprise firms operating in environments of rapid technological change. The competitive advantage of firms is seen as resting on distinctive processes (ways of coordinating and combining), shaped by the firm's (specific) asset positions (such as the firm's portfolio of difftcult-to-trade knowledge assets and complementary assets), and the evolution path(s) it has aflopted or inherited. The importance of path dependencies is amplified where conditions of increasing retums exist. Whether and how a firm's competitive advantage is eroded depends on the stability of market demand, and the ease of replicability (expanding intemally) and imitatability (replication by competitors). If correct, the framework suggests that private wealth creation in regimes of rapid technological change depends in large measure on honing intemal technological, organizational, and managerial processes inside the firm. In short, identifying new opportunities and organizing effectively and efficiently to embrace them are generally more fundamental to private wealth creation than is strategizing, if by strategizing one means engaging in business conduct that keeps competitors off balance, raises rival's costs, and excludes new entrants. © 1997 by John Wiley & Sons, Ltd.",
"title": ""
},
{
"docid": "cbf878cd5fbf898bdf88a2fcf5024826",
"text": "Hypotheses involving mediation are common in the behavioral sciences. Mediation exists when a predictor affects a dependent variable indirectly through at least one intervening variable, or mediator. Methods to assess mediation involving multiple simultaneous mediators have received little attention in the methodological literature despite a clear need. We provide an overview of simple and multiple mediation and explore three approaches that can be used to investigate indirect processes, as well as methods for contrasting two or more mediators within a single model. We present an illustrative example, assessing and contrasting potential mediators of the relationship between the helpfulness of socialization agents and job satisfaction. We also provide SAS and SPSS macros, as well as Mplus and LISREL syntax, to facilitate the use of these methods in applications.",
"title": ""
}
] | [
{
"docid": "8788f14a2615f3065f4f0656a4a66592",
"text": "The ability to communicate in natural language has long been considered a defining characteristic of human intelligence. Furthermore, we hold our ability to express ideas in writing as a pinnacle of this uniquely human language facility—it defies formulaic or algorithmic specification. So it comes as no surprise that attempts to devise computer programs that evaluate writing are often met with resounding skepticism. Nevertheless, automated writing-evaluation systems might provide precisely the platforms we need to elucidate many of the features that characterize good and bad writing, and many of the linguistic, cognitive, and other skills that underlie the human capacity for both reading and writing. Using computers to increase our understanding of the textual features and cognitive skills involved in creating and comprehending written text will have clear benefits. It will help us develop more effective instructional materials for improving reading, writing, and other human communication abilities. It will also help us develop more effective technologies, such as search engines and questionanswering systems, for providing universal access to electronic information. A sketch of the brief history of automated writing-evaluation research and its future directions might lend some credence to this argument.",
"title": ""
},
{
"docid": "d6e565c0123049b9e11692b713674ccf",
"text": "Now days many research is going on for text summari zation. Because of increasing information in the internet, these kind of research are gaining more a nd more attention among the researchers. Extractive text summarization generates a brief summary by extracti ng proper set of sentences from a document or multi ple documents by deep learning. The whole concept is to reduce or minimize the important information prese nt in the documents. The procedure is manipulated by Rest rict d Boltzmann Machine (RBM) algorithm for better efficiency by removing redundant sentences. The res tricted Boltzmann machine is a graphical model for binary random variables. It consist of three layers input, hidden and output layer. The input data uni formly distributed in the hidden layer for operation. The experimentation is carried out and the summary is g enerated for three different document set from different kno wledge domain. The f-measure value is the identifie r to the performance of the proposed text summarization meth od. The top responses of the three different knowle dge domain in accordance with the f-measure are 0.85, 1 .42 and 1.97 respectively for the three document se t.",
"title": ""
},
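As a purely illustrative sketch of the extractive approach described in the record above (not the authors' pipeline), the snippet below represents sentences with binarized TF-IDF features, passes them through scikit-learn's BernoulliRBM to obtain hidden representations, and selects the top-scoring, non-redundant sentences. The feature choice, scoring rule, and redundancy threshold are assumptions.

```python
# Hedged sketch: extractive summarization with simple sentence features and an
# RBM-derived representation. Feature choices and selection rule are assumed.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import BernoulliRBM

def summarize(sentences, n_keep=2, seed=0):
    # Per-sentence features: TF-IDF terms (binarized) as visible units of the RBM.
    tfidf = TfidfVectorizer().fit_transform(sentences).toarray()
    visible = (tfidf > 0).astype(float)
    rbm = BernoulliRBM(n_components=8, learning_rate=0.05,
                       n_iter=50, random_state=seed)
    hidden = rbm.fit_transform(visible)          # latent sentence representation
    # Score each sentence by its total hidden activation and drop near-duplicates
    # of already-selected sentences (crude redundancy removal).
    order = np.argsort(-hidden.sum(axis=1))
    chosen = []
    for i in order:
        if all(np.dot(visible[i], visible[j]) / max(visible[i].sum(), 1) < 0.8
               for j in chosen):
            chosen.append(i)
        if len(chosen) == n_keep:
            break
    return [sentences[i] for i in sorted(chosen)]

doc = ["The RBM is a graphical model for binary variables.",
       "It consists of visible and hidden layers.",
       "Redundant sentences are removed to shorten the summary.",
       "A restricted Boltzmann machine is a graphical model for binary variables."]
print(summarize(doc))
```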
{
"docid": "71ac262257aacc838b2027fe061a2f56",
"text": "In Part I of this paper, a novel motion simulator platform is presented, the DLR Robot Motion Simulator with 7 degrees of freedom (DOF). In this Part II, a path-planning algorithm for mentioned platform will be discussed. By replacing the widely used hexapod kinematics by an antropomorhic, industrial robot arm mounted on a standard linear axis, a comparably larger workspace at lower hardware costs can be achieved. But the serial, redundant kinematics of the industrial robot system also introduces challenges for the path-planning as singularities in the workspace, varying movability of the system and the handling of robot system's kinematical redundancy. By solving an optimization problem with constraints in every sampling step, a feasible trajectory can be generated, fulfilling the task of motion cueing, while respecting the robot's dynamic constraints.",
"title": ""
},
{
"docid": "02d8c55750904b7f4794139bcfa51693",
"text": "BACKGROUND\nMore than one-third of deaths during the first five years of life are attributed to undernutrition, which are mostly preventable through economic development and public health measures. To alleviate this problem, it is necessary to determine the nature, magnitude and determinants of undernutrition. However, there is lack of evidence in agro-pastoralist communities like Bule Hora district. Therefore, this study assessed magnitude and factors associated with undernutrition in children who are 6-59 months of age in agro-pastoral community of Bule Hora District, South Ethiopia.\n\n\nMETHODS\nA community based cross-sectional study design was used to assess the magnitude and factors associated with undernutrition in children between 6-59 months. A structured questionnaire was used to collect data from 796 children paired with their mothers. Anthropometric measurements and determinant factors were collected. SPSS version 16.0 statistical software was used for analysis. Bivariate and multivariate logistic regression analyses were conducted to identify factors associated to nutritional status of the children Statistical association was declared significant if p-value was less than 0.05.\n\n\nRESULTS\nAmong study participants, 47.6%, 29.2% and 13.4% of them were stunted, underweight, and wasted respectively. Presence of diarrhea in the past two weeks, male sex, uneducated fathers and > 4 children ever born to a mother were significantly associated with being underweight. Presence of diarrhea in the past two weeks, male sex and pre-lacteal feeding were significantly associated with stunting. Similarly, presence of diarrhea in the past two weeks, age at complementary feed was started and not using family planning methods were associated to wasting.\n\n\nCONCLUSION\nUndernutrition is very common in under-five children of Bule Hora district. Factors associated to nutritional status of children in agro-pastoralist are similar to the agrarian community. Diarrheal morbidity was associated with all forms of Protein energy malnutrition. Family planning utilization decreases the risk of stunting and underweight. Feeding practices (pre-lacteal feeding and complementary feeding practice) were also related to undernutrition. Thus, nutritional intervention program in Bule Hora district in Ethiopia should focus on these factors.",
"title": ""
},
{
"docid": "06e708b307a0518ec681e8a6d272d558",
"text": "Augmented reality (AR) in surgery consists in the fusion of synthetic computer-generated images (3D virtual model) obtained from medical imaging preoperative workup and real-time patient images in order to visualize unapparent anatomical details. The 3D model could be used for a preoperative planning of the procedure. The potential of AR navigation as a tool to improve safety of the surgical dissection is outlined for robotic hepatectomy. Three patients underwent a fully robotic and AR-assisted hepatic segmentectomy. The 3D virtual anatomical model was obtained using a thoracoabdominal CT scan with a customary software (VR-RENDER®, IRCAD). The model was then processed using a VR-RENDER® plug-in application, the Virtual Surgical Planning (VSP®, IRCAD), to delineate surgical resection planes including the elective ligature of vascular structures. Deformations associated with pneumoperitoneum were also simulated. The virtual model was superimposed to the operative field. A computer scientist manually registered virtual and real images using a video mixer (MX 70; Panasonic, Secaucus, NJ) in real time. Two totally robotic AR segmentectomy V and one segmentectomy VI were performed. AR allowed for the precise and safe recognition of all major vascular structures during the procedure. Total time required to obtain AR was 8 min (range 6–10 min). Each registration (alignment of the vascular anatomy) required a few seconds. Hepatic pedicle clamping was never performed. At the end of the procedure, the remnant liver was correctly vascularized. Resection margins were negative in all cases. The postoperative period was uneventful without perioperative transfusion. AR is a valuable navigation tool which may enhance the ability to achieve safe surgical resection during robotic hepatectomy.",
"title": ""
},
{
"docid": "4a6ee237d0ebebce741e40279009a333",
"text": "This paper describes the latest version of the ABC metadata model. This model has been developed within the Harmony international digital library project to provide a common conceptual model to facilitate interoperability between metadata vocabularies from different domains. This updated ABC model is the result of collaboration with the CIMI consortium whereby earlier versions of the ABC model were applied to metadata descriptions of complex objects provided by CIMI museums and libraries. The result is a metadata model with more logically grounded time and entity semantics. Based on this model we have been able to build a metadata repository of RDF descriptions and a search interface which is capable of more sophisticated queries than less-expressive, object-centric metadata models will allow.",
"title": ""
},
{
"docid": "75aa71e270d85df73fa97336d2a6b713",
"text": "Designing powerful tools that support cooking activities has rapidly gained popularity due to the massive amounts of available data, as well as recent advances in machine learning that are capable of analyzing them. In this paper, we propose a cross-modal retrieval model aligning visual and textual data (like pictures of dishes and their recipes) in a shared representation space. We describe an effective learning scheme, capable of tackling large-scale problems, and validate it on the Recipe1M dataset containing nearly 1 million picture-recipe pairs. We show the effectiveness of our approach regarding previous state-of-the-art models and present qualitative results over computational cooking use cases.",
"title": ""
},
{
"docid": "d0b29493c64e787ed88ad8166d691c3d",
"text": "Mobile apps have to satisfy various privacy requirements. Notably, app publishers are often obligated to provide a privacy policy and notify users of their apps’ privacy practices. But how can a user tell whether an app behaves as its policy promises? In this study we introduce a scalable system to help analyze and predict Android apps’ compliance with privacy requirements. We discuss how we customized our system in a collaboration with the California Office of the Attorney General. Beyond its use by regulators and activists our system is also meant to assist app publishers and app store owners in their internal assessments of privacy requirement compliance. Our analysis of 17,991 free Android apps shows the viability of combining machine learning-based privacy policy analysis with static code analysis of apps. Results suggest that 71% of apps tha lack a privacy policy should have one. Also, for 9,050 apps that have a policy, we find many instances of potential inconsistencies between what the app policy seems to state and what the code of the app appears to do. In particular, as many as 41% of these apps could be collecting location information and 17% could be sharing such with third parties without disclosing so in their policies. Overall, each app exhibits a mean of 1.83 potential privacy requirement inconsistencies.",
"title": ""
},
{
"docid": "8c864e944afa69696cfb4f87c4344a07",
"text": "In this study, we examined physician acceptance behavior of the electronic medical record (EMR) exchange. Although several prior studies have focused on factors that affect the adoption or use of EMRs, empirical study that captures the success factors that encourage physicians to adopt the EMR exchange is limited. Therefore, drawing on institutional trust integrated with the decomposed theory of planned behavior (TPB) model, we propose a theoretical model to examine physician intentions of using the EMR exchange. A field survey was conducted in Taiwan to collect data from physicians. Structural equation modeling (SEM) using the partial least squares (PLS) method was employed to test the research model. The results showed that the usage intention of physicians is significantly influenced by 4 factors (i.e., attitude, subjective norm, perceived behavior control, and institutional trust). These 4 factors were assessed by their perceived usefulness and compatibility, facilitating conditions and self-efficacy, situational normality, and structural assurance, respectively. The results also indicated that institutional trust integrated with the decomposed TPB model provides an improved method for predicting physician intentions to use the EMR exchange. Finally, the implications of this study are discussed.",
"title": ""
},
{
"docid": "d5955aa10ee95527bd7a3d13479d4018",
"text": "As urbanisation increases globally and the natural environment becomes increasingly fragmented, the importance of urban green spaces for biodiversity conservation grows. In many countries, private gardens are a major component of urban green space and can provide considerable biodiversity benefits. Gardens and adjacent habitats form interconnected networks and a landscape ecology framework is necessary to understand the relationship between the spatial configuration of garden patches and their constituent biodiversity. A scale-dependent tension is apparent in garden management, whereby the individual garden is much smaller than the unit of management needed to retain viable populations. To overcome this, here we suggest mechanisms for encouraging 'wildlife-friendly' management of collections of gardens across scales from the neighbourhood to the city.",
"title": ""
},
{
"docid": "6478097f207482543c0db12b518be82b",
"text": "What is a good test case? One that reveals potential defects with good cost-effectiveness. We provide a generic model of faults and failures, formalize it, and present its various methodological usages for test case generation.",
"title": ""
},
{
"docid": "0e803e853422328aeef59e426410df48",
"text": "We present WatchWriter, a finger operated keyboard that supports both touch and gesture typing with statistical decoding on a smartwatch. Just like on modern smartphones, users type one letter per tap or one word per gesture stroke on WatchWriter but in a much smaller spatial scale. WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size. In a user performance experiment entirely run on a smartwatch, 36 participants reached a speed of 22-24 WPM with near zero error rate.",
"title": ""
},
{
"docid": "1e972c454587c5a3b24386f2b6ffc8fa",
"text": "Three classic cases and one exceptional case are reported. The unique case of decapitation took place in a traffic accident, while the others were seen after homicide, vehicle-assisted suicide, and after long-jump hanging. Thorough scene examinations were performed, and photographs from the scene were available in all cases. Through the autopsy of each case, the mechanism for the decapitation in each case was revealed. The severance lines were through the neck and the cervical vertebral column, except for in the motor vehicle accident case, where the base of skull was fractured. This case was also unusual as the mechanism was blunt force. In the homicide case, the mechanism was the use of a knife combined with a saw, while in the two last cases, a ligature made the cut through the neck. The different mechanisms in these decapitations are suggested.",
"title": ""
},
{
"docid": "d4ac52a52e780184359289ecb41e321e",
"text": "Interleaving is an increasingly popular technique for evaluating information retrieval systems based on implicit user feedback. While a number of isolated studies have analyzed how this technique agrees with conventional offline evaluation approaches and other online techniques, a complete picture of its efficiency and effectiveness is still lacking. In this paper we extend and combine the body of empirical evidence regarding interleaving, and provide a comprehensive analysis of interleaving using data from two major commercial search engines and a retrieval system for scientific literature. In particular, we analyze the agreement of interleaving with manual relevance judgments and observational implicit feedback measures, estimate the statistical efficiency of interleaving, and explore the relative performance of different interleaving variants. We also show how to learn improved credit-assignment functions for clicks that further increase the sensitivity of interleaving.",
"title": ""
},
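Interleaved evaluation, as summarized in the record above, requires an algorithm for mixing two rankings and a credit rule for clicks. As an illustrative sketch of one standard variant, team-draft interleaving (which may differ from the specific interleaving methods compared in the paper), the snippet below builds an interleaved result list and credits clicks to the two rankers.

```python
# Hedged sketch: team-draft interleaving and click credit assignment.
# This illustrates one common variant, not necessarily the paper's exact setup.
import random

def team_draft_interleave(ranking_a, ranking_b, rng=random.Random(0)):
    interleaved, team = [], []          # team[i] in {"A", "B"} owns interleaved[i]
    remaining = lambda r: [d for d in r if d not in interleaved]
    while remaining(ranking_a) or remaining(ranking_b):
        count_a, count_b = team.count("A"), team.count("B")
        # The side with fewer picks so far (ties broken by coin flip) picks next,
        # falling back to the other side if it has no unplaced documents left.
        prefer_a = count_a < count_b or (count_a == count_b and rng.random() < 0.5)
        sides = (("A", ranking_a), ("B", ranking_b)) if prefer_a else \
                (("B", ranking_b), ("A", ranking_a))
        for side, ranking in sides:
            cand = remaining(ranking)
            if cand:
                interleaved.append(cand[0])   # highest-ranked unplaced document
                team.append(side)
                break
    return interleaved, team

def credit(team, clicked_positions):
    wins_a = sum(1 for p in clicked_positions if team[p] == "A")
    return wins_a, len(clicked_positions) - wins_a

docs, team = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d4", "d1"])
print(docs, credit(team, clicked_positions=[0, 2]))
```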
{
"docid": "1547a67fd88ac720f4521a206a26dff3",
"text": "A core business in the fashion industry is the understanding and prediction of customer needs and trends. Search engines and social networks are at the same time a fundamental bridge and a costly middleman between the customer’s purchase intention and the retailer. To better exploit Europe’s distinctive characteristics e.g., multiple languages, fashion and cultural differences, it is pivotal to reduce retailers’ dependence to search engines. This goal can be achieved by harnessing various data channels (manufacturers and distribution networks, online shops, large retailers, social media, market observers, call centers, press/magazines etc.) that retailers can leverage in order to gain more insight about potential buyers, and on the industry trends as a whole. This can enable the creation of novel on-line shopping experiences, the detection of influencers, and the prediction of upcoming fashion trends. In this paper, we provide an overview of the main research challenges and an analysis of the most promising technological solutions that we are investigating in the FashionBrain project.",
"title": ""
},
{
"docid": "5dce9f3c1ec0cb65ec98c9c5ecdaf549",
"text": "As organizational environments become more global, dynamic, and competitive, contradictory demands intensify. To understand and explain such tensions, academics and practitioners are increasingly adopting a paradox lens. We review the paradox literature, categorizing types and highlighting fundamental debates. We then present a dynamic equilibrium model of organizing, which depicts how cyclical responses to paradoxical tensions enable sustainability—peak performance in the present that enables success in the future. This review and the model provide the foundation of a theory of paradox.",
"title": ""
},
{
"docid": "909d9d1b9054586afc4b303e94acae73",
"text": "Humans learn to solve tasks of increasing complexity by building on top of previously acquired knowledge. Typically, there exists a natural progression in the tasks that we learn – most do not require completely independent solutions, but can be broken down into simpler subtasks. We propose to represent a solver for each task as a neural module that calls existing modules (solvers for simpler tasks) in a program-like manner. Lower modules are a black box to the calling module, and communicate only via a query and an output. Thus, a module for a new task learns to query existing modules and composes their outputs in order to produce its own output. Each module also contains a residual component that learns to solve aspects of the new task that lower modules cannot solve. Our model effectively combines previous skill-sets, does not suffer from forgetting, and is fully differentiable. We test our model in learning a set of visual reasoning tasks, and demonstrate state-ofthe-art performance in Visual Question Answering, the highest-level task in our task set. By evaluating the reasoning process using non-expert human judges, we show that our model is more interpretable than an attention-based baseline.",
"title": ""
},
{
"docid": "1d1fdf869a30a8ba9437e3b18bc8c872",
"text": "Automated nuclear detection is a critical step for a number of computer assisted pathology related image analysis algorithms such as for automated grading of breast cancer tissue specimens. The Nottingham Histologic Score system is highly correlated with the shape and appearance of breast cancer nuclei in histopathological images. However, automated nucleus detection is complicated by 1) the large number of nuclei and the size of high resolution digitized pathology images, and 2) the variability in size, shape, appearance, and texture of the individual nuclei. Recently there has been interest in the application of “Deep Learning” strategies for classification and analysis of big image data. Histopathology, given its size and complexity, represents an excellent use case for application of deep learning strategies. In this paper, a Stacked Sparse Autoencoder (SSAE), an instance of a deep learning strategy, is presented for efficient nuclei detection on high-resolution histopathological images of breast cancer. The SSAE learns high-level features from just pixel intensities alone in order to identify distinguishing features of nuclei. A sliding window operation is applied to each image in order to represent image patches via high-level features obtained via the auto-encoder, which are then subsequently fed to a classifier which categorizes each image patch as nuclear or non-nuclear. Across a cohort of 500 histopathological images (2200 × 2200) and approximately 3500 manually segmented individual nuclei serving as the groundtruth, SSAE was shown to have an improved F-measure 84.49% and an average area under Precision-Recall curve (AveP) 78.83%. The SSAE approach also out-performed nine other state of the art nuclear detection strategies.",
"title": ""
},
{
"docid": "951ad18af2b3c9b0ca06147b0c804f65",
"text": "Food photos are widely used in food logs for diet monitoring and in social networks to share social and gastronomic experiences. A large number of these images are taken in restaurants. Dish recognition in general is very challenging, due to different cuisines, cooking styles, and the intrinsic difficulty of modeling food from its visual appearance. However, contextual knowledge can be crucial to improve recognition in such scenario. In particular, geocontext has been widely exploited for outdoor landmark recognition. Similarly, we exploit knowledge about menus and location of restaurants and test images. We first adapt a framework based on discarding unlikely categories located far from the test image. Then, we reformulate the problem using a probabilistic model connecting dishes, restaurants, and locations. We apply that model in three different tasks: dish recognition, restaurant recognition, and location refinement. Experiments on six datasets show that by integrating multiple evidences (visual, location, and external knowledge) our system can boost the performance in all tasks.",
"title": ""
},
{
"docid": "f0ea768c020a99ac3ed144b76893dbd9",
"text": "This paper focuses on tracking dynamic targets using a low cost, commercially available drone. The approach presented utilizes a computationally simple potential field controller expanded to operate not only on relative positions, but also relative velocities. A brief background on potential field methods is given, and the design and implementation of the proposed controller is presented. Experimental results using an external motion capture system for localization demonstrate the ability of the drone to track a dynamic target in real time as well as avoid obstacles in its way.",
"title": ""
}
] | scidocsrr |
57a18a8a899b95092f68ebc9351a9765 | Bandwidth Enhancement of Small-Size Planar Tablet Computer Antenna Using a Parallel-Resonant Spiral Slit | [
{
"docid": "75d486862b8d9eca63502ac6cbb936dc",
"text": "A coupled-fed shorted monopole with its feed structure as an effective radiator for eight-band LTE/WWAN (LTE700/GSM850/900/1800/ 1900/UMTS/LTE2300/2500) operation in the laptop computer is presented. The radiating feed structure capacitively excites the shorted monopole. The feed structure mainly comprises a long feeding strip and a loop feed therein. The loop feed is formed at the front section of the feeding strip and connected to a 50-Ω mini-cable to feed the antenna. Both the feeding strip and loop feed contribute wideband resonant modes to combine with those generated by the shorted monopole for the desired eight-band operation. The antenna size above the top shielding metal wall of the laptop display is 4 × 10 × 80 mm3 and is suitable to be embedded inside the casing of the laptop computer. The proposed antenna is fabricated and tested, and good radiation performances of the fabricated antenna are obtained.",
"title": ""
},
{
"docid": "bc69fe2a1791b8d7e0e262f8110df9d4",
"text": "A small-size coupled-fed loop antenna suitable to be printed on the system circuit board of the mobile phone for penta-band WWAN operation (824-960/1710-2170 MHz) is presented. The loop antenna requires only a small footprint of 15 x 25 mm2 on the circuit board, and it can also be in close proximity to the surrounding ground plane printed on the circuit board. That is, very small or no isolation distance is required between the antenna's radiating portion and the nearby ground plane. This can lead to compact integration of the internal on-board printed antenna on the circuit board of the mobile phone, especially the slim mobile phone. The loop antenna also shows a simple structure; it is formed by a loop strip of about 87 mm with its end terminal short-circuited to the ground plane and its front section capacitively coupled to a feeding strip which is also an efficient radiator to contribute a resonant mode for the antenna's upper band to cover the GSM1800/1900/UMTS bands (1710-2170 MHz). Through the coupling excitation, the antenna can also generate a 0.25-wavelength loop resonant mode to form the antenna's lower band to cover the GSM850/900 bands (824-960 MHz). Details of the proposed antenna are presented. The SAR results for the antenna with the presence of the head and hand phantoms are also studied.",
"title": ""
},
{
"docid": "7cc3d7722f978545a6735ae4982ffc62",
"text": "A multiband printed monopole slot antenna promising for operating as an internal antenna in the thin-profile laptop computer for wireless wide area network (WWAN) operation is presented. The proposed antenna is formed by three monopole slots operated at their quarter-wavelength modes and arranged in a compact planar configuration. A step-shaped microstrip feedline is applied to excite the three monopole slots at their respective optimal feeding position, and two wide operating bands at about 900 and 1900 MHz are obtained for the antenna to cover all the five operating bands of GSM850/900/1800/1900/UMTS for WWAN operation. The antenna is easily printed on a small-size FR4 substrate and shows a length of 60 mm only and a height of 12 mm when mounted at the top edge of the system ground plane or supporting metal frame of the laptop display. Details of the proposed antenna are presented and studied.",
"title": ""
}
] | [
{
"docid": "ef44e3456962ed4a857614b0782ed4d2",
"text": "A sketching system for spline-based free-form surfaces on the Responsive Workbench is presented. We propose 3D tools for curve drawing and deformation techniques for curves and surfaces, adapted to the needs of designers. The user directly draws curves in the virtual environment, using a tracked stylus as an input device. A curve network can be formed, describing the skeleton of a virtual model. The non-dominant hand positions and orients the model while the dominant hand uses the editing tools. The curves and the resulting skinning surfaces can interactively be deformed.",
"title": ""
},
{
"docid": "fc62b094df3093528c6846e405f55e39",
"text": "Correctly classifying a skin lesion is one of the first steps towards treatment. We propose a novel convolutional neural network (CNN) architecture for skin lesion classification designed to learn based on information from multiple image resolutions while leveraging pretrained CNNs. While traditional CNNs are generally trained on a single resolution image, our CNN is composed of multiple tracts, where each tract analyzes the image at a different resolution simultaneously and learns interactions across multiple image resolutions using the same field-of-view. We convert a CNN, pretrained on a single resolution, to work for multi-resolution input. The entire network is fine-tuned in a fully learned end-to-end optimization with auxiliary loss functions. We show how our proposed novel multi-tract network yields higher classification accuracy, outperforming state-of-the-art multi-scale approaches when compared over a public skin lesion dataset.",
"title": ""
},
{
"docid": "6080612b8858d633c3f63a3d019aef58",
"text": "Color images provide large information for human visual perception compared to grayscale images. Color image enhancement methods enhance the visual data to increase the clarity of the color image. It increases human perception of information. Different color image contrast enhancement methods are used to increase the contrast of the color images. The Retinex algorithms enhance the color images similar to the scene perceived by the human eye. Multiscale retinex with color restoration (MSRCR) is a type of retinex algorithm. The MSRCR algorithm results in graying out and halo artifacts at the edges of the images. So here the focus is on improving the MSRCR algorithm by combining it with contrast limited adaptive histogram equalization (CLAHE) using image.",
"title": ""
},
{
"docid": "3e9aa3bcc728f8d735f6b02e0d7f0502",
"text": "Linda Marion is a doctoral student at Drexel University. E-mail: [email protected]. Abstract This exploratory study examined 250 online academic librarian employment ads posted during 2000 to determine current requirements for technologically oriented jobs. A content analysis software program was used to categorize the specific skills and characteristics listed in the ads. The results were analyzed using multivariate analysis (cluster analysis and multidimensional scaling). The results, displayed in a three-dimensional concept map, indicate 19 categories comprised of both computer related skills and behavioral characteristics that can be interpreted along three continua: (1) technical skills to people skills; (2) long-established technologies and behaviors to emerging trends; (3) technical service competencies to public service competencies. There was no identifiable “digital librarian” category.",
"title": ""
},
{
"docid": "eb1045f1e85d7197a2952c6580604f75",
"text": "There's a large push toward offering solutions and services in the cloud due to its numerous advantages. However, there are no clear guidelines for designing and deploying cloud solutions that can seamlessly operate to handle Web-scale traffic. The authors review industry best practices and identify principles for operating Web-scale cloud solutions by deriving design patterns that enable each principle in cloud solutions. In addition, using a seemingly straightforward cloud service as an example, they explain the application of the identified patterns.",
"title": ""
},
{
"docid": "10b4d77741d40a410b30b0ba01fae67f",
"text": "While glucosamine supplementation is very common and a multitude of commercial products are available, there is currently limited information available to assist the equine practitioner in deciding when and how to use these products. Low bioavailability of orally administered glucosamine, poor product quality, low recommended doses, and a lack of scientific evidence showing efficacy of popular oral joint supplements are major concerns. Authors’ addresses: Rolling Thunder Veterinary Services, 225 Roxbury Road, Garden City, NY 11530 (Oke); Ontario Veterinary College, Department of Clinical Studies, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Weese); e-mail: [email protected] (Oke). © 2006 AAEP.",
"title": ""
},
{
"docid": "bd5b8680feac7b5ff806a6a40b9f73ae",
"text": "Human variation in content selection in summarization has given rise to some fundamental research questions: How can one incorporate the observed variation in suitable evaluation measures? How can such measures reflect the fact that summaries conveying different content can be equally good and informative? In this article, we address these very questions by proposing a method for analysis of multiple human abstracts into semantic content units. Such analysis allows us not only to quantify human variation in content selection, but also to assign empirical importance weight to different content units. It serves as the basis for an evaluation method, the Pyramid Method, that incorporates the observed variation and is predictive of different equally informative summaries. We discuss the reliability of content unit annotation, the properties of Pyramid scores, and their correlation with other evaluation methods.",
"title": ""
},
{
"docid": "def6cd29f4679acdc7d944d9a7e734e4",
"text": "Question Answering (QA) is one of the most challenging and crucial tasks in Natural Language Processing (NLP) that has a wide range of applications in various domains, such as information retrieval and entity extraction. Traditional methods involve linguistically based NLP techniques, and recent researchers apply Deep Learning on this task and have achieved promising result. In this paper, we combined Dynamic Coattention Network (DCN) [1] and bilateral multiperspective matching (BiMPM) model [2], achieved an F1 score of 63.8% and exact match (EM) of 52.3% on test set.",
"title": ""
},
{
"docid": "e4f4fe27fff75bd7ed079f3094deaedb",
"text": "This paper considers the scenario that multiple data owners wish to apply a machine learning method over the combined dataset of all owners to obtain the best possible learning output but do not want to share the local datasets owing to privacy concerns. We design systems for the scenario that the stochastic gradient descent (SGD) algorithm is used as the machine learning method because SGD (or its variants) is at the heart of recent deep learning techniques over neural networks. Our systems differ from existing systems in the following features: (1) any activation function can be used, meaning that no privacy-preserving-friendly approximation is required; (2) gradients computed by SGD are not shared but the weight parameters are shared instead; and (3) robustness against colluding parties even in the extreme case that only one honest party exists. We prove that our systems, while privacy-preserving, achieve the same learning accuracy as SGD and hence retain the merit of deep learning with respect to accuracy. Finally, we conduct several experiments using benchmark datasets, and show that our systems outperform previous system in terms of learning accuracies. keywords: privacy preservation, stochastic gradient descent, distributed trainers, neural networks.",
"title": ""
},
{
"docid": "98ce0c1bc955b7aa64e1820b56a1be6c",
"text": "Lipid nanoparticles (LNPs) have attracted special interest during last few decades. Solid lipid nanoparticles (SLNs) and nanostructured lipid carriers (NLCs) are two major types of Lipid-based nanoparticles. SLNs were developed to overcome the limitations of other colloidal carriers, such as emulsions, liposomes and polymeric nanoparticles because they have advantages like good release profile and targeted drug delivery with excellent physical stability. In the next generation of the lipid nanoparticle, NLCs are modified SLNs which improve the stability and capacity loading. Three structural models of NLCs have been proposed. These LNPs have potential applications in drug delivery field, research, cosmetics, clinical medicine, etc. This article focuses on features, structure and innovation of LNPs and presents a wide discussion about preparation methods, advantages, disadvantages and applications of LNPs by focusing on SLNs and NLCs.",
"title": ""
},
{
"docid": "1d1e89d6f1db290f01d296394d03a71b",
"text": "Ontology mapping is seen as a solution provider in today’s landscape of ontology research. As the number of ontologies that are made publicly available and accessible on the Web increases steadily, so does the need for applications to use them. A single ontology is no longer enough to support the tasks envisaged by a distributed environment like the Semantic Web. Multiple ontologies need to be accessed from several applications. Mapping could provide a common layer from which several ontologies could be accessed and hence could exchange information in semantically sound manners. Developing such mappings has been the focus of a variety of works originating from diverse communities over a number of years. In this article we comprehensively review and present these works. We also provide insights on the pragmatics of ontology mapping and elaborate on a theoretical approach for defining ontology mapping.",
"title": ""
},
{
"docid": "722a2b6f773473d032d202ce7aded43c",
"text": "Detection of skin cancer in the earlier stage is very Important and critical. In recent days, skin cancer is seen as one of the most Hazardous form of the Cancers found in Humans. Skin cancer is found in various types such as Melanoma, Basal and Squamous cell Carcinoma among which Melanoma is the most unpredictable. The detection of Melanoma cancer in early stage can be helpful to cure it. Computer vision can play important role in Medical Image Diagnosis and it has been proved by many existing systems. In this paper, we present a computer aided method for the detection of Melanoma Skin Cancer using Image processing tools. The input to the system is the skin lesion image and then by applying novel image processing techniques, it analyses it to conclude about the presence of skin cancer. The Lesion Image analysis tools checks for the various Melanoma parameters Like Asymmetry, Border, Colour, Diameter, (ABCD) etc. by texture, size and shape analysis for image segmentation and feature stages. The extracted feature parameters are used to classify the image as Normal skin and Melanoma cancer lesion.",
"title": ""
},
{
"docid": "57d40d18977bc332ba16fce1c3cf5a66",
"text": "Deep neural networks are now rivaling human accuracy in several pattern recognition problems. Compared to traditional classifiers, where features are handcrafted, neural networks learn increasingly complex features directly from the data. Instead of handcrafting the features, it is now the network architecture that is manually engineered. The network architecture parameters such as the number of layers or the number of filters per layer and their interconnections are essential for good performance. Even though basic design guidelines exist, designing a neural network is an iterative trial-and-error process that takes days or even weeks to perform due to the large datasets used for training. In this paper, we present DeepEyes, a Progressive Visual Analytics system that supports the design of neural networks during training. We present novel visualizations, supporting the identification of layers that learned a stable set of patterns and, therefore, are of interest for a detailed analysis. The system facilitates the identification of problems, such as superfluous filters or layers, and information that is not being captured by the network. We demonstrate the effectiveness of our system through multiple use cases, showing how a trained network can be compressed, reshaped and adapted to different problems.",
"title": ""
},
{
"docid": "4519e039416fe4548e08a15b30b8a14f",
"text": "The R-tree, one of the most popular access methods for rectangles, is based on the heuristic optimization of the area of the enclosing rectangle in each inner node. By running numerous experiments in a standardized testbed under highly varying data, queries and operations, we were able to design the R*-tree which incorporates a combined optimization of area, margin and overlap of each enclosing rectangle in the directory. Using our standardized testbed in an exhaustive performance comparison, it turned out that the R*-tree clearly outperforms the existing R-tree variants. Guttman's linear and quadratic R-tree and Greene's variant of the R-tree. This superiority of the R*-tree holds for different types of queries and operations, such as map overlay, for both rectangles and multidimensional points in all experiments. From a practical point of view the R*-tree is very attractive because of the following two reasons 1 it efficiently supports point and spatial data at the same time and 2 its implementation cost is only slightly higher than that of other R-trees.",
"title": ""
},
{
"docid": "a7760563ce223473a3723e048b85427a",
"text": "The concept of “task” is at the core of artificial intelligence (AI): Tasks are used for training and evaluating AI systems, which are built in order to perform and automatize tasks we deem useful. In other fields of engineering theoretical foundations allow thorough evaluation of designs by methodical manipulation of well understood parameters with a known role and importance; this allows an aeronautics engineer, for instance, to systematically assess the effects of wind speed on an airplane’s performance and stability. No framework exists in AI that allows this kind of methodical manipulation: Performance results on the few tasks in current use (cf. board games, question-answering) cannot be easily compared, however similar or different. The issue is even more acute with respect to artificial general intelligence systems, which must handle unanticipated tasks whose specifics cannot be known beforehand. A task theory would enable addressing tasks at the class level, bypassing their specifics, providing the appropriate formalization and classification of tasks, environments, and their parameters, resulting in more rigorous ways of measuring, comparing, and evaluating intelligent behavior. Even modest improvements in this direction would surpass the current ad-hoc nature of machine learning and AI evaluation. Here we discuss the main elements of the argument for a task theory and present an outline of what it might look like for physical tasks.",
"title": ""
},
{
"docid": "4b33d61fce948b8c7942ca6180765a59",
"text": "We propose in this paper a fully automated deep model, which learns to classify human actions without using any prior knowledge. The first step of our scheme, based on the extension of Convolutional Neural Networks to 3D, automatically learns spatio-temporal features. A Recurrent Neural Network is then trained to classify each sequence considering the temporal evolution of the learned features for each timestep. Experimental results on the KTH dataset show that the proposed approach outperforms existing deep models, and gives comparable results with the best related works.",
"title": ""
},
{
"docid": "7417b84c36671fde36a88ccf661c99e1",
"text": "The power MOSFET on 4H-SiC is an attractive high-speed and low-dissipation power switching device. The problem to be solved before realizing the 4H-SiC power MOSFET with low on-resistance is low channel mobility at the SiO2/SiC interface. This work has succeeded in increasing the channel mobility in the buried channel IEMOSFET on carbon-face substrate, and has achieved an extremely low on-resistance of 1.8 mΩcm2 with a blocking voltage of 660 V",
"title": ""
},
{
"docid": "235899b940c658316693d0a481e2d954",
"text": "BACKGROUND\nImmunohistochemical markers are often used to classify breast cancer into subtypes that are biologically distinct and behave differently. The aim of this study was to estimate mortality for patients with the major subtypes of breast cancer as classified using five immunohistochemical markers, to investigate patterns of mortality over time, and to test for heterogeneity by subtype.\n\n\nMETHODS AND FINDINGS\nWe pooled data from more than 10,000 cases of invasive breast cancer from 12 studies that had collected information on hormone receptor status, human epidermal growth factor receptor-2 (HER2) status, and at least one basal marker (cytokeratin [CK]5/6 or epidermal growth factor receptor [EGFR]) together with survival time data. Tumours were classified as luminal and nonluminal tumours according to hormone receptor expression. These two groups were further subdivided according to expression of HER2, and finally, the luminal and nonluminal HER2-negative tumours were categorised according to expression of basal markers. Changes in mortality rates over time differed by subtype. In women with luminal HER2-negative subtypes, mortality rates were constant over time, whereas mortality rates associated with the luminal HER2-positive and nonluminal subtypes tended to peak within 5 y of diagnosis and then decline over time. In the first 5 y after diagnosis the nonluminal tumours were associated with a poorer prognosis, but over longer follow-up times the prognosis was poorer in the luminal subtypes, with the worst prognosis at 15 y being in the luminal HER2-positive tumours. Basal marker expression distinguished the HER2-negative luminal and nonluminal tumours into different subtypes. These patterns were independent of any systemic adjuvant therapy.\n\n\nCONCLUSIONS\nThe six subtypes of breast cancer defined by expression of five markers show distinct behaviours with important differences in short term and long term prognosis. Application of these markers in the clinical setting could have the potential to improve the targeting of adjuvant chemotherapy to those most likely to benefit. The different patterns of mortality over time also suggest important biological differences between the subtypes that may result in differences in response to specific therapies, and that stratification of breast cancers by clinically relevant subtypes in clinical trials is urgently required.",
"title": ""
},
{
"docid": "4b4cea4f58f33b9ace117fddd936d006",
"text": "The paper presents a complete solution for recognition of textual and graphic structures in various types of documents acquired from the Internet. In the proposed approach, the document structure recognition problem is divided into sub-problems. The first one is localizing logical structure elements within the document. The second one is recognizing segmented logical structure elements. The input to the method is an image of document page, the output is the XML file containing all graphic and textual elements included in the document, preserving the reading order of document blocks. This file contains information about the identity and position of all logical elements in the document image. The paper describes all details of the proposed method and shows the results of the experiments validating its effectiveness. The results of the proposed method for paragraph structure recognition are comparable to the referenced methods which offer segmentation only.",
"title": ""
},
{
"docid": "2f8430ae99d274bb1a08b031dfd1c11b",
"text": "BACKGROUND\nCleft-lip nasal deformity (CLND) affects the overall facial appearance and attractiveness. The CLND nose shares some features in part with the aging nose.\n\n\nOBJECTIVES\nThis questionnaire survey examined: 1) the panel perceptions of the role of secondary cleft rhinoplasty in nasal rejuvenation; and 2) the influence of a medical background in cleft care, age and gender of the panel members on the estimated age of the CLND nose.\n\n\nSTUDY DESIGN\nUsing a cross-sectional study design, we enrolled a random sample of adult laypersons and health care providers. The predictor variables were secondary cleft rhinoplasty (before/after) and a medical background in cleft care (yes/no). The outcome variable was the estimated age of nose in photographs derived from 8 German nonsyndromic CLND patients. Other study variables included age, gender, and career of the assessors. Appropriate descriptive and univariate statistics were computed, and a P value of <.05 was considered to be statistically significant.\n\n\nRESULTS\nThe sample consisted of 507 lay volunteers and 51 medical experts (407 [72.9%] were female; mean age ± SD = 24.9 ± 8.2 y). The estimated age of the CLND noses was higher than their real age. The rhinoplasty decreased the estimated age to a statistically significant degree (P < .0001). A medical background, age, and gender of the participants were not individually associated with their votes (P > .05).\n\n\nCONCLUSIONS\nThe results of this study suggest that CLND noses lack youthful appearance. Secondary cleft rhinoplasty rejuvenates the nose and makes it come close to the actual age of the patients.",
"title": ""
}
] | scidocsrr |
662497218440e16157a3f40ceeddf58a | Answering Science Exam Questions Using Query Rewriting with Background Knowledge | [
{
"docid": "e27d560bd974985dec1df3791fdf2f13",
"text": "Modeling natural language inference is a very challenging task. With the availability of large annotated data, it has recently become feasible to train complex models such as neural-network-based inference models, which have shown to achieve the state-of-the-art performance. Although there exist relatively large annotated data, can machines learn all knowledge needed to perform natural language inference (NLI) from these data? If not, how can neural-network-based NLI models benefit from external knowledge and how to build NLI models to leverage it? In this paper, we enrich the state-of-the-art neural natural language inference models with external knowledge. We demonstrate that the proposed models improve neural NLI models to achieve the state-of-the-art performance on the SNLI and MultiNLI datasets.",
"title": ""
},
{
"docid": "540099388527a2e8dd5b43162b697fea",
"text": "This paper describes NCRF++, a toolkit for neural sequence labeling. NCRF++ is designed for quick implementation of different neural sequence labeling models with a CRF inference layer. It provides users with an inference for building the custom model structure through configuration file with flexible neural feature design and utilization. Built on PyTorch1, the core operations are calculated in batch, making the toolkit efficient with the acceleration of GPU. It also includes the implementations of most state-of-the-art neural sequence labeling models such as LSTMCRF, facilitating reproducing and refinement on those methods.",
"title": ""
},
{
"docid": "b4ab51818d868b2f9796540c71a7bd17",
"text": "We propose a simple neural architecture for natural language inference. Our approach uses attention to decompose the problem into subproblems that can be solved separately, thus making it trivially parallelizable. On the Stanford Natural Language Inference (SNLI) dataset, we obtain state-of-the-art results with almost an order of magnitude fewer parameters than previous work and without relying on any word-order information. Adding intra-sentence attention that takes a minimum amount of order into account yields further improvements.",
"title": ""
},
{
"docid": "fe3a3ffab9a98cf8f4f71c666383780c",
"text": "We present a new dataset and model for textual entailment, derived from treating multiple-choice question-answering as an entailment problem. SCITAIL is the first entailment set that is created solely from natural sentences that already exist independently “in the wild” rather than sentences authored specifically for the entailment task. Different from existing entailment datasets, we create hypotheses from science questions and the corresponding answer candidates, and premises from relevant web sentences retrieved from a large corpus. These sentences are often linguistically challenging. This, combined with the high lexical similarity of premise and hypothesis for both entailed and non-entailed pairs, makes this new entailment task particularly difficult. The resulting challenge is evidenced by state-of-the-art textual entailment systems achieving mediocre performance on SCITAIL, especially in comparison to a simple majority class baseline. As a step forward, we demonstrate that one can improve accuracy on SCITAIL by 5% using a new neural model that exploits linguistic structure.",
"title": ""
},
{
"docid": "fa6f272026605bddf1b18c8f8234dba6",
"text": "tion can machines think? by replacing it with another, namely can a machine pass the imitation game (the Turing test). In the years since, this test has been criticized as being a poor replacement for the original enquiry (for example, Hayes and Ford [1995]), which raises the question: what would a better replacement be? In this article, we argue that standardized tests are an effective and practical assessment of many aspects of machine intelligence, and should be part of any comprehensive measure of AI progress. While a crisp definition of machine intelligence remains elusive, we can enumerate some general properties we might expect of an intelligent machine. The list is potentially long (for example, Legg and Hutter [2007]), but should at least include the ability to (1) answer a wide variety of questions, (2) answer complex questions, (3) demonstrate commonsense and world knowledge, and (4) acquire new knowledge scalably. In addition, a suitable test should be clearly measurable, graduated (have a variety of levels of difficulty), not gameable, ambitious but realistic, and motivating. There are many other requirements we might add (for example, capabilities in robotics, vision, dialog), and thus any comprehensive measure of AI is likely to require a battery of different tests. However, standardized tests meet a surprising number of requirements, including the four listed, and thus should be a key component of a future battery of tests. As we will show, the tests require answering a wide variety of questions, including those requiring commonsense and world knowledge. In addition, they meet all the practical requirements, a huge advantage for any component of a future test of AI. Articles",
"title": ""
},
{
"docid": "6d9393c95ca9c6534c98c0d0a4451fbc",
"text": "The recent work of Clark et al. (2018) introduces the AI2 Reasoning Challenge (ARC) and the associated ARC dataset that partitions open domain, complex science questions into an Easy Set and a Challenge Set. That paper includes an analysis of 100 questions with respect to the types of knowledge and reasoning required to answer them; however, it does not include clear definitions of these types, nor does it offer information about the quality of the labels. We propose a comprehensive set of definitions of knowledge and reasoning types necessary for answering the questions in the ARC dataset. Using ten annotators and a sophisticated annotation interface, we analyze the distribution of labels across the Challenge Set and statistics related to them. Additionally, we demonstrate that although naive information retrieval methods return sentences that are irrelevant to answering the query, sufficient supporting text is often present in the (ARC) corpus. Evaluating with human-selected relevant sentences improves the performance of a neural machine comprehension model by 42 points.",
"title": ""
}
] | [
{
"docid": "a4e1a0f5e56685a294a2c9088809a4fb",
"text": "As multicore systems continue to gain ground in the High Performance Computing world, linear algebra algorithms have to be reformulated or new algorithms have to be developed in order to take advantage of the architectural features on these new processors. Fine grain parallelism becomes a major requirement and introduces the necessity of loose synchronization in the parallel execution of an operation. This paper presents an algorithm for the Cholesky, LU and QR factorization where the operations can be represented as a sequence of small tasks that operate on square blocks of data. These tasks can be dynamically scheduled for execution based on the dependencies among them and on the availability of computational resources. This may result in an out of order execution of the tasks which will completely hide the presence of intrinsically sequential tasks in the factorization. Performance comparisons are presented with the LAPACK algorithms where parallelism can only be exploited at the level of the BLAS operations and vendor implementations.",
"title": ""
},
{
"docid": "38a74fff83d3784c892230255943ee23",
"text": "Several researchers, present authors included, envision personal mobile robot agents that can assist humans in their daily tasks. Despite many advances in robotics, such mobile robot agents still face many limitations in their perception, cognition, and action capabilities. In this work, we propose a symbiotic interaction between robot agents and humans to overcome the robot limitations while allowing robots to also help humans. We introduce a visitor’s companion robot agent, as a natural task for such symbiotic interaction. The visitor lacks knowledge of the environment but can easily open a door or read a door label, while the mobile robot with no arms cannot open a door and may be confused about its exact location, but can plan paths well through the building and can provide useful relevant information to the visitor. We present this visitor companion task in detail with an enumeration and formalization of the actions of the robot agent in its interaction with the human. We briefly describe the wifi-based robot localization algorithm and show results of the different levels of human help to the robot during its navigation. We then test the value of robot help to the visitor during the task to understand the relationship tradeoffs. Our work has been fully implemented in a mobile robot agent, CoBot, which has successfully navigated for several hours and continues to navigate in our indoor environment.",
"title": ""
},
{
"docid": "d1444f26cee6036f1c2df67a23d753be",
"text": "Text mining has becoming an emerging research area in now-a-days that helps to extract useful information from large amount of natural language text documents. The need of grouping similar documents together for different applications has gaining the attention of researchers in this area. Document clustering organizes the documents into different groups called as clusters. The documents in one cluster have higher degree of similarity than the documents in other cluster. The paper provides an overview of the document clustering reviewed from different papers and the challenges in document clustering. KeywordsText Mining, Document Clustering, Similarity Measures, Challenges in Document Clustering",
"title": ""
},
{
"docid": "26f957036ead7173f93ec16a57097a50",
"text": "The purpose of this paper is to present a direct digital manufacturing (DDM) process that is an order of magnitude faster than other DDM processes currently available. The developed process is based on a mask-image-projection-based Stereolithography process (MIP-SL), during which a Digital Micromirror Device (DMD) controlled projection light cures and cross-links liquid photopolymer resin. In order to achieve high-speed fabrication, we investigated the bottom-up projection system in the MIP-SL process. A set of techniques including film coating and the combination of two-way linear motions have been developed for the quick spreading of liquid resin into uniform thin layers. The process parameters and related settings to achieve the fabrication speed of a few seconds per layer are presented. Additionally, the hardware, software, and material setups developed for fabricating given three-dimensional (3D) digital models are presented. Experimental studies using the developed testbed have been performed to verify the effectiveness and efficiency of the presented fast MIP-SL process. The test results illustrate that the newly developed process can build a moderately sized part within minutes instead of hours that are typically required.",
"title": ""
},
{
"docid": "3b2c18828ef155233ede7f51d80f656a",
"text": "It is crucial for cancer diagnosis and treatment to accurately identify the site of origin of a tumor. With the emergence and rapid advancement of DNA microarray technologies, constructing gene expression profiles for different cancer types has already become a promising means for cancer classification. In addition to research on binary classification such as normal versus tumor samples, which attracts numerous efforts from a variety of disciplines, the discrimination of multiple tumor types is also important. Meanwhile, the selection of genes which are relevant to a certain cancer type not only improves the performance of the classifiers, but also provides molecular insights for treatment and drug development. Here, we use semisupervised ellipsoid ARTMAP (ssEAM) for multiclass cancer discrimination and particle swarm optimization for informative gene selection. ssEAM is a neural network architecture rooted in adaptive resonance theory and suitable for classification tasks. ssEAM features fast, stable, and finite learning and creates hyperellipsoidal clusters, inducing complex nonlinear decision boundaries. PSO is an evolutionary algorithm-based technique for global optimization. A discrete binary version of PSO is employed to indicate whether genes are chosen or not. The effectiveness of ssEAM/PSO for multiclass cancer diagnosis is demonstrated by testing it on three publicly available multiple-class cancer data sets. ssEAM/PSO achieves competitive performance on all these data sets, with results comparable to or better than those obtained by other classifiers",
"title": ""
},
{
"docid": "b52bad9f04c8a922b7012603be56c819",
"text": "In this paper, we investigate the possibility that a Near Field Communication (NFC) enabled mobile phone, with an embedded secure element (SE), could be used as a mobile token cloning and skimming platform. We show how an attacker could use an NFC mobile phone as such an attack platform by exploiting the existing security controls of the embedded SE and the available contactless APIs. To illustrate the feasibility of these actions, we also show how to practically skim and emulate certain tokens typically used in payment and access control applications with a NFC mobile phone. We also discuss how to capture and analyse legitimate transaction information from contactless systems. Although such attacks can also be implemented on other contactless platforms, such as custom-built card emulators and modified readers, the NFC enabled mobile phone has a legitimate form factor, which would be accepted by merchants and arouse less suspicion in public. Finally, we propose several security countermeasures for NFC phones that could prevent such misuse.",
"title": ""
},
{
"docid": "d98b97dae367d57baae6b0211c781d66",
"text": "In this paper we describe a technology for protecting privacy in video systems. The paper presents a review of privacy in video surveillance and describes how a computer vision approach to understanding the video can be used to represent “just enough” of the information contained in a video stream to allow video-based tasks (including both surveillance and other “person aware” applications) to be accomplished, while hiding superfluous details, particularly identity, that can contain privacyintrusive information. The technology has been implemented in the form of a privacy console that manages operator access to different versions of the video-derived data according to access control lists. We have also built PrivacyCam—a smart camera that produces a video stream with the privacy-intrusive information already removed.",
"title": ""
},
{
"docid": "6e8d30f3eaaf6c88dddb203c7b703a92",
"text": "searching existing data sources, gathering and maintaining the data needed, and completing and reviewing the collection of information. Send comments regarding this burden estimate or any other aspect of this collection of information, including suggesstions for reducing this burden, to Washington Headquarters Services, Directorate for Information Operations and Reports, 1215 Jefferson Davis Highway, Suite 1204, Arlington VA, 22202-4302. Respondents should be aware that notwithstanding any other provision of law, no person shall be subject to any oenalty for failing to comply with a collection of information if it does not display a currently valid OMB control number. PLEASE DO NOT RETURN YOUR FORM TO THE ABOVE ADDRESS.",
"title": ""
},
{
"docid": "11707c7f7c5b028392b25d1dffa9daeb",
"text": "High reliability and large rangeability are required of pumps in existing and new plants which must be capable of reliable on-off cycling operations and specially low load duties. The reliability and rangeability target is a new task for the pump designer/researcher and is made very challenging by the cavitation and/or suction recirculation effects, first of all the pump damage. The present knowledge about the: a) design critical parameters and their optimization, b) field problems diagnosis and troubleshooting has much advanced, in the very latest years. The objective of the pump manufacturer is to develop design solutions and troubleshooting approaches which improve the impeller life as related to cavitation erosion and enlarge the reliable operating range by minimizing the effects of the suction recirculation. This paper gives a short description of several field cases characterized by different damage patterns and other symptoms related with cavitation and/or suction recirculation. The troubleshooting methodology is described in detail, also focusing on the role of both the pump designer and the pump user.",
"title": ""
},
{
"docid": "9852e00f24fd8f626a018df99bea5f1f",
"text": "Business Process Reengineering is a discipline in which extensive research has been carried out and numerous methodologies churned out. But what seems to be lacking is a structured approach. In this paper we provide a review of BPR and present ‘best of breed ‘ methodologies from contemporary literature and introduce a consolidated, systematic approach to the redesign of a business enterprise. The methodology includes the five activities: Prepare for reengineering, Map and Analyze As-Is process, Design To-be process, Implement reengineered process and Improve continuously.",
"title": ""
},
{
"docid": "d2d134363fc993d68194e770c338b301",
"text": "The demand for coal has been on the rise in modern society. With the number of opencast coal mines decreasing, it has become increasingly difficult to find coal. Low efficiencies and high casualty rates have always been problems in the process of coal exploration due to complicated geological structures in coal mining areas. Therefore, we propose a new exploration technology for coal that uses satellite images to explore and monitor opencast coal mining areas. First, we collected bituminous coal and lignite from the Shenhua opencast coal mine in China in addition to non-coal objects, including sandstones, soils, shales, marls, vegetation, coal gangues, water, and buildings. Second, we measured the spectral data of these objects through a spectrometer. Third, we proposed a multilayer extreme learning machine algorithm and constructed a coal classification model based on that algorithm and the spectral data. The model can assist in the classification of bituminous coal, lignite, and non-coal objects. Fourth, we collected Landsat 8 satellite images for the coal mining areas. We divided the image of the coal mine using the constructed model and correctly described the distributions of bituminous coal and lignite. Compared with the traditional coal exploration method, our method manifested an unparalleled advantage and application value in terms of its economy, speed, and accuracy.",
"title": ""
},
{
"docid": "6ee2d94f0ccebbb05df2ea4b79b30976",
"text": "Received: 25 June 2013 Revised: 11 October 2013 Accepted: 25 November 2013 Abstract This paper distinguishes and contrasts two design science research strategies in information systems. In the first strategy, a researcher constructs or builds an IT meta-artefact as a general solution concept to address a class of problem. In the second strategy, a researcher attempts to solve a client’s specific problem by building a concrete IT artefact in that specific context and distils from that experience prescriptive knowledge to be packaged into a general solution concept to address a class of problem. The two strategies are contrasted along 16 dimensions representing the context, outcomes, process and resource requirements. European Journal of Information Systems (2015) 24(1), 107–115. doi:10.1057/ejis.2013.35; published online 7 January 2014",
"title": ""
},
{
"docid": "819693b9acce3dfbb74694733ab4d10f",
"text": "The present research examined how mode of play in an educational mathematics video game impacts learning, performance, and motivation. The game was designed for the practice and automation of arithmetic skills to increase fluency and was adapted to allow for individual, competitive, or collaborative game play. Participants (N 58) from urban middle schools were randomly assigned to each experimental condition. Results suggested that, in comparison to individual play, competition increased in-game learning, whereas collaboration decreased performance during the experimental play session. Although out-of-game math fluency improved overall, it did not vary by condition. Furthermore, competition and collaboration elicited greater situational interest and enjoyment and invoked a stronger mastery goal orientation. Additionally, collaboration resulted in stronger intentions to play the game again and to recommend it to others. Results are discussed in terms of the potential for mathematics learning games and technology to increase student learning and motivation and to demonstrate how different modes of engagement can inform the instructional design of such games.",
"title": ""
},
{
"docid": "f5e4bf1536d2ef7065b77be4e0c37ddc",
"text": "This research addresses management control in the front end of innovation projects. We conceptualize and analyze PMOs more broadly than just as a specialized project-focused organizational unit. Building on theories of management control, organization design, and innovation front end literature, we assess the role of PMO as an integrative arrangement. The empirical material is derived from four companies. The results show a variety of management control mechanisms that can be considered as integrative organizational arrangements. Such organizational arrangements can be considered as an alternative to a non-existent PMO, or to complement a (non-existent) PMO's tasks. The paper also contrasts prior literature by emphasizing the desirability of a highly organic or embedded matrix structure in the organization. Finally, we propose that the development path of the management approach proceeds by first emphasizing diagnostic and boundary systems (with mechanistic management approaches) followed by intensive use of interactive and belief systems (with value-based management approaches). The major contribution of this paper is in the organizational and managerial mechanisms of a firm that is managing multiple innovation projects. This research also expands upon the existing PMO research to include a broader management control approach for managing projects in companies. © 2011 Elsevier Ltd. and IPMA. All rights reserved.",
"title": ""
},
{
"docid": "eccd1b3b8acbf8426d7ccb7933e0bd0e",
"text": "We consider an architecture for a serverless distributed file system that does not assume mutual trust among the client computers. The system provides security, availability, and reliability by distributing multiple encrypted replicas of each file among the client machines. To assess the feasibility of deploying this system on an existing desktop infrastructure, we measure and analyze a large set of client machines in a commercial environment. In particular, we measure and report results on disk usage and content; file activity; and machine uptimes, lifetimes, and loads. We conclude that the measured desktop infrastructure would passably support our proposed system, providing availability on the order of one unfilled file request per user per thousand days.",
"title": ""
},
{
"docid": "ecb2cb8de437648c7895fc3f93809bfb",
"text": "Context: Static analysis approaches have been proposed to assess the security of Android apps, by searching for known vulnerabilities or actual malicious code. The literature thus has proposed a large body of works, each of which attempts to tackle one or more of the several challenges that program analyzers face when dealing with Android apps. Objective: We aim to provide a clear view of the state-of-the-art works that statically analyze Android apps, from which we highlight the trends of static analysis approaches, pinpoint where the focus has been put and enumerate the key aspects where future researches are still needed. Method: We have performed a systematic literature review which involves studying around 90 research papers published in software engineering, programming languages and security venues. This review is performed mainly in five dimensions: problems targeted by the approach, fundamental techniques used by authors, static analysis sensitivities considered, android characteristics taken into account and the scale of evaluation performed. Results: Our in-depth examination have led to several key findings: 1) Static analysis is largely performed to uncover security and privacy issues; 2) The Soot framework and the Jimple intermediate representation are the most adopted basic support tool and format, respectively; 3) Taint analysis remains the most applied technique in research approaches; 4) Most approaches support several analysis sensitivities, but very few approaches consider path-sensitivity; 5) There is no single work that has been proposed to tackle all challenges of static analysis that are related to Android programming; and 6) Only a small portion of state-of-the-art works have made their artifacts publicly available. Conclusion: The research community is still facing a number of challenges for building approaches that are aware altogether of implicit-Flows, dynamic code loading features, reflective calls, native code and multi-threading, in order to implement sound and highly precise static analyzers.",
"title": ""
},
{
"docid": "e9d5ba66ddcc3a38020f532414ebeef7",
"text": "Current theories of aspect acknowledge the pervasiveness of verbs of variable telicity, and are designed to account both for why these verbs show such variability and for the complex conditions that give rise to telic and atelic interpretations. Previous work has identified several sets of such verbs, including incremental theme verbs, such as eat and destroy; degree achievements, such as cool and widen; and (a)telic directed motion verbs, such as ascend and descend (see e.g., Dowty 1979; Declerck 1979; Dowty 1991; Krifka 1989, 1992; Tenny 1994; Bertinetto and Squartini 1995; Levin and Rappaport Hovav 1995; Jackendoff 1996; Ramchand 1997; Filip 1999; Hay, Kennedy, and Levin 1999; Rothstein 2003; Borer 2005). As the diversity in descriptive labels suggests, most previous work has taken these classes to embody distinct phenomena and to have distinct lexical semantic analyses. We believe that it is possible to provide a unified analysis in which the behavior of all of these verbs stems from a single shared element of their meanings: a function that measures the degree to which an object changes relative to some scalar dimension over the course of an event. We claim that such ‘measures of change’ are based on the more general kinds of measure functions that are lexicalized in many languages by gradable adjectives, and that map an object to a scalar value that represents the degree to which it manifests some gradable property at a time (see Bartsch and Vennemann 1972,",
"title": ""
},
{
"docid": "1258939378850f7d89f6fa860be27c39",
"text": "Sparse methods and the use of Winograd convolutions are two orthogonal approaches, each of which significantly accelerates convolution computations in modern CNNs. Sparse Winograd merges these two and thus has the potential to offer a combined performance benefit. Nevertheless, training convolution layers so that the resulting Winograd kernels are sparse has not hitherto been very successful. By introducing a Winograd layer in place of a standard convolution layer, we can learn and prune Winograd coefficients “natively” and obtain sparsity level beyond 90% with only 0.1% accuracy loss with AlexNet on ImageNet dataset. Furthermore, we present a sparse Winograd convolution algorithm and implementation that exploits the sparsity, achieving up to 31.7 effective TFLOP/s in 32-bit precision on a latest Intel Xeon CPU, which corresponds to a 5.4× speedup over a state-of-the-art dense convolution implementation.",
"title": ""
},
{
"docid": "ffa25551d331651d80f8d91f59a441c0",
"text": "Since vulnerabilities in Linux kernel are on the increase, attackers have turned their interests into related exploitation techniques. However, compared with numerous researches on exploiting use-after-free vulnerabilities in the user applications, few efforts studied how to exploit use-after-free vulnerabilities in Linux kernel due to the difficulties that mainly come from the uncertainty of the kernel memory layout. Without specific information leakage, attackers could only conduct a blind memory overwriting strategy trying to corrupt the critical part of the kernel, for which the success rate is negligible.\n In this work, we present a novel memory collision strategy to exploit the use-after-free vulnerabilities in Linux kernel reliably. The insight of our exploit strategy is that a probabilistic memory collision can be constructed according to the widely deployed kernel memory reuse mechanisms, which significantly increases the success rate of the attack. Based on this insight, we present two practical memory collision attacks: An object-based attack that leverages the memory recycling mechanism of the kernel allocator to achieve freed vulnerable object covering, and a physmap-based attack that takes advantage of the overlap between the physmap and the SLAB caches to achieve a more flexible memory manipulation. Our proposed attacks are universal for various Linux kernels of different architectures and could successfully exploit systems with use-after-free vulnerabilities in kernel. Particularly, we achieve privilege escalation on various popular Android devices (kernel version>=4.3) including those with 64-bit processors by exploiting the CVE-2015-3636 use-after-free vulnerability in Linux kernel. To our knowledge, this is the first generic kernel exploit for the latest version of Android. Finally, to defend this kind of memory collision, we propose two corresponding mitigation schemes.",
"title": ""
},
{
"docid": "01984e20b6fa46888fc82dccc621ab73",
"text": "Organizations spend a significant amount of resources securing their servers and network perimeters. However, these mechanisms are not sufficient for protecting databases. In this paper, we present a new technique for identifying malicious database transactions. Compared to many existing approaches which profile SQL query structures and database user activities to detect intrusions, the novelty of this approach is the automatic discovery and use of essential data dependencies, namely, multi-dimensional and multi-level data dependencies, for identifying anomalous database transactions. Since essential data dependencies reflect semantic relationships among data items and are less likely to change than SQL query structures or database user behaviors, they are ideal for profiling data correlations for identifying malicious database activities.1",
"title": ""
}
] | scidocsrr |
354579b2298c9d6677cd502a74e92e6e | Hybrid Partitioned SRAM-Based Ternary Content Addressable Memory | [
{
"docid": "39ab78b58f6ace0fc29f18a1c4ed8ebc",
"text": "We survey recent developments in the design of large-capacity content-addressable memory (CAM). A CAM is a memory that implements the lookup-table function in a single clock cycle using dedicated comparison circuitry. CAMs are especially popular in network routers for packet forwarding and packet classification, but they are also beneficial in a variety of other applications that require high-speed table lookup. The main CAM-design challenge is to reduce power consumption associated with the large amount of parallel active circuitry, without sacrificing speed or memory density. In this paper, we review CAM-design techniques at the circuit level and at the architectural level. At the circuit level, we review low-power matchline sensing techniques and searchline driving approaches. At the architectural level we review three methods for reducing power consumption.",
"title": ""
}
] | [
{
"docid": "55304b1a38d49cd65658964c3aea5df5",
"text": "In this paper, we take the view that any formalization of commitments has to come together with a formalization of time, events/actions and change. We enrich a suitable formalism for reasoning about time, event/action and change in order to represent and reason about commitments. We employ a three-valued based temporal first-order non-monotonic logic (TFONL) that allows an explicit representation of time and events/action. TFONL subsumes the action languages presented in the literature and takes into consideration the frame, qualification and ramification problems, and incorporates to a domain description the set of rules governing change. It can handle protocols for the different types of dialogues such as information seeking, inquiry and negotiation. We incorporate commitments into TFONL to obtain Com-TFONL. Com-TFONL allows an agent to reason about its commitments and about other agents’ behaviour during a dialogue. Thus, agents can employ social commitments to act on, argue with and reason about during interactions with other agents. Agents may use their reasoning and argumentative capabilities in order to determine the appropriate communicative acts during conversations. Furthermore, Com-TFONL allows for an integration of commitments and arguments which helps in capturing the public aspects of a conversation and the reasoning aspects required in coherent conversations.",
"title": ""
},
{
"docid": "58a47d7fab243f265621be47f0bc5b58",
"text": "A 1.8-kV 100-ps rise-time pulsed-power generator operating at a repetition frequency of 50 kHz is presented. The generator consists of three compression stages. In the first stage, a power MOSFET produces high voltage by breaking an inductor current. In the second stage, a 3-kV drift-step-recovery diode cuts the reverse current rapidly to create a 1-ns rise-time pulse. In the last stage, a silicon-avalanche shaper is used as a fast 100-ps closing switch. Experimental investigation showed that, by optimizing the generator operating point, the shot-to-shot jitter can be reduced to less than 13 ps. The theoretical model of the pulse-forming circuit is presented.",
"title": ""
},
{
"docid": "39430478909e5818b242e0b28db419f0",
"text": "BACKGROUND\nA modified version of the Berg Balance Scale (mBBS) was developed for individuals with intellectual and visual disabilities (IVD). However, the concurrent and predictive validity has not yet been determined.\n\n\nAIM\nThe purpose of the current study was to evaluate the concurrent and predictive validity of the mBBS for individuals with IVD.\n\n\nMETHOD\nFifty-four individuals with IVD and Gross Motor Functioning Classification System (GMFCS) Levels I and II participated in this study. The mBBS, the Centre of Gravity (COG), the Comfortable Walking Speed (CWS), and the Barthel Index (BI) were assessed during one session in order to determine the concurrent validity. The percentage of explained variance was determined by analyzing the squared multiple correlation between the mBBS and the BI, COG, CWS, GMFCS, and age, gender, level of intellectual disability, presence of epilepsy, level of visual impairment, and presence of hearing impairment. Furthermore, an overview of the degree of dependence between the mBBS, BI, CWS, and COG was obtained by graphic modelling. Predictive validity of mBBS was determined with respect to the number of falling incidents during 26 weeks and evaluated with Zero-inflated regression models using the explanatory variables of mBBS, BI, COG, CWS, and GMFCS.\n\n\nRESULTS\nThe results demonstrated that two significant explanatory variables, the GMFCS Level and the BI, and one non-significant variable, the CWS, explained approximately 60% of the mBBS variance. Graphical modelling revealed that BI was the most important explanatory variable for mBBS moreso than COG and CWS. Zero-inflated regression on the frequency of falling incidents demonstrated that the mBBS was not predictive, however, COG and CWS were.\n\n\nCONCLUSIONS\nThe results indicated that the concurrent validity as well as the predictive validity of mBBS were low for persons with IVD.",
"title": ""
},
{
"docid": "2615f2f66adeaf1718d7afa5be3b32b1",
"text": "In this paper, an advanced design of an Autonomous Underwater Vehicle (AUV) is presented. The design is driven only by four water pumps. The different power combinations of the four motors provides the force and moment for propulsion and maneuvering. No control surfaces are needed in this design, which make the manufacturing cost of such a vehicle minimal and more reliable. Based on the propulsion method of the vehicle, a nonlinear AUV dynamic model is studied. This nonlinear model is linearized at the operation point. A control strategy of the AUV is proposed including attitude control and auto-pilot design. Simulation results for the attitude control loop are presented to validate this approach.",
"title": ""
},
{
"docid": "ba13195d39b28d5205b33452bfebd6e7",
"text": "A compact multiple-input-multiple-output (MIMO) antenna is presented for ultrawideband (UWB) applications. The antenna consists of two open L-shaped slot (LS) antenna elements and a narrow slot on the ground plane. The antenna elements are placed perpendicularly to each other to obtain high isolation, and the narrow slot is added to reduce the mutual coupling of antenna elements in the low frequency band (3-4.5 GHz). The proposed MIMO antenna has a compact size of 32 ×32 mm2, and the antenna prototype is fabricated and measured. The measured results show that the proposed antenna design achieves an impedance bandwidth of larger than 3.1-10.6 GHz, low mutual coupling of less than 15 dB, and a low envelope correlation coefficient of better than 0.02 across the frequency band, which are suitable for portable UWB applications.",
"title": ""
},
{
"docid": "37a8ea1b792466c6e39709879e7a7b41",
"text": "The lightning impulse withstand voltage for an oil-immersed power transformer is determined by the value of the lightning surge overvoltage generated at the transformer terminal. This overvoltage value has been conventionally obtained through lightning surge analysis using the electromagnetic transients program (EMTP), where the transformer is often simulated by a single lumped capacitance. However, since high frequency surge overvoltages ranging from several kHz to several MHz are generated in an actual system, a transformer circuit model capable of simulating the range up to this high frequency must be developed for further accurate analysis. In this paper, a high frequency circuit model for an oil-immersed transformer was developed and its validity was verified through comparison with the measurement results on the model winding actually produced. Consequently, it emerged that a high frequency model with three serially connected LC parallel circuits could adequately simulate the impedance characteristics of the winding up to a high frequency range of several MHz. Following lightning surge analysis for a 500 kV substation using this high frequency model, the peak value of the waveform was evaluated as lower than that simulated by conventional lumped capacitance even though the front rising was steeper. This phenomenon can be explained by the charging process of the capacitance circuit inside the transformer. Furthermore, the waveform analyzed by each model was converted into an equivalent standard lightning impulse waveform and the respective peak values were compared. As a result, the peak value obtained by the lumped capacitance simulation was evaluated as relatively higher under the present analysis conditions.",
"title": ""
},
{
"docid": "dadcecd178721cf1ea2b6bf51bc9d246",
"text": "8 Research on speech and emotion is moving from a period of exploratory research into one where there is a prospect 9 of substantial applications, notably in human–computer interaction. Progress in the area relies heavily on the devel10 opment of appropriate databases. This paper addresses four main issues that need to be considered in developing 11 databases of emotional speech: scope, naturalness, context and descriptors. The state of the art is reviewed. A good deal 12 has been done to address the key issues, but there is still a long way to go. The paper shows how the challenge of 13 developing appropriate databases is being addressed in three major recent projects––the Reading–Leeds project, the 14 Belfast project and the CREST–ESP project. From these and other studies the paper draws together the tools and 15 methods that have been developed, addresses the problems that arise and indicates the future directions for the de16 velopment of emotional speech databases. 2002 Published by Elsevier Science B.V.",
"title": ""
},
{
"docid": "c809ef0984855e377bf241ed8a7aa7eb",
"text": "Priapism of the clitoris is a rare entity. A case of painful priapism is reported in a patient who had previously suffered a radical cystectomy for bladder carcinoma pT3-GIII, followed by local recurrence in the pelvis. From a symptomatic point of view she showed a good response to conservative treatment (analgesics and anxiolytics), as she refused surgical treatment. She survived 6 months from the recurrence, and died with lung metastases. The priapism did not recur. The physiopathological mechanisms involved in the process are discussed and the literature reviewed.",
"title": ""
},
{
"docid": "fce58bfa94acf2b26a50f816353e6bf2",
"text": "The perspective directions in evaluating network security are simulating possible malefactor’s actions, building the representation of these actions as attack graphs (trees, nets), the subsequent checking of various properties of these graphs, and determining security metrics which can explain possible ways to increase security level. The paper suggests a new approach to security evaluation based on comprehensive simulation of malefactor’s actions, construction of attack graphs and computation of different security metrics. The approach is intended for using both at design and exploitation stages of computer networks. The implemented software system is described, and the examples of experiments for analysis of network security level are considered.",
"title": ""
},
{
"docid": "d4da4c9bc129a15a8f7b7094216bc4b2",
"text": "This paper presents a physical description of two specific aspects in drain-extended MOS transistors, i.e., quasi-saturation and impact-ionization effects. The 2-D device simulator Medici provides the physical insights, and both the unique features are originally attributed to the Kirk effect. The transistor dc model is derived from regional analysis of carrier transport in the intrinsic MOS and the drift region. The substrate-current equations, considering extra impact-ionization factors in the drift region, are also rigorously derived. The proposed model is primarily validated by MATLAB program and exhibits excellent scalability for various transistor dimensions, drift-region doping concentration, and voltage-handling capability.",
"title": ""
},
{
"docid": "39b072a5adb75eb43561017d53ab6f44",
"text": "The Internet of Things (IoT) is converting the agriculture industry and solving the immense problems or the major challenges faced by the farmers todays in the field. India is one of the 13th countries in the world having scarcity of water resources. Due to ever increasing of world population, we are facing difficulties in the shortage of water resources, limited availability of land, difficult to manage the costs while meeting the demands of increasing consumption needs of a global population that is expected to grow by 70% by the year 2050. The influence of population growth on agriculture leads to a miserable impact on the farmers livelihood. To overcome the problems we design a low cost system for monitoring the agriculture farm which continuously measure the level of soil moisture of the plants and alert the farmers if the moisture content of particular plants is low via sms or an email. This system uses an esp8266 microcontroller and a moisture sensor using Losant platform. Losant is a simple and most powerful IoT cloud platform for the development of coming generation. It offers the real time data visualization of sensors data which can be operate from any part of the world irrespective of the position of field.",
"title": ""
},
{
"docid": "0efa756a15219d8383ca296860f7433a",
"text": "Chronic inflammation plays a multifaceted role in carcinogenesis. Mounting evidence from preclinical and clinical studies suggests that persistent inflammation functions as a driving force in the journey to cancer. The possible mechanisms by which inflammation can contribute to carcinogenesis include induction of genomic instability, alterations in epigenetic events and subsequent inappropriate gene expression, enhanced proliferation of initiated cells, resistance to apoptosis, aggressive tumor neovascularization, invasion through tumor-associated basement membrane and metastasis, etc. Inflammation-induced reactive oxygen and nitrogen species cause damage to important cellular components (e.g., DNA, proteins and lipids), which can directly or indirectly contribute to malignant cell transformation. Overexpression, elevated secretion, or abnormal activation of proinflammatory mediators, such as cytokines, chemokines, cyclooxygenase-2, prostaglandins, inducible nitric oxide synthase, and nitric oxide, and a distinct network of intracellular signaling molecules including upstream kinases and transcription factors facilitate tumor promotion and progression. While inflammation promotes development of cancer, components of the tumor microenvironment, such as tumor cells, stromal cells in surrounding tissue and infiltrated inflammatory/immune cells generate an intratumoral inflammatory state by aberrant expression or activation of some proinflammatory molecules. Many of proinflammatory mediators, especially cytokines, chemokines and prostaglandins, turn on the angiogenic switches mainly controlled by vascular endothelial growth factor, thereby inducing inflammatory angiogenesis and tumor cell-stroma communication. This will end up with tumor angiogenesis, metastasis and invasion. Moreover, cellular microRNAs are emerging as a potential link between inflammation and cancer. The present article highlights the role of various proinflammatory mediators in carcinogenesis and their promise as potential targets for chemoprevention of inflammation-associated carcinogenesis.",
"title": ""
},
{
"docid": "a20b874ab019da6a8c8f430cd9bc11b4",
"text": "It is traditional wisdom that one should start from the goals when generating a plan in order to focus the plan generation process on potentially relevant actions. The graphplan system, however, which is the most eecient planning system nowadays, builds a \\planning graph\" in a forward-chaining manner. Although this strategy seems to work well, it may possibly lead to problems if the planning task description contains irrelevant information. Although some irrelevant information can be ltered out by graphplan, most cases of irrelevance are not noticed. In this paper, we analyze the eeects arising from \\irrelevant\" information to planning task descriptions for diierent types of planners. Based on that, we propose a family of heuristics that select relevant information by minimizing the number of initial facts that are used when approximating a plan by backchaining from the goals ignoring any connicts. These heuristics, although not solution-preserving, turn out to be very useful for guiding the planning process, as shown by applying the heuristics to a large number of examples from the literature.",
"title": ""
},
{
"docid": "5aacd3ac3c6120311d7daa2de3cef2ba",
"text": "Situated in the western Sierra Nevada foothills of California, CA-MRP-402 exhibits 103 rock art panels. By combining archaeological field research and excavation, this paper explores the ancient activities that took place at MRP-402. These efforts reveal that ancient Native Americans intentionally altered the landscape to create an astronomical observation area and generate consistent equinoctial solar and shadow alignments.",
"title": ""
},
{
"docid": "8a1adea9a1f4beeb704691d76b2e4f53",
"text": "As we observe a trend towards the recentralisation of the Internet, this paper raises the question of guaranteeing an everlasting decentralisation. We introduce the properties of strong and soft uncentralisability in order to describe systems in which all authorities can be untrusted at any time without affecting the system. We link the soft uncentralisability to another property called perfect forkability. Using that knowledge, we introduce a new cryptographic primitive called uncentralisable ledger and study its properties. We use those properties to analyse what an uncentralisable ledger may offer to classic electronic voting systems and how it opens up the realm of possibilities for completely new voting mechanisms. We review a list of selected projects that implement voting systems using blockchain technology. We then conclude that the true revolutionary feature enabled by uncentralisable ledgers is a self-sovereign and distributed identity provider.",
"title": ""
},
{
"docid": "a576a6bf249616d186657a48c2aec071",
"text": "Penumbras, or soft shadows, are an important means to enhance the realistic ap pearance of computer generated images. We present a fast method based on Minkowski operators to reduce t he run ime for penumbra calculation with stochastic ray tracing. Detailed run time analysis on some examples shows that the new method is significantly faster than the conventional approach. Moreover, it adapts to the environment so that small penumbras are calculated faster than larger ones. The algorithm needs at most twice as much memory as the underlying ray tracing algorithm.",
"title": ""
},
{
"docid": "6440be547f86da7e08b79eac6b4311fe",
"text": "OBJECTIVE\nTo assess the bioequivalence of an ezetimibe/simvastatin (EZE/SIMVA) combination tablet compared to the coadministration of ezetimibe and simvastatin as separate tablets (EZE + SIMVA).\n\n\nMETHODS\nIn this open-label, randomized, 2-part, 2-period crossover study, 96 healthy subjects were randomly assigned to participate in each part of the study (Part I or II), with each part consisting of 2 single-dose treatment periods separated by a 14-day washout. Part I consisted of Treatments A (EZE 10 mg + SIMVA 10 mg) and B (EZE/SIMVA 10/10 mg/mg) and Part II consisted of Treatments C (EZE 10 mg + SIMVA 80 mg) and D (EZE/SIMVA 10/80 mg/mg). Blood samples were collected up to 96 hours post-dose for determination of ezetimibe, total ezetimibe (ezetimibe + ezetimibe glucuronide), simvastatin and simvastatin acid (the most prevalent active metabolite of simvastatin) concentrations. Ezetimibe and simvastatin acid AUC(0-last) were predefined as primary endpoints and ezetimibe and simvastatin acid Cmax were secondary endpoints. Bioequivalence was achieved if 90% confidence intervals (CI) for the geometric mean ratios (GMR) (single tablet/coadministration) of AUC(0-last) and Cmax fell within prespecified bounds of (0.80, 1.25).\n\n\nRESULTS\nThe GMRs of the AUC(0-last) and Cmax for ezetimibe and simvastatin acid fell within the bioequivalence limits (0.80, 1.25). EZE/ SIMVA and EZE + SIMVA were generally well tolerated.\n\n\nCONCLUSIONS\nThe lowest and highest dosage strengths of EZE/SIMVA tablet were bioequivalent to the individual drug components administered together. Given the exact weight multiples of the EZE/SIMVA tablet and linear pharmacokinetics of simvastatin across the marketed dose range, bioequivalence of the intermediate tablet strengths (EZE/SIMVA 10/20 mg/mg and EZE/SIMVA 10/40 mg/mg) was inferred, although these dosages were not tested directly. These results indicate that the safety and efficacy profile of EZE + SIMVA coadministration therapy can be applied to treatment with the EZE/SIMVA tablet across the clinical dose range.",
"title": ""
},
{
"docid": "9d2b3aaf57e31a2c0aa517d642f39506",
"text": "3.1. URINARY TRACT INFECTION Urinary tract infection is one of the important causes of morbidity and mortality in Indian population, affecting all age groups across the life span. Anatomically, urinary tract is divided into an upper portion composed of kidneys, renal pelvis, and ureters and a lower portion made up of urinary bladder and urethra. UTI is an inflammatory response of the urothelium to bacterial invasion that is usually associated with bacteriuria and pyuria. UTI may involve only the lower urinary tract or both the upper and lower tract [19].",
"title": ""
},
{
"docid": "1926166029995392a9ccb3c64bc10ee7",
"text": "OBJECTIVES\nFew low income countries have emergency medical services to provide prehospital medical care and transport to road traffic crash casualties. In Ghana most roadway casualties receive care and transport to the hospital from taxi, bus, or truck drivers. This study reports the methods used to devise a model for prehospital trauma training for commercial drivers in Ghana.\n\n\nMETHODS\nOver 300 commercial drivers attended a first aid and rescue course designed specifically for roadway trauma and geared to a low education level. The training programme has been evaluated twice at one and two year intervals by interviewing both trained and untrained drivers with regard to their experiences with injured persons. In conjunction with a review of prehospital care literature, lessons learnt from the evaluations were used in the revision of the training model.\n\n\nRESULTS\nControl of external haemorrhage was quickly learnt and used appropriately by the drivers. Areas identified needing emphasis in future trainings included consistent use of universal precautions and protection of airways in unconscious persons using the recovery position.\n\n\nCONCLUSION\nIn low income countries, prehospital trauma care for roadway casualties can be improved by training laypersons already involved in prehospital transport and care. Training should be locally devised, evidence based, educationally appropriate, and focus on practical demonstrations.",
"title": ""
},
{
"docid": "3969a0156c558020ca1de3b978c3ab4e",
"text": "Silver-Russell syndrome (SRS) and Beckwith-Wiedemann syndrome (BWS) are 2 clinically opposite growth-affecting disorders belonging to the group of congenital imprinting disorders. The expression of both syndromes usually depends on the parental origin of the chromosome in which the imprinted genes reside. SRS is characterized by severe intrauterine and postnatal growth retardation with various additional clinical features such as hemihypertrophy, relative macrocephaly, fifth finger clinodactyly, and triangular facies. BWS is an overgrowth syndrome with many additional clinical features such as macroglossia, organomegaly, and an increased risk of childhood tumors. Both SRS and BWS are clinically and genetically heterogeneous, and for clinical diagnosis, different diagnostic scoring systems have been developed. Six diagnostic scoring systems for SRS and 4 for BWS have been previously published. However, neither syndrome has common consensus diagnostic criteria yet. Most cases of SRS and BWS are associated with opposite epigenetic or genetic abnormalities in the 11p15 chromosomal region leading to opposite imbalances in the expression of imprinted genes. SRS is also caused by maternal uniparental disomy 7, which is usually identified in 5-10% of the cases, and is therefore the first imprinting disorder that affects 2 different chromosomes. In this review, we describe in detail the clinical diagnostic criteria and scoring systems as well as molecular causes in both SRS and BWS.",
"title": ""
}
] | scidocsrr |
e182ef6081b4711ffab5d0ec4d8fa340 | Knowledge management in software engineering - describing the process | [
{
"docid": "a2047969c4924a1e93b805b4f7d2402c",
"text": "Knowledge is a resource that is valuable to an organization's ability to innovate and compete. It exists within the individual employees, and also in a composite sense within the organization. According to the resourcebased view of the firm (RBV), strategic assets are the critical determinants of an organization's ability to maintain a sustainable competitive advantage. This paper will combine RBV theory with characteristics of knowledge to show that organizational knowledge is a strategic asset. Knowledge management is discussed frequently in the literature as a mechanism for capturing and disseminating the knowledge that exists within the organization. This paper will also explain practical considerations for implementation of knowledge management principles.",
"title": ""
}
] | [
{
"docid": "94c6f94e805a366c6fa6f995f13a92ba",
"text": "Unusual site deep vein thrombosis (USDVT) is an uncommon form of venous thromboembolism (VTE) with heterogeneity in pathophysiology and clinical features. While the need for anticoagulation treatment is generally accepted, there is little data on optimal USDVT treatment. The TRUST study aimed to characterize the epidemiology, treatment and outcomes of USDVT. From 2008 to 2012, 152 patients were prospectively enrolled at 4 Canadian centers. After baseline, patients were followed at 6, 12 and 24months. There were 97 (64%) cases of splanchnic, 33 (22%) cerebral, 14 (9%) jugular, 6 (4%) ovarian and 2 (1%) renal vein thrombosis. Mean age was 52.9years and 113 (74%) cases were symptomatic. Of 72 (47%) patients tested as part of clinical care, 22 (31%) were diagnosed with new thrombophilia. Of 138 patients evaluated in follow-up, 66 (48%) completed at least 6months of anticoagulation. Estrogen exposure or inflammatory conditions preceding USDVT were commonly associated with treatment discontinuation before 6months, while previous VTE was associated with continuing anticoagulation beyond 6months. During follow-up, there were 22 (16%) deaths (20 from cancer), 4 (3%) cases of recurrent VTE and no fatal bleeding events. Despite half of USDVT patients receiving <6months of anticoagulation, the rate of VTE recurrence was low and anticoagulant treatment appears safe. Thrombophilia testing was common and thrombophilia prevalence was high. Further research is needed to determine the optimal investigation and management of USDVT.",
"title": ""
},
{
"docid": "27a3c368176ead25ed653d696648f244",
"text": "The growing proliferation in solar deployment, especially at distribution level, has made the case for power system operators to develop more accurate solar forecasting models. This paper proposes a solar photovoltaic (PV) generation forecasting model based on multi-level solar measurements and utilizing a nonlinear autoregressive with exogenous input (NARX) model to improve the training and achieve better forecasts. The proposed model consists of four stages of data preparation, establishment of fitting model, model training, and forecasting. The model is tested under different weather conditions. Numerical simulations exhibit the acceptable performance of the model when compared to forecasting results obtained from two-level and single-level studies.",
"title": ""
},
{
"docid": "4a811a48f913e1529f70937c771d01da",
"text": "An interesting research problem in our age of Big Data is that of determining provenance. Granular evaluation of provenance of physical goods--e.g. tracking ingredients of a pharmaceutical or demonstrating authenticity of luxury goods--has often not been possible with today's items that are produced and transported in complex, inter-organizational, often internationally-spanning supply chains. Recent adoption of Internet of Things and Blockchain technologies give promise at better supply chain provenance. We are particularly interested in the blockchain as many favoured use cases of blockchain are for provenance tracking. We are also interested in applying ontologies as there has been some work done on knowledge provenance, traceability, and food provenance using ontologies. In this paper, we make a case for why ontologies can contribute to blockchain design. To support this case, we analyze a traceability ontology and translate some of its representations to smart contracts that execute a provenance trace and enforce traceability constraints on the Ethereum blockchain platform.",
"title": ""
},
{
"docid": "7bef5a19f6d8f71d4aa12194dd02d0c4",
"text": "To build a natural sounding speech synthesis system, it is essential that the text processing component produce an appropriate sequence of phonemic units corresponding to an arbitrary input text. In this paper we discuss our efforts in addressing the issues of Font-to-Akshara mapping, pronunciation rules for Aksharas, text normalization in the context of building text-to-speech systems in Indian languages.",
"title": ""
},
{
"docid": "4b0230c640cc85a0f1f23c0cb60d5325",
"text": "Natural language understanding research has recently shifted towards complex Machine Learning and Deep Learning algorithms. Such models often outperform significantly their simpler counterparts. However, their performance relies on the availability of large amounts of labeled data, which are rarely available. To tackle this problem, we propose a methodology for extending training datasets to arbitrarily big sizes and training complex, data-hungry models using weak supervision. We apply this methodology on biomedical relationship extraction, a task where training datasets are excessively time-consuming and expensive to create, yet has a major impact on downstream applications such as drug discovery. We demonstrate in a small-scale controlled experiment that our method consistently enhances the performance of an LSTM network, with performance improvements comparable to hand-labeled training data. Finally, we discuss the optimal setting for applying weak supervision using this methodology.",
"title": ""
},
{
"docid": "1b20c242815b26533731308cb42ac054",
"text": "Amnesic patients demonstrate by their performance on a serial reaction time task that they learned a repeating spatial sequence despite their lack of awareness of the repetition (Nissen & Bullemer, 1987). In the experiments reported here, we investigated this form of procedural learning in normal subjects. A subgroup of subjects showed substantial procedural learning of the sequence in the absence of explicit declarative knowledge of it. Their ability to generate the sequence was effectively at chance and showed no savings in learning. Additional amounts of training increased both procedural and declarative knowledge of the sequence. Development of knowledge in one system seems not to depend on knowledge in the other. Procedural learning in this situation is neither solely perceptual nor solely motor. The learning shows minimal transfer to a situation employing the same motor sequence.",
"title": ""
},
{
"docid": "c0d8842983a2d7952de1c187a80479ac",
"text": "Two new topologies of three-phase segmented rotor switched reluctance machine (SRM) that enables the use of standard voltage source inverters (VSIs) for its operation are presented. The topologies has shorter end-turn length, axial length compared to SRM topologies that use three-phase inverters; compared to the conventional SRM (CSRM), these new topologies has the advantage of shorter flux paths that results in lower core losses. FEA based optimization have been performed for a given design specification. The new concentrated winding segmented SRMs demonstrate competitive performance with three-phase standard inverters compared to CSRM.",
"title": ""
},
{
"docid": "ac040c0c04351ea6487ea6663688ebd6",
"text": "This paper presents the conceptual design, detailed development and flight testing of AtlantikSolar, a 5.6m-wingspan solar-powered Low-Altitude Long-Endurance (LALE) Unmanned Aerial Vehicle (UAV) designed and built at ETH Zurich. The UAV is required to provide perpetual endurance at a geographic latitude of 45°N in a 4-month window centered around June 21st. An improved conceptual design method is presented and applied to maximize the perpetual flight robustness with respect to local meteorological disturbances such as clouds or winds. Airframe, avionics hardware, state estimation and control method development for autonomous flight operations are described. Flight test results include a 12-hour flight relying solely on batteries to replicate night-flight conditions. In addition, we present flight results from Search-And-Rescue field trials where a camera and processing pod were mounted on the aircraft to create high-fidelity 3D-maps of a simulated disaster area.",
"title": ""
},
{
"docid": "fadbfcc98ad512dd788f6309d0a932af",
"text": "Thanks to the convergence of pervasive mobile communications and fast-growing online social networking, mobile social networking is penetrating into our everyday life. Aiming to develop a systematic understanding of mobile social networks, in this paper we exploit social ties in human social networks to enhance cooperative device-to-device (D2D) communications. Specifically, as handheld devices are carried by human beings, we leverage two key social phenomena, namely social trust and social reciprocity, to promote efficient cooperation among devices. With this insight, we develop a coalitional game-theoretic framework to devise social-tie-based cooperation strategies for D2D communications. We also develop a network-assisted relay selection mechanism to implement the coalitional game solution, and show that the mechanism is immune to group deviations, individually rational, truthful, and computationally efficient. We evaluate the performance of the mechanism by using real social data traces. Simulation results corroborate that the proposed mechanism can achieve significant performance gain over the case without D2D cooperation.",
"title": ""
},
{
"docid": "3854ead43024ebc6ac942369a7381d71",
"text": "During the past two decades, the prevalence of obesity in children has risen greatly worldwide. Obesity in childhood causes a wide range of serious complications, and increases the risk of premature illness and death later in life, raising public-health concerns. Results of research have provided new insights into the physiological basis of bodyweight regulation. However, treatment for childhood obesity remains largely ineffective. In view of its rapid development in genetically stable populations, the childhood obesity epidemic can be primarily attributed to adverse environmental factors for which straightforward, if politically difficult, solutions exist.",
"title": ""
},
{
"docid": "9b1bf9930b378232d03c43c007d1c151",
"text": "Matrix factorization has found incredible success and widespread application as a collaborative filtering based approach to recommendations. Unfortunately, incorporating additional sources of evidence, especially ones that are incomplete and noisy, is quite difficult to achieve in such models, however, is often crucial for obtaining further gains in accuracy. For example, additional information about businesses from reviews, categories, and attributes should be leveraged for predicting user preferences, even though this information is often inaccurate and partially-observed. Instead of creating customized methods that are specific to each type of evidences, in this paper we present a generic approach to factorization of relational data that collectively models all the relations in the database. By learning a set of embeddings that are shared across all the relations, the model is able to incorporate observed information from all the relations, while also predicting all the relations of interest. Our evaluation on multiple Amazon and Yelp datasets demonstrates effective utilization of additional information for held-out preference prediction, but further, we present accurate models even for the cold-starting businesses and products for which we do not observe any ratings or reviews. We also illustrate the capability of the model in imputing missing information and jointly visualizing words, categories, and attribute factors.",
"title": ""
},
{
"docid": "212e9306654141360a7d240a30af5c4a",
"text": "In this paper, we introduce a stereo vision based CNN tracker for a person following robot. The tracker is able to track a person in real-time using an online convolutional neural network. Our approach enables the robot to follow a target under challenging situations such as occlusions, appearance changes, pose changes, crouching, illumination changes or people wearing the same clothes in different environments. The robot follows the target around corners even when it is momentarily unseen by estimating and replicating the local path of the target. We build an extensive dataset for person following robots under challenging situations. We evaluate the proposed system quantitatively by comparing our tracking approach with existing real-time tracking algorithms.",
"title": ""
},
{
"docid": "0ac679740e0e3911af04be9464f76a7d",
"text": "Max-Min Fairness is a flexible resource allocation mechanism used in most datacenter schedulers. However, an increasing number of jobs have hard placement constraints, restricting the machines they can run on due to special hardware or software requirements. It is unclear how to define, and achieve, max-min fairness in the presence of such constraints. We propose Constrained Max-Min Fairness (CMMF), an extension to max-min fairness that supports placement constraints, and show that it is the only policy satisfying an important property that incentivizes users to pool resources. Optimally computing CMMF is challenging, but we show that a remarkably simple online scheduler, called Choosy, approximates the optimal scheduler well. Through experiments, analysis, and simulations, we show that Choosy on average differs 2% from the optimal CMMF allocation, and lets jobs achieve their fair share quickly.",
"title": ""
},
{
"docid": "ec6fb21b7ae27cc4df67f3d6745ffe34",
"text": "In today's world data is growing very rapidly, which we call as big data. To deal with these large data sets, currently we are using NoSQL databases, as relational database is not capable for handling such data. These schema less NoSQL database allow us to handle unstructured data. Through this paper we are comparing two NoSQL databases MongoDB and CouchBase server, in terms of image storage and retrieval. Aim behind selecting these two databases as both comes under Document store category. Major applications like social media, traffic analysis, criminal database etc. require image database. The motivation behind this paper is to compare database performance in terms of time required to store and retrieve images from database. In this paper, firstly we are going describe advantages of NoSQL databases over SQL, then brief idea about MongoDB and CouchBase and finally comparison of time required to insert various size images in databases and to retrieve various size images using front end tool Java.",
"title": ""
},
{
"docid": "1d53b01ee1a721895a17b7d0f3535a28",
"text": "We present a suite of algorithms for self-organization of wireless sensor networks, in which there is a scalably large number of mainly static nodes with highly constrained energy resources. The protocols further support slow mobility by a subset of the nodes, energy-efficient routing, and formation of ad hoc subnetworks for carrying out cooperative signal processing functions among a set of the nodes. † This research is supported by DARPA contract number F04701-97-C-0010, and was presented in part at the 37 Allerton Conference on Communication, Computing and Control, September 1999. ‡ Corresponding author.",
"title": ""
},
{
"docid": "aeb3e0b089e658b532b3ed6c626898dd",
"text": "Semantics is seen as the key ingredient in the next phase of the Web infrastructure as well as the next generation of information systems applications. In this context, we review some of the reservations expressed about the viability of the Semantic Web. We respond to these by identifying a Semantic Technology that supports the key capabilities also needed to realize the Semantic Web vision, namely representing, acquiring and utilizing knowledge. Given that scalability is a key challenge, we briefly review our observations from developing three classes of real world applications and corresponding technology components: search/browsing, integration, and analytics. We distinguish this proven technology from some parts of the Semantic Web approach and offer subjective remarks which we hope will foster additional debate.",
"title": ""
},
{
"docid": "72a5db33e2ba44880b3801987b399c3d",
"text": "Over the last decade, the ever increasing world-wide demand for early detection of breast cancer at many screening sites and hospitals has resulted in the need of new research avenues. According to the World Health Organization (WHO), an early detection of cancer greatly increases the chances of taking the right decision on a successful treatment plan. The Computer-Aided Diagnosis (CAD) systems are applied widely in the detection and differential diagnosis of many different kinds of abnormalities. Therefore, improving the accuracy of a CAD system has become one of the major research areas. In this paper, a CAD scheme for detection of breast cancer has been developed using deep belief network unsupervised path followed by back propagation supervised path. The construction is back-propagation neural network with Liebenberg Marquardt learning function while weights are initialized from the deep belief network path (DBN-NN). Our technique was tested on the Wisconsin Breast Cancer Dataset (WBCD). The classifier complex gives an accuracy of 99.68% indicating promising results over previously-published studies. The proposed system provides an effective classification model for breast cancer. In addition, we examined the architecture at several train-test partitions. © 2015 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "2c4fed71ee9d658516b017a924ad6589",
"text": "As the concept of Friction stir welding is relatively new, there are many areas, which need thorough investigation to optimize and make it commercially viable. In order to obtain the desired mechanical properties, certain process parameters, like rotational and translation speeds, tool tilt angle, tool geometry etc. are to be controlled. Aluminum alloys of 5xxx series and their welded joints show good resistance to corrosion in sea water. Here, a literature survey has been carried out for the friction stir welding of 5xxx series aluminum alloys.",
"title": ""
},
{
"docid": "77e5724ff3b8984a1296731848396701",
"text": "Temporal networks, i.e., networks in which the interactions among a set of elementary units change over time, can be modelled in terms of timevarying graphs, which are time-ordered sequences of graphs over a set of nodes. In such graphs, the concepts of node adjacency and reachability crucially depend on the exact temporal ordering of the links. Consequently, all the concepts and metrics proposed and used for the characterisation of static complex networks have to be redefined or appropriately extended to time-varying graphs, in order to take into account the effects of time ordering on causality. In this chapter we V. Nicosia ( ) Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK e-mail: [email protected] Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy J. Tang C. Mascolo Computer Laboratory, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK M. Musolesi ( ) School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, UK e-mail: [email protected] G. Russo Dipartimento di Matematica e Informatica, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy V. Latora Laboratorio sui Sistemi Complessi, Scuola Superiore di Catania, Via Valdisavoia 9, 95123 Catania, Italy School of Mathematical Sciences, Queen Mary, University of London, E1 4NS London, UK Dipartimento di Fisica e Astronomia and INFN, Universitá di Catania, Via S. Sofia 64, 95123 Catania, Italy P. Holme and J. Saramäki (eds.), Temporal Networks, Understanding Complex Systems, DOI 10.1007/978-3-642-36461-7 2, © Springer-Verlag Berlin Heidelberg 2013 15 16 V. Nicosia et al. discuss how to represent temporal networks and we review the definitions of walks, paths, connectedness and connected components valid for graphs in which the links fluctuate over time. We then focus on temporal node–node distance, and we discuss how to characterise link persistence and the temporal small-world behaviour in this class of networks. Finally, we discuss the extension of classic centrality measures, including closeness, betweenness and spectral centrality, to the case of time-varying graphs, and we review the work on temporal motifs analysis and the definition of modularity for temporal graphs.",
"title": ""
}
] | scidocsrr |
b720c9f662b395d0237232a6b0c85d5c | Hidden Roles of CSR : Perceived Corporate Social Responsibility as a Preventive against Counterproductive Work Behaviors | [
{
"docid": "92d1abda02a6c6e1c601930bfbb7ed3d",
"text": "In spite of the increasing importance of corporate social responsibility (CSR) and employee job performance, little is still known about the links between the socially responsible actions of organizations and the job performance of their members. In order to explain how employees’ perceptions of CSR influence their job performance, this study first examines the relationships between perceived CSR, organizational identification, job satisfaction, and job performance, and then develops a sequential mediation model by fully integrating these links. The results of structural equation modeling analyses conducted for 250 employees at hotels in South Korea offered strong support for the proposed model. We found that perceived CSR was indirectly and positively associated with job performance sequentially mediated first through organizational identification and then job satisfaction. This study theoretically contributes to the CSR literature by revealing the sequential mechanism through which employees’ perceptions of CSR affect their job performance, and offers practical implications by stressing the importance of employees’ perceptions of CSR. Limitations of this study and future research directions are discussed.",
"title": ""
},
{
"docid": "fb34b610cd933da8c7f863249f32f9a2",
"text": "The purpose of this research was to develop broad, theoretically derived measure(s) of deviant behavior in the workplace. Two scales were developed: a 12-item scale of organizational deviance (deviant behaviors directly harmful to the organization) and a 7-item scale of interpersonal deviance (deviant behaviors directly harmful to other individuals within the organization). These scales were found to have internal reliabilities of .81 and .78, respectively. Confirmatory factor analysis verified that a 2-factor structure had acceptable fit. Preliminary evidence of construct validity is also provided. The implications of this instrument for future empirical research on workplace deviance are discussed.",
"title": ""
}
] | [
{
"docid": "73270e8140d763510d97f7bd2fdd969e",
"text": "Inspired by the progress of deep neural network (DNN) in single-media retrieval, the researchers have applied the DNN to cross-media retrieval. These methods are mainly two-stage learning: the first stage is to generate the separate representation for each media type, and the existing methods only model the intra-media information but ignore the inter-media correlation with the rich complementary context to the intra-media information. The second stage is to get the shared representation by learning the cross-media correlation, and the existing methods learn the shared representation through a shallow network structure, which cannot fully capture the complex cross-media correlation. For addressing the above problems, we propose the cross-media multiple deep network (CMDN) to exploit the complex cross-media correlation by hierarchical learning. In the first stage, CMDN jointly models the intra-media and intermedia information for getting the complementary separate representation of each media type. In the second stage, CMDN hierarchically combines the inter-media and intra-media representations to further learn the rich cross-media correlation by a deeper two-level network strategy, and finally get the shared representation by a stacked network style. Experiment results show that CMDN achieves better performance comparing with several state-of-the-art methods on 3 extensively used cross-media datasets.",
"title": ""
},
{
"docid": "266b705308b6f7c236f54bb327f315ec",
"text": "In this paper, we examine the generalization error of regularized distance metric learning. We show that with appropriate constraints, the generalization error of regularized distance metric learning could be independent from the dimensionality, making it suitable for handling high dimensional data. In addition, we present an efficient online learning algorithm for regularized distance metric learning. Our empirical studies with data classification and face recognition show that the proposed algorithm is (i) effective for distance metric learning when compared to the state-of-the-art methods, and (ii) efficient and robust for high dimensional data.",
"title": ""
},
{
"docid": "4b1c46a58d132e3b168186848122e1d0",
"text": "Recently, there has been considerable interest in providing \"trusted computing platforms\" using hardware~---~TCPA and Palladium being the most publicly visible examples. In this paper we discuss our experience with building such a platform using a traditional time-sharing operating system executing on XOM~---~a processor architecture that provides copy protection and tamper-resistance functions. In XOM, only the processor is trusted; main memory and the operating system are not trusted.Our operating system (XOMOS) manages hardware resources for applications that don't trust it. This requires a division of responsibilities between the operating system and hardware that is unlike previous systems. We describe techniques for providing traditional operating systems services in this context.Since an implementation of a XOM processor does not exist, we use SimOS to simulate the hardware. We modify IRIX 6.5, a commercially available operating system to create xomos. We are then able to analyze the performance and implementation overheads of running an untrusted operating system on trusted hardware.",
"title": ""
},
{
"docid": "f7d56588da8f5c5ac0f1481e5f2286b4",
"text": "Machine learning is an established method of selecting algorithms to solve hard search problems. Despite this, to date no systematic comparison and evaluation of the different techniques has been performed and the performance of existing systems has not been critically compared to other approaches. We compare machine learning techniques for algorithm selection on real-world data sets of hard search problems. In addition to well-established approaches, for the first time we also apply statistical relational learning to this problem. We demonstrate that most machine learning techniques and existing systems perform less well than one might expect. To guide practitioners, we close by giving clear recommendations as to which machine learning techniques are likely to perform well based on our experiments.",
"title": ""
},
{
"docid": "980565c38859db2df10db238d8a4dc61",
"text": "Performing High Voltage (HV) tasks with a multi craft work force create a special set of safety circumstances. This paper aims to present vital information relating to when it is acceptable to use a single or a two-layer soil structure. Also it discusses the implication of the high voltage infrastructure on the earth grid and the safety of this implication under a single or a two-layer soil structure. A multiple case study is investigated to show the importance of using the right soil resistivity structure during the earthing system design. Keywords—Earth Grid, EPR, High Voltage, Soil Resistivity Structure, Step Voltage, Touch Voltage.",
"title": ""
},
{
"docid": "3a75bf4c982d076fce3b4cdcd560881a",
"text": "This project is one of the research topics in Professor William Dally’s group. In this project, we developed a pruning based method to learn both weights and connections for Long Short Term Memory (LSTM). In this method, we discard the unimportant connections in a pretrained LSTM, and make the weight matrix sparse. Then, we retrain the remaining model. After we remaining model is converge, we prune this model again and retrain the remaining model iteratively, until we achieve the desired size of model and performance. This method will save the size of the LSTM as well as prevent overfitting. Our results retrained on NeuralTalk shows that we can discard nearly 90% of the weights without hurting the performance too much. Part of the results in this project will be posted in NIPS 2015.",
"title": ""
},
{
"docid": "64d4776be8e2dbb0fa3b30d6efe5876c",
"text": "This paper presents a novel method for hierarchically organizing large face databases, with application to efficient identity-based face retrieval. The method relies on metric learning with local binary pattern (LBP) features. On one hand, LBP features have proved to be highly resilient to various appearance changes due to illumination and contrast variations while being extremely efficient to calculate. On the other hand, metric learning (ML) approaches have been proved very successful for face verification ‘in the wild’, i.e. in uncontrolled face images with large amounts of variations in pose, expression, appearances, lighting, etc. While such ML based approaches compress high dimensional features into low dimensional spaces using discriminatively learned projections, the complexity of retrieval is still significant for large scale databases (with millions of faces). The present paper shows that learning such discriminative projections locally while organizing the database hierarchically leads to a more accurate and efficient system. The proposed method is validated on the standard Labeled Faces in the Wild (LFW) benchmark dataset with millions of additional distracting face images collected from photos on the internet.",
"title": ""
},
{
"docid": "17f0fbd3ab3b773b5ef9d636700b5af6",
"text": "Motor sequence learning is a process whereby a series of elementary movements is re-coded into an efficient representation for the entire sequence. Here we show that human subjects learn a visuomotor sequence by spontaneously chunking the elementary movements, while each chunk acts as a single memory unit. The subjects learned to press a sequence of 10 sets of two buttons through trial and error. By examining the temporal patterns with which subjects performed a visuomotor sequence, we found that the subjects performed the 10 sets as several clusters of sets, which were separated by long time gaps. While the overall performance time decreased by repeating the same sequence, the clusters became clearer and more consistent. The cluster pattern was uncorrelated with the distance of hand movements and was different across subjects who learned the same sequence. We then split a learned sequence into three segments, while preserving or destroying the clusters in the learned sequence, and shuffled the segments. The performance on the shuffled sequence was more accurate and quicker when the clusters in the original sequence were preserved than when they were destroyed. The results suggest that each cluster is processed as a single memory unit, a chunk, and is necessary for efficient sequence processing. A learned visuomotor sequence is hierarchically represented as chunks that contain several elementary movements. We also found that the temporal patterns of sequence performance transferred from the nondominant to dominant hand, but not vice versa. This may suggest a role of the dominant hemisphere in storage of learned chunks. Together with our previous unit-recording and imaging studies that used the same learning paradigm, we predict specific roles of the dominant parietal area, basal ganglia, and presupplementary motor area in the chunking.",
"title": ""
},
{
"docid": "9b1643284b783f2947be11f16ae8d942",
"text": "We investigate the task of modeling opendomain, multi-turn, unstructured, multiparticipant, conversational dialogue. We specifically study the effect of incorporating different elements of the conversation. Unlike previous efforts, which focused on modeling messages and responses, we extend the modeling to long context and participant’s history. Our system does not rely on handwritten rules or engineered features; instead, we train deep neural networks on a large conversational dataset. In particular, we exploit the structure of Reddit comments and posts to extract 2.1 billion messages and 133 million conversations. We evaluate our models on the task of predicting the next response in a conversation, and we find that modeling both context and participants improves prediction accuracy.",
"title": ""
},
{
"docid": "35a298d5ec169832c3faf2e30d95e1a4",
"text": "© 2 0 0 1 m a s s a c h u s e t t s i n s t i t u t e o f t e c h n o l o g y, c a m b r i d g e , m a 0 2 1 3 9 u s a — w w w. a i. m i t. e d u",
"title": ""
},
{
"docid": "fe08f3e1dc4fe2d71059b483c8532e88",
"text": "Digital asset management (DAM) has increasing benefits in booming global Internet economy, but it is still a great challenge for providing an effective way to manage, store, ingest, organize and retrieve digital asset. To do it, we present a new digital asset management platform, called DAM-Chain, with Transaction-based Access Control (TBAC) which integrates the distribution ABAC model and the blockchain technology. In this platform, the ABAC provides flexible and diverse authorization mechanisms for digital asset escrowed into blockchain while the blockchain's transactions serve as verifiable and traceable medium of access request procedure. We also present four types of transactions to describe the TBAC access control procedure, and provide the algorithms of these transactions corresponding to subject registration, object escrowing and publication, access request and grant. By maximizing the strengths of both ABAC and blockchain, this platform can support flexible and diverse permission management, as well as verifiable and transparent access authorization process in an open decentralized environment.",
"title": ""
},
{
"docid": "8eb161e363d55631148ed3478496bbd5",
"text": "This paper proposes a new power-factor-correction (PFC) topology, and explains its operation principle, its control mechanism, related application problems followed by experimental results. In this proposed topology, critical-conduction-mode (CRM) interleaved technique is applied to a bridgeless PFC in order to achieve high efficiency by combining benefits of each topology. This application is targeted toward low to middle power applications that normally employs continuous-conductionmode boost converter. key words: PFC, Interleaved, critical-conduction-mode, totem-pole",
"title": ""
},
{
"docid": "dfbf5c12d8e5a8e5e81de5d51f382185",
"text": "Demand response (DR) is very important in the future smart grid, aiming to encourage consumers to reduce their demand during peak load hours. However, if binary decision variables are needed to specify start-up time of a particular appliance, the resulting mixed integer combinatorial problem is in general difficult to solve. In this paper, we study a versatile convex programming (CP) DR optimization framework for the automatic load management of various household appliances in a smart home. In particular, an L1 regularization technique is proposed to deal with schedule-based appliances (SAs), for which their on/off statuses are governed by binary decision variables. By relaxing these variables from integer to continuous values, the problem is reformulated as a new CP problem with an additional L1 regularization term in the objective. This allows us to transform the original mixed integer problem into a standard CP problem. Its major advantage is that the overall DR optimization problem remains to be convex and therefore the solution can be found efficiently. Moreover, a wide variety of appliances with different characteristics can be flexibly incorporated. Simulation result shows that the energy scheduling of SAs and other appliances can be determined simultaneously using the proposed CP formulation.",
"title": ""
},
{
"docid": "a4d177e695f83ddbaad38b5aa5c34baa",
"text": "Introduction Digital technologies play an increasingly important role in shaping the profile of human thought and action. In the few short decades since its invention, for example, the World Wide Web has transformed the way we shop, date, socialize and undertake scientific endeavours. We are also witnessing an unprecedented rate of technological innovation and change, driven, at least in part, by exponential rates of growth in computing power and performance. The technological landscape is thus a highly dynamic one – new technologies are being introduced all the time, and the rate of change looks set to continue unabated. In view of all this, it is natural to wonder about the effects of new technology on both ourselves and the societies in which we live.",
"title": ""
},
{
"docid": "1bcb0d930848fab3e5b8aee3c983e45b",
"text": "BACKGROUND\nLycopodium clavatum (Lyc) is a widely used homeopathic medicine for the liver, urinary and digestive disorders. Recently, acetyl cholinesterase (AchE) inhibitory activity has been found in Lyc alkaloid extract, which could be beneficial in dementia disorder. However, the effect of Lyc has not yet been explored in animal model of memory impairment and on cerebral blood flow.\n\n\nAIM\nThe present study was planned to explore the effect of Lyc on learning and memory function and cerebral blood flow (CBF) in intracerebroventricularly (ICV) administered streptozotocin (STZ) induced memory impairment in rats.\n\n\nMATERIALS AND METHODS\nMemory deficit was induced by ICV administration of STZ (3 mg/kg) in rats on 1st and 3rd day. Male SD rats were treated with Lyc Mother Tincture (MT) 30, 200 and 1000 for 17 days. Learning and memory was evaluated by Morris water maze test on 14th, 15th and 16th day. CBF was measured by Laser Doppler flow meter on 17th day.\n\n\nRESULTS\nSTZ (ICV) treated rats showed impairment in learning and memory along with reduced CBF. Lyc MT and 200 showed improvement in learning and memory. There was increased CBF in STZ (ICV) treated rats at all the potencies of Lyc studied.\n\n\nCONCLUSION\nThe above study suggests that Lyc may be used as a drug of choice in condition of memory impairment due to its beneficial effect on CBF.",
"title": ""
},
{
"docid": "a023b7a853733b92287efcddc67976ae",
"text": "Intensive use of e-business can provide number of opportunities and actual benefits to companies of all activities and sizes. In general, through the use of web sites companies can create global presence and widen business boundaries. Many organizations now have websites to complement their other activities, but it is likely that a smaller proportion really know how successful their sites are and in what extent they comply with business objectives. A key enabler of web sites measurement is web site analytics and metrics. Web sites analytics especially refers to the use of data collected from a web site to determine which aspects of the web site work towards the business objectives. Advanced web analytics must play an important role in overall company strategy and should converge to web intelligence – a specific part of business intelligence which collect and analyze information collected from web sites and apply them in relevant ‘business’ context. This paper examines the importance of measuring the web site quality of the Croatian hotels. Wide range of web site metrics are discussed and finally a set of 8 dimensions and 44 attributes chosen for the evaluation of Croatian hotel’s web site quality. The objective of the survey conducted on the 30 hotels was to identify different groups of hotel web sites in relation to their quality measured with specific web metrics. Key research question was: can hotel web sites be placed into meaningful groups by consideration of variation in web metrics and number of hotel stars? To confirm the reliability of chosen metrics a Cronbach's alpha test was conducted. Apart from descriptive statistics tools, to answer the posed research question, clustering analysis was conducted and the characteristics of the three clusters were considered. Experiences and best practices of the hotel web sites clusters are taken as the prime source of recommendation for improving web sites quality level. Key-Words: web metrics, hotel web sites, web analytics, web site audit, web site quality, cluster analysis",
"title": ""
},
{
"docid": "30ba7b3cf3ba8a7760703a90261d70eb",
"text": "Starch is a major storage product of many economically important crops such as wheat, rice, maize, tapioca, and potato. A large-scale starch processing industry has emerged in the last century. In the past decades, we have seen a shift from the acid hydrolysis of starch to the use of starch-converting enzymes in the production of maltodextrin, modified starches, or glucose and fructose syrups. Currently, these enzymes comprise about 30% of the world’s enzyme production. Besides the use in starch hydrolysis, starch-converting enzymes are also used in a number of other industrial applications, such as laundry and porcelain detergents or as anti-staling agents in baking. A number of these starch-converting enzymes belong to a single family: the -amylase family or family13 glycosyl hydrolases. This group of enzymes share a number of common characteristics such as a ( / )8 barrel structure, the hydrolysis or formation of glycosidic bonds in the conformation, and a number of conserved amino acid residues in the active site. As many as 21 different reaction and product specificities are found in this family. Currently, 25 three-dimensional (3D) structures of a few members of the -amylase family have been determined using protein crystallization and X-ray crystallography. These data in combination with site-directed mutagenesis studies have helped to better understand the interactions between the substrate or product molecule and the different amino acids found in and around the active site. This review illustrates the reaction and product diversity found within the -amylase family, the mechanistic principles deduced from structure–function relationship structures, and the use of the enzymes of this family in industrial applications. © 2002 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "5184b25a4d056b861f5dbae34300344a",
"text": "AFFILIATIONS: asHouri, Hsu, soroosHian, and braitHwaite— Center for Hydrometeorology and Remote Sensing, Henry Samueli School of Engineering, Department of Civil and Environmental Engineering, University of California, Irvine, Irvine, California; Knapp and neLson—NOAA/National Climatic Data Center, Asheville, North Carolina; CeCiL—Global Science & Technology, Inc., Asheville, North Carolina; prat—Cooperative Institute for Climate and Satellites, North Carolina State University, and NOAA/National Climatic Data Center, Asheville, North Carolina CORRESPONDING AUTHOR: Hamed Ashouri, Center for Hydrometeorology and Remote Sensing, Department of Civil and Environmental Engineering, University of California, Irvine, CA 92697 E-mail: [email protected]",
"title": ""
},
{
"docid": "0e74994211d0e3c1e85ba0c85aba3df5",
"text": "Images of faces manipulated to make their shapes closer to the average are perceived as more attractive. The influences of symmetry and averageness are often confounded in studies based on full-face views of faces. Two experiments are reported that compared the effect of manipulating the averageness of female faces in profile and full-face views. Use of a profile view allows a face to be \"morphed\" toward an average shape without creating an image that becomes more symmetrical. Faces morphed toward the average were perceived as more attractive in both views, but the effect was significantly stronger for full-face views. Both full-face and profile views morphed away from the average shape were perceived as less attractive. It is concluded that the effect of averageness is independent of any effect of symmetry on the perceived attractiveness of female faces.",
"title": ""
},
{
"docid": "0f3b2081ecd311b7b2555091aaca2571",
"text": "Maximum Power Point Tracking (MPPT) is widely used control technique to extract maximum power available from the solar cell of photovoltaic (PV) module. Since the solar cells have non-linear i–v characteristics. The efficiency of PV module is very low and power output depends on solar insolation level and ambient temperature, so maximization of power output with greater efficiency is of special interest. Moreover there is great loss of power due to mismatch of source and load. So, to extract maximum power from solar panel a MPPT needs to be designed. The objective of the paper is to present a novel cost effective and efficient microcontroller based MPPT system for solar photovoltaic system to ensure fast maximum power point operation at all fast changing environmental conditions. The proposed controller scheme utilizes PWM techniques to regulate the output power of boost DC/DC converter at its maximum possible value and simultaneously controls the charging process of battery. Incremental Conductance algorithm is implemented to track maximum power point. For the feasibility study, parameter extraction, model evaluation and analysis of converter system design a MATLAB/Simulink model is demonstrated and simulated for a typical 40W solar panel from Kyocera KC-40 for hardware implementation and verification. Finally, a hardware model is designed and tested in lab at different operating conditions. Further, MPPT system has been tested with Solar Panel at different solar insolation level and temperature. The resulting system has high-efficiency, lower-cost, very fast tracking speed and can be easily modified for additional control function for future use.",
"title": ""
}
] | scidocsrr |
a613c67f9f24fa382437b912d38cd586 | Automated Diagnosis of Glaucoma Using Texture and Higher Order Spectra Features | [
{
"docid": "e494f926c9b2866d2c74032d200e4d0a",
"text": "This chapter describes a new algorithm for training Support Vector Machines: Sequential Minimal Optimization, or SMO. Training a Support Vector Machine (SVM) requires the solution of a very large quadratic programming (QP) optimization problem. SMO breaks this large QP problem into a series of smallest possible QP problems. These small QP problems are solved analytically, which avoids using a time-consuming numerical QP optimization as an inner loop. The amount of memory required for SMO is linear in the training set size, which allows SMO to handle very large training sets. Because large matrix computation is avoided, SMO scales somewhere between linear and quadratic in the training set size for various test problems, while a standard projected conjugate gradient (PCG) chunking algorithm scales somewhere between linear and cubic in the training set size. SMO's computation time is dominated by SVM evaluation, hence SMO is fastest for linear SVMs and sparse data sets. For the MNIST database, SMO is as fast as PCG chunking; while for the UCI Adult database and linear SVMs, SMO can be more than 1000 times faster than the PCG chunking algorithm.",
"title": ""
},
{
"docid": "0a3a349e6b66d822cd826f633ba9f066",
"text": "Diabetic retinopathy (DR) is a condition where the retina is damaged due to fluid leaking from the blood vessels into the retina. In extreme cases, the patient will become blind. Therefore, early detection of diabetic retinopathy is crucial to prevent blindness. Various image processing techniques have been used to identify the different stages of diabetes retinopathy. The application of non-linear features of the higher-order spectra (HOS) was found to be efficient as it is more suitable for the detection of shapes. The aim of this work is to automatically identify the normal, mild DR, moderate DR, severe DR and prolific DR. The parameters are extracted from the raw images using the HOS techniques and fed to the support vector machine (SVM) classifier. This paper presents classification of five kinds of eye classes using SVM classifier. Our protocol uses, 300 subjects consisting of five different kinds of eye disease conditions. We demonstrate a sensitivity of 82% for the classifier with the specificity of 88%.",
"title": ""
}
] | [
{
"docid": "a13ca3d83e6ec1693bd9ad53323d2f63",
"text": "BACKGROUND\nThis study examined longitudinal patterns of heroin use, other substance use, health, mental health, employment, criminal involvement, and mortality among heroin addicts.\n\n\nMETHODS\nThe sample was composed of 581 male heroin addicts admitted to the California Civil Addict Program (CAP) during the years 1962 through 1964; CAP was a compulsory drug treatment program for heroin-dependent criminal offenders. This 33-year follow-up study updates information previously obtained from admission records and 2 face-to-face interviews conducted in 1974-1975 and 1985-1986; in 1996-1997, at the latest follow-up, 284 were dead and 242 were interviewed.\n\n\nRESULTS\nIn 1996-1997, the mean age of the 242 interviewed subjects was 57.4 years. Age, disability, years since first heroin use, and heavy alcohol use were significant correlates of mortality. Of the 242 interviewed subjects, 20.7% tested positive for heroin (with additional 9.5% urine refusal and 14.0% incarceration, for whom urinalyses were unavailable), 66.9% reported tobacco use, 22.1% were daily alcohol drinkers, and many reported illicit drug use (eg, past-year heroin use was 40.5%; marijuana, 35.5%; cocaine, 19.4%; crack, 10.3%; amphetamine, 11.6%). The group also reported high rates of health problems, mental health problems, and criminal justice system involvement. Long-term heroin abstinence was associated with less criminality, morbidity, psychological distress, and higher employment.\n\n\nCONCLUSIONS\nWhile the number of deaths increased steadily over time, heroin use patterns were remarkably stable for the group as a whole. For some, heroin addiction has been a lifelong condition associated with severe health and social consequences.",
"title": ""
},
{
"docid": "2f5ccd63b8f23300c090cb00b6bbe045",
"text": "Computing has revolutionized the biological sciences over the past several decades, such that virtually all contemporary research in molecular biology, biochemistry, and other biosciences utilizes computer programs. The computational advances have come on many fronts, spurred by fundamental developments in hardware, software, and algorithms. These advances have influenced, and even engendered, a phenomenal array of bioscience fields, including molecular evolution and bioinformatics; genome-, proteome-, transcriptome- and metabolome-wide experimental studies; structural genomics; and atomistic simulations of cellular-scale molecular assemblies as large as ribosomes and intact viruses. In short, much of post-genomic biology is increasingly becoming a form of computational biology. The ability to design and write computer programs is among the most indispensable skills that a modern researcher can cultivate. Python has become a popular programming language in the biosciences, largely because (i) its straightforward semantics and clean syntax make it a readily accessible first language; (ii) it is expressive and well-suited to object-oriented programming, as well as other modern paradigms; and (iii) the many available libraries and third-party toolkits extend the functionality of the core language into virtually every biological domain (sequence and structure analyses, phylogenomics, workflow management systems, etc.). This primer offers a basic introduction to coding, via Python, and it includes concrete examples and exercises to illustrate the language's usage and capabilities; the main text culminates with a final project in structural bioinformatics. A suite of Supplemental Chapters is also provided. Starting with basic concepts, such as that of a \"variable,\" the Chapters methodically advance the reader to the point of writing a graphical user interface to compute the Hamming distance between two DNA sequences.",
"title": ""
},
{
"docid": "d59e21319b9915c2f6d7a8931af5503c",
"text": "The effect of directional antenna elements in uniform circular arrays (UCAs) for direction of arrival (DOA) estimation is studied in this paper. While the vast majority of previous work assumes isotropic antenna elements or omnidirectional dipoles, this work demonstrates that improved DOA estimation accuracy and increased bandwidth is achievable with appropriately-designed directional antennas. The Cramer-Rao Lower Bound (CRLB) is derived for UCAs with directional antennas and is compared to isotropic antennas for 4- and 8-element arrays using a theoretical radiation pattern. The directivity that minimizes the CRLB is identified and microstrip patch antennas approximating the optimal theoretical gain pattern are designed to compare the resulting DOA estimation accuracy with a UCA using dipole antenna elements. Simulation results show improved DOA estimation accuracy and robustness using microstrip patch antennas as opposed to conventional dipoles. Additionally, it is shown that the bandwidth of a UCA for DOA estimation is limited only by the broadband characteristics of the directional antenna elements and not by the electrical size of the array as is the case with omnidirectional antennas.",
"title": ""
},
{
"docid": "c55057c6231d472477bf93339e6b5292",
"text": "BACKGROUND\nAcute hospital discharge delays are a pressing concern for many health care administrators. In Canada, a delayed discharge is defined by the alternate level of care (ALC) construct and has been the target of many provincial health care strategies. Little is known on the patient characteristics that influence acute ALC length of stay. This study examines which characteristics drive acute ALC length of stay for those awaiting nursing home admission.\n\n\nMETHODS\nPopulation-level administrative and assessment data were used to examine 17,111 acute hospital admissions designated as alternate level of care (ALC) from a large Canadian health region. Case level hospital records were linked to home care administrative and assessment records to identify and characterize those ALC patients that account for the greatest proportion of acute hospital ALC days.\n\n\nRESULTS\nALC patients waiting for nursing home admission accounted for 41.5% of acute hospital ALC bed days while only accounting for 8.8% of acute hospital ALC patients. Characteristics that were significantly associated with greater ALC lengths of stay were morbid obesity (27 day mean deviation, 99% CI = ±14.6), psychiatric diagnosis (13 day mean deviation, 99% CI = ±6.2), abusive behaviours (12 day mean deviation, 99% CI = ±10.7), and stroke (7 day mean deviation, 99% CI = ±5.0). Overall, persons with morbid obesity, a psychiatric diagnosis, abusive behaviours, or stroke accounted for 4.3% of all ALC patients and 23% of all acute hospital ALC days between April 1st 2009 and April 1st, 2011. ALC patients with the identified characteristics had unique clinical profiles.\n\n\nCONCLUSIONS\nA small number of patients with non-medical days waiting for nursing home admission contribute to a substantial proportion of total non-medical days in acute hospitals. Increases in nursing home capacity or changes to existing funding arrangements should target the sub-populations identified in this investigation to maximize effectiveness. Specifically, incentives should be introduced to encourage nursing homes to accept acute patients with the least prospect for community-based living, while acute patients with the greatest prospect for community-based living are discharged to transitional care or directly to community-based care.",
"title": ""
},
{
"docid": "40e74f062a6d4c969d87e57e7566bc9e",
"text": "Bullying is a serious public health concern that is associated with significant negative mental, social, and physical outcomes. Technological advances have increased adolescents' use of social media, and online communication platforms have exposed adolescents to another mode of bullying- cyberbullying. Prevention and intervention materials, from websites and tip sheets to classroom curriculum, have been developed to help youth, parents, and teachers address cyberbullying. While youth and parents are willing to disclose their experiences with bullying to their health care providers, these disclosures need to be taken seriously and handled in a caring manner. Health care providers need to include questions about bullying on intake forms to encourage these disclosures. The aim of this article is to examine the current status of cyberbullying prevention and intervention. Research support for several school-based intervention programs is summarised. Recommendations for future research are provided.",
"title": ""
},
{
"docid": "5f66a3faa36f273831b13b4345c2bf15",
"text": "The structure of blood vessels in the sclerathe white part of the human eye, is unique for every individual, hence it is best suited for human identification. However, this is a challenging research because it has a high insult rate (the number of occasions the valid user is rejected). In this survey firstly a brief introduction is presented about the sclera based biometric authentication. In addition, a literature survey is presented. We have proposed simplified method for sclera segmentation, a new method for sclera pattern enhancement based on histogram equalization and line descriptor based feature extraction and pattern matching with the help of matching score between the two segment descriptors. We attempt to increase the awareness about this topic, as much of the research is not done in this area.",
"title": ""
},
{
"docid": "e685a22b6f7b20fb1289923e86e467c5",
"text": "Nowadays, with the growth in the use of search engines, the extension of spying programs and anti -terrorism prevention, several researches focused on text analysis. In this sense, lemmatization and stemming are two common requirements of these researches. They include reducing different grammatical forms of a word and bring them to a common base form. In what follows, we will discuss these treatment methods on arabic text, especially the Khoja Stemmer, show their limits and provide new tools to improve it.",
"title": ""
},
{
"docid": "31fb6df8d386f28b63140ee2ad8d11ea",
"text": "The problem and the solution.The majority of the literature on creativity has focused on the individual, yet the social environment can influence both the level and frequency of creative behavior. This article reviews the literature for factors related to organizational culture and climate that act as supports and impediments to organizational creativity and innovation. The work of Amabile, Kanter, Van de Ven, Angle, and others is reviewed and synthesized to provide an integrative understanding of the existing literature. Implications for human resource development research and practice are discussed.",
"title": ""
},
{
"docid": "d2ca6d41e582c798bc7c53e932fd8dec",
"text": "How to measure usability is an important question in HCI research and user interface evaluation. We review current practice in measuring usability by categorizing and discussing usability measures from 180 studies published in core HCI journals and proceedings. The discussion distinguish several problems with the measures, including whether they actually measure usability, if they cover usability broadly, how they are reasoned about, and if they meet recommendations on how to measure usability. In many studies, the choice of and reasoning about usability measures fall short of a valid and reliable account of usability as quality-in-use of the user interface being studied. Based on the review, we discuss challenges for studies of usability and for research into how to measure usability. The challenges are to distinguish and empirically compare subjective and objective measures of usability; to focus on developing and employing measures of learning and retention; to study long-term use and usability; to extend measures of satisfaction beyond post-use questionnaires; to validate and standardize the host of subjective satisfaction questionnaires used; to study correlations between usability measures as a means for validation; and to use both micro and macro tasks and corresponding measures of usability. In conclusion, we argue that increased attention to the problems identified and challenges discussed may strengthen studies of usability and usability research. r 2005 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "5bece01bed7c5a9a2433d95379882a37",
"text": "n The polarization of electromagnetic signals is an important feature in the design of modern radar and telecommunications. Standard electromagnetic theory readily shows that a linearly polarized plane wave propagating in free space consists of two equal but counter-rotating components of circular polarization. In magnetized media, these circular modes can be arranged to produce the nonreciprocal propagation effects that are the basic properties of isolator and circulator devices. Independent phase control of right-hand (+) and left-hand (–) circular waves is accomplished by splitting their propagation velocities through differences in the e ± m ± parameter. A phenomenological analysis of the permeability m and permittivity e in dispersive media serves to introduce the corresponding magneticand electric-dipole mechanisms of interaction length with the propagating signal. As an example of permeability dispersion, a Lincoln Laboratory quasi-optical Faradayrotation isolator circulator at 35 GHz (l ~ 1 cm) with a garnet-ferrite rotator element is described. At infrared wavelengths (l = 1.55 mm), where fiber-optic laser sources also require protection by passive isolation of the Faraday-rotation principle, e rather than m provides the dispersion, and the frequency is limited to the quantum energies of the electric-dipole atomic transitions peculiar to the molecular structure of the magnetic garnet. For optimum performance, bismuth additions to the garnet chemical formula are usually necessary. Spectroscopic and molecular theory models developed at Lincoln Laboratory to explain the bismuth effects are reviewed. In a concluding section, proposed advances in present technology are discussed in the context of future radar and telecommunications challenges.",
"title": ""
},
{
"docid": "4d79d71c019c0f573885ffa2bc67f48b",
"text": "In this article, we provide a basic introduction to CMOS image-sensor technology, design and performance limits and present recent developments and future directions in this area. We also discuss image-sensor operation and describe the most popular CMOS image-sensor architectures. We note the main non-idealities that limit CMOS image sensor performance, and specify several key performance measures. One of the most important advantages of CMOS image sensors over CCDs is the ability to integrate sensing with analog and digital processing down to the pixel level. Finally, we focus on recent developments and future research directions that are enabled by pixel-level processing, the applications of which promise to further improve CMOS image sensor performance and broaden their applicability beyond current markets.",
"title": ""
},
{
"docid": "a492dcdbb9ec095cdfdab797c4b4e659",
"text": "We present a new class of methods for high-dimensional nonparametric regression and classification called sparse additive models (SpAM). Our methods combine ideas from sparse linear modeling and additive nonparametric regression. We derive an algorithm for fitting the models that is practical and effective even when the number of covariates is larger than the sample size. SpAM is essentially a functional version of the grouped lasso of Yuan and Lin (2006). SpAM is also closely related to the COSSO model of Lin and Zhang (2006), but decouples smoothing and sparsity, enabling the use of arbitrary nonparametric smoothers. We give an analysis of the theoretical properties of sparse additive models, and present empirical results on synthetic and real data, showing that SpAM can be effective in fitting sparse nonparametric models in high dimensional data.",
"title": ""
},
{
"docid": "5bc183ebfcc9280dae0c15454085d95d",
"text": "In this paper a criminal detection framework that could help policemen to recognize the face of a criminal or a suspect is proposed. The framework is a client-server video based face recognition surveillance in the real-time. The framework applies face detection and tracking using Android mobile devices at the client side and video based face recognition at the server side. This paper focuses on the development of the client side of the proposed framework, face detection and tracking using Android mobile devices. For the face detection stage, robust Viola-Jones algorithm that is not affected by illuminations is used. The face tracking stage is based on Optical Flow algorithm. Optical Flow is implemented in the proposed framework with two feature extraction methods, Fast Corner Features, and Regular Features. The proposed face detection and tracking is implemented using Android studio and OpenCV library, and tested using Sony Xperia Z2 Android 5.1 Lollipop Smartphone. Experiments show that face tracking using Optical Flow with Regular Features achieves a higher level of accuracy and efficiency than Optical Flow with Fast Corner Features.",
"title": ""
},
{
"docid": "711c950873c784a0c80217c83f81070c",
"text": "Accelerators are special purpose processors designed to speed up compute-intensive sections of applications. Two extreme endpoints in the spectrum of possible accelerators are FPGAs and GPUs, which can often achieve better performance than CPUs on certain workloads. FPGAs are highly customizable, while GPUs provide massive parallel execution resources and high memory bandwidth. Applications typically exhibit vastly different performance characteristics depending on the accelerator. This is an inherent problem attributable to architectural design, middleware support and programming style of the target platform. For the best application-to-accelerator mapping, factors such as programmability, performance, programming cost and sources of overhead in the design flows must be all taken into consideration. In general, FPGAs provide the best expectation of performance, flexibility and low overhead, while GPUs tend to be easier to program and require less hardware resources. We present a performance study of three diverse applications - Gaussian elimination, data encryption standard (DES), and Needleman-Wunsch - on an FPGA, a GPU and a multicore CPU system. We perform a comparative study of application behavior on accelerators considering performance and code complexity. Based on our results, we present an application characteristic to accelerator platform mapping, which can aid developers in selecting an appropriate target architecture for their chosen application.",
"title": ""
},
{
"docid": "18498166845b27890110c3ca0cd43d86",
"text": "Raine Mäntysalo The purpose of this article is to make an overview of postWWII urban planning theories from the point of view of participation. How have the ideas of public accountability, deliberative democracy and involvement of special interests developed from one theory to another? The urban planning theories examined are rational-comprehensive planning theory, advocacy planning theory, incrementalist planning theory and the two branches of communicative planning theory: planning as consensus-seeking and planning as management of conflicts.",
"title": ""
},
{
"docid": "ce1384d061248cbb96e77ea482b2ba62",
"text": "Preventable behaviors contribute to many life threatening health problems. Behavior-change technologies have been deployed to modify these, but such systems typically draw on traditional behavioral theories that overlook affect. We examine the importance of emotion tracking for behavior change. First, we conducted interviews to explore how emotions influence unwanted behaviors. Next, we deployed a system intervention, in which 35 participants logged information for a self-selected, unwanted behavior (e.g., smoking or overeating) over 21 days. 16 participants engaged in standard behavior tracking using a Fact-Focused system to record objective information about goals. 19 participants used an Emotion-Focused system to record emotional consequences of behaviors. Emotion-Focused logging promoted more successful behavior change and analysis of logfiles revealed mechanisms for success: greater engagement of negative affect for unsuccessful days and increased insight were key to motivating change. We present design implications to improve behavior-change technologies with emotion tracking.",
"title": ""
},
{
"docid": "79934e1cb9a6c07fb965da9674daeb69",
"text": "BACKGROUND\nAtrophic scars can complicate moderate and severe acne. There are, at present, several modalities of treatment with different results. Percutaneous collagen induction (PCI) has recently been proposed as a simple and effective therapeutic option for the management of atrophic scars.\n\n\nOBJECTIVE\nThe aim of our study was to analyze the efficacy and safety of percutaneous collagen induction for the treatment of acne scarring in different skin phototypes.\n\n\nMETHODS & MATERIALS\nA total of 60 patients of skin types phototype I to VI were included in the study. They were divided into three groups before beginning treatment: Group A (phototypes I to II), Group B (phototypes III to V), and Group C (phototypes VI). Each patient had three treatments at monthly intervals. The aesthetic improvement was evaluated by using a Global Aesthetic Improvement Scale (GAIS), and analyzed statistically by computerized image analysis of the patients' photographs. The differences in the GAIS scores in the different time-points of each group were found using the Wilcoxon's test for nonparametric-dependent continuous variables. Computerized image analysis of silicone replicas was used to quantify the irregularity of the surface micro-relief with Fast Fourier Transformation (FFT); average values of gray were obtained along the x- and y-axes. The calculated indexes were the integrals of areas arising from the distribution of pixels along the axes.\n\n\nRESULTS\nAll patients completed the study. The Wilcoxon's test for nonparametric-dependent continuous variables showed a statistically significant (p < 0.05) reduction in severity grade of acne scars at T5 compared to baseline (T1). The analysis of the surface micro-relief performed on skin replicas showed a decrease in the degree of irregularity of skin texture in all three groups of patients, with an average reduction of 31% in both axes after three sessions. No short- or long-term dyschromia was observed.\n\n\nCONCLUSION\nPCI offers a simple and safe modality to improve the appearance of acne scars without risk of dyspigmentation in patient of all skin types.",
"title": ""
},
{
"docid": "1dc32737d1c6aea101258e5687fc8545",
"text": "Individuals with Binge Eating Disorder (BED) often evidence comorbid Substance Use Disorders (SUD), resulting in poor outcome. This study is the first to examine treatment outcome for this concurrent disordered population. In this pilot study, 38 individuals diagnosed with BED and SUD participated in a 16-week group Mindfulness-Action Based Cognitive Behavioral Therapy (MACBT). Participants significantly improved on measures of objective binge eating episodes; disordered eating attitudes; alcohol and drug addiction severity; and depression. Taken together, MACBT appears to hold promise in treating individuals with co-existing BED-SUD.",
"title": ""
},
{
"docid": "0858f3c76ea9570eeae23c33307f2eaf",
"text": "Geometrical validation around the Calpha is described, with a new Cbeta measure and updated Ramachandran plot. Deviation of the observed Cbeta atom from ideal position provides a single measure encapsulating the major structure-validation information contained in bond angle distortions. Cbeta deviation is sensitive to incompatibilities between sidechain and backbone caused by misfit conformations or inappropriate refinement restraints. A new phi,psi plot using density-dependent smoothing for 81,234 non-Gly, non-Pro, and non-prePro residues with B < 30 from 500 high-resolution proteins shows sharp boundaries at critical edges and clear delineation between large empty areas and regions that are allowed but disfavored. One such region is the gamma-turn conformation near +75 degrees,-60 degrees, counted as forbidden by common structure-validation programs; however, it occurs in well-ordered parts of good structures, it is overrepresented near functional sites, and strain is partly compensated by the gamma-turn H-bond. Favored and allowed phi,psi regions are also defined for Pro, pre-Pro, and Gly (important because Gly phi,psi angles are more permissive but less accurately determined). Details of these accurate empirical distributions are poorly predicted by previous theoretical calculations, including a region left of alpha-helix, which rates as favorable in energy yet rarely occurs. A proposed factor explaining this discrepancy is that crowding of the two-peptide NHs permits donating only a single H-bond. New calculations by Hu et al. [Proteins 2002 (this issue)] for Ala and Gly dipeptides, using mixed quantum mechanics and molecular mechanics, fit our nonrepetitive data in excellent detail. To run our geometrical evaluations on a user-uploaded file, see MOLPROBITY (http://kinemage.biochem.duke.edu) or RAMPAGE (http://www-cryst.bioc.cam.ac.uk/rampage).",
"title": ""
},
{
"docid": "57b35e32b92b54fc1ea7724e73b26f39",
"text": "The authors examined relations between the Big Five personality traits and academic outcomes, specifically SAT scores and grade-point average (GPA). Openness was the strongest predictor of SAT verbal scores, and Conscientiousness was the strongest predictor of both high school and college GPA. These relations replicated across 4 independent samples and across 4 different personality inventories. Further analyses showed that Conscientiousness predicted college GPA, even after controlling for high school GPA and SAT scores, and that the relation between Conscientiousness and college GPA was mediated, both concurrently and longitudinally, by increased academic effort and higher levels of perceived academic ability. The relation between Openness and SAT verbal scores was independent of academic achievement and was mediated, both concurrently and longitudinally, by perceived verbal intelligence. Together, these findings show that personality traits have independent and incremental effects on academic outcomes, even after controlling for traditional predictors of those outcomes. ((c) 2007 APA, all rights reserved).",
"title": ""
}
] | scidocsrr |
cc7875ac90d3a8b3bcd7eb0e7a7fa1df | FEDD: Feature Extraction for Explicit Concept Drift Detection in time series | [
{
"docid": "50d63f05e453468f8e5234910e3d86d1",
"text": "0167-8655/$ see front matter 2011 Published by doi:10.1016/j.patrec.2011.08.019 ⇑ Corresponding author. Tel.: +44 (0) 2075940990; E-mail addresses: [email protected], gr203@i ic.ac.uk (N.M. Adams), [email protected] (D.K. Tas Hand). Classifying streaming data requires the development of methods which are computationally efficient and able to cope with changes in the underlying distribution of the stream, a phenomenon known in the literature as concept drift. We propose a new method for detecting concept drift which uses an exponentially weighted moving average (EWMA) chart to monitor the misclassification rate of an streaming classifier. Our approach is modular and can hence be run in parallel with any underlying classifier to provide an additional layer of concept drift detection. Moreover our method is computationally efficient with overhead O(1) and works in a fully online manner with no need to store data points in memory. Unlike many existing approaches to concept drift detection, our method allows the rate of false positive detections to be controlled and kept constant over time. 2011 Published by Elsevier B.V.",
"title": ""
},
{
"docid": "8b63800da2019180d266297647e3dbc0",
"text": "Most of the work in machine learning assume that examples are generated at random according to some stationary probability distribution. In this work we study the problem of learning when the class-probability distribution that generate the examples changes over time. We present a method for detection of changes in the probability distribution of examples. A central idea is the concept of context: a set of contiguous examples where the distribution is stationary. The idea behind the drift detection method is to control the online error-rate of the algorithm. The training examples are presented in sequence. When a new training example is available, it is classified using the actual model. Statistical theory guarantees that while the distribution is stationary, the error wil decrease. When the distribution changes, the error will increase. The method controls the trace of the online error of the algorithm. For the actual context we define a warning level, and a drift level. A new context is declared, if in a sequence of examples, the error increases reaching the warning level at example kw, and the drift level at example kd. This is an indication of a change in the distribution of the examples. The algorithm learns a new model using only the examples since kw. The method was tested with a set of eight artificial datasets and a real world dataset. We used three learning algorithms: a perceptron, a neural network and a decision tree. The experimental results show a good performance detecting drift and also with learning the new concept. We also observe that the method is independent of the learning algorithm.",
"title": ""
}
] | [
{
"docid": "327bbbee0087e15db04780291ded9fe6",
"text": "Semantic Reliability is a novel correctness criterion for multicast protocols based on the concept of message obsolescence: A message becomes obsolete when its content or purpose is superseded by a subsequent message. By exploiting obsolescence, a reliable multicast protocol may drop irrelevant messages to find additional buffer space for new messages. This makes the multicast protocol more resilient to transient performance perturbations of group members, thus improving throughput stability. This paper describes our experience in developing a suite of semantically reliable protocols. It summarizes the motivation, definition, and algorithmic issues and presents performance figures obtained with a running implementation. The data obtained experimentally is compared with analytic and simulation models. This comparison allows us to confirm the validity of these models and the usefulness of the approach. Finally, the paper reports the application of our prototype to distributed multiplayer games.",
"title": ""
},
{
"docid": "45cbfbe0a0bcf70910a6d6486fb858f0",
"text": "Grid cells in the entorhinal cortex of freely moving rats provide a strikingly periodic representation of self-location which is indicative of very specific computational mechanisms. However, the existence of grid cells in humans and their distribution throughout the brain are unknown. Here we show that the preferred firing directions of directionally modulated grid cells in rat entorhinal cortex are aligned with the grids, and that the spatial organization of grid-cell firing is more strongly apparent at faster than slower running speeds. Because the grids are also aligned with each other, we predicted a macroscopic signal visible to functional magnetic resonance imaging (fMRI) in humans. We then looked for this signal as participants explored a virtual reality environment, mimicking the rats’ foraging task: fMRI activation and adaptation showing a speed-modulated six-fold rotational symmetry in running direction. The signal was found in a network of entorhinal/subicular, posterior and medial parietal, lateral temporal and medial prefrontal areas. The effect was strongest in right entorhinal cortex, and the coherence of the directional signal across entorhinal cortex correlated with spatial memory performance. Our study illustrates the potential power of combining single-unit electrophysiology with fMRI in systems neuroscience. Our results provide evidence for grid-cell-like representations in humans, and implicate a specific type of neural representation in a network of regions which supports spatial cognition and also autobiographical memory.",
"title": ""
},
{
"docid": "a85496dc96f87ba4f0018ef8bb2c8695",
"text": "The negative capacitance (NC) of ferroelectric materials has paved the way for achieving sub-60-mV/decade switching feature in complementary metal-oxide-semiconductor (CMOS) field-effect transistors, by simply inserting a ferroelectric thin layer in the gate stack. However, in order to utilize the ferroelectric capacitor (as a breakthrough technique to overcome the Boltzmann limit of the device using thermionic emission process), the thickness of the ferroelectric layer should be scaled down to sub-10-nm for ease of integration with conventional CMOS logic devices. In this paper, we demonstrate an NC fin-shaped field-effect transistor (FinFET) with a 6-nm-thick HfZrO ferroelectric capacitor. The performance parameters of NC FinFET such as on-/off-state currents and subthreshold slope are compared with those of the conventional FinFET. Furthermore, a repetitive and reliable steep switching feature of the NC FinFET at various drain voltages is demonstrated.",
"title": ""
},
{
"docid": "7917c6d9a9d495190e5b7036db92d46d",
"text": "Background A precise understanding of the anatomical structures of the heart and great vessels is essential for surgical planning in order to avoid unexpected findings. Rapid prototyping techniques are used to print three-dimensional (3D) replicas of patients’ cardiovascular anatomy based on 3D clinical images such as MRI. The purpose of this study is to explore the use of 3D patient-specific cardiovascular models using rapid prototyping techniques to improve surgical planning in patients with complex congenital heart disease.",
"title": ""
},
{
"docid": "3fbbe02ff11faa5cf6d537d5bcb0e658",
"text": "This paper reports on a mixed-method research project that examined the attitudes of computer users toward accidental/naive information security (InfoSec) behaviour. The aim of this research was to investigate the extent to which attitude data elicited from repertory grid technique (RGT) interviewees support their responses collected via an online survey questionnaire. Twenty five university students participated in this two-stage project. Individual attitude scores were calculated for each of the research methods and were compared across seven behavioural focus areas using Spearman product-moment correlation coefficient. The two sets of data exhibited a small-to-medium correlation when individual attitudes were analysed for each of the focus areas. In summary, this exploratory research indicated that the two research approaches were reasonably complementary and the RGT interview results tended to triangulate the attitude scores derived from the online survey questionnaire, particularly in regard to attitudes toward Incident Reporting behaviour, Email Use behaviour and Social Networking Site Use behaviour. The results also highlighted some attitude items in the online questionnaire that need to be reviewed for clarity, relevance and non-ambiguity.",
"title": ""
},
{
"docid": "3bc7adca896ab0c18fd8ec9b8c5b3911",
"text": "Traditional algorithms to design hand-crafted features for action recognition have been a hot research area in last decade. Compared to RGB video, depth sequence is more insensitive to lighting changes and more discriminative due to its capability to catch geometric information of object. Unlike many existing methods for action recognition which depend on well-designed features, this paper studies deep learning-based action recognition using depth sequences and the corresponding skeleton joint information. Firstly, we construct a 3Dbased Deep Convolutional Neural Network (3DCNN) to directly learn spatiotemporal features from raw depth sequences, then compute a joint based feature vector named JointVector for each sequence by taking into account the simple position and angle information between skeleton joints. Finally, support vector machine (SVM) classification results from 3DCNN learned features and JointVector are fused to take action recognition. Experimental results demonstrate that our method can learn feature representation which is time-invariant and viewpoint-invariant from depth sequences. The proposed method achieves comparable results to the state-of-the-art methods on the UTKinect-Action3D dataset and achieves superior performance in comparison to baseline methods on the MSR-Action3D dataset. We further investigate the generalization of the trained model by transferring the learned features from one dataset (MSREmail addresses: [email protected] (Zhi Liu), [email protected] (Chenyang Zhang), [email protected] (Yingli Tian) Preprint submitted to Image and Vision Computing April 11, 2016 Action3D) to another dataset (UTKinect-Action3D) without retraining and obtain very promising classification accuracy.",
"title": ""
},
{
"docid": "e7f8f8bd80b1366058f356d39af483b4",
"text": "To handle the colorization problem, we propose a deep patch-wise colorization model for grayscale images. Distinguished with some constructive color mapping models with complicated mathematical priors, we alternately apply two loss metric functions in the deep model to suppress the training errors under the convolutional neural network. To address the potential boundary artifacts, a refinement scheme is presented inspired by guided filtering. In the experiment section, we summarize our network parameters setting in practice, including the patch size, amount of layers and the convolution kernels. Our experiments demonstrate this model can output more satisfactory visual colorizations compared with the state-of-the-art methods. Moreover, we prove our method has extensive application domains and can be applied to stylistic colorization.",
"title": ""
},
{
"docid": "46d36fbc092f0f8e1e8154db1ad1f9de",
"text": "Multicarrier phase-based ranging is fast emerging as a cost-optimized solution for a wide variety of proximitybased applications due to its low power requirement, low hardware complexity and compatibility with existing standards such as ZigBee and 6LoWPAN. Given potentially critical nature of the applications in which phasebased ranging can be deployed (e.g., access control, asset tracking), it is important to evaluate its security guarantees. Therefore, in this work, we investigate the security of multicarrier phase-based ranging systems and specifically focus on distance decreasing relay attacks that have proven detrimental to the security of proximity-based access control systems (e.g., vehicular passive keyless entry and start systems). We show that phase-based ranging, as well as its implementations, are vulnerable to a variety of distance reduction attacks. We describe different attack realizations and verify their feasibility by simulations and experiments on a commercial ranging system. Specifically, we successfully reduced the estimated range to less than 3m even though the devices were more than 50 m apart. We discuss possible countermeasures against such attacks and illustrate their limitations, therefore demonstrating that phase-based ranging cannot be fully secured against distance decreasing attacks.",
"title": ""
},
{
"docid": "96d2a6082de66034759b521547e8c8d2",
"text": "Recent developments in deep convolutional neural networks (DCNNs) have shown impressive performance improvements on various object detection/recognition problems. This has been made possible due to the availability of large annotated data and a better understanding of the nonlinear mapping between images and class labels, as well as the affordability of powerful graphics processing units (GPUs). These developments in deep learning have also improved the capabilities of machines in understanding faces and automatically executing the tasks of face detection, pose estimation, landmark localization, and face recognition from unconstrained images and videos. In this article, we provide an overview of deep-learning methods used for face recognition. We discuss different modules involved in designing an automatic face recognition system and the role of deep learning for each of them. Some open issues regarding DCNNs for face recognition problems are then discussed. This article should prove valuable to scientists, engineers, and end users working in the fields of face recognition, security, visual surveillance, and biometrics.",
"title": ""
},
{
"docid": "7946e414908e2863ad0e2ba21dbee0be",
"text": "This paper presents a symbolic-execution-based approach and its implementation by POM/JLEC for checking the logical equivalence between two programs in the system replacement context. The primary contributions lie in the development of POM/JLEC, a fully automatic equivalence checker for Java enterprise systems. POM/JLEC consists of three main components: Domain Specific Pre-Processor for extracting the target code from the original system and adjusting it to a suitable scope for verification, Symbolic Execution for generating symbolic summaries, and solver-based EQuality comparison for comparing the symbolic summaries together and returning counter examples in the case of non-equivalence. We have evaluated POM/JLEC with a large-scale benchmark created from the function layer code of an industrial enterprise system. The evaluation result with 54% test cases passed shows the feasibility for deploying its mature version into software development industry.",
"title": ""
},
{
"docid": "064bb39aa50a484955cfde4f585f91d7",
"text": "Congenitally missing teeth are frequently presented to the dentist. Interdisciplinary approach may be needed for the proper treatment plan. The available treatment modalities to replace congenitally missing teeth include prosthodontic fixed and removable prostheses, resin bonded retainers, orthodontic movement of maxillary canine to the lateral incisor site and single tooth implants. Dental implants offer a promising treatment option for placement of congenitally missing teeth. Interdisciplinary approach may be needed in these cases. This article aims to present a case report of replacement of unilaterally congenitally missing maxillary lateral incisors with dental implants.",
"title": ""
},
{
"docid": "192663cdecdcfda1f86605adbc3c6a56",
"text": "With the introduction of IT to conduct business we accepted the loss of a human control step. For this reason, the introduction of new IT systems was accompanied by the development of the authorization concept. But since, in reality, there is no such thing as 100 per cent security; auditors are commissioned to examine all transactions for misconduct. Since the data exists in digital form already, it makes sense to use computer-based processes to analyse it. Such processes allow the auditor to carry out extensive checks within an acceptable timeframe and with reasonable effort. Once the algorithm has been defined, it only takes sufficient computing power to evaluate larger quantities of data. This contribution presents the state of the art for IT-based data analysis processes that can be used to identify fraudulent activities.",
"title": ""
},
{
"docid": "87dd4ba33b9f4ae20d60097960047551",
"text": "Lacking the presence of human and social elements is claimed one major weakness that is hindering the growth of e-commerce. The emergence of social commerce (SC) might help ameliorate this situation. Social commerce is a new evolution of e-commerce that combines the commercial and social activities by deploying social technologies into e-commerce sites. Social commerce reintroduces the social aspect of shopping to e-commerce, increasing the degree of social presences in online environment. Drawing upon the social presence theory, this study theorizes the nature of social aspect in online SC marketplace by proposing a set of three social presence variables. These variables are then hypothesized to have positive impacts on trusting beliefs which in turn result in online purchase behaviors. The research model is examined via data collected from a typical ecommerce site in China. Our findings suggest that social presence factors grounded in social technologies contribute significantly to the building of the trustworthy online exchanging relationships. In doing so, this paper confirms the positive role of social aspect in shaping online purchase behaviors, providing a theoretical evidence for the fusion of social and commercial activities. Finally, this paper introduces a new perspective of e-commerce and calls more attention to this new phenomenon.",
"title": ""
},
{
"docid": "5585cc22a0af9cf00656ac04b14ade5a",
"text": "Side-channel attacks pose a critical threat to the deployment of secure embedded systems. Differential-power analysis is a technique relying on measuring the power consumption of device while it computes a cryptographic primitive, and extracting the secret information from it exploiting the knowledge of the operations involving the key. There is no open literature describing how to properly employ Digital Signal Processing (DSP) techniques in order to improve the effectiveness of the attacks. This paper presents a pre-processing technique based on DSP, reducing the number of traces needed to perform an attack by an order of magnitude with respect to the results obtained with raw datasets, and puts it into practical use attacking a commercial 32-bit software implementation of AES running on a Cortex-M3 CPU. The main contribution of this paper is proposing a leakage model for software implemented cryptographic primitives and an effective framework to extract it.",
"title": ""
},
{
"docid": "bb7511f4137f487b2b8bf2f6f3f73a6a",
"text": "There is extensive evidence indicating that new neurons are generated in the dentate gyrus of the adult mammalian hippocampus, a region of the brain that is important for learning and memory. However, it is not known whether these new neurons become functional, as the methods used to study adult neurogenesis are limited to fixed tissue. We use here a retroviral vector expressing green fluorescent protein that only labels dividing cells, and that can be visualized in live hippocampal slices. We report that newly generated cells in the adult mouse hippocampus have neuronal morphology and can display passive membrane properties, action potentials and functional synaptic inputs similar to those found in mature dentate granule cells. Our findings demonstrate that newly generated cells mature into functional neurons in the adult mammalian brain.",
"title": ""
},
{
"docid": "a6959cc988542a077058e57a5d2c2eff",
"text": "A green and reliable method using supercritical fluid extraction (SFE) and molecular distillation (MD) was optimized for the separation and purification of standardized typical volatile components fraction (STVCF) from turmeric to solve the shortage of reference compounds in quality control (QC) of volatile components. A high quality essential oil with 76.0% typical components of turmeric was extracted by SFE. A sequential distillation strategy was performed by MD. The total recovery and purity of prepared STVCF were 97.3% and 90.3%, respectively. Additionally, a strategy, i.e., STVCF-based qualification and quantitative evaluation of major bioactive analytes by multiple calibrated components, was proposed to easily and effectively control the quality of turmeric. Compared with the individual calibration curve method, the STVCF-based quantification method was demonstrated to be credible and was effectively adapted for solving the shortage of reference volatile compounds and improving the QC of typical volatile components in turmeric, especially its functional products.",
"title": ""
},
{
"docid": "3412d99c29f7672fe3846173c9a4d734",
"text": "In the last decade, the ease of online payment has opened up many new opportunities for e-commerce, lowering the geographical boundaries for retail. While e-commerce is still gaining popularity, it is also the playground of fraudsters who try to misuse the transparency of online purchases and the transfer of credit card records. This paper proposes APATE, a novel approach to detect fraudulent credit card ∗NOTICE: this is the author’s version of a work that was accepted for publication in Decision Support Systems in May 8, 2015, published online as a self-archive copy after the 24 month embargo period. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. Please cite this paper as follows: Van Vlasselaer, V., Bravo, C., Caelen, O., Eliassi-Rad, T., Akoglu, L., Snoeck, M., Baesens, B. (2015). APATE: A novel approach for automated credit card transaction fraud detection using network-based extensions. Decision Support Systems, 75, 38-48. Available Online: http://www.sciencedirect.com/science/article/pii/S0167923615000846",
"title": ""
},
{
"docid": "7fe99b63d2b3d94918e4b2f536053b1c",
"text": "Delay Tolerant Networks (DTN) are networks of self-organizing wireless nodes, where end-to-end connectivity is intermittent. In these networks, forwarding decisions are made using locally collected knowledge about node behavior (e.g., past contacts between nodes) to predict which nodes are likely to deliver a content or bring it closer to the destination. One promising way of predicting future contact opportunities is to aggregate contacts seen in the past to a social graph and use metrics from complex network analysis (e.g., centrality and similarity) to assess the utility of a node to carry a piece of content. This aggregation presents an inherent tradeoff between the amount of time-related information lost during this mapping and the predictive capability of complex network analysis in this context. In this paper, we use two recent DTN routing algorithms that rely on such complex network analysis, to show that contact aggregation significantly affects the performance of these protocols. We then propose simple contact mapping algorithms that demonstrate improved performance up to a factor of 4 in delivery ratio, and robustness to various connectivity scenarios for both protocols.",
"title": ""
},
{
"docid": "a5d0f584dd0be0d305b8e1247622bfb5",
"text": "In this paper, an all NMOS voltage-mode four-quadrant analog multiplier, based on a basic NMOS differential amplifier that can produce the output signal in voltage form without using resistors, is presented. The proposed circuit has been simulated with SPICE and achieved -3 dB bandwidth of 120 MHz. The power consumption is about 3.6 mW from a /spl plusmn/2.5 V power supply voltage, and the total harmonic distortion is 0.85% with a 1 V input signal.",
"title": ""
},
{
"docid": "49cafb7a5a42b7a8f8260a398c390504",
"text": "With the availability of vast collection of research articles on internet, textual analysis is an increasingly important technique in scientometric analysis. While the context in which it is used and the specific algorithms implemented may vary, typically any textual analysis exercise involves intensive pre-processing of input text which includes removing topically uninteresting terms (stop words). In this paper we argue that corpus specific stop words, which take into account the specificities of a collection of texts, improve textual analysis in scientometrics. We describe two relatively simple techniques to generate corpus-specific stop words; stop words lists following a Poisson distribution and keyword adjacency stop words lists. In a case study to extract keywords from scientific abstracts of research project funded by the European Research Council in the domain of Life sciences, we show that a combination of those techniques gives better recall values than standard stop words or any of the two techniques alone. The method we propose can be implemented to obtain stop words lists in an automatic way by using author provided keywords for a set of abstracts. The stop words lists generated can be updated easily by adding new texts to the training corpus. Conference Topic Methods and techniques",
"title": ""
}
] | scidocsrr |
ae34a2fbc651d06af28faf80b5c7721f | Motion Blur Kernel Estimation via Deep Learning | [
{
"docid": "3e8b5f71776ab38861412f26f58e972e",
"text": "Camera shake leads to non-uniform image blurs. State-of-the-art methods for removing camera shake model the blur as a linear combination of homographically transformed versions of the true image. While this is conceptually interesting, the resulting algorithms are computationally demanding. In this paper we develop a forward model based on the efficient filter flow framework, incorporating the particularities of camera shake, and show how an efficient algorithm for blur removal can be obtained. Comprehensive comparisons on a number of real-world blurry images show that our approach is not only substantially faster, but it also leads to better deblurring results.",
"title": ""
},
{
"docid": "04d190daef0abb78f3c4d85e23297fbc",
"text": "Blind image deconvolution is an ill-posed problem that requires regularization to solve. However, many common forms of image prior used in this setting have a major drawback in that the minimum of the resulting cost function does not correspond to the true sharp solution. Accordingly, a range of additional methods are needed to yield good results (Bayesian methods, adaptive cost functions, alpha-matte extraction and edge localization). In this paper we introduce a new type of image regularization which gives lowest cost for the true sharp image. This allows a very simple cost formulation to be used for the blind deconvolution model, obviating the need for additional methods. Due to its simplicity the algorithm is fast and very robust. We demonstrate our method on real images with both spatially invariant and spatially varying blur.",
"title": ""
}
] | [
{
"docid": "f7a6cc4ebc1d2657175301dc05c86a7b",
"text": "Recent deep learning models have demonstrated strong capabilities for classifying text and non-text components in natural images. They extract a high-level feature globally computed from a whole image component (patch), where the cluttered background information may dominate true text features in the deep representation. This leads to less discriminative power and poorer robustness. In this paper, we present a new system for scene text detection by proposing a novel text-attentional convolutional neural network (Text-CNN) that particularly focuses on extracting text-related regions and features from the image components. We develop a new learning mechanism to train the Text-CNN with multi-level and rich supervised information, including text region mask, character label, and binary text/non-text information. The rich supervision information enables the Text-CNN with a strong capability for discriminating ambiguous texts, and also increases its robustness against complicated background components. The training process is formulated as a multi-task learning problem, where low-level supervised information greatly facilitates the main task of text/non-text classification. In addition, a powerful low-level detector called contrast-enhancement maximally stable extremal regions (MSERs) is developed, which extends the widely used MSERs by enhancing intensity contrast between text patterns and background. This allows it to detect highly challenging text patterns, resulting in a higher recall. Our approach achieved promising results on the ICDAR 2013 data set, with an F-measure of 0.82, substantially improving the state-of-the-art results.",
"title": ""
},
{
"docid": "1ca7cf4fd64327b2eb77b7b3a3e37cc8",
"text": "The current study demonstrates the separability of spatial and verbal working memory resources among college students. In Experiment 1, we developed a spatial span task that taxes both the processing and storage components of spatial working memory. This measure correlates with spatial ability (spatial visualization) measures, but not with verbal ability measures. In contrast, the reading span test, a common test of verbal working memory, correlates with verbal ability measures, but not with spatial ability measures. Experiment 2, which uses an interference paradigm to cross the processing and storage demands of span tasks, replicates this dissociation and further demonstrates that both the processing and storage components of working memory tasks are important for predicting performance on spatial thinking and language processing tasks.",
"title": ""
},
{
"docid": "abb01393c17bf9e5dbb07952a80fd2ab",
"text": "We report a case of a 48-year-old male patient with “krokodil” drug-related osteonecrosis of both jaws. Patient history included 1.5 years of “krokodil” use, with 8-month drug withdrawal prior to surgery. The patient was HCV positive. On the maxilla, sequestrectomy was performed. On the mandible, sequestrectomy was combined with bone resection. From ramus to ramus, segmental defect was formed, which was not reconstructed with any method. Post-operative follow-up period was 3 years and no disease recurrence was noted. On 3-year post-operative orthopantomogram, newly formed mandibular bone was found. This phenomenon shows that spontaneous bone formation is possible after mandible segmental resection in osteonecrosis patients.",
"title": ""
},
{
"docid": "06da3a4efe9ef2f5978a84da09650659",
"text": "We present CryptoML, the first practical framework for provably secure and efficient delegation of a wide range of contemporary matrix-based machine learning (ML) applications on massive datasets. In CryptoML a delegating client with memory and computational resource constraints wishes to assign the storage and ML-related computations to the cloud servers, while preserving the privacy of its data. We first suggest the dominant components of delegation performance cost, and create a matrix sketching technique that aims at minimizing the cost by data pre-processing. We then propose a novel interactive delegation protocol based on the provably secure Shamir's secret sharing. The protocol is customized for our new sketching technique to maximize the client's resource efficiency. CryptoML shows a new trade-off between the efficiency of secure delegation and the accuracy of the ML task. Proof of concept evaluations corroborate applicability of CryptoML to datasets with billions of non-zero records.",
"title": ""
},
{
"docid": "7ff79a0701051f653257aefa2c3ba154",
"text": "As antivirus and network intrusion detection systems have increasingly proven insufficient to detect advanced threats, large security operations centers have moved to deploy endpoint-based sensors that provide deeper visibility into low-level events across their enterprises. Unfortunately, for many organizations in government and industry, the installation, maintenance, and resource requirements of these newer solutions pose barriers to adoption and are perceived as risks to organizations' missions. To mitigate this problem we investigated the utility of agentless detection of malicious endpoint behavior, using only the standard built-in Windows audit logging facility as our signal. We found that Windows audit logs, while emitting manageable sized data streams on the endpoints, provide enough information to allow robust detection of malicious behavior. Audit logs provide an effective, low-cost alternative to deploying additional expensive agent-based breach detection systems in many government and industrial settings, and can be used to detect, in our tests, 83% percent of malware samples with a 0.1% false positive rate. They can also supplement already existing host signature-based antivirus solutions, like Kaspersky, Symantec, and McAfee, detecting, in our testing environment, 78% of malware missed by those antivirus systems.",
"title": ""
},
{
"docid": "cf1967eaa2fe97a3de2b99aec0df27cb",
"text": "We present a high gain linearly polarized Ku-band planar array for mobile satellite TV reception. In contrast with previously presented three dimensional designs, the approach presented here results in a low profile planar array with a similar performance. The elevation scan is performed electronically, whereas the azimuth scan is done mechanically using an electric motor. The incident angle of the arriving satellite signal is generally large, varying between 25° to 65° depending on the location of the receiver, thereby creating a considerable off-axis scan loss. In order to alleviate this problem, and yet maintaining a planar design, the antenna array is designed to be consisting of subarrays with a fixed scanned beam at 45°. Therefore, the array of fixed-beam subarrays needs to be scanned ±20° around their peak beam, which results in a higher combined gain/directivity. The proposed antenna demonstrates the minimum measured gain of 23.1 dBi throughout the scan range (for 65° scan) with the peak gain of 26.5 dBi (for 32° scan) at 12 GHz while occupying a circular aperture of 26 cm in diameter.",
"title": ""
},
{
"docid": "5941a883218e22a06efd3bba1e851fc7",
"text": "Sparse data and irregular data access patterns are hugely important to many applications, such as molecular dynamics and data analytics. Accelerating applications with these characteristics requires maximizing usable bandwidth at all levels of the memory hierarchy, reducing latency, maximizing reuse of moved data, and minimizing the amount the data is moved in the first place. Many specialized data structures have evolved to meet these requisites for specific applications, however, there are no general solutions for improving the performance of sparse applications. The structure of the memory hierarchy itself, conspires against general hardware for accelerating sparse applications, being designed for efficient bulk transport of data versus one byte at a time. This paper presents a general solution for a programmable data rearrangement/reduction engine near-memory to deliver bulk byte-addressable data access. The key technology presented in this paper is the Sparse Data Reduction Engine (SPDRE), which builds previous similar efforts to provide a practical near-memory reorganization engine. In addition to the primary contribution, this paper describes a programmer interface that enables all combinations of rearrangement, analysis of the methodology on a small series of applications, and finally a discussion of future work.",
"title": ""
},
{
"docid": "76454b3376ec556025201a2f694e1f1c",
"text": "Recurrent neural networks (RNNs) provide state-of-the-art accuracy for performing analytics on datasets with sequence (e.g., language model). This paper studied a state-of-the-art RNN variant, Gated Recurrent Unit (GRU). We first proposed memoization optimization to avoid 3 out of the 6 dense matrix vector multiplications (SGEMVs) that are the majority of the computation in GRU. Then, we study the opportunities to accelerate the remaining SGEMVs using FPGAs, in comparison to 14-nm ASIC, GPU, and multi-core CPU. Results show that FPGA provides superior performance/Watt over CPU and GPU because FPGA's on-chip BRAMs, hard DSPs, and reconfigurable fabric allow for efficiently extracting fine-grained parallelisms from small/medium size matrices used by GRU. Moreover, newer FPGAs with more DSPs, on-chip BRAMs, and higher frequency have the potential to narrow the FPGA-ASIC efficiency gap.",
"title": ""
},
{
"docid": "79beaf249c8772ee1cbd535df0bf5a13",
"text": "Accurate vessel detection in retinal images is an important and difficult task. Detection is made more challenging in pathological images with the presence of exudates and other abnormalities. In this paper, we present a new unsupervised vessel segmentation approach to address this problem. A novel inpainting filter, called neighborhood estimator before filling, is proposed to inpaint exudates in a way that nearby false positives are significantly reduced during vessel enhancement. Retinal vascular enhancement is achieved with a multiple-scale Hessian approach. Experimental results show that the proposed vessel segmentation method outperforms state-of-the-art algorithms reported in the recent literature, both visually and in terms of quantitative measurements, with overall mean accuracy of 95.62% on the STARE dataset and 95.81% on the HRF dataset.",
"title": ""
},
{
"docid": "5bff5c54824d24b6ab72d01e0771db36",
"text": "Visual restoration and recognition are traditionally addressed in pipeline fashion, i.e. denoising followed by classification. Instead, observing correlations between the two tasks, for example clearer image will lead to better categorization and vice visa, we propose a joint framework for visual restoration and recognition for handwritten images, inspired by advances in deep autoencoder and multi-modality learning. Our model is a 3-pathway deep architecture with a hidden-layer representation which is shared by multi-inputs and outputs, and each branch can be composed of a multi-layer deep model. Thus, visual restoration and classification can be unified using shared representation via non-linear mapping, and model parameters can be learnt via backpropagation. Using MNIST and USPS data corrupted with structured noise, the proposed framework performs at least 20% better in classification than separate pipelines, as well as clearer recovered images.",
"title": ""
},
{
"docid": "2a79464b8674b689239f4579043bd525",
"text": "In this paper, we extend an attention-based neural machine translation (NMT) model by allowing it to access an entire training set of parallel sentence pairs even after training. The proposed approach consists of two stages. In the first stage– retrieval stage–, an off-the-shelf, black-box search engine is used to retrieve a small subset of sentence pairs from a training set given a source sentence. These pairs are further filtered based on a fuzzy matching score based on edit distance. In the second stage–translation stage–, a novel translation model, called search engine guided NMT (SEG-NMT), seamlessly uses both the source sentence and a set of retrieved sentence pairs to perform the translation. Empirical evaluation on three language pairs (En-Fr, En-De, and En-Es) shows that the proposed approach significantly outperforms the baseline approach and the improvement is more significant when more relevant sentence pairs were retrieved.",
"title": ""
},
{
"docid": "1df4fad2d5448364834608f4bc9d10a0",
"text": "What causes adolescents to be materialistic? Prior research shows parents and peers are an important influence. Researchers have viewed parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents. We take a different approach, viewing parents and peers as important sources of emotional support and psychological well-being, which increase self-esteem in adolescents. Supportive parents and peers boost adolescents' self-esteem, which decreases their need to turn to material goods to develop positive selfperceptions. In a study with 12–18 year-olds, we find support for our view that self-esteem mediates the relationship between parent/peer influence and adolescent materialism. © 2010 Society for Consumer Psychology. Published by Elsevier Inc. All rights reserved. Rising levels of materialism among adolescents have raised concerns among parents, educators, and consumer advocates.More than half of 9–14 year-olds agree that, “when you grow up, the more money you have, the happier you are,” and over 60% agree that, “the only kind of job I want when I grow up is one that getsme a lot of money” (Goldberg, Gorn, Peracchio, & Bamossy, 2003). These trends have lead social scientists to conclude that adolescents today are “...the most brand-oriented, consumer-involved, and materialistic generation in history” (Schor, 2004, p. 13). What causes adolescents to bematerialistic? Themost consistent finding to date is that adolescent materialism is related to the interpersonal influences in their lives—notably, parents and peers. The vast majority of research is based on a social influence perspective, viewing parents and peers as socialization agents that transmit consumption attitudes, goals, and motives to adolescents through modeling, reinforcement, and social interaction. In early research, Churchill and Moschis (1979) proposed that adolescents learn rational aspects of consumption from their parents and social aspects of consumption (materialism) from their peers. Moore and ⁎ Corresponding author. Villanova School of Business, 800 Lancaster Avenue, Villanova, PA 19085, USA. Fax: +1 520 621 7483. E-mail addresses: [email protected] (L.N. Chaplin), [email protected] (D.R. John). 1057-7408/$ see front matter © 2010 Society for Consumer Psychology. Publish doi:10.1016/j.jcps.2010.02.002 Moschis (1981) examined family communication styles, suggesting that certain styles (socio-oriented) promote conformity to others' views, setting the stage for materialism. In later work, Goldberg et al. (2003) posited that parents transmit materialistic values to their offspring by modeling these values. Researchers have also reported positive correlations betweenmaterialism and socio-oriented family communication (Moore & Moschis, 1981), parents' materialism (Flouri, 2004; Goldberg et al., 2003), peer communication about consumption (Churchill & Moschis, 1979; Moschis & Churchill, 1978), and susceptibility to peer influence (Achenreiner, 1997; Banerjee & Dittmar, 2008; Roberts, Manolis, & Tanner, 2008). We take a different approach. Instead of viewing parents and peers as socialization agents that transmit consumption attitudes and values, we consider parents and peers as important sources of emotional support and psychological well-being, which lay the foundation for self-esteem in adolescents. 
We argue that supportive parents and peers boost adolescents' self-esteem, which decreases their need to embrace material goods as a way to develop positive self-perceptions. Prior research is suggestive of our perspective. In studies with young adults, researchers have found a link between (1) lower parental support (cold and controlling mothers) and a focus on financial success aspirations (Kasser, Ryan, Zax, & Sameroff, 1995: 18 year-olds) and (2) lower parental support (less affection and supervision) in ed by Elsevier Inc. All rights reserved. 1 Support refers to warmth, affection, nurturance, and acceptance (Becker, 1981; Ellis, Thomas, and Rollins, 1976). Parental nurturance involves the development of caring relationships, in which parents reason with their children about moral conflicts, involve them in family decision making, and set high moral expectations (Maccoby, 1984; Staub, 1988). 177 L.N. Chaplin, D.R. John / Journal of Consumer Psychology 20 (2010) 176–184 divorced families and materialism (Rindfleisch, Burroughs, & Denton, 1997: 20–32 year-olds). These studies do not focus on adolescents, do not examine peer factors, nor do they include measures of self-esteem or self-worth. But, they do suggest that parents and peers can influence materialism in ways other than transmitting consumption attitudes and values, which has been the focus of prior research on adolescent materialism. In this article, we seek preliminary evidence for our view by testing whether self-esteem mediates the relationship between parent/peer influence and adolescent materialism. We include parent and peer factors that inhibit or encourage adolescent materialism, which allows us to test self-esteem as a mediator under both conditions. For parental influence, we include parental support (inhibits materialism) and parents' materialism (encourages materialism). Both factors have appeared in prior materialism studies, but our interest here is whether self-esteem is a mediator of their influence on materialism. For peer influence, we include peer support (inhibits materialism) and peers' materialism (encourages materialism), with our interest being whether self-esteem is a mediator of their influence on materialism. These peer factors are new to materialism research and offer potentially new insights. Contrary to prior materialism research, which views peers as encouraging materialism among adolescents, we also consider the possibility that peers may be a positive influence by providing emotional support in the same way that parents do. Our research offers several contributions to understanding materialism in adolescents. First, we provide a broader perspective on the role of parents and peers as influences on adolescent materialism. The social influence perspective, which views parents and peers as transmitting consumption attitudes and values, has dominated materialism research with children and adolescents since its early days. We provide a broader perspective by considering parents and peers as much more than socialization agents—they contribute heavily to the sense of self-esteem that adolescents possess, which influences materialism. Second, our perspective provides a process explanation for why parents and peers influence materialism that can be empirically tested. Prior research offers a valuable set of findings about what factors correlate with adolescent materialism, but the process responsible for the correlation is left untested. 
Finally, we provide a parsimonious explanation for why different factors related to parent and peer influence affect adolescent materialism. Although the number of potential parent and peer factors is large, it is possible that there is a common thread (self-esteem) for why these factors influence adolescent materialism. Isolating mediators, such as selfesteem, could provide the basis for developing a conceptual framework to tie together findings across prior studies with different factors, providing a more unified explanation for why certain adolescents are more vulnerable to materialism.",
"title": ""
},
{
"docid": "e4f648d12495a2d7615fe13c84f35bbe",
"text": "We propose a simple modification to existing neural machine translation (NMT) models that enables using a single universal model to translate between multiple languages while allowing for language specific parameterization, and that can also be used for domain adaptation. Our approach requires no changes to the model architecture of a standard NMT system, but instead introduces a new component, the contextual parameter generator (CPG), that generates the parameters of the system (e.g., weights in a neural network). This parameter generator accepts source and target language embeddings as input, and generates the parameters for the encoder and the decoder, respectively. The rest of the model remains unchanged and is shared across all languages. We show how this simple modification enables the system to use monolingual data for training and also perform zero-shot translation. We further show it is able to surpass state-of-theart performance for both the IWSLT-15 and IWSLT-17 datasets and that the learned language embeddings are able to uncover interesting relationships between languages.",
"title": ""
},
{
"docid": "24ecf1119592cc5496dc4994d463eabe",
"text": "To improve data availability and resilience MapReduce frameworks use file systems that replicate data uniformly. However, analysis of job logs from a large production cluster shows wide disparity in data popularity. Machines and racks storing popular content become bottlenecks; thereby increasing the completion times of jobs accessing this data even when there are machines with spare cycles in the cluster. To address this problem, we present Scarlett, a system that replicates blocks based on their popularity. By accurately predicting file popularity and working within hard bounds on additional storage, Scarlett causes minimal interference to running jobs. Trace driven simulations and experiments in two popular MapReduce frameworks (Hadoop, Dryad) show that Scarlett effectively alleviates hotspots and can speed up jobs by 20.2%.",
"title": ""
},
{
"docid": "ce37f72aa7b1433cdb18af526c115138",
"text": "Deep learning algorithms achieve high classification accuracy at the expense of significant computation cost. To address this cost, a number of quantization schemes have been proposed but most of these techniques focused on quantizing weights, which are relatively smaller in size compared to activations. This paper proposes a novel quantization scheme for activations during training that enables neural networks to work well with ultra low precision weights and activations without any significant accuracy degradation. This technique, PArameterized Clipping acTivation (PACT), uses an activation clipping parameter α that is optimized during training to find the right quantization scale. PACT allows quantizing activations to arbitrary bit precisions, while achieving much better accuracy relative to published state-of-the-art quantization schemes. We show, for the first time, that both weights and activations can be quantized to 4-bits of precision while still achieving accuracy comparable to full precision networks across a range of popular models and datasets. We also show that exploiting these reduced-precision computational units in hardware can enable a super-linear improvement in inferencing performance due to a significant reduction in the area of accelerator compute engines coupled with the ability to retain the quantized model and activation data in on-chip memories.",
"title": ""
},
{
"docid": "d4f953596e49393a4ca65e202eab725c",
"text": "This work integrates deep learning and symbolic programming paradigms into a unified method for deploying applications to a neuromorphic system. The approach removes the need for coordination among disjoint co-processors by embedding both types entirely on a neuromorphic processor. This integration provides a flexible approach for using each technique where it performs best. A single neuromorphic solution can seamlessly deploy neural networks for classifying sensor-driven noisy data obtained from the environment alongside programmed symbolic logic to processes the input from the networks. We present a concrete implementation of the proposed framework using the TrueNorth neuromorphic processor to play blackjack using a pre-programmed optimal strategy algorithm combined with a neural network trained to classify card images as input. Future extensions of this approach will develop a symbolic neuromorphic compiler for automatically creating networks from a symbolic programming language.",
"title": ""
},
{
"docid": "9270af032d1adbf9829e7d723ff76849",
"text": "To detect illegal copies of copyrighted images, recent copy detection methods mostly rely on the bag-of-visual-words (BOW) model, in which local features are quantized into visual words for image matching. However, both the limited discriminability of local features and the BOW quantization errors will lead to many false local matches, which make it hard to distinguish similar images from copies. Geometric consistency verification is a popular technology for reducing the false matches, but it neglects global context information of local features and thus cannot solve this problem well. To address this problem, this paper proposes a global context verification scheme to filter false matches for copy detection. More specifically, after obtaining initial scale invariant feature transform (SIFT) matches between images based on the BOW quantization, the overlapping region-based global context descriptor (OR-GCD) is proposed for the verification of these matches to filter false matches. The OR-GCD not only encodes relatively rich global context information of SIFT features but also has good robustness and efficiency. Thus, it allows an effective and efficient verification. Furthermore, a fast image similarity measurement based on random verification is proposed to efficiently implement copy detection. In addition, we also extend the proposed method for partial-duplicate image detection. Extensive experiments demonstrate that our method achieves higher accuracy than the state-of-the-art methods, and has comparable efficiency to the baseline method based on the BOW quantization.",
"title": ""
},
{
"docid": "fc07af4d49f7b359e484381a0a88aff7",
"text": "In this paper, we develop the idea of a universal anytime intelligence test. The meaning of the terms “universal” and “anytime” is manifold here: the test should be able to measure the intelligence of any biological or artificial system that exists at this time or in the future. It should also be able to evaluate both inept and brilliant systems (any intelligence level) as well as very slow to very fast systems (any time scale). Also, the test may be interrupted at any time, producing an approximation to the intelligence score, in such a way that the more time is left for the test, the better the assessment will be. In order to do this, our test proposal is based on previous works on the measurement of machine intelligence based on Kolmogorov Complexity and universal distributions, which were developed in the late 1990s (C-tests and compression-enhanced Turing tests). It is also based on the more recent idea of measuring intelligence through dynamic/interactive tests held against a universal distribution of environments. We discuss some of these tests and highlight their limitations since we want to construct a test that is both general and practical. Consequently, we introduce many new ideas that develop early “compression tests” and the more recent definition of “universal intelligence” in order to design new “universal intelligence tests”, where a feasible implementation has been a design requirement. One of these tests is the “anytime intelligence test”, which adapts to the examinee’s level of intelligence in order to obtain an intelligence score within a limited time.",
"title": ""
},
{
"docid": "a56c98284e1ac38e9aa2e4aa4b7a87a9",
"text": "Background: The extrahepatic biliary tree with the exact anatomic features of the arterial supply observed by laparoscopic means has not been described heretofore. Iatrogenic injuries of the extrahepatic biliary tree and neighboring blood vessels are not rare. Accidents involving vessels or the common bile duct during laparoscopic cholecystectomy, with or without choledocotomy, can be avoided by careful dissection of Calot's triangle and the hepatoduodenal ligament. Methods: We performed 244 laparoscopic cholecystectomies over a 2-year period between January 1, 1995 and January 1, 1997. Results: In 187 of 244 consecutive cases (76.6%), we found a typical arterial supply anteromedial to the cystic duct, near the sentinel cystic lymph node. In the other cases, there was an atypical arterial supply, and 27 of these cases (11.1%) had no cystic artery in Calot's triangle. A typical blood supply and accessory arteries were observed in 18 cases (7.4%). Conclusion: Young surgeons who are not yet familiar with the handling of an anatomically abnormal cystic blood supply need to be more aware of the precise anatomy of the extrahepatic biliary tree.",
"title": ""
},
{
"docid": "aeb4af864a4e2435486a69f5694659dc",
"text": "A great amount of research has been developed around the early cognitive impairments that best predict the onset of Alzheimer's disease (AD). Given that mild cognitive impairment (MCI) is no longer considered to be an intermediate state between normal aging and AD, new paths have been traced to acquire further knowledge about this condition and its subtypes, and to determine which of them have a higher risk of conversion to AD. It is now known that other deficits besides episodic and semantic memory impairments may be present in the early stages of AD, such as visuospatial and executive function deficits. Furthermore, recent investigations have proven that the hippocampus and the medial temporal lobe structures are not only involved in memory functioning, but also in visual processes. These early changes in memory, visual, and executive processes may also be detected with the study of eye movement patterns in pathological conditions like MCI and AD. In the present review, we attempt to explore the existing literature concerning these patterns of oculomotor changes and how these changes are related to the early signs of AD. In particular, we argue that deficits in visual short-term memory, specifically in iconic memory, attention processes, and inhibitory control, may be found through the analysis of eye movement patterns, and we discuss how they might help to predict the progression from MCI to AD. We add that the study of eye movement patterns in these conditions, in combination with neuroimaging techniques and appropriate neuropsychological tasks based on rigorous concepts derived from cognitive psychology, may highlight the early presence of cognitive impairments in the course of the disease.",
"title": ""
}
] | scidocsrr |
cad6d5cdd67c96838b3f48470ebf28b1 | Visual Query Language: Finding patterns in and relationships among time series data | [
{
"docid": "44f41d363390f6f079f2e67067ffa36d",
"text": "The research described in this paper was supported in part by the National Science Foundation under Grants IST-g0-12418 and IST-82-10564. and in part by the Office of Naval Research under Grant N00014-80-C-0197. Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association for Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission. © 1983 ACM 0001-0782/83/1100.0832 75¢",
"title": ""
}
] | [
{
"docid": "e7a260bfb238d8b4f147ac9c2a029d1d",
"text": "The full-text may be used and/or reproduced, and given to third parties in any format or medium, without prior permission or charge, for personal research or study, educational, or not-for-pro t purposes provided that: • a full bibliographic reference is made to the original source • a link is made to the metadata record in DRO • the full-text is not changed in any way The full-text must not be sold in any format or medium without the formal permission of the copyright holders. Please consult the full DRO policy for further details.",
"title": ""
},
{
"docid": "c46b0f8d340bd45c0b64c5d6cfd752a3",
"text": "We propose a method for inferring the existence of a latent common cause (“confounder”) of two observed random variables. The method assumes that the two effects of the confounder are (possibly nonlinear) functions of the confounder plus independent, additive noise. We discuss under which conditions the model is identifiable (up to an arbitrary reparameterization of the confounder) from the joint distribution of the effects. We state and prove a theoretical result that provides evidence for the conjecture that the model is generically identifiable under suitable technical conditions. In addition, we propose a practical method to estimate the confounder from a finite i.i.d. sample of the effects and illustrate that the method works well on both simulated and real-world data.",
"title": ""
},
{
"docid": "d0c4997c611d8759805d33cf1ad9eef1",
"text": "The automatic evaluation of text-based assessment items, such as short answers or essays, is an open and important research challenge. In this paper, we compare several features for the classification of short open-ended responses to questions related to a large first-year health sciences course. These features include a) traditional n-gram models; b) entity URIs (Uniform Resource Identifier) and c) entity mentions extracted using a semantic annotation API; d) entity mention embeddings based on GloVe, and e) entity URI embeddings extracted from Wikipedia. These features are used in combination with classification algorithms to discriminate correct answers from incorrect ones. Our results show that, on average, n-gram features performed the best in terms of precision and entity mentions in terms of f1-score. Similarly, in terms of accuracy, entity mentions and n-gram features performed the best. Finally, features based on dense vector representations such as entity embeddings and mention embeddings obtained the best f1-score for predicting correct answers.",
"title": ""
},
{
"docid": "284c7292bd7e79c5c907fc2aa21fb52c",
"text": "Monte Carlo Tree Search (MCTS) is an AI technique that has been successfully applied to many deterministic games of perfect information, leading to large advances in a number of domains, such as Go and General Game Playing. Imperfect information games are less well studied in the field of AI despite being popular and of significant commercial interest, for example in the case of computer and mobile adaptations of turn based board and card games. This is largely because hidden information and uncertainty leads to a large increase in complexity compared to perfect information games. In this thesis MCTS is extended to games with hidden information and uncertainty through the introduction of the Information Set MCTS (ISMCTS) family of algorithms. It is demonstrated that ISMCTS can handle hidden information and uncertainty in a variety of complex board and card games. This is achieved whilst preserving the general applicability of MCTS and using computational budgets appropriate for use in a commercial game. The ISMCTS algorithm is shown to outperform the existing approach of Perfect Information Monte Carlo (PIMC) search. Additionally it is shown that ISMCTS can be used to solve two known issues with PIMC search, namely strategy fusion and non-locality. ISMCTS has been integrated into a commercial game, Spades by AI Factory, with over 2.5 million downloads. The Information Capture And ReUSe (ICARUS) framework is also introduced in this thesis. The ICARUS framework generalises MCTS enhancements in terms of information capture (from MCTS simulations) and reuse (to improve MCTS tree and simulation policies). The ICARUS framework is used to express existing enhancements, to provide a tool to design new ones, and to rigorously define how MCTS enhancements can be combined. The ICARUS framework is tested across a wide variety of games.",
"title": ""
},
{
"docid": "7b4dd695182f7e15e58f44e309bf897c",
"text": "Phosphorus is one of the most abundant elements preserved in earth, and it comprises a fraction of ∼0.1% of the earth crust. In general, phosphorus has several allotropes, and the two most commonly seen allotropes, i.e. white and red phosphorus, are widely used in explosives and safety matches. In addition, black phosphorus, though rarely mentioned, is a layered semiconductor and has great potential in optical and electronic applications. Remarkably, this layered material can be reduced to one single atomic layer in the vertical direction owing to the van der Waals structure, and is known as phosphorene, in which the physical properties can be tremendously different from its bulk counterpart. In this review article, we trace back to the research history on black phosphorus of over 100 years from the synthesis to material properties, and extend the topic from black phosphorus to phosphorene. The physical and transport properties are highlighted for further applications in electronic and optoelectronics devices.",
"title": ""
},
{
"docid": "0022623017e81ee0a102da0524c83932",
"text": "Calcite is a new Eclipse plugin that helps address the difficulty of understanding and correctly using an API. Calcite finds the most popular ways to instantiate a given class or interface by using code examples. To allow the users to easily add these object instantiations to their code, Calcite adds items to the popup completion menu that will insert the appropriate code into the user’s program. Calcite also uses crowd sourcing to add to the menu instructions in the form of comments that help the user perform functions that people have identified as missing from the API. In a user study, Calcite improved users’ success rate by 40%.",
"title": ""
},
{
"docid": "c253083ab44c842819059ad64781d51d",
"text": "RGB-D data is getting ever more interest from the research community as both cheap cameras appear in the market and the applications of this type of data become more common. A current trend in processing image data is the use of convolutional neural networks (CNNs) that have consistently beat competition in most benchmark data sets. In this paper we investigate the possibility of transferring knowledge between CNNs when processing RGB-D data with the goal of both improving accuracy and reducing training time. We present experiments that show that our proposed approach can achieve both these goals.",
"title": ""
},
{
"docid": "1aa7e7fe70bdcbc22b5d59b0605c34e9",
"text": "Surgical tasks are complex multi-step sequences of smaller subtasks (often called surgemes) and it is useful to segment task demonstrations into meaningful subsequences for:(a) extracting finite-state machines for automation, (b) surgical training and skill assessment, and (c) task classification. Existing supervised methods for task segmentation use segment labels from a dictionary of motions to build classifiers. However, as the datasets become voluminous, the labeling becomes arduous and further, this method doesnt́ generalize to new tasks that dont́ use the same dictionary. We propose an unsupervised semantic task segmentation framework by learning “milestones”, ellipsoidal regions of the position and feature states at which a task transitions between motion regimes modeled as locally linear. Milestone learning uses a hierarchy of Dirichlet Process Mixture Models, learned through Expectation-Maximization, to cluster the transition points and optimize the number of clusters. It leverages transition information from kinematic state as well as environment state such as visual features. We also introduce a compaction step which removes repetitive segments that correspond to a mid-demonstration failure recovery by retrying an action. We evaluate Milestones Learning on three surgical subtasks: pattern cutting, suturing, and needle passing. Initial results suggest that our milestones qualitatively match manually annotated segmentation. While one-to-one correspondence of milestones with annotated data is not meaningful, the milestones recovered from our method have exactly one annotated surgeme transition in 74% (needle passing) and 66% (suturing) of total milestones, indicating a semantic match.",
"title": ""
},
{
"docid": "d151881de9a0e1699e95db7bbebc032b",
"text": "Despite the noticeable progress in perceptual tasks like detection, instance segmentation and human parsing, computers still perform unsatisfactorily on visually understanding humans in crowded scenes, such as group behavior analysis, person re-identification and autonomous driving, etc. To this end, models need to comprehensively perceive the semantic information and the differences between instances in a multi-human image, which is recently defined as the multi-human parsing task. In this paper, we present a new large-scale database “Multi-Human Parsing (MHP)” for algorithm development and evaluation, and advances the state-of-the-art in understanding humans in crowded scenes. MHP contains 25,403 elaborately annotated images with 58 fine-grained semantic category labels, involving 2-26 persons per image and captured in real-world scenes from various viewpoints, poses, occlusion, interactions and background. We further propose a novel deep Nested Adversarial Network (NAN) model for multi-human parsing. NAN consists of three Generative Adversarial Network (GAN)-like sub-nets, respectively performing semantic saliency prediction, instance-agnostic parsing and instance-aware clustering. These sub-nets form a nested structure and are carefully designed to learn jointly in an end-to-end way. NAN consistently outperforms existing state-of-the-art solutions on our MHP and several other datasets, and serves as a strong baseline to drive the future research for multi-human parsing.",
"title": ""
},
{
"docid": "9858386550b0193c079f1d7fe2b5b8b3",
"text": "Objective This study examined the associations between household food security (access to sufficient, safe, and nutritious food) during infancy and attachment and mental proficiency in toddlerhood. Methods Data from a longitudinal nationally representative sample of infants and toddlers (n = 8944) from the Early Childhood Longitudinal Study—9-month (2001–2002) and 24-month (2003–2004) surveys were used. Structural equation modeling was used to examine the direct and indirect associations between food insecurity at 9 months, and attachment and mental proficiency at 24 months. Results Food insecurity worked indirectly through depression and parenting practices to influence security of attachment and mental proficiency in toddlerhood. Conclusions Social policies that address the adequacy and predictability of food supplies in families with infants have the potential to affect parental depression and parenting behavior, and thereby attachment and cognitive development at very early ages.",
"title": ""
},
{
"docid": "ba3bf5f03e44e29a657d8035bb00535c",
"text": "Due to the broadcast nature of WiFi communication anyone with suitable hardware is able to monitor surrounding traffic. However, a WiFi device is able to listen to only one channel at any given time. The simple solution for capturing traffic across multiple channels involves channel hopping, which as a side effect reduces dwell time per channel. Hence monitoring with channel hopping does not produce a comprehensive view of the traffic across all channels at a given time.\n In this paper we present an inexpensive multi-channel WiFi capturing system (dubbed the wireless shark\") and evaluate its performance in terms of traffic cap- turing efficiency. Our results confirm and quantify the intuition that the performance is directly related to the number of WiFi adapters being used for listening. As a second contribution of the paper we use the wireless shark to observe the behavior of 14 different mobile devices, both in controlled and normal office environments. In our measurements, we focus on the probe traffic that the devices send when they attempt to discover available WiFi networks. Our results expose some distinct characteristics in various mobile devices' probing behavior.",
"title": ""
},
{
"docid": "d71c2f3d1a10b5a2cb33247129bfd8e0",
"text": "PURPOSE OF REVIEW\nTo review the current practice in the field of auricular reconstruction and to highlight the recent advances reported in the medical literature.\n\n\nRECENT FINDINGS\nThe majority of surgeons who perform auricular reconstruction continue to employ the well-established techniques developed by Brent and Nagata. Surgery takes between two and four stages, with the initial stage being construction of a framework of autogenous rib cartilage which is implanted into a subcutaneous pocket. Several modifications of these techniques have been reported. More recently, synthetic frameworks have been employed instead of autogenous rib cartilage. For this procedure, the implant is generally covered with a temporoparietal flap and a skin graft at the first stage of surgery. Tissue engineering is a rapidly developing field, and there have been several articles related to the field of auricular reconstruction. These show great potential to offer a solution to the challenge associated with construction of a viable autogenous cartilage framework, whilst avoiding donor-site morbidity.\n\n\nSUMMARY\nThis article gives an overview of the current practice in the field of auricular reconstruction and summarizes the recent surgical developments and relevant tissue engineering research.",
"title": ""
},
{
"docid": "e26f8d654eb4bf0f3e974ed7e65fb4e1",
"text": "The FIRE 2016 Microblog track focused on retrieval of microblogs (tweets posted on Twitter) during disaster events. A collection of about 50,000 microblogs posted during a recent disaster event was made available to the participants, along with a set of seven practical information needs during a disaster situation. The task was to retrieve microblogs relevant to these needs. 10 teams participated in the task, submitting a total of 15 runs. The task resulted in comparison among performances of various microblog retrieval strategies over a benchmark collection, and brought out the challenges in microblog retrieval.",
"title": ""
},
{
"docid": "c9b4ada661599a4c0c78176840f78171",
"text": "In this paper, we present the suite of tools of the FOMCON (“Fractional-order Modeling and Control”) toolbox for MATLAB that are used to carry out fractional-order PID controller design and hardware realization. An overview of the toolbox, its structure and particular modules, is presented with appropriate comments. We use a laboratory object designed to conduct temperature control experiments to illustrate the methods employed in FOMCON to derive suitable parameters for the controller and arrive at a digital implementation thereof on an 8-bit AVR microprocessor. The laboratory object is working under a real-time simulation platform with Simulink, Real-Time Windows Target toolbox and necessary drivers as its software backbone. Experimental results are provided which support the effectiveness of the proposed software solution.",
"title": ""
},
{
"docid": "8b84dc47c6a9d39ef1d094aa173a954c",
"text": "Named entity recognition (NER) is a subtask of information extraction that seeks to locate and classify atomic elements in text into predefined categories such as the names of persons, organizations, locations, expressions of times, quantities, monetary values, percentages, etc. We use the JavaNLP repository(http://nlp.stanford.edu/javanlp/ ) for its implementation of a Conditional Random Field(CRF) and a Conditional Markov Model(CMM), also called a Maximum Entropy Markov Model. We have obtained results on majority voting with different labeling schemes, with backward and forward parsing of the CMM, and also some results when we trained a decision tree to take a decision based on the outputs of the different labeling schemes. We have also tried to solve the problem of label inconsistency issue by attempting the naive approach of enforcing hard label-consistency by choosing the majority entity for a sequence of tokens, in the specific test document, as well as the whole test corpus, and managed to get reasonable gains. We also attempted soft label consistency in the following way. We use a portion of the training data to train a CRF to make predictions on the rest of the train data and on the test data. We then train a second CRF with the majority label predictions as additional input features.",
"title": ""
},
{
"docid": "d2324527cd1b8e28fd63c8c20f57f4d4",
"text": "Learning phonetic categories is one of the first steps to learning a language, yet is hard to do using only distributional phonetic information. Semantics could potentially be useful, since words with different meanings have distinct phonetics, but it is unclear how many word meanings are known to infants learning phonetic categories. We show that attending to a weaker source of semantics, in the form of a distribution over topics in the current context, can lead to improvements in phonetic category learning. In our model, an extension of a previous model of joint word-form and phonetic category inference, the probability of word-forms is topic-dependent, enabling the model to find significantly better phonetic vowel categories and word-forms than a model with no semantic knowledge.",
"title": ""
},
{
"docid": "f489708f15f3e5cdd15f669fb9979488",
"text": "Humans learn to play video games significantly faster than state-of-the-art reinforcement learning (RL) algorithms. Inspired by this, we introduce strategic object oriented reinforcement learning (SOORL) to learn simple dynamics model through automatic model selection and perform efficient planning with strategic exploration. We compare different exploration strategies in a model-based setting in which exact planning is impossible. Additionally, we test our approach on perhaps the hardest Atari game Pitfall! and achieve significantly improved exploration and performance over prior methods.",
"title": ""
},
{
"docid": "748ae7abfd8b1dfb3e79c94c5adace9d",
"text": "Users routinely access cloud services through third-party apps on smartphones by giving apps login credentials (i.e., a username and password). Unfortunately, users have no assurance that their apps will properly handle this sensitive information. In this paper, we describe the design and implementation of ScreenPass, which significantly improves the security of passwords on touchscreen devices. ScreenPass secures passwords by ensuring that they are entered securely, and uses taint-tracking to monitor where apps send password data. The primary technical challenge addressed by ScreenPass is guaranteeing that trusted code is always aware of when a user is entering a password. ScreenPass provides this guarantee through two techniques. First, ScreenPass includes a trusted software keyboard that encourages users to specify their passwords' domains as they are entered (i.e., to tag their passwords). Second, ScreenPass performs optical character recognition (OCR) on a device's screenbuffer to ensure that passwords are entered only through the trusted software keyboard. We have evaluated ScreenPass through experiments with a prototype implementation, two in-situ user studies, and a small app study. Our prototype detected a wide range of dynamic and static keyboard-spoofing attacks and generated zero false positives. As long as a screen is off, not updated, or not tapped, our prototype consumes zero additional energy; in the worst case, when a highly interactive app rapidly updates the screen, our prototype under a typical configuration introduces only 12% energy overhead. Participants in our user studies tagged their passwords at a high rate and reported that tagging imposed no additional burden. Finally, a study of malicious and non-malicious apps running under ScreenPass revealed several cases of password mishandling.",
"title": ""
},
{
"docid": "b5f7511566b902bc206228dc3214c211",
"text": "In the imitation learning paradigm algorithms learn from expert demonstrations in order to become able to accomplish a particular task. Daumé III et al. (2009) framed structured prediction in this paradigm and developed the search-based structured prediction algorithm (Searn) which has been applied successfully to various natural language processing tasks with state-of-the-art performance. Recently, Ross et al. (2011) proposed the dataset aggregation algorithm (DAgger) and compared it with Searn in sequential prediction tasks. In this paper, we compare these two algorithms in the context of a more complex structured prediction task, namely biomedical event extraction. We demonstrate that DAgger has more stable performance and faster learning than Searn, and that these advantages are more pronounced in the parameter-free versions of the algorithms.",
"title": ""
}
] | scidocsrr |
8beb7712d1d49745bf134ca4276f2787 | Overview: Generalizations of Multi-Agent Path Finding to Real-World Scenarios | [
{
"docid": "8bc1d9cd9a912a7c3a8e874ce09cae52",
"text": "Multi-Agent Path Finding (MAPF) is well studied in both AI and robotics. Given a discretized environment and agents with assigned start and goal locations, MAPF solvers from AI find collision-free paths for hundreds of agents with userprovided sub-optimality guarantees. However, they ignore that actual robots are subject to kinematic constraints (such as finite maximum velocity limits) and suffer from imperfect plan-execution capabilities. We therefore introduce MAPFPOST, a novel approach that makes use of a simple temporal network to postprocess the output of a MAPF solver in polynomial time to create a plan-execution schedule that can be executed on robots. This schedule works on non-holonomic robots, takes their maximum translational and rotational velocities into account, provides a guaranteed safety distance between them, and exploits slack to absorb imperfect plan executions and avoid time-intensive replanning in many cases. We evaluate MAPF-POST in simulation and on differentialdrive robots, showcasing the practicality of our approach.",
"title": ""
}
] | [
{
"docid": "b59a2c49364f3e95a2c030d800d5f9ce",
"text": "An algorithm with linear filters and morphological operations has been proposed for automatic fabric defect detection. The algorithm is applied off-line and real-time to denim fabric samples for five types of defects. All defect types have been detected successfully and the defective regions are labeled. The defective fabric samples are then classified by using feed forward neural network method. Both defect detection and classification application performances are evaluated statistically. Defect detection performance of real time and off-line applications are obtained as 88% and 83% respectively. The defective images are classified with an average accuracy rate of 96.3%.",
"title": ""
},
{
"docid": "fbcaba091a407d2bd831d3520577cf27",
"text": "Studying a software project by mining data from a single repository has been a very active research field in software engineering during the last years. However, few efforts have been devoted to perform studies by integrating data from various repositories, with different kinds of information, which would, for instance, track the different activities of developers. One of the main problems of these multi-repository studies is the different identities that developers use when they interact with different tools in different contexts. This makes them appear as different entities when data is mined from different repositories (and in some cases, even from a single one). In this paper we propose an approach, based on the application of heuristics, to identify the many identities of developers in such cases, and a data structure for allowing both the anonymized distribution of information, and the tracking of identities for verification purposes. The methodology will be presented in general, and applied to the GNOME project as a case example. Privacy issues and partial merging with new data sources will also be considered and discussed.",
"title": ""
},
{
"docid": "cbe1b2575db111cd3b22b7288c0e345c",
"text": "A reversible gate has the equal number of inputs and outputs and one-to-one mappings between input vectors and output vectors; so that, the input vector states can be always uniquely reconstructed from the output vector states. This correspondence introduces a reversible full-adder circuit that requires only three reversible gates and produces least number of \"garbage outputs \", that is two. After that, a theorem has been proposed that proves the optimality of the propounded circuit in terms of number of garbage outputs. An efficient algorithm is also introduced in this paper that leads to construct a reversible circuit.",
"title": ""
},
{
"docid": "8d3a5a9327ab93fef50712e931d0e06b",
"text": "Cite this article Romager JA, Hughes K, Trimble JE. Personality traits as predictors of leadership style preferences: Investigating the relationship between social dominance orientation and attitudes towards authentic leaders. Soc Behav Res Pract Open J. 2017; 3(1): 1-9. doi: 10.17140/SBRPOJ-3-110 Personality Traits as Predictors of Leadership Style Preferences: Investigating the Relationship Between Social Dominance Orientation and Attitudes Towards Authentic Leaders Original Research",
"title": ""
},
{
"docid": "6655b03c0fcc83a71a3119d7e526eedc",
"text": "Dynamic magnetic resonance imaging (MRI) scans can be accelerated by utilizing compressed sensing (CS) reconstruction methods that allow for diagnostic quality images to be generated from undersampled data. Unfortunately, CS reconstruction is time-consuming, requiring hours between a dynamic MRI scan and image availability for diagnosis. In this work, we train a convolutional neural network (CNN) to perform fast reconstruction of severely undersampled dynamic cardiac MRI data, and we explore the utility of CNNs for further accelerating dynamic MRI scan times. Compared to state-of-the-art CS reconstruction techniques, our CNN achieves reconstruction speeds that are 150x faster without significant loss of image quality. Additionally, preliminary results suggest that CNNs may allow scan times that are 2x faster than those allowed by CS.",
"title": ""
},
{
"docid": "a433f47a3c7c06a409a8fc0d98e955be",
"text": "The local-dimming backlight has recently been presented for use in LCD TVs. However, the image resolution is low, particularly at weak edges. In this work, a local-dimming backlight is developed to improve the image contrast and reduce power dissipation. The algorithm enhances low-level edge information to improve the perceived image resolution. Based on the algorithm, a 42-in backlight module with white light-emitting diode (LED) devices was driven by a local dimming control core. The block-wise register approach substantially reduced the number of required line-buffers and shortened the latency time. The measurements made in the laboratory indicate that the backlight system reduces power dissipation by an average of 48% and exhibits no visible distortion compared relative to the fixed backlighting system. The system was successfully demonstrated in a 42-in LCD TV, and the contrast ratio was greatly improved by a factor of 100.",
"title": ""
},
{
"docid": "e6bbe7de06295817435acafbbb7470cc",
"text": "Cortical circuits work through the generation of coordinated, large-scale activity patterns. In sensory systems, the onset of a discrete stimulus usually evokes a temporally organized packet of population activity lasting ∼50–200 ms. The structure of these packets is partially stereotypical, and variation in the exact timing and number of spikes within a packet conveys information about the identity of the stimulus. Similar packets also occur during ongoing stimuli and spontaneously. We suggest that such packets constitute the basic building blocks of cortical coding.",
"title": ""
},
{
"docid": "e6291818253de22ee675f67eed8213d9",
"text": "This literature review focuses on aesthetics of interaction design with further goal of outlining a study towards prediction model of aesthetic value. The review covers three main issues, tightly related to aesthetics of interaction design: evaluation of aesthetics, relations between aesthetics and interaction qualities and implementation of aesthetics in interaction design. Analysis of previous models is carried out according to definition of interaction aesthetics: holistic approach to aesthetic perception considering its' action- and appearance-related components. As a result the empirical study is proposed for investigating the relations between attributes of interaction and users' aesthetic experience.",
"title": ""
},
{
"docid": "7579b5cb9f18e3dc296bcddc7831abc5",
"text": "Unlike conventional anomaly detection research that focuses on point anomalies, our goal is to detect anomalous collections of individual data points. In particular, we perform group anomaly detection (GAD) with an emphasis on irregular group distributions (e.g. irregular mixtures of image pixels). GAD is an important task in detecting unusual and anomalous phenomena in real-world applications such as high energy particle physics, social media and medical imaging. In this paper, we take a generative approach by proposing deep generative models: Adversarial autoencoder (AAE) and variational autoencoder (VAE) for group anomaly detection. Both AAE and VAE detect group anomalies using point-wise input data where group memberships are known a priori. We conduct extensive experiments to evaluate our models on real world datasets. The empirical results demonstrate that our approach is effective and robust in detecting group anomalies.",
"title": ""
},
{
"docid": "860d39ff0ddd80caaf712e84a82f4d86",
"text": "Steganography and steganalysis received a great deal of attention from media and law enforcement. Many powerful and robust methods of steganography and steganalysis have been developed. In this paper we are considering the methods of steganalysis that are to be used for this processes. Paper giving some idea about the steganalysis and its method. Keywords— Include at least 5 keywords or phrases",
"title": ""
},
{
"docid": "1465b6c38296dfc46f8725dca5179cf1",
"text": "A brief introduction is given to the actual mechanics of simulated annealing, and a simple example from an IC layout is used to illustrate how these ideas can be applied. The complexities and tradeoffs involved in attacking a realistically complex design problem are illustrated by dissecting two very different annealing algorithms for VLSI chip floorplanning. Several current research problems aimed at determining more precisely how and why annealing algorithms work are examined. Some philosophical issues raised by the introduction of annealing are discussed.<<ETX>>",
"title": ""
},
{
"docid": "f8c1654abd0ffced4b5dbf3ef0724d36",
"text": "The proposed social media crisis mapping platform for natural disasters uses locations from gazetteer, street map, and volunteered geographic information (VGI) sources for areas at risk of disaster and matches them to geoparsed real-time tweet data streams. The authors use statistical analysis to generate real-time crisis maps. Geoparsing results are benchmarked against existing published work and evaluated across multilingual datasets. Two case studies compare five-day tweet crisis maps to official post-event impact assessment from the US National Geospatial Agency (NGA), compiled from verified satellite and aerial imagery sources.",
"title": ""
},
{
"docid": "1dee6d60a94e434dd6d3b6754e9cd3f3",
"text": "The barrier function of the intestine is essential for maintaining the normal homeostasis of the gut and mucosal immune system. Abnormalities in intestinal barrier function expressed by increased intestinal permeability have long been observed in various gastrointestinal disorders such as Crohn's disease (CD), ulcerative colitis (UC), celiac disease, and irritable bowel syndrome (IBS). Imbalance of metabolizing junction proteins and mucosal inflammation contributes to intestinal hyperpermeability. Emerging studies exploring in vitro and in vivo model system demonstrate that Rho-associated coiled-coil containing protein kinase- (ROCK-) and myosin light chain kinase- (MLCK-) mediated pathways are involved in the regulation of intestinal permeability. With this perspective, we aim to summarize the current state of knowledge regarding the role of inflammation and ROCK-/MLCK-mediated pathways leading to intestinal hyperpermeability in gastrointestinal disorders. In the near future, it may be possible to specifically target these specific pathways to develop novel therapies for gastrointestinal disorders associated with increased gut permeability.",
"title": ""
},
{
"docid": "e91dd3f9e832de48a27048a0efa1b67a",
"text": "Smart Home technology is the future of residential related technology which is designed to deliver and distribute number of services inside and outside the house via networked devices in which all the different applications & the intelligence behind them are integrated and interconnected. These smart devices have the potential to share information with each other given the permanent availability to access the broadband internet connection. Hence, Smart Home Technology has become part of IoT (Internet of Things). In this work, a home model is analyzed to demonstrate an energy efficient IoT based smart home. Several Multiphysics simulations were carried out focusing on the kitchen of the home model. A motion sensor with a surveillance camera was used as part of the home security system. Coupled with the home light and HVAC control systems, the smart system can remotely control the lighting and heating or cooling when an occupant enters or leaves the kitchen.",
"title": ""
},
{
"docid": "76e01466b9d7d4cbea714ce29f13759a",
"text": "In this survey we review the image processing literature on the various approaches and models investigators have used for texture. These include statistical approaches of autocorrelation function, optical transforms, digital transforms, textural edgeness, structural element, gray tone cooccurrence, run lengths, and autoregressive models. We discuss and generalize some structural approaches to texture based on more complex primitives than gray tone. We conclude with some structural-statistical generalizations which apply the statistical techniques to the structural primitives.",
"title": ""
},
{
"docid": "429b6eedecef4d769b3341aca7de85ef",
"text": "Correspondence Lars Ruthotto, Department of Mathematics and Computer Science, Emory University, 400 Dowman Dr, Atlanta, GA 30322, USA. Email: [email protected] Summary Image registration is a central problem in a variety of areas involving imaging techniques and is known to be challenging and ill-posed. Regularization functionals based on hyperelasticity provide a powerful mechanism for limiting the ill-posedness. A key feature of hyperelastic image registration approaches is their ability to model large deformations while guaranteeing their invertibility, which is crucial in many applications. To ensure that numerical solutions satisfy this requirement, we discretize the variational problem using piecewise linear finite elements, and then solve the discrete optimization problem using the Gauss–Newton method. In this work, we focus on computational challenges arising in approximately solving the Hessian system. We show that the Hessian is a discretization of a strongly coupled system of partial differential equations whose coefficients can be severely inhomogeneous. Motivated by a local Fourier analysis, we stabilize the system by thresholding the coefficients. We propose a Galerkin-multigrid scheme with a collective pointwise smoother. We demonstrate the accuracy and effectiveness of the proposed scheme, first on a two-dimensional problem of a moderate size and then on a large-scale real-world application with almost 9 million degrees of freedom.",
"title": ""
},
{
"docid": "c734c98b1ca8261694386c537870c2f3",
"text": "Uncontrolled wind turbine configuration, such as stall-regulation captures, energy relative to the amount of wind speed. This configuration requires constant turbine speed because the generator that is being directly coupled is also connected to a fixed-frequency utility grid. In extremely strong wind conditions, only a fraction of available energy is captured. Plants designed with such a configuration are economically unfeasible to run in these circumstances. Thus, wind turbines operating at variable speed are better alternatives. This paper focuses on a controller design methodology applied to a variable-speed, horizontal axis wind turbine. A simple but rigid wind turbine model was used and linearised to some operating points to meet the desired objectives. By using blade pitch control, the deviation of the actual rotor speed from a reference value is minimised. The performances of PI and PID controllers were compared relative to a step wind disturbance. Results show comparative responses between these two controllers. The paper also concludes that with the present methodology, despite the erratic wind data, the wind turbine still manages to operate most of the time at 88% in the stable region.",
"title": ""
},
{
"docid": "53b38576a378b7680a69bba1ebe971ba",
"text": "The detection of symmetry axes through the optimization of a given symmetry measure, computed as a function of the mean-square error between the original and reflected images, is investigated in this paper. A genetic algorithm and an optimization scheme derived from the self-organizing maps theory are presented. The notion of symmetry map is then introduced. This transform allows us to map an object into a symmetry space where its symmetry properties can be analyzed. The locations of the different axes that globally and locally maximize the symmetry value can be obtained. The input data are assumed to be vector-valued, which allow to focus on either shape. color or texture information. Finally, the application to skin cancer diagnosis is illustrated and discussed.",
"title": ""
},
{
"docid": "bb4a83a48d1943cc8205510dc2a750a8",
"text": "Whenever a document containing sensitive information needs to be made public, privacy-preserving measures should be implemented. Document sanitization aims at detecting sensitive pieces of information in text, which are removed or hidden prior publication. Even though methods detecting sensitive structured information like e-mails, dates or social security numbers, or domain specific data like disease names have been developed, the sanitization of raw textual data has been scarcely addressed. In this paper, we present a general-purpose method to automatically detect sensitive information from textual documents in a domain-independent way. Relying on the Information Theory and a corpus as large as the Web, it assess the degree of sensitiveness of terms according to the amount of information they provide. Preliminary results show that our method significantly improves the detection recall in comparison with approaches based on trained classifiers.",
"title": ""
}
] | scidocsrr |
6ebaf2722502a9553803a05b66bfa95e | There's No Free Lunch, Even Using Bitcoin: Tracking the Popularity and Profits of Virtual Currency Scams | [
{
"docid": "bc8b40babfc2f16144cdb75b749e3a90",
"text": "The Bitcoin scheme is a rare example of a large scale global payment system in which all the transactions are publicly accessible (but in an anonymous way). We downloaded the full history of this scheme, and analyzed many statistical properties of its associated transaction graph. In this paper we answer for the first time a variety of interesting questions about the typical behavior of users, how they acquire and how they spend their bitcoins, the balance of bitcoins they keep in their accounts, and how they move bitcoins between their various accounts in order to better protect their privacy. In addition, we isolated all the large transactions in the system, and discovered that almost all of them are closely related to a single large transaction that took place in November 2010, even though the associated users apparently tried to hide this fact with many strange looking long chains and fork-merge structures in the transaction graph.",
"title": ""
},
{
"docid": "8ee24b38d7cf4f63402cd4f2c0beaf79",
"text": "At the current stratospheric value of Bitcoin, miners with access to significant computational horsepower are literally printing money. For example, the first operator of a USD $1,500 custom ASIC mining platform claims to have recouped his investment in less than three weeks in early February 2013, and the value of a bitcoin has more than tripled since then. Not surprisingly, cybercriminals have also been drawn to this potentially lucrative endeavor, but instead are leveraging the resources available to them: stolen CPU hours in the form of botnets. We conduct the first comprehensive study of Bitcoin mining malware, and describe the infrastructure and mechanism deployed by several major players. By carefully reconstructing the Bitcoin transaction records, we are able to deduce the amount of money a number of mining botnets have made.",
"title": ""
}
] | [
{
"docid": "091c57447d5a3c97d3ff1afb57ebb4e3",
"text": "We present ORB-SLAM2, a complete simultaneous localization and mapping (SLAM) system for monocular, stereo and RGB-D cameras, including map reuse, loop closing, and relocalization capabilities. The system works in real time on standard central processing units in a wide variety of environments from small hand-held indoors sequences, to drones flying in industrial environments and cars driving around a city. Our back-end, based on bundle adjustment with monocular and stereo observations, allows for accurate trajectory estimation with metric scale. Our system includes a lightweight localization mode that leverages visual odometry tracks for unmapped regions and matches with map points that allow for zero-drift localization. The evaluation on 29 popular public sequences shows that our method achieves state-of-the-art accuracy, being in most cases the most accurate SLAM solution. We publish the source code, not only for the benefit of the SLAM community, but with the aim of being an out-of-the-box SLAM solution for researchers in other fields.",
"title": ""
},
{
"docid": "7a6ae2e12dbd9f4a0a3355caec648ca7",
"text": "Near Field Communication (NFC) is an emerging wireless short-range communication technology that is based on existing standards of the Radio Frequency Identification (RFID) infrastructure. In combination with NFC-capable smartphones it enables intuitive application scenarios for contactless transactions, in particular services for mobile payment and over-theair ticketing. The intention of this paper is to describe basic characteristics and benefits of the underlaying technology, to classify modes of operation and to present various use cases. Both existing NFC applications and possible future scenarios will be analyzed in this context. Furthermore, security concerns, challenges and present conflicts will be discussed eventually.",
"title": ""
},
{
"docid": "2bdfeabf15a4ca096c2fe5ffa95f3b17",
"text": "This paper studies how to incorporate the external word correlation knowledge to improve the coherence of topic modeling. Existing topic models assume words are generated independently and lack the mechanism to utilize the rich similarity relationships among words to learn coherent topics. To solve this problem, we build a Markov Random Field (MRF) regularized Latent Dirichlet Allocation (LDA) model, which defines a MRF on the latent topic layer of LDA to encourage words labeled as similar to share the same topic label. Under our model, the topic assignment of each word is not independent, but rather affected by the topic labels of its correlated words. Similar words have better chance to be put into the same topic due to the regularization of MRF, hence the coherence of topics can be boosted. In addition, our model can accommodate the subtlety that whether two words are similar depends on which topic they appear in, which allows word with multiple senses to be put into different topics properly. We derive a variational inference method to infer the posterior probabilities and learn model parameters and present techniques to deal with the hardto-compute partition function in MRF. Experiments on two datasets demonstrate the effectiveness of our model.",
"title": ""
},
{
"docid": "4a9da1575b954990f98e6807deae469e",
"text": "Recently, there has been considerable debate concerning key sizes for publ i c key based cry p t o graphic methods. Included in the debate have been considerations about equivalent key sizes for diffe rent methods and considerations about the minimum re q u i red key size for diffe rent methods. In this paper we propose a method of a n a lyzing key sizes based upon the value of the data being protected and the cost of b reaking ke y s . I . I n t ro d u c t i o n A . W H Y I S K E Y S I Z E I M P O R T A N T ? In order to keep transactions based upon public key cryptography secure, one must ensure that the underlying keys are sufficiently large as to render the best possible attack infeasible. However, this really just begs the question as one is now left with the task of defining ‘infeasible’. Does this mean infeasible given access to (say) most of the Internet to do the computations? Does it mean infeasible to a large adversary with a large (but unspecified) budget to buy the hardware for an attack? Does it mean infeasible with what hardware might be obtained in practice by utilizing the Internet? Is it reasonable to assume that if utilizing the entire Internet in a key breaking effort makes a key vulnerable that such an attack might actually be conducted? If a public effort involving a substantial fraction of the Internet breaks a single key, does this mean that similar sized keys are unsafe? Does one need to be concerned about such public efforts or does one only need to be concerned about possible private, sur reptitious efforts? After all, if a public attack is known on a particular key, it is easy to change that key. We shall attempt to address these issues within this paper. number 13 Apr i l 2000 B u l l e t i n News and A dv i c e f rom RSA La bo rat o r i e s I . I n t ro d u c t i o n I I . M et ho ds o f At tac k I I I . H i s tor i ca l R es u l t s and t he R S A Ch a l le nge I V. Se cu r i t y E st i m ate s",
"title": ""
},
{
"docid": "ae6d36ccbf79ae6f62af3a62ef3e3bb2",
"text": "This paper presents a new neural network system called the Evolving Tree. This network resembles the Self-Organizing map, but deviates from it in several aspects, which are desirable in many analysis tasks. First of all the Evolving Tree grows automatically, so the user does not have to decide the network’s size before training. Secondly the network has a hierarchical structure, which makes network training and use computationally very efficient. Test results with both synthetic and actual data show that the Evolving Tree works quite well.",
"title": ""
},
{
"docid": "7d5d2f819a5b2561db31645d534836b8",
"text": "Recent work has suggested enhancing Bloom filters by using a pre-filter, based on applying machine learning to model the data set the Bloom filter is meant to represent. Here we model such learned Bloom filters, clarifying what guarantees can and cannot be associated with such a structure.",
"title": ""
},
{
"docid": "1eba8eccf88ddb44a88bfa4a937f648f",
"text": "We present a deep learning framework for probabilistic pixel-wise semantic segmentation, which we term Bayesian SegNet. Semantic segmentation is an important tool for visual scene understanding and a meaningful measure of uncertainty is essential for decision making. Our contribution is a practical system which is able to predict pixelwise class labels with a measure of model uncertainty using Bayesian deep learning. We achieve this by Monte Carlo sampling with dropout at test time to generate a posterior distribution of pixel class labels. In addition, we show that modelling uncertainty improves segmentation performance by 2-3% across a number of datasets and architectures such as SegNet, FCN, Dilation Network and DenseNet.",
"title": ""
},
{
"docid": "0d747bd516498ae314e3197b7e7ad1e3",
"text": "Neurotoxins and fillers continue to remain in high demand, comprising a large part of the growing business of cosmetic minimally invasive procedures. Multiple Food and Drug Administration-approved safe yet different products exist within each category, and the role of each product continues to expand. The authors review the literature to provide an overview of the use of neurotoxins and fillers and their future directions.",
"title": ""
},
{
"docid": "2edcf1a54bded9a77345cbe88cc02533",
"text": "Although the uncanny exists, the inherent, unavoidable dip (or valley) may be an illusion. Extremely abstract robots can be uncanny if the aesthetic is off, as can cosmetically atypical humans. Thus, the uncanny occupies a continuum ranging from the abstract to the real, although norms of acceptability may narrow as one approaches human likeness. However, if the aesthetic is right, any level of realism or abstraction can be appealing. If so, then avoiding or creating an uncanny effect just depends on the quality of the aesthetic design, regardless of the level of realism. The author’s preliminary experiments on human reaction to near-realistic androids appear to support this hypothesis.",
"title": ""
},
{
"docid": "56998c03c373dfae07460a7b731ef03e",
"text": "52 This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/ by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited. Statistical notes for clinical researchers: assessing normal distribution (2) using skewness and kurtosis",
"title": ""
},
{
"docid": "a084e7dd5485e01d97ccf628bc00d644",
"text": "A novel concept called gesture-changeable under-actuated (GCUA) function is proposed to improve the dexterities of traditional under-actuated hands and reduce the control difficulties of dexterous hands. Based on the GCUA function, a new humanoid robot hand, GCUA Hand is designed and manufactured. The GCUA Hand can grasp different objects self-adaptively and change its initial gesture dexterously before contacting objects. The hand has 5 fingers and 15 DOFs, each finger is based on screw-nut transmission, flexible drawstring constraint and belt-pulley under-actuated mechanism to realize GCUA function. The analyses on grasping static forces and grasping stabilities are put forward. The analyses and Experimental results show that the GCUA function is very nice and valid. The hands with the GCUA function can meet the requirements of grasping and operating with lower control and cost, which is the middle road between traditional under-actuated hands and dexterous hands.",
"title": ""
},
{
"docid": "e7b42688ce3936604aefa581802040a4",
"text": "Identity management through biometrics offer potential advantages over knowledge and possession based methods. A wide variety of biometric modalities have been tested so far but several factors paralyse the accuracy of mono modal biometric systems. Usually, the analysis of multiple modalities offers better accuracy. An extensive review of biometric technology is presented here. Besides the mono modal systems, the article also discusses multi modal biometric systems along with their architecture and information fusion levels. The paper along with the exemplary evidences highlights the potential for biometric technology, market value and prospects. Keywords— Biometrics, Fingerprint, Face, Iris, Retina, Behavioral biometrics, Gait, Voice, Soft biometrics, Multi-modal biometrics.",
"title": ""
},
{
"docid": "69624e1501b897bf1a9f9a5a84132da3",
"text": "360° videos and Head-Mounted Displays (HMDs) are geing increasingly popular. However, streaming 360° videos to HMDs is challenging. is is because only video content in viewers’ Fieldof-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing the 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to our best knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensor data (such as viewer head positions and orientations derived from HMD sensors). We put extra eorts to align the content and sensor data using the timestamps in the raw log les. e resulting datasets can be used by researchers, engineers, and hobbyists to either optimize existing 360° video streaming applications (like rate-distortion optimization) and novel applications (like crowd-driven cameramovements). We believe that our dataset will stimulate more research activities along this exciting new research direction. ACM Reference format: Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu. 2017. 360° Video Viewing Dataset in Head-Mounted Virtual Reality. In Proceedings ofMMSys’17, Taipei, Taiwan, June 20-23, 2017, 6 pages. DOI: hp://dx.doi.org/10.1145/3083187.3083219 CCS Concept • Information systems→Multimedia streaming",
"title": ""
},
{
"docid": "f519d349d928e7006955943043ab0eae",
"text": "A critical application of metabolomics is the evaluation of tissues, which are often the primary sites of metabolic dysregulation in disease. Laboratory rodents have been widely used for metabolomics studies involving tissues due to their facile handing, genetic manipulability and similarity to most aspects of human metabolism. However, the necessary step of administration of anesthesia in preparation for tissue sampling is not often given careful consideration, in spite of its potential for causing alterations in the metabolome. We examined, for the first time using untargeted and targeted metabolomics, the effect of several commonly used methods of anesthesia and euthanasia for collection of skeletal muscle, liver, heart, adipose and serum of C57BL/6J mice. The data revealed dramatic, tissue-specific impacts of tissue collection strategy. Among many differences observed, post-euthanasia samples showed elevated levels of glucose 6-phosphate and other glycolytic intermediates in skeletal muscle. In heart and liver, multiple nucleotide and purine degradation metabolites accumulated in tissues of euthanized compared to anesthetized animals. Adipose tissue was comparatively less affected by collection strategy, although accumulation of lactate and succinate in euthanized animals was observed in all tissues. Among methods of tissue collection performed pre-euthanasia, ketamine showed more variability compared to isoflurane and pentobarbital. Isoflurane induced elevated liver aspartate but allowed more rapid initiation of tissue collection. Based on these findings, we present a more optimal collection strategy mammalian tissues and recommend that rodent tissues intended for metabolomics studies be collected under anesthesia rather than post-euthanasia.",
"title": ""
},
{
"docid": "099a2ee305b703a765ff3579f0e0c1c3",
"text": "To enhance the security of mobile cloud users, a few proposals have been presented recently. However we argue that most of them are not suitable for mobile cloud where mobile users might join or leave the mobile networks arbitrarily. In this paper, we design a secure mobile user-based data service mechanism (SDSM) to provide confidentiality and fine-grained access control for data stored in the cloud. This mechanism enables the mobile users to enjoy a secure outsourced data services at a minimized security management overhead. The core idea of SDSM is that SDSM outsources not only the data but also the security management to the mobile cloud in a trust way. Our analysis shows that the proposed mechanism has many advantages over the existing traditional methods such as lower overhead and convenient update, which could better cater the requirements in mobile cloud computing scenarios.",
"title": ""
},
{
"docid": "0e5a11ef4daeb969702e40ea0c50d7f3",
"text": "OBJECTIVES\nThe aim of this study was to assess the long-term safety and efficacy of the CYPHER (Cordis, Johnson and Johnson, Bridgewater, New Jersey) sirolimus-eluting coronary stent (SES) in percutaneous coronary intervention (PCI) for ST-segment elevation myocardial infarction (STEMI).\n\n\nBACKGROUND\nConcern over the safety of drug-eluting stents implanted during PCI for STEMI remains, and long-term follow-up from randomized trials are necessary. TYPHOON (Trial to assess the use of the cYPHer sirolimus-eluting stent in acute myocardial infarction treated with ballOON angioplasty) randomized 712 patients with STEMI treated by primary PCI to receive either SES (n = 355) or bare-metal stents (BMS) (n = 357). The primary end point, target vessel failure at 1 year, was significantly lower in the SES group than in the BMS group (7.3% vs. 14.3%, p = 0.004) with no increase in adverse events.\n\n\nMETHODS\nA 4-year follow-up was performed. Complete data were available in 501 patients (70%), and the survival status is known in 580 patients (81%).\n\n\nRESULTS\nFreedom from target lesion revascularization (TLR) at 4 years was significantly better in the SES group (92.4% vs. 85.1%; p = 0.002); there were no significant differences in freedom from cardiac death (97.6% and 95.9%; p = 0.37) or freedom from repeat myocardial infarction (94.8% and 95.6%; p = 0.85) between the SES and BMS groups. No difference in definite/probable stent thrombosis was noted at 4 years (SES: 4.4%, BMS: 4.8%, p = 0.83). In the 580 patients with known survival status at 4 years, the all-cause death rate was 5.8% in the SES and 7.0% in the BMS group (p = 0.61).\n\n\nCONCLUSIONS\nIn the 70% of patients with complete follow-up at 4 years, SES demonstrated sustained efficacy to reduce TLR with no difference in death, repeat myocardial infarction or stent thrombosis. (The Study to Assess AMI Treated With Balloon Angioplasty [TYPHOON]; NCT00232830).",
"title": ""
},
{
"docid": "08a6f27e905a732062ae585d8b324200",
"text": "The advent of cost-effectiveness and easy-operation depth cameras has facilitated a variety of visual recognition tasks including human activity recognition. This paper presents a novel framework for recognizing human activities from video sequences captured by depth cameras. We extend the surface normal to polynormal by assembling local neighboring hypersurface normals from a depth sequence to jointly characterize local motion and shape information. We then propose a general scheme of super normal vector (SNV) to aggregate the low-level polynormals into a discriminative representation, which can be viewed as a simplified version of the Fisher kernel representation. In order to globally capture the spatial layout and temporal order, an adaptive spatio-temporal pyramid is introduced to subdivide a depth video into a set of space-time cells. In the extensive experiments, the proposed approach achieves superior performance to the state-of-the-art methods on the four public benchmark datasets, i.e., MSRAction3D, MSRDailyActivity3D, MSRGesture3D, and MSRActionPairs3D.",
"title": ""
},
{
"docid": "957a3970611470b611c024ed3b558115",
"text": "SHARE is a unique panel database of micro data on health, socio-economic status and social and family networks covering most of the European Union and Israel. To date, SHARE has collected three panel waves (2004, 2006, 2010) of current living circumstances and retrospective life histories (2008, SHARELIFE); 6 additional waves are planned until 2024. The more than 150 000 interviews give a broad picture of life after the age of 50 years, measuring physical and mental health, economic and non-economic activities, income and wealth, transfers of time and money within and outside the family as well as life satisfaction and well-being. The data are available to the scientific community free of charge at www.share-project.org after registration. SHARE is harmonized with the US Health and Retirement Study (HRS) and the English Longitudinal Study of Ageing (ELSA) and has become a role model for several ageing surveys worldwide. SHARE's scientific power is based on its panel design that grasps the dynamic character of the ageing process, its multidisciplinary approach that delivers the full picture of individual and societal ageing, and its cross-nationally ex-ante harmonized design that permits international comparisons of health, economic and social outcomes in Europe and the USA.",
"title": ""
},
{
"docid": "efe279fbc7307bc6a191ebb397b01823",
"text": "Real-time traffic sign detection and recognition has been receiving increasingly more attention in recent years due to the popularity of driver-assistance systems and autonomous vehicles. This paper proposes an accurate and efficient traffic sign detection technique by exploring AdaBoost and support vector regression (SVR) for discriminative detector learning. Different from the reported traffic sign detection techniques, a novel saliency estimation approach is first proposed, where a new saliency model is built based on the traffic sign-specific color, shape, and spatial information. By incorporating the saliency information, enhanced feature pyramids are built to learn an AdaBoost model that detects a set of traffic sign candidates from images. A novel iterative codeword selection algorithm is then designed to generate a discriminative codebook for the representation of sign candidates, as detected by the AdaBoost, and an SVR model is learned to identify the real traffic signs from the detected sign candidates. Experiments on three public data sets show that the proposed traffic sign detection technique is robust and obtains superior accuracy and efficiency.",
"title": ""
},
{
"docid": "764ebb7673237d152995a0b6ae34e82a",
"text": "Due to limitations of chemical analysis procedures, small concentrations cannot be precisely measured. These concentrations are said to be below the limit of detection (LOD). In statistical analyses, these values are often censored and substituted with a constant value, such as half the LOD, the LOD divided by the square root of 2, or zero. These methods for handling below-detection values results in two distributions, a uniform distribution for those values below the LOD, and the true distribution. As a result, this can produce questionable descriptive statistics depending upon the percentage of values below the LOD. An alternative method uses the characteristics of the distribution of the values above the LOD to estimate the values below the LOD. This can be done with an extrapolation technique or maximum likelihood estimation. An example program using the same data is presented calculating the mean, standard deviation, t-test, and relative difference in the means for various methods and compares the results. The extrapolation and maximum likelihood estimate techniques have smaller error rates than all the standard replacement techniques. Although more computational, these methods produce more reliable descriptive statistics.",
"title": ""
}
] | scidocsrr |
299763e0a76597424582bf792d879f1d | Sexuality before and after male-to-female sex reassignment surgery. | [
{
"docid": "9b1a4e27c5d387ef091fdb9140eb8795",
"text": "In this study I investigated the relation between normal heterosexual attraction and autogynephilia (a man's propensity to be sexually aroused by the thought or image of himself as a woman). The subjects were 427 adult male outpatients who reported histories of dressing in women's garments, of feeling like women, or both. The data were questionnaire measures of autogynephilia, heterosexual interest, and other psychosexual variables. As predicted, the highest levels of autogynephilia were observed at intermediate rather than high levels of heterosexual interest; that is, the function relating these variables took the form of an inverted U. This finding supports the hypothesis that autogynephilia is a misdirected type of heterosexual impulse, which arises in association with normal heterosexuality but also competes with it.",
"title": ""
}
] | [
{
"docid": "fa260dabc7a58b760b4306e880afb821",
"text": "BACKGROUND\nPerforator-based flaps have been explored across almost all of the lower leg except in the Achilles tendon area. This paper introduced a perforator flap sourced from this area with regard to its anatomic basis and clinical applications.\n\n\nMETHODS\nTwenty-four adult cadaver legs were dissected to investigate the perforators emerging along the lateral edge of the Achilles tendon in terms of number and location relative to the tip of the lateral malleolus, and distribution. Based on the anatomic findings, perforator flaps, based on the perforator(s) of the lateral calcaneal artery (LCA) alone or in concert with the perforator of the peroneal artery (PA), were used for reconstruction of lower-posterior heel defects in eight cases. Postoperatively, subjective assessment and Semmes-Weinstein filament test were performed to evaluate the sensibility of the sural nerve-innerved area.\n\n\nRESULTS\nThe PA ended into the anterior perforating branch and LCA at the level of 6.0 ± 1.4 cm (range 3.3-9.4 cm) above the tip of the lateral malleolus. Both PA and LCA, especially the LCA, gave rise to perforators to contribute to the integument overlying the Achilles tendon. Of eight flaps, six were based on perforator(s) of the LCA and two were on perforators of the PA and LCA. Follow-up lasted for 6-28 months (mean 13.8 months), during which total flap loss and nerve injury were not found. Functional and esthetic outcomes were good in all patients.\n\n\nCONCLUSION\nThe integument overlying the Achilles tendon gets its blood supply through the perforators of the LCA primarily and that of through the PA secondarily. The LCA perforator(s)-based and the LCA plus PA perforators-based stepladder flap is a reliable, sensate flap, and should be thought of as a valuable procedure of choice for coverage of lower-posterior heel defects in selected patients.",
"title": ""
},
{
"docid": "82dd67625fd8f2af3bf825fdef410836",
"text": "Public health thrives on high-quality evidence, yet acquiring meaningful data on a population remains a central challenge of public health research and practice. Social monitoring, the analysis of social media and other user-generated web data, has brought advances in the way we leverage population data to understand health. Social media offers advantages over traditional data sources, including real-time data availability, ease of access, and reduced cost. Social media allows us to ask, and answer, questions we never thought possible. This book presents an overview of the progress on uses of social monitoring to study public health over the past decade. We explain available data sources, common methods, and survey research on social monitoring in a wide range of public health areas. Our examples come from topics such as disease surveillance, behavioral medicine, and mental health, among others. We explore the limitations and concerns of these methods. Our survey of this exciting new field of data-driven research lays out future research directions.",
"title": ""
},
{
"docid": "553719cb1cb8829ceaf8e0f1a40953ff",
"text": "“The distinctive faculties of Man are visibly expressed in his elevated cranial domeda feature which, though much debased in certain savage races, essentially characterises the human species. But, considering that the Neanderthal skull is eminently simial, both in its general and particular characters, I feel myself constrained to believe that the thoughts and desires which once dwelt within it never soared beyond those of a brute. The Andamaner, it is indisputable, possesses but the dimmest conceptions of the existence of the Creator of the Universe: his ideas on this subject, and on his own moral obligations, place him very little above animals of marked sagacity; nevertheless, viewed in connection with the strictly human conformation of his cranium, they are such as to specifically identify him with Homo sapiens. Psychical endowments of a lower grade than those characterising the Andamaner cannot be conceived to exist: they stand next to brute benightedness. (.) Applying the above argument to the Neanderthal skull, and considering . that it more closely conforms to the brain-case of the Chimpanzee, . there seems no reason to believe otherwise than that similar darkness characterised the being to which the fossil belonged” (King, 1864; pp. 96).",
"title": ""
},
{
"docid": "397b3b96c16b2ce310ab61f9d2d7bdbf",
"text": "Dependency networks approximate a joint probability distribution over multiple random variables as a product of conditional distributions. Relational Dependency Networks (RDNs) are graphical models that extend dependency networks to relational domains. This higher expressivity, however, comes at the expense of a more complex model-selection problem: an unbounded number of relational abstraction levels might need to be explored. Whereas current learning approaches for RDNs learn a single probability tree per random variable, we propose to turn the problem into a series of relational function-approximation problems using gradient-based boosting. In doing so, one can easily induce highly complex features over several iterations and in turn estimate quickly a very expressive model. Our experimental results in several different data sets show that this boosting method results in efficient learning of RDNs when compared to state-of-the-art statistical relational learning approaches.",
"title": ""
},
{
"docid": "5ff019e3c12f7b1c2b3518e0883e3b6f",
"text": "A novel PFC (Power Factor Corrected) Converter using Zeta DC-DC converter feeding a BLDC (Brush Less DC) motor drive using a single voltage sensor is proposed for fan applications. A single phase supply followed by an uncontrolled bridge rectifier and a Zeta DC-DC converter is used to control the voltage of a DC link capacitor which is lying between the Zeta converter and a VSI (Voltage Source Inverter). Voltage of a DC link capacitor of Zeta converter is controlled to achieve the speed control of BLDC motor. The Zeta converter is working as a front end converter operating in DICM (Discontinuous Inductor Current Mode) and thus using a voltage follower approach. The DC link capacitor of the Zeta converter is followed by a VSI which is feeding a BLDC motor. A sensorless control of BLDC motor is used to eliminate the requirement of Hall Effect position sensors. A MATLAB/Simulink environment is used to simulate the developed model to achieve a wide range of speed control with high PF (power Factor) and improved PQ (Power Quality) at the supply.",
"title": ""
},
{
"docid": "27f3060ef96f1656148acd36d50f02ce",
"text": "Video sensors become particularly important in traffic applications mainly due to their fast response, easy installation, operation and maintenance, and their ability to monitor wide areas. Research in several fields of traffic applications has resulted in a wealth of video processing and analysis methods. Two of the most demanding and widely studied applications relate to traffic monitoring and automatic vehicle guidance. In general, systems developed for these areas must integrate, amongst their other tasks, the analysis of their static environment (automatic lane finding) and the detection of static or moving obstacles (object detection) within their space of interest. In this paper we present an overview of image processing and analysis tools used in these applications and we relate these tools with complete systems developed for specific traffic applications. More specifically, we categorize processing methods based on the intrinsic organization of their input data (feature-driven, area-driven, or model-based) and the domain of processing (spatial/frame or temporal/video). Furthermore, we discriminate between the cases of static and mobile camera. Based on this categorization of processing tools, we present representative systems that have been deployed for operation. Thus, the purpose of the paper is threefold. First, to classify image-processing methods used in traffic applications. Second, to provide the advantages and disadvantages of these algorithms. Third, from this integrated consideration, to attempt an evaluation of shortcomings and general needs in this field of active research. q 2003 Elsevier Science B.V. All rights reserved.",
"title": ""
},
{
"docid": "06675c4b42683181cecce7558964c6b6",
"text": "We present in this work an economic analysis of ransomware, with relevant data from Cryptolocker, CryptoWall, TeslaCrypt and other major strands. We include a detailed study of the impact that different price discrimination strategies can have on the success of a ransomware family, examining uniform pricing, optimal price discrimination and bargaining strategies and analysing their advantages and limitations. In addition, we present results of a preliminary survey that can helps in estimating an optimal ransom value. We discuss at each stage whether the different schemes we analyse have been encountered already in existing malware, and the likelihood of them being implemented and becoming successful. We hope this work will help to gain some useful insights for predicting how ransomware may evolve in the future and be better prepared to counter its current and future threat.",
"title": ""
},
{
"docid": "386edbf8dee79dd53a0a6c3475286f13",
"text": "The underrepresentation of women at the top of math-intensive fields is controversial, with competing claims of biological and sociocultural causation. The authors develop a framework to delineate possible causal pathways and evaluate evidence for each. Biological evidence is contradictory and inconclusive. Although cross-cultural and cross-cohort differences suggest a powerful effect of sociocultural context, evidence for specific factors is inconsistent and contradictory. Factors unique to underrepresentation in math-intensive fields include the following: (a) Math-proficient women disproportionately prefer careers in non-math-intensive fields and are more likely to leave math-intensive careers as they advance; (b) more men than women score in the extreme math-proficient range on gatekeeper tests, such as the SAT Mathematics and the Graduate Record Examinations Quantitative Reasoning sections; (c) women with high math competence are disproportionately more likely to have high verbal competence, allowing greater choice of professions; and (d) in some math-intensive fields, women with children are penalized in promotion rates. The evidence indicates that women's preferences, potentially representing both free and constrained choices, constitute the most powerful explanatory factor; a secondary factor is performance on gatekeeper tests, most likely resulting from sociocultural rather than biological causes.",
"title": ""
},
{
"docid": "fe38de8c129845b86ee0ec4acf865c14",
"text": "0 7 4 0 7 4 5 9 / 0 2 / $ 1 7 . 0 0 © 2 0 0 2 I E E E McDonald’s develop product lines. But software product lines are a relatively new concept. They are rapidly emerging as a practical and important software development paradigm. A product line succeeds because companies can exploit their software products’ commonalities to achieve economies of production. The Software Engineering Institute’s (SEI) work has confirmed the benefits of pursuing this approach; it also found that doing so is both a technical and business decision. To succeed with software product lines, an organization must alter its technical practices, management practices, organizational structure and personnel, and business approach.",
"title": ""
},
{
"docid": "7e127a6f25e932a67f333679b0d99567",
"text": "This paper presents a novel manipulator for human-robot interaction that has low mass and inertia without losing stiffness and payload performance. A lightweight tension amplifying mechanism that increases the joint stiffness in quadratic order is proposed. High stiffness is essential for precise and rapid manipulation, and low mass and inertia are important factors for safety due to low stored kinetic energy. The proposed tension amplifying mechanism was applied to a 1-DOF elbow joint and then extended to a 3-DOF wrist joint. The developed manipulator was analyzed in terms of inertia, stiffness, and strength properties. Its moving part weighs 3.37 kg, and its inertia is 0.57 kg·m2, which is similar to that of a human arm. The stiffness of the developed elbow joint is 1440Nm/rad, which is comparable to that of the joints with rigid components in industrial manipulators. A detailed description of the design is provided, and thorough analysis verifies the performance of the proposed mechanism.",
"title": ""
},
{
"docid": "e1c877aa583aa10e2565ef2748585cb0",
"text": "OBJECTIVE\nTo encourage treatment of depression and prevention of suicide in physicians by calling for a shift in professional attitudes and institutional policies to support physicians seeking help.\n\n\nPARTICIPANTS\nAn American Foundation for Suicide Prevention planning group invited 15 experts on the subject to evaluate the state of knowledge about physician depression and suicide and barriers to treatment. The group assembled for a workshop held October 6-7, 2002, in Philadelphia, Pa.\n\n\nEVIDENCE\nThe planning group worked with each participant on a preworkshop literature review in an assigned area. Abstracts of presentations and key publications were distributed to participants before the workshop. After workshop presentations, participants were assigned to 1 of 2 breakout groups: (1) physicians in their role as patients and (2) medical institutions and professional organizations. The groups identified areas that required further research, barriers to treatment, and recommendations for reform.\n\n\nCONSENSUS PROCESS\nThis consensus statement emerged from a plenary session during which each work group presented its recommendations. The consensus statement was circulated to and approved by all participants.\n\n\nCONCLUSIONS\nThe culture of medicine accords low priority to physician mental health despite evidence of untreated mood disorders and an increased burden of suicide. Barriers to physicians' seeking help are often punitive, including discrimination in medical licensing, hospital privileges, and professional advancement. This consensus statement recommends transforming professional attitudes and changing institutional policies to encourage physicians to seek help. As barriers are removed and physicians confront depression and suicidality in their peers, they are more likely to recognize and treat these conditions in patients, including colleagues and medical students.",
"title": ""
},
{
"docid": "59c4b8a66a6cf6add26000cb2475ffe6",
"text": "Intelligent transport systems are the rising technology in the near future to build cooperative vehicular networks in which a variety of different ITS applications are expected to communicate with a variety of different units. Therefore, the demand for highly customized communication channel for each or sets of similar ITS applications is increased. This article explores the capabilities of available wireless communication technologies in order to produce a win-win situation while selecting suitable carrier( s) for a single application or a profile of similar applications. Communication requirements for future ITS applications are described to select the best available communication interface for the target application(s).",
"title": ""
},
{
"docid": "5aa8c418b63a3ecb71dc60d4863f35cc",
"text": "Based on the sense definition of words available in the Bengali WordNet, an attempt is made to classify the Bengali sentences automatically into different groups in accordance with their underlying senses. The input sentences are collected from 50 different categories of the Bengali text corpus developed in the TDIL project of the Govt. of India, while information about the different senses of particular ambiguous lexical item is collected from Bengali WordNet. In an experimental basis we have used Naive Bayes probabilistic model as a useful classifier of sentences. We have applied the algorithm over 1747 sentences that contain a particular Bengali lexical item which, because of its ambiguous nature, is able to trigger different senses that render sentences in different meanings. In our experiment we have achieved around 84% accurate result on the sense classification over the total input sentences. We have analyzed those residual sentences that did not comply with our experiment and did affect the results to note that in many cases, wrong syntactic structures and less semantic information are the main hurdles in semantic classification of sentences. The applicational relevance of this study is attested in automatic text classification, machine learning, information extraction, and word sense disambiguation.",
"title": ""
},
{
"docid": "0e153353fb8af1511de07c839f6eaca5",
"text": "The calculation of a transformer's parasitics, such as its self capacitance, is fundamental for predicting the frequency behavior of the device, reducing this capacitance value and moreover for more advanced aims of capacitance integration and cancellation. This paper presents a comprehensive procedure for calculating all contributions to the self-capacitance of high-voltage transformers and provides a detailed analysis of the problem, based on a physical approach. The advantages of the analytical formulation of the problem rather than a finite element method analysis are discussed. The approach and formulas presented in this paper can also be used for other wound components rather than just step-up transformers. Finally, analytical and experimental results are presented for three different high-voltage transformer architectures.",
"title": ""
},
{
"docid": "9679713ae8ab7e939afba18223086128",
"text": "If, as many psychologists seem to believe, im mediate memory represents a distinct system or set of processes from long-term memory (L TM), then what might· it be for? This fundamental, functional question was surprisingly unanswer able in the 1970s, given the volume of research that had explored short-term memory (STM), and given the ostensible role that STM was thought to play in cognitive control (Atkinson & Shiffrin, 1971 ). Indeed, failed attempts to link STM to complex cognitive· functions, such as reading comprehension, loomed large in Crow der's (1982) obituary for the concept. Baddeley and Hitch ( 197 4) tried to validate immediate memory's functions by testing sub jects in reasoning, comprehension, and list learning tasks at the same time their memory was occupied by irrelevant material. Generally, small memory loads (i.e., three or fewer items) were retained with virtually no effect on the primary tasks, whereas memory loads of six items consistently impaired reasoning, compre hension, and learning. Baddeley and Hitch therefore argued that \"working memory\" (WM)",
"title": ""
},
{
"docid": "c625e9d1bb6cdb54864ab10ae2b0e060",
"text": "This special issue of the proceedings of the IEEE presents a systematical and complete tutorial on digital television (DTV), produced by a team of DTV experts worldwide. This introductory paper puts the current DTV systems into perspective and explains the historical background and different evolution paths that each system took. The main focus is on terrestrial DTV systems, but satellite and cable DTV are also covered,as well as several other emerging services.",
"title": ""
},
{
"docid": "5be35d2aa81cc1e15b857892f376fbf0",
"text": "This paper proposes a new method for fabric defect classification by incorporating the design of a wavelet frames based feature extractor with the design of a Euclidean distance based classifier. Channel variances at the outputs of the wavelet frame decomposition are used to characterize each nonoverlapping window of the fabric image. A feature extractor using linear transformation matrix is further employed to extract the classification-oriented features. With a Euclidean distance based classifier, each nonoverlapping window of the fabric image is then assigned to its corresponding category. Minimization of the classification error is achieved by incorporating the design of the feature extractor with the design of the classifier based on minimum classification error (MCE) training method. The proposed method has been evaluated on the classification of 329 defect samples containing nine classes of fabric defects, and 328 nondefect samples, where 93.1% classification accuracy has been achieved.",
"title": ""
},
{
"docid": "c68b94c11170fae3caf7dc211ab83f91",
"text": "Data mining is the extraction of useful, prognostic, interesting, and unknown information from massive transaction databases and other repositories. Data mining tools predict potential trends and actions, allowing various fields to make proactive, knowledge-driven decisions. Recently, with the rapid growth of information technology, the amount of data has exponentially increased in various fields. Big data mostly comes from people’s day-to-day activities and Internet-based companies. Mining frequent itemsets and association rule mining (ARM) are well-analysed techniques for revealing attractive correlations among variables in huge datasets. The Apriori algorithm is one of the most broadly used algorithms in ARM, and it collects the itemsets that frequently occur in order to discover association rules in massive datasets. The original Apriori algorithm is for sequential (single node or computer) environments. This Apriori algorithm has many drawbacks for processing huge datasets, such as that a single machine’s memory, CPU and storage capacity are insufficient. Parallel and distributed computing is the better solution to overcome the above problems. Many researchers have parallelized the Apriori algorithm. This study performs a survey on several well-enhanced and revised techniques for the parallel Apriori algorithm in the HadoopMapReduce environment. The Hadoop-MapReduce framework is a programming model that efficiently and effectively processes enormous databases in parallel. It can handle large clusters of commodity hardware in a reliable and fault-tolerant manner. This survey will provide an overall view of the parallel Apriori algorithm implementation in the Hadoop-MapReduce environment and briefly discuss the challenges and open issues of big data in the cloud and Hadoop-MapReduce. Moreover, this survey will not only give overall existing improved Apriori algorithm methods on Hadoop-MapReduce but also provide future research direction for upcoming researchers.",
"title": ""
},
{
"docid": "c3500e2b50f70c81d7f2c4a425f12742",
"text": "Material recognition is an important subtask in computer vision. In this paper, we aim for the identification of material categories from a single image captured under unknown illumination and view conditions. Therefore, we use several features which cover various aspects of material appearance and perform supervised classification using Support Vector Machines. We demonstrate the feasibility of our approach by testing on the challenging Flickr Material Database. Based on this dataset, we also carry out a comparison to a previously published work [Liu et al., ”Exploring Features in a Bayesian Framework for Material Recognition”, CVPR 2010] which uses Bayesian inference and reaches a recognition rate of 44.6% on this dataset and represents the current state-of the-art. With our SVM approach we obtain 53.1% and hence, significantly outperform this approach.",
"title": ""
},
{
"docid": "620574da26151188171a91eb64de344d",
"text": "Major security issues for banking and financial institutions are Phishing. Phishing is a webpage attack, it pretends a customer web services using tactics and mimics from unauthorized persons or organization. It is an illegitimate act to steals user personal information such as bank details, social security numbers and credit card details, by showcasing itself as a truthful object, in the public network. When users provide confidential information, they are not aware of the fact that the websites they are using are phishing websites. This paper presents a technique for detecting phishing website attacks and also spotting phishing websites by combines source code and URL in the webpage. Keywords—Phishing, Website attacks, Source Code, URL.",
"title": ""
}
] | scidocsrr |
286ea8972c234744e1b70f8e9d9b0bed | A Novel Approach for Effective Recognition of the Code-Switched Data on Monolingual Language Model | [
{
"docid": "1d05fb1a3ca5e83659996fba154fb12e",
"text": "Code-switching is a very common phenomenon in multilingual communities. In this paper, we investigate language modeling for conversational Mandarin-English code-switching (CS) speech recognition. First, we investigate the prediction of code switches based on textual features with focus on Part-of-Speech (POS) tags and trigger words. Second, we propose a structure of recurrent neural networks to predict code-switches. We extend the networks by adding POS information to the input layer and by factorizing the output layer into languages. The resulting models are applied to our task of code-switching language modeling. The final performance shows 10.8% relative improvement in perplexity on the SEAME development set which transforms into a 2% relative improvement in terms of Mixed Error Rate and a relative improvement of 16.9% in perplexity on the evaluation set which leads to a 2.7% relative improvement of MER.",
"title": ""
},
{
"docid": "9df0cdd0273b19737de0591310131bff",
"text": "We present freely available open-source toolkit for training recurrent neural network based language models. I t can be easily used to improve existing speech recognition and ma chine translation systems. Also, it can be used as a baseline for fu ture research of advanced language modeling techniques. In the p a er, we discuss optimal parameter selection and different modes of functionality. The toolkit, example scripts and basic setups are freely available at http://rnnlm.sourceforge.net/. I. I NTRODUCTION, MOTIVATION AND GOALS Statistical language modeling attracts a lot of attention, as models of natural languages are important part of many practical systems today. Moreover, it can be estimated that with further research progress, language models will becom e closer to human understanding [1] [2], and completely new applications will become practically realizable. Immedia tely, any significant progress in language modeling can be utilize d in the esisting speech recognition and statistical machine translation systems. However, the whole research field struggled for decades to overcome very simple, but also effective models based on ngram frequencies [3] [4]. Many techniques were developed to beat n-grams, but the improvements came at the cost of computational complexity. Moreover, the improvements wer e often reported on very basic systems, and after application to state-of-the-art setups and comparison to n-gram models trained on large amounts of data, improvements provided by many techniques vanished. This has lead to scepticism among speech recognition researchers. In our previous work, we have compared many major advanced language modeling techniques, and found that neur al network based language models (NNLM) perform the best on several standard setups [5]. Models of this type were introduced by Bengio in [6], about ten years ago. Their main weaknesses were huge computational complexity, and nontrivial implementation. Successful training of neural net works require well chosen hyper-parameters, such as learning rat e and size of hidden layer. To help overcome these basic obstacles, we have decided to release our toolkit for training recurrent neural network b ased language models (RNNLM). We have shown that the recurrent architecture outperforms the feedforward one on several se tup in [7]. Moreover, the implemenation is simple and easy to understand. The most importantly, recurrent neural networ ks are very interesting from the research point of view, as they allow effective processing of sequences and patterns with arbitraty length these models can learn to store informati on in the hidden layer. Recurrent neural networks can have memory , and are thus important step forward to overcome the most painful and often criticized drawback of n-gram models dependence on previous two or three words only. In this paper we present an open source and freely available toolkit for training statistical language models base d or recurrent neural networks. It includes techniques for redu cing computational complexity (classes in the output layer and direct connections between input and output layer). Our too lkit has been designed to provide comparable results to the popul ar toolkit for training n-gram models, SRILM [8]. 
The main goals for the RNNLM toolkit are these: • promotion of research of advanced language modeling techniques • easy usage • simple portable code without any dependencies • computational efficiency In the paper, we describe how to easily make RNNLM part of almost any speech recognition or machine translation syste m that produces lattices. II. RECURRENTNEURAL NETWORK The recurrent neural network architecture used in the toolk it is shown at Figure 1 (usually called Elman network, or simple RNN). The input layer uses the 1-of-N representation of the previous wordw(t) concatenated with previous state of the hidden layers(t − 1). The neurons in the hidden layer s(t) use sigmoid activation function. The output layer (t) has the same dimensionality as w(t), and after the network is trained, it represents probability distribution of the next word giv en the previous word and state of the hidden layer in the previous time step [9]. The class layer c(t) can be optionally used to reduce computational complexity of the model, at a small cost of accuracy [7]. Training is performed by the standard stochastic gradient descent algorithm, and the matrix W that",
"title": ""
},
{
"docid": "f09733894d94052707ed768aea8d26e6",
"text": "The aim of this paper is to investigate the rules and constraints of code-switching (CS) in Hindi-English mixed language data. In this paper, we’ll discuss how we collected the mixed language corpus. This corpus is primarily made up of student interview speech. The speech was manually transcribed and verified by bilingual speakers of Hindi and English. The code-switching cases in the corpus are discussed and the reasons for code-switching are explained.",
"title": ""
}
] | [
{
"docid": "0ab14a40df6fe28785262d27a4f5b8ce",
"text": "State-of-the-art 3D shape classification and retrieval algorithms, hereinafter referred to as shape analysis, are often based on comparing signatures or descriptors that capture the main geometric and topological properties of 3D objects. None of the existing descriptors, however, achieve best performance on all shape classes. In this article, we explore, for the first time, the usage of covariance matrices of descriptors, instead of the descriptors themselves, in 3D shape analysis. Unlike histogram -based techniques, covariance-based 3D shape analysis enables the fusion and encoding of different types of features and modalities into a compact representation. Covariance matrices, however, are elements of the non-linear manifold of symmetric positive definite (SPD) matrices and thus \\BBL2 metrics are not suitable for their comparison and clustering. In this article, we study geodesic distances on the Riemannian manifold of SPD matrices and use them as metrics for 3D shape matching and recognition. We then: (1) introduce the concepts of bag of covariance (BoC) matrices and spatially-sensitive BoC as a generalization to the Riemannian manifold of SPD matrices of the traditional bag of features framework, and (2) generalize the standard kernel methods for supervised classification of 3D shapes to the space of covariance matrices. We evaluate the performance of the proposed BoC matrices framework and covariance -based kernel methods and demonstrate their superiority compared to their descriptor-based counterparts in various 3D shape matching, retrieval, and classification setups.",
"title": ""
},
{
"docid": "cce36b208b8266ddacc8baea18cd994b",
"text": "Shape from shading is known to be an ill-posed problem. We show in this paper that if we model the problem in a different way than it is usually done, more precisely by taking into account the 1/r/sup 2/ attenuation term of the illumination, shape from shading becomes completely well-posed. Thus the shading allows to recover (almost) any surface from only one image (of this surface) without any additional data (in particular, without the knowledge of the heights of the solution at the local intensity \"minima\", contrary to [P. Dupuis et al. (1994), E. Prados et al. (2004), B. Horn (1986), E. Rouy et al. (1992), R. Kimmel et al. (2001)]) and without regularity assumptions (contrary to [J. Oliensis et al. (1993), R. Kimmel et al. (1995)], for example). More precisely, we formulate the problem as that of solving a new partial differential equation (PDE), we develop a complete mathematical study of this equation and we design a new provably convergent numerical method. Finally, we present results of our new shape from shading method on various synthetic and real images.",
"title": ""
},
{
"docid": "3f6572916ac697188be30ef798acbbff",
"text": "The vector representation of Bengali words using word2vec model (Mikolov et al. (2013)) plays an important role in Bengali sentiment classification. It is observed that the words that are from same context stay closer in the vector space of word2vec model and they are more similar than other words. In this article, a new approach of sentiment classification of Bengali comments with word2vec and Sentiment extraction of words are presented. Combining the results of word2vec word co-occurrence score with the sentiment polarity score of the words, the accuracy obtained is 75.5%.",
"title": ""
},
{
"docid": "46291c5a7fafd089c7729f7bc77ae8b7",
"text": "This paper proposes a new system for offline writer identification and writer verification. The proposed method uses GMM supervectors to encode the feature distribution of individual writers. Each supervector originates from an individual GMM which has been adapted from a background model via a maximum-a-posteriori step followed by mixing the new statistics with the background model. We show that this approach improves the TOP-1 accuracy of the current best ranked methods evaluated at the ICDAR-2013 competition dataset from 95.1% [13] to 97.1%, and from 97.9% [11] to 99.2% at the CVL dataset, respectively. Additionally, we compare the GMM supervector encoding with other encoding schemes, namely Fisher vectors and Vectors of Locally Aggregated Descriptors.",
"title": ""
},
{
"docid": "5855428c40fd0e25e0d05554d2fc8864",
"text": "When the landmark patient Phineas Gage died in 1861, no autopsy was performed, but his skull was later recovered. The brain lesion that caused the profound personality changes for which his case became famous has been presumed to have involved the left frontal region, but questions have been raised about the involvement of other regions and about the exact placement of the lesion within the vast frontal territory. Measurements from Gage's skull and modern neuroimaging techniques were used to reconstitute the accident and determine the probable location of the lesion. The damage involved both left and right prefrontal cortices in a pattern that, as confirmed by Gage's modern counterparts, causes a defect in rational decision making and the processing of emotion.",
"title": ""
},
{
"docid": "56a52c6a6b1815daee9f65d8ffc2610e",
"text": "State of the art methods for image and object retrieval exploit both appearance (via visual words) and local geometry (spatial extent, relative pose). In large scale problems, memory becomes a limiting factor - local geometry is stored for each feature detected in each image and requires storage larger than the inverted file and term frequency and inverted document frequency weights together. We propose a novel method for learning discretized local geometry representation based on minimization of average reprojection error in the space of ellipses. The representation requires only 24 bits per feature without drop in performance. Additionally, we show that if the gravity vector assumption is used consistently from the feature description to spatial verification, it improves retrieval performance and decreases the memory footprint. The proposed method outperforms state of the art retrieval algorithms in a standard image retrieval benchmark.",
"title": ""
},
{
"docid": "adafa8a9f41878df975c239e592dc236",
"text": "Cognitive behavioral therapy (CBT) is one of the most effective psychotherapy modalities used to treat depression and anxiety disorders. Homework is an integral component of CBT, but homework compliance in CBT remains problematic in real-life practice. The popularization of the mobile phone with app capabilities (smartphone) presents a unique opportunity to enhance CBT homework compliance; however, there are no guidelines for designing mobile phone apps created for this purpose. Existing literature suggests 6 essential features of an optimal mobile app for maximizing CBT homework compliance: (1) therapy congruency, (2) fostering learning, (3) guiding therapy, (4) connection building, (5) emphasis on completion, and (6) population specificity. We expect that a well-designed mobile app incorporating these features should result in improved homework compliance and better outcomes for its users.",
"title": ""
},
{
"docid": "0bc403d33be9115e860cfe925ee8437a",
"text": "Orofacial analysis has been used by dentists for many years. The process involves applying mathematical rules, geometric principles, and straight lines to create either parallel or perpendicular references based on the true horizon and/or natural head position. These reference lines guide treatment planning and smile design for restorative treatments to achieve harmony between the new smile and the face. The goal is to obtain harmony and not symmetry. Faces are asymmetrical entities and because of that cannot be analyzed using purely straight lines. In this article, a more natural, organic, and dynamic process of evaluation is presented to minimize errors and generate harmoniously balanced smiles instead of perfect, mathematical smiles.",
"title": ""
},
{
"docid": "f27ad6bf5c65fdea1a98b118b1a43c85",
"text": "Localization is one of the problems that often appears in the world of robotics. Monte Carlo Localization (MCL) are the one of the popular algorithms in localization because easy to implement on issues Global Localization. This algorithm using particles to represent the robot position. MCL can simulated by Robot Operating System (ROS) using robot type is Pioneer3-dx. In this paper we will discuss about this algorithm on ROS, by analyzing the influence of the number particle that are used for localization of the actual robot position.",
"title": ""
},
{
"docid": "37a108b2d30a08cb78321f96c1e9eca4",
"text": "The TRAM flap, DIEP flap, and gluteal free flaps are routinely used for breast reconstruction. However, these have seldom been described for reconstruction of buttock deformities. We present three cases of free flaps used to restore significant buttock contour deformities. They introduce vascularised bulky tissue and provide adequate cushioning for future sitting, as well as correction of the aesthetic defect.",
"title": ""
},
{
"docid": "42e2aec24a5ab097b5fff3ec2fe0385d",
"text": "Online freelancing marketplaces have grown quickly in recent years. In theory, these sites offer workers the ability to earn money without the obligations and potential social biases associated with traditional employment frameworks. In this paper, we study whether two prominent online freelance marketplaces - TaskRabbit and Fiverr - are impacted by racial and gender bias. From these two platforms, we collect 13,500 worker profiles and gather information about workers' gender, race, customer reviews, ratings, and positions in search rankings. In both marketplaces, we find evidence of bias: we find that gender and race are significantly correlated with worker evaluations, which could harm the employment opportunities afforded to the workers. We hope that our study fuels more research on the presence and implications of discrimination in online environments.",
"title": ""
},
{
"docid": "390cb70c820d0ebefe936318f8668ac3",
"text": "BACKGROUND\nMandatory labeling of products with top allergens has improved food safety for consumers. Precautionary allergen labeling (PAL), such as \"may contain\" or \"manufactured on shared equipment,\" are voluntarily placed by the food industry.\n\n\nOBJECTIVE\nTo establish knowledge of PAL and its impact on purchasing habits by food-allergic consumers in North America.\n\n\nMETHODS\nFood Allergy Research & Education and Food Allergy Canada surveyed consumers in the United States and Canada on purchasing habits of food products featuring different types of PAL. Associations between respondents' purchasing behaviors and individual characteristics were estimated using multiple logistic regression.\n\n\nRESULTS\nOf 6684 participants, 84.3% (n = 5634) were caregivers of a food-allergic child and 22.4% had food allergy themselves. Seventy-one percent reported a history of experiencing a severe allergic reaction. Buying practices varied on the basis of PAL wording; 11% of respondents purchased food with \"may contain\" labeling, whereas 40% purchased food that used \"manufactured in a facility that also processes.\" Twenty-nine percent of respondents were unaware that the law requires labeling of priority food allergens. Forty-six percent were either unsure or incorrectly believed that PAL is required by law. Thirty-seven percent of respondents thought PAL was based on the amount of allergen present. History of a severe allergic reaction decreased the odds of purchasing foods with PAL.\n\n\nCONCLUSIONS\nAlmost half of consumers falsely believed that PAL was required by law. Up to 40% surveyed consumers purchased products with PAL. Understanding of PAL is poor, and improved awareness and guidelines are needed to help food-allergic consumers purchase food safely.",
"title": ""
},
{
"docid": "f97490dfe6b7d77870c3effbba14c204",
"text": "Mobile phones and carriers trust the traditional base stations which serve as the interface between the mobile devices and the fixed-line communication network. Femtocells, miniature cellular base stations installed in homes and businesses, are equally trusted yet are placed in possibly untrustworthy hands. By making several modifications to a commercially available femtocell, we evaluate the impact of attacks originating from a compromised device. We show that such a rogue device can violate all the important aspects of security for mobile subscribers, including tracking phones, intercepting communication and even modifying and impersonating traffic. The specification also enables femtocells to directly communicate with other femtocells over a VPN and the carrier we examined had no filtering on such communication, enabling a single rogue femtocell to directly communicate with (and thus potentially attack) all other femtocells within the carrier’s network.",
"title": ""
},
{
"docid": "01651546f9fb6c984e84cfd2d1702b8e",
"text": "There is increasing evidence for the involvement of glutamate-mediated neurotoxicity in the pathogenesis of Alzheimer's disease (AD). We suggest that glutamate receptors of the N-methyl-D-aspartate (NMDA) type are overactivated in a tonic rather than a phasic manner in this disorder. This continuous mild activation may lead to neuronal damage and impairment of synaptic plasticity (learning). It is likely that under such conditions Mg(2+) ions, which block NMDA receptors under normal resting conditions, can no longer do so. We found that overactivation of NMDA receptors using a direct agonist or a decrease in Mg(2+) concentration produced deficits in synaptic plasticity (in vivo: passive avoidance test and/or in vitro: LTP in the CA1 region). In both cases, memantine-an uncompetitive NMDA receptor antagonists with features of an 'improved' Mg(2+) (voltage-dependency, kinetics, affinity)-attenuated this deficit. Synaptic plasticity was restored by therapeutically-relevant concentrations of memantine (1 microM). Moreover, doses leading to similar brain/serum levels provided neuroprotection in animal models relevant for neurodegeneration in AD such as neurotoxicity produced by inflammation in the NBM or beta-amyloid injection to the hippocampus. As such, if overactivation of NMDA receptors is present in AD, memantine would be expected to improve both symptoms (cognition) and to slow down disease progression because it takes over the physiological function of magnesium.",
"title": ""
},
{
"docid": "d9bbe52033912f29c98ef620e70f1cb1",
"text": "Low-cost hardware platforms for biomedical engineering are becoming increasingly available, which empower the research community in the development of new projects in a wide range of areas related with physiological data acquisition. Building upon previous work by our group, this work compares the quality of the data acquired by means of two different versions of the multimodal physiological computing platform BITalino, with a device that can be considered a reference. We acquired data from 5 sensors, namely Accelerometry (ACC), Electrocardiography (ECG), Electroencephalography (EEG), Electrodermal Activity (EDA) and Electromyography (EMG). Experimental evaluation shows that ACC, ECG and EDA data are highly correlated with the reference in what concerns the raw waveforms. When compared by means of their commonly used features, EEG and EMG data are also quite similar across the different devices.",
"title": ""
},
{
"docid": "a966216fd4fc3a93e50dbbb1be84e908",
"text": "Extracting temporal information from raw text is fundamental for deep language understanding, and key to many applications like question answering, information extraction, and document summarization. Our long-term goal is to build complete temporal structure of documents and use the temporal structure in other applications like textual entailment, question answering, visualization, or others. In this paper, we present a first step, a system for extracting events, event features, main events, temporal expressions and their normalized values from raw text. Our system is a combination of deep semantic parsing with extraction rules, Markov Logic Network classifiers and Conditional Random Field classifiers. To compare with existing systems, we evaluated our system on the TempEval 1 and TempEval 2 corpus. Our system outperforms or performs competitively with existing systems that evaluate on the TimeBank, TempEval 1 and TempEval 2 corpus and our performance is very close to inter-annotator agreement of the TimeBank annotators.",
"title": ""
},
{
"docid": "826fd1fbf5fc5e72ed4c2a1cdce00dec",
"text": "In this paper, we design a fast MapReduce algorithm for Monte Carlo approximation of personalized PageRank vectors of all the nodes in a graph. The basic idea is very efficiently doing single random walks of a given length starting at each node in the graph. More precisely, we design a MapReduce algorithm, which given a graph G and a length », outputs a single random walk of length » starting at each node in G. We will show that the number of MapReduce iterations used by our algorithm is optimal among a broad family of algorithms for the problem, and its I/O efficiency is much better than the existing candidates. We will then show how we can use this algorithm to very efficiently approximate all the personalized PageRank vectors. Our empirical evaluation on real-life graph data and in production MapReduce environment shows that our algorithm is significantly more efficient than all the existing algorithms in the MapReduce setting.",
"title": ""
},
{
"docid": "e48941f23ee19ec4b26c4de409a84fe2",
"text": "Object recognition is challenging especially when the objects from different categories are visually similar to each other. In this paper, we present a novel joint dictionary learning (JDL) algorithm to exploit the visual correlation within a group of visually similar object categories for dictionary learning where a commonly shared dictionary and multiple category-specific dictionaries are accordingly modeled. To enhance the discrimination of the dictionaries, the dictionary learning problem is formulated as a joint optimization by adding a discriminative term on the principle of the Fisher discrimination criterion. As well as presenting the JDL model, a classification scheme is developed to better take advantage of the multiple dictionaries that have been trained. The effectiveness of the proposed algorithm has been evaluated on popular visual benchmarks.",
"title": ""
},
{
"docid": "fc25adc42c7e4267a9adfe13ddcabf75",
"text": "As automotive electronics have increased, models for predicting the transmission characteristics of wiring harnesses, suitable for the automotive EMC tests, are needed. In this paper, the repetitive structures of the cross-sectional shape of the twisted pair cable is focused on. By taking account of RLGC parameters, a theoretical analysis modeling for whole cables, based on multi-conductor transmission line theory, is proposed. Furthermore, the theoretical values are compared with measured values and a full-wave simulator. In case that a twisted pitch, a length of the cable, and a height of reference ground plane are changed, the validity of the proposed model is confirmed.",
"title": ""
}
] | scidocsrr |
18aea5129e8608abd1d5fd6b2c9d7a71 | A Framework for Clustering Uncertain Data | [
{
"docid": "f5168565306f6e7f2b36ef797a6c9de8",
"text": "We study the problem of clustering data objects whose locations are uncertain. A data object is represented by an uncertainty region over which a probability density function (pdf) is defined. One method to cluster uncertain objects of this sort is to apply the UK-means algorithm, which is based on the traditional K-means algorithm. In UK-means, an object is assigned to the cluster whose representative has the smallest expected distance to the object. For arbitrary pdf, calculating the expected distance between an object and a cluster representative requires expensive integration computation. We study various pruning methods to avoid such expensive expected distance calculation.",
"title": ""
},
{
"docid": "5f1f7847600207d1216384f8507be63b",
"text": "This paper introduces U-relations, a succinct and purely relational representation system for uncertain databases. U-relations support attribute-level uncertainty using vertical partitioning. If we consider positive relational algebra extended by an operation for computing possible answers, a query on the logical level can be translated into, and evaluated as, a single relational algebra query on the U-relational representation. The translation scheme essentially preserves the size of the query in terms of number of operations and, in particular, number of joins. Standard techniques employed in off-the-shelf relational database management systems are effective for optimizing and processing queries on U-relations. In our experiments we show that query evaluation on U-relations scales to large amounts of data with high degrees of uncertainty.",
"title": ""
}
] | [
{
"docid": "72a1798a864b4514d954e1e9b6089ad8",
"text": "Clustering image pixels is an important image segmentation technique. While a large amount of clustering algorithms have been published and some of them generate impressive clustering results, their performance often depends heavily on user-specified parameters. This may be a problem in the practical tasks of data clustering and image segmentation. In order to remove the dependence of clustering results on user-specified parameters, we investigate the characteristics of existing clustering algorithms and present a parameter-free algorithm based on the DSets (dominant sets) and DBSCAN (Density-Based Spatial Clustering of Applications with Noise) algorithms. First, we apply histogram equalization to the pairwise similarity matrix of input data and make DSets clustering results independent of user-specified parameters. Then, we extend the clusters from DSets with DBSCAN, where the input parameters are determined based on the clusters from DSets automatically. By merging the merits of DSets and DBSCAN, our algorithm is able to generate the clusters of arbitrary shapes without any parameter input. In both the data clustering and image segmentation experiments, our parameter-free algorithm performs better than or comparably with other algorithms with careful parameter tuning.",
"title": ""
},
{
"docid": "24e10d8e12d8b3c618f88f1f0d33985d",
"text": "W -algebras of finite type are certain finitely generated associative algebras closely related to universal enveloping algebras of semisimple Lie algebras. In this paper we prove a conjecture of Premet that gives an almost complete classification of finite dimensional irreducible modules for W -algebras. Also we get some partial results towards a conjecture by Ginzburg on their finite dimensional bimodules.",
"title": ""
},
{
"docid": "857efb4909ada73ca849acf24d6e74db",
"text": "Owing to inevitable thermal/moisture instability for organic–inorganic hybrid perovskites, pure inorganic perovskite cesium lead halides with both inherent stability and prominent photovoltaic performance have become research hotspots as a promising candidate for commercial perovskite solar cells. However, it is still a serious challenge to synthesize desired cubic cesium lead iodides (CsPbI3) with superior photovoltaic performance for its thermodynamically metastable characteristics. Herein, polymer poly-vinylpyrrolidone (PVP)-induced surface passivation engineering is reported to synthesize extra-long-term stable cubic CsPbI3. It is revealed that acylamino groups of PVP induce electron cloud density enhancement on the surface of CsPbI3, thus lowering surface energy, conducive to stabilize cubic CsPbI3 even in micrometer scale. The cubic-CsPbI3 PSCs exhibit extra-long carrier diffusion length (over 1.5 μm), highest power conversion efficiency of 10.74% and excellent thermal/moisture stability. This result provides important progress towards understanding of phase stability in realization of large-scale preparations of efficient and stable inorganic PSCs. Inorganic cesium lead iodide perovskite is inherently more stable than the hybrid perovskites but it undergoes phase transition that degrades the solar cell performance. Here Li et al. stabilize it with poly-vinylpyrrolidone and obtain high efficiency of 10.74% with excellent thermal and moisture stability.",
"title": ""
},
{
"docid": "5517c8f35c8e9df2994afc12d5cb928f",
"text": "Glomus tumors of the penis are extremely rare. A patient with multiple regional glomus tumors involving the penis is reported. A 16-year-old boy presented with the complaint of painless penile masses and resection of the lesions was performed. The pathologic diagnosis was glomus tumor of the penis. This is the ninth case of glomus tumor of the penis to be reported in the literature.",
"title": ""
},
{
"docid": "fd652333e274b25440767de985702111",
"text": "The global gold market has recently attracted a lot of attention and the price of gold is relatively higher than its historical trend. For mining companies to mitigate risk and uncertainty in gold price fluctuations, make hedging, future investment and evaluation decisions, depend on forecasting future price trends. The first section of this paper reviews the world gold market and the historical trend of gold prices from January 1968 to December 2008. This is followed by an investigation into the relationship between gold price and other key influencing variables, such as oil price and global inflation over the last 40 years. The second section applies a modified econometric version of the longterm trend reverting jump and dip diffusion model for forecasting natural-resource commodity prices. This method addresses the deficiencies of previous models, such as jumps and dips as parameters and unit root test for long-term trends. The model proposes that historical data of mineral commodities have three terms to demonstrate fluctuation of prices: a long-term trend reversion component, a diffusion component and a jump or dip component. The model calculates each term individually to estimate future prices of mineral commodities. The study validates the model and estimates the gold price for the next 10 years, based on monthly historical data of nominal gold price. & 2010 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "080032ded41edee2a26320e3b2afb123",
"text": "The aim of this study was to evaluate the effects of calisthenic exercises on psychological status in patients with ankylosing spondylitis (AS) and multiple sclerosis (MS). This study comprised 40 patients diagnosed with AS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based) and 40 patients diagnosed with MS randomized into two exercise groups (group 1 = hospital-based, group 2 = home-based). The exercise programme was completed by 73 participants (hospital-based = 34, home-based = 39). Mean age was 33.75 ± 5.77 years. After the 8-week exercise programme in the AS group, the home-based exercise group showed significant improvements in erythrocyte sedimentation rates (ESR). The hospital-based exercise group showed significant improvements in terms of the Bath AS Metrology Index (BASMI) and Hospital Anxiety and Depression Scale-Anxiety (HADS-A) scores. After the 8-week exercise programme in the MS group, the home-based and hospital-based exercise groups showed significant improvements in terms of the 10-m walking test, Berg Balance Scale (BBS), HADS-A, and MS international Quality of Life (MusiQoL) scores. There was a significant improvement in the hospital-based and a significant deterioration in the home-based MS patients according to HADS-Depression (HADS-D) score. The positive effects of exercises on neurologic and rheumatic chronic inflammatory processes associated with disability should not be underestimated. Ziel der vorliegenden Studie war die Untersuchung der Wirkungen von gymnastischen Übungen auf die psychische Verfassung von Patienten mit Spondylitis ankylosans (AS) und multipler Sklerose (MS). Die Studie umfasste 40 Patienten mit der Diagnose AS, die randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant), und 40 Patienten mit der Diagnose MS, die ebenfalls randomisiert in 2 Übungsgruppen aufgeteilt wurden (Gruppe 1: stationär, Gruppe 2: ambulant). Vollständig absolviert wurde das Übungsprogramm von 73 Patienten (stationär: 34, ambulant: 39). Das Durchschnittsalter betrug 33,75 ± 5,77 Jahre. Nach dem 8-wöchigen Übungsprogramm in der AS-Gruppe zeigten sich bei der ambulanten Übungsgruppe signifikante Verbesserungen bei der Blutsenkungsgeschwindigkeit (BSG). Die stationäre Übungsgruppe wies signifikante Verbesserungen in Bezug auf den BASMI-Score (Bath AS Metrology Index) und den HADS-A-Score (Hospital Anxiety and Depression Scale-Anxiety) auf. Nach dem 8-wöchigen Übungsprogramm in der MS-Gruppe zeigten sich sowohl in der ambulanten als auch in der stationären Übungsgruppe signifikante Verbesserungen hinsichtlich des 10-m-Gehtests, des BBS-Ergebnisses (Berg Balance Scale), des HADS-A- sowie des MusiQoL-Scores (MS international Quality of Life). Beim HADS-D-Score (HADS-Depression) bestand eine signifikante Verbesserung bei den stationären und eine signifikante Verschlechterung bei den ambulanten MS-Patienten. Die positiven Wirkungen von gymnastischen Übungen auf neurologische und rheumatische chronisch entzündliche Prozesse mit Behinderung sollten nicht unterschätzt werden.",
"title": ""
},
{
"docid": "66474114bf431f3ee6973ad6469565b2",
"text": "Fault analysis in solar photovoltaic (PV) arrays is a fundamental task to protect PV modules from damage and to eliminate risks of safety hazards. This paper focuses on line–line faults in PV arrays that may be caused by short-circuit faults or double ground faults. The effect on fault current from a maximum-power-point tracking of a PV inverter is discussed and shown to, at times, prevent overcurrent protection devices (OCPDs) to operate properly. Furthermore, fault behavior of PV arrays is highly related to the fault location, fault impedance, irradiance level, and use of blocking diodes. Particularly, this paper examines the challenges to OCPD in a PV array brought by unique faults: One is a fault that occurs under low-irradiance conditions, and the other is a fault that occurs at night and evolves during “night-to-day” transition. In both circumstances, the faults might remain hidden in the PV system, no matter how irradiance changes afterward. These unique faults may subsequently lead to unexpected safety hazards, reduced system efficiency, and reduced reliability. A small-scale experimental PV system has been developed to further validate the conclusions.",
"title": ""
},
{
"docid": "220532757b4a47422b5685577f7f4662",
"text": "In many sequential decision-making problems one is interested in minimizing an expected cumulative cost while taking into account risk, i.e., increased awareness of events of small probability and high consequences. Accordingly, the objective of this paper is to present efficient reinforcement learning algorithms for risk-constrained Markov decision processes (MDPs), where risk is represented via a chance constraint or a constraint on the conditional value-at-risk (CVaR) of the cumulative cost. We collectively refer to such problems as percentile risk-constrained MDPs. Specifically, we first derive a formula for computing the gradient of the Lagrangian function for percentile riskconstrained MDPs. Then, we devise policy gradient and actor-critic algorithms that (1) estimate such gradient, (2) update the policy in the descent direction, and (3) update the Lagrange multiplier in the ascent direction. For these algorithms we prove convergence to locally optimal policies. Finally, we demonstrate the effectiveness of our algorithms in an optimal stopping problem and an online marketing application.",
"title": ""
},
{
"docid": "b15dc135eda3a7c60565142ba7a6ae37",
"text": "We propose a mechanism to reconstruct part annotated 3D point clouds of objects given just a single input image. We demonstrate that jointly training for both reconstruction and segmentation leads to improved performance in both the tasks, when compared to training for each task individually. The key idea is to propagate information from each task so as to aid the other during the training procedure. Towards this end, we introduce a location-aware segmentation loss in the training regime. We empirically show the effectiveness of the proposed loss in generating more faithful part reconstructions while also improving segmentation accuracy. We thoroughly evaluate the proposed approach on different object categories from the ShapeNet dataset to obtain improved results in reconstruction as well as segmentation. Codes are available at https://github.com/val-iisc/3d-psrnet.",
"title": ""
},
{
"docid": "b7f4ad07e6d116df196da9c5be5d2fe8",
"text": "An ego-motion estimation method based on the spatial and Doppler information obtained by an automotive radar is proposed. The estimation of the motion state vector is performed in a density-based framework. Compared to standard vehicle odometry the approach is capable to estimate the full two dimensional motion state with three degrees of freedom. The measurement of a Doppler radar sensor is represented as a mixture of Gaussians. This mixture is matched with the mixture of a previous measurement by applying the appropriate egomotion transformation. The parameters of the transformation are found by the optimization of a suitable join metric. Due to the Doppler information the method is very robust against disturbances by moving objects and clutter. It provides excellent results for highly nonlinear movements. Real world results of the proposed method are presented. The measurements are obtained by a 77GHz radar sensor mounted on a test vehicle. A comparison using a high-precision inertial measurement unit with differential GPS support is made. The results show a high accuracy in velocity and yaw-rate estimation.",
"title": ""
},
{
"docid": "51d29ec1313df001efc78397cf1d4aaa",
"text": "Numerous studies have established that aggregating judgments or predictions across individuals can be surprisingly accurate in a variety of domains, including prediction markets, political polls, game shows, and forecasting (see Surowiecki, 2004). Under Galton’s (1907) conditions of individuals having largely unbiased and independent judgments, the aggregated judgment of a group of individuals is uncontroversially better, on average, than the individual judgments themselves (e.g., Armstrong, 2001; Clemen, 1989; Galton, 1907; Surowiecki, 2004; Winkler, 1971). The boundary conditions of crowd wisdom, however, are not as well-understood. For example, when group members are allowed access to other members’ predictions, as opposed to making them independently, their predictions become more positively correlated and the crowd’s performance can diminish (Lorenz, Rauhut, Schweitzer, & Helbing, 2011). In the context of handicapping sports results, individuals have been found to make systematically biased predictions, so that their aggregated judgments may not be wise (Simmons, Nelson, Galak, & Frederick, 2011). How robust is crowd wisdom to factors such as non-independence and bias of crowd members’ judgments? If the conditions for crowd wisdom are less than ideal, is it better to aggregate judgments or, for instance, rely on a skilled individual judge? Would it be better to add a highly skilled crowd member or a less skilled one who makes systematically different predictions than other members, increasing diversity? We provide a simple, precise definition of the wisdom-of-the-crowd effect and a systematic way to examine its boundary conditions. We define a crowd as wise if a linear aggregate of its members’ judgments of a criterion value has less expected squared error than the judgments of an individual sampled randomly, but not necessarily uniformly, from the crowd. Previous definitions of the wisdom of the crowd effect have largely focused on comparing the crowd’s accuracy to that of the average individual member (Larrick, Mannes, & Soll, 2012). Our definition generalizes prior approaches in a couple of ways. We consider crowds created by any linear aggregate, not just simple averaging. Second, our definition allows the comparison of the crowd to an individual selected according to a distribution that could reflect past individual performance, e.g., their skill, or other attributes. On the basis of our definition, we develop a framework for analyzing crowd wisdom that includes various aggregation and sampling rules. These rules include both weighting the aggregate and sampling the individual according to skill, where skill is operationalized as predictive validity, i.e., the correlation between a judge’s prediction and the criterion. Although the amount of the crowd’s wisdom the expected difference between individual error and crowd error is non-linear in the amount of bias and non-independence of the judgments, our results yield simple and general rules specifying when a simple average will be wise. While a simple average of the crowd is not always wise if individuals are not sampled uniformly at random, we show that there always exists some a priori aggregation rule that makes the crowd wise.",
"title": ""
},
{
"docid": "45ef4e4416a4cf20dec64f30ec584a7a",
"text": "Driving simulators play an important role in the development of new vehicles and advanced driver assistance devices. In fact, on the one hand, having a human driver on a driving simulator allows automotive OEMs to bridge the gap between virtual prototyping and on-road testing during the vehicle development phase. On the other hand, novel driver assistance systems (such as advanced accident avoidance systems) can be safely tested by having the driver operating the vehicle in a virtual, highly realistic environment, while being exposed to hazardous situations. In both applications, it is crucial to faithfully reproduce in the simulator the drivers perception of forces acting on the vehicle and its acceleration. The strategy used to operate the simulator platform within its limited working space to provide the driver with the most realistic perception goes under the name of motion cueing. In this paper we describe a novel approach to motion cueing design that is based on Model Predictive Control techniques. Two features characterize the algorithm, namely, the use of a detailed model of the human vestibular system and a predictive strategy based on the availability of a virtual driver. Differently from classical schemes based on washout filters, such features allows a better implementation of tilt coordination and to handle more efficiently the platform limits.",
"title": ""
},
{
"docid": "8f2cfb5cb55b093f67c1811aba8b87e2",
"text": "“You make what you measure” is a familiar mantra at datadriven companies. Accordingly, companies must be careful to choose North Star metrics that create a better product. Metrics fall into two general categories: direct count metrics such as total revenue and monthly active users, and nuanced quality metrics regarding value or other aspects of the user experience. Count metrics, when used exclusively as the North Star, might inform product decisions that harm user experience. Therefore, quality metrics play an important role in product development. We present a five-step framework for developing quality metrics using a combination of machine learning and product intuition. Machine learning ensures that the metric accurately captures user experience. Product intuition makes the metric interpretable and actionable. Through a case study of the Endorsements product at LinkedIn, we illustrate the danger of optimizing exclusively for count metrics, and showcase the successful application of our framework toward developing a quality metric. We show how the new quality metric has driven significant improvements toward creating a valuable, user-first product.",
"title": ""
},
{
"docid": "d7bd02def0f010016b53e2c41b42df35",
"text": "We utilise smart eyeglasses for dietary monitoring, in particular to sense food chewing. Our approach is based on a 3D-printed regular eyeglasses design that could accommodate processing electronics and Electromyography (EMG) electrodes. Electrode positioning was analysed and an optimal electrode placement at the temples was identified. We further compared gel and dry fabric electrodes. For the subsequent analysis, fabric electrodes were attached to the eyeglasses frame. The eyeglasses were used in a data recording study with eight participants eating different foods. Two chewing cycle detection methods and two food classification algorithms were compared. Detection rates for individual chewing cycles reached a precision and recall of 80%. For five foods, classification accuracy for individual chewing cycles varied between 43% and 71%. Majority voting across intake sequences improved accuracy, ranging between 63% and 84%. We concluded that EMG-based chewing analysis using smart eyeglasses can contribute essential chewing structure information to dietary monitoring systems, while the eyeglasses remain inconspicuous and thus could be continuously used.",
"title": ""
},
{
"docid": "78f8d28f4b20abbac3ad848033bb088b",
"text": "Many real-world applications involve multilabel classification, in which the labels are organized in the form of a tree or directed acyclic graph (DAG). However, current research efforts typically ignore the label dependencies or can only exploit the dependencies in tree-structured hierarchies. In this paper, we present a novel hierarchical multilabel classification algorithm which can be used on both treeand DAG-structured hierarchies. The key idea is to formulate the search for the optimal consistent multi-label as the finding of the best subgraph in a tree/DAG. Using a simple greedy strategy, the proposed algorithm is computationally efficient, easy to implement, does not suffer from the problem of insufficient/skewed training data in classifier training, and can be readily used on large hierarchies. Theoretical results guarantee the optimality of the obtained solution. Experiments are performed on a large number of functional genomics data sets. The proposed method consistently outperforms the state-of-the-art method on both treeand DAG-structured hierarchies.",
"title": ""
},
{
"docid": "231d8ef95d02889d70000d70d8743004",
"text": "Last decade witnessed a lot of research in the field of sentiment analysis. Understanding the attitude and the emotions that people express in written text proved to be really important and helpful in sociology, political science, psychology, market research, and, of course, artificial intelligence. This paper demonstrates a rule-based approach to clause-level sentiment analysis of reviews in Ukrainian. The general architecture of the implemented sentiment analysis system is presented, the current stage of research is described and further work is explained. The main emphasis is made on the design of rules for computing sentiments.",
"title": ""
},
{
"docid": "c28ee3a41d05654eedfd379baf2d5f24",
"text": "The problem of classifying subjects into disease categories is of common occurrence in medical research. Machine learning tools such as Artificial Neural Network (ANN), Support Vector Machine (SVM) and Logistic Regression (LR) and Fisher’s Linear Discriminant Analysis (LDA) are widely used in the areas of prediction and classification. The main objective of these competing classification strategies is to predict a dichotomous outcome (e.g. disease/healthy) based on several features.",
"title": ""
},
{
"docid": "fee504e2184570e80956ff1c8a4ec83c",
"text": "The use of computed tomography (CT) in clinical practice has been increasing rapidly, with the number of CT examinations performed in adults and children rising by 10% per year in England. Because the radiology community strives to reduce the radiation dose associated with pediatric examinations, external factors, including guidelines for pediatric head injury, are raising expectations for use of cranial CT in the pediatric population. Thus, radiologists are increasingly likely to encounter pediatric head CT examinations in daily practice. The variable appearance of cranial sutures at different ages can be confusing for inexperienced readers of radiologic images. The evolution of multidetector CT with thin-section acquisition increases the clarity of some of these sutures, which may be misinterpreted as fractures. Familiarity with the normal anatomy of the pediatric skull, how it changes with age, and normal variants can assist in translating the increased resolution of multidetector CT into more accurate detection of fractures and confident determination of normality, thereby reducing prolonged hospitalization of children with normal developmental structures that have been misinterpreted as fractures. More important, the potential morbidity and mortality related to false-negative interpretation of fractures as normal sutures may be avoided. The authors describe the normal anatomy of all standard pediatric sutures, common variants, and sutural mimics, thereby providing an accurate and safe framework for CT evaluation of skull trauma in pediatric patients.",
"title": ""
},
{
"docid": "f94385118e9fca123bae28093b288723",
"text": "One of the major restrictions on the performance of videobased person re-id is partial noise caused by occlusion, blur and illumination. Since different spatial regions of a single frame have various quality, and the quality of the same region also varies across frames in a tracklet, a good way to address the problem is to effectively aggregate complementary information from all frames in a sequence, using better regions from other frames to compensate the influence of an image region with poor quality. To achieve this, we propose a novel Region-based Quality Estimation Network (RQEN), in which an ingenious training mechanism enables the effective learning to extract the complementary region-based information between different frames. Compared with other feature extraction methods, we achieved comparable results of 92.4%, 76.1% and 77.83% on the PRID 2011, iLIDS-VID and MARS, respectively. In addition, to alleviate the lack of clean large-scale person re-id datasets for the community, this paper also contributes a new high-quality dataset, named “Labeled Pedestrian in the Wild (LPW)” which contains 7,694 tracklets with over 590,000 images. Despite its relatively large scale, the annotations also possess high cleanliness. Moreover, it’s more challenging in the following aspects: the age of characters varies from childhood to elderhood; the postures of people are diverse, including running and cycling in addition to the normal walking state.",
"title": ""
},
{
"docid": "78b874393739daa623724efad75cb97d",
"text": "Building curious machines that can answer as well as ask questions is an important challenge for AI. The two tasks of question answering and question generation are usually tackled separately in the NLP literature. At the same time, both require significant amounts of supervised data which is hard to obtain in many domains. To alleviate these issues, we propose a self-training method for jointly learning to ask as well as answer questions, leveraging unlabeled text along with labeled question answer pairs for learning. We evaluate our approach on four benchmark datasets: SQUAD, MS MARCO, WikiQA and TrecQA, and show significant improvements over a number of established baselines on both question answering and question generation tasks. We also achieved new state-of-the-art results on two competitive answer sentence selection tasks: WikiQA and TrecQA.",
"title": ""
}
] | scidocsrr |
b4fdf378ed0e152b0ad8c7e77967f38f | Towards intelligent lower limb wearable robots: Challenges and perspectives - State of the art | [
{
"docid": "b2199b7be543f0f287e0cbdb7a477843",
"text": "We developed a pneumatically powered orthosis for the human ankle joint. The orthosis consisted of a carbon fiber shell, hinge joint, and two artificial pneumatic muscles. One artificial pneumatic muscle provided plantar flexion torque and the second one provided dorsiflexion torque. Computer software adjusted air pressure in each artificial muscle independently so that artificial muscle force was proportional to rectified low-pass-filtered electromyography (EMG) amplitude (i.e., proportional myoelectric control). Tibialis anterior EMG activated the artificial dorsiflexor and soleus EMG activated the artificial plantar flexor. We collected joint kinematic and artificial muscle force data as one healthy participant walked on a treadmill with the orthosis. Peak plantar flexor torque provided by the orthosis was 70 Nm, and peak dorsiflexor torque provided by the orthosis was 38 Nm. The orthosis could be useful for basic science studies on human locomotion or possibly for gait rehabilitation after neurological injury.",
"title": ""
},
{
"docid": "69b1c87a06b1d83fd00d9764cdadc2e9",
"text": "Sarcos Research Corporation, and the Center for Engineering Design at the University of Utah, have long been interested in both the fundamental and the applied aspects of robots and other computationally driven machines. We have produced substantial numbers of systems that function as products for commercial applications, and as advanced research tools specifically designed for experimental",
"title": ""
}
] | [
{
"docid": "38c78be386aa3827f39825f9e40aa3cc",
"text": "Back Side Illumination (BSI) CMOS image sensors with two-layer photo detectors (2LPDs) have been fabricated and evaluated. The test pixel array has green pixels (2.2um x 2.2um) and a magenta pixel (2.2um x 4.4um). The green pixel has a single-layer photo detector (1LPD). The magenta pixel has a 2LPD and a vertical charge transfer (VCT) path to contact a back side photo detector. The 2LPD and the VCT were implemented by high-energy ion implantation from the circuit side. Measured spectral response curves from the 2LPDs fitted well with those estimated based on lightabsorption theory for Silicon detectors. Our measurement results show that the keys to realize the 2LPD in BSI are; (1) the reduction of crosstalk to the VCT from adjacent pixels and (2) controlling the backside photo detector thickness variance to reduce color signal variations.",
"title": ""
},
{
"docid": "88077fe7ce2ad4a3c3052a988f9f96c1",
"text": "When collecting patient-level resource use data for statistical analysis, for some patients and in some categories of resource use, the required count will not be observed. Although this problem must arise in most reported economic evaluations containing patient-level data, it is rare for authors to detail how the problem was overcome. Statistical packages may default to handling missing data through a so-called 'complete case analysis', while some recent cost-analyses have appeared to favour an 'available case' approach. Both of these methods are problematic: complete case analysis is inefficient and is likely to be biased; available case analysis, by employing different numbers of observations for each resource use item, generates severe problems for standard statistical inference. Instead we explore imputation methods for generating 'replacement' values for missing data that will permit complete case analysis using the whole data set and we illustrate these methods using two data sets that had incomplete resource use information.",
"title": ""
},
{
"docid": "80de1fba41f93953ea21a517065f8ca8",
"text": "This paper presents the kinematic calibration of a novel 7-degree-of-freedom (DOF) cable-driven robotic arm (CDRA), aimed at improving its absolute positioning accuracy. This CDRA consists of three 'self-calibrated' cable-driven parallel mechanism (CDPM) modules. In order to account for any kinematic errors that might arise when assembling the individual CDPMs, a calibration model is formulated based on the local product-of-exponential formula and the measurement residues in the tool-tip frame poses. An iterative least-squares algorithm is employed to identify the errors in the fixed transformation frames of the sequentially assembled 'self- calibrated' CDPM modules. Both computer simulations and experimental studies were carried out to verify the robustness and effectiveness of the proposed calibration algorithm. From the experimental studies, errors in the fixed kinematic transformation frames were precisely recovered after a minimum of 15 pose measurements.",
"title": ""
},
{
"docid": "8bed049baa03a11867b0205e16402d0e",
"text": "The paper investigates potential bias in awards of player disciplinary sanctions, in the form of cautions (yellow cards) and dismissals (red cards) by referees in the English Premier League and the German Bundesliga. Previous studies of behaviour of soccer referees have not adequately incorporated within-game information.Descriptive statistics from our samples clearly show that home teams receive fewer yellow and red cards than away teams. These differences may be wrongly interpreted as evidence of bias where the modeller has failed to include withingame events such as goals scored and recent cards issued.What appears as referee favouritism may actually be excessive and illegal aggressive behaviour by players in teams that are behind in score. We deal with these issues by using a minute-by-minute bivariate probit analysis of yellow and red cards issued in games over six seasons in the two leagues. The significance of a variable to denote the difference in score at the time of sanction suggests that foul play that is induced by a losing position is an important influence on the award of yellow and red cards. Controlling for various pre-game and within-game variables, we find evidence that is indicative of home team favouritism induced by crowd pressure: in Germany home teams with running tracks in their stadia attract more yellow and red cards than teams playing in stadia with less distance between the crowd and the pitch. Separating the competing teams in matches by favourite and underdog status, as perceived by the betting market, yields further evidence, this time for both leagues, that the source of home teams receiving fewer cards is not just that they are disproportionately often the favoured team and disproportionately ahead in score.Thus there is evidence that is consistent with pure referee bias in relative treatments of home and away teams.",
"title": ""
},
{
"docid": "e754c7c7821703ad298d591a3f7a3105",
"text": "The rapid growth in the population density in urban cities and the advancement in technology demands real-time provision of services and infrastructure. Citizens, especially travelers, want to be reached within time to the destination. Consequently, they require to be facilitated with smart and real-time traffic information depending on the current traffic scenario. Therefore, in this paper, we proposed a graph-oriented mechanism to achieve the smart transportation system in the city. We proposed to deploy road sensors to get the overall traffic information as well as the vehicular network to obtain location and speed information of the individual vehicle. These Internet of Things (IoT) based networks generate enormous volume of data, termed as Big Data, depicting the traffic information of the city. To process incoming Big Data from IoT devices, then generating big graphs from the data, and processing them, we proposed an efficient architecture that uses the Giraph tool with parallel processing servers to achieve real-time efficiency. Later, various graph algorithms are used to achieve smart transportation by making real-time intelligent decisions to facilitate the citizens as well as the metropolitan authorities. Vehicular Datasets from various reliable resources representing the real city traffic are used for analysis and evaluation purpose. The system is implemented using Giraph and Spark tool at the top of the Hadoop parallel nodes to generate and process graphs with near real-time. Moreover, the system is evaluated in terms of efficiency by considering the system throughput and processing time. The results show that the proposed system is more scalable and efficient.",
"title": ""
},
{
"docid": "96055f0e41d62dc0ef318772fa6d6d9f",
"text": "Building Information Modeling (BIM) has rapidly grown from merely being a three-dimensional (3D) model of a facility to serving as “a shared knowledge resource for information about a facility, forming a reliable basis for decisions during its life cycle from inception onward” [1]. BIM with three primary spatial dimensions (width, height, and depth) becomes 4D BIM when time (construction scheduling information) is added, and 5D BIM when cost information is added to it. Although the sixth dimension of the 6D BIM is often attributed to asset information useful for Facility Management (FM) processes, there is no agreement in the research literature on what each dimension represents beyond the fifth dimension [2]. BIM ultimately seeks to digitize the different stages of a building lifecycle such as planning, design, construction, and operation such that consistent digital information of a building project can be used by stakeholders throughout the building life-cycle [3]. The United States National Building Information Model Standard (NBIMS) initially characterized BIMs as digital representations of physical and functional aspects of a facility. But, in the most recent version released in July 2015, the NBIMS’ definition of BIM includes three separate but linked functions, namely business process, digital representation, and organization and control [4]. A number of national-level initiatives are underway in various countries to formally encourage the adoption of BIM technologies in the Architecture, Engineering, and Construction (AEC) and FM industries. Building SMART, with 18 chapters across the globe, including USA, UK, Australasia, etc., was established in 1995 with the aim of developing and driving the active use of open internationally-recognized standards to support the wider adoption of BIM across the building and infrastructure sectors [5]. The UK BIM Task Group, with experts from industry, government, public sector, institutes, and academia, is committed to facilitate the implementation of ‘collaborative 3D BIM’, a UK Government Construction Strategy initiative [6]. Similarly, the EUBIM Task Group was started with a vision to foster the common use of BIM in public works and produce a handbook containing the common BIM principles, guidance and practices for public contracting entities and policy makers [7].",
"title": ""
},
{
"docid": "13cfc33bd8611b3baaa9be37ea9d627e",
"text": "Some of the more difficult to define aspects of the therapeutic process (empathy, compassion, presence) remain some of the most important. Teaching them presents a challenge for therapist trainees and educators alike. In this study, we examine our beginning practicum students' experience of learning mindfulness meditation as a way to help them develop therapeutic presence. Through thematic analysis of their journal entries a variety of themes emerged, including the effects of meditation practice, the ability to be present, balancing being and doing modes in therapy, and the development of acceptance and compassion for themselves and for their clients. Our findings suggest that mindfulness meditation may be a useful addition to clinical training.",
"title": ""
},
{
"docid": "f0d3ab8a530d7634149a5c29fa8bfe1b",
"text": "In this paper, a novel broadband dual-polarized (slant ±45°) base station antenna element operating at 790–960 MHz is proposed. The antenna element consists of two pairs of symmetrical dipoles, four couples of baluns, a cricoid pedestal and two kinds of plastic fasteners. Specific shape metal reflector is also designed to achieve stable radiation pattern and high front-to-back ratio (FBR). All the simulated and measured results show that the proposed antenna element has wide impedance bandwidth (about 19.4%), low voltage standing wave ratio (VSWR < 1.4) and high port to port isolation (S21 < −25 dB) at the whole operating frequency band. Stable horizontal half-power beam width (HPBW) with 65°±4.83° and high gain (> 9.66 dBi) are also achieved. The proposed antenna element fabricated by integrated metal casting technology has great mechanical properties such as compact structure, low profile, good stability, light weight and easy to fabricate. Due to its good electrical and mechanical characteristics, the antenna element is suitable for European Digital Dividend, CDMA800 and GSM900 bands in base station antenna of modern mobile communication.",
"title": ""
},
{
"docid": "60d6869cadebea71ef549bb2a7d7e5c3",
"text": "BACKGROUND\nAcne is a common condition seen in up to 80% of people between 11 and 30 years of age and in up to 5% of older adults. In some patients, it can result in permanent scars that are surprisingly difficult to treat. A relatively new treatment, termed skin needling (needle dermabrasion), seems to be appropriate for the treatment of rolling scars in acne.\n\n\nAIM\nTo confirm the usefulness of skin needling in acne scarring treatment.\n\n\nMETHODS\nThe present study was conducted from September 2007 to March 2008 at the Department of Systemic Pathology, University of Naples Federico II and the UOC Dermatology Unit, University of Rome La Sapienza. In total, 32 patients (20 female, 12 male patients; age range 17-45) with acne rolling scars were enrolled. Each patient was treated with a specific tool in two sessions. Using digital cameras, photos of all patients were taken to evaluate scar depth and, in five patients, silicone rubber was used to make a microrelief impression of the scars. The photographic data were analysed by using the sign test statistic (alpha < 0.05) and the data from the cutaneous casts were analysed by fast Fourier transformation (FFT).\n\n\nRESULTS\nAnalysis of the patient photographs, supported by the sign test and of the degree of irregularity of the surface microrelief, supported by FFT, showed that, after only two sessions, the severity grade of rolling scars in all patients was greatly reduced and there was an overall aesthetic improvement. No patient showed any visible signs of the procedure or hyperpigmentation.\n\n\nCONCLUSION\nThe present study confirms that skin needling has an immediate effect in improving acne rolling scars and has advantages over other procedures.",
"title": ""
},
{
"docid": "d9123053892ce671665a3a4a1694a57c",
"text": "Visual perceptual learning (VPL) is defined as a long-term improvement in performance on a visual task. In recent years, the idea that conscious effort is necessary for VPL to occur has been challenged by research suggesting the involvement of more implicit processing mechanisms, such as reinforcement-driven processing and consolidation. In addition, we have learnt much about the neural substrates of VPL and it has become evident that changes in visual areas and regions beyond the visual cortex can take place during VPL.",
"title": ""
},
{
"docid": "7677b67bd95f05c2e4c87022c3caa938",
"text": "The semi-supervised learning usually only predict labels for unlabeled data appearing in training data, and cannot effectively predict labels for testing data never appearing in training set. To handle this outof-sample problem, many inductive methods make a constraint such that the predicted label matrix should be exactly equal to a linear model. In practice, this constraint is too rigid to capture the manifold structure of data. Motivated by this deficiency, we relax the rigid linear embedding constraint and propose to use an elastic embedding constraint on the predicted label matrix such that the manifold structure can be better explored. To solve our new objective and also a more general optimization problem, we study a novel adaptive loss with efficient optimization algorithm. Our new adaptive loss minimization method takes the advantages of both L1 norm and L2 norm, and is robust to the data outlier under Laplacian distribution and can efficiently learn the normal data under Gaussian distribution. Experiments have been performed on image classification tasks and our approach outperforms other state-of-the-art methods.",
"title": ""
},
{
"docid": "3646b64ac400c12f9c9c4f8ba4f53591",
"text": "Cerebral organoids recapitulate human brain development at a considerable level of detail, even in the absence of externally added signaling factors. The patterning events driving this self-organization are currently unknown. Here, we examine the developmental and differentiative capacity of cerebral organoids. Focusing on forebrain regions, we demonstrate the presence of a variety of discrete ventral and dorsal regions. Clearing and subsequent 3D reconstruction of entire organoids reveal that many of these regions are interconnected, suggesting that the entire range of dorso-ventral identities can be generated within continuous neuroepithelia. Consistent with this, we demonstrate the presence of forebrain organizing centers that express secreted growth factors, which may be involved in dorso-ventral patterning within organoids. Furthermore, we demonstrate the timed generation of neurons with mature morphologies, as well as the subsequent generation of astrocytes and oligodendrocytes. Our work provides the methodology and quality criteria for phenotypic analysis of brain organoids and shows that the spatial and temporal patterning events governing human brain development can be recapitulated in vitro.",
"title": ""
},
{
"docid": "4db29a3fd1f1101c3949d3270b15ef07",
"text": "Human goal-directed action emerges from the interaction between stimulus-driven sensorimotor online systems and slower-working control systems that relate highly processed perceptual information to the construction of goal-related action plans. This distribution of labor requires the acquisition of enduring action representations; that is, of memory traces which capture the main characteristics of successful actions and their consequences. It is argued here that these traces provide the building blocks for off-line prospective action planning, which renders the search through stored action representations an essential part of action control. Hence, action planning requires cognitive search (through possible options) and might have led to the evolution of cognitive search routines that humans have learned to employ for other purposes as well, such as searching for perceptual events and through memory. Thus, what is commonly considered to represent different types of search operations may all have evolved from action planning and share the same characteristics. Evidence is discussed which suggests that all types of cognitive search—be it in searching for perceptual events, for suitable actions, or through memory—share the characteristic of following a fi xed sequence of cognitive operations: divergent search followed by convergent search.",
"title": ""
},
{
"docid": "7c295cb178e58298b1f60f5a829118fd",
"text": "A dual-band 0.92/2.45 GHz circularly-polarized (CP) unidirectional antenna using the wideband dual-feed network, two orthogonally positioned asymmetric H-shape slots, and two stacked concentric annular-ring patches is proposed for RF identification (RFID) applications. The measurement result shows that the antenna achieves the impedance bandwidths of 15.4% and 41.9%, the 3-dB axial-ratio (AR) bandwidths of 4.3% and 21.5%, and peak gains of 7.2 dBic and 8.2 dBic at 0.92 and 2.45 GHz bands, respectively. Moreover, the antenna provides stable symmetrical radiation patterns and wide-angle 3-dB AR beamwidths in both lower and higher bands for unidirectional wide-coverage RFID reader applications. Above all, the dual-band CP unidirectional patch antenna presented is beneficial to dual-band RFID system on configuration, implementation, as well as cost reduction.",
"title": ""
},
{
"docid": "ba4d30e7ea09d84f8f7d96c426e50f34",
"text": "Submission instructions: These questions require thought but do not require long answers. Please be as concise as possible. You should submit your answers as a writeup in PDF format via GradeScope and code via the Snap submission site. Submitting writeup: Prepare answers to the homework questions into a single PDF file and submit it via http://gradescope.com. Make sure that the answer to each question is on a separate page. On top of each page write the number of the question you are answering. Please find the cover sheet and the recommended templates located here: Not including the cover sheet in your submission will result in a 2-point penalty. It is also important to tag your answers correctly on Gradescope. We will deduct 5/N points for each incorrectly tagged subproblem (where N is the number of subproblems). This means you can lose up to 5 points for incorrect tagging. Put all the code for a single question into a single file and upload it. Consider a user-item bipartite graph where each edge in the graph between user U to item I, indicates that user U likes item I. We also represent the ratings matrix for this set of users and items as R, where each row in R corresponds to a user and each column corresponds to an item. If user i likes item j, then R i,j = 1, otherwise R i,j = 0. Also assume we have m users and n items, so matrix R is m × n.",
"title": ""
},
{
"docid": "c695f74a41412606e31c771ec9d2b6d3",
"text": "Osteochondrosis dissecans (OCD) is a form of osteochondrosis limited to the articular epiphysis. The most commonly affected areas include, in decreasing order of frequency, the femoral condyles, talar dome and capitellum of the humerus. OCD rarely occurs in the shoulder joint, where it involves either the humeral head or the glenoid. The purpose of this report is to present a case with glenoid cavity osteochondritis dissecans and clinical and radiological outcome after arthroscopic debridement. The patient underwent arthroscopy to remove the loose body and to microfracture the cavity. The patient was followed-up for 4 years and she is pain-free with full range of motion and a stable shoulder joint.",
"title": ""
},
{
"docid": "678ef706d4cb1c35f6b3d82bf25a4aa7",
"text": "This article is an extremely rapid survey of the modern theory of partial differential equations (PDEs). Sources of PDEs are legion: mathematical physics, geometry, probability theory, continuum mechanics, optimization theory, etc. Indeed, most of the fundamental laws of the physical sciences are partial differential equations and most papers published in applied math concern PDEs. The following discussion is consequently very broad, but also very shallow, and will certainly be inadequate for any given PDE the reader may care about. The goal is rather to highlight some of the many key insights and unifying principles across the entire subject.",
"title": ""
},
{
"docid": "db190bb0cf83071b6e19c43201f92610",
"text": "In this paper, a MATLAB based simulation of Grid connected PV system is presented. The main components of this simulation are PV solar panel, Boost converter; Maximum Power Point Tracking System (MPPT) and Grid Connected PV inverter with closed loop control system is designed and simulated. A simulation studies is carried out in different solar radiation level.",
"title": ""
},
{
"docid": "ac156d7b3069ff62264bd704b7b8dfc9",
"text": "Rynes, Colbert, and Brown (2002) presented the following statement to 959 members of the Society for Human Resource Management (SHRM): “Surveys that directly ask employees how important pay is to them are likely to overestimate pay’s true importance in actual decisions” (p. 158). If our interpretation (and that of Rynes et al.) of the research literature is accurate, then the correct true-false answer to the above statement is “false.” In other words, people are more likely to underreport than to overreport the importance of pay as a motivational factor in most situations. Put another way, research suggests that pay is much more important in people’s actual choices and behaviors than it is in their self-reports of what motivates them, much like the cartoon viewers mentioned in the quote above. Yet, only 35% of the respondents in the Rynes et al. study answered in a way consistent with research findings (i.e., chose “false”). Our objective in this article is to show that employee surveys regarding the importance of various factors in motivation generally produce results that are inconsistent with studies of actual employee behavior. In particular, we focus on well-documented findings that employees tend to say that pay THE IMPORTANCE OF PAY IN EMPLOYEE MOTIVATION: DISCREPANCIES BETWEEN WHAT PEOPLE SAY AND WHAT THEY DO",
"title": ""
},
{
"docid": "5008ecf234a3449f524491de04b7868c",
"text": "Cross-domain recommendations are currently available in closed, proprietary social networking ecosystems such as Facebook, Twitter and Google+. I propose an open framework as an alternative, which enables cross-domain recommendations with domain-agnostic user profiles modeled as semantic interest graphs. This novel framework covers all parts of a recommender system. It includes an architecture for privacy-enabled profile exchange, a distributed and domain-agnostic user model and a cross-domain recommendation algorithm. This enables users to receive recommendations for a target domain (e.g. food) based on any kind of previous interests.",
"title": ""
}
] | scidocsrr |
c07287090c74ba660018576f21d102d7 | How competitive are you: Analysis of people's attractiveness in an online dating system | [
{
"docid": "9efa0ff0743edacc4e9421ed45441fde",
"text": "Perception of universal facial beauty has long been debated amongst psychologists and anthropologists. In this paper, we perform experiments to evaluate the extent of universal beauty by surveying a number of diverse human referees to grade a collection of female facial images. Results obtained show that there exists a strong central tendency in the human grades, thus exhibiting agreement on beauty assessment. We then trained an automated classifier using the average human grades as the ground truth and used it to classify an independent test set of facial images. The high accuracy achieved proves that this classifier can be used as a general, automated tool for objective classification of female facial beauty. Potential applications exist in the entertainment industry, cosmetic industry, virtual media, and plastic surgery.",
"title": ""
},
{
"docid": "4f8fea97733000d58f2ff229c85aeaa0",
"text": "Online dating sites have become popular platforms for people to look for potential romantic partners. Many online dating sites provide recommendations on compatible partners based on their proprietary matching algorithms. It is important that not only the recommended dates match the user’s preference or criteria, but also the recommended users are interested in the user and likely to reciprocate when contacted. The goal of this paper is to predict whether an initial contact message from a user will be replied to by the receiver. The study is based on a large scale real-world dataset obtained from a major dating site in China with more than sixty million registered users. We formulate our reply prediction as a link prediction problem of social networks and approach it using a machine learning framework. The availability of a large amount of user profile information and the bipartite nature of the dating network present unique opportunities and challenges to the reply prediction problem. We extract user-based features from user profiles and graph-based features from the bipartite dating network, apply them in a variety of classification algorithms, and compare the utility of the features and performance of the classifiers. Our results show that the user-based and graph-based features result in similar performance, and can be used to effectively predict the reciprocal links. Only a small performance gain is achieved when both feature sets are used. Among the five classifiers we considered, random forests method outperforms the other four algorithms (naive Bayes, logistic regression, KNN, and SVM). Our methods and results can provide valuable guidelines to the design and performance of recommendation engine for online dating sites.",
"title": ""
}
] | [
{
"docid": "3fbb2bb37f44cb8f300fd28cdbd8bc06",
"text": "The synapse is a crucial element in biological neural networks, but a simple electronic equivalent has been absent. This complicates the development of hardware that imitates biological architectures in the nervous system. Now, the recent progress in the experimental realization of memristive devices has renewed interest in artificial neural networks. The resistance of a memristive system depends on its past states and exactly this functionality can be used to mimic the synaptic connections in a (human) brain. After a short introduction to memristors, we present and explain the relevant mechanisms in a biological neural network, such as long-term potentiation and spike time-dependent plasticity, and determine the minimal requirements for an artificial neural network. We review the implementations of these processes using basic electric circuits and more complex mechanisms that either imitate biological systems or could act as a model system for them. (Some figures may appear in colour only in the online journal)",
"title": ""
},
{
"docid": "3567af18bc17efdb0efeb41d08fabb7b",
"text": "In this review we examine recent research in the area of motivation in mathematics education and discuss findings from research perspectives in this domain. We note consistencies across research perspectives that suggest a set of generalizable conclusions about the contextual factors, cognitive processes, and benefits of interventions that affect students’ and teachers’ motivational attitudes. Criticisms are leveled concerning the lack of theoretical guidance driving the conduct and interpretation of the majority of studies in the field. Few researchers have attempted to extend current theories of motivation in ways that are consistent with the current research on learning and classroom discourse. In particular, researchers interested in studying motivation in the content domain of school mathematics need to examine the relationship that exists between mathematics as a socially constructed field and students’ desire to achieve.",
"title": ""
},
{
"docid": "6e82e635682cf87a84463f01c01a1d33",
"text": "Finger veins have been proved to be an effective biometric for personal identification in the recent years. However, finger vein images are easily affected by influences such as image translation, orientation, scale, scattering, finger structure, complicated background, uneven illumination, and collection posture. All these factors may contribute to inaccurate region of interest (ROI) definition, and so degrade the performance of finger vein identification system. To improve this problem, in this paper, we propose a finger vein ROI localization method that has high effectiveness and robustness against the above factors. The proposed method consists of a set of steps to localize ROIs accurately, namely segmentation, orientation correction, and ROI detection. Accurate finger region segmentation and correct calculated orientation can support each other to produce higher accuracy in localizing ROIs. Extensive experiments have been performed on the finger vein image database, MMCBNU_6000, to verify the robustness of the proposed method. The proposed method shows the segmentation accuracy of 100%. Furthermore, the average processing time of the proposed method is 22 ms for an acquired image, which satisfies the criterion of a real-time finger vein identification system.",
"title": ""
},
{
"docid": "6e60d6b878c35051ab939a03bdd09574",
"text": "We propose a new CNN-CRF end-to-end learning framework, which is based on joint stochastic optimization with respect to both Convolutional Neural Network (CNN) and Conditional Random Field (CRF) parameters. While stochastic gradient descent is a standard technique for CNN training, it was not used for joint models so far. We show that our learning method is (i) general, i.e. it applies to arbitrary CNN and CRF architectures and potential functions; (ii) scalable, i.e. it has a low memory footprint and straightforwardly parallelizes on GPUs; (iii) easy in implementation. Additionally, the unified CNN-CRF optimization approach simplifies a potential hardware implementation. We empirically evaluate our method on the task of semantic labeling of body parts in depth images and show that it compares favorably to competing techniques.",
"title": ""
},
{
"docid": "049def2d879d0b873132660b0b856443",
"text": "This report explores the relationship between narcissism and unethical conduct in an organization by answering two questions: (1) In what ways does narcissism affect an organization?, and (2) What is the relationship between narcissism and the financial industry? Research suggests the overall conclusion that narcissistic individuals directly influence the identity of an organization and how it behaves. Ways to address these issues are shown using Enron as a case study example.",
"title": ""
},
{
"docid": "d835cb852c482c2b7e14f9af4a5a1141",
"text": "This paper investigates the effectiveness of state-of-the-art classification algorithms to categorise road vehicles for an urban traffic monitoring system using a multi-shape descriptor. The analysis is applied to monocular video acquired from a static pole-mounted road side CCTV camera on a busy street. Manual vehicle segmentation was used to acquire a large (>2000 sample) database of labelled vehicles from which a set of measurement-based features (MBF) in combination with a pyramid of HOG (histogram of orientation gradients, both edge and intensity based) features. These are used to classify the objects into four main vehicle categories: car, van, bus and motorcycle. Results are presented for a number of experiments that were conducted to compare support vector machines (SVM) and random forests (RF) classifiers. 10-fold cross validation has been used to evaluate the performance of the classification methods. The results demonstrate that all methods achieve a recognition rate above 95% on the dataset, with SVM consistently outperforming RF. A combination of MBF and IPHOG features gave the best performance of 99.78%.",
"title": ""
},
{
"docid": "9f530b42ae19ddcf52efa41272b2dbc7",
"text": "Learning-based methods for appearance-based gaze estimation achieve state-of-the-art performance in challenging real-world settings but require large amounts of labelled training data. Learningby-synthesis was proposed as a promising solution to this problem but current methods are limited with respect to speed, the appearance variability as well as the head pose and gaze angle distribution they can synthesize. We present UnityEyes, a novel method to rapidly synthesize large amounts of variable eye region images as training data. Our method combines a novel generative 3D model of the human eye region with a real-time rendering framework. The model is based on high-resolution 3D face scans and uses realtime approximations for complex eyeball materials and structures as well as novel anatomically inspired procedural geometry methods for eyelid animation. We show that these synthesized images can be used to estimate gaze in difficult in-the-wild scenarios, even for extreme gaze angles or in cases in which the pupil is fully occluded. We also demonstrate competitive gaze estimation results on a benchmark in-the-wild dataset, despite only using a light-weight nearest-neighbor algorithm. We are making our UnityEyes synthesis framework freely available online for the benefit of the research community.",
"title": ""
},
{
"docid": "759a19f60890a11e7e460aecd7bb6477",
"text": "The stiff man syndrome (SMS) and its variants, focal SMS, stiff limb (or leg) syndrome (SLS), jerking SMS, and progressive encephalomyelitis with rigidity and myoclonus (PERM), appear to occur more frequently than hitherto thought. A characteristic ensemble of symptoms and signs allows a tentative clinical diagnosis. Supportive ancillary findings include (1) the demonstration of continuous muscle activity in trunk and proximal limb muscles despite attempted relaxation, (2) enhanced exteroceptive reflexes, and (3) antibodies to glutamic acid decarboxylase (GAD) in both serum and spinal fluid. Antibodies to GAD are not diagnostic or specific for SMS and the role of these autoantibodies in the pathogenesis of SMS/SLS/PERM is the subject of debate and difficult to reconcile on the basis of our present knowledge. Nevertheless, evidence is emerging to suggest that SMS/SLS/PERM are manifestations of an immune-mediated chronic encephalomyelitis and immunomodulation is an effective therapeutic approach.",
"title": ""
},
{
"docid": "5ca5cfcd0ed34d9b0033977e9cde2c74",
"text": "We study the impact of regulation on competition between brand-names and generics and pharmaceutical expenditures using a unique policy experiment in Norway, where reference pricing (RP) replaced price cap regulation in 2003 for a sub-sample of o¤-patent products. First, we construct a vertical di¤erentiation model to analyze the impact of regulation on prices and market shares of brand-names and generics. Then, we exploit a detailed panel data set at product level covering several o¤-patent molecules before and after the policy reform. O¤-patent drugs not subject to RP serve as our control group. We
nd that RP signi
cantly reduces both brand-name and generic prices, and results in signi
cantly lower brand-name market shares. Finally, we show that RP has a strong negative e¤ect on average molecule prices, suggesting signi
cant cost-savings, and that patients copayments decrease despite the extra surcharges under RP. Key words: Pharmaceuticals; Regulation; Generic Competition JEL Classi
cations: I11; I18; L13; L65 We thank David Bardey, Øivind Anti Nilsen, Frode Steen, and two anonymous referees for valuable comments and suggestions. We also thank the Norwegian Research Council, Health Economics Bergen (HEB) for
nancial support. Corresponding author. Department of Economics and Health Economics Bergen, Norwegian School of Economics and Business Administration, Helleveien 30, N-5045 Bergen, Norway. E-mail: [email protected]. Uni Rokkan Centre, Health Economics Bergen, Nygårdsgaten 5, N-5015 Bergen, Norway. E-mail: [email protected]. Department of Economics/NIPE, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal; and University of Bergen (Economics), Norway. E-mail: [email protected].",
"title": ""
},
{
"docid": "00c17123df0fa10f0d405b4d0c9dfad0",
"text": "Touchless hand gesture recognition systems are becoming important in automotive user interfaces as they improve safety and comfort. Various computer vision algorithms have employed color and depth cameras for hand gesture recognition, but robust classification of gestures from different subjects performed under widely varying lighting conditions is still challenging. We propose an algorithm for drivers’ hand gesture recognition from challenging depth and intensity data using 3D convolutional neural networks. Our solution combines information from multiple spatial scales for the final prediction. It also employs spatiotemporal data augmentation for more effective training and to reduce potential overfitting. Our method achieves a correct classification rate of 77.5% on the VIVA challenge dataset.",
"title": ""
},
{
"docid": "8f7428569e1d3036cdf4842d48b56c22",
"text": "This paper describes a unified model for role-based access control (RBAC). RBAC is a proven technology for large-scale authorization. However, lack of a standard model results in uncertainty and confusion about its utility and meaning. The NIST model seeks to resolve this situation by unifying ideas from prior RBAC models, commercial products and research prototypes. It is intended to serve as a foundation for developing future standards. RBAC is a rich and open-ended technology which is evolving as users, researchers and vendors gain experience with it. The NIST model focuses on those aspects of RBAC for which consensus is available. It is organized into four levels of increasing functional capabilities called flat RBAC, hierarchical RBAC, constrained RBAC and symmetric RBAC. These levels are cumulative and each adds exactly one new requirement. An alternate approach comprising flat and hierarchical RBAC in an ordered sequence and two unordered features—constraints and symmetry—is also presented. The paper furthermore identifies important attributes of RBAC not included in the NIST model. Some are not suitable for inclusion in a consensus document. Others require further work and agreement before standardization is feasible.",
"title": ""
},
{
"docid": "895f0424cb71c79b86ecbd11a4f2eb8e",
"text": "A chronic alcoholic who had also been submitted to partial gastrectomy developed a syndrome of continuous motor unit activity responsive to phenytoin therapy. There were signs of minimal distal sensorimotor polyneuropathy. Symptoms of the syndrome of continuous motor unit activity were fasciculation, muscle stiffness, myokymia, impaired muscular relaxation and percussion myotonia. Electromyography at rest showed fasciculation, doublets, triplets, multiplets, trains of repetitive discharges and myotonic discharges. Trousseau's and Chvostek's signs were absent. No abnormality of serum potassium, calcium, magnesium, creatine kinase, alkaline phosphatase, arterial blood gases and pH were demonstrated, but the serum Vitamin B12 level was reduced. The electrophysiological findings and muscle biopsy were compatible with a mixed sensorimotor polyneuropathy. Tests of neuromuscular transmission showed a significant decrement in the amplitude of the evoked muscle action potential in the abductor digiti minimi on repetitive nerve stimulation. These findings suggest that hyperexcitability and hyperactivity of the peripheral motor axons underlie the syndrome of continuous motor unit activity in the present case. Ein chronischer Alkoholiker, mit subtotaler Gastrectomie, litt an einem Syndrom dauernder Muskelfaseraktivität, das mit Diphenylhydantoin behandelt wurde. Der Patient wies minimale Störungen im Sinne einer distalen sensori-motorischen Polyneuropathie auf. Die Symptome dieses Syndroms bestehen in: Fazikulationen, Muskelsteife, Myokymien, eine gestörte Erschlaffung nach der Willküraktivität und eine Myotonie nach Beklopfen des Muskels. Das Elektromyogramm in Ruhe zeigt: Faszikulationen, Doublets, Triplets, Multiplets, Trains repetitiver Potentiale und myotonische Entladungen. Trousseau- und Chvostek-Zeichen waren nicht nachweisbar. Gleichzeitig lagen die Kalium-, Calcium-, Magnesium-, Kreatinkinase- und Alkalinphosphatase-Werte im Serumspiegel sowie O2, CO2 und pH des arteriellen Blutes im Normbereich. Aber das Niveau des Vitamin B12 im Serumspiegel war deutlich herabgesetzt. Die muskelbioptische und elektrophysiologische Veränderungen weisen auf eine gemischte sensori-motorische Polyneuropathie hin. Die Abnahme der Amplitude der evozierten Potentiale, vom M. abductor digiti minimi abgeleitet, bei repetitiver Reizung des N. ulnaris, stellten eine Störung der neuromuskulären Überleitung dar. Aufgrund unserer klinischen und elektrophysiologischen Befunde könnten wir die Hypererregbarkeit und Hyperaktivität der peripheren motorischen Axonen als Hauptmechanismus des Syndroms dauernder motorischer Einheitsaktivität betrachten.",
"title": ""
},
{
"docid": "d488d9d754c360efb3910c83e3175756",
"text": "The most common question asked by patients with inflammatory bowel disease (IBD) is, \"Doctor, what should I eat?\" Findings from epidemiology studies have indicated that diets high in animal fat and low in fruits and vegetables are the most common pattern associated with an increased risk of IBD. Low levels of vitamin D also appear to be a risk factor for IBD. In murine models, diets high in fat, especially saturated animal fats, also increase inflammation, whereas supplementation with omega 3 long-chain fatty acids protect against intestinal inflammation. Unfortunately, omega 3 supplements have not been shown to decrease the risk of relapse in patients with Crohn's disease. Dietary intervention studies have shown that enteral therapy, with defined formula diets, helps children with Crohn's disease and reduces inflammation and dysbiosis. Although fiber supplements have not been shown definitively to benefit patients with IBD, soluble fiber is the best way to generate short-chain fatty acids such as butyrate, which has anti-inflammatory effects. Addition of vitamin D and curcumin has been shown to increase the efficacy of IBD therapy. There is compelling evidence from animal models that emulsifiers in processed foods increase risk for IBD. We discuss current knowledge about popular diets, including the specific carbohydrate diet and diet low in fermentable oligo-, di-, and monosaccharides and polyols. We present findings from clinical and basic science studies to help gastroenterologists navigate diet as it relates to the management of IBD.",
"title": ""
},
{
"docid": "3f2d4df1b0ef315ee910636c9439b049",
"text": "Real-Time Line and Disk Light Shading\n Eric Heitz and Stephen Hill\n At SIGGRAPH 2016, we presented a new real-time area lighting technique for polygonal sources. In this talk, we will show how the underlying framework, based on Linearly Transformed Cosines (LTCs), can be extended to support line and disk lights. We will discuss the theory behind these approaches as well as practical implementation tips and tricks concerning numerical precision and performance.\n Physically Based Shading at DreamWorks Animation\n Feng Xie and Jon Lanz\n PDI/DreamWorks was one of the first animation studios to adopt global illumination in production rendering. Concurrently, we have also been developing and applying physically based shading principles to improve the consistency and realism of our material models, while balancing the need for intuitive artistic control required for feature animations.\n In this talk, we will start by presenting the evolution of physically based shading in our films. Then we will present some fundamental principles with respect to importance sampling and energy conservation in our BSDF framework with a pragmatic and efficient approach to transimssion fresnel modeling. Finally, we will present our new set of physically plausible production shaders for our new path tracer, which includes our new hard surface shader, our approach to material layering and some new developments in fabric and glitter shading.\n Volumetric Skin and Fabric Shading at Framestore\n Nathan Walster\n Recent advances in shading have led to the use of free-path sampling to better solve complex light transport within volumetric materials. In this talk, we describe how we have implemented these ideas and techniques within a production environment, their application on recent shows---such as Guardians of the Galaxy Vol. 2 and Alien: Covenant---and the effect this has had on artists' workflow within our studio.\n Practical Multilayered Materials in Call of Duty: Infinite Warfare\n Michał Drobot\n This talk presents a practical approach to multilayer, physically based surface rendering, specifically optimized for Forward+ rendering pipelines. The presented pipeline allows for the creation of complex surface by decomposing them into different mediums, each represented by a simple BRDF/BSSRDF and set of simple, physical macro properties, such as thickness, scattering and absorption. The described model is explained via practical examples of common multilayer materials such as car paint, lacquered wood, ice, and semi-translucent plastics. Finally, the talk describes intrinsic implementation details for achieving a low performance budget for 60 Hz titles as well as supporting multiple rendering modes: opaque, alpha blend, and refractive blend.\n Pixar's Foundation for Materials: PxrSurface and PxrMarschnerHair\n Christophe Hery and Junyi Ling\n Pixar's Foundation Materials, PxrSurface and PxrMarschnerHair, began shipping with RenderMan 21.\n PxrSurface is the standard surface shader developed in the studio for Finding Dory and used more recently for Cars 3 and Coco. This shader contains nine lobes that cover the entire gamut of surface materials for these two films: diffuse, three specular, iridescence, fuzz, subsurface, single scatter and a glass lobe. Each of these BxDF lobes is energy conserving, but conservation is not enforced between lobes on the surface level. We use parameter layering methods to feed a PxrSurface with pre-layered material descriptions. 
This simultaneously allows us the flexibility of a multilayered shading pipeline together with efficient and consistent rendering behavior.\n We also implemented our individual BxDFs with the latest state-of-the-art techniques. For example, our three specular lobes can be switched between Beckmann and GGX modes. Many compound materials have multiple layers of specular; these lobes interact with each other modulated by the Fresnel effect of the clearcoat layer. We also leverage LEADR mapping to recreate sub-displacement micro features such as metal flakes and clearcoat scratches.\n Another example is that PxrSurface ships with Jensen, d'Eon and Burley diffusion profiles. Additionally, we implemented a novel subsurface model using path-traced volumetric scattering, which represents a significant advancement. It captures zero and single scattering events of subsurface scattering implicit to the path-tracing algorithm. The user can adjust the phase-function of the scattering events and change the extinction profiles, and it also comes with standardized color inversion features for intuitive albedo input. To the best of our knowledge, this is the first commercially available rendering system to model these features and the rendering cost is comparable to classic diffusion subsurface scattering models.\n PxrMarschnerHair implements Marschner's seminal hair illumination model with importance sampling. We also account for the residual energy left after the R, TT, TRT and glint lobes, through a fifth diffuse lobe. We show that this hair surface shader can reproduce dark and blonde hair effectively in a path-traced production context. Volumetric scattering from fiber to fiber changes the perceived hue and saturation of a groom, so we also provide a color inversion scheme to invert input albedos, such that the artistic inputs are straightforward and intuitive.\n Revisiting Physically Based Shading at Imageworks\n Christopher Kulla and Alejandro Conty\n Two years ago, the rendering and shading groups at Sony Imageworks embarked on a project to review the structure of our physically based shaders in an effort to simplify their implementation, improve quality and pave the way to take advantage of future improvements in light transport algorithms.\n We started from classic microfacet BRDF building blocks and investigated energy conservation and artist friendly parametrizations. We continued by unifying volume rendering and subsurface scattering algorithms and put in place a system for medium tracking to improve the setup of nested media. Finally, from all these building blocks, we rebuilt our artist-facing shaders with a simplified interface and a more flexible layering approach through parameter blending.\n Our talk will discuss the details of our various building blocks, what worked and what didn't, as well as some future research directions we are still interested in exploring.",
"title": ""
},
{
"docid": "4689161101a990d17b08e27b3ccf2be3",
"text": "The growth of the software game development industry is enormous and is gaining importance day by day. This growth imposes severe pressure and a number of issues and challenges on the game development community. Game development is a complex process, and one important game development choice is to consider the developer’s perspective to produce good-quality software games by improving the game development process. The objective of this study is to provide a better understanding of the developer’s dimension as a factor in software game success. It focuses mainly on an empirical investigation of the effect of key developer’s factors on the software game development process and eventually on the quality of the resulting game. A quantitative survey was developed and conducted to identify key developer’s factors for an enhanced game development process. For this study, the developed survey was used to test the research model and hypotheses. The results provide evidence that game development organizations must deal with multiple key factors to remain competitive and to handle high pressure in the software game industry. The main contribution of this paper is to investigate empirically the influence of key developer’s factors on the game development process.",
"title": ""
},
{
"docid": "934ee0b55bf90eed86fabfff8f1238d1",
"text": "Schelling (1969, 1971a,b, 1978) considered a simple proximity model of segregation where individual agents only care about the types of people living in their own local geographical neighborhood, the spatial structure being represented by oneor two-dimensional lattices. In this paper, we argue that segregation might occur not only in the geographical space, but also in social environments. Furthermore, recent empirical studies have documented that social interaction structures are well-described by small-world networks. We generalize Schelling’s model by allowing agents to interact in small-world networks instead of regular lattices. We study two alternative dynamic models where agents can decide to move either arbitrarily far away (global model) or are bound to choose an alternative location in their social neighborhood (local model). Our main result is that the system attains levels of segregation that are in line with those reached in the lattice-based spatial proximity model. Thus, Schelling’s original results seem to be robust to the structural properties of the network.",
"title": ""
},
{
"docid": "c6ebb1f54f42f38dae8c19566f2459ce",
"text": "We develop several predictive models linking legislative sentiment to legislative text. Our models, which draw on ideas from ideal point estimation and topic models, predict voting patterns based on the contents of bills and infer the political leanings of legislators. With supervised topics, we provide an exploratory window into how the language of the law is correlated with political support. We also derive approximate posterior inference algorithms based on variational methods. Across 12 years of legislative data, we predict specific voting patterns with high accuracy.",
"title": ""
},
{
"docid": "1865a404c970d191ed55e7509b21fb9e",
"text": "Most machine learning methods are known to capture and exploit biases of the training data. While some biases are beneficial for learning, others are harmful. Specifically, image captioning models tend to exaggerate biases present in training data (e.g., if a word is present in 60% of training sentences, it might be predicted in 70% of sentences at test time). This can lead to incorrect captions in domains where unbiased captions are desired, or required, due to over-reliance on the learned prior and image context. In this work we investigate generation of gender-specific caption words (e.g. man, woman) based on the person’s appearance or the image context. We introduce a new Equalizer model that encourages equal gender probability when gender evidence is occluded in a scene and confident predictions when gender evidence is present. The resulting model is forced to look at a person rather than use contextual cues to make a gender-specific prediction. The losses that comprise our model, the Appearance Confusion Loss and the Confident Loss, are general, and can be added to any description model in order to mitigate impacts of unwanted bias in a description dataset. Our proposed model has lower error than prior work when describing images with people and mentioning their gender and more closely matches the ground truth ratio of sentences including women to sentences including men. Finally, we show that our model more often looks at people when predicting their gender. 1",
"title": ""
},
{
"docid": "0b6a3b143dfccd7ca9ea09f7fa5b5e8c",
"text": "Cancer has been characterized as a heterogeneous disease consisting of many different subtypes. The early diagnosis and prognosis of a cancer type have become a necessity in cancer research, as it can facilitate the subsequent clinical management of patients. The importance of classifying cancer patients into high or low risk groups has led many research teams, from the biomedical and the bioinformatics field, to study the application of machine learning (ML) methods. Therefore, these techniques have been utilized as an aim to model the progression and treatment of cancerous conditions. In addition, the ability of ML tools to detect key features from complex datasets reveals their importance. A variety of these techniques, including Artificial Neural Networks (ANNs), Bayesian Networks (BNs), Support Vector Machines (SVMs) and Decision Trees (DTs) have been widely applied in cancer research for the development of predictive models, resulting in effective and accurate decision making. Even though it is evident that the use of ML methods can improve our understanding of cancer progression, an appropriate level of validation is needed in order for these methods to be considered in the everyday clinical practice. In this work, we present a review of recent ML approaches employed in the modeling of cancer progression. The predictive models discussed here are based on various supervised ML techniques as well as on different input features and data samples. Given the growing trend on the application of ML methods in cancer research, we present here the most recent publications that employ these techniques as an aim to model cancer risk or patient outcomes.",
"title": ""
},
{
"docid": "2b98fd7a61fd7c521758651191df74d0",
"text": "Nowadays, a great effort is done to find new alternative renewable energy sources to replace part of nuclear energy production. In this context, this paper presents a new axial counter-rotating turbine for small-hydro applications which is developed to recover the energy lost in release valves of water supply. The design of the two PM-generators, their mechanical integration in a bulb placed into the water conduit and the AC-DC Vienna converter developed for these turbines are presented. The sensorless regulation of the two generators is also briefly discussed. Finally, measurements done on the 2-kW prototype are analyzed and compared with the simulation.",
"title": ""
}
] | scidocsrr |
856d1c7e556a5f1423113cb1d1243167 | Mining big data using parsimonious factor , machine learning , variable selection and shrinkage methods | [
{
"docid": "2710a25b3cf3caf5ebd5fb9f08c9e5e3",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/page/info/about/policies/terms.jsp. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use.",
"title": ""
},
{
"docid": "ffc36fa0dcc81a7f5ba9751eee9094d7",
"text": "The independent component analysis (ICA) of a random vector consists of searching for a linear transformation that minimizes the statistical dependence between its components. In order to define suitable search criteria, the expansion of mutual information is utilized as a function of cumulants of increasing orders. An efficient algorithm is proposed, which allows the computation of the ICA of a data matrix within a polynomial time. The concept of lCA may actually be seen as an extension of the principal component analysis (PCA), which can only impose independence up to the second order and, consequently, defines directions that are orthogonal. Potential applications of ICA include data analysis and compression, Bayesian detection, localization of sources, and blind identification and deconvolution. Zusammenfassung Die Analyse unabhfingiger Komponenten (ICA) eines Vektors beruht auf der Suche nach einer linearen Transformation, die die statistische Abh~ingigkeit zwischen den Komponenten minimiert. Zur Definition geeigneter Such-Kriterien wird die Entwicklung gemeinsamer Information als Funktion von Kumulanten steigender Ordnung genutzt. Es wird ein effizienter Algorithmus vorgeschlagen, der die Berechnung der ICA ffir Datenmatrizen innerhalb einer polynomischen Zeit erlaubt. Das Konzept der ICA kann eigentlich als Erweiterung der 'Principal Component Analysis' (PCA) betrachtet werden, die nur die Unabh~ingigkeit bis zur zweiten Ordnung erzwingen kann und deshalb Richtungen definiert, die orthogonal sind. Potentielle Anwendungen der ICA beinhalten Daten-Analyse und Kompression, Bayes-Detektion, Quellenlokalisierung und blinde Identifikation und Entfaltung.",
"title": ""
},
{
"docid": "c7e584bca061335c8cd085511f4abb3b",
"text": "The application of boosting technique to regression problems has received relatively little attention in contrast to research aimed at classification problems. This letter describes a new boosting algorithm, AdaBoost.RT, for regression problems. Its idea is in filtering out the examples with the relative estimation error that is higher than the preset threshold value, and then following the AdaBoost procedure. Thus, it requires selecting the suboptimal value of the error threshold to demarcate examples as poorly or well predicted. Some experimental results using the M5 model tree as a weak learning machine for several benchmark data sets are reported. The results are compared to other boosting methods, bagging, artificial neural networks, and a single M5 model tree. The preliminary empirical comparisons show higher performance of AdaBoost.RT for most of the considered data sets.",
"title": ""
}
] | [
{
"docid": "9fac5ac1de2ae70964bdb05643d41a68",
"text": "A long-standing goal in the field of artificial intelligence is to develop agents that can perceive and understand the rich visual world around us and who can communicate with us about it in natural language. Significant strides have been made towards this goal over the last few years due to simultaneous advances in computing infrastructure, data gathering and algorithms. The progress has been especially rapid in visual recognition, where computers can now classify images into categories with a performance that rivals that of humans, or even surpasses it in some cases such as classifying breeds of dogs. However, despite much encouraging progress, most of the advances in visual recognition still take place in the context of assigning one or a few discrete labels to an image (e.g. person, boat, keyboard, etc.). In this dissertation we develop models and techniques that allow us to connect the domain of visual data and the domain of natural language utterances, enabling translation between elements of the two domains. In particular, first we introduce a model that embeds both images and sentences into a common multi-modal embedding space. This space then allows us to identify images that depict an arbitrary sentence description and conversely, we can identify sentences that describe any image. Second, we develop an image captioning model that takes an image and directly generates a sentence description without being constrained a finite collection of human-written sentences to choose from. Lastly, we describe a model that can take an image and both localize and describe all if its salient parts. We demonstrate that this model can also be used backwards to take any arbitrary description (e.g. white tennis shoes) and e ciently localize the described concept in a large collection of images. We argue that these models, the techniques they take advantage of internally and the interactions they enable are a stepping stone towards artificial intelligence and that connecting images and natural language o↵ers many practical benefits and immediate valuable applications. From the modeling perspective, instead of designing and staging explicit algorithms to process images and sentences in complex processing pipelines, our contribution lies in the design of hybrid convolutional and recurrent neural network architectures that connect visual data and natural language utterances with a single network. Therefore, the computational processing of images,",
"title": ""
},
{
"docid": "58e17619012ddb58f86dc4bfa79d19d8",
"text": "–Malicious programs have been the main actors in complex, sophisticated attacks against nations, governments, diplomatic agencies, private institutions and people. Knowledge about malicious program behavior forms the basis for constructing more secure information systems. In this article, we introduce MBO, a Malicious Behavior Ontology that represents complex behaviors of suspicious executions, and through inference rules calculates their associated threat level for analytical proposals. We evaluate MBO using over two thousand unique known malware and 385 unique known benign software. Results highlight the representativeness of the MBO for expressing typical malicious activities. Security ontologyMalware behaviorThreat analysis",
"title": ""
},
{
"docid": "00eb132ce5063dd983c0c36724f82cec",
"text": "This paper analyzes customer product-choice behavior based on the recency and frequency of each customer’s page views on e-commerce sites. Recently, we devised an optimization model for estimating product-choice probabilities that satisfy monotonicity, convexity, and concavity constraints with respect to recency and frequency. This shape-restricted model delivered high predictive performance even when there were few training samples. However, typical e-commerce sites deal in many different varieties of products, so the predictive performance of the model can be further improved by integration of such product heterogeneity. For this purpose, we develop a novel latent-class shape-restricted model for estimating product-choice probabilities for each latent class of products. We also give a tailored expectation-maximization algorithm for parameter estimation. Computational results demonstrate that higher predictive performance is achieved with our latent-class model than with the previous shape-restricted model and common latent-class logistic regression.",
"title": ""
},
{
"docid": "23ada5f749c5780ff45057747e978b66",
"text": "In this paper, we introduce ReTSO, a reliable and efficient design for transactional support in large-scale storage systems. ReTSO uses a centralized scheme and implements snapshot isolation, a property that guarantees that read operations of a transaction read a consistent snapshot of the data stored. The centralized scheme of ReTSO enables a lock-free commit algorithm that prevents unre-leased locks of a failed transaction from blocking others. We analyze the bottlenecks in a single-server implementation of transactional logic and propose solutions for each. The experimental results show that our implementation can service up to 72K transaction per second (TPS), which is an order of magnitude larger than the maximum achieved traffic in similar data storage systems. Consequently, we do not expect ReTSO to be a bottleneck even for current large distributed storage systems.",
"title": ""
},
{
"docid": "6280266740e1a3da3fd536c134b39cfd",
"text": "Despite years of research yielding systems and guidelines to aid visualization design, practitioners still face the challenge of identifying the best visualization for a given dataset and task. One promising approach to circumvent this problem is to leverage perceptual laws to quantitatively evaluate the effectiveness of a visualization design. Following previously established methodologies, we conduct a large scale (n = 1687) crowdsourced experiment to investigate whether the perception of correlation in nine commonly used visualizations can be modeled using Weber's law. The results of this experiment contribute to our understanding of information visualization by establishing that: (1) for all tested visualizations, the precision of correlation judgment could be modeled by Weber's law, (2) correlation judgment precision showed striking variation between negatively and positively correlated data, and (3) Weber models provide a concise means to quantify, compare, and rank the perceptual precision afforded by a visualization.",
"title": ""
},
{
"docid": "0b71777f8b4d03fb147ff41d1224136e",
"text": "Mobile broadband demand keeps growing at an overwhelming pace. Though emerging wireless technologies will provide more bandwidth, the increase in demand may easily consume the extra bandwidth. To alleviate this problem, we propose using the content available on individual devices as caches. Particularly, when a user reaches areas with dense clusters of mobile devices, \"data spots\", the operator can instruct the user to connect with other users sharing similar interests and serve the requests locally. This paper presents feasibility study as well as prototype implementation of this idea.",
"title": ""
},
{
"docid": "dc3495ec93462e68f606246205a8416d",
"text": "State-of-the-art methods for zero-shot visual recognition formulate learning as a joint embedding problem of images and side information. In these formulations the current best complement to visual features are attributes: manually-encoded vectors describing shared characteristics among categories. Despite good performance, attributes have limitations: (1) finer-grained recognition requires commensurately more attributes, and (2) attributes do not provide a natural language interface. We propose to overcome these limitations by training neural language models from scratch, i.e. without pre-training and only consuming words and characters. Our proposed models train end-to-end to align with the fine-grained and category-specific content of images. Natural language provides a flexible and compact way of encoding only the salient visual aspects for distinguishing categories. By training on raw text, our model can do inference on raw text as well, providing humans a familiar mode both for annotation and retrieval. Our model achieves strong performance on zero-shot text-based image retrieval and significantly outperforms the attribute-based state-of-the-art for zero-shot classification on the Caltech-UCSD Birds 200-2011 dataset.",
"title": ""
},
{
"docid": "680c621ebc0dd6f762abb8df9871070e",
"text": "Methods for learning to search for structured prediction typically imitate a reference policy, with existing theoretical guarantees demonstrating low regret compared to that reference. This is unsatisfactory in many applications where the reference policy is suboptimal and the goal of learning is to improve upon it. Can learning to search work even when the reference is poor? We provide a new learning to search algorithm, LOLS, which does well relative to the reference policy, but additionally guarantees low regret compared to deviations from the learned policy: a local-optimality guarantee. Consequently, LOLS can improve upon the reference policy, unlike previous algorithms. This enables us to develop structured contextual bandits, a partial information structured prediction setting with many potential applications.",
"title": ""
},
{
"docid": "0084faef0e08c4025ccb3f8fd50892f1",
"text": "Steganography is a method of hiding secret messages in a cover object while communication takes place between sender and receiver. Security of confidential information has always been a major issue from the past times to the present time. It has always been the interested topic for researchers to develop secure techniques to send data without revealing it to anyone other than the receiver. Therefore from time to time researchers have developed many techniques to fulfill secure transfer of data and steganography is one of them. In this paper we have proposed a new technique of image steganography i.e. Hash-LSB with RSA algorithm for providing more security to data as well as our data hiding method. The proposed technique uses a hash function to generate a pattern for hiding data bits into LSB of RGB pixel values of the cover image. This technique makes sure that the message has been encrypted before hiding it into a cover image. If in any case the cipher text got revealed from the cover image, the intermediate person other than receiver can't access the message as it is in encrypted form.",
"title": ""
},
{
"docid": "4eabc161187126a726a6b65f6fc6c685",
"text": "In this paper, we propose a new method to estimate synthetic aperture radar interferometry (InSAR) interferometric phase in the presence of large coregistration errors. The method takes advantage of the coherence information of neighboring pixel pairs to automatically coregister the SAR images and employs the projection of the joint signal subspace onto the corresponding joint noise subspace to estimate the terrain interferometric phase. The method can automatically coregister the SAR images and reduce the interferometric phase noise simultaneously. Theoretical analysis and computer simulation results show that the method can provide accurate estimate of the terrain interferometric phase (interferogram) as the coregistration error reaches one pixel. The effectiveness of the method is also verified with the real data from the Spaceborne Imaging Radar-C/X Band SAR and the European Remote Sensing 1 and 2 satellites.",
"title": ""
},
{
"docid": "c66c1523322809d1b2d1279b5b2b8384",
"text": "The design of the Smart Grid requires solving a complex problem of combined sensing, communications and control and, thus, the problem of choosing a networking technology cannot be addressed without also taking into consideration requirements related to sensor networking and distributed control. These requirements are today still somewhat undefined so that it is not possible yet to give quantitative guidelines on how to choose one communication technology over the other. In this paper, we make a first qualitative attempt to better understand the role that Power Line Communications (PLCs) can have in the Smart Grid. Furthermore, we here report recent results on the electrical and topological properties of the power distribution network. The topological characterization of the power grid is not only important because it allows us to model the grid as an information source, but also because the grid becomes the actual physical information delivery infrastructure when PLCs are used.",
"title": ""
},
{
"docid": "31f5c712760d1733acb0d7ffd3cec6ad",
"text": "Singular Spectrum Transform (SST) is a fundamental subspace analysis technique which has been widely adopted for solving change-point detection (CPD) problems in information security applications. However, the performance of a SST based CPD algorithm is limited to the lack of robustness to corrupted observations with large noises in practice. Based on the observation that large noises in practical time series are generally sparse, in this paper, we study a combination of Robust Principal Component Analysis (RPCA) and SST to obtain a robust CPD algorithm dealing with sparse large noises. The sparse large noises are to be eliminated from observation trajectory matrices by performing a low-rank matrix recovery procedure of RPCA. The noise-eliminated matrices are then used to extract SST subspaces for CPD. The effectiveness of the proposed method is demonstrated through experiments based on both synthetic and real-world datasets. Experimental results show that the proposed method outperforms the competing state-of-the-arts in terms of detection accuracy for time series with sparse large noises.",
"title": ""
},
{
"docid": "30178d1de9d0aab8c3ab0ac9be674d8c",
"text": "The immune system protects from infections primarily by detecting and eliminating the invading pathogens; however, the host organism can also protect itself from infectious diseases by reducing the negative impact of infections on host fitness. This ability to tolerate a pathogen's presence is a distinct host defense strategy, which has been largely overlooked in animal and human studies. Introduction of the notion of \"disease tolerance\" into the conceptual tool kit of immunology will expand our understanding of infectious diseases and host pathogen interactions. Analysis of disease tolerance mechanisms should provide new approaches for the treatment of infections and other diseases.",
"title": ""
},
{
"docid": "6702bfca88f86e0c35a8b6195d0c971c",
"text": "A hierarchical scheme for clustering data is presented which applies to spaces with a high number of dimensions ( 3 D N > ). The data set is first reduced to a smaller set of partitions (multi-dimensional bins). Multiple clustering techniques are used, including spectral clustering; however, new techniques are also introduced based on the path length between partitions that are connected to one another. A Line-of-Sight algorithm is also developed for clustering. A test bank of 12 data sets with varying properties is used to expose the strengths and weaknesses of each technique. Finally, a robust clustering technique is discussed based on reaching a consensus among the multiple approaches, overcoming the weaknesses found individually.",
"title": ""
},
{
"docid": "cb55daf6ada8e9caba80aa4f421fc395",
"text": "This paper surveys the state of the art on multimodal gesture recognition and introduces the JMLR special topic on gesture recognition 2011-2015. We began right at the start of the KinectT Mrevolution when inexpensive infrared cameras providing image depth recordings became available. We published papers using this technology and other more conventional methods, including regular video cameras, to record data, thus providing a good overview of uses of machine learning and computer vision using multimodal data in this area of application. Notably, we organized a series of challenges and made available several datasets we recorded for that purpose, including tens of thousands of videos, which are available to conduct further research. We also overview recent state of the art works on gesture recognition based on a proposed taxonomy for gesture recognition, discussing challenges and future lines of research.",
"title": ""
},
{
"docid": "4e8c67969add0e27dc1d3cb8f36971f8",
"text": "To date no AIS1 neck injury mechanism has been established, thus no neck injury criterion has been validated against such mechanism. Validation methods not related to an injury mechanism may be used. The aim of this paper was to validate different proposed neck injury criteria with reconstructed reallife crashes with recorded crash pulses and with known injury outcomes. A car fleet of more than 40,000 cars fitted with crash pulse recorders have been monitored in Sweden since 1996. All crashes with these cars, irrespective of repair cost and injury outcome have been reported. With the inclusion criteria of the three most represented car models, single rear-end crashes with a recorded crash pulse, and front seat occupants with no previous long-term AIS1 neck injury, 79 crashes with 110 front seat occupants remained to be analysed in this study. Madymo models of a BioRID II dummy in the three different car seats were exposed to the recorded crash pulses. The dummy readings were correlated to the real-life injury outcome, divided into duration of AIS1 neck injury symptoms. Effectiveness to predict neck injury was assessed for the criteria NIC, Nkm, NDC and lower neck moment, aimed at predicting AIS1 neck injury. Also risk curves were assessed for the effective criteria as well as for impact severity. It was found that NICmax and Nkm are applicable to predict risk of AIS1 neck injury when using a BioRID dummy. It is suggested that both BioRID NICmax and Nkm should be considered in rear-impact test evaluation. Furthermore, lower neck moment was found to be less applicable. Using the BioRID dummy NDC was also found less applicable.",
"title": ""
},
{
"docid": "23b18b2795b0e5ff619fd9e88821cfad",
"text": "Goal-oriented dialogue has been paid attention for its numerous applications in artificial intelligence. To solve this task, deep learning and reinforcement learning have recently been applied. However, these approaches struggle to find a competent recurrent neural questioner, owing to the complexity of learning a series of sentences. Motivated by theory of mind, we propose “Answerer in Questioner’s Mind” (AQM), a novel algorithm for goal-oriented dialogue. With AQM, a questioner asks and infers based on an approximated probabilistic model of the answerer. The questioner figures out the answerer’s intent via selecting a plausible question by explicitly calculating the information gain of the candidate intentions and possible answers to each question. We test our framework on two goal-oriented visual dialogue tasks: “MNIST Counting Dialog” and “GuessWhat?!.” In our experiments, AQM outperforms comparative algorithms and makes human-like dialogue. We further use AQM as a tool for analyzing the mechanism of deep reinforcement learning approach and discuss the future direction of practical goal-oriented neural dialogue systems.",
"title": ""
},
{
"docid": "8d957e6c626855a06ac2256c4e7cd15c",
"text": "This article presents a robotic dataset collected from the largest underground copper mine in the world. The sensor measurements from a 3D scanning lidar, a 2D radar, and stereo cameras were recorded from an approximately two kilometer traverse of a production-active tunnel. The equipment used and the data collection process is discussed in detail, along with the format of the data. This dataset is suitable for research in robotic navigation, as well as simultaneous localization and mapping. The download instructions are available at the following website http://dataset.amtc.cl.",
"title": ""
},
{
"docid": "69f413d247e88022c3018b2dee1b53e2",
"text": "Research and development (R&D) project selection is an important task for organizations with R&D project management. It is a complicated multi-stage decision-making process, which involves groups of decision makers. Current research on R&D project selection mainly focuses on mathematical decision models and their applications, but ignores the organizational aspect of the decision-making process. This paper proposes an organizational decision support system (ODSS) for R&D project selection. Object-oriented method is used to design the architecture of the ODSS. An organizational decision support system has also been developed and used to facilitate the selection of project proposals in the National Natural Science Foundation of China (NSFC). The proposed system supports the R&D project selection process at the organizational level. It provides useful information for decision-making tasks in the R&D project selection process. D 2004 Elsevier B.V. All rights reserved.",
"title": ""
},
{
"docid": "3c82ba94aa4d717d51c99cfceb527f22",
"text": "Manipulator collision avoidance using genetic algorithms is presented. Control gains in the collision avoidance control model are selected based on genetic algorithms. A repulsive force is artificially created using the distances between the robot links and obstacles, which are generated by a distance computation algorithm. Real-time manipulator collision avoidance control has achieved. A repulsive force gain is introduced through the approaches for definition of link coordinate frames and kinematics computations. The safety distance between objects is affected by the repulsive force gain. This makes the safety zone adjustable and provides greater intelligence for robotic tasks under the ever-changing environment.",
"title": ""
}
] | scidocsrr |
7b25c401a85ee8722811b60d0ad7cdee | Skinning mesh animations | [
{
"docid": "0382ad43b6d31a347d9826194a7261ce",
"text": "In this paper, we present a representation for three-dimensional geometric animation sequences. Different from standard key-frame techniques, this approach is based on the determination of principal animation components and decouples the animation from the underlying geometry. The new representation supports progressive animation compression with spatial, as well as temporal, level-of-detail and high compression ratios. The distinction of animation and geometry allows for mapping animations onto other objects.",
"title": ""
}
] | [
{
"docid": "281c64b492a1aff7707dbbb5128799c8",
"text": "Internet business models have been widely discussed in literature and applied within the last decade. Nevertheless, a clear understanding of some e-commerce concepts does not exist yet. The classification of business models in e-commerce is one of these areas. The current research tries to fill this gap through a conceptual and qualitative study. Nine main e-commerce business model types are selected from literature and analyzed to define the criteria and their sub-criteria (characteristics). As a result three different classifications for business models are determined. This study can be used to improve the understanding of essential functions, relations and mechanisms of existing e-commerce business models.",
"title": ""
},
{
"docid": "030c8aeb4e365bfd2fdab710f8c9f598",
"text": "By combining linear graph theory with the principle of virtual work, a dynamic formulation is obtained that extends graph-theoretic modelling methods to the analysis of exible multibody systems. The system is represented by a linear graph, in which nodes represent reference frames on rigid and exible bodies, and edges represent components that connect these frames. By selecting a spanning tree for the graph, the analyst can choose the set of coordinates appearing in the nal system of equations. This set can include absolute, joint, or elastic coordinates, or some combination thereof. If desired, all non-working constraint forces and torques can be automatically eliminated from the dynamic equations by exploiting the properties of virtual work. The formulation has been implemented in a computer program, DynaFlex, that generates the equations of motion in symbolic form. Three examples are presented to demonstrate the application of the formulation, and to validate the symbolic computer implementation.",
"title": ""
},
{
"docid": "3c778c71f621b2c887dc81e7a919058e",
"text": "We have witnessed the Fixed Internet emerging with virtually every computer being connected today; we are currently witnessing the emergence of the Mobile Internet with the exponential explosion of smart phones, tablets and net-books. However, both will be dwarfed by the anticipated emergence of the Internet of Things (IoT), in which everyday objects are able to connect to the Internet, tweet or be queried. Whilst the impact onto economies and societies around the world is undisputed, the technologies facilitating such a ubiquitous connectivity have struggled so far and only recently commenced to take shape. To this end, this paper introduces in a timely manner and for the first time the wireless communications stack the industry believes to meet the important criteria of power-efficiency, reliability and Internet connectivity. Industrial applications have been the early adopters of this stack, which has become the de-facto standard, thereby bootstrapping early IoT developments with already thousands of wireless nodes deployed. Corroborated throughout this paper and by emerging industry alliances, we believe that a standardized approach, using latest developments in the IEEE 802.15.4 and IETF working groups, is the only way forward. We introduce and relate key embodiments of the power-efficient IEEE 802.15.4-2006 PHY layer, the power-saving and reliable IEEE 802.15.4e MAC layer, the IETF 6LoWPAN adaptation layer enabling universal Internet connectivity, the IETF ROLL routing protocol enabling availability, and finally the IETF CoAP enabling seamless transport and support of Internet applications. The protocol stack proposed in the present work converges towards the standardized notations of the ISO/OSI and TCP/IP stacks. What thus seemed impossible some years back, i.e., building a clearly defined, standards-compliant and Internet-compliant stack given the extreme restrictions of IoT networks, is commencing to become reality.",
"title": ""
},
{
"docid": "540a6dd82c7764eedf99608359776e66",
"text": "Your use of the JSTOR archive indicates your acceptance of JSTOR's Terms and Conditions of Use, available at http://www.jstor.org/about/terms.html. JSTOR's Terms and Conditions of Use provides, in part, that unless you have obtained prior permission, you may not download an entire issue of a journal or multiple copies of articles, and you may use content in the JSTOR archive only for your personal, non-commercial use. Please contact the publisher regarding any further use of this work. Publisher contact information may be obtained at http://www.jstor.org/journals/aea.html. Each copy of any part of a JSTOR transmission must contain the same copyright notice that appears on the screen or printed page of such transmission.",
"title": ""
},
{
"docid": "22ef70869ce47993bbdf24b18b6988f5",
"text": "Recent results suggest that it is possible to grasp a variety of singulated objects with high precision using Convolutional Neural Networks (CNNs) trained on synthetic data. This paper considers the task of bin picking, where multiple objects are randomly arranged in a heap and the objective is to sequentially grasp and transport each into a packing box. We model bin picking with a discrete-time Partially Observable Markov Decision Process that specifies states of the heap, point cloud observations, and rewards. We collect synthetic demonstrations of bin picking from an algorithmic supervisor uses full state information to optimize for the most robust collision-free grasp in a forward simulator based on pybullet to model dynamic object-object interactions and robust wrench space analysis from the Dexterity Network (Dex-Net) to model quasi-static contact between the gripper and object. We learn a policy by fine-tuning a Grasp Quality CNN on Dex-Net 2.1 to classify the supervisor’s actions from a dataset of 10,000 rollouts of the supervisor in the simulator with noise injection. In 2,192 physical trials of bin picking with an ABB YuMi on a dataset of 50 novel objects, we find that the resulting policies can achieve 94% success rate and 96% average precision (very few false positives) on heaps of 5-10 objects and can clear heaps of 10 objects in under three minutes. Datasets, experiments, and supplemental material are available at http://berkeleyautomation.github.io/dex-net.",
"title": ""
},
{
"docid": "6dbaeff4f3cb814a47e8dc94c4660d33",
"text": "An Intrusion Detection System (IDS) is a software that monitors a single or a network of computers for malicious activities (attacks) that are aimed at stealing or censoring information or corrupting network protocols. Most techniques used in today’s IDS are not able to deal with the dynamic and complex nature of cyber attacks on computer networks. Hence, efficient adaptive methods like various techniques of machine learning can result in higher detection rates, lower false alarm rates and reasonable computation and communication costs. In this paper, we study several such schemes and compare their performance. We divide the schemes into methods based on classical artificial intelligence (AI) and methods based on computational intelligence (CI). We explain how various characteristics of CI techniques can be used to build efficient IDS.",
"title": ""
},
{
"docid": "7f3c6e8f0915160bbc9feba4d2175fb3",
"text": "Memory leaks are major problems in all kinds of applications, depleting their performance, even if they run on platforms with automatic memory management, such as Java Virtual Machine. In addition, memory leaks contribute to software aging, increasing the complexity of software maintenance. So far memory leak detection was considered to be a part of development process, rather than part of software maintenance. To detect slow memory leaks as a part of quality assurance process or in production environments statistical approach for memory leak detection was implemented and deployed in a commercial tool called Plumbr. It showed promising results in terms of leak detection precision and recall, however, even better detection quality was desired. To achieve this improvement goal, classification algorithms were applied to the statistical data, which was gathered from customer environments where Plumbr was deployed. This paper presents the challenges which had to be solved, method that was used to generate features for supervised learning and the results of the corresponding experiments.",
"title": ""
},
{
"docid": "23129bd3b502cd06e347b90f5a1516bc",
"text": "ISSN 2277 5080 | © 2012 Bonfring Abstract--This paper discusses DSP based implementation of Gaussian Minimum Shift Keying (GMSK) demodulator using Polarity type Costas loop. The demodulator consists of a Polarity type Costas loop for carrier recovery, data recovery, and phase detection. Carrier has been recovered using a loop of center-frequency locking scheme as in M-ary Phase Shift Keying (MPSK) Polarity type Costas-loop. Phase unwrapping and Bit-Reconstruction is presented in detail. All the modules are first modeled in MATLAB (Simulink) and Systemview. After bit true simulation, the design is coded in VHDL and code simulation is done using QuestaSim 6.3c. The design is targeted to Virtex-4 XC4VSX35-10FF668 Xilinx FPGA (Field programmable gate array) for real time testing, which is carried out on Xtreme DSP development platform.",
"title": ""
},
{
"docid": "643e97c3bc0cdde54bf95720fe52f776",
"text": "Ego-motion estimation based on images from a stereo camera has become a common function for autonomous mobile systems and is gaining increasing importance in the automotive sector. Unlike general robotic platforms, vehicles have a suspension adding degrees of freedom and thus complexity to their dynamics model. Some parameters of the model, such as the vehicle mass, are non-static as they depend on e.g. the specific load conditions and thus need to be estimated online to guarantee a concise and safe autonomous maneuvering of the vehicle. In this paper, a novel visual odometry based approach to simultaneously estimate ego-motion and selected vehicle parameters using a dual Ensemble Kalman Filter and a non-linear single-track model with pitch dynamics is presented. The algorithm has been validated using simulated data and showed a good performance for both the estimation of the ego-motion and of the relevant vehicle parameters.",
"title": ""
},
{
"docid": "9e0cbbe8d95298313fd929a7eb2bfea9",
"text": "We compare two technological approaches to augmented reality for 3-D medical visualization: optical and video see-through devices. We provide a context to discuss the technology by reviewing several medical applications of augmented-reality re search efforts driven by real needs in the medical field, both in the United States and in Europe. We then discuss the issues for each approach, optical versus video, from both a technology and human-factor point of view. Finally, we point to potentially promising future developments of such devices including eye tracking and multifocus planes capabilities, as well as hybrid optical/video technology.",
"title": ""
},
{
"docid": "63602b90688ddb0e8ba691702cbdaab8",
"text": "This paper presents a 50-d.o.f. humanoid robot, Computational Brain (CB). CB is a humanoid robot created for exploring the underlying processing of the human brain while dealing with the real world. We place our investigations within real—world contexts, as humans do. In so doing, we focus on utilizing a system that is closer to humans—in sensing, kinematics configuration and performance. We present the real-time network-based architecture for the control of all 50 d.o.f. The controller provides full position/velocity/force sensing and control at 1 kHz, allowing us the flexibility in deriving various forms of control. A dynamic simulator is also presented; the simulator acts as a realistic testbed for our controllers and acts as a common interface to our humanoid robots. A contact model developed to allow better validation of our controllers prior to final testing on the physical robot is also presented. Three aspects of the system are highlighted in this paper: (i) physical power for walking, (ii) full-body compliant control—physical interactions and (iii) perception and control—visual ocular-motor responses.",
"title": ""
},
{
"docid": "23d2349831a364e6b77e3c263a8321c8",
"text": "lmost a decade has passed since we started advocating a process of usability design [20-22]. This article is a status report about the value of this process and, mainly, a description of new ideas for enhancing the use of the process. We first note that, when followed , the process leads to usable, useful, likeable computer systems and applications. Nevertheless, experience and observational evidence show that (because of the way development work is organized and carried out) the process is often not followed, despite designers' enthusiasm and motivation to do so. To get around these organizational and technical obstacles, we propose a) greater reliance on existing methodologies for establishing test-able usability and productivity-enhancing goals; b) a new method for identifying and focuging attention on long-term, trends about the effects that computer applications have on end-user productivity; and c) a new approach, now under way, to application development, particularly the development of user interfaces. The process consists of four activities [18, 20-22]. Early Focus On Users. Designers should have direct contact with intended or actual users-via interviews , observations, surveys, partic-ipatory design. The aim is to understand users' cognitive, behav-ioral, attitudinal, and anthropomet-ric characteristics-and the characteristics of the jobs they will be doing. Integrated Design. All aspects of usability (e.g., user interface, help system, training plan, documentation) should evolve in parallel, rather than be defined sequentially, and should be under one management. Early~And Continual~User Testing. The only presently feasible approach to successful design is an empirical one, requiring observation and measurement of user behavior , careful evaluation of feedback , insightful solutions to existing problems, and strong motivation to make design changes. Iterative Design. A system under development must be modified based upon the results of behav-ioral tests of functions, user interface , help system, documentation, training approach. This process of implementation, testing, feedback, evaluation, and change must be repeated to iteratively improve the system. We, and others proposing similar ideas (see below), have worked hard at spreading this process of usabil-ity design. We have used numerous channels to accomplish this: frequent talks, workshops, seminars, publications, consulting, addressing arguments used against it [22], conducting a direct case study of the process [20], and identifying methods for people not fully trained as human factors professionals to use in carrying out this process [18]. The Process Works. Several lines of evidence indicate that this usabil-ity design process leads to systems, applications, and products …",
"title": ""
},
{
"docid": "111743197c23aff0fac0699a30edca23",
"text": "Origami describes rules for creating folded structures from patterns on a flat sheet, but does not prescribe how patterns can be designed to fit target shapes. Here, starting from the simplest periodic origami pattern that yields one-degree-of-freedom collapsible structures-we show that scale-independent elementary geometric constructions and constrained optimization algorithms can be used to determine spatially modulated patterns that yield approximations to given surfaces of constant or varying curvature. Paper models confirm the feasibility of our calculations. We also assess the difficulty of realizing these geometric structures by quantifying the energetic barrier that separates the metastable flat and folded states. Moreover, we characterize the trade-off between the accuracy to which the pattern conforms to the target surface, and the effort associated with creating finer folds. Our approach enables the tailoring of origami patterns to drape complex surfaces independent of absolute scale, as well as the quantification of the energetic and material cost of doing so.",
"title": ""
},
{
"docid": "3754b5c86e0032382f144ded5f1ca4d8",
"text": "Use and users have an important and acknowledged role to most designers of interactive systems. Nevertheless any touch of user hands does not in itself secure development of meaningful artifacts. In this article we stress the need for a professional PD practice in order to yield the full potentiality of user involvement. We suggest two constituting elements of such a professional PD practice. The existence of a shared 'where-to' and 'why' artifact and an ongoing reflection and off-loop reflection among practitioners in the PD process.",
"title": ""
},
{
"docid": "a5a53221aa9ccda3258223b9ed4e2110",
"text": "Accurate and reliable inventory forecasting can save an organization from overstock, under-stock and no stock/stock-out situation of inventory. Overstocking leads to high cost of storage and its maintenance, whereas under-stocking leads to failure to meet the demand and losing profit and customers, similarly stock-out leads to complete halt of production or sale activities. Inventory transactions generate data, which is a time-series data having characteristic volume, speed, range and regularity. The inventory level of an item depends on many factors namely, current stock, stock-on-order, lead-time, annual/monthly target. In this paper, we present a perspective of treating Inventory management as a problem of Genetic Programming based on inventory transactions data. A Genetic Programming — Symbolic Regression (GP-SR) based mathematical model is developed and subsequently used to make forecasts using Holt-Winters Exponential Smoothing method for time-series modeling. The GP-SR model evolves based on RMSE as the fitness function. The performance of the model is measured in terms of RMSE and MAE. The estimated values of item demand from the GP-SR model is finally used to simulate a time-series and forecasts are generated for inventory required on a monthly time horizon.",
"title": ""
},
{
"docid": "69e0179971396fcaf09c9507735a8d5b",
"text": "In this paper, we describe a statistical approach to both an articulatory-to-acoustic mapping and an acoustic-to-articulatory inversion mapping without using phonetic information. The joint probability density of an articulatory parameter and an acoustic parameter is modeled using a Gaussian mixture model (GMM) based on a parallel acoustic-articulatory speech database. We apply the GMM-based mapping using the minimum mean-square error (MMSE) criterion, which has been proposed for voice conversion, to the two mappings. Moreover, to improve the mapping performance, we apply maximum likelihood estimation (MLE) to the GMM-based mapping method. The determination of a target parameter trajectory having appropriate static and dynamic properties is obtained by imposing an explicit relationship between static and dynamic features in the MLE-based mapping. Experimental results demonstrate that the MLE-based mapping with dynamic features can significantly improve the mapping performance compared with the MMSE-based mapping in both the articulatory-to-acoustic mapping and the inversion mapping.",
"title": ""
},
{
"docid": "490dc6ee9efd084ecf2496b72893a39a",
"text": "The rise of blockchain-based cryptocurrencies has led to an explosion of services using distributed ledgers as their underlying infrastructure. However, due to inherently single-service oriented blockchain protocols, such services can bloat the existing ledgers, fail to provide sufficient security, or completely forego the property of trustless auditability. Security concerns, trust restrictions, and scalability limits regarding the resource requirements of users hamper the sustainable development of loosely-coupled services on blockchains. This paper introduces Aspen, a sharded blockchain protocol designed to securely scale with increasing number of services. Aspen shares the same trust model as Bitcoin in a peer-to-peer network that is prone to extreme churn containing Byzantine participants. It enables introduction of new services without compromising the security, leveraging the trust assumptions, or flooding users with irrelevant messages.",
"title": ""
},
{
"docid": "9cc2dfde38bed5e767857b1794d987bc",
"text": "Smartphones providing proprietary encryption schemes, albeit offering a novel paradigm to privacy, are becoming a bone of contention for certain sovereignties. These sovereignties have raised concerns about their security agencies not having any control on the encrypted data leaving their jurisdiction and the ensuing possibility of it being misused by people with malicious intents. Such smartphones have typically two types of customers, independent users who use it to access public mail servers and corporates/enterprises whose employees use it to access corporate emails in an encrypted form. The threat issues raised by security agencies concern mainly the enterprise servers where the encrypted data leaves the jurisdiction of the respective sovereignty while on its way to the global smartphone router. In this paper, we have analyzed such email message transfer mechanisms in smartphones and proposed some feasible solutions, which, if accepted and implemented by entities involved, can lead to a possible win-win situation for both the parties, viz., the smartphone provider who does not want to lose the customers and these sovereignties who can avoid the worry of encrypted data leaving their jurisdiction.",
"title": ""
},
{
"docid": "af691c2ca5d9fd1ca5109c8b2e7e7b6d",
"text": "As social robots become more widely used as educational tutoring agents, it is important to study how children interact with these systems, and how effective they are as assessed by learning gains, sustained engagement, and perceptions of the robot tutoring system as a whole. In this paper, we summarize our prior work involving a long-term child-robot interaction study and outline important lessons learned regarding individual differences in children. We then discuss how these lessons inform future research in child-robot interaction.",
"title": ""
},
{
"docid": "c8fdcfa08aff6286a02b984cc5f716b2",
"text": "As interest in adopting Cloud computing for various applications is rapidly growing, it is important to understand how these applications and systems will perform when deployed on Clouds. Due to the scale and complexity of shared resources, it is often hard to analyze the performance of new scheduling and provisioning algorithms on actual Cloud test beds. Therefore, simulation tools are becoming more and more important in the evaluation of the Cloud computing model. Simulation tools allow researchers to rapidly evaluate the efficiency, performance and reliability of their new algorithms on a large heterogeneous Cloud infrastructure. However, current solutions lack either advanced application models such as message passing applications and workflows or scalable network model of data center. To fill this gap, we have extended a popular Cloud simulator (CloudSim) with a scalable network and generalized application model, which allows more accurate evaluation of scheduling and resource provisioning policies to optimize the performance of a Cloud infrastructure.",
"title": ""
}
] | scidocsrr |
5f7cb537da11a86fcd3b211ca8da75bb | Toward parallel continuum manipulators | [
{
"docid": "f80f1952c5b58185b261d53ba9830c47",
"text": "This paper presents a new class of thin, dexterous continuum robots, which we call active cannulas due to their potential medical applications. An active cannula is composed of telescoping, concentric, precurved superelastic tubes that can be axially translated and rotated at the base relative to one another. Active cannulas derive bending not from tendon wires or other external mechanisms but from elastic tube interaction in the backbone itself, permitting high dexterity and small size, and dexterity improves with miniaturization. They are designed to traverse narrow and winding environments without relying on ldquoguidingrdquo environmental reaction forces. These features seem ideal for a variety of applications where a very thin robot with tentacle-like dexterity is needed. In this paper, we apply beam mechanics to obtain a kinematic model of active cannula shape and describe design tools that result from the modeling process. After deriving general equations, we apply them to a simple three-link active cannula. Experimental results illustrate the importance of including torsional effects and the ability of our model to predict energy bifurcation and active cannula shape.",
"title": ""
},
{
"docid": "be749e59367ee1033477bb88503032cf",
"text": "This paper describes the results of field trials and associated testing of the OctArm series of multi-section continuous backbone \"continuum\" robots. This novel series of manipulators has recently (Spring 2005) undergone a series of trials including open-air and in-water field tests. Outcomes of the trials, in which the manipulators demonstrated the ability for adaptive and novel manipulation in challenging environments, are described. Implications for the deployment of continuum robots in a variety of applications are discussed",
"title": ""
},
{
"docid": "8bb465b2ec1f751b235992a79c6f7bf1",
"text": "Continuum robotics has rapidly become a rich and diverse area of research, with many designs and applications demonstrated. Despite this diversity in form and purpose, there exists remarkable similarity in the fundamental simplified kinematic models that have been applied to continuum robots. However, this can easily be obscured, especially to a newcomer to the field, by the different applications, coordinate frame choices, and analytical formalisms employed. In this paper we review several modeling approaches in a common frame and notational convention, illustrating that for piecewise constant curvature, they produce identical results. This discussion elucidates what has been articulated in different ways by a number of researchers in the past several years, namely that constant-curvature kinematics can be considered as consisting of two separate submappings: one that is general and applies to all continuum robots, and another that is robot-specific. These mappings are then developed both for the singlesection and for the multi-section case. Similarly, we discuss the decomposition of differential kinematics (the robot’s Jacobian) into robot-specific and robot-independent portions. The paper concludes with a perspective on several of the themes of current research that are shaping the future of continuum robotics.",
"title": ""
}
] | [
{
"docid": "d157d7b6e1c5796b6d7e8fedf66e81d8",
"text": "Intrusion detection for computer network systems becomes one of the most critical tasks for network administrators today. It has an important role for organizations, governments and our society due to its valuable resources on computer networks. Traditional misuse detection strategies are unable to detect new and unknown intrusion. Besides , anomaly detection in network security is aim to distinguish between illegal or malicious events and normal behavior of network systems. Anomaly detection can be considered as a classification problem where it builds models of normal network behavior, which it uses to detect new patterns that significantly deviate from the model. Most of the current research on anomaly detection is based on the learning of normally and anomaly behaviors. They do not take into account the previous, recent events to detect the new incoming one. In this paper, we propose a real time collective anomaly detection model based on neural network learning and feature operating. Normally a Long Short-Term Memory Recurrent Neural Network (LSTM RNN) is trained only on normal data and it is capable of predicting several time steps ahead of an input. In our approach, a LSTM RNN is trained with normal time series data before performing a live prediction for each time step. Instead of considering each time step separately, the observation of prediction errors from a certain number of time steps is now proposed as a new idea for detecting collective anomalies. The prediction errors from a number of the latest time steps above a threshold will indicate a collective anomaly. The model is built on a time series version of the KDD 1999 dataset. The experiments demonstrate that it is possible to offer reliable and efficient for collective anomaly detection.",
"title": ""
},
{
"docid": "b55eb410f2a2c7eb6be1c70146cca203",
"text": "Permissioned blockchains are arising as a solution to federate companies prompting accountable interactions. A variety of consensus algorithms for such blockchains have been proposed, each of which has different benefits and drawbacks. Proof-of-Authority (PoA) is a new family of Byzantine fault-tolerant (BFT) consensus algorithms largely used in practice to ensure better performance than traditional Practical Byzantine Fault Tolerance (PBFT). However, the lack of adequate analysis of PoA hinders any cautious evaluation of their effectiveness in real-world permissioned blockchains deployed over the Internet, hence on an eventually synchronous network experimenting Byzantine nodes. In this paper, we analyse two of the main PoA algorithms, named Aura and Clique, both in terms of provided guarantees and performances. First, we derive their functioning including how messages are exchanged, then we weight, by relying on the CAP theorem, consistency, availability and partition tolerance guarantees. We also report a qualitative latency analysis based on message rounds. The analysis advocates that PoA for permissioned blockchains, deployed over the Internet with Byzantine nodes, do not provide adequate consistency guarantees for scenarios where data integrity is essential. We claim that PBFT can fit better such scenarios, despite a limited loss in terms of performance.",
"title": ""
},
{
"docid": "969a8e447fb70d22a7cbabe7fc47a9c9",
"text": "A novel multi-level AC six-phase motor drive is developed in this paper. The scheme is based on three conventional 2-level three-phase voltage source inverters (VSIs) supplying the open-end windings of a dual three-phase motor (six-phase induction machine). The proposed inverter is capable of supply the machine with multi-level voltage waveforms. The developed system is compared with the conventional solution and it is demonstrated that the drive system permits to reduce the harmonic distortion of the machine currents, to reduce the total semiconductor losses and to decrease the power processed by converter switches. The system model and the Pulse-Width Modulation (PWM) strategy are presented. The experimental verification was obtained by using IGBTs with dedicated drives and a digital signal processor (DSP) with plug-in boards and sensors.",
"title": ""
},
{
"docid": "97412a2a6e6d91fef2c75b62aca5b6f4",
"text": "Predicting the outcome of National Basketball Association (NBA) matches poses a challenging problem of interest to the research community as well as the general public. In this article, we formalize the problem of predicting NBA game results as a classification problem and apply the principle of Maximum Entropy to construct an NBA Maximum Entropy (NBAME) model that fits to discrete statistics for NBA games, and then predict the outcomes of NBA playoffs using the model. Our results reveal that the model is able to predict the winning team with 74.4% accuracy, outperforming other classical machine learning algorithms that could only afford a maximum prediction accuracy of 70.6% in the experiments that we performed.",
"title": ""
},
{
"docid": "dd4cc15729f65a0102028949b34cc56f",
"text": "Autonomous vehicles platooning has received considerable attention in recent years, due to its potential to significantly benefit road transportation, improving traffic efficiency, enhancing road safety and reducing fuel consumption. The Vehicular ad hoc Networks and the de facto vehicular networking standard IEEE 802.11p communication protocol are key tools for the deployment of platooning applications, since the cooperation among vehicles is based on a reliable communication structure. However, vehicular networks can suffer different security threats. Indeed, in collaborative driving applications, the sudden appearance of a malicious attack can mainly compromise: (i) the correctness of data traffic flow on the vehicular network by sending malicious messages that alter the platoon formation and its coordinated motion; (ii) the safety of platooning application by altering vehicular network communication capability. In view of the fact that cyber attacks can lead to dangerous implications for the security of autonomous driving systems, it is fundamental to consider their effects on the behavior of the interconnected vehicles, and to try to limit them from the control design stage. To this aim, in this work we focus on some relevant types of malicious threats that affect the platoon safety, i.e. application layer attacks (Spoofing and Message Falsification) and network layer attacks (Denial of Service and Burst Transmission), and we propose a novel collaborative control strategy for enhancing the protection level of autonomous platoons. The control protocol is designed and validated in both analytically and experimental way, for the appraised malicious attack scenarios and for different communication topology structures. The effectiveness of the proposed strategy is shown by using PLEXE, a state of the art inter-vehicular communications and mobility simulator that includes basic building blocks for platooning. A detailed experimental analysis discloses the robustness of the proposed approach and its capabilities in reacting to the malicious attack effects.",
"title": ""
},
{
"docid": "25ed874d2bf1125b5539d595319d334b",
"text": "The notion of creativity, as opposed to related concepts such as beauty or interestingness, has not been studied from the perspective of automatic analysis of multimedia content. Meanwhile, short online videos shared on social media platforms, or micro-videos, have arisen as a new medium for creative expression. In this paper we study creative micro-videos in an effort to understand the features that make a video creative, and to address the problem of automatic detection of creative content. Defining creative videos as those that are novel and have aesthetic value, we conduct a crowdsourcing experiment to create a dataset of over 3, 800 micro-videos labelled as creative and non-creative. We propose a set of computational features that we map to the components of our definition of creativity, and conduct an analysis to determine which of these features correlate most with creative video. Finally, we evaluate a supervised approach to automatically detect creative video, with promising results, showing that it is necessary to model both aesthetic value and novelty to achieve optimal classification accuracy.",
"title": ""
},
{
"docid": "5de19873c4bd67cdcc57d879d923dc10",
"text": "BACKGROUND AND PURPOSE\nNeuromyelitis optica (NMO) or Devic's disease is a rare inflammatory and demyelinating autoimmune disorder of the central nervous system (CNS) characterized by recurrent attacks of optic neuritis (ON) and longitudinally extensive transverse myelitis (LETM), which is distinct from multiple sclerosis (MS). The guidelines are designed to provide guidance for best clinical practice based on the current state of clinical and scientific knowledge.\n\n\nSEARCH STRATEGY\nEvidence for this guideline was collected by searches for original articles, case reports and meta-analyses in the MEDLINE and Cochrane databases. In addition, clinical practice guidelines of professional neurological and rheumatological organizations were studied.\n\n\nRESULTS\nDifferent diagnostic criteria for NMO diagnosis [Wingerchuk et al. Revised NMO criteria, 2006 and Miller et al. National Multiple Sclerosis Society (NMSS) task force criteria, 2008] and features potentially indicative of NMO facilitate the diagnosis. In addition, guidance for the work-up and diagnosis of spatially limited NMO spectrum disorders is provided by the task force. Due to lack of studies fulfilling requirement for the highest levels of evidence, the task force suggests concepts for treatment of acute exacerbations and attack prevention based on expert opinion.\n\n\nCONCLUSIONS\nStudies on diagnosis and management of NMO fulfilling requirements for the highest levels of evidence (class I-III rating) are limited, and diagnostic and therapeutic concepts based on expert opinion and consensus of the task force members were assembled for this guideline.",
"title": ""
},
{
"docid": "53a55e8aa8b3108cdc8d015eabb3476d",
"text": "We investigate a family of poisoning attacks against Support Vector Machines (SVM). Such attacks inject specially crafted training data that increases the SVM’s test error. Central to the motivation for these attacks is the fact that most learning algorithms assume that their training data comes from a natural or well-behaved distribution. However, this assumption does not generally hold in security-sensitive settings. As we demonstrate, an intelligent adversary can, to some extent, predict the change of the SVM’s decision function due to malicious input and use this ability to construct malicious data. The proposed attack uses a gradient ascent strategy in which the gradient is computed based on properties of the SVM’s optimal solution. This method can be kernelized and enables the attack to be constructed in the input space even for non-linear kernels. We experimentally demonstrate that our gradient ascent procedure reliably identifies good local maxima of the non-convex validation error surface, which significantly increases the classifier’s test error.",
"title": ""
},
{
"docid": "79e2e4af34e8a2b89d9439ff83b9fd5a",
"text": "PROBLEM\nThe current nursing workforce is composed of multigenerational staff members creating challenges and at times conflict for managers.\n\n\nMETHODS\nGenerational cohorts are defined and two multigenerational scenarios are presented and discussed using the ACORN imperatives and Hahn's Five Managerial Strategies for effectively managing a multigenerational staff.\n\n\nFINDINGS\nCommunication and respect are the underlying key strategies to understanding and bridging the generational gap in the workplace.\n\n\nCONCLUSION\nEmbracing and respecting generational differences can bring strength and cohesiveness to nursing teams on the managerial or unit level.",
"title": ""
},
{
"docid": "1878b3e7742a0ffbd3da67be23c6e366",
"text": "Compensation for geometrical spreading along a raypath is one of the key steps in AVO amplitude-variation-with-offset analysis, in particular, for wide-azimuth surveys. Here, we propose an efficient methodology to correct long-spread, wide-azimuth reflection data for geometrical spreading in stratified azimuthally anisotropic media. The P-wave geometrical-spreading factor is expressed through the reflection traveltime described by a nonhyperbolic moveout equation that has the same form as in VTI transversely isotropic with a vertical symmetry axis media. The adapted VTI equation is parameterized by the normal-moveout NMO ellipse and the azimuthally varying anellipticity parameter . To estimate the moveout parameters, we apply a 3D nonhyperbolic semblance algorithm of Vasconcelos and Tsvankin that operates simultaneously with traces at all offsets and",
"title": ""
},
{
"docid": "ef372c1537c8eabb4595dc5385199575",
"text": "This article provides a review of the traditional clinical concepts for the design and fabrication of removable partial dentures (RPDs). Although classic theories and rules for RPD designs have been presented and should be followed, excellent clinical care for partially edentulous patients may also be achieved with computer-aided design/computer-aided manufacturing technology and unique blended designs. These nontraditional RPD designs and fabrication methods provide for improved fit, function, and esthetics by using computer-aided design software, composite resin for contours and morphology of abutment teeth, metal support structures for long edentulous spans and collapsed occlusal vertical dimensions, and flexible, nylon thermoplastic material for metal-supported clasp assemblies.",
"title": ""
},
{
"docid": "afdc8b3e00a4fe39b281e17056d97664",
"text": "This demo presents the features of the Proactive Insights (PI) engine, which uses machine learning and artificial intelligence capabilities to automatically identify weaknesses in business processes, to reveal their root causes, and to give intelligent advice on how to improve process inefficiencies. We demonstrate the four PI elements covering Conformance, Machine Learning, Social, and Companion. The new insights are especially valuable for process managers and academics interested in BPM and process mining.",
"title": ""
},
{
"docid": "df404258bca8d16cabf935fd94fc7463",
"text": "Training deep neural networks with Stochastic Gradient Descent, or its variants, requires careful choice of both learning rate and batch size. While smaller batch sizes generally converge in fewer training epochs, larger batch sizes offer more parallelism and hence better computational efficiency. We have developed a new training approach that, rather than statically choosing a single batch size for all epochs, adaptively increases the batch size during the training process. Our method delivers the convergence rate of small batch sizes while achieving performance similar to large batch sizes. We analyse our approach using the standard AlexNet, ResNet, and VGG networks operating on the popular CIFAR10, CIFAR-100, and ImageNet datasets. Our results demonstrate that learning with adaptive batch sizes can improve performance by factors of up to 6.25 on 4 NVIDIA Tesla P100 GPUs while changing accuracy by less than 1% relative to training with fixed batch sizes.",
"title": ""
},
{
"docid": "ed769b97bea6d4bbe7e282ad6dbb1c67",
"text": "Three basic switching structures are defined: one is formed by two capacitors and three diodes; the other two are formed by two inductors and two diodes. They are inserted in either a Cuk converter, or a Sepic, or a Zeta converter. The SC/SL structures are built in such a way as when the active switch of the converter is on, the two inductors are charged in series or the two capacitors are discharged in parallel. When the active switch is off, the two inductors are discharged in parallel or the two capacitors are charged in series. As a result, the line voltage is reduced more times than in classical Cuk/Sepic/Zeta converters. The steady-state analysis of the new converters, a comparison of the DC voltage gain and of the voltage and current stresses of the new hybrid converters with those of the available quadratic converters, and experimental results are given",
"title": ""
},
{
"docid": "b36e9a2f1143fa242c4d372cb0ba38b3",
"text": "Invariance to nuisance transformations is one of the desirable properties of effective representations. We consider transformations that form a group and propose an approach based on kernel methods to derive local group invariant representations. Locality is achieved by defining a suitable probability distribution over the group which in turn induces distributions in the input feature space. We learn a decision function over these distributions by appealing to the powerful framework of kernel methods and generate local invariant random feature maps via kernel approximations. We show uniform convergence bounds for kernel approximation and provide generalization bounds for learning with these features. We evaluate our method on three real datasets, including Rotated MNIST and CIFAR-10, and observe that it outperforms competing kernel based approaches. The proposed method also outperforms deep CNN on RotatedMNIST and performs comparably to the recently proposed group-equivariant CNN.",
"title": ""
},
{
"docid": "daa30843c26d285b3b42cb588e4d0cd1",
"text": "In this paper, we rigorously study tractable models for provably recovering low-rank tensors. Unlike their matrix-based predecessors, current convex approaches for recovering low-rank tensors based on incomplete (tensor completion) and/or grossly corrupted (tensor robust principal analysis) observations still suffer from the lack of theoretical guarantees, although they have been used in various recent applications and have exhibited promising empirical performance. In this work, we attempt to fill this gap. Specifically, we propose a class of convex recovery models (including strongly convex programs) that can be proved to guarantee exact recovery under certain conditions. All parameters in our formulations can be determined beforehand based on the measurement data and thus there is no parameter tuning involved.",
"title": ""
},
{
"docid": "49d5f6fdc02c777d42830bac36f6e7e2",
"text": "Current tools for exploratory data analysis (EDA) require users to manually select data attributes, statistical computations and visual encodings. This can be daunting for large-scale, complex data. We introduce Foresight, a visualization recommender system that helps the user rapidly explore large high-dimensional datasets through “guideposts.” A guidepost is a visualization corresponding to a pronounced instance of a statistical descriptor of the underlying data, such as a strong linear correlation between two attributes, high skewness or concentration about the mean of a single attribute, or a strong clustering of values. For each descriptor, Foresight initially presents visualizations of the “strongest” instances, based on an appropriate ranking metric. Given these initial guideposts, the user can then look at “nearby” guideposts by issuing “guidepost queries” containing constraints on metric type, metric strength, data attributes, and data values. Thus, the user can directly explore the network of guideposts, rather than the overwhelming space of data attributes and visual encodings. Foresight also provides for each descriptor a global visualization of ranking-metric values to both help orient the user and ensure a thorough exploration process. Foresight facilitates interactive exploration of large datasets using fast, approximate sketching to compute ranking metrics. We also contribute insights on EDA practices of data scientists, summarizing results from an interview study we conducted to inform the design of Foresight.",
"title": ""
},
{
"docid": "7ec93b17c88d09f8a442dd32127671d8",
"text": "Understanding the 3D structure of a scene is of vital importance, when it comes to developing fully autonomous robots. To this end, we present a novel deep learning based framework that estimates depth, surface normals and surface curvature by only using a single RGB image. To the best of our knowledge this is the first work to estimate surface curvature from colour using a machine learning approach. Additionally, we demonstrate that by tuning the network to infer well designed features, such as surface curvature, we can achieve improved performance at estimating depth and normals. This indicates that network guidance is still a useful aspect of designing and training a neural network. We run extensive experiments where the network is trained to infer different tasks while the model capacity is kept constant resulting in different feature maps based on the tasks at hand. We outperform the previous state-of-the-art benchmarks which jointly estimate depths and surface normals while predicting surface curvature in parallel.",
"title": ""
},
{
"docid": "eebeb59c737839e82ecc20a748b12c6b",
"text": "We present SWARM, a wearable affective technology designed to help a user to reflect on their own emotional state, modify their affect, and interpret the emotional states of others. SWARM aims for a universal design (inclusive of people with various disabilities), with a focus on modular actuation components to accommodate users' sensory capabilities and preferences, and a scarf form-factor meant to reduce the stigma of accessible technologies through a fashionable embodiment. Using an iterative, user-centered approach, we present SWARM's design. Additionally, we contribute findings for communicating emotions through technology actuations, wearable design techniques (including a modular soft circuit design technique that fuses conductive fabric with actuation components), and universal design considerations for wearable technology.",
"title": ""
}
] | scidocsrr |
aa7c85f32127a96c63fc22c07cbede29 | Unsupervised Discovery of Discourse Relations for Eliminating Intra-sentence Polarity Ambiguities | [
{
"docid": "7723c78b2ff8f9fdc285ee05b482efef",
"text": "We describe our experience in developing a discourse-annotated corpus for community-wide use. Working in the framework of Rhetorical Structure Theory, we were able to create a large annotated resource with very high consistency, using a well-defined methodology and protocol. This resource is made publicly available through the Linguistic Data Consortium to enable researchers to develop empirically grounded, discourse-specific applications.",
"title": ""
}
] | [
{
"docid": "ff1834a5b249c436dfa5a48b5f464568",
"text": "Communication primitives such as coding and multiple antenna processing have provided significant benefits for traditional wireless systems. Existing designs, however, consume significant power and computational resources, and hence cannot be run on low complexity, power constrained backscatter devices. This paper makes two main contributions: (1) we introduce the first multi-antenna cancellation design that operates on backscatter devices while retaining a small form factor and power footprint, (2) we introduce a novel coding mechanism that enables long range communication as well as concurrent transmissions and can be decoded on backscatter devices. We build hardware prototypes of the above designs that can be powered solely using harvested energy from TV and solar sources. The results show that our designs provide benefits for both RFID and ambient backscatter systems: they enable RFID tags to communicate directly with each other at distances of tens of meters and through multiple walls. They also increase the communication rate and range achieved by ambient backscatter systems by 100X and 40X respectively. We believe that this paper represents a substantial leap in the capabilities of backscatter communication.",
"title": ""
},
{
"docid": "ca8d70248ef68c41f34eee375e511abf",
"text": "While mobile advertisement is the dominant source of revenue for mobile apps, the usage patterns of mobile users, and thus their engagement and exposure times, may be in conflict with the effectiveness of current ads. Users engagement with apps can range from a few seconds to several minutes, depending on a number of factors such as users' locations, concurrent activities and goals. Despite the wide-range of engagement times, the current format of ad auctions dictates that ads are priced, sold and configured prior to actual viewing, that is regardless of the actual ad exposure time.\n We argue that the wealth of easy-to-gather contextual information on mobile devices is sufficient to allow advertisers to make better choices by effectively predicting exposure time. We analyze mobile device usage patters with a detailed two-week long user study of 37 users in the US and South Korea. After characterizing application session times, we use factor analysis to derive a simple predictive model and show that is able to offer improved accuracy compared to mean session time over 90% of the time. We make the case for including predicted ad exposure duration in the price of mobile advertisements and posit that such information could significantly impact the effectiveness of mobile ads by giving publishers the ability to tune campaigns for engagement length, and enable a more efficient market for ad impressions while lowering network utilization and device power consumption.",
"title": ""
},
{
"docid": "a258c6b5abf18cb3880e4bc7a436c887",
"text": "We propose a reactive controller framework for robust quadrupedal locomotion, designed to cope with terrain irregularities, trajectory tracking errors and poor state estimation. The framework comprises two main modules: One related to the generation of elliptic trajectories for the feet and the other for control of the stability of the whole robot. We propose a task space CPG-based trajectory generation that can be modulated according to terrain irregularities and the posture of the robot trunk. To improve the robot's stability, we implemented a null space based attitude control for the trunk and a push recovery algorithm based on the concept of capture points. Simulations and experimental results on the hydraulically actuated quadruped robot HyQ will be presented to demonstrate the effectiveness of our framework.",
"title": ""
},
{
"docid": "c2e7425f719dd51eec0d8e180577269e",
"text": "Most important way of communication among humans is language and primary medium used for the said is speech. The speech recognizers make use of a parametric form of a signal to obtain the most important distinguishable features of speech signal for recognition purpose. In this paper, Linear Prediction Cepstral Coefficient (LPCC), Mel Frequency Cepstral Coefficient (MFCC) and Bark frequency Cepstral coefficient (BFCC) feature extraction techniques for recognition of Hindi Isolated, Paired and Hybrid words have been studied and the corresponding recognition rates are compared. Artifical Neural Network is used as back end processor. The experimental results show that the better recognition rate is obtained for MFCC as compared to LPCC and BFCC for all the three types of words.",
"title": ""
},
{
"docid": "04a85672df9da82f7e5da5b8b25c9481",
"text": "This study investigated long-term effects of training on postural control using the model of deficits in activation of transversus abdominis (TrA) in people with recurrent low back pain (LBP). Nine volunteers with LBP attended four sessions for assessment and/or training (initial, two weeks, four weeks and six months). Training of repeated isolated voluntary TrA contractions were performed at the initial and two-week session with feedback from real-time ultrasound imaging. Home program involved training twice daily for four weeks. Electromyographic activity (EMG) of trunk and deltoid muscles was recorded with surface and fine-wire electrodes. Rapid arm movement and walking were performed at each session, and immediately after training on the first two sessions. Onset of trunk muscle activation relative to prime mover deltoid during arm movements, and the coefficient of variation (CV) of EMG during averaged gait cycle were calculated. Over four weeks of training, onset of TrA EMG was earlier during arm movements and CV of TrA EMG was reduced (consistent with more sustained EMG activity). Changes were retained at six months follow-up (p<0.05). These results show persistence of motor control changes following training and demonstrate that this training approach leads to motor learning of automatic postural control strategies.",
"title": ""
},
{
"docid": "f6342101ff8315bcaad4e4f965e6ba8a",
"text": "In radar imaging it is well known that relative motion or deformation of parts of illuminated objects induce additional features in the Doppler frequency spectra. These features are called micro-Doppler effect and appear as sidebands around the central Doppler frequency. They can provide valuable information about the structure of the moving parts and may be used for identification purposes [1].",
"title": ""
},
{
"docid": "df677d32bdbba01d27c8eb424b9893e9",
"text": "Active learning is an area of machine learning examining strategies for allocation of finite resources, particularly human labeling efforts and to an extent feature extraction, in situations where available data exceeds available resources. In this open problem paper, we motivate the necessity of active learning in the security domain, identify problems caused by the application of present active learning techniques in adversarial settings, and propose a framework for experimentation and implementation of active learning systems in adversarial contexts. More than other contexts, adversarial contexts particularly need active learning as ongoing attempts to evade and confuse classifiers necessitate constant generation of labels for new content to keep pace with adversarial activity. Just as traditional machine learning algorithms are vulnerable to adversarial manipulation, we discuss assumptions specific to active learning that introduce additional vulnerabilities, as well as present vulnerabilities that are amplified in the active learning setting. Lastly, we present a software architecture, Security-oriented Active Learning Testbed (SALT), for the research and implementation of active learning applications in adversarial contexts.",
"title": ""
},
{
"docid": "8439309414a9999abbd0e0be95a25fb8",
"text": "Cython is a Python language extension that allows explicit type declarations and is compiled directly to C. As such, it addresses Python's large overhead for numerical loops and the difficulty of efficiently using existing C and Fortran code, which Cython can interact with natively.",
"title": ""
},
{
"docid": "89238dd77c0bf0994b53190078eb1921",
"text": "Several methods exist for a computer to generate music based on data including Markov chains, recurrent neural networks, recombinancy, and grammars. We explore the use of unit selection and concatenation as a means of generating music using a procedure based on ranking, where, we consider a unit to be a variable length number of measures of music. We first examine whether a unit selection method, that is restricted to a finite size unit library, can be sufficient for encompassing a wide spectrum of music. This is done by developing a deep autoencoder that encodes a musical input and reconstructs the input by selecting from the library. We then describe a generative model that combines a deep structured semantic model (DSSM) with an LSTM to predict the next unit, where units consist of four, two, and one measures of music. We evaluate the generative model using objective metrics including mean rank and accuracy and with a subjective listening test in which expert musicians are asked to complete a forcedchoiced ranking task. Our system is compared to a note-level generative baseline model that consists of a stacked LSTM trained to predict forward by one note.",
"title": ""
},
{
"docid": "410bd8286a87a766dd221c1269f05c04",
"text": "The lowand mid-frequency model of the transformer with resistive load is analysed for different values of coupling coefficients. The model comprising of coupling-dependent inductances is used to derive the following characteristics: voltage gain, current gain, bandwidth, input impedance, and transformer efficiency. It is shown that in the lowand mid-frequency range, the turns ratio between the windings is a strong function of the coupling coefficient, i.e., if the coupling coefficient decreases, then the effective turns ratio reduces. A practical transformer was designed, simulated, and tested. It was observed that the magnitudes of the voltage transfer function and current transfer function exhibit a maximum value each at a different value of coupling coefficient. In addition, as the coupling coefficient decreases, the transformer bandwidth also decreases. Furthermore, analytical expressions for the transformer efficiency for resistive loads are derived and its variation with respect to frequency at different coupling coefficients is investigated. It is shown that the transformer efficiency is maximum at any coupling coefficient if the input resistance is equal to the load resistance. Experimental validation of the theoretical results was performed using a practical transformer set-up. The theoretical predictions were found to be in good agreement with the experimental results.",
"title": ""
},
{
"docid": "2ea886246d4f59d88c3eabd99c60dd5d",
"text": "This paper proposes a Modified Particle Swarm Optimization with Time Varying Acceleration Coefficients (MPSO-TVAC) for solving economic load dispatch (ELD) problem. Due to prohibited operating zones (POZ) and ramp rate limits of the practical generators, the ELD problems become nonlinear and nonconvex optimization problem. Furthermore, the ELD problem may be more complicated if transmission losses are considered. Particle swarm optimization (PSO) is one of the famous heuristic methods for solving nonconvex problems. However, this method may suffer to trap at local minima especially for multimodal problem. To improve the solution quality and robustness of PSO algorithm, a new best neighbour particle called ‘rbest’ is proposed. The rbest provides extra information for each particle that is randomly selected from other best particles in order to diversify the movement of particle and avoid premature convergence. The effectiveness of MPSO-TVAC algorithm is tested on different power systems with POZ, ramp-rate limits and transmission loss constraints. To validate the performances of the proposed algorithm, comparative studies have been carried out in terms of convergence characteristic, solution quality, computation time and robustness. Simulation results found that the proposed MPSO-TVAC algorithm has good solution quality and more robust than other methods reported in previous work.",
"title": ""
},
{
"docid": "aa64bd9576044ec5e654c9f29c4f7d84",
"text": "BACKGROUND\nSocial media are dynamic and interactive computer-mediated communication tools that have high penetration rates in the general population in high-income and middle-income countries. However, in medicine and health care, a large number of stakeholders (eg, clinicians, administrators, professional colleges, academic institutions, ministries of health, among others) are unaware of social media's relevance, potential applications in their day-to-day activities, as well as the inherent risks and how these may be attenuated and mitigated.\n\n\nOBJECTIVE\nWe conducted a narrative review with the aim to present case studies that illustrate how, where, and why social media are being used in the medical and health care sectors.\n\n\nMETHODS\nUsing a critical-interpretivist framework, we used qualitative methods to synthesize the impact and illustrate, explain, and provide contextual knowledge of the applications and potential implementations of social media in medicine and health care. Both traditional (eg, peer-reviewed) and nontraditional (eg, policies, case studies, and social media content) sources were used, in addition to an environmental scan (using Google and Bing Web searches) of resources.\n\n\nRESULTS\nWe reviewed, evaluated, and synthesized 76 articles, 44 websites, and 11 policies/reports. Results and case studies are presented according to 10 different categories of social media: (1) blogs (eg, WordPress), (2) microblogs (eg, Twitter), (3) social networking sites (eg, Facebook), (4) professional networking sites (eg, LinkedIn, Sermo), (5) thematic networking sites (eg, 23andMe), (6) wikis (eg, Wikipedia), (7) mashups (eg, HealthMap), (8) collaborative filtering sites (eg, Digg), (9) media sharing sites (eg, YouTube, Slideshare), and others (eg, SecondLife). Four recommendations are provided and explained for stakeholders wishing to engage with social media while attenuating risk: (1) maintain professionalism at all times, (2) be authentic, have fun, and do not be afraid, (3) ask for help, and (4) focus, grab attention, and engage.\n\n\nCONCLUSIONS\nThe role of social media in the medical and health care sectors is far reaching, and many questions in terms of governance, ethics, professionalism, privacy, confidentiality, and information quality remain unanswered. By following the guidelines presented, professionals have a starting point to engage with social media in a safe and ethical manner. Future research will be required to understand the synergies between social media and evidence-based practice, as well as develop institutional policies that benefit patients, clinicians, public health practitioners, and industry alike.",
"title": ""
},
{
"docid": "06f6ffa9c1c82570b564e1cd0f719950",
"text": "Widespread use of biometric architectures implies the need to secure highly sensitive data to respect the privacy rights of the users. In this paper, we discuss the following question: To what extent can biometric designs be characterized as Privacy Enhancing Technologies? The terms of privacy and security for biometric schemes are defined, while current regulations for the protection of biometric information are presented. Additionally, we analyze and compare cryptographic techniques for secure biometric designs. Finally, we introduce a privacy-preserving approach for biometric authentication in mobile electronic financial applications. Our model utilizes the mechanism of pseudonymous biometric identities for secure user registration and authentication. We discuss how the privacy requirements for the processing of biometric data can be met in our scenario. This work attempts to contribute to the development of privacy-by-design biometric technologies.",
"title": ""
},
{
"docid": "74a91327b85ac9681f618d4ba6a86151",
"text": "In this paper, a miniaturized planar antenna with enhanced bandwidth is designed for the ISM 433 MHz applications. The antenna is realized by cascading two resonant structures with meander lines, thus introducing two different radiating branches to realize two neighboring resonant frequencies. The techniques of shorting pin and novel ground plane are adopted for bandwidth enhancement. Combined with these structures, a novel antenna with a total size of 23 mm × 49.5 mm for the ISM band application is developed and fabricated. Measured results show that the proposed antenna has good performance with the -10 dB impedance bandwidth is about 12.5 MHz and the maximum gain is about -2.8 dBi.",
"title": ""
},
{
"docid": "f0f88be4a2b7619f6fb5cdcca1741d1f",
"text": "BACKGROUND\nThere is no evidence from randomized trials to support a strategy of lowering systolic blood pressure below 135 to 140 mm Hg in persons with type 2 diabetes mellitus. We investigated whether therapy targeting normal systolic pressure (i.e., <120 mm Hg) reduces major cardiovascular events in participants with type 2 diabetes at high risk for cardiovascular events.\n\n\nMETHODS\nA total of 4733 participants with type 2 diabetes were randomly assigned to intensive therapy, targeting a systolic pressure of less than 120 mm Hg, or standard therapy, targeting a systolic pressure of less than 140 mm Hg. The primary composite outcome was nonfatal myocardial infarction, nonfatal stroke, or death from cardiovascular causes. The mean follow-up was 4.7 years.\n\n\nRESULTS\nAfter 1 year, the mean systolic blood pressure was 119.3 mm Hg in the intensive-therapy group and 133.5 mm Hg in the standard-therapy group. The annual rate of the primary outcome was 1.87% in the intensive-therapy group and 2.09% in the standard-therapy group (hazard ratio with intensive therapy, 0.88; 95% confidence interval [CI], 0.73 to 1.06; P=0.20). The annual rates of death from any cause were 1.28% and 1.19% in the two groups, respectively (hazard ratio, 1.07; 95% CI, 0.85 to 1.35; P=0.55). The annual rates of stroke, a prespecified secondary outcome, were 0.32% and 0.53% in the two groups, respectively (hazard ratio, 0.59; 95% CI, 0.39 to 0.89; P=0.01). Serious adverse events attributed to antihypertensive treatment occurred in 77 of the 2362 participants in the intensive-therapy group (3.3%) and 30 of the 2371 participants in the standard-therapy group (1.3%) (P<0.001).\n\n\nCONCLUSIONS\nIn patients with type 2 diabetes at high risk for cardiovascular events, targeting a systolic blood pressure of less than 120 mm Hg, as compared with less than 140 mm Hg, did not reduce the rate of a composite outcome of fatal and nonfatal major cardiovascular events. (ClinicalTrials.gov number, NCT00000620.)",
"title": ""
},
{
"docid": "f3cb18c15459dd7a9c657e32442bd289",
"text": "The advent of crowdsourcing has created a variety of new opportunities for improving upon traditional methods of data collection and annotation. This in turn has created intriguing new opportunities for data-driven machine learning (ML). Convenient access to crowd workers for simple data collection has further generalized to leveraging more arbitrary crowd-based human computation (von Ahn 2005) to supplement automated ML. While new potential applications of crowdsourcing continue to emerge, a variety of practical and sometimes unexpected obstacles have already limited the degree to which its promised potential can be actually realized in practice. This paper considers two particular aspects of crowdsourcing and their interplay, data quality control (QC) and ML, reflecting on where we have been, where we are, and where we might go from here.",
"title": ""
},
{
"docid": "400048566b24d7527845f7c6b6d86fc0",
"text": "In brief: Diagnosis of skier's thumb-a common sports injury-is based on physical examination and history of the injury. The most important findings from the physical exam are point tenderness over the ulnar collateral ligament and instability, which is tested with the thumb at 0° and at 20° to 30° of flexion. Grade 1 and 2 injuries, which involve torn fibers but no loss of integrity, can be treated with casting and/or splinting and physical therapy. Grade 3 injuries involve complete disruption of the ligament and usually require surgical repair. Results from treatment are generally excellent, and with appropriate rehabilitation, athletes recover pinch and grip strength and return to sports.",
"title": ""
},
{
"docid": "06d2d07ed7532aa19b779607a21afef7",
"text": "BACKGROUND\nMyocardium irreversibly injured by ischemic stress must be efficiently repaired to maintain tissue integrity and contractile performance. Macrophages play critical roles in this process. These cells transform across a spectrum of phenotypes to accomplish diverse functions ranging from mediating the initial inflammatory responses that clear damaged tissue to subsequent reparative functions that help rebuild replacement tissue. Although macrophage transformation is crucial to myocardial repair, events governing this transformation are poorly understood.\n\n\nMETHODS\nHere, we set out to determine whether innate immune responses triggered by cytoplasmic DNA play a role.\n\n\nRESULTS\nWe report that ischemic myocardial injury, along with the resulting release of nucleic acids, activates the recently described cyclic GMP-AMP synthase-stimulator of interferon genes pathway. Animals lacking cyclic GMP-AMP synthase display significantly improved early survival after myocardial infarction and diminished pathological remodeling, including ventricular rupture, enhanced angiogenesis, and preserved ventricular contractile function. Furthermore, cyclic GMP-AMP synthase loss of function abolishes the induction of key inflammatory programs such as inducible nitric oxide synthase and promotes the transformation of macrophages to a reparative phenotype, which results in enhanced repair and improved hemodynamic performance.\n\n\nCONCLUSIONS\nThese results reveal, for the first time, that the cytosolic DNA receptor cyclic GMP-AMP synthase functions during cardiac ischemia as a pattern recognition receptor in the sterile immune response. Furthermore, we report that this pathway governs macrophage transformation, thereby regulating postinjury cardiac repair. Because modulators of this pathway are currently in clinical use, our findings raise the prospect of new treatment options to combat ischemic heart disease and its progression to heart failure.",
"title": ""
},
{
"docid": "f443e22db2a2313b47168740662ad187",
"text": "Tunneling-field-effect-transistor (TFET) has emerged as an alternative for conventional CMOS by enabling the supply voltage (VDD) scaling in ultra-low power, energy efficient computing, due to its sub-60 mV/ decade sub-threshold slope (SS). Given its unique device characteristics such as the asymmetrical source/drain design induced uni-directional conduction, enhanced on-state Miller capacitance effect and steep switching at low voltages, TFET based circuit design requires strong interactions between the device-level and the circuit-level to explore the performance benefits, with certain modifications of the conventional CMOS circuits to achieve the functionality and optimal energy efficiency. Because TFET operates at low supply voltage range (VDD < 0:5 V) to outperform CMOS, reliability issues can have profound impact on the circuit design from the practical application perspective. In this review paper, we present recent development on Tunnel FET device design, and modeling technique for circuit implementation and performance benchmarking. We focus on the reliability issues such as soft-error, electrical noise and process variation, and their impact on TFET based circuit performance compared to sub-threshold CMOS. Analytical models of electrical noise and process variation are also discussed for circuit-level",
"title": ""
},
{
"docid": "1e25480ef6bd5974fcd806aac7169298",
"text": "Alphabetical ciphers are being used since centuries for inducing confusion in messages, but there are some drawbacks that are associated with Classical alphabetic techniques like concealment of key and plaintext. Here in this paper we will suggest an encryption technique that is a blend of both classical encryption as well as modern technique, this hybrid technique will be superior in terms of security than average Classical ciphers.",
"title": ""
}
] | scidocsrr |
2ea6466de9702c55fb87df541947b9d0 | Searching by Talking: Analysis of Voice Queries on Mobile Web Search | [
{
"docid": "ef08ef786fd759b33a7d323c69be19db",
"text": "Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. A core problem in language model estimation is smoothing, which adjusts the maximum likelihood estimator so as to correct the inaccuracy due to data sparseness. In this paper, we study the problem of language model smoothing and its influence on retrieval performance. We examine the sensitivity of retrieval performance to the smoothing parameters and compare several popular smoothing methods on different test collection.",
"title": ""
}
] | [
{
"docid": "f4abfe0bb969e2a6832fa6317742f202",
"text": "We built a highly compliant, underactuated, robust and at the same time dexterous anthropomorphic hand. We evaluate its dexterous grasping capabilities by implementing the comprehensive Feix taxonomy of human grasps and by assessing the dexterity of its opposable thumb using the Kapandji test. We also illustrate the hand’s payload limits and demonstrate its grasping capabilities in real-world grasping experiments. To support our claim that compliant structures are beneficial for dexterous grasping, we compare the dimensionality of control necessary to implement the diverse grasp postures with the dimensionality of the grasp postures themselves. We find that actuation space is smaller than posture space and explain the difference with the mechanic interaction between hand and grasped object. Additional desirable properties are derived from using soft robotics technology: the hand is robust to impact and blunt collisions, inherently safe, and not affected by dirt, dust, or liquids. Furthermore, the hand is simple and inexpensive to manufacture.",
"title": ""
},
{
"docid": "b0c60343724a49266fac2d2f4c2d37d3",
"text": "In the Western world, aging is a growing problem of the society and computer assisted treatments can facilitate the telemedicine for old people or it can help in rehabilitations of patients after sport accidents in far locations. Physical exercises play an important role in physiotherapy and RGB-D devices can be utilized to recognize them in order to make interactive computer healthcare applications in the future. A practical model definition is introduced in this paper to recognize different exercises with Asus Xtion camera. One of the contributions is the extendable recognition models to detect other human activities with noisy sensors, but avoiding heavy data collection. The experiments show satisfactory detection performance without any false positives which is unique in the field to the best of the author knowledge. The computational costs are negligible thus the developed models can be suitable for embedded systems.",
"title": ""
},
{
"docid": "d7bb22eefbff0a472d3e394c61788be2",
"text": "Crowd evacuation of a building has been studied over the last decades. In this paper, seven methodological approaches for crowd evacuation have been identified. These approaches include cellular automata models, lattice gas models, social force models, fluid-dynamic models, agent-based models, game theoretic models, and approaches based on experiments with animals. According to available literatures, we discuss the advantages and disadvantages of these approaches, and conclude that a variety of different kinds of approaches should be combined to study crowd evacuation. Psychological and physiological elements affecting individual and collective behaviors should be also incorporated into the evacuation models. & 2008 Elsevier Ltd. All rights reserved.",
"title": ""
},
{
"docid": "ca9c4512d2258a44590a298879219970",
"text": "I propose a common framework that combines three different paradigms in machine learning: generative, discriminative and imitative learning. A generative probabilistic distribution is a principled way to model many machine learning and machine perception problems. Therein, one provides domain specific knowledge in terms of structure and parameter priors over the joint space of variables. Bayesian networks and Bayesian statistics provide a rich and flexible language for specifying this knowledge and subsequently refining it with data and observations. The final result is a distribution that is a good generator of novel exemplars. Conversely, discriminative algorithms adjust a possibly non-distributional model to data optimizing for a specific task, such as classification or prediction. This typically leads to superior performance yet compromises the flexibility of generative modeling. I present Maximum Entropy Discrimination (MED) as a framework to combine both discriminative estimation and generative probability densities. Calculations involve distributions over parameters, margins, and priors and are provably and uniquely solvable for the exponential family. Extensions include regression, feature selection, and transduction. SVMs are also naturally subsumed and can be augmented with, for example, feature selection, to obtain substantial improvements. To extend to mixtures of exponential families, I derive a discriminative variant of the ExpectationMaximization (EM) algorithm for latent discriminative learning (or latent MED). While EM and Jensen lower bound log-likelihood, a dual upper bound is made possible via a novel reverse-Jensen inequality. The variational upper bound on latent log-likelihood has the same form as EM bounds, is computable efficiently and is globally guaranteed. It permits powerful discriminative learning with the wide range of contemporary probabilistic mixture models (mixtures of Gaussians, mixtures of multinomials and hidden Markov models). We provide empirical results on standardized data sets that demonstrate the viability of the hybrid discriminative-generative approaches of MED and reverse-Jensen bounds over state of the art discriminative techniques or generative approaches. Subsequently, imitative learning is presented as another variation on generative modeling which also learns from exemplars from an observed data source. However, the distinction is that the generative model is an agent that is interacting in a much more complex surrounding external world. It is not efficient to model the aggregate space in a generative setting. I demonstrate that imitative learning (under appropriate conditions) can be adequately addressed as a discriminative prediction task which outperforms the usual generative approach. This discriminative-imitative learning approach is applied with a generative perceptual system to synthesize a real-time agent that learns to engage in social interactive behavior. Thesis Supervisor: Alex Pentland Title: Toshiba Professor of Media Arts and Sciences, MIT Media Lab Discriminative, Generative and Imitative Learning",
"title": ""
},
{
"docid": "9584909fc62cca8dc5c9d02db7fa7e5d",
"text": "As the nature of many materials handling tasks have begun to change from lifting to pushing and pulling, it is important that one understands the biomechanical nature of the risk to which the lumbar spine is exposed. Most previous assessments of push-pull tasks have employed models that may not be sensitive enough to consider the effects of the antagonistic cocontraction occurring during complex pushing and pulling motions in understanding the risk to the spine and the few that have considered the impact of cocontraction only consider spine load at one lumbar level. This study used an electromyography-assisted biomechanical model sensitive to complex motions to assess spine loadings throughout the lumbar spine as 10 males and 10 females pushed and pulled loads at three different handle heights and of three different load magnitudes. Pulling induced greater spine compressive loads than pushing, whereas the reverse was true for shear loads at the different lumbar levels. The results indicate that, under these conditions, anterior-posterior (A/P) shear loads were of sufficient magnitude to be of concern especially at the upper lumbar levels. Pushing and pulling loads equivalent to 20% of body weight appeared to be the limit of acceptable exertions, while pulling at low and medium handle heights (50% and 65% of stature) minimised A/P shear. These findings provide insight to the nature of spine loads and their potential risk to the low back during modern exertions.",
"title": ""
},
{
"docid": "4cc4c8fd07f30b5546be2376c1767c19",
"text": "We apply new bilevel and trilevel optimization models to make critical infrastructure more resilient against terrorist attacks. Each model features an intelligent attacker (terrorists) and a defender (us), information transparency, and sequential actions by attacker and defender. We illustrate with examples of the US Strategic Petroleum Reserve, the US Border Patrol at Yuma, Arizona, and an electrical transmission system. We conclude by reporting insights gained from the modeling experience and many “red-team” exercises. Each exercise gathers open-source data on a real-world infrastructure system, develops an appropriate bilevel or trilevel model, and uses these to identify vulnerabilities in the system or to plan an optimal defense.",
"title": ""
},
{
"docid": "8c174dbb8468b1ce6f4be3676d314719",
"text": "An estimated 24 million people worldwide have dementia, the majority of whom are thought to have Alzheimer's disease. Thus, Alzheimer's disease represents a major public health concern and has been identified as a research priority. Although there are licensed treatments that can alleviate symptoms of Alzheimer's disease, there is a pressing need to improve our understanding of pathogenesis to enable development of disease-modifying treatments. Methods for improving diagnosis are also moving forward, but a better consensus is needed for development of a panel of biological and neuroimaging biomarkers that support clinical diagnosis. There is now strong evidence of potential risk and protective factors for Alzheimer's disease, dementia, and cognitive decline, but further work is needed to understand these better and to establish whether interventions can substantially lower these risks. In this Seminar, we provide an overview of recent evidence regarding the epidemiology, pathogenesis, diagnosis, and treatment of Alzheimer's disease, and discuss potential ways to reduce the risk of developing the disease.",
"title": ""
},
{
"docid": "8af2e53cb3f77a2590945f135a94279b",
"text": "Time series data are an ubiquitous and important data source in many domains. Most companies and organizations rely on this data for critical tasks like decision-making, planning, and analytics in general. Usually, all these tasks focus on actual data representing organization and business processes. In order to assess the robustness of current systems and methods, it is also desirable to focus on time-series scenarios which represent specific time-series features. This work presents a generally applicable and easy-to-use method for the feature-driven generation of time series data. Our approach extracts descriptive features of a data set and allows the construction of a specific version by means of the modification of these features.",
"title": ""
},
{
"docid": "6b8329ef59c6811705688e48bf6c0c08",
"text": "Since the invention of word2vec, the skip-gram model has significantly advanced the research of network embedding, such as the recent emergence of the DeepWalk, LINE, PTE, and node2vec approaches. In this work, we show that all of the aforementioned models with negative sampling can be unified into the matrix factorization framework with closed forms. Our analysis and proofs reveal that: (1) DeepWalk empirically produces a low-rank transformation of a network's normalized Laplacian matrix; (2) LINE, in theory, is a special case of DeepWalk when the size of vertices' context is set to one; (3) As an extension of LINE, PTE can be viewed as the joint factorization of multiple networks» Laplacians; (4) node2vec is factorizing a matrix related to the stationary distribution and transition probability tensor of a 2nd-order random walk. We further provide the theoretical connections between skip-gram based network embedding algorithms and the theory of graph Laplacian. Finally, we present the NetMF method as well as its approximation algorithm for computing network embedding. Our method offers significant improvements over DeepWalk and LINE for conventional network mining tasks. This work lays the theoretical foundation for skip-gram based network embedding methods, leading to a better understanding of latent network representation learning.",
"title": ""
},
{
"docid": "1785d1d7da87d1b6e5c41ea89e447bf9",
"text": "Web usage mining is the application of data mining techniques to discover usage patterns from Web data, in order to understand and better serve the needs of Web-based applications. Web usage mining consists of three phases, namely preprocessing, pattern discovery, and pattern analysis. This paper describes each of these phases in detail. Given its application potential, Web usage mining has seen a rapid increase in interest, from both the research and practice communities. This paper provides a detailed taxonomy of the work in this area, including research efforts as well as commercial offerings. An up-to-date survey of the existing work is also provided. Finally, a brief overview of the WebSIFT system as an example of a prototypical Web usage mining system is given.",
"title": ""
},
{
"docid": "924768b271caa9d1ba0cb32ab512f92e",
"text": "Traditional keyboard and mouse based presentation prevents lecturers from interacting with the audiences freely and closely. In this paper, we propose a gesture-aware presentation tool named SlideShow to liberate lecturers from physical space constraints and make human-computer interaction more natural and convenient. In our system, gesture data is obtained by a handle controller with 3-axis accelerometer and gyro and transmitted to host-side through bluetooth, then we use Bayesian change point detection to segment continuous gesture series and HMM to recognize the gesture. In consequence Slideshow could carry out the corresponding operations on PowerPoint(PPT) to make a presentation, and operation states can be switched automatically and intelligently during the presentation. Both the experimental and testing results show our approach is practical, useful and convenient.",
"title": ""
},
{
"docid": "d2f64c21d0a3a54b4a2b75b7dd7df029",
"text": "Library of Congress Cataloging in Publication Data EB. Boston studies in the philosophy of science.The concept of autopoiesis is due to Maturana and Varela 8, 9. The aim of this article is to revisit the concepts of autopoiesis and cognition in the hope of.Amazon.com: Autopoiesis and Cognition: The Realization of the Living Boston Studies in the Philosophy of Science, Vol. 42 9789027710161: H.R. Maturana.Autopoiesis, The Santiago School of Cognition, and. In their early work together Maturana and Varela developed the idea of autopoiesis.Autopoiesis and Cognition: The Realization of the Living Dordecht.",
"title": ""
},
{
"docid": "566c6e3f9267fc8ccfcf337dc7aa7892",
"text": "Research into the values motivating unsustainable behavior has generated unique insight into how NGOs and environmental campaigns contribute toward successfully fostering significant and long-term behavior change, yet thus far this research has not been applied to the domain of sustainable HCI. We explore the implications of this research as it relates to the potential limitations of current approaches to persuasive technology, and what it means for designing higher impact interventions. As a means of communicating these implications to be readily understandable and implementable, we develop a set of antipatterns to describe persuasive technology approaches that values research suggests are unlikely to yield significant sustainability wins, and a complementary set of patterns to describe new guidelines for what may become persuasive technology best practice.",
"title": ""
},
{
"docid": "f48d02ff3661d3b91c68d6fcf750f83e",
"text": "There have been a number of techniques developed in recent years for the efficient analysis of probabilistic inference problems, represented as Bayes' networks or influence diagrams (Lauritzen and Spiegelhalter [9], Pearl [12], Shachter [14]). To varying degrees these methods exploit the conditional independence assumed and revealed in the problem structure to analyze problems in polynomial time, essentially polynomial in the number of variables and the size of the largest state space encountered during the evaluation. Unfortunately, there are many problems of interest for which the variables of interest are continuous rather than discrete, so the relevant state spaces become infinite and the polynomial complexity is of little help.",
"title": ""
},
{
"docid": "c3558d8f79cd8a7f53d8b6073c9a7db3",
"text": "De novo assembly of RNA-seq data enables researchers to study transcriptomes without the need for a genome sequence; this approach can be usefully applied, for instance, in research on 'non-model organisms' of ecological and evolutionary importance, cancer samples or the microbiome. In this protocol we describe the use of the Trinity platform for de novo transcriptome assembly from RNA-seq data in non-model organisms. We also present Trinity-supported companion utilities for downstream applications, including RSEM for transcript abundance estimation, R/Bioconductor packages for identifying differentially expressed transcripts across samples and approaches to identify protein-coding genes. In the procedure, we provide a workflow for genome-independent transcriptome analysis leveraging the Trinity platform. The software, documentation and demonstrations are freely available from http://trinityrnaseq.sourceforge.net. The run time of this protocol is highly dependent on the size and complexity of data to be analyzed. The example data set analyzed in the procedure detailed herein can be processed in less than 5 h.",
"title": ""
},
{
"docid": "745cdbb442c73316f691dc20cc696f31",
"text": "Computer-generated texts, whether from Natural Language Generation (NLG) or Machine Translation (MT) systems, are often post-edited by humans before being released to users. The frequency and type of post-edits is a measure of how well the system works, and can be used for evaluation. We describe how we have used post-edit data to evaluate SUMTIME-MOUSAM, an NLG system that produces weather forecasts.",
"title": ""
},
{
"docid": "f90784e4bdaad1f8ecb5941867a467cf",
"text": "Social Networks (SN) Sites are becoming very popular and the number of users is increasing rapidly. However, with that increase there is also an increase in the security threats which affect the users’ privacy, identity and confidentiality. Different research groups highlighted the security threats in SN and attempted to offer some solutions to these issues. In this paper we survey several examples of this research and highlight the approaches. All the models we surveyed were focusing on protecting users’ information yet they failed to cover other important issues. For example, none of the mechanisms provided the users with control over what others can reveal about them; and encryption of images is still not achieved properly. Generally having higher security measures will affect the system’s performance in terms of speed and response time. However, this trade-off was not discussed or addressed in any of the models we surveyed.",
"title": ""
},
{
"docid": "a38986fcee27fb733ec51cf83771a85f",
"text": "A tunable broadband inverted microstrip line phase shifter filled with Liquid Crystals (LCs) is investigated between 1.125 GHz and 35 GHz at room temperature. The effective dielectric anisotropy is tuned by a DC-voltage of up to 30 V. In addition to standard LCs like K15 (5CB), a novel highly anisotropic LC mixture is characterized by a resonator method at 8.5 GHz, showing a very high dielectric anisotropy /spl Delta/n of 0.32 for the novel mixture compared to 0.13 for K15. These LCs are filled into two inverted microstrip line phase shifter devices with different polyimide films and heights. With a physical length of 50 mm, the insertion losses are about 4 dB for the novel mixture compared to 6 dB for K15 at 24 GHz. A differential phase shift of 360/spl deg/ can be achieved at 30 GHz with the novel mixture. The figure-of-merit of the phase shifter exceeds 110/spl deg//dB for the novel mixture compared to 21/spl deg//dB for K15 at 24 GHz. To our knowledge, this is the best value above 20 GHz at room temperature demonstrated for a tunable phase shifter based on nonlinear dielectrics up to now. This substantial progress opens up totally new low-cost LC applications beyond optics.",
"title": ""
},
{
"docid": "ab0c80a10d26607134828c6b350089aa",
"text": "Parkinson's disease (PD) is a neurodegenerative disorder with symptoms that progressively worsen with age. Pathologically, PD is characterized by the aggregation of α-synuclein in cells of the substantia nigra in the brain and loss of dopaminergic neurons. This pathology is associated with impaired movement and reduced cognitive function. The etiology of PD can be attributed to a combination of environmental and genetic factors. A popular animal model, the nematode roundworm Caenorhabditis elegans, has been frequently used to study the role of genetic and environmental factors in the molecular pathology and behavioral phenotypes associated with PD. The current review summarizes cellular markers and behavioral phenotypes in transgenic and toxin-induced PD models of C. elegans.",
"title": ""
}
] | scidocsrr |