381 results for Discriminative Itemsets
Abstract:
With the advent of GPS-enabled smartphones, an increasing number of users are actively sharing their location through a variety of applications and services. Along with the continuing growth of Location-Based Social Networks (LBSNs), security experts have increasingly warned the public of the dangers of exposing sensitive information such as personal location data. Most importantly, in addition to the geographical coordinates of the user's location, LBSNs allow easy access to an additional set of characteristics of that location, such as the venue type or popularity. In this paper, we investigate the role of location semantics in the identification of LBSN users. We simulate a scenario in which the attacker's goal is to reveal the identity of a set of LBSN users by observing their check-in activity. We then propose to answer the following question: which types of venues does a malicious user have to monitor to maximize the probability of success? Conversely, when should a user decide whether or not to make his/her check-in to a location public? We perform our study on more than 1 million check-ins distributed over 17 urban regions of the United States. Our analysis shows that different types of venues display different discriminative power in terms of user identity, with most of the venues in the "Residence" category providing the highest re-identification success across the urban regions. Interestingly, we also find that users with a high entropy of their check-in distribution are not necessarily the hardest to identify, suggesting that it is the collective behaviour of the user population, rather than individual behaviour, that determines the complexity of the identification task.
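For reference, the "entropy of a check-in distribution" mentioned above is the Shannon entropy of a user's venue-visit frequencies. A minimal sketch (the function name and toy data are ours, not the paper's):

```python
import math
from collections import Counter

def checkin_entropy(venues):
    """Shannon entropy (in bits) of a user's check-in distribution over venues."""
    counts = Counter(venues)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy example: a user who checks in at three venues with unequal frequency.
print(checkin_entropy(["home", "home", "gym", "cafe"]))  # 1.5 bits
```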
Abstract:
Many object recognition techniques perform some flavour of point pattern matching between a model and a scene. Such points are usually selected through a feature detection algorithm that is robust to a class of image transformations, and a suitable descriptor is computed over them in order to obtain a reliable matching. Moreover, some approaches take an additional step by casting the correspondence problem as a matching between graphs defined over the feature points. The motivation is that the relational model should add discriminative power; however, the overall effectiveness strongly depends on the ability to build a graph that is stable with respect to both changes in object appearance and the spatial distribution of interest points. In fact, widely used graph-based representations have been shown to suffer from some limitations, especially with respect to changes in the Euclidean organization of the feature points. In this paper we introduce a technique for building relational structures over corner points that does not depend on the spatial distribution of the features. © 2012 ICPR Org Committee.
Abstract:
Microposts are small fragments of social media content that have been published using a lightweight paradigm (e.g. Tweets, Facebook likes, foursquare check-ins). Microposts have been used for a variety of applications (e.g. sentiment analysis, opinion mining, trend analysis) by gleaning useful information, often using third-party concept extraction tools. There has been a very large uptake of such tools in the last few years, along with the creation and adoption of new methods for concept extraction. However, the evaluation of such efforts has largely been confined to document corpora (e.g. news articles), calling into question the suitability of concept extraction tools and methods for Micropost data. This report describes the Making Sense of Microposts Workshop (#MSM2013) Concept Extraction Challenge, hosted in conjunction with the 2013 World Wide Web conference (WWW'13). The Challenge dataset comprised a manually annotated training corpus of Microposts and an unlabelled test corpus. Participants were set the task of engineering a concept extraction system for a defined set of concepts. Out of a total of 22 complete submissions, 13 were accepted for presentation at the workshop; the submissions covered methods ranging from sequence mining algorithms for attribute extraction to part-of-speech tagging for Micropost cleaning, and rule-based and discriminative models for token classification. In this report we describe the evaluation process and explain the performance of different approaches in different contexts.
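For context, concept extraction challenges of this kind are typically scored with entity-level precision, recall and F1; the exact #MSM2013 scoring protocol is not reproduced here. A generic sketch:

```python
def precision_recall_f1(gold, predicted):
    """Entity-level scores over sets of (span, concept_type) annotations."""
    gold, predicted = set(gold), set(predicted)
    tp = len(gold & predicted)                       # exact span + type matches
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy annotations: (surface form, start offset) paired with a concept type.
gold = {(("Obama", 12), "PER"), (("Berlin", 30), "LOC")}
pred = {(("Obama", 12), "PER"), (("WWW'13", 44), "MISC")}
print(precision_recall_f1(gold, pred))  # (0.5, 0.5, 0.5)
```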
Abstract:
Establishing an association between the scent a perpetrator left at a crime scene and the odor of the suspect of that crime is the basis for the use of human scent identification evidence in a court of law. Law enforcement agencies gather evidence by collecting scent from objects that a perpetrator may have handled during the execution of the criminal act. The collected scent evidence is subsequently presented to canines in identification line-up procedures with the apprehended suspects. Presently, canine scent identification is admitted as expert witness testimony; however, the accuracy of the dogs' behavior and the scent collection methods used are often challenged by the court system. The primary focus of this research project was an evaluation of contact and non-contact scent collection techniques, with an emphasis on the optimization of collection materials of different fiber chemistries, to evaluate the chemical odor profiles obtained under varying environmental conditions and provide a better scientific understanding of human scent as a discriminative tool in the identification of suspects. The collection of hand odor from female and male subjects through both contact and non-contact sampling approaches yielded new insights into the types of VOCs collected when different materials are utilized, which had never before been studied instrumentally. Furthermore, for both genders' hand odor samples, the highest scent mass was collected on cotton sorbent materials. Compared to non-contact sampling, the contact sampling methods yielded a higher number of volatiles, an enhancement of up to 3 times, as well as a scent mass exceeding that of non-contact methods by more than an order of magnitude. The evaluation of the STU-100 as a non-contact methodology highlighted strong instrumental drawbacks that need to be targeted for enhanced scientific validation of current field practices. These results demonstrated that an individual's human scent components vary considerably depending on the method used to collect scent from the same body region. This study demonstrated the importance of the collection medium selected, as well as the collection method employed, in providing a reproducible human scent sample that can be used to differentiate individuals.
Abstract:
Recent technological developments in the field of experimental quantum annealing have made prototypical annealing optimizers with hundreds of qubits commercially available. The experimental demonstration of a quantum speedup for optimization problems has since become a coveted, albeit elusive, goal. Recent studies have shown that the so far inconclusive results regarding a quantum enhancement may have been partly due to the unsuitability of the benchmark problems used. In particular, these problems had an inherently too simple structure, allowing both traditional resources and quantum annealers to solve them with no special effort. The need has therefore arisen to generate harder benchmarks, which would hopefully possess the discriminative power to separate classical from quantum scaling of performance with problem size. We introduce here a practical technique for engineering extremely hard spin-glass Ising-type problem instances that does not require "cherry picking" from large ensembles of randomly generated instances. We accomplish this by treating the generation of hard optimization problems itself as an optimization problem, for which we offer a heuristic algorithm. We demonstrate the genuine thermal hardness of the generated instances by examining them thermodynamically and analyzing their energy landscapes, as well as by testing the performance of various state-of-the-art algorithms on them. We argue that a proper characterization of the generated instances offers a practical, efficient way to properly benchmark experimental quantum annealers, as well as any other optimization algorithm.
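The core idea, treating instance generation itself as an optimization problem, can be illustrated with a toy local search: mutate the couplings of a random Ising instance and keep mutations that increase a hardness proxy. The proxy below (energy spread across independent simulated-annealing runs) and all parameters are our illustrative choices, not the authors' heuristic:

```python
import math
import random

def anneal_energy(J, n, sweeps=200):
    """Crude simulated annealing on Ising couplings J (upper-triangular matrix);
    returns the best energy found in one run."""
    s = [random.choice((-1, 1)) for _ in range(n)]
    e = sum(J[i][j] * s[i] * s[j] for i in range(n) for j in range(i + 1, n))
    best = e
    for t in range(sweeps * n):
        temp = max(0.05, 3.0 * (1 - t / (sweeps * n)))
        i = random.randrange(n)
        # Energy change from flipping spin i.
        dE = -2 * s[i] * sum(J[min(i, j)][max(i, j)] * s[j] for j in range(n) if j != i)
        if dE <= 0 or random.random() < math.exp(-dE / temp):
            s[i] = -s[i]
            e += dE
            best = min(best, e)
    return best

def hardness(J, n, runs=10):
    """Proxy for hardness: spread of final energies over independent runs.
    A large spread suggests runs get stuck in different local minima."""
    energies = [anneal_energy(J, n) for _ in range(runs)]
    return max(energies) - min(energies)

def harden(n=12, steps=50):
    """Hill-climb in instance space: keep coupling mutations that raise hardness."""
    J = [[random.choice((-1.0, 1.0)) if j > i else 0.0 for j in range(n)]
         for i in range(n)]
    score = hardness(J, n)
    for _ in range(steps):
        i = random.randrange(n - 1)
        j = random.randrange(i + 1, n)
        old = J[i][j]
        J[i][j] = old + random.gauss(0, 0.5)
        new_score = hardness(J, n)
        if new_score >= score:
            score = new_score          # keep the harder instance
        else:
            J[i][j] = old              # revert the mutation
    return J, score

J, score = harden()
print("hardness proxy of engineered instance:", score)
```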
Abstract:
AIMS: Differentiation of heart failure with reduced (HFrEF) or preserved (HFpEF) ejection fraction independent of echocardiography is challenging in the community. Diagnostic strategies based on monitoring circulating microRNA (miRNA) levels may prove to be of clinical value in the near future. The aim of this study was to identify a novel miRNA signature that could be a useful HF diagnostic tool and provide valuable clinical information on whether a patient has HFrEF or HFpEF.
METHODS AND RESULTS: MiRNA biomarker discovery was carried out on three patient cohorts, no heart failure (no-HF), HFrEF, and HFpEF, using Taqman miRNA arrays. The top five miRNA candidates were selected based on differential expression between HFpEF and HFrEF (miR-30c, -146a, -221, -328, and -375); their expression levels also differed between HF and no-HF. These selected miRNAs were further verified and validated in an independent cohort consisting of 225 patients. The discriminative value of BNP as a HF diagnostic could be improved by use in combination with any of the miRNA candidates, alone or in a panel. Combinations of two or more miRNA candidates with BNP significantly improved predictive models for distinguishing HFpEF from HFrEF compared with using BNP alone (area under the receiver operating characteristic curve >0.82).
CONCLUSION: This study has shown for the first time that various miRNA combinations are useful biomarkers for HF, and also in the differentiation of HFpEF from HFrEF. The utility of these biomarker combinations can be altered by inclusion of natriuretic peptide. MiRNA biomarkers may support diagnostic strategies in subpopulations of patients with HF.
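A hedged sketch of how such a panel comparison could be run, assuming a logistic-regression combiner and cross-validated AUC; the data below is synthetic, and 0.82 is the study's reported figure, not this toy's output:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 225                                   # size of the validation cohort above
y = rng.integers(0, 2, n)                 # synthetic labels: 0 = HFpEF, 1 = HFrEF
bnp = rng.normal(loc=y * 1.0, scale=1.5, size=n)
# Five synthetic features standing in for miR-30c, -146a, -221, -328 and -375.
mirnas = rng.normal(loc=y[:, None] * 0.6, scale=1.0, size=(n, 5))

auc_bnp = cross_val_score(LogisticRegression(), bnp[:, None], y,
                          cv=5, scoring="roc_auc").mean()
auc_panel = cross_val_score(LogisticRegression(), np.column_stack([bnp, mirnas]),
                            y, cv=5, scoring="roc_auc").mean()
print(f"BNP alone: {auc_bnp:.2f}, BNP + miRNA panel: {auc_panel:.2f}")
```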
Abstract:
In this work, we propose a biologically inspired appearance model for robust visual tracking. Motivated in part by the success of the hierarchical organization of the primary visual cortex (area V1), we establish an architecture consisting of five layers: whitening, rectification, normalization, coding and pooling. The first three layers stem from models developed for object recognition. In this paper, our attention focuses on the coding and pooling layers. In particular, we use a discriminative sparse coding method in the coding layer along with a spatial pyramid representation in the pooling layer, which makes it easier to distinguish the target to be tracked from its background in the presence of appearance variations. An extensive experimental study shows that the proposed method has higher tracking accuracy than several state-of-the-art trackers.
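A minimal sketch of the coding and pooling steps named above, using orthogonal matching pursuit against a random dictionary (the paper learns its dictionary discriminatively; names and parameters here are illustrative):

```python
import numpy as np
from sklearn.decomposition import SparseCoder

patch_dim, n_atoms = 64, 128              # 8x8 patches, 128 dictionary atoms
rng = np.random.default_rng(0)
dictionary = rng.normal(size=(n_atoms, patch_dim))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
coder = SparseCoder(dictionary=dictionary, transform_algorithm="omp",
                    transform_n_nonzero_coefs=5)

def spatial_pyramid_pool(codes, positions, levels=(1, 2)):
    """Max-pool |codes| over each cell of a spatial pyramid.
    positions: (n, 2) patch centers in [0, 1); codes: (n, n_atoms)."""
    pooled = []
    for g in levels:
        cell = np.minimum((positions * g).astype(int), g - 1)
        for gx in range(g):
            for gy in range(g):
                mask = (cell[:, 0] == gx) & (cell[:, 1] == gy)
                pooled.append(np.abs(codes[mask]).max(axis=0) if mask.any()
                              else np.zeros(codes.shape[1]))
    return np.concatenate(pooled)

# Toy usage: 50 random patches sampled from the tracked window.
patches = rng.normal(size=(50, patch_dim))
codes = coder.transform(patches)
feature = spatial_pyramid_pool(codes, rng.random((50, 2)))
print(feature.shape)                      # (1 + 4 cells) x 128 atoms = (640,)
```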
Abstract:
INTRODUCTION: The ProACS risk score is an early and simple risk stratification score developed for all-cause in-hospital mortality in acute coronary syndromes (ACS) from a Portuguese nationwide ACS registry. Our center only recently joined the registry and was not included in the cohort used to develop the score. Our objective was to perform an external validation of this risk score for short- and long-term follow-up. METHODS: Consecutive patients admitted to our center with ACS were included. Demographic and admission characteristics, as well as treatment and outcome data, were collected. The ProACS risk score variables are age (≥72 years), systolic blood pressure (≤116 mmHg), Killip class (2/3 or 4) and ST-segment elevation. We calculated ProACS, Global Registry of Acute Coronary Events (GRACE) and Canada Acute Coronary Syndrome (C-ACS) risk scores for each patient. RESULTS: A total of 3170 patients were included, with a mean age of 64±13 years; 62% had ST-segment elevation myocardial infarction. All-cause mortality was 5.7% in hospital and 10.3% at one-year follow-up. The ProACS risk score showed good discriminative ability for all considered outcomes (area under the receiver operating characteristic curve >0.75) and a good fit, similar to C-ACS, but lower than the GRACE risk score and slightly lower than in the original development cohort. The ProACS risk score provided good differentiation between patients at low, intermediate and high mortality risk in both short- and long-term follow-up (p<0.001 for all comparisons). CONCLUSIONS: The ProACS score is valid in external cohorts for risk stratification in ACS. It can be applied very early, at first medical contact, but should subsequently be complemented by the GRACE risk score.
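As a reading aid, the four ProACS variables dichotomize as below. The published score attaches specific point weights to each criterion, which are not reproduced here, so this sketch only reports which criteria a patient meets:

```python
def proacs_criteria(age, systolic_bp, killip_class, st_elevation):
    """Which ProACS criteria a patient meets (age in years, BP in mmHg).
    The point weights of the published score are intentionally omitted."""
    return {
        "age >= 72 years": age >= 72,
        "systolic BP <= 116 mmHg": systolic_bp <= 116,
        "Killip class 2/3": killip_class in (2, 3),
        "Killip class 4": killip_class == 4,
        "ST-segment elevation": bool(st_elevation),
    }

met = proacs_criteria(age=75, systolic_bp=110, killip_class=1, st_elevation=True)
print({k: v for k, v in met.items() if v})   # criteria met by this patient
```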
Abstract:
In recent years, a wealth of well-established findings on perceptual, discriminative and interactive abilities has completely changed our picture of the infant. According to the author's thesis, this is displacing the notion, shaped above all by Freud, of a drive-driven infant who perceives only its inner world during the first months of life. The more recent research findings show a child that actively processes reality from birth. This already interacting child is, however, all the more dependent on (parental) resonance. (DIPF/Orig.)
Abstract:
Purpose: Stereopsis is the perception of depth based on retinal disparity. Global stereopsis depends on the processing of random dot stimuli, while local stereopsis depends on contour perception. The aim of this study was to correlate three stereopsis tests, TNO®, StereoTA B® and Fly Stereo Acuity Test®, and to assess the sensitivity and correlation between them, using TNO® as the gold standard. Other variables, such as near point of convergence, vergences, symptoms and optical correction, were correlated with the three tests. Materials and Methods: Forty-nine students from Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), aged 18-26 years, were included. Results: The mean (standard deviation, SD) stereopsis values for each test were: TNO® = 87.04” ±84.09”; FlyTest® = 38.18” ±34.59”; StereoTA B® = 124.89” ±137.38”. For the coefficient of determination, TNO® and StereoTA B® yielded R2 = 0.6 and TNO® and FlyTest® yielded R2 = 0.2. The Pearson correlation coefficient shows a positive correlation between TNO® and StereoTA B® (r = 0.784 with α = 0.01). The Phi coefficient shows a strong positive association between TNO® and StereoTA B® (Φ = 0.848 with α = 0.01). In the ROC curve analysis, StereoTA B® has a larger area under the curve than FlyTest®, with a sensitivity of 92.3% for a specificity of 94.4%, making it a sensitive test with good discriminative power. Conclusion: We conclude that tests of global stereopsis are an asset for clinical use. This type of test is more sensitive, revealing changes in stereopsis when it is actually changed, unlike local stereopsis tests, which often indicate normal stereopsis, camouflaging a change in stereopsis. We also noted that the StereoTA B®, despite being a digital application, is very sensitive and correlated well with the TNO®.
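A sketch of the statistics reported above (Pearson correlation between two stereoacuity tests and ROC analysis against a TNO-defined reference), on synthetic data; the 120-arcsec abnormality cutoff is our illustrative choice:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
tno = rng.lognormal(mean=4.0, sigma=0.7, size=49)               # stereoacuity, arcsec
stereo_tab = tno * rng.lognormal(mean=0.3, sigma=0.3, size=49)  # correlated second test

r, p = pearsonr(tno, stereo_tab)
abnormal = (tno > 120).astype(int)   # TNO-defined reduced stereopsis (illustrative)
auc = roc_auc_score(abnormal, stereo_tab)
print(f"r = {r:.2f} (p = {p:.4f}), AUC = {auc:.2f}")
```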
Abstract:
Stereopsis is defined as the perception of depth based on retinal disparity. Global stereopsis depends on the processing of random dot stimuli and local stereopsis depends on contour perception. The aim of this study was to correlate three stereopsis tests, TNO®, StereoTAB® and Fly Stereo Acuity Test®, and to assess the sensitivity and correlation between them, with the TNO® as the gold standard. Forty-nine students from Escola Superior de Tecnologia da Saúde de Lisboa (ESTeSL), aged 18 to 26 years, were included. The variables near point of convergence (NPC), vergences, symptoms and optical correction were correlated with the three tests. The mean (standard deviation) stereopsis values were: TNO® = 87.04'' ±84.09''; FlyTest® = 38.18'' ±34.59''; StereoTAB® = 124.89'' ±137.38''. Coefficient of determination: TNO® and StereoTAB® with R2 = 0.6 and TNO® and FlyTest® with R2 = 0.2. The Pearson correlation coefficient shows a positive correlation between TNO® and StereoTAB® (r = 0.784 with α = 0.01). The Phi coefficient of association showed a strong positive relationship between TNO® and StereoTAB® (Φ = 0.848 with α = 0.01). In the ROC curve, the StereoTAB® has a larger area under the curve than the FlyTest®, with a sensitivity of 92.3% for a specificity of 94.4%, making it a sensitive test with good discriminative power.
Abstract:
Mungbean (Vigna radiata (L.) Wilczek) is an important source of nutrients and income for smallholder farmers in East Africa. Mungbean production in countries like Uganda largely depends on landraces, in the absence of improved varieties. In order to enhance productivity, efforts have been underway to develop and evaluate mungbean varieties that meet farmers' needs in various parts of the country. This study was conducted at six locations in Uganda to determine the adaptability of introduced mungbean genotypes and to identify mungbean production mega-environments in Uganda. Eleven genotypes (Filsan, Sunshine, Blackgram, Mauritius1, VC6148 (50-12), VC6173 (B-10), Yellowmungo, KPS1, VC6137 (B-14), VC6372 (45-60), VC6153 (B-20P)) and one local check were evaluated in six locations during 2013 and 2014. The locations were: National Semi Arid Resources Research Institute (NaSARRI), Abi Zonal Agricultural Research and Development Institute (AbiZARDI), Kaberamaido variety trial center, Kumi variety trial center, Nabuin Zonal Agricultural Research and Development Institute (NabuinZARDI), and Ngetta Zonal Agricultural Research and Development Institute (NgettaZARDI). G × E interactions were significant for grain yield. Through GGE biplot analysis, three introduced genotypes (Filsan, Blackgram and Sunshine) were found to be stable and high yielding, and were therefore recommended for release. The six test locations were grouped into two candidate mega-environments for mungbean production (one comprising AbiZARDI and Kaberamaido, and the other comprising NaSARRI, NabuinZARDI, Kumi, and NgettaZARDI). NaSARRI was the most suitable environment in terms of both discriminative ability and representativeness, and can therefore be used for selection of widely adaptable genotypes.
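The GGE biplot behind this analysis comes from a singular value decomposition of the environment-centered genotype × environment mean-yield table. A sketch on synthetic yields (scaling conventions vary between GGE implementations):

```python
import numpy as np

rng = np.random.default_rng(2)
genotypes = ["Filsan", "Sunshine", "Blackgram", "KPS1", "Local check"]
environments = ["NaSARRI", "AbiZARDI", "Kaberamaido", "Kumi",
                "NabuinZARDI", "NgettaZARDI"]
yields = rng.normal(1.0, 0.3, size=(len(genotypes), len(environments)))  # t/ha

centered = yields - yields.mean(axis=0, keepdims=True)  # remove environment main effect
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
geno_scores = U[:, :2] * S[:2]   # genotype coordinates on PC1, PC2
env_scores = Vt[:2].T            # environment coordinates on PC1, PC2

# An environment's discriminative ability relates to its vector length, and its
# representativeness to the angle with the average-environment axis.
print(dict(zip(environments, np.linalg.norm(env_scores, axis=1).round(2))))
```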
Abstract:
Alzheimer's disease (AD) represents one of the greatest public health challenges worldwide, because it affects millions of people all over the world and the disease is expected to increase considerably in the near future. This study is the first attempt to apply cepstral analysis to electroencephalogram (EEG) signals in order to find new parameters that achieve a better differentiation between the EEGs of AD patients and control subjects. The results show that a methodology combining a biorthogonal (Bior) 3.5 wavelet transform (WT) with cepstrum analysis was able to describe the EEG dynamics with a higher discriminative power than the WT/spectrum methodologies of previous studies. The most significant figures were found in the cepstral distances between the cepstra of the theta and alpha bands (p = 0.00006 < 0.05).
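A sketch of such a pipeline, assuming PyWavelets' 'bior3.5' wavelet and a real cepstrum; the band-to-level mapping and the Euclidean cepstral distance are illustrative choices, not necessarily the study's exact setup:

```python
import numpy as np
import pywt

def real_cepstrum(x, n_coefs=20):
    """First n_coefs coefficients of the real cepstrum of a signal."""
    spectrum = np.abs(np.fft.fft(x)) + 1e-12      # avoid log(0)
    return np.real(np.fft.ifft(np.log(spectrum)))[:n_coefs]

fs = 128                                          # assumed sampling rate (Hz)
eeg = np.random.randn(10 * fs)                    # stand-in for one EEG channel

# Five-level decomposition: coeffs = [cA5, cD5, cD4, cD3, cD2, cD1].
coeffs = pywt.wavedec(eeg, "bior3.5", level=5)
theta_cep = real_cepstrum(coeffs[2])              # cD4 covers ~4-8 Hz at fs = 128
alpha_cep = real_cepstrum(coeffs[3])              # cD3 covers ~8-16 Hz at fs = 128
distance = np.linalg.norm(theta_cep - alpha_cep)  # theta-alpha cepstral distance
print(f"theta-alpha cepstral distance: {distance:.3f}")
```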
Abstract:
Master's dissertation, Universidade de Brasília, Instituto de Psicologia, Departamento de Processos Psicológicos Básicos, Programa de Pós-Graduação em Ciências do Comportamento, 2016.
Abstract:
The reinforcer devaluation paradigm has been regarded as the canonical paradigm for detecting habit-like behavior in animal and human instrumental learning. Though less studied, avoidance situations set a scenario where habit-like behavior may be of great experimental and clinical interest. On the other hand, proactive intolerance of uncertainty has been shown to be a factor facilitating responses in uncertain situations. Thus, avoidance situations in which uncertainty is favoured may serve as a relevant paradigm for examining the role of intolerance of uncertainty as a facilitatory factor for habit-like behavior. In our experiment we used a free-operant discriminative avoidance procedure to implement a devaluation paradigm. Participants learned to avoid an aversive noise presented either to the right or to the left ear by pressing two different keys. After a devaluation phase in which the volume of one of the noises was reduced, they went through a test phase identical to the avoidance phase except that the noise was never administered. Sensitivity to reinforcer devaluation was examined by comparing the response rate to the cue associated with the devalued reinforcer with that to the cue associated with the still-aversive reinforcer. The results showed that intolerance of uncertainty was positively associated with insensitivity to reinforcer devaluation. Finally, we discuss the theoretical and clinical implications of the habit-like behavior obtained in our avoidance procedure.