845 results for Detection and representation
Abstract:
This paper presents an innovative approach to signature verification and forgery detection based on fuzzy modeling. The signature image is binarized, resized to a fixed-size window, and then thinned. The thinned image is partitioned into eight sub-images, called boxes, using the horizontal density approximation approach. Each sub-image is then resized and further partitioned into twelve sub-images using the uniform partitioning approach. The feature considered is the normalized vector angle (α) from each box. Each feature extracted from the sample signatures gives rise to a fuzzy set. Since the choice of a proper fuzzification function is crucial for verification, we have devised a new fuzzification function with structural parameters, which is able to adapt to the variations in the fuzzy sets. This function is employed to develop a complete forgery detection and verification system.
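The adaptive-fuzzification idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the paper's actual function: `build_membership`, the Gaussian-style membership shape, and the `spread_factor` parameter are all assumptions standing in for the structural parameters the abstract mentions.

```python
import math

def build_membership(samples, spread_factor=1.0):
    """Build an adaptive fuzzy membership function for one box feature.

    The 'structural parameters' here (mean and adaptive width) are
    estimated from the normalized vector angles observed in the sample
    signatures, so the function adapts to variation in the fuzzy set.
    """
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    width = spread_factor * math.sqrt(var) + 1e-6  # avoid zero width
    def membership(alpha):
        # Gaussian-style membership in [0, 1], peaked at the sample mean.
        return math.exp(-((alpha - mean) ** 2) / (2 * width ** 2))
    return membership

def verify(features, memberships, threshold=0.5):
    """Accept a signature if the mean membership over all boxes clears a threshold."""
    score = sum(m(f) for m, f in zip(memberships, features)) / len(features)
    return score >= threshold
```

A questioned signature would then be accepted only if its per-box angles fall inside the fuzzy sets learned from the genuine samples.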
Abstract:
This paper explains some drawbacks of previous approaches for detecting influential observations in deterministic nonparametric data envelopment analysis (DEA) models, as developed by Yang et al. (Annals of Operations Research 173:89-103, 2010). For example, the efficiency scores and relative entropies obtained in that model are uninformative for outlier detection, and the empirical distribution of all estimated relative entropies is not a Monte Carlo approximation. In this paper we develop a new method to detect whether a specific DMU is truly influential, and apply a statistical test to determine the significance level. An application measuring the efficiency of hospitals is used to show the superiority of this method, which leads to significant advances in outlier detection. © 2014 Springer Science+Business Media New York.
Abstract:
Identity management is an important research field from both theoretical and practical perspectives. The task itself is not new; identification and authentication have always been necessary in public administration and business life. The Information Society offers new services for citizens, which dramatically change the way of administration and bring additional risks and opportunities. The goal of the demonstrated research was to formulate a common conceptual basis for the identity management domain in order to support the management of the surrounding multidimensional environment. There is a need to capture, map and process knowledge concerning identity management in order to support reusability and interoperability, to help common sharing and understanding of the domain, and to avoid inconsistency. The paper summarizes research activities for the identification, conceptualisation and representation of domain knowledge related to identity management, using the results of knowledge management, artificial intelligence and information technology. I utilized the experience of the GUIDE research project, in which I participate.
The paper demonstrates that domain ontologies can offer a proper solution for conceptualising the identity management domain.
Abstract:
Background Delirium is highly prevalent, especially in older patients. It independently leads to adverse outcomes, but remains under-detected, particularly hypoactive forms. Although early identification and intervention is important, delirium prevention is key to improving outcomes. The delirium prodrome concept has been mooted for decades, but remains poorly characterised. Greater understanding of this prodrome would promote prompt identification of delirium-prone patients, and facilitate improved strategies for delirium prevention and management. Methods Medical inpatients of ≥70 years were screened for prevalent delirium using the Revised Delirium Rating Scale (DRS-R98). Those without prevalent delirium were assessed daily for delirium development, prodromal features and motor subtype. Survival analysis models identified which prodromal features predicted the emergence of incident delirium in the cohort in the first week of admission. The Delirium Motor Subtype Scale-4 was used to ascertain motor subtype. Results Of 555 patients approached, 191 patients were included in the prospective study. The median age was 80 (IQR 10) and 101 (52.9%) were male. Sixty-one patients developed incident delirium within a week of admission. Several prodromal features predicted delirium emergence in the cohort. Firstly, using a novel Prodromal Checklist based on the existing literature, and controlling for confounders, seven predictive behavioural features were identified in the prodromal period (for example, increasing confusion; and being easily distractible). Additionally, using serial cognitive tests and the DRS-R98 daily, multiple cognitive and other core delirium features were detected in the prodrome (for example inattention; and sleep-wake cycle disturbance). Examining longitudinal motor subtypes in delirium cases, subtypes were found to be predominantly stable over time, the most prevalent being hypoactive subtype (62.3%).
Discussion This thesis explored multiple aspects of delirium in older medical inpatients, with particular focus on the characterisation of the delirium prodrome. These findings should help to inform future delirium educational programmes, and detection and prevention strategies.
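The time-to-event analysis underlying the Methods can be illustrated with a minimal Kaplan-Meier estimator. This is a generic sketch of the kind of survival analysis described, not the thesis's actual models (which additionally adjust for confounders); the toy data are assumptions.

```python
def kaplan_meier(days, events):
    """Kaplan-Meier survival curve for time-to-delirium data.

    days   : day of delirium onset, or last day observed if censored
    events : True if incident delirium occurred, False if censored
    Returns [(day, estimated probability of remaining delirium-free)].
    """
    event_days = sorted({d for d, e in zip(days, events) if e})
    surv = 1.0
    curve = []
    for t in event_days:
        d = sum(1 for dd, e in zip(days, events) if dd == t and e)  # events at day t
        n = sum(1 for dd in days if dd >= t)                        # still at risk at day t
        surv *= 1.0 - d / n
        curve.append((t, surv))
    return curve
```

Comparing such curves between patients with and without a given prodromal feature is one simple way to see whether the feature predicts earlier delirium emergence.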
Abstract:
This dissertation offers an investigation of the role of visual strategies, art, and representation in reconciling Indian Residential School history in Canada. This research builds upon theories of biopolitics, settler colonialism, and race to examine the project of redress and reconciliation as nation and identity building strategies engaged in the ongoing structural invasion of settler colonialism. It considers the key policy moments and expressions of the federal government—from RCAP to the IRSSA and subsequent apology—as well as the visual discourse of reconciliation as it works through archival photography, institutional branding, and commissioned works. These articulations are read alongside the creative and critical work of Indigenous artists and knowledge producers working within and outside of hegemonic structures on the topics of Indian Residential School history and redress. In particular the works of Jeff Thomas, Adrian Stimson, Krista Belle Stewart, Christi Belcourt, Luke Marston, Peter Morin, and Carey Newman are discussed in this dissertation. These works must be understood in relationship to the normative discourse of reconciliation as a legitimizing mechanism of settler colonial hegemony. Beyond the binary of cooptation and autonomous resistance, these works demonstrate the complexity of representing Indigeneity: as an ongoing site of settler colonial encounter and simultaneously the forum for the willful refusal of contingency or containment.
Abstract:
Sensitive detection of pathogens is critical to ensure the safety of food supplies and to prevent bacterial disease infection and outbreak at the first onset. While conventional techniques such as cell culture, ELISA and PCR have been used as the predominant detection workhorses, they are limited by time-consuming procedures, complicated sample pre-treatment, expensive analysis and operation, or inability to be implemented in point-of-care testing. Here, we present our recently developed assay exploiting enzyme-induced aggregation of plasmonic gold nanoparticles (AuNPs) for label-free and ultrasensitive detection of bacterial DNA. In the experiments, AuNPs are first functionalized with specific, single-stranded RNA probes so that they exhibit high stability in solution, even under highly electrolytic conditions, and thus a red color. When bacterial DNA is present in a sample, a DNA-RNA heteroduplex is formed and is subsequently prone to RNase H cleavage of the RNA probe, allowing the DNA to be liberated and hybridize with another RNA strand. This continues until all of the RNA strands are cleaved, leaving the nanoparticles ‘unprotected’. The addition of NaCl then causes the ‘unprotected’ nanoparticles to aggregate, initiating a colour change from red to blue. The reaction is performed in a multi-well plate format, and the distinct colour signal can be discriminated by the naked eye or by simple optical spectroscopy. As a result, bacterial DNA at concentrations as low as pM could be unambiguously detected. The enzyme-induced AuNP aggregation assay is easy to perform and sensitive, and will significantly benefit the development of fast and ultrasensitive methods for disease detection and diagnosis.
Abstract:
Data mining can be defined as the extraction of implicit, previously unknown, and potentially useful information from data. Numerous researchers have been developing security technology and exploring new methods to detect cyber-attacks with the DARPA 1998 dataset for intrusion detection and its modified versions, KDDCup99 and NSL-KDD, but until now no one has examined the performance of the top 10 data mining algorithms selected by experts in data mining. The classification learning algorithms compared in this thesis are C4.5, CART, k-NN and Naïve Bayes. The performance of these algorithms is compared in terms of accuracy, error rate and average cost on modified versions of the NSL-KDD train and test datasets, where the instances are classified into normal and four cyber-attack categories: DoS, Probing, R2L and U2R. Additionally, the most important features for detecting cyber-attacks, both overall and in each category, are evaluated with Weka’s Attribute Evaluator and ranked according to Information Gain. The results show that the classification algorithm with the best performance on the dataset is the k-NN algorithm. The most important features for detecting cyber-attacks are basic features such as the duration in seconds of a network connection, the protocol used for the connection, the network service used, the normal or error status of the connection, and the number of data bytes sent. The most important features for detecting DoS, Probing and R2L attacks are basic features, and the least important are content features; for U2R attacks, in contrast, the content features are the most important.
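The two core computations above — ranking features by Information Gain and classifying with k-NN — can be sketched in a few lines. This is a generic illustration, not Weka's implementation; the toy feature values and labels in the usage are assumptions.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """Entropy of the labels minus the weighted entropy after splitting
    on a discrete feature -- the criterion used to rank features."""
    n = len(labels)
    remainder = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

def knn_predict(train_X, train_y, x, k=3):
    """k-NN with squared Euclidean distance and majority vote."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(row, x)), y)
                   for row, y in zip(train_X, train_y))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]
```

A perfectly predictive feature attains an Information Gain equal to the label entropy, while an uninformative one scores zero, which is what makes the measure usable as a ranking criterion.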
Abstract:
This article examines the role of new social media in the articulation and representation of the refugee and diasporic “voice.” The article problematizes the individualist, de-politicized, de-contextualized, and aestheticized representation of refugee/diasporic voices. It argues that new social media enable refugees and diaspora members to exercise agency in managing the creation, production, and dissemination of their voices and to engage in hybrid (on- and offline) activism. These new territories for self-representation challenge our conventional understanding of refugee/diaspora voices. The article is based on research with young Congolese living in the diaspora, and it describes the Geno-cost project created by the Congolese Action Youth Platform (CAYP) and JJ Bola’s spoken-word piece, “Refuge.” The first shows agency in the creation of analytical and activist voices that promote counter-hegemonic narratives of violence in the eastern Democratic Republic of Congo, while the second is an example of aesthetic expressions performed online and offline that reveal agency through authorship and ownership of one’s voice. The examples highlight the role that new social media play in challenging mainstream politics of representation of refugee/diaspora voices.
Abstract:
Finding rare events in multidimensional data is an important detection problem with applications in many fields, such as risk estimation in the insurance industry, finance, flood prediction, medical diagnosis, quality assurance, security, and safety in transportation. The occurrence of such anomalies is so infrequent that there is usually not enough training data to learn an accurate statistical model of the anomaly class. In some cases, such events may have never been observed, so the only information available is a set of normal samples and an assumed pairwise similarity function. Such a metric may only be known up to a certain number of unspecified parameters, which would either need to be learned from training data or fixed by a domain expert. Sometimes the anomalous condition may be formulated algebraically, such as a measure exceeding a predefined threshold, but nuisance variables may complicate the estimation of such a measure. Change detection methods used in time series analysis are not easily extendable to the multidimensional case, where discontinuities are not localized to a single point. On the other hand, in higher dimensions data exhibits more complex interdependencies, and there is redundancy that can be exploited to adaptively model the normal data. In the first part of this dissertation, we review the theoretical framework for anomaly detection in images and previous anomaly detection work done in the context of crack detection and detection of anomalous components in railway tracks. In the second part, we propose new anomaly detection algorithms. The fact that curvilinear discontinuities in images are sparse with respect to the frame of shearlets allows us to pose this anomaly detection problem as basis pursuit optimization. We therefore pose the problem of detecting curvilinear anomalies in noisy textured images as a blind source separation problem under sparsity constraints, and propose an iterative shrinkage algorithm to solve it.
Taking advantage of the parallel nature of this algorithm, we describe how it can be accelerated using graphics processing units (GPUs). We then propose a new method for finding defective components on railway tracks using cameras mounted on a train, describing how to extract features and use a combination of classifiers to solve this problem. Next, we scale anomaly detection to bigger datasets with complex interdependencies. We show that the anomaly detection problem fits naturally in the multitask learning framework: the first task consists of learning a compact representation of the good samples, while the second task consists of learning the anomaly detector. Using deep convolutional neural networks, we show that it is possible to train a deep model with a limited number of anomalous examples. In sequential detection problems, the presence of time-variant nuisance parameters affects the detection performance. In the last part of this dissertation, we present a method for adaptively estimating the threshold of sequential detectors using Extreme Value Theory in a Bayesian framework. Finally, conclusions on the results obtained are provided, followed by a discussion of possible future work.
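The iterative shrinkage step at the heart of the sparse separation can be sketched with a minimal ISTA solver for the generic L1-regularized least-squares problem. This is an illustrative sketch, not the dissertation's algorithm: the dictionary `A` stands in for the shearlet frame, and the step size and `lam` values are assumptions.

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=200):
    """Iterative shrinkage-thresholding for  min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    If anomalies are sparse in the dictionary A, basis pursuit denoising
    recovers their coefficients x from the observed signal y.
    """
    if step is None:
        # 1/L with L the squared spectral norm of A (Lipschitz constant of the gradient)
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)        # gradient of the smooth term
        z = x - step * grad             # gradient step
        # soft-thresholding: the proximal operator of the L1 penalty
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

With an orthonormal dictionary the iteration reduces to soft-thresholding the analysis coefficients, which is why small coefficients (texture and noise) are suppressed while the sparse anomaly survives.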
Abstract:
Parasitic diseases have a great impact on human and animal health. The gold standard for the diagnosis of the majority of parasitic infections is still conventional microscopy, which presents important limitations in terms of sensitivity and specificity and commonly requires highly trained technicians. More accurate molecular-based diagnostic tools are needed for the implementation of early detection, effective treatments and massive screenings with high-throughput capacities. In this respect, sensitive and affordable devices could greatly benefit the sustainable control programmes that exist against parasitic diseases, especially in low-income settings. Proteomics and nanotechnology approaches are valuable tools for sensing pathogens and host alteration signatures within microfluidic detection platforms. These new devices might provide novel solutions to fight parasitic diseases. Newly described parasite-derived products with immune-modulatory properties have been postulated as the best candidates for the early and accurate detection of parasitic infections, as well as for the blockage of parasite development. This review provides the most recent methodological and technological advances with great potential for biosensing parasites in their hosts, showing the newest opportunities offered by modern “-omics” and platforms for parasite detection and control.
Abstract:
The main objectives of this thesis are to validate an improved principal components analysis (IPCA) algorithm on images; to design and simulate a digital model for image compression, face recognition and image detection using the principal components analysis (PCA) and IPCA algorithms; to design and simulate an optical model for face recognition and object detection using the joint transform correlator (JTC); to establish detection and recognition thresholds for each model; to compare the performance of the PCA and IPCA algorithms in compression, recognition, and detection; and to compare the performance of the digital and optical models in recognition and detection. The MATLAB software was used to simulate the models. PCA is a technique for identifying patterns in data and representing the data in a way that highlights similarities and differences. Identifying patterns in data of high dimension (more than three dimensions) is difficult because graphical representation of the data is impossible, which makes PCA a powerful method for analyzing data. IPCA is another statistical tool for identifying patterns in data; it uses information theory to improve PCA. The JTC is an optical correlator used to synthesize a frequency plane filter for coherent optical systems. The IPCA algorithm generally behaves better than the PCA algorithm in most applications. It is better than the PCA algorithm in image compression because it obtains higher compression, more accurate reconstruction, and faster processing speed with acceptable errors; in addition, it is better than the PCA algorithm in real-time image detection because it achieves the smallest error rate as well as remarkable speed.
On the other hand, the PCA algorithm performs better than the IPCA algorithm in face recognition because it offers an acceptable error rate, easy calculation, and a reasonable speed. Finally, in detection and recognition, the performance of the digital model is better than the performance of the optical model.
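As a generic illustration of PCA-based compression (not the thesis's MATLAB models, and not the IPCA variant), the following numpy sketch projects images, stored as rows of a matrix, onto the top-k principal components and reconstructs them:

```python
import numpy as np

def pca_compress(X, k):
    """Compress rows of X by keeping only the top-k principal components.

    Returns (reconstruction, fraction of variance retained). Storing the
    k-dimensional scores plus the k components instead of X is the
    compression; the retained-variance ratio measures reconstruction quality.
    """
    mean = X.mean(axis=0)
    Xc = X - mean
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                 # top-k principal directions
    scores = Xc @ components.T          # compressed representation
    recon = scores @ components + mean  # reconstruction from k components
    retained = (S[:k] ** 2).sum() / (S ** 2).sum()
    return recon, retained
```

Detection and recognition thresholds such as those described above can then be defined on the reconstruction error: a probe image far from its low-dimensional reconstruction is rejected.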
Abstract:
Thanks to the advanced technologies and social networks that allow data to be widely shared across the Internet, there is an explosion of pervasive multimedia data, generating high demand for multimedia services and applications that let people easily access and manage multimedia data in various areas. Toward such demands, multimedia big data analysis has become an emerging hot topic in both industry and academia, ranging from basic infrastructure, management, search, and mining to security, privacy, and applications. Within the scope of this dissertation, a multimedia big data analysis framework is proposed for semantic information management and retrieval, with a focus on rare event detection in videos. The proposed framework is able to explore hidden semantic feature groups in multimedia data and incorporate temporal semantics, especially for video event detection. First, a hierarchical semantic data representation is presented to alleviate the semantic gap issue, and the Hidden Coherent Feature Group (HCFG) analysis method is proposed to capture the correlation between features and separate the original feature set into semantic groups, seamlessly integrating multimedia data in multiple modalities. Next, an Importance Factor based Temporal Multiple Correspondence Analysis (IF-TMCA) approach is presented for effective event detection. Specifically, the HCFG algorithm is integrated with the Hierarchical Information Gain Analysis (HIGA) method to generate the Importance Factor (IF) for producing the initial detection results. The TMCA algorithm is then proposed to efficiently incorporate temporal semantics for re-ranking and improving the final performance. Finally, a sampling-based ensemble learning mechanism is applied to further accommodate imbalanced datasets. In addition to the multimedia semantic representation and class imbalance problems, lack of organization is another critical issue for multimedia big data analysis.
In this framework, an affinity propagation-based summarization method is also proposed to transform the unorganized data into a better structure with clean and well-organized information. The whole framework has been thoroughly evaluated across multiple domains, such as soccer goal event detection and disaster information management.
Abstract:
Bladder cancer is among the most common cancers in the UK, and conventional detection techniques suffer from low sensitivity, low specificity, or both. Recent attempts to address this disparity have led to progress in the field of autofluorescence as a means to diagnose the disease with high efficiency; however, much remains unknown about autofluorescence profiles in the disease. The multi-functional diagnostic system "LAKK-M" was used to assess autofluorescence profiles of healthy and cancerous bladder tissue to identify novel biomarkers of the disease. Statistically significant differences were observed between tissue types in the optical redox ratio (a measure of tissue metabolic activity), the amplitude of endogenous porphyrins, and the NADH/porphyrin ratio. These findings could advance understanding of bladder cancer and aid in the development of new techniques for detection and surveillance.
Abstract:
In the absence of effective vaccine(s), control of African swine fever caused by African swine fever virus (ASFV) must be based on early, efficient, cost-effective detection and strict control and elimination strategies. For this purpose, we developed an indirect ELISA capable of detecting ASFV antibodies in either serum or oral fluid specimens. The recombinant protein used in the ELISA was selected by comparing the early serum antibody response of ASFV-infected pigs (NHV-p68 isolate) to three major recombinant polypeptides (p30, p54, p72) using a multiplex fluorescent microbead-based immunoassay (FMIA). Non-hazardous (non-infectious) antibody-positive serum for use as plate positive controls and for the calculation of sample-to-positive (S:P) ratios was produced by inoculating pigs with a replicon particle (RP) vaccine expressing the ASFV p30 gene. The optimized ELISA detected anti-p30 antibodies in serum and/or oral fluid samples from pigs inoculated with ASFV under experimental conditions beginning 8 to 12 days post inoculation. Tests on serum (n = 200) and oral fluid (n = 200) field samples from an ASFV-free population demonstrated that the assay was highly diagnostically specific. The convenience and diagnostic utility of oral fluid sampling combined with the flexibility to test either serum or oral fluid on the same platform suggests that this assay will be highly useful under the conditions for which OIE recommends ASFV antibody surveillance, i.e., in ASFV-endemic areas and for the detection of infections with ASFV isolates of low virulence.
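The sample-to-positive (S:P) ratio mentioned above is a simple plate normalisation. The sketch below assumes the conventional formula, (sample − negative control) / (positive control − negative control), and a purely hypothetical positivity cutoff; the paper's actual cutoff would come from its ROC analysis.

```python
def sp_ratio(sample_od, neg_ctrl_od, pos_ctrl_od):
    """Sample-to-positive (S:P) ratio for an indirect ELISA.

    Normalises the sample's optical density against the plate's
    negative and positive controls, making runs comparable.
    """
    return (sample_od - neg_ctrl_od) / (pos_ctrl_od - neg_ctrl_od)

def classify(sample_od, neg_ctrl_od, pos_ctrl_od, cutoff=0.4):
    """Flag a serum or oral fluid sample antibody-positive when its S:P
    ratio clears the cutoff (0.4 here is an illustrative value only)."""
    return sp_ratio(sample_od, neg_ctrl_od, pos_ctrl_od) >= cutoff
```

Because both serum and oral fluid are read on the same plate layout, the same normalisation applies to either specimen type.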
Abstract:
Filamentous fungi are a threat to the conservation of Cultural Heritage, so the detection and identification of viable filamentous fungi are crucial for applying adequate safeguard measures. RNA-FISH protocols have previously been applied with this aim to Cultural Heritage samples. However, only hyphae detection has been reported in the literature, even though spores and conidia are not only a potential risk to Cultural Heritage but can also be harmful to the health of visitors, curators and restorers. The aim of this work was therefore to evaluate various permeabilizing strategies for the detection of spores/conidia and hyphae of artworks’ biodeteriogenic filamentous fungi by RNA-FISH. In addition, the influence of cell aging on the success of the technique and on the development of fungal autofluorescence (which could hamper RNA-FISH signal detection) was also investigated. Five common biodeteriogenic filamentous fungi species isolated from biodeteriorated artworks were used as the biological model: Aspergillus niger, Cladosporium sp., Fusarium sp., Penicillium sp. and Exophiala sp. Fungal autofluorescence was only detected in cells harvested from old cultures of Fusarium sp. and Exophiala sp., and was aging-dependent; however, it was weak enough to allow autofluorescence and RNA-FISH signals to be distinguished, so autofluorescence was not a limitation for the application of RNA-FISH to the taxa investigated. All the permeabilization strategies tested allowed fungal cells from young cultures to be detected by RNA-FISH, but only the combination of paraformaldehyde with Triton X-100 allowed the detection of conidia/spores and hyphae of old filamentous fungi. All the permeabilization strategies failed to stain Aspergillus niger conidia/spores, which are known to be particularly difficult to permeabilize.
Even so, the application of this permeabilization method increased the analytical potential of RNA-FISH in Cultural Heritage biodeterioration. While much work is still required to validate this RNA-FISH approach on real Cultural Heritage samples, it could represent an important advance for the detection not only of hyphae but also of spores and conidia of various filamentous fungi taxa.