968 results for Information Fusion
Abstract:
This paper presents an experimental study that examines the accuracy of various information retrieval techniques for Web service discovery. The main goal of this research is to evaluate algorithms for semantic Web service discovery. The evaluation is comprehensively benchmarked using more than 1,700 real-world WSDL documents from the INEX 2010 Web Service Discovery Track dataset. For automatic search, we successfully use Latent Semantic Analysis and BM25 to perform Web service discovery. Moreover, we provide a linking analysis that automatically links possible atomic Web services to meet the complex requirements of users. Our fusion engine recommends a final result to users. Our experiments show that linking analysis can improve the overall performance of Web service discovery. We also find that keyword-based search can quickly return results but is limited in its ability to understand users' goals.
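As a concrete illustration of the BM25 ranking mentioned above, here is a minimal Python sketch; the toy corpus, the tokenisation, and the parameter values (k1 = 1.5, b = 0.75) are illustrative assumptions, not details taken from the paper.

```python
import math
from collections import Counter

def bm25_score(query_terms, doc_terms, corpus, k1=1.5, b=0.75):
    """Score one document against a query with BM25."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    tf = Counter(doc_terms)
    score = 0.0
    for q in query_terms:
        df = sum(1 for d in corpus if q in d)            # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # smoothed, non-negative IDF
        f = tf[q]
        score += idf * f * (k1 + 1) / (f + k1 * (1 - b + b * len(doc_terms) / avgdl))
    return score

# Toy corpus standing in for tokenised WSDL service descriptions (hypothetical).
corpus = [["weather", "forecast", "service"],
          ["currency", "conversion", "service"],
          ["weather", "alert", "notification"]]
query = ["weather", "service"]
best = max(corpus, key=lambda d: bm25_score(query, d, corpus))
print(best)  # -> ['weather', 'forecast', 'service']
```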
Abstract:
Fusion techniques have received considerable attention for achieving performance improvement in biometrics. While a multi-sample fusion architecture reduces false rejects, it also increases false accepts. This impact on performance also depends on the nature of subsequent attempts, i.e., random or adaptive. Expressions for error rates are presented and experimentally evaluated in this work by considering the multi-sample fusion architecture for text-dependent speaker verification using HMM-based, digit-dependent speaker models. Analysis incorporating correlation modeling demonstrates that the use of adaptive samples improves overall fusion performance compared to randomly repeated samples. For a text-dependent speaker verification system using digit strings, sequential decision fusion of seven instances with three random samples is shown to reduce the overall error of the verification system by 26%, which can be further reduced by 6% with adaptive samples. This analysis, novel in its treatment of random and adaptive multiple presentations within a sequential fused decision architecture, is also applicable to other biometric modalities such as fingerprints and handwriting samples.
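For reference, the textbook forms of the multi-sample error-rate expressions under a statistical-independence assumption are sketched below (the paper's correlation-aware analysis modifies these); here p_fr and p_fa are single-sample false-reject and false-accept rates and m is the number of samples.

```latex
% Decision rule: accept if ANY of the m samples is accepted (independence assumed).
\begin{aligned}
  P_{\mathrm{FR}}(m) &= p_{fr}^{\,m}
     && \text{all $m$ genuine attempts rejected} \\
  P_{\mathrm{FA}}(m) &= 1 - \left(1 - p_{fa}\right)^{m}
     && \text{at least one impostor attempt accepted}
\end{aligned}
```

These make the trade-off in the abstract explicit: increasing m drives false rejects down geometrically while pushing false accepts up.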
Abstract:
Mobile devices and smartphones have become a significant communication channel for everyday life. The sensing capabilities of mobile devices are expanding rapidly, and sensors embedded in these devices are cheaper and more powerful than before. It is evident that mobile devices have become the most suitable candidates to sense contextual information without needing extra tools. However, current research shows that only a limited number of sensors are being explored and investigated. As a result, it remains unclear what forms of contextual information extracted from mobile sensors are useful. This research therefore investigates context sensing using current mobile sensors. The study follows experimental methods, and sensor data are evaluated and synthesised in order to deduce the value of individual sensors and combinations of sensors for use in context-aware mobile applications. This study aims to develop a context fusion framework that will enhance context-awareness in mobile applications, as well as to explore innovative techniques for context sensing on smartphone devices.
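A minimal sketch of what rule-based fusion of common smartphone sensors might look like; the sensor set, thresholds, and context labels are illustrative guesses, not the framework proposed in the study.

```python
import math

def fuse_context(accel_xyz, gps_speed_mps, light_lux):
    """Rule-based fusion of three common smartphone sensors into a coarse context label.

    All thresholds are illustrative assumptions, not values from the study.
    """
    accel_mag = math.sqrt(sum(a * a for a in accel_xyz))
    # Motion is inferred from GPS speed or deviation from gravity (9.81 m/s^2).
    moving = gps_speed_mps > 0.5 or abs(accel_mag - 9.81) > 1.5
    indoors = light_lux < 200  # dim ambient light suggests indoors during daytime
    if moving and gps_speed_mps > 5:
        return "in_vehicle"
    if moving:
        return "walking"
    return "stationary_indoors" if indoors else "stationary_outdoors"

print(fuse_context((0.2, 9.7, 0.3), gps_speed_mps=0.0, light_lux=80))  # stationary_indoors
```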
Abstract:
This poster summarises the current findings from STRC's Integrated Traveller Information research domain, which aims for accurate and reliable travel time prediction and optimisation of multimodal trips. The three selected discussions are:
a) Fundamental understanding of the use of Bluetooth MAC Scanners (BMS) for travel time estimation
b) Integration of multiple sources (loops and Bluetooth) for travel time and density estimation
c) Architecture for an online and predictive multimodal trip planner
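The BMS idea in item (a) can be illustrated in a few lines: match MAC addresses detected at two scanner sites and take a robust statistic of the time gaps. The detections below are hypothetical.

```python
from statistics import median

# Hypothetical detections: MAC address -> timestamp (seconds) at two BMS sites.
upstream   = {"aa:01": 100.0, "aa:02": 105.0, "aa:03": 111.0}
downstream = {"aa:01": 220.0, "aa:03": 245.0, "aa:04": 300.0}

# Match MACs seen at both sites and take the median gap as the link travel time;
# the median damps outliers such as vehicles that stopped between the two scanners.
gaps = [downstream[m] - upstream[m] for m in upstream if m in downstream]
print(f"estimated travel time: {median(gaps):.0f} s from {len(gaps)} matched devices")
```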
Abstract:
These proceedings focus on advances in future information communication technology and its applications, present the latest issues and progress in the area, and are applicable to both researchers and professionals. They are based on the 2013 International Conference on Future Information & Communication Engineering (ICFICE 2013), to be held in Shenyang, China, from June 24-26, 2013. The conference is open to participants from all over the world, and participation from the Asia-Pacific region is particularly encouraged. The focus of this conference is on all technical aspects of electronics, information, and communications. ICFICE-13 will provide an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of FICE. In addition, the conference will publish high-quality papers which are closely related to the various theories and practical applications in FICE. Furthermore, we expect that the conference and its publications will be a trigger for further related research and technology improvements in this important subject. "This work was supported by the NIPA (National IT Industry Promotion Agency) of Korea Grant funded by the Korean Government (Ministry of Science, ICT & Future Planning)."
Abstract:
Reliability of the performance of biometric identity verification systems remains a significant challenge. Individual biometric samples of the same person (identity class) are not identical at each presentation, and performance degradation arises from intra-class variability and inter-class similarity. These limitations lead to false accepts and false rejects that are dependent; it is therefore difficult to reduce the rate of one type of error without increasing the other. The focus of this dissertation is to investigate a method based on classifier fusion techniques to better control the trade-off between the verification errors, using text-dependent speaker verification as the test platform. A sequential classifier fusion architecture that integrates multi-instance and multi-sample fusion schemes is proposed. This fusion method enables a controlled trade-off between false alarms and false rejects. For statistically independent classifier decisions, analytical expressions for each type of verification error are derived using base classifier performances. As this assumption may not always be valid, these expressions are modified to incorporate the correlation between statistically dependent decisions from clients and impostors. The architecture is empirically evaluated by applying it to text-dependent speaker verification using Hidden Markov Model-based, digit-dependent speaker models in each stage, with multiple attempts for each digit utterance. The trade-off between the verification errors is controlled using two parameters, the number of decision stages (instances) and the number of attempts at each decision stage (samples), fine-tuned on an evaluation/tuning set. The statistical validation of the derived expressions for error estimates is evaluated on test data. The performance of the sequential method is further demonstrated to depend on the order of the combination of digits (instances) and the nature of repetitive attempts (samples). The false rejection and false acceptance rates for the proposed fusion are estimated using the base classifier performances, the variance in correlation between classifier decisions, and the sequence of classifiers with favourable dependence selected using the 'Sequential Error Ratio' criterion. The error rates are better estimated by incorporating user-dependent information (such as speaker-dependent thresholds and speaker-specific digit combinations) and class-dependent information (such as client-impostor dependent favourable combinations and class-error based threshold estimation). The proposed architecture is desirable in most speaker verification applications, such as remote authentication and telephone and internet shopping applications. The tuning of the parameters - the number of instances and samples - serves both the security and user convenience requirements of speaker-specific verification. The architecture investigated here is applicable to verification using other biometric modalities such as handwriting, fingerprints and keystrokes.
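Under the statistical-independence assumption the dissertation starts from (and later relaxes via correlation modelling), the combined error rates of the multi-instance/multi-sample architecture can be sketched as below; the decision rule (pass every instance, with up to m attempts per instance) and the single-attempt rates are illustrative.

```python
def sequential_fusion_errors(frr, far, n_instances, m_samples):
    """Error rates for n AND-combined decision stages, each allowing up to m attempts.

    Assumes statistically independent decisions; the single-attempt rates
    frr/far are illustrative, not values from the dissertation.
    """
    # A genuine speaker passes a stage unless all m attempts are rejected.
    frr_stage = frr ** m_samples
    # An impostor passes a stage if any of the m attempts is accepted.
    far_stage = 1 - (1 - far) ** m_samples
    # Overall: must pass every one of the n stages (digits).
    total_frr = 1 - (1 - frr_stage) ** n_instances
    total_far = far_stage ** n_instances
    return total_frr, total_far

# More instances push FAR down; more samples per instance push FRR down.
for n, m in [(1, 1), (7, 1), (7, 3)]:
    frr, far = sequential_fusion_errors(frr=0.05, far=0.05, n_instances=n, m_samples=m)
    print(f"n={n} m={m}: FRR={frr:.4f} FAR={far:.6f}")
```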
Abstract:
This chapter describes decentralized data fusion algorithms for a team of multiple autonomous platforms. Decentralized data fusion (DDF) provides a useful basis on which to build cooperative information gathering tasks for robotic teams operating in outdoor environments. Through the DDF algorithms, each platform can maintain a consistent global solution from which decisions may then be made. Comparisons are made between implementations of DDF using two probabilistic representations, Gaussian estimates and Gaussian mixtures, evaluated on a common data set. The overall system design is detailed, providing insight into the overall complexity of implementing a robust DDF system for use in information gathering tasks in outdoor UAV applications.
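For the Gaussian representation, DDF commonly reduces to additive fusion in the information (canonical) form; the sketch below shows that step with toy numbers, omitting the channel filter a real node would use to subtract common information and avoid double counting.

```python
import numpy as np

def to_information(mean, cov):
    """Convert a Gaussian estimate to information form (Y = P^-1, y = P^-1 x)."""
    Y = np.linalg.inv(cov)
    return Y, Y @ mean

# Two platforms' local Gaussian estimates of the same 2-D feature (toy numbers).
x1, P1 = np.array([10.0, 5.0]), np.diag([4.0, 4.0])
x2, P2 = np.array([11.0, 4.5]), np.diag([1.0, 9.0])

# Fusion in information form is simply the sum of information contributions.
Y1, y1 = to_information(x1, P1)
Y2, y2 = to_information(x2, P2)
Y, y = Y1 + Y2, y1 + y2

P = np.linalg.inv(Y)   # fused covariance: tighter than either input
x = P @ y              # fused mean, weighted toward the more certain platform
print(x, np.diag(P))
```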
Abstract:
Multidimensional data have been getting increasing attention from researchers for creating better recommender systems in recent years. Additional metadata provides algorithms with more detail for better understanding the interaction between users and items. While neighbourhood-based Collaborative Filtering (CF) approaches and latent factor models tackle this task effectively in various ways, they each utilize only partial structures of the data. In this paper, we seek to delve into the different types of relations in the data and to understand the interaction between users and items more holistically. We propose a generic multidimensional CF fusion approach for top-N item recommendations. The proposed approach is capable of incorporating not only localized user-user and item-item relations but also latent interactions between all dimensions of the data. Experimental results show significant improvements in recommendation accuracy by the proposed approach.
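As a generic illustration of fusing a neighbourhood signal with a latent factor signal for top-N recommendation (not the paper's specific model), consider the sketch below; the rating matrix, the rank k, and the fusion weight alpha are toy assumptions.

```python
import numpy as np

# Toy user-item rating matrix (rows: users, columns: items); 0 = unobserved.
R = np.array([[5, 4, 0, 0],
              [4, 0, 4, 1],
              [1, 1, 5, 4]], dtype=float)

def item_cf_scores(R, user):
    """Neighbourhood signal: weight items by item-item cosine similarity."""
    norms = np.linalg.norm(R, axis=0) + 1e-9
    sim = (R.T @ R) / np.outer(norms, norms)
    return sim @ R[user]

def latent_scores(R, user, k=2):
    """Latent signal: rank-k SVD reconstruction as a stand-in for a factor model."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return ((U[:, :k] * s[:k]) @ Vt[:k])[user]

# Fusion: a weighted sum of the two signals (alpha is a tuning assumption).
alpha, user = 0.5, 0
fused = alpha * item_cf_scores(R, user) + (1 - alpha) * latent_scores(R, user)
unseen = np.where(R[user] == 0)[0]
print(unseen[np.argsort(-fused[unseen])])  # recommend unseen items, best first
```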
Abstract:
Automatic labeling of white matter fibres in diffusion-weighted brain MRI is vital for comparing brain integrity and connectivity across populations, but it is challenging. Whole-brain tractography generates a vast set of fibres throughout the brain, but it is hard to cluster them into anatomically meaningful tracts, due to wide individual variations in the trajectory and shape of white matter pathways. We propose a novel automatic tract labeling algorithm that fuses information from tractography and multiple hand-labeled fibre tract atlases. As streamline tractography can generate a large number of false positive fibres, we developed a top-down approach to extract tracts consistent with known anatomy, based on a distance metric to multiple hand-labeled atlases. Clustering results from the different atlases were fused using a multi-stage fusion scheme. Our "label fusion" method reliably extracted the major tracts from 105-gradient HARDI scans of 100 young normal adults.
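A stripped-down sketch of the distance-to-atlas labelling plus vote-based fusion idea; fibres and atlas tracts are reduced to small point arrays, and the metric is a simple mean closest-point distance rather than the paper's exact choice.

```python
import numpy as np
from collections import Counter

def mean_closest_point_distance(fibre, tract):
    """Average distance from each fibre point to its nearest point on an atlas tract."""
    d = np.linalg.norm(fibre[:, None, :] - tract[None, :, :], axis=2)
    return d.min(axis=1).mean()

def label_fibre(fibre, atlases):
    """Label a fibre with each atlas's nearest tract, then fuse by majority vote.

    `atlases` is a list of {tract_name: point_array} dicts, a toy stand-in for
    the hand-labeled atlases used in the paper.
    """
    votes = [min(atlas, key=lambda t: mean_closest_point_distance(fibre, atlas[t]))
             for atlas in atlases]
    return Counter(votes).most_common(1)[0][0]

fibre = np.array([[0, 0, 0], [1, 0, 0], [2, 0, 0]], float)
atlas = {"CST":     np.array([[0, 0, 0], [2, 0, 0]], float),
         "arcuate": np.array([[0, 5, 0], [2, 5, 0]], float)}
print(label_fibre(fibre, [atlas, atlas, atlas]))  # -> 'CST'
```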
Abstract:
Simultaneous expression of the highly homologous RLN1 and RLN2 genes in the prostate impairs their accurate delineation. We used PacBio SMRT sequencing and RNA-Seq in LNCaP cells in order to dissect the expression of RLN1 and RLN2 variants. We identified a novel fusion transcript comprising the RLN1 and RLN2 genes and found evidence of its expression in normal and prostate cancer tissues. The RLN1-RLN2 fusion putatively encodes an RLN2 isoform lacking the secretory signal peptide. The identification of the fusion transcript made it possible to determine unique RLN1-RLN2 fusion and RLN1 regions. The RLN1-RLN2 fusion was co-expressed with RLN1 in LNCaP cells, but the two gene products were inversely regulated by androgens. We show that RLN1 is underrepresented in common PCa cell lines in comparison to normal and PCa tissue. The current study brings a highly relevant update to the relaxin field and will encourage further studies of RLN1 and RLN2 in PCa and beyond.
Abstract:
Study Design: Retrospective review of prospectively collected data.
Objectives: To analyze intervertebral (IV) fusion after thoracoscopic anterior spinal fusion (TASF) and explore the relationship between fusion scores and key clinical variables.
Summary of Background Information: TASF provides comparable correction with some advantages over posterior approaches, but the reported mechanical complications and their relationship to non-union and graft material are unclear. Similarly, the optimal combination of graft type and implant stiffness for effecting successful radiologic union remains undetermined.
Methods: A subset of patients from a large single-center series who had TASF for progressive scoliosis underwent low-dose computed tomographic scans 2 years after surgery. The IV fusion mass in the disc space was assessed using the 4-point Sucato scale, where 1 indicates <50% and 4 indicates 100% bony fusion of the disc space. The effects of rod diameter, rod material, graft type, fusion level, and mechanical complications on fusion scores were assessed.
Results: Forty-three patients with right thoracic major curves (mean age 14.9 years) participated in the study. Mean fusion scores for patient subgroups ranged from 1.0 (IV levels with rod fractures) to 2.2 (4.5-mm rod with allograft), with scores tending to decrease with increasing rod size and stiffness. Graft type (autograft vs. allograft) did not affect fusion scores. Fusion scores were highest in the middle levels of the rod construct (mean 2.52), dropping off by 20% to 30% toward the upper and lower extremities of the rod. IV levels where a rod fractured had lower overall mean fusion scores than levels without a fracture. Mean total Scoliosis Research Society (SRS) questionnaire scores were 98.9 of a possible 120, indicating a good level of patient satisfaction.
Conclusions: Results suggest that 100% radiologic fusion of the entire disc space is not necessary for successful clinical outcomes following thoracoscopic anterior selective thoracic fusion.
Abstract:
Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy, while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted. It is illustrated that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of individual IDSs is first addressed. A neural network supervised learner has been designed to determine the weights of individual IDSs depending on their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit which performs weighted aggregation in order to make an appropriate decision. This paper theoretically models the fusion of IDSs for the purpose of demonstrating the improvement in performance, supplemented with empirical evaluation.
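The Chebyshev-based thresholding can be sketched as follows: for any score distribution, P(X >= mu + k*sigma) <= 1/k^2, so setting the threshold at mu + sigma/sqrt(alpha) caps the false-alarm rate at alpha regardless of distribution (conservatively). The sensor scores and weights below are synthetic stand-ins for the learned weights in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-sensor alert scores under normal traffic (three IDS sensors).
normal_scores = rng.random((1000, 3))

# Weighted aggregation; in the paper the weights come from a supervised learner,
# here they are illustrative constants.
w = np.array([0.5, 0.3, 0.2])
fused = normal_scores @ w

# Chebyshev bound: P(X >= mu + k*sigma) <= 1/k^2, so k = 1/sqrt(alpha).
alpha = 0.01
mu, sigma = fused.mean(), fused.std()
threshold = mu + sigma / np.sqrt(alpha)

def is_attack(sensor_scores):
    """Flag an event whose fused score exceeds the distribution-free threshold."""
    return float(sensor_scores @ w) > threshold

# The bound is loose, so the empirical false-alarm rate sits well below alpha.
print(f"threshold={threshold:.3f}, empirical FAR={np.mean(fused > threshold):.4f}")
```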
Abstract:
A central tenet in the theory of reliability modelling is the quantification of the probability of asset failure. In general, reliability depends on asset age and the maintenance policy applied. Usually, failure and maintenance times are the primary inputs to reliability models. However, for many organisations, different aspects of these data are often recorded in different databases (e.g. work order notifications, event logs, condition monitoring data, and process control data). These recorded data cannot be interpreted individually, since they typically do not have all the information necessary to ascertain failure and preventive maintenance times. This paper presents a methodology for the extraction of failure and preventive maintenance times using commonly available, real-world data sources. A text-mining approach is employed to extract keywords indicative of the source of the maintenance event. Using these keywords, a Naïve Bayes classifier is then applied to attribute each machine stoppage to one of two classes: failure or preventive. The accuracy of the algorithm is assessed and the classified failure time data are then presented. The applicability of the methodology is demonstrated on a maintenance data set from an Australian electricity company.
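A minimal sketch of the classification step using scikit-learn; the work-order texts and labels are fabricated toy examples, and bag-of-words counts stand in for the mined keywords.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy work-order notifications (hypothetical text, not data from the paper).
texts = ["pump tripped on high vibration", "bearing seized unit shutdown",
         "scheduled lubrication service", "routine inspection and oil change"]
labels = ["failure", "failure", "preventive", "preventive"]

# Bag-of-words features + Naive Bayes, mirroring the keyword-then-classify idea.
clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(texts, labels)

print(clf.predict(["pump shutdown after high vibration"]))  # -> ['failure']
print(clf.predict(["scheduled oil service"]))               # -> ['preventive']
```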
Abstract:
Flood extent mapping is a basic tool for flood damage assessment, and it can be done with digital classification techniques using satellite imagery, including data recorded by radar and optical sensors. However, converting the data into the information we need is not a straightforward task. One of the great challenges in data interpretation is separating permanent water bodies from flooded regions, including both fully inundated areas and wet areas where trees and houses are partly covered with water. This paper adopts a decision fusion technique to combine the mapping results from radar data with NDVI data derived from optical data. An improved capability in distinguishing permanent or semi-permanent water bodies from flood-inundated areas has been achieved. The software tools MultiSpec and MATLAB were used.
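The decision fusion idea can be sketched per pixel: combine a radar water mask with pre-flood NDVI so that vegetated pixels now covered by water are labelled as flooded rather than as permanent water. The arrays, the threshold, and the rule itself are illustrative, not the paper's exact procedure.

```python
import numpy as np

# Toy per-pixel inputs: radar water mask (True = water now) and NDVI computed
# from pre-flood optical imagery (values are illustrative).
radar_water = np.array([[True,  True,  False],
                        [True,  False, False]])
ndvi_pre    = np.array([[0.05, 0.45, 0.50],
                        [0.02, 0.30, 0.60]])

# Fusion rule: water on radar + low pre-flood NDVI  -> permanent water body;
#              water on radar + vegetated pre-flood -> flood-inundated area.
labels = np.full(radar_water.shape, "dry", dtype=object)
labels[radar_water & (ndvi_pre < 0.2)]  = "permanent_water"
labels[radar_water & (ndvi_pre >= 0.2)] = "flooded"
print(labels)
```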