952 results for Detection models
Abstract:
Visual information in the form of the speaker's lip movements has been shown to improve the performance of speech recognition and search applications. In our previous work, we proposed cross-database training of synchronous hidden Markov models (SHMMs) to make use of large, publicly available external audio databases in addition to the relatively small given audio-visual database. In this work, the cross-database training approach is improved by an additional audio adaptation step, which enables audio-visual SHMMs to benefit from the audio observations of the external audio models before the visual modality is added. The proposed approach outperforms the baseline cross-database training approach in clean and noisy environments in terms of both phone recognition accuracy and spoken term detection (STD) accuracy.
Abstract:
Even though crashes between trains and road users are rare events at railway level crossings, they are one of the major safety concerns for the Australian railway industry. Near-miss events at level crossings occur more frequently and can provide more information about the factors leading to level crossing incidents. In this paper we introduce a video analytic approach for automatically detecting and localizing vehicles from train-mounted cameras in order to detect near-miss events. To detect and localize vehicles at level crossings, we extract patches from an image and classify each patch for the presence of a vehicle. We developed a region-proposal algorithm for generating patches and use a Convolutional Neural Network (CNN) to classify each patch. To localize vehicles in images, we combine the patches classified as vehicles according to their CNN scores and positions. We compared our system with the Deformable Part Models (DPM) and Regions with CNN features (R-CNN) object detectors. Experimental results on a railway dataset show that the recall rate of our proposed system is 29% higher than what can be achieved with the DPM or R-CNN detectors.
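The abstract says vehicle detections are localized by combining positively classified patches according to their CNN scores and positions. The paper does not give the combination rule, so the sketch below is one plausible version: greedily group overlapping boxes and fuse each group by score-weighted averaging. Box format, threshold, and the weighting scheme are all assumptions, not the authors' method.

```python
def iou(a, b):
    # a, b: boxes as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def merge_patches(boxes, scores, iou_thr=0.3):
    """Greedily group overlapping vehicle patches (illustrative rule,
    not the paper's) and fuse each group into one detection via
    score-weighted averaging of the box coordinates."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    used, merged = set(), []
    for i in order:
        if i in used:
            continue
        group = [i]
        used.add(i)
        for j in order:
            if j not in used and iou(boxes[i], boxes[j]) > iou_thr:
                group.append(j)
                used.add(j)
        w = sum(scores[k] for k in group)
        fused = tuple(sum(scores[k] * boxes[k][c] for k in group) / w
                      for c in range(4))
        merged.append((fused, max(scores[k] for k in group)))
    return merged
```

Two heavily overlapping patches collapse into one detection while a distant patch survives as its own detection, which is the behavior the score-and-position combination needs.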
Abstract:
Environmental data usually include measurements, such as water quality data, which fall below detection limits because of limitations of the instruments or of certain analytical methods used. The fact that some responses are not detected needs to be properly taken into account in the statistical analysis of such data. However, it is well known that analyzing a data set with detection limits is challenging, and we often have to rely on traditional parametric methods or simple imputation methods. Distributional assumptions can lead to biased inference, and justifying a distribution is often not possible when the data are correlated and a large proportion of the data fall below detection limits. The extent of the bias is usually unknown. To draw valid conclusions, and hence provide useful advice for environmental management authorities, it is essential to develop and apply an appropriate statistical methodology. This paper proposes rank-based procedures for analyzing non-normally distributed data collected at different sites over a period of time in the presence of multiple detection limits. To account for temporal correlations within each site, we propose an optimal linear combination of estimating functions and apply the induced smoothing method to reduce the computational burden. Finally, we apply the proposed method to water quality data collected in the Susquehanna River Basin in the United States of America, which clearly demonstrates the advantages of the rank regression models.
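To make the "multiple detection limits" problem concrete: a common rank-based preprocessing step (not the paper's estimating-function machinery, which is more sophisticated) is to re-censor every observation below the largest detection limit into a single tied group before ranking. A minimal sketch under that assumption:

```python
def recensored_midranks(obs):
    """obs: list of (value, nondetect) pairs; for non-detects the
    value is the reported detection limit. All non-detects, plus any
    detected value below the largest detection limit, are treated as
    one tied group at the bottom and assigned their midrank.
    Ties among the remaining detected values are not handled here."""
    max_dl = max((v for v, nd in obs if nd), default=float("-inf"))
    tied = [i for i, (v, nd) in enumerate(obs) if nd or v < max_dl]
    rest = [i for i in range(len(obs)) if i not in tied]
    ranks = [0.0] * len(obs)
    mid = (1 + len(tied)) / 2.0      # midrank of positions 1..len(tied)
    for i in tied:
        ranks[i] = mid
    for r, i in enumerate(sorted(rest, key=lambda i: obs[i][0]),
                          start=len(tied) + 1):
        ranks[i] = float(r)
    return ranks
```

The point of the re-censoring is that ranks below the largest detection limit would otherwise depend on which instrument (and hence which limit) produced each non-detect.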
Abstract:
QTL mapping methods for complex traits are challenged by new developments in marker technology, phenotyping platforms, and breeding methods. In meeting these challenges, QTL mapping approaches will also need to acknowledge the central roles of QTL-by-environment interactions (QEI) and QTL-by-trait interactions in the expression of complex traits like yield. This paper presents an overview of mixed model QTL methodology that is suitable for many types of populations and that allows predictive modeling of QEI, both for environmental and developmental gradients. Attention is also given to multi-trait QTL models, which are essential for interpreting the genetic basis of trait correlations. Biophysical (crop growth) model simulations are proposed as a complement to statistical QTL mapping for interpreting the nature of QEI and for investigating better methods to dissect complex traits into component traits and their genetic controls.
Abstract:
Early detection surveillance programs aim to find invasions of exotic plant pests and diseases before they are too widespread to eradicate. However, the value of these programs can be difficult to justify when no positive detections are made. To demonstrate the value of pest absence information provided by these programs, we use a hierarchical Bayesian framework to model estimates of incursion extent with and without surveillance. A model for the latent invasion process provides the baseline against which surveillance data are assessed. Ecological knowledge and pest management criteria are introduced into the model using informative priors for invasion parameters. Observation models assimilate information from spatio-temporal presence/absence data to accommodate imperfect detection and generate posterior estimates of pest extent. When applied to an early detection program operating in Queensland, Australia, the framework demonstrates that this typical surveillance regime provides a modest reduction in the estimate that a surveyed district is infested. More importantly, the model suggests that early detection surveillance programs can provide a dramatic reduction in the putative area of incursion and therefore offer a substantial benefit to incursion management. By mapping spatial estimates of the point probability of infestation, the model identifies where future surveillance resources can be most effectively deployed.
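The core calculation behind "pest absence information" can be illustrated with a single Bayesian update: given a prior probability that a district is infested and a per-survey detection sensitivity, repeated negative surveys shrink the posterior probability of infestation. This is a minimal sketch of the idea, not the paper's full hierarchical spatio-temporal model; the prior and sensitivity values are illustrative.

```python
def posterior_infested(prior, sensitivity, n_negative_surveys):
    """Posterior probability a district is infested after n
    independent negative surveys, each of which would detect an
    existing incursion with probability `sensitivity`."""
    miss = (1.0 - sensitivity) ** n_negative_surveys
    return prior * miss / (prior * miss + (1.0 - prior))
```

With a 10% prior and 50% per-survey sensitivity, three negative surveys already drive the posterior below 1.4%, which is the sense in which surveillance without positive detections still carries value.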
Abstract:
Various intrusion detection systems (IDSs) reported in the literature have shown distinct preferences for detecting a certain class of attack with improved accuracy, while performing moderately on the other classes. In view of the enormous computing power available in present-day processors, deploying multiple IDSs in the same network to obtain best-of-breed solutions has been attempted earlier. The paper presented here addresses the problem of optimizing the performance of IDSs using sensor fusion with multiple sensors. The trade-off between the detection rate and false alarms with multiple sensors is highlighted. It is illustrated that the performance of the detector is better when the fusion threshold is determined according to the Chebyshev inequality. In the proposed data-dependent decision (DD) fusion method, the performance optimization of individual IDSs is first addressed. A neural network supervised learner has been designed to determine the weights of individual IDSs depending on their reliability in detecting a certain attack. The final stage of this DD fusion architecture is a sensor fusion unit which performs the weighted aggregation in order to make an appropriate decision. This paper theoretically models the fusion of IDSs for the purpose of demonstrating the improvement in performance, supplemented with an empirical evaluation.
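The Chebyshev inequality gives a distribution-free way to set the fusion threshold: P(|X − μ| ≥ kσ) ≤ 1/k², so choosing k = 1/√α bounds the false-alarm rate on benign traffic by α regardless of the score distribution. A minimal sketch of that rule (the score data and α are illustrative; the paper's full fusion pipeline adds learned per-sensor weights):

```python
import math

def chebyshev_threshold(benign_scores, alpha=0.01):
    """Fusion threshold with a distribution-free guarantee that the
    false-alarm rate on benign traffic is at most alpha:
    P(|X - mu| >= k*sigma) <= 1/k**2, so take k = 1/sqrt(alpha)."""
    n = len(benign_scores)
    mu = sum(benign_scores) / n
    var = sum((x - mu) ** 2 for x in benign_scores) / n
    k = 1.0 / math.sqrt(alpha)
    return mu + k * math.sqrt(var)
```

The bound is loose for well-behaved distributions, but its appeal here is exactly that fused IDS scores need not follow any known distribution.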
Abstract:
PURPOSE To 1) develop and test decision tree (DT) models to classify physical activity (PA) intensity from accelerometer output and Gross Motor Function Classification System (GMFCS) level in ambulatory youth with cerebral palsy (CP); and 2) compare the classification accuracy of the new DT models to that achieved by previously published cut-points for youth with CP. METHODS Youth with CP (GMFCS Levels I-III) (N=51) completed seven activity trials of increasing PA intensity while wearing a portable metabolic system and ActiGraph GT3X accelerometers. DT models were used to identify vertical axis (VA) and vector magnitude (VM) count thresholds corresponding to sedentary (SED) (<1.5 METs), light PA (LPA) (≥1.5 and <3 METs), and moderate-to-vigorous PA (MVPA) (≥3 METs). Models were trained and cross-validated using the 'rpart' and 'caret' packages within R. RESULTS For the VA (VA_DT) and VM decision trees (VM_DT), a single threshold differentiated LPA from SED, while the threshold for differentiating MVPA from LPA decreased as the level of impairment increased. The average cross-validation accuracy for the VA_DT was 81.1%, 76.7%, and 82.9% for GMFCS levels I, II, and III, respectively. The corresponding cross-validation accuracy for the VM_DT was 80.5%, 75.6%, and 84.2%, respectively. Within each GMFCS level, the decision tree models achieved better PA intensity recognition than previously published cut-points. The accuracy differential was greatest among GMFCS level III participants, in whom the previously published cut-points misclassified 40% of the MVPA activity trials. CONCLUSION GMFCS-specific cut-points provide more accurate assessments of MVPA levels in youth with CP across the full spectrum of ambulatory ability.
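The study fits its trees with 'rpart'/'caret' in R; for readers unfamiliar with how a tree turns accelerometer counts into intensity cut-points, the essence is a one-feature decision-stump search. The toy Python sketch below (illustrative data and function name, not the study's code) finds the count threshold that best separates one intensity class from the rest:

```python
def best_threshold(counts, labels, pos_label):
    """One-feature decision-stump search: scan candidate count
    thresholds and keep the one whose rule `count >= t` best
    predicts membership in `pos_label` (e.g. 'MVPA')."""
    cand = sorted(set(counts))
    best_t, best_acc = cand[0], -1.0
    for t in cand:
        correct = sum((c >= t) == (l == pos_label)
                      for c, l in zip(counts, labels))
        acc = correct / len(counts)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc
```

Fitting such a stump separately per GMFCS level is conceptually what yields the level-specific cut-points: more impaired participants produce lower counts at a given MET level, so the learned MVPA threshold drops with impairment.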
Abstract:
Non-competitive bids have recently become a major concern in both public and private sector construction contract auctions. Consequently, several models have been developed to help identify bidders potentially involved in collusive practices. However, most of these models require complex calculations and extensive information that is difficult to obtain. The aim of this paper is to utilize recent developments for detecting abnormal bids in capped auctions (auctions with an upper bid limit set by the auctioneer) and extend them to the more conventional uncapped auctions (where no such limits are set). To accomplish this, a new method is developed for estimating the values of bid distribution supports by using the solution to what has become known as the German tank problem. The model is then demonstrated and tested on a sample of real construction bid data and shown to detect cover bids with high accuracy. This work contributes to an improved understanding of abnormal bid behavior as an aid to detecting and monitoring potential collusive bid practices.
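The German tank problem has a closed-form answer that makes the support-estimation idea concrete: for k draws from a discrete uniform on {1, …, N}, the minimum-variance unbiased estimator of N is m + m/k − 1, where m is the sample maximum. A minimal sketch (how the paper adapts this to continuous bid supports is not detailed in the abstract):

```python
def german_tank_upper(bids):
    """Minimum-variance unbiased estimate of the upper support of a
    discrete uniform distribution from k observed draws:
    N_hat = m + m/k - 1, where m = max(bids) and k = len(bids).
    (The continuous-uniform analogue is (1 + 1/k) * m.)"""
    m, k = max(bids), len(bids)
    return m + m / k - 1
```

Intuitively, the estimator pushes the observed maximum upward by the average gap between order statistics; a cover bid sitting far outside the estimated support then stands out as abnormal.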
Abstract:
Incursions of plant pests and diseases pose serious threats to food security, agricultural productivity and the natural environment. One of the challenges in confidently delimiting and eradicating incursions is how to choose from an arsenal of surveillance and quarantine approaches in order to best control multiple dispersal pathways. Anthropogenic spread (propagules carried on humans or transported on produce or equipment) can be controlled with quarantine measures, which in turn can vary in intensity. In contrast, environmental spread processes are more difficult to control, but often have a temporal signal (e.g. seasonality) which can introduce both challenges and opportunities for surveillance and control. This leads to complex decisions regarding when, where and how to search. Recent modelling investigations of surveillance performance have optimised the output of simulation models, and found that a risk-weighted randomised search can perform close to optimally. However, exactly how quarantine and surveillance strategies should change to reflect different dispersal modes remains largely unaddressed. Here we develop a spatial simulation model of a plant fungal-pathogen incursion into an agricultural region, and its subsequent surveillance and control. We include structural differences in dispersal via the interplay of biological, environmental and anthropogenic connectivity between host sites (farms). Our objective was to gain broad insights into the relative roles played by different spread modes in propagating an invasion, and how incorporating knowledge of these spread risks may improve approaches to quarantine restrictions and surveillance. We find that broad heuristic rules for quarantine restrictions fail to contain the pathogen due to residual connectivity between sites, but surveillance measures enable early detection and successfully lead to suppression of the pathogen in all farms. 
Alternative surveillance strategies attain similar levels of performance by incorporating environmental or anthropogenic dispersal risk in the prioritisation of sites. Our model provides the basis to develop essential insights into the effectiveness of different surveillance and quarantine decisions for fungal pathogen control. Parameterised for authentic settings it will aid our understanding of how the extent and resolution of interventions should suitably reflect the spatial structure of dispersal processes.
Abstract:
The motivation behind the fusion of Intrusion Detection Systems was the realization that, with increasing traffic and increasingly complex attacks, none of the present-day stand-alone Intrusion Detection Systems can meet the demand for a very high detection rate together with an extremely low false positive rate. Multi-sensor fusion can meet these requirements by refining the combined response of different Intrusion Detection Systems. In this paper, we show how to design sensor fusion to best utilize the useful responses from multiple sensors through an appropriate adjustment of the fusion threshold. The threshold is generally chosen from past experience or by an expert system. In this paper, we show that choosing the threshold bounds according to the Chebyshev inequality performs better. This approach also helps to solve the problem of scalability and has the advantage of fail-safe capability. This paper theoretically models the fusion of Intrusion Detection Systems for the purpose of proving the improvement in performance, supplemented with an empirical evaluation. The combination of complementary sensors is shown to detect more attacks than the individual components. Since the individual sensors chosen detect sufficiently different attacks, their results can be merged for improved performance. The combination is done in different ways: (i) taking all the alarms from each system and avoiding duplications, (ii) taking alarms from each system by fixing threshold bounds, and (iii) rule-based fusion with a priori knowledge of the individual sensor performance. A number of evaluation metrics are used, and the results indicate that there is an overall enhancement in the performance of the combined detector using sensor fusion incorporating the threshold bounds, and significantly better performance using simple rule-based fusion.
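Combination method (i), taking all alarms from every system while avoiding duplicates, reduces to a set-union over a deduplication key. A minimal sketch, assuming an illustrative alarm schema of source, destination, and signature fields (the paper does not specify one):

```python
def union_alarms(*sensor_alarms):
    """Fusion method (i): keep every alarm raised by any IDS, but
    drop duplicates, keyed here by (source, destination, signature).
    The key fields are an assumption for illustration."""
    seen, fused = set(), []
    for alarms in sensor_alarms:
        for a in alarms:
            key = (a["src"], a["dst"], a["sig"])
            if key not in seen:
                seen.add(key)
                fused.append(a)
    return fused
```

The union maximizes the detection rate at the cost of accumulating every sensor's false positives, which is why the paper's threshold-bounded and rule-based variants exist.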
Abstract:
Aerosol particles play an important role in the Earth's atmosphere and in the climate system: they scatter and absorb solar radiation, facilitate chemical processes, and serve as seeds for cloud formation. Secondary new particle formation (NPF) is a globally important source of these particles. Currently, however, the mechanisms of particle formation and the vapors participating in this process are not truly understood. In order to fully explain atmospheric NPF and subsequent growth, we need to measure directly the very initial steps of the formation processes. This thesis investigates the possibility of studying atmospheric particle formation using the recently developed Neutral cluster and Air Ion Spectrometer (NAIS). First, the NAIS was calibrated and intercompared, and found to be in good agreement with the reference instruments both in the laboratory and in the field. It was concluded that the NAIS can be reliably used to measure small atmospheric ions and particles directly at the sizes where NPF begins. Second, several NAIS systems were deployed simultaneously at 12 European measurement sites to quantify the spatial and temporal distribution of particle formation events. The sites represented a variety of geographical and atmospheric conditions. NPF events were detected using NAIS systems at all of the sites during the year-long measurement period. Various particle formation characteristics, such as formation and growth rates, were used as indicators of the relevant processes and participating compounds in the initial formation. In the case of parallel ion and neutral cluster measurements, we also estimated the relative contributions of ion-induced and neutral nucleation to the total particle formation. At most sites, the particle growth rate increased with increasing particle size, indicating that different condensing vapors participate in the growth of different-sized particles.
The results suggest that, in addition to sulfuric acid, organic vapors contribute to the initial steps of NPF and to the subsequent growth, not just to later steps of the particle growth. As a significant new result, we found that the total particle formation rate varied much more between the different sites than the formation rate of charged particles. The results imply that ion-induced nucleation makes a minor contribution to particle formation in the boundary layer in most environments. These results give tools to better quantify the aerosol source provided by secondary NPF in various environments. The particle formation characteristics determined in this thesis can be used in global models to assess NPF's climatic effects.
Abstract:
In this paper, we consider the application of belief propagation (BP) to achieve near-optimal signal detection in large multiple-input multiple-output (MIMO) systems at low complexities. Large-MIMO architectures based on spatial multiplexing (V-BLAST) as well as non-orthogonal space-time block codes (STBC) from cyclic division algebra (CDA) are considered. We adopt graphical models based on Markov random fields (MRF) and factor graphs (FG). In the MRF based approach, we use pairwise compatibility functions although the graphical models of MIMO systems are fully/densely connected. In the FG approach, we employ a Gaussian approximation (GA) of the multi-antenna interference, which significantly reduces the complexity while achieving very good performance for large dimensions. We show that i) both MRF and FG based BP approaches exhibit large-system behavior, where increasingly close to optimal performance is achieved with an increasing number of dimensions, and ii) damping of messages/beliefs significantly improves the bit error performance.
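The damping mentioned in finding ii) is a one-line update rule worth spelling out: each outgoing BP message is replaced by a convex combination of its previous value and the freshly computed one, which stabilizes iterations on the loopy, densely connected MIMO graphs. A minimal sketch of the rule only (the message-passing schedule itself is omitted, and the damping factor is illustrative):

```python
def damp(old_msgs, new_msgs, alpha=0.5):
    """Damped BP update: m <- alpha * m_old + (1 - alpha) * m_new.
    alpha = 0 recovers plain BP; larger alpha slows but stabilizes
    convergence on densely connected graphs."""
    return [alpha * o + (1 - alpha) * n
            for o, n in zip(old_msgs, new_msgs)]
```

In practice the same damped values feed the next BP iteration, so oscillating messages on short cycles are averaged out instead of amplified.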
Abstract:
In this paper, we employ message passing algorithms over graphical models to jointly detect and decode symbols transmitted over large multiple-input multiple-output (MIMO) channels with low density parity check (LDPC) coded bits. We adopt a factor graph based technique to integrate the detection and decoding operations. A Gaussian approximation of spatial interference is used for detection. This serves as a low complexity joint detection/decoding approach for large dimensional MIMO systems coded with LDPC codes of large block lengths. This joint processing achieves significantly better performance than the individual detection and decoding scheme.
Abstract:
Detecting and quantifying the presence of human-induced climate change in regional hydrology is important for studying the impacts of such changes on water resources systems as well as for reliable future projections and policy making for adaptation. In this article a formal fingerprint-based detection and attribution analysis has been attempted to study the changes in the observed monsoon precipitation and streamflow in the rain-fed Mahanadi River Basin in India, considering the variability across different climate models. This is achieved through the use of observations, several climate model runs, a principal component analysis and regression based statistical downscaling technique, and a Genetic Programming based rainfall-runoff model. It is found that the decreases in observed hydrological variables across the second half of the 20th century lie outside the range that is expected from natural internal variability of climate alone at the 95% confidence level, for most of the climate models considered. For several climate models, such changes are consistent with those expected from anthropogenic emissions of greenhouse gases. However, unequivocal attribution to human-induced climate change cannot be claimed across all the climate models, and uncertainties in our detection procedure, arising from various sources including the use of models, cannot be ruled out. Changes in solar irradiance and volcanic activity are considered as other plausible natural external causes of climate change. The time evolution of the anthropogenic climate change "signal" in the hydrological observations, above the natural internal climate variability "noise", shows that detection of the signal is achieved earlier in streamflow than in precipitation for most of the climate models, suggesting larger impacts of human-induced climate change on streamflow than on precipitation at the river basin scale.
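The detection criterion, an observed change lying outside the range expected from natural internal variability at the 95% level, can be illustrated with a crude empirical-percentile check against trends from unforced control simulations. This is a conceptual sketch only; the actual analysis uses regression-based fingerprints, and all inputs here are illustrative.

```python
def detected(observed_trend, control_trends, level=0.95):
    """Signal detection sketch: the observed trend is declared
    inconsistent with natural internal variability if it falls
    outside the central `level` interval of trends estimated from
    unforced control-run segments."""
    s = sorted(control_trends)
    lo = s[int((1 - level) / 2 * (len(s) - 1))]
    hi = s[int((1 + level) / 2 * (len(s) - 1))]
    return observed_trend < lo or observed_trend > hi
```

Running this test at successive end-years is what produces the "earlier detection in streamflow than in precipitation" comparison: the variable whose trend exits the natural-variability envelope first is detected first.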
Abstract:
A new delaminated composite beam element is formulated for Timoshenko as well as Euler-Bernoulli beam models. Shape functions are derived from Timoshenko functions, which provides a unified formulation for slender to moderately deep beam analyses. The element is simple and easy to implement, and its results are on par with those from free mode delamination models. The Katz fractal dimension method is applied to the mode shapes obtained from the finite element models to detect delamination in the beam. The effect of finite element size on the fractal dimension method of delamination detection is quantified.
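The Katz fractal dimension has a simple closed form that makes the detection idea concrete: for a sampled waveform, KFD = log10(n) / (log10(n) + log10(d/L)), where L is the total curve length, d is the maximum distance from the first sample, and n is the number of steps. A minimal sketch (how the paper windows the mode shape along the beam is not specified in the abstract):

```python
import math

def katz_fd(y):
    """Katz fractal dimension of a 1-D waveform (e.g. a mode shape
    sampled at unit spacing). A smooth curve gives KFD near 1; the
    local kink a delamination introduces raises KFD in that region.
    KFD = log10(n) / (log10(n) + log10(d / L))."""
    n = len(y) - 1                                   # number of steps
    L = sum(math.hypot(1.0, y[i + 1] - y[i]) for i in range(n))
    d = max(math.hypot(i, y[i] - y[0]) for i in range(1, len(y)))
    return math.log10(n) / (math.log10(n) + math.log10(d / L))
```

In a sliding-window application along the beam axis, the window where KFD rises above its baseline flags the delaminated span.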