881 results for Signature Verification, Forgery Detection, Fuzzy Modeling
Abstract:
The ability to detect unusual events in surveillance footage as they happen is a highly desirable feature for a surveillance system. However, this problem remains challenging in crowded scenes due to occlusions and the clustering of people. In this paper, we propose using the Distributed Behavior Model (DBM), which has been widely used in computer graphics, for video event detection. Our approach does not rely on object tracking and is robust to camera movements. We use sparse coding for classification and test our approach on various datasets. Our proposed approach outperforms a state-of-the-art method that uses the social force model and Latent Dirichlet Allocation.
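The classification step could look like the following minimal sketch, assuming fixed-length feature vectors have already been extracted from the video; the dictionary size, sparsity level, and threshold are illustrative, and this is a generic sparse-coding anomaly detector rather than the paper's exact pipeline.

# Sparse-coding anomaly detection sketch: learn a dictionary from
# normal-event features, then flag test features whose sparse
# reconstruction error is large. Feature extraction is assumed done;
# all sizes and the threshold below are illustrative placeholders.
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
X_normal = rng.normal(size=(500, 64))   # placeholder "normal" features
X_test = rng.normal(size=(10, 64))      # placeholder test features

# Learn an overcomplete dictionary from normal events only.
dico = DictionaryLearning(n_components=128, transform_algorithm="omp",
                          transform_n_nonzero_coefs=10, random_state=0)
dico.fit(X_normal)

# Sparse-code the test features against the learned dictionary.
coder = SparseCoder(dictionary=dico.components_,
                    transform_algorithm="omp",
                    transform_n_nonzero_coefs=10)
codes_test = coder.transform(X_test)

# Reconstruction error per sample; a high error suggests an unusual event.
recon = codes_test @ dico.components_
errors = np.linalg.norm(X_test - recon, axis=1)
threshold = errors.mean() + 2 * errors.std()  # illustrative threshold
print(np.where(errors > threshold)[0])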
Abstract:
Singapore crash statistics from 2001 to 2006 show that motorcyclist fatality and injury rates per registered vehicle are, respectively, 13 and 7 times higher than those of other motor vehicles. The rate at which motorcyclists are involved in crashes as victims of other road users is also high, at about 43%. The objective of this study is to identify the factors that contribute to the fault of motorcyclists involved in crashes. This is done by using a binary logit model to differentiate between at-fault and not-at-fault cases, and the analysis is further categorized by crash location, i.e., at intersections, on expressways, and at non-intersections. A number of explanatory variables representing roadway characteristics, environmental factors, motorcycle descriptions, and rider demographics have been evaluated. The time trend effect shows that not-at-fault crash involvement of motorcyclists has increased with time. The likelihood of night-time crashes has also increased for not-at-fault crashes at intersections and on expressways. The presence of surveillance cameras is effective in reducing not-at-fault crashes at intersections. Wet road surfaces increase at-fault crash involvement at non-intersections. At intersections, not-at-fault crash involvement is more likely on single-lane roads or on the median lane of multi-lane roads, while on expressways at-fault crash involvement is more likely on the median lane. Roads with higher speed limits have higher at-fault crash involvement, and this also holds on expressways. Motorcycles with pillion passengers or with higher engine capacity have a higher likelihood of being at fault in crashes on expressways. Motorcyclists are more likely to be at fault in collisions involving pedestrians, and this effect is stronger at night. In multi-vehicle crashes, motorcyclists are more likely to be victims than at fault. Young and older riders are more likely to be at fault in crashes than middle-aged riders. The findings of this study will help to develop more targeted countermeasures to improve motorcycle safety and more cost-effective safety awareness programs in motorcyclist training.
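As a rough illustration of the modeling approach, a binary logit of at-fault status on a few crash covariates might be sketched as follows; the column names and synthetic data are placeholders, not the study's variables or results.

# Binary logit sketch for at-fault (1) vs. not-at-fault (0) crash
# involvement; all covariates and data below are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "night": rng.integers(0, 2, n),          # crash occurred at night
    "wet_surface": rng.integers(0, 2, n),    # wet road surface
    "speed_limit": rng.choice([50, 70, 90], n),
    "engine_cc": rng.choice([125, 200, 400], n),
    "rider_age": rng.integers(18, 70, n),
})
# Synthetic outcome, generated purely for demonstration.
logit_true = -2 + 0.01 * df["speed_limit"] + 0.5 * df["wet_surface"]
df["at_fault"] = rng.random(n) < 1 / (1 + np.exp(-logit_true))

X = sm.add_constant(df[["night", "wet_surface", "speed_limit",
                        "engine_cc", "rider_age"]])
model = sm.Logit(df["at_fault"].astype(int), X).fit(disp=0)
print(model.summary())  # odds ratios via np.exp(model.params)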
Abstract:
This paper investigates the effects of limited speech data in the context of speaker verification using a probabilistic linear discriminant analysis (PLDA) approach. Being able to reduce the length of required speech data is important to the development of automatic speaker verification systems in real-world applications. When sufficient speech is available, previous research has shown that heavy-tailed PLDA (HTPLDA) modeling of speakers in the i-vector space provides state-of-the-art performance; however, the robustness of HTPLDA to limited speech resources in development, enrolment, and verification is an important issue that has not yet been investigated. In this paper, we analyze speaker verification performance with regard to the duration of utterances used for speaker evaluation (enrolment and verification) as well as for score normalization and PLDA modeling during development. Two different approaches to total-variability representation are analyzed within the PLDA framework to show improved performance in short-utterance mismatched evaluation conditions and in conditions for which insufficient speech resources are available for adequate system development. The results presented in this paper, using the NIST 2008 Speaker Recognition Evaluation dataset, suggest that the HTPLDA system continues to achieve better performance than Gaussian PLDA (GPLDA) as evaluation utterance lengths are decreased. We also highlight the importance of matching the durations used for score normalization and PLDA modeling to the expected evaluation conditions. Finally, we found that a pooled total-variability approach to PLDA modeling can achieve better performance than the traditional concatenated total-variability approach for short utterances in mismatched evaluation conditions and in conditions for which insufficient speech resources are available for adequate system development.
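For reference, the standard simplified Gaussian PLDA generative model for an i-vector can be written as below; this is the common textbook formulation, sketched for orientation rather than the exact variant used in the paper.

\phi = \mu + V y + \epsilon, \qquad y \sim \mathcal{N}(0, I), \qquad \epsilon \sim \mathcal{N}(0, \Sigma)

Here \phi is the i-vector, \mu the global mean, V the speaker subspace, y the latent speaker factor shared across a speaker's utterances, and \epsilon the residual. HTPLDA replaces the Gaussian priors with heavy-tailed Student's t distributions, which is what gives it robustness to outlying observations.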
Abstract:
The Poisson distribution has often been used for count data such as accident data. The Negative Binomial (NB) distribution has been adopted for count data to address the over-dispersion problem. However, Poisson and NB distributions are incapable of accounting for some unobserved heterogeneities due to spatial and temporal effects in accident data. To overcome this problem, Random Effect models have been developed. Another challenge with existing traffic accident prediction models is the presence of excess zero accident observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model is capable of handling the dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities that are the basic motivations for Random Effect models. This paper proposes an effective way of fitting a ZIP model with location-specific random effects, and Bayesian analysis is recommended for model calibration and assessment.
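The dual-state structure referred to above is the standard ZIP likelihood; with a location-specific random intercept it can be sketched as follows, with notation chosen here for illustration rather than taken from the paper.

\Pr(Y_{it} = 0) = \pi_i + (1 - \pi_i)\, e^{-\lambda_{it}},
\qquad
\Pr(Y_{it} = k) = (1 - \pi_i)\, \frac{\lambda_{it}^{k}\, e^{-\lambda_{it}}}{k!}, \quad k \ge 1,
\qquad
\log \lambda_{it} = \mathbf{x}_{it}^{\top} \boldsymbol{\beta} + u_i, \quad u_i \sim \mathcal{N}(0, \sigma_u^2)

Here \pi_i is the zero-state probability at location i and the random effect u_i induces the within-location correlation that the plain ZIP model cannot capture.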
Abstract:
This study proposes a full Bayes (FB) hierarchical modeling approach to traffic crash hotspot identification. The FB approach is able to account for all uncertainties associated with crash risk and various risk factors by estimating a posterior distribution of site safety, on which various ranking criteria can be based. Moreover, through hierarchical model specification, the FB approach can flexibly take into account various heterogeneities of crash occurrence due to spatiotemporal effects on traffic safety. Using Singapore intersection crash data (1997-2006), an empirical evaluation was conducted to compare the proposed FB approach with state-of-the-art approaches. Results show that the Bayesian hierarchical models accommodating site-specific effects and serial correlation have better goodness-of-fit than non-hierarchical models. Furthermore, all model-based approaches perform significantly better in safety ranking than the naive approach using raw crash counts. The FB hierarchical models were found to significantly outperform the standard EB approach in correctly identifying hotspots.
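One generic form of the hierarchical specification described above, with a site effect plus first-order serial correlation, can be sketched with illustrative notation; this is not necessarily the paper's exact parameterization.

Y_{it} \sim \text{Poisson}(\lambda_{it}), \qquad \log \lambda_{it} = \mathbf{x}_{it}^{\top} \boldsymbol{\beta} + s_i + e_{it},
\qquad
s_i \sim \mathcal{N}(0, \sigma_s^2), \qquad e_{it} = \rho\, e_{i,t-1} + \nu_{it}, \quad \nu_{it} \sim \mathcal{N}(0, \sigma_\nu^2)

Here s_i captures the site-specific effect and the AR(1) term e_{it} captures serial correlation; sites are then ranked by functionals of the posterior distribution of \lambda_{it} (or of s_i), which is what allows the ranking criteria to reflect the full uncertainty.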
Abstract:
Due to the grave potential human, environmental, and economic consequences of collisions at sea, collision avoidance has become an important safety concern in navigation. To reduce the risk of collisions at sea, appropriate collision avoidance actions need to be taken in accordance with the regulations, i.e., the International Regulations for Preventing Collisions at Sea. However, the regulations provide only qualitative rules and guidelines, and therefore navigators must decide on collision avoidance actions quantitatively using their own judgment, which often leads to errors in navigation. To better help navigators in collision avoidance, this paper develops a comprehensive collision avoidance decision-making model that indicates whether a collision avoidance action is required, when to take action, and what action to take. The model is developed for three types of collision avoidance actions: course change only, speed change only, and a combination of both. The model has the potential to reduce the chance of human error in navigation by assisting navigators in deciding on collision avoidance actions.
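Quantitative collision avoidance decisions of this kind typically build on closest-point-of-approach geometry; the sketch below computes DCPA and TCPA for straight-line tracks, with the 0.5 nm / 20 min decision rule as an illustrative placeholder rather than the paper's model.

# Closest-point-of-approach sketch for two vessels on straight tracks.
import numpy as np

def cpa(p_own, v_own, p_tgt, v_tgt):
    """Return (TCPA in hours, DCPA in nm) for constant-velocity tracks.

    p_* are positions (nm) and v_* velocities (kn) as 2D numpy arrays.
    """
    dp = p_tgt - p_own            # relative position
    dv = v_tgt - v_own            # relative velocity
    dv2 = dv @ dv
    if dv2 < 1e-12:               # same velocity: range never changes
        return np.inf, float(np.linalg.norm(dp))
    tcpa = -(dp @ dv) / dv2       # time of closest approach
    dcpa = float(np.linalg.norm(dp + dv * max(tcpa, 0.0)))
    return float(tcpa), dcpa

tcpa, dcpa = cpa(np.array([0.0, 0.0]), np.array([0.0, 12.0]),
                 np.array([2.0, 3.0]), np.array([-8.0, 0.0]))
# Illustrative rule: act if the target will pass within 0.5 nm
# within the next 20 minutes.
if 0.0 <= tcpa <= 20 / 60 and dcpa < 0.5:
    print("collision avoidance action required")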
Abstract:
Navigational collisions are one of the major safety concerns for many seaports. The continuing growth of shipping traffic in number and size is likely to result in an increased number of traffic movements, which consequently could result in a higher risk of collisions in these restricted waters. This continually increasing safety concern warrants a comprehensive technique for modeling collision risk in port waters, particularly for modeling the probability of collision events and the associated consequences (i.e., injuries and fatalities). A number of techniques have been utilized for modeling the risk qualitatively, semi-quantitatively, and quantitatively. These traditional techniques mostly rely on historical collision data, often in conjunction with expert judgments. However, they are hampered by several shortcomings, such as the randomness and rarity of collision occurrences, which leads to insufficient collision counts for sound statistical analysis; insufficiency in explaining collision causation; and a reactive approach to safety. A promising alternative approach that overcomes these shortcomings is the navigational traffic conflict technique (NTCT), which uses traffic conflicts as an alternative to collisions for modeling the probability of collision events quantitatively. This article explores the existing techniques for modeling collision risk in port waters. In particular, it identifies the advantages and limitations of the traditional techniques and highlights the potential of the NTCT in overcoming those limitations. In view of the principles of the NTCT, a structured method for managing collision risk is proposed. This risk management method allows safety analysts to diagnose safety deficiencies in a proactive manner, and consequently has great potential for managing collision risk in a fast, reliable, and efficient manner.
Abstract:
Complexity is a major concern that people aim to overcome through modeling. One way of reducing complexity is separation of concerns, e.g., separating business processes from applications. One type of concern is cross-cutting concerns, i.e., concerns that are scattered and tangled across one or several models. In business process management, examples of such concerns are security and privacy policies. To deal with these cross-cutting concerns, the aspect-oriented approach was introduced in the software development area and, more recently, in the business process management area. The work presented in this paper elaborates on aspect-oriented process modeling. It extends earlier work by defining a mechanism for capturing multiple concerns and specifying a precedence order according to which they should be handled in a process. A formal syntax of the notation is presented, precisely capturing the extended concepts and mechanisms. Finally, the relevance of the approach is demonstrated through a case study.
Abstract:
Several track-before-detect approaches for image-based aircraft detection have recently been examined in an important automated aircraft collision detection application. A particularly popular approach is a two-stage processing paradigm consisting of a morphological spatial filter stage (which aims to emphasize the visual characteristics of targets) followed by a temporal or track filter stage (which aims to emphasize the temporal characteristics of targets). In this paper, we propose new spot detection techniques for this two-stage processing paradigm that fuse together raw and morphological images, or fuse together various different morphological images (we call these approaches morphological reinforcement). On the basis of flight test data, the proposed morphological reinforcement operations are shown to offer superior signal-to-noise characteristics compared to standard spatial filter options (such as the close-minus-open and adaptive contour morphological operations). However, system operating characteristic curves, which examine detection versus false-alarm characteristics after both processing stages, illustrate that system performance is very data dependent.
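For orientation, the close-minus-open (CMO) baseline named above is simply the greyscale closing of a frame minus its greyscale opening; a minimal sketch follows, in which the synthetic frame and structuring-element size are illustrative (the paper's fusion operators are not reproduced here).

# Close-minus-open (CMO) spot detector: greyscale closing minus opening
# highlights small targets against a slowly varying background.
import numpy as np
from scipy.ndimage import grey_closing, grey_opening

rng = np.random.default_rng(2)
frame = rng.normal(100.0, 2.0, size=(240, 320))  # synthetic sky background
frame[120, 160] += 40.0                          # a dim point target

se = (5, 5)  # structuring-element size, chosen larger than the target
cmo = grey_closing(frame, size=se) - grey_opening(frame, size=se)
y, x = np.unravel_index(np.argmax(cmo), cmo.shape)
print(y, x)  # the target pixel should stand out in the CMO response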
Abstract:
An array of substrates links the tryptic serine protease, kallikrein-related peptidase 14 (KLK14), to physiological functions including desquamation and activation of signaling molecules associated with inflammation and cancer. Recognition of protease cleavage sequences is driven by complementarity between exposed substrate motifs and the physicochemical signature of an enzyme's active site cleft. However, conventional substrate screening methods have generated conflicting subsite profiles for KLK14. This study utilizes a recently developed screening technique, the sparse matrix library, to identify five novel high-efficiency sequences for KLK14. The optimal sequence, YASR, was cleaved with higher efficiency (kcat/Km = 3.81 ± 0.4 × 10⁶ M⁻¹ s⁻¹) than the favored substrates from positional scanning and phage display, by 2- and 10-fold, respectively. Binding site cooperativity was prominent among preferred sequences, which enabled optimal interaction at all subsites, as indicated by predictive modeling of KLK14/substrate complexes. These simulations constitute the first molecular dynamics analysis of KLK14 and offer a structural rationale for the divergent subsite preferences evident between KLK14 and the closely related KLKs, KLK4 and KLK5. Collectively, these findings highlight the importance of binding site cooperativity in protease substrate recognition, which has implications for the discovery of optimal substrates and the engineering of highly effective protease inhibitors.
Abstract:
The quick detection of abrupt (unknown) parameter changes in an observed hidden Markov model (HMM) is important in several applications. Motivated by the recent application of relative entropy concepts to the robust sequential change detection problem (and the related model selection problem), this paper proposes a sequential unknown-change detection algorithm based on a relative-entropy-based HMM parameter estimator. Our proposed approach is able to overcome the lack of knowledge of post-change parameters, and is shown, on both simulated and real data in a vision-based aircraft manoeuvre detection problem, to perform similarly to the popular cumulative sum (CUSUM) algorithm (which requires knowledge of the post-change parameter values).
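For comparison, the classical CUSUM recursion that the paper benchmarks against accumulates log-likelihood ratios and alarms at a threshold; below is a minimal sketch for a known Gaussian mean change, where the distributions and threshold are illustrative rather than taken from the paper.

# CUSUM sketch: g_k = max(0, g_{k-1} + log(f1(x_k)/f0(x_k))); declare
# a change when g_k exceeds a threshold h. Pre- and post-change
# densities are assumed known Gaussians here (illustrative).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(0.0, 1.0, 200),   # pre-change samples
                    rng.normal(1.0, 1.0, 200)])  # post-change samples

f0, f1 = norm(0.0, 1.0), norm(1.0, 1.0)
h = 10.0          # alarm threshold (illustrative)
g = 0.0
for k, xk in enumerate(x):
    g = max(0.0, g + f1.logpdf(xk) - f0.logpdf(xk))
    if g > h:
        print("change declared at sample", k)  # true change at k = 200
        break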