975 results for Sequential Gaussian simulation
Abstract:
Future vehicle navigation for safety applications requires seamless positioning with sub-meter accuracy or better. However, standalone Global Positioning System (GPS) and Differential GPS (DGPS) solutions suffer from outages in restricted areas such as high-rise urban environments and tunnels, where satellite signals are blocked. Smoothed DGPS can provide sub-meter positioning accuracy but cannot meet the seamlessness requirement. Traditional navigation aids such as Dead Reckoning and onboard Inertial Measurement Units are either not accurate enough, due to error accumulation, or too expensive to be acceptable to mass-market vehicle users. An alternative is to use wireless infrastructure installed along the roadside to locate vehicles in regions where Global Navigation Satellite System (GNSS) signals are not available (for example, inside tunnels, urban canyons and large indoor car parks). Examples of roadside infrastructure that could potentially be used for positioning include Wireless Local Area Network (WLAN)/Wireless Personal Area Network (WPAN) based positioning systems, Ultra-wideband (UWB) based positioning systems, Dedicated Short Range Communication (DSRC) devices, Locata’s positioning technology, and accurate road-surface height information over selected road segments such as tunnels. This research reviews and compares the wireless technologies that could be installed along the roadside for positioning purposes. Models and algorithms for integrating the different positioning technologies are also presented. Various simulation schemes are designed to examine the performance benefits of integrating GNSS and roadside infrastructure for vehicle positioning. These experimental studies yield a number of useful findings. In open road environments where sufficient satellite signals can be obtained, the roadside wireless measurements contribute very little to the improvement of positioning accuracy at the sub-meter level, especially in the dual-constellation cases. In restricted outdoor environments where only a few GPS satellites, such as those with elevation angles above 45°, can be received, the roadside distance measurements help improve both positioning accuracy and availability to the sub-meter level. When the vehicle is travelling in tunnels with known tunnel-surface heights and roadside distance measurements, sub-meter horizontal positioning accuracy is also achievable. Overall, the simulation results demonstrate that roadside infrastructure indeed has the potential to provide sub-meter vehicle position solutions for certain road safety applications, provided that properly deployed roadside measurements are obtainable.
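To make the integration idea concrete, the sketch below shows one common way range measurements enter a position solution: an iterated (Gauss-Newton) least-squares fix over ranges to anchors at known positions, whether those anchors are satellites or roadside devices. The anchor coordinates and tunnel geometry are hypothetical, and the abstract's actual models are not reproduced; this is only a minimal illustration of range-based positioning.

```python
import numpy as np

def solve_position(anchors, ranges, x0, iters=10):
    """Gauss-Newton least squares: estimate a 3D position from range
    measurements to anchors (satellites or roadside devices) at known
    positions."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = np.linalg.norm(anchors - x, axis=1)   # predicted ranges
        H = (x - anchors) / d[:, None]            # Jacobian: unit line-of-sight rows
        dx, *_ = np.linalg.lstsq(H, ranges - d, rcond=None)
        x += dx
    return x

# Hypothetical example: four roadside anchors in a tunnel cross-section
anchors = np.array([[0.0, 0.0, 5.0], [30.0, 0.0, 5.0],
                    [0.0, 8.0, 5.0], [30.0, 8.0, 5.0]])
truth = np.array([12.0, 4.0, 1.5])
ranges = np.linalg.norm(anchors - truth, axis=1)  # noise-free ranges
print(solve_position(anchors, ranges, x0=[15.0, 4.0, 2.0]))
```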
Abstract:
Exploiting wind energy is one possible way to extend flight duration for Unmanned Aerial Vehicles. Wind energy can also be used to minimise energy consumption over a planned path. In this paper, we consider uncertain time-varying wind fields and plan a path through them. A Gaussian distribution is used to model the uncertainty in the time-varying wind fields. We use a Markov Decision Process (MDP) to plan a path based upon this Gaussian uncertainty. Simulation results are presented that compare the direct line of flight between the start and target points with our planned path, in terms of energy consumption and time of travel. The result is a robust path built from the most-visited cells when sampling the Gaussian distribution of the wind field in each cell.
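As a rough illustration of this planning approach, the sketch below runs value iteration on a small grid in which the energy cost of entering each cell is drawn from a per-cell Gaussian and the expected cost is estimated by sampling. The grid size, cost ranges and deterministic transitions are assumptions for illustration, not the paper's actual MDP formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 10, 10
# Hypothetical wind field: Gaussian energy cost for entering each cell
mu = rng.uniform(1.0, 3.0, (H, W))
sigma = rng.uniform(0.1, 0.5, (H, W))

# Monte Carlo estimate of the expected (non-negative) entry cost per cell
samples = rng.normal(mu, sigma, (100, H, W))
cost = np.maximum(samples, 0.0).mean(axis=0)

goal = (H - 1, W - 1)
moves = [(-1, 0), (1, 0), (0, -1), (0, 1)]

# Value iteration: V[r, c] = minimal expected energy-to-go from cell (r, c)
V = np.full((H, W), np.inf)
V[goal] = 0.0
for _ in range(H * W):                 # enough sweeps to converge on this grid
    for r in range(H):
        for c in range(W):
            if (r, c) == goal:
                continue
            best = np.inf
            for dr, dc in moves:
                nr, nc = r + dr, c + dc
                if 0 <= nr < H and 0 <= nc < W:
                    best = min(best, cost[nr, nc] + V[nr, nc])
            V[r, c] = best
print(V[0, 0])                         # expected energy cost of the planned path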
Abstract:
In this paper, we apply a simulation-based approach to estimating transmission rates of nosocomial pathogens. In particular, the objective is to infer the transmission rate between colonised health-care practitioners and uncolonised patients (and vice versa) solely from routinely collected incidence data. The method, using approximate Bayesian computation, is substantially less computationally intensive and easier to implement than the likelihood-based approaches we refer to here. We find that, by replacing the likelihood with a comparison of an efficient summary statistic between observed and simulated data, little is lost in the precision of the estimated transmission rates. Furthermore, we investigate the impact of incorporating uncertainty in previously fixed parameters on the precision of the estimated transmission rates.
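The core of approximate Bayesian computation can be conveyed in a few lines: draw a candidate transmission rate from the prior, simulate incidence data, and accept the draw if a summary statistic of the simulated data is close enough to that of the observed data. The toy colonisation model, prior, summary statistic and tolerance below are all assumptions for illustration; the paper's actual model is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_incidence(beta, n_days=200, n_patients=20):
    """Toy colonisation model (an assumption, not the paper's model):
    each day, each uncolonised patient becomes colonised with
    probability beta * colonised / n_patients."""
    colonised = 1
    incidence = []
    for _ in range(n_days):
        p = min(beta * colonised / n_patients, 1.0)
        new = rng.binomial(n_patients - colonised, p)
        colonised = min(colonised + new, n_patients)
        incidence.append(new)
    return np.array(incidence)

beta_true = 0.15
observed = simulate_incidence(beta_true)
s_obs = observed.sum()                      # summary statistic: total cases

# Rejection ABC: accept prior draws whose simulated summary is close to s_obs
accepted = []
for _ in range(5000):
    beta = rng.uniform(0, 0.5)              # uniform prior (assumption)
    s_sim = simulate_incidence(beta).sum()
    if abs(s_sim - s_obs) <= 5:             # tolerance on the summary distance
        accepted.append(beta)
print(np.mean(accepted), np.std(accepted))  # approximate posterior mean / sd
```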
Abstract:
Fusion techniques have received considerable attention for achieving lower error rates with biometrics. A fused classifier architecture based on sequential integration of multi-instance and multi-sample fusion schemes allows a controlled trade-off between false alarms and false rejects. Expressions for each type of error in the fused system have previously been derived for the case of statistically independent classifier decisions. This paper shows that the performance of the architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of the fusion model parameters 'N', the number of classifiers, and 'M', the number of attempts/samples, and facilitates the determination of error bounds for false rejects and false accepts for each specific user. The error trade-off performance of the architecture is evaluated using HMM-based speaker verification on utterances of individual digits. Results show that performance is improved in the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings, such as credit card numbers, in telephone or voice over internet protocol based applications. It is also applicable to other biometric modalities such as fingerprints and handwriting samples.
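Under the independence assumption this abstract starts from, the fused error rates of the sequential architecture have a closed form: an impostor is falsely accepted only by passing all N instances, where an instance is passed if any of its M samples is accepted; a client is falsely rejected if all M samples fail at some instance. A minimal sketch with hypothetical per-attempt rates:

```python
def fused_error_rates(far, frr, N, M):
    """Fused FAR/FRR for N instances, each allowing up to M attempts,
    assuming statistically independent per-attempt decisions."""
    # An impostor passes an instance if any of M attempts is accepted,
    # and must pass all N instances to be falsely accepted.
    far_fused = (1 - (1 - far) ** M) ** N
    # A client is rejected only if all M attempts at some instance fail.
    frr_fused = 1 - (1 - frr ** M) ** N
    return far_fused, frr_fused

print(fused_error_rates(far=0.05, frr=0.05, N=7, M=3))  # illustrative rates
```

Modelling the correlation between decisions, as the paper does, replaces these product terms with joint probabilities.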
Abstract:
Fusion techniques have received considerable attention for achieving performance improvement with biometrics. While a multi-sample fusion architecture reduces false rejects, it also increases false accepts. This impact on performance also depends on the nature of subsequent attempts, i.e., random or adaptive. Expressions for error rates are presented and experimentally evaluated in this work by considering the multi-sample fusion architecture for text-dependent speaker verification using HMM-based, digit-dependent speaker models. Analysis incorporating correlation modelling demonstrates that the use of adaptive samples improves overall fusion performance compared with randomly repeated samples. For a text-dependent speaker verification system using digit strings, sequential decision fusion of seven instances with three random samples is shown to reduce the overall error of the verification system by 26%, which can be reduced by a further 6% with adaptive samples. This analysis, novel in its treatment of random and adaptive multiple presentations within a sequential fused decision architecture, is also applicable to other biometric modalities such as fingerprints and handwriting samples.
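One way to see the random-versus-adaptive distinction is to let each retry shift the per-attempt client accept probability, as in the Monte Carlo sketch below. The probabilities and the additive 'gain' model of adaptation are illustrative assumptions, not the paper's estimates:

```python
import numpy as np

rng = np.random.default_rng(2)

def client_reject_rate(p_first, M, gain, trials=100_000):
    """P(client rejected on all M attempts). 'gain' models adaptive
    attempts: each retry raises the per-attempt accept probability;
    gain = 0 recovers independent random repeats."""
    rejected = 0
    for _ in range(trials):
        p = p_first
        for _ in range(M):
            if rng.random() < p:
                break                      # accepted on this attempt
            p = min(p + gain, 1.0)         # adaptive feedback (assumption)
        else:
            rejected += 1                  # all M attempts failed
    return rejected / trials

print("random  :", client_reject_rate(0.9, M=3, gain=0.0))
print("adaptive:", client_reject_rate(0.9, M=3, gain=0.03))
```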
Abstract:
Statistical dependence between classifier decisions has often been shown to improve performance over statistically independent decisions. Though the solution for favourable dependence between two classifier decisions has been derived, the theoretical analysis for the general case of 'n' client and impostor decisions has not been presented before. This paper presents expressions developed for favourable dependence in multi-instance and multi-sample fusion schemes that employ 'AND' and 'OR' rules. The expressions are experimentally evaluated by considering the proposed architecture for text-dependent speaker verification using HMM-based, digit-dependent speaker models. The improvement in fusion performance is found to be higher when digit combinations with favourable client and impostor decisions are used for speaker verification. The total error rate of 20% for fusion of independent decisions is reduced to 2.1% for fusion of decisions that are favourable for both clients and impostors. The expressions developed here are also applicable to other biometric modalities, such as fingerprints and handwriting samples, for reliable identity verification.
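For reference, the standard independent-case expressions for 'AND' and 'OR' fusion of n decisions, together with the pairwise form in which a correlation term enters, are given below; the paper's general n-decision correlated derivations are not reproduced here.

```latex
% AND rule: accept only if all n decisions accept
\mathrm{FAR}_{\wedge} = \prod_{i=1}^{n} \mathrm{FAR}_i,
\qquad
\mathrm{FRR}_{\wedge} = 1 - \prod_{i=1}^{n} \left(1 - \mathrm{FRR}_i\right)

% OR rule: accept if any of the n decisions accepts
\mathrm{FAR}_{\vee} = 1 - \prod_{i=1}^{n} \left(1 - \mathrm{FAR}_i\right),
\qquad
\mathrm{FRR}_{\vee} = \prod_{i=1}^{n} \mathrm{FRR}_i

% Two correlated decisions with acceptance probabilities p_1, p_2 and
% correlation coefficient rho; rho > 0 for client decisions is favourable
% under the AND rule:
P(A_1 \wedge A_2) = p_1 p_2 + \rho \sqrt{p_1 (1 - p_1)\, p_2 (1 - p_2)}
```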
Abstract:
Information and communication technology (ICT) systems are almost ubiquitous in the modern world. It is hard to identify any industry, or for that matter any part of society, that is not in some way dependent on these systems and their continued secure operation. The security of information infrastructures, on both an organisational and a societal level, is therefore of critical importance. Information security risk assessment is an essential part of ensuring that these systems are appropriately protected and positioned to deal with a rapidly changing threat environment. The complexity of these systems and their inter-dependencies, however, introduces a similar complexity to the information security risk assessment task. This complexity suggests that information security risk assessment cannot, optimally, be undertaken manually. Information security risk assessment for individual components of the information infrastructure can be aided by the use of a software tool, a type of simulation, which concentrates on modelling failure rather than normal operation. Avoiding the modelling of the operational system again reduces the complexity of the assessment task. The use of such a tool provides the opportunity to reuse information in many different ways by developing a repository of relevant information to aid in risk assessment and management, and in governance and compliance activities. Widespread use of such a tool would allow the risk models developed for individual information infrastructure components to be connected, in order to develop a model of information security exposures across the entire information infrastructure. In this thesis, conceptual and practical aspects of risk and its underlying epistemology are analysed to produce a model suitable for application to information security risk assessment. Based on this work, prototype software has been developed to explore these concepts for information security risk assessment. Initial work has been carried out to investigate the use of this software for information security compliance and governance activities. Finally, an initial concept for extending the use of this approach across an information infrastructure is presented.
Abstract:
Percolation flow problems arise in many research fields, such as seepage hydraulics, groundwater hydraulics, groundwater dynamics and fluid dynamics in porous media. Many physical processes appear to exhibit fractional-order behaviour that may vary with time, with space, or with both. The theory of pseudodifferential operators and equations has been used to deal with this situation. In this paper we use a fractional Darcy's law with variable-order Riemann-Liouville fractional derivatives, which leads to a new two-dimensional variable-order fractional percolation equation. A new implicit numerical method and an alternating direction method for this two-dimensional variable-order fractional model are proposed. Consistency, stability and convergence of the implicit finite difference method are established. Finally, some numerical examples are given; the numerical results demonstrate the effectiveness of the methods. The technique can also be used to simulate a three-dimensional variable-order fractional percolation equation.
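The flavour of such implicit schemes can be shown on a simpler case: a one-dimensional, constant-order space-fractional diffusion equation discretised with shifted Grünwald weights and stepped with backward Euler. The paper's variable-order, two-dimensional alternating-direction scheme is not reproduced; the order, diffusivity and grid below are illustrative assumptions.

```python
import numpy as np

def grunwald_weights(alpha, n):
    """Coefficients g_k = (-1)^k C(alpha, k) of the Grünwald expansion."""
    g = np.empty(n)
    g[0] = 1.0
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 - alpha) / k
    return g

def implicit_step(u, alpha, kappa, dx, dt):
    """One backward-Euler step of u_t = kappa * D^alpha u (1 < alpha <= 2),
    with the left-sided derivative assembled from shifted Grünwald weights
    and implicit zero boundary values."""
    n = len(u)
    g = grunwald_weights(alpha, n + 1)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(0, i + 2):            # shifted stencil: columns 0..i+1
            if j < n:
                D[i, j] = g[i - j + 1]
    A = np.eye(n) - dt * kappa / dx**alpha * D
    return np.linalg.solve(A, u)

x = np.linspace(0.0, 1.0, 101)
u = np.exp(-200 * (x - 0.5) ** 2)            # initial pulse
u = implicit_step(u, alpha=1.8, kappa=0.5, dx=x[1] - x[0], dt=1e-3)
print(u.max())
```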
Abstract:
The quick detection of abrupt (unknown) parameter changes in an observed hidden Markov model (HMM) is important in several applications. Motivated by the recent application of relative entropy concepts to the robust sequential change detection problem (and the related model selection problem), this paper proposes a sequential unknown-change detection algorithm based on a relative-entropy-based HMM parameter estimator. Our proposed approach is able to overcome the lack of knowledge of post-change parameters, and is shown to have performance similar to the popular cumulative sum (CUSUM) algorithm (which requires knowledge of the post-change parameter values) when examined, on both simulated and real data, in a vision-based aircraft manoeuvre detection problem.
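For context, the CUSUM baseline mentioned above accumulates log-likelihood ratios of post- versus pre-change models and raises an alarm when the statistic crosses a threshold. The sketch below applies it to i.i.d. Gaussian observations with a known mean shift, a deliberate simplification of the paper's HMM setting:

```python
import numpy as np

def cusum(x, mu0, mu1, sigma, h):
    """Classical CUSUM for a known Gaussian mean shift mu0 -> mu1:
    accumulate per-sample log-likelihood ratios, reset at zero, and
    alarm when the statistic exceeds threshold h."""
    g = 0.0
    for k, xk in enumerate(x):
        llr = (mu1 - mu0) / sigma**2 * (xk - (mu0 + mu1) / 2)
        g = max(0.0, g + llr)
        if g > h:
            return k                     # alarm time (sample index)
    return None                          # no alarm raised

rng = np.random.default_rng(3)
data = np.concatenate([rng.normal(0, 1, 100),   # pre-change regime
                       rng.normal(1, 1, 100)])  # change at sample 100
print(cusum(data, mu0=0.0, mu1=1.0, sigma=1.0, h=5.0))
```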
Abstract:
A number of mathematical models investigating certain aspects of the complicated process of wound healing have been reported in the literature in recent years. However, effective numerical methods and supporting error analysis for the fractional equations that describe the process of wound healing are still limited. In this paper, we consider numerical simulation of a fractional model based on the coupled advection-diffusion equations for cell and chemical concentration in a polar coordinate system. The space fractional derivatives are defined in the left and right Riemann-Liouville sense. The fractional orders in the advection and diffusion terms belong to the intervals (0, 1) or (1, 2], respectively. Several numerical techniques are used. Firstly, the coupled advection-diffusion equations are decoupled into a single space-fractional advection-diffusion equation in a polar coordinate system. Secondly, we propose a new implicit difference method for simulating this equation, using the equivalence of the Riemann-Liouville and Grünwald-Letnikov fractional derivative definitions. Thirdly, its stability and convergence are discussed. Finally, some numerical results are given to demonstrate the theoretical analysis.
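The Grünwald-Letnikov form referred to above, which for suitably smooth functions is equivalent to the left Riemann-Liouville derivative and makes it computable on a grid, is:

```latex
\frac{\partial^{\alpha} u(x,t)}{\partial x^{\alpha}}
  = \lim_{h \to 0^{+}} \frac{1}{h^{\alpha}}
    \sum_{k=0}^{\lfloor x/h \rfloor} (-1)^{k} \binom{\alpha}{k}\, u(x - kh,\, t)
```

Truncating the sum at a finite step h yields the weights used in implicit difference schemes such as the one sketched after the percolation abstract above.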
Abstract:
Fractional-order dynamics in physics, particularly when applied to diffusion, leads to an extension of the concept of Brownian motion through a generalization of the Gaussian probability function to what is termed anomalous diffusion. As MRI is applied with increasing temporal and spatial resolution, the spin dynamics are being examined more closely; such examinations extend our knowledge of biological materials through a detailed analysis of relaxation time distribution and water diffusion heterogeneity. Here the dynamic models become more complex as they attempt to correlate new data with a multiplicity of tissue compartments where processes are often anisotropic. Anomalous diffusion in the human brain has been investigated using fractional-order calculus. Recently, a new diffusion model was proposed by solving the Bloch-Torrey equation using fractional-order calculus with respect to time and space (see R.L. Magin et al., J. Magnetic Resonance, 190 (2008) 255-270). However, effective numerical methods and supporting error analyses for the fractional Bloch-Torrey equation are still limited. In this paper, the space and time fractional Bloch-Torrey equation (ST-FBTE) is considered. The time and space derivatives in the ST-FBTE are replaced by the Caputo and the sequential Riesz fractional derivatives, respectively. Firstly, we derive an analytical solution for the ST-FBTE with initial and boundary conditions on a finite domain. Secondly, we propose an implicit numerical method (INM) for the ST-FBTE, and investigate its stability and convergence. We prove that the implicit numerical method for the ST-FBTE is unconditionally stable and convergent. Finally, we present some numerical results that support our theoretical analysis.
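For reference, the Caputo time derivative mentioned above is the standard definition below (written here for an order 0 < α < 1, an assumption since the abstract does not state the range):

```latex
\frac{\partial^{\alpha} f(t)}{\partial t^{\alpha}}
  = \frac{1}{\Gamma(1-\alpha)}
    \int_{0}^{t} \frac{f'(\tau)}{(t-\tau)^{\alpha}} \, d\tau ,
  \qquad 0 < \alpha < 1
```

Unlike the Riemann-Liouville form, the Caputo derivative of a constant is zero, which is why it pairs naturally with ordinary initial conditions.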
Abstract:
A breaker restrike is an abnormal arcing phenomenon that can lead to breaker failure. Such a failure interrupts the transmission and distribution of the electricity supply until the breaker is replaced. Before 2008, there was little evidence in the literature of monitoring techniques based on the measurement and interpretation of restrikes produced during the switching of capacitor banks and shunt reactor banks in power systems. In 2008, a non-intrusive radiometric restrike measurement method and a restrike hardware detection algorithm were developed by M.S. Ramli and B. Kasztenny. However, the radiometric measurement method has a band-limited frequency response as well as limitations in amplitude determination. Current restrike detection methods and algorithms require the use of wide-bandwidth current transformers and high-voltage dividers. A restrike switch model using the Alternative Transients Program (ATP) and Wavelet Transforms to support diagnostics is proposed, making restrike phenomena the basis of a new diagnostic process that uses measurements, ATP and Wavelet Transforms for online interrupter monitoring. This research project investigates the restrike switch model parameter 'A', the dielectric voltage gradient, for normal and slowed cases of the contact opening velocity and the resulting escalation voltages, which can be used as a diagnostic tool for vacuum circuit-breakers (CBs) at service voltages between 11 kV and 63 kV. During current interruption of an inductive load, at current quenching or chopping, a transient voltage develops across the contact gap. The dielectric strength of the gap should rise quickly enough to withstand this transient voltage; if it does not, the gap will flash over, resulting in a restrike. A straight line is fitted through the voltage points at flashover of the contact gap, the point at which the gap voltage has reached a value exceeding the dielectric strength of the gap. This research shows that a change in the opening contact velocity of a vacuum CB produces a corresponding change in the slope of the escalation voltage envelope. To investigate the diagnostic process, the ATP restrike switch model was extended with a contact-opening-velocity computation for restrike waveform signature analyses, alongside experimental investigations. This work also enhanced a mathematical CB model with an empirical dielectric model for SF6 (sulphur hexafluoride) CBs at service voltages above 63 kV, and a generalised dielectric curve model for 12 kV CBs. A CB restrike can be predicted if the measured and simulated waveforms show similar restrike waveform signatures. The restrike switch model is applied to: computer simulations as virtual experiments, including the prediction of breaker restrikes; estimation of the remaining interrupter life of SF6 puffer CBs; checking system stresses; assessing point-on-wave (POW) operations; and the development of a restrike detection algorithm using Wavelet Transforms. A simulated high-frequency nozzle current magnitude was applied to an equation (derived from the literature) which can calculate the life extension of the interrupter of an SF6 high-voltage CB. The restrike waveform signatures of medium- and high-voltage CBs identify possible failure mechanisms such as delayed opening, degraded dielectric strength and improper contact travel. The simulated and measured restrike waveform signatures are analysed using Matlab software for automatic detection.
An experimental investigation of 12 kV vacuum CB diagnostics was carried out for parameter determination, and a passive antenna calibration was also successfully developed, with applications for field implementation. The degradation features were also evaluated from the experiments with a predictive interpretation technique, and the subsequent simulation indicates that the voltage drop related to the measured slow opening velocity gives a measure of the degree of contact degradation. A predictive interpretation technique is a computer-modelling approach for assessing switching-device performance that allows one to vary a single parameter at a time; this is often difficult to do experimentally because of the variable contact opening velocity. The significance of this thesis is a non-intrusive method, developed using measurements, ATP and Wavelet Transforms, to predict and interpret breaker restrike risk. Measurements on high-voltage circuit-breakers can identify degradation that could interrupt the distribution and transmission of the electricity supply. It is hoped that the techniques for monitoring restrike phenomena developed in this research will form part of a diagnostic process that will be valuable for detecting breaker stresses related to interrupter lifetime. Suggestions for future research, including a field implementation proposal to validate the restrike switch model for ATP system studies and the hot dielectric strength curve model for SF6 CBs, are given in Appendix A.
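As a rough indication of how wavelet transforms flag restrike-like transients in a sampled waveform, the sketch below injects a high-frequency burst into a power-frequency signal and thresholds the finest-scale detail coefficients. It uses PyWavelets in place of the Matlab analysis described above; the sampling rate, burst frequency, wavelet choice and threshold rule are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np
import pywt

fs = 100_000                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t)            # 50 Hz fundamental
burst = slice(4000, 4200)
signal[burst] += 0.3 * np.sin(2 * np.pi * 30_000 * t[burst])  # injected transient

# Finest-scale detail coefficients respond to fast, restrike-like transients
coeffs = pywt.wavedec(signal, 'db4', level=4)
d1 = coeffs[-1]
threshold = 5 * np.median(np.abs(d1)) / 0.6745  # robust noise-scale estimate
print("restrike-like transient detected:", bool(np.any(np.abs(d1) > threshold)))
```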
Abstract:
In microscopic traffic simulators, the interaction between vehicles is considered; the dynamics of the system then becomes an emergent property of the interaction between its components. Such interactions include lane-changing, car-following behaviours and intersection management. Although such simulators in some cases produce realistic predictions, they do not account for an important aspect of the dynamics: the driver-vehicle interaction. This paper introduces a physically sound vehicle-driver model for realistic microscopic simulation. By building a nanoscopic traffic simulation model that uses steering angle and throttle position as parameters, the model aims to overcome the unrealistic acceleration and deceleration values found in various microscopic simulation tools. A physics engine calculates the driving force of the vehicle, and the preliminary results presented here show that, through a realistic driver-vehicle-environment simulator, it becomes possible to model realistic driver and vehicle behaviours in a traffic simulation.
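A minimal sketch of the nanoscopic idea, that steering angle and throttle position drive the vehicle through physics rather than prescribed accelerations, is a kinematic bicycle model with a throttle-to-force map. The mass, wheelbase, force and drag parameters below are illustrative assumptions, not the paper's physics engine:

```python
import math

def step(state, throttle, steering, dt=0.05,
         mass=1200.0, wheelbase=2.6, f_max=4000.0, drag=0.4):
    """Advance a kinematic bicycle model one time step. Throttle in [0, 1]
    maps to driving force; quadratic drag bounds speed smoothly, so
    acceleration and deceleration emerge from the dynamics."""
    x, y, heading, v = state
    force = throttle * f_max - drag * v * abs(v)   # driving force minus drag
    v += force / mass * dt
    heading += v / wheelbase * math.tan(steering) * dt
    x += v * math.cos(heading) * dt
    y += v * math.sin(heading) * dt
    return (x, y, heading, v)

state = (0.0, 0.0, 0.0, 0.0)
for _ in range(100):                               # 5 s of gentle acceleration
    state = step(state, throttle=0.3, steering=0.02)
print(state)
```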