890 results for ROBUST


Relevance:

20.00%

Publisher:

Abstract:

Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context, where observations are collected and reported by a network of sensors and then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moments estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If such extreme data destabilise the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with manual selection of the damping parameter in the robust likelihood. We show how this can be extended to treat large data sets, together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two-component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between robust methods, which ignore outliers, and transformation methods, which treat them as part of the (transformed) process. Using a case study based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
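
As a concrete illustration of the damping idea, here is a minimal sketch of Huber's influence function and the residual weights it induces; the function names and the damping constant c = 1.345 are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def huber_psi(r, c=1.345):
        # Huber influence function: identity for small residuals,
        # clipped at +/-c so extreme values have bounded influence.
        return np.clip(r, -c, c)

    def robust_weights(residuals, scale, c=1.345):
        # Per-observation weights psi(r)/r for standardised residuals:
        # weight 1 in the central region, shrinking towards 0 for the
        # sporadic extreme sensor readings described above.
        r = np.asarray(residuals, dtype=float) / scale
        w = np.ones_like(r)
        big = np.abs(r) > c
        w[big] = c / np.abs(r[big])
        return w

    # A clean residual keeps full weight; an outlier is damped.
    print(robust_weights([0.5, 8.0], scale=1.0))   # -> [1.0, ~0.17]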

Relevance:

20.00%

Publisher:

Abstract:

This research is concerned with the development of distributed real-time systems, in which software is used for the control of concurrent physical processes. These distributed control systems are required to periodically coordinate the operation of several autonomous physical processes, with the property of an atomic action. The implementation of this coordination must be fault-tolerant if the integrity of the system is to be maintained in the presence of processor or communication failures. Commit protocols have been widely used to provide this type of atomicity and ensure consistency in distributed computer systems. The objective of this research is the development of a class of robust commit protocols, applicable to the coordination of distributed real-time control systems. Extended forms of the standard two-phase commit protocol, which provide fault-tolerant and real-time behaviour, were developed. Petri nets are used for the design of the distributed controllers, and to embed the commit protocol models within these controller designs. This composition of controller and protocol model allows the analysis of the complete system in a unified manner. A common problem for Petri-net-based techniques is state space explosion; a modular approach to both design and analysis helps cope with this problem. Although extensions to Petri nets that allow module construction exist, the modularisation is generally restricted to the specification, and analysis must be performed on the (flat) detailed net. The Petri net designs for the type of distributed systems considered in this research are both large and complex. The top-down, bottom-up and hybrid synthesis techniques used to model large systems in Petri nets are considered, and a hybrid approach to Petri net design for a restricted class of communicating processes is developed. Designs produced using this hybrid approach are modular and allow re-use of verified modules. In order to use this form of modular analysis, it is necessary to project an equivalent but reduced behaviour onto the modules used. These projections conceal events local to modules that are not essential for the purpose of analysis. To generate the external behaviour, each firing sequence of the subnet is replaced by an atomic transition internal to the module, and the firing of these transitions transforms the input and output markings of the module. Thus local events are concealed through the projection of the external behaviour of modules. This hybrid design approach preserves properties of interest, such as boundedness and liveness, while the systematic concealment of local events keeps the state space manageable. The approach presented in this research is particularly suited to distributed systems, as the underlying communication model is used as the basis for the interconnection of modules in the design procedure. This hybrid approach is applied to the Petri-net-based design and analysis of distributed controllers for two industrial applications that incorporate the robust, real-time commit protocols developed. Temporal Petri nets, which combine Petri nets and temporal logic, are used to capture and verify causal and temporal aspects of the designs in a unified manner.
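
The modular analysis rests on the basic Petri net token game: a transition is enabled when its input places hold enough tokens, and firing it consumes and produces tokens. Below is a minimal reachability sketch in Python; the place and transition names are illustrative (loosely modelled on a commit decision), and the bound stands in for the state-space management discussed above.

    from collections import deque

    # Each transition maps input-place token demands to output-place
    # token productions; a marking maps places to token counts.
    TRANSITIONS = {
        "request_commit": ({"idle": 1}, {"voting": 1}),
        "all_vote_yes":   ({"voting": 1}, {"committed": 1}),
        "any_vote_no":    ({"voting": 1}, {"aborted": 1}),
    }

    def enabled(marking, inputs):
        return all(marking.get(p, 0) >= n for p, n in inputs.items())

    def fire(marking, inputs, outputs):
        m = dict(marking)
        for p, n in inputs.items():
            m[p] = m.get(p, 0) - n
        for p, n in outputs.items():
            m[p] = m.get(p, 0) + n
        return m

    def reachable_markings(initial, bound=1000):
        # Breadth-first reachability analysis; the bound guards
        # against state-space explosion in unbounded nets.
        key = lambda m: tuple(sorted((p, n) for p, n in m.items() if n))
        seen, queue = {key(initial)}, deque([initial])
        while queue and len(seen) < bound:
            m = queue.popleft()
            for inputs, outputs in TRANSITIONS.values():
                if enabled(m, inputs):
                    m2 = fire(m, inputs, outputs)
                    if key(m2) not in seen:
                        seen.add(key(m2))
                        queue.append(m2)
        return seen

    print(reachable_markings({"idle": 1}))   # four reachable markings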

Relevance:

20.00%

Publisher:

Abstract:

Modern distributed control systems comprise a set of processors interconnected by a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronising a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites. Often the type of coordination required is the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline. A modified protocol is proposed which is suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems. However, in a distributed real-time control system a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols using synchronous communications which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties. They also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two different types of applications are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems. Using the same analysis techniques as those used for the design of the protocols, it can be shown that the overall system performs as expected both functionally and temporally.
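
A minimal sketch of a deadline-aware two-phase commit round in Python; the participant interface (vote/decide) and the fail-safe abort on deadline expiry are illustrative assumptions, not the protocols developed in the thesis.

    import time

    class Participant:
        # Illustrative stand-in for a remote site; a real participant
        # would return its vote over the network within the timeout.
        def __init__(self, will_commit=True):
            self.will_commit = will_commit
        def vote(self, timeout):
            return "YES" if self.will_commit else "NO"
        def decide(self, decision):
            self.decision = decision

    def timed_commit(participants, deadline_s):
        # Phase one: collect votes, tracking the remaining time budget
        # so a decision is always reached before the deadline.
        start = time.monotonic()
        votes = []
        for p in participants:
            remaining = deadline_s - (time.monotonic() - start)
            if remaining <= 0:
                votes = None               # deadline passed: fail safe
                break
            votes.append(p.vote(timeout=remaining))
        commit = votes is not None and all(v == "YES" for v in votes)
        decision = "COMMIT" if commit else "ABORT"
        # Phase two: broadcast the decision so all sites act atomically.
        for p in participants:
            p.decide(decision)
        return decision

    print(timed_commit([Participant(), Participant()], deadline_s=0.05))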

Relevance:

20.00%

Publisher:

Abstract:

MRI of fluids containing lipid-coated microbubbles has been shown to be an effective tool for measuring the local fluid pressure. However, the intrinsically buoyant nature of these microbubbles precludes lengthy measurements due to their vertical migration under gravity and pressure-induced coalescence. A novel preparation is presented which is shown to minimize both these effects for at least 25 min. By using a 2% polysaccharide gel base with a small concentration of glycerol and 1,2-distearoyl-sn-glycero-3-phosphocholine-coated gas microbubbles, MR measurements are made for pressures between 0.95 and 1.44 bar. The signal drifts due to migration and amalgamation are shown to be minimized for such an experiment whilst yielding very high NMR sensitivities of up to 38% signal change per bar.
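
As a small worked example from the reported figures (assuming the quoted peak sensitivity of 38% signal change per bar applies linearly across the measured range):

    # Expected maximum MR signal change over the measured pressure
    # range, at the quoted peak sensitivity of 38% per bar.
    sensitivity = 0.38             # fractional signal change per bar
    p_low, p_high = 0.95, 1.44     # measured pressure range, bar
    print(f"{sensitivity * (p_high - p_low):.1%}")   # -> 18.6%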

Relevance:

20.00%

Publisher:

Abstract:

Recently introduced surface nanoscale axial photonics (SNAP) makes it possible to fabricate high-Q-factor microresonators and other photonic microdevices through dramatically small deformations of the optical fiber surface. To become a practical and robust technology, the SNAP platform requires methods enabling reproducible modification of the optical fiber radius at the nanoscale. In this Letter, we demonstrate superaccurate fabrication of high-Q-factor microresonators by nanoscale modification of the optical fiber radius and refractive index using CO2 laser and UV excimer laser beam exposures. The achieved fabrication accuracy is better than 2 Å in variation of the effective fiber radius. © 2011 Optical Society of America.

Relevance:

20.00%

Publisher:

Abstract:

The accurate identification of T-cell epitopes remains a principal goal of bioinformatics within immunology. As the immunogenicity of peptide epitopes is dependent on their binding to major histocompatibility complex (MHC) molecules, the prediction of binding affinity is a prerequisite to the reliable prediction of epitopes. The iterative self-consistent (ISC) partial-least-squares (PLS)-based additive method is a recently developed bioinformatic approach for predicting class II peptide-MHC binding affinity. The ISC-PLS method overcomes many of the conceptual difficulties inherent in the prediction of class II peptide-MHC affinity, such as the binding of a mixed population of peptide lengths due to the open-ended class II binding site. The method has applications in both the accurate prediction of class II epitopes and the manipulation of affinity for heteroclitic and competitor peptides. The method is applied here to six class II mouse alleles (I-Ab, I-Ad, I-Ak, I-As, I-Ed, and I-Ek) and includes peptides up to 25 amino acids in length. A series of regression equations highlighting the quantitative contributions of individual amino acids at each peptide position was established. The initial model for each allele exhibited only moderate predictivity. Once the set of selected peptide subsequences had converged, the final models exhibited satisfactory predictive power. Convergence was reached between the 4th and 17th iterations, and the leave-one-out cross-validation statistical terms (q2, SEP, and NC) ranged between 0.732 and 0.925, 0.418 and 0.816, and 1 and 6, respectively. The non-cross-validated statistical terms r2 and SEE ranged between 0.98 and 0.995 and between 0.089 and 0.180, respectively. The peptides used in this study are available from the AntiJen database (http://www.jenner.ac.uk/AntiJen). The PLS method is available commercially in the SYBYL molecular modeling software package. The resulting models, which can be used for accurate T-cell epitope prediction, will be made freely available online (http://www.jenner.ac.uk/MHCPred).
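
A minimal sketch of the leave-one-out q2 statistic used to judge the PLS models, written with scikit-learn rather than SYBYL and run on toy data in place of the AntiJen peptide set; the iterative subsequence selection of the ISC step is omitted.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut, cross_val_predict

    def loo_q2(X, y, n_components):
        # Leave-one-out cross-validated q2 = 1 - PRESS / SS_total.
        model = PLSRegression(n_components=n_components)
        y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
        press = np.sum((y - y_pred.ravel()) ** 2)
        return 1.0 - press / np.sum((y - y.mean()) ** 2)

    # Toy stand-in for the peptide descriptor matrix and affinities;
    # NC (the number of latent variables) is chosen by maximal q2.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 20))
    y = X[:, :3].sum(axis=1) + 0.1 * rng.normal(size=40)
    best_nc = max(range(1, 7), key=lambda nc: loo_q2(X, y, nc))
    print(best_nc, round(loo_q2(X, y, best_nc), 3))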

Relevance:

20.00%

Publisher:

Abstract:

Mathematics Subject Classification: 26A33; 93C15, 93C55, 93B36, 93B35, 93B51; 03B42; 70Q05; 49N05

Relevance:

20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 60J80.

Relevance:

20.00%

Publisher:

Abstract:

Data fluctuation in multiple measurements of Laser-Induced Breakdown Spectroscopy (LIBS) greatly affects the accuracy of quantitative analysis. A new LIBS quantitative analysis method based on the Robust Least Squares Support Vector Machine (RLS-SVM) regression model is proposed. The usual way to enhance analysis accuracy is to improve the quality and consistency of the emission signal, for example by averaging the spectral signals or applying spectrum standardization over a number of laser shots. The proposed method focuses instead on enhancing the robustness of the quantitative analysis regression model. The RLS-SVM regression model originates from the Weighted Least Squares Support Vector Machine (WLS-SVM) but has an improved segmented weighting function and residual error calculation according to the statistical distribution of the measured spectral data. Through the improved segmented weighting function, the information in spectral data that follows the normal distribution is retained in the regression model, while the information in the outliers is restrained or removed. Copper elemental concentration analysis experiments on 16 certified standard brass samples were carried out. The average relative standard deviation obtained from the RLS-SVM model was 3.06% and the root mean square error was 1.537%. The experimental results showed that the proposed method achieved better prediction accuracy and better modeling robustness than quantitative analysis methods based on Partial Least Squares (PLS) regression, the standard Support Vector Machine (SVM) and WLS-SVM. It was also demonstrated that the improved weighting function offered better overall performance, in terms of model robustness and convergence speed, than four established weighting functions.
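
A minimal sketch of weighted LS-SVM regression with a piecewise residual weighting; the taper constants (c1 = 2.5, c2 = 3.0) follow the classic WLS-SVM weighting scheme, not the improved segmented function proposed in the paper.

    import numpy as np

    def rbf_kernel(X1, X2, sigma=0.3):
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def wls_svm_fit(X, y, gamma=100.0, sigma=0.3, weights=None):
        # Weighted LS-SVM dual system:
        # [0, 1^T; 1, K + diag(1/(gamma*v))] [b; alpha] = [0; y].
        n = len(y)
        v = np.ones(n) if weights is None else weights
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = rbf_kernel(X, X, sigma) + np.diag(1.0 / (gamma * v))
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]          # bias b, dual coefficients alpha

    def segmented_weights(residuals, c1=2.5, c2=3.0):
        # Piecewise weighting of standardised residuals: full weight in
        # the normal range, a linear taper between c1 and c2, and near
        # zero beyond, so outliers are restrained or removed on refit.
        m = np.median(residuals)
        s = 1.483 * np.median(np.abs(residuals - m))   # robust scale
        r = np.abs(residuals - m) / s
        return np.where(r <= c1, 1.0,
                        np.where(r <= c2, (c2 - r) / (c2 - c1), 1e-8))

    # Robust loop: fit, compute residuals, down-weight, refit.
    rng = np.random.default_rng(0)
    X = np.linspace(0.0, 1.0, 60)[:, None]
    y = np.sin(2 * np.pi * X[:, 0]) + 0.05 * rng.normal(size=60)
    y[::15] += 2.0                      # inject a few gross outliers
    b, alpha = wls_svm_fit(X, y)
    resid = y - (rbf_kernel(X, X) @ alpha + b)
    b, alpha = wls_svm_fit(X, y, weights=segmented_weights(resid))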

Relevance:

20.00%

Publisher:

Abstract:

AMS subject classification: 65J15, 47H04, 90C30.

Relevance:

20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 60J80.

Relevance:

20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62J05, 62J10, 62F35, 62H12, 62P30.

Relevance:

20.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62J05, 62G35

Relevance:

20.00%

Publisher:

Abstract:

This study focuses on the comparison and modification of different estimators arising in branching processes. Simulations of models with and without migration are carried out. Due to the complexity of the computations, the algorithms are implemented in the technical computing language MATLAB. Using the simulations, estimates of the offspring mean of the generated processes are calculated. It is well known in the literature that, under certain conditions, the asymptotic distribution of the estimators is normal. Using this asymptotic normality, a modified method of maximum likelihood is proposed. The aim is to obtain trimmed maximum likelihood estimates based on several sample paths with the same number of generations. Thus, in a natural way, observations inconsistent with the a priori information about asymptotic normality are excluded from the model. The computation of the standard error allows the comparison of the different types of estimates.
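
A minimal sketch of the simulation-and-trimming idea, in Python rather than MATLAB: several branching-process paths with the same number of generations are generated (Poisson offspring is an illustrative choice of reproduction law), a ratio-type estimate of the offspring mean is computed on each, and the extreme estimates are trimmed before averaging.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_bgw(n_generations, mean=1.2, z0=10):
        # One Bienayme-Galton-Watson path without migration.
        z = [z0]
        for _ in range(n_generations):
            z.append(int(rng.poisson(mean * z[-1])) if z[-1] > 0 else 0)
        return np.array(z)

    def offspring_mean_estimate(z):
        # Harris-type ratio estimator: total offspring / total parents.
        return z[1:].sum() / z[:-1].sum()

    # Estimates over several sample paths; trim the extremes before
    # averaging, in the spirit of trimmed maximum likelihood.
    paths = [simulate_bgw(20) for _ in range(50)]
    estimates = np.sort([offspring_mean_estimate(z) for z in paths])
    k = 5                               # number trimmed from each tail
    trimmed = estimates[k:-k]
    print(round(trimmed.mean(), 4), round(trimmed.std(ddof=1), 4))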