930 results for Parameter Estimation, Fractional Dynamical Models, Fractional Predictor-Corrector Method, Hybrid Simplex Search, Particle Swarm Optimization, Competence Induction


Relevance: 100.00%

Abstract:

With the development of embedded applications and driving-assistance systems, it becomes relevant to develop parallel mechanisms to check and diagnose these new systems. In this thesis we focus our research on one such parallel mechanism, analytical redundancy, for fault diagnosis of an automotive suspension system. We consider a quarter-car passive suspension model and use a parameter estimation method based on an ARX model to detect faults occurring in the damper and spring of the system. We then deploy a neural network classifier to isolate the faults and identify where each fault occurs, so that safety measures and redundancies can take effect to prevent failure of the system. It is shown that the ARX estimator can quickly detect faults online using vertical acceleration and displacement data from sensors that are common in today's vehicles. The clear divergence in the ARX response makes it easy to set a threshold that raises an alarm to the vehicle's intelligent system, and the neural classifier can quickly indicate where the fault occurred.
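
A minimal sketch of the core estimation step, assuming a single-input, single-output signal pair (displacement input, vertical-acceleration output) and illustrative model orders: it fits ARX coefficients by least squares and flags samples whose one-step-ahead residual exceeds a simple threshold, in the spirit of the detection scheme described above. The simulated signals and the 3-sigma threshold are placeholders, not the thesis's data or tuning.

```python
import numpy as np

def fit_arx(u, y, na=2, nb=2):
    """Estimate ARX coefficients a_1..a_na, b_1..b_nb by least squares."""
    n = max(na, nb)
    rows, targets = [], []
    for k in range(n, len(y)):
        past_y = [-y[k - i] for i in range(1, na + 1)]   # autoregressive part
        past_u = [u[k - i] for i in range(1, nb + 1)]    # exogenous input part
        rows.append(past_y + past_u)
        targets.append(y[k])
    phi, target = np.asarray(rows), np.asarray(targets)
    theta, *_ = np.linalg.lstsq(phi, target, rcond=None)
    residuals = target - phi @ theta                     # one-step-ahead prediction errors
    return theta, residuals

# Illustrative signals: a healthy second-order response driven by random input.
rng = np.random.default_rng(0)
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(2, 500):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.01 * rng.standard_normal()

theta, res = fit_arx(u, y)
alarm = np.abs(res) > 3 * res.std()   # simple residual threshold for fault alarms
print("estimated parameters:", np.round(theta, 3), "| alarms raised:", alarm.sum())
```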

Relevance: 100.00%

Abstract:

Sediment samples were obtained for detailed Adenosine 5'-Triphosphate (ATP) analysis down to 57.8 m below the seafloor (mbsf). The samples were also analyzed for particle-size distribution, calcium carbonate (CaCO3), organic carbon, and total nitrogen. The concentrations of ATP ranged between 360 and 7050 pg/g (dry weight sediment), which agree well with a limited number of direct bacteria counts. Principal component analyses show that 63% of the total variance can be accounted for by the first two principal components. The concentration of ATP (bacterial numbers by inference) is virtually independent of the concentration of sedimentary organic carbon, but correlates with CaCO3 and coarse particles.
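
The variance-explained figure quoted above comes from a standard principal component decomposition; the short sketch below shows how such a figure is computed from a standardized sample-by-variable matrix. The data matrix here is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
# rows = sediment samples, columns = variables (e.g. ATP, CaCO3, organic C, total N, coarse fraction)
X = rng.standard_normal((40, 5))

Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each variable
_, s, _ = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)                 # fraction of total variance per component
print("variance explained by first two PCs:", round(explained[:2].sum(), 3))
```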

Relevance: 100.00%

Abstract:

The Graduate Institute organized an academic workshop and roundtable on the occasion of EFTA's 50th Anniversary in Geneva under the chairmanship of H.E. Doris Leuthard, President of the Swiss Confederation. Pierre Sauve, Deputy Managing Director and Director of Studies, WTI, and Co-leader of the NCCR-Trade work programme on preferentialism, and Anirudh Shingal, Senior Research Fellow, WTI, and Co-leader of the NCCR-Trade work programme on impact assessment of trade, co-authored a paper on the nature of preferentialism in services trade, which Anirudh presented at the workshop. The event was extremely well attended by high-profile dignitaries and academics, including President Leuthard; the Director-General of the WTO, Pascal Lamy; the trade ministers of Brazil and Finland; Jan Kubis, Executive Secretary of the UNECE; and several current and former ambassadors. The academic workshop, moderated by Theresa Carpenter (Graduate Institute, Geneva), began in the morning with Prof. Victor Norman's (Norwegian School of Economics & Business Administration) presentation on the future of EFTA. Other presentations included those by Prof. Peter Egger (ETH Zurich) on the structural estimation of gravity models with market entry dynamics and by Prof. Richard Baldwin (Graduate Institute, Geneva) on 21st century regionalism. The high-profile panel in the afternoon, moderated by Prof. Richard Baldwin, was led by President Leuthard, who spoke on free trade agreements and the multilateral trading system in 2020. The keynote address at the panel was delivered by Prof. Jagdish Bhagwati (Columbia University), who spoke on strengthening defences against protectionism and liberalizing trade.

Relevance: 100.00%

Abstract:

Use of nonlinear parameter estimation techniques is now commonplace in ground water model calibration. However, there is still ample room for further development of these techniques in order to enable them to extract more information from calibration datasets, to more thoroughly explore the uncertainty associated with model predictions, and to make them easier to implement in various modeling contexts. This paper describes the use of pilot points as a methodology for spatial hydraulic property characterization. When used in conjunction with nonlinear parameter estimation software that incorporates advanced regularization functionality (such as PEST), use of pilot points can add a great deal of flexibility to the calibration process at the same time as it makes this process easier to implement. Pilot points can be used either as a substitute for zones of piecewise parameter uniformity, or in conjunction with such zones. In either case, they allow the disposition of areas of high and low hydraulic property value to be inferred through the calibration process, without the need for the modeler to guess the geometry of such areas prior to estimating the parameters that pertain to them. Pilot points and regularization can also be used as an adjunct to geostatistically based stochastic parameterization methods. Using the techniques described herein, a series of hydraulic property fields can be generated, all of which recognize the stochastic characterization of an area at the same time that they satisfy the constraints imposed on hydraulic property values by the need to ensure that model outputs match field measurements. Model predictions can then be made using all of these fields as a mechanism for exploring predictive uncertainty.
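
The sketch below illustrates the pilot-point idea in a deliberately stripped-down form (it is not PEST): property values at a few pilot points are interpolated onto a 1-D grid, and a Tikhonov-style preferred-value penalty regularizes the least-squares match to observations. The forward model, interpolator and all numbers are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

grid = np.linspace(0.0, 100.0, 50)                 # 1-D model grid
pilots = np.array([10.0, 35.0, 60.0, 85.0])        # pilot-point locations

def interpolate(pp_values):
    """Inverse-distance interpolation of pilot-point values onto the grid."""
    w = 1.0 / (np.abs(grid[:, None] - pilots[None, :]) + 1e-6)
    w /= w.sum(axis=1, keepdims=True)
    return w @ pp_values

def forward(field):
    """Toy forward model: observations are local averages of the property field."""
    return np.array([field[5:15].mean(), field[20:30].mean(), field[35:45].mean()])

obs = np.array([2.0, 1.2, 3.1])                    # synthetic 'field measurements'
prior = 2.0                                        # preferred (regularization) value
weight = 0.1                                       # regularization weight

def residuals(pp_values):
    misfit = forward(interpolate(pp_values)) - obs
    penalty = weight * (pp_values - prior)         # Tikhonov preferred-value penalty
    return np.concatenate([misfit, penalty])

fit = least_squares(residuals, x0=np.full(4, prior))
print("estimated pilot-point values:", np.round(fit.x, 3))
```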

Relevance: 100.00%

Abstract:

The use of presence/absence data in wildlife management and biological surveys is widespread. There is a growing interest in quantifying the sources of error associated with these data. We show, using simulated data, that false-negative errors (failure to record a species when in fact it is present) can have a significant impact on statistical estimation of habitat models. We then introduce an extension of logistic modeling, the zero-inflated binomial (ZIB) model, that permits estimation of the rate of false-negative errors and correction of estimates of the probability of occurrence for false-negative errors by using repeated visits to the same site. Our simulations show that even relatively low rates of false negatives bias statistical estimates of habitat effects. The method with three repeated visits eliminates the bias, but estimates are relatively imprecise. Six repeated visits improve precision of estimates to levels comparable to those achieved with conventional statistics in the absence of false-negative errors. In general, when error rates are less than or equal to 50%, greater efficiency is gained by adding more sites, whereas when error rates are greater than 50% it is better to increase the number of repeated visits. We highlight the flexibility of the method with three case studies, clearly demonstrating the effect of false-negative errors for a range of commonly used survey methods.
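
A minimal sketch of the ZIB idea, assuming a single habitat covariate on the occupancy probability (logit link), a constant per-visit detection probability, and simulated data: sites with zero detections contribute a mixture of "truly absent" and "present but missed on every visit" terms to the likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit, comb

rng = np.random.default_rng(2)
n_sites, n_visits = 200, 3
x = rng.standard_normal(n_sites)                      # habitat covariate
psi_true = expit(-0.5 + 1.0 * x)                      # true occupancy probability
occupied = rng.random(n_sites) < psi_true
y = rng.binomial(n_visits, 0.4 * occupied)            # detections over repeated visits

def neg_log_lik(params):
    b0, b1, logit_p = params
    psi = expit(b0 + b1 * x)                          # occupancy model
    p = expit(logit_p)                                # per-visit detection probability
    binom = comb(n_visits, y) * p**y * (1 - p)**(n_visits - y)
    lik = np.where(y > 0, psi * binom, (1 - psi) + psi * (1 - p)**n_visits)
    return -np.sum(np.log(lik))

fit = minimize(neg_log_lik, x0=np.zeros(3), method="Nelder-Mead")
print("estimates (b0, b1, logit p):", np.round(fit.x, 3))
```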

Relevance: 100.00%

Abstract:

The goal of this manuscript is to introduce a framework for consideration of designs for population pharmacokinetic or pharmacokinetic-pharmacodynamic studies. A standard one-compartment pharmacokinetic model with first-order input and elimination is considered. A series of theoretical designs are considered that explore the influence of optimizing the allocation of sampling times, allocating patients to elementary designs, consideration of sparse sampling and unbalanced designs, and the influence of single- vs. multiple-dose designs. It was found that what appears to be relatively sparse sampling (fewer blood samples per patient than the number of fixed-effects parameters to estimate) can also be highly informative. Overall, it is evident that exploring the population design space can yield many parsimonious designs that are efficient for parameter estimation and that may not otherwise have been considered without the aid of optimal design theory.
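
As a rough illustration of how candidate sampling schedules can be compared for parameter-estimation efficiency, the sketch below evaluates a Fisher-information-style criterion (the log-determinant of J^T J built from finite-difference sensitivities) for the standard one-compartment model with first-order input and elimination. This is a simplified stand-in for formal population optimal design; the parameter values, dose and schedules are illustrative.

```python
import numpy as np

def conc(t, ka, ke, v, dose=100.0):
    """One-compartment model with first-order absorption and elimination."""
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

def design_criterion(times, theta=(1.0, 0.1, 20.0), h=1e-5):
    """log det of J^T J, where J holds sensitivities of predictions to (ka, ke, v)."""
    times = np.asarray(times, dtype=float)
    theta = np.asarray(theta, dtype=float)
    J = np.empty((len(times), len(theta)))
    for j in range(len(theta)):
        up, lo = theta.copy(), theta.copy()
        up[j] += h
        lo[j] -= h
        J[:, j] = (conc(times, *up) - conc(times, *lo)) / (2 * h)  # central differences
    sign, logdet = np.linalg.slogdet(J.T @ J)
    return logdet if sign > 0 else -np.inf

# Sparse (3-sample) vs. rich (8-sample) schedules: larger criterion = more informative.
print("sparse design:", round(design_criterion([0.5, 2.0, 12.0]), 2))
print("rich design:  ", round(design_criterion([0.25, 0.5, 1, 2, 4, 8, 12, 24]), 2))
```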

Relevance: 100.00%

Abstract:

An appreciation of the physical mechanisms which cause observed seismicity complexity is fundamental to understanding the temporal behaviour of faults and of single slip events. Numerical simulation of fault slip can provide insights into fault processes by allowing exploration of the parameter spaces which influence the microscopic and macroscopic physics of these processes. Particle-based models such as the Lattice Solid Model have been used previously for the simulation of stick-slip dynamics of faults, although mainly in two dimensions. Recent increases in the power of computers and the ability to use the power of parallel computer systems have made it possible to extend particle-based fault simulations to three dimensions. In this paper a particle-based numerical model of a rough planar fault embedded between two elastic blocks in three dimensions is presented. A very simple friction law without any rate dependency and with no spatial heterogeneity in the intrinsic coefficient of friction is used in the model. To simulate earthquake dynamics the model is sheared in a direction parallel to the fault plane with a constant velocity at the driving edges. Spontaneous slip occurs on the fault when the shear stress is large enough to overcome the frictional forces on the fault. Slip events with a wide range of event sizes are observed. Investigation of the temporal evolution and spatial distribution of slip during each event shows a high degree of variability between events. In some of the larger events highly complex slip patterns are observed.
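
The sketch below is a highly simplified block-redistribution analogue of stick-slip complexity (in the spirit of spring-block and cellular fault models, not the particle-based Lattice Solid Model used in the paper): uniform loading plus local stress transfer produces slip events spanning a wide range of sizes, which is the qualitative behaviour discussed above. All parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n_blocks, threshold, transfer = 100, 1.0, 0.45
stress = rng.uniform(0.0, threshold, n_blocks)
event_sizes = []

for _ in range(5000):
    stress += threshold - stress.max()                  # load until the weakest point fails
    failed_total = 0
    unstable = np.flatnonzero(stress >= threshold)
    while unstable.size:
        for i in unstable:
            drop = stress[i]
            stress[i] = 0.0                             # the block slips and relaxes
            stress[(i - 1) % n_blocks] += transfer * drop   # redistribute to neighbours
            stress[(i + 1) % n_blocks] += transfer * drop
            failed_total += 1
        unstable = np.flatnonzero(stress >= threshold)  # cascades until all blocks are stable
    event_sizes.append(failed_total)

print("largest / median event size:", max(event_sizes), "/", int(np.median(event_sizes)))
```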

Relevance: 100.00%

Abstract:

In biologically mega-diverse countries that are undergoing rapid human landscape transformation, it is important to understand and model the patterns of land cover change. This problem is particularly acute in Colombia, where lowland forests are being rapidly cleared for cropping and ranching. We apply a conceptual model with a nested set of a priori predictions to analyse the spatial and temporal patterns of land cover change for six 50-100 km² case study areas in lowland ecosystems of Colombia. Our analysis included soil fertility, a cost-distance function, and neighbourhood of forest and secondary vegetation cover as independent variables. Deforestation and forest regrowth are tested using logistic regression analysis and an information criterion approach to rank the models and predictor variables. The results show that: (a) overall the process of deforestation is better predicted by the full model containing all variables, while for regrowth the model containing only the auto-correlated neighbourhood terms is a better predictor; (b) overall consistent patterns emerge, although there are variations across regions and time; and (c) during the transformation process, both the order of importance and significance of the drivers change. Forest cover follows a consistent logistic decline pattern across regions, with introduced pastures being the major replacement land cover type. Forest stabilizes at 2-10% of the original cover, with an average patch size of 15.4 (±9.2) ha. We discuss the implications of the observed patterns and rates of land cover change for conservation planning in countries with high rates of deforestation. © 2005 Elsevier Ltd. All rights reserved.
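
A minimal sketch of the model-ranking step described above: logistic regressions for cleared vs. not-cleared cells with different predictor sets, compared by AIC. The covariate names (soil fertility, cost-distance, forest neighbourhood) follow the abstract, but the data are simulated placeholders.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 2000
fertility = rng.standard_normal(n)
cost_dist = rng.standard_normal(n)
neigh_forest = rng.standard_normal(n)
logit_p = -0.5 + 0.8 * neigh_forest - 0.6 * cost_dist + 0.2 * fertility
cleared = rng.random(n) < 1 / (1 + np.exp(-logit_p))   # simulated deforestation outcome

def fit_aic(columns):
    """Fit a logistic regression with the given predictors and return its AIC."""
    X = sm.add_constant(np.column_stack(columns))
    model = sm.Logit(cleared.astype(int), X).fit(disp=0)
    return model.aic

print("full model AIC:        ", round(fit_aic([fertility, cost_dist, neigh_forest]), 1))
print("neighbourhood-only AIC:", round(fit_aic([neigh_forest]), 1))
```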

Relevance: 100.00%

Abstract:

A new integration scheme is developed for nonequilibrium molecular dynamics simulations where the temperature is constrained by a Gaussian thermostat. The utility of the scheme is demonstrated by its application to the SLLOD algorithm which is the standard nonequilibrium molecular dynamics algorithm for studying shear flow. Unlike conventional integrators, the new integrators are constructed using operator-splitting techniques to ensure stability and that little or no drift in the kinetic energy occurs. Moreover, they require minimum computer memory and are straightforward to program. Numerical experiments show that the efficiency and stability of the new integrators compare favorably with conventional integrators such as the Runge-Kutta and Gear predictor-corrector methods. © 1999 American Institute of Physics. [S0021-9606(99)50125-6].
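
The toy sketch below illustrates the general shape of an operator-splitting step with an isokinetic (Gaussian-thermostat-style) kinetic-energy constraint, using a simple harmonic force field: half kick, drift, half kick, then a projection back onto the constant-kinetic-energy surface. It is a simplified stand-in, not the SLLOD integrators derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)
n, dt, k_spring = 64, 0.005, 1.0
x = rng.standard_normal((n, 3))
v = rng.standard_normal((n, 3))
target_ke = 0.5 * np.sum(v**2)             # kinetic energy to be held fixed

def force(x):
    return -k_spring * x                   # simple harmonic restoring force (illustrative)

for _ in range(2000):
    v += 0.5 * dt * force(x)               # half kick
    x += dt * v                            # drift
    v += 0.5 * dt * force(x)               # half kick
    v *= np.sqrt(target_ke / (0.5 * np.sum(v**2)))   # isokinetic projection

print("relative KE drift:", abs(0.5 * np.sum(v**2) - target_ke) / target_ke)
```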

Relevance: 100.00%

Abstract:

Intumescent paints are used as passive fire protection in the construction sector, in particular to increase the fire resistance of steel elements. The thermal properties of these coatings are often unknown or difficult to estimate, because they vary considerably during the expansion the intumescent undergoes when exposed to the heat of a fire. For this reason, validating the fire resistance of a commercial coating relies on methods that are costly both in money and in execution time, in which each protected beam and column must be tested one at a time in the cellulosic-curve fire resistance test. In this thesis an approach based on thermal modelling of the intumescent coating is adopted instead, which helps simplify the testing procedure and supports the fire-resistance design of structures. The common thread running through the stages of this thesis is the methodology used to estimate the unknown thermal behaviour, namely Inverse Parameter Estimation. In the first phase, the paint was characterized physico-chemically using instruments such as DSC, TGA and FT-IR, which provided its qualitative composition, the temperatures at which the main chemical and physical processes undergone by the paint occur, and the enthalpies associated with these events. In the second phase, the paints were characterized thermally in order to obtain their equivalent thermal conductivity. To this end, steel temperatures from furnace tests heated according to the ISO 834 standard were used first; subsequently, to better define the boundary conditions, a cone calorimeter was used as the heat source, with the temperature measured directly within the thickness of the intumescent layer. The conductivity values obtained are consistent with the scientific literature and show a dependence on temperature, while varying little with the deposited paint thickness and the sample geometry used.
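
A conceptual sketch of the inverse parameter estimation step, assuming a lumped (one-node) heating model for a protected steel section exposed to the ISO 834 curve: a single equivalent thermal conductivity is fitted so that the modelled steel temperature matches the measured one. The section factor, coating thickness, material data and synthetic "measurement" are illustrative assumptions, not the thesis's test data.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rho_s, c_s = 7850.0, 600.0        # steel density (kg/m3) and specific heat (J/kg K)
Av = 200.0                        # section factor A_p/V (1/m), assumed
d_p = 0.002                       # coating thickness (m), assumed
dt = 5.0                          # time step (s)
t = np.arange(0.0, 3600.0, dt)
gas = 20.0 + 345.0 * np.log10(8.0 * t / 60.0 + 1.0)   # ISO 834 gas temperature (degC)

def steel_temperature(lam):
    """Lumped model: heat flux through the coating drives the steel temperature."""
    T = np.empty_like(t)
    T[0] = 20.0
    for i in range(1, len(t)):
        dT = (lam / d_p) * Av / (rho_s * c_s) * (gas[i - 1] - T[i - 1]) * dt
        T[i] = T[i - 1] + dT
    return T

# Synthetic 'measured' steel temperatures generated with a known conductivity plus noise.
measured = steel_temperature(0.1) + np.random.default_rng(6).normal(0, 2, len(t))

def misfit(lam):
    return np.sum((steel_temperature(lam) - measured) ** 2)

best = minimize_scalar(misfit, bounds=(0.01, 1.0), method="bounded")
print("estimated equivalent conductivity (W/m K):", round(best.x, 4))
```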

Relevance: 100.00%

Abstract:

Deformable models are an attractive approach to recognizing objects which have considerable within-class variability such as handwritten characters. However, there are severe search problems associated with fitting the models to data which could be reduced if a better starting point for the search were available. We show that by training a neural network to predict how a deformable model should be instantiated from an input image, such improved starting points can be obtained. This method has been implemented for a system that recognizes handwritten digits using deformable models, and the results show that the search time can be significantly reduced without compromising recognition performance. © 1997 Academic Press.
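
A minimal sketch of the idea, assuming a generic multilayer-perceptron regressor and random placeholder data: the network maps a flattened input image to an initial placement of deformable-model control points, which would then seed the model-fitting search.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
n_images, n_pixels, n_control_points = 500, 16 * 16, 8
images = rng.random((n_images, n_pixels))                 # placeholder digit images
# targets: (x, y) coordinates for each control point of the fitted deformable model
control_points = rng.random((n_images, 2 * n_control_points))

net = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
net.fit(images, control_points)

# Predicted control points serve as the starting point for the deformable-model search.
initial_guess = net.predict(images[:1]).reshape(n_control_points, 2)
print("predicted starting control points:\n", np.round(initial_guess, 3))
```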

Relevance: 100.00%

Abstract:

The automatic interpolation of environmental monitoring network data, such as air quality or radiation levels, in a real-time setting poses a number of practical and theoretical questions. Among the problems found are (i) dealing with and communicating the uncertainty of predictions, (ii) automatic (hyper)parameter estimation, (iii) monitoring network heterogeneity, (iv) dealing with outlying extremes, and (v) quality control. In this paper we discuss these issues in light of the spatial interpolation comparison exercise held in 2004.

Relevance: 100.00%

Abstract:

Automatically generating maps of a measured variable of interest can be problematic. In this work we focus on the monitoring network context where observations are collected and reported by a network of sensors, and are then transformed into interpolated maps for use in decision making. Using traditional geostatistical methods, estimating the covariance structure of data collected in an emergency situation can be difficult. Variogram determination, whether by method-of-moment estimators or by maximum likelihood, is very sensitive to extreme values. Even when a monitoring network is in a routine mode of operation, sensors can sporadically malfunction and report extreme values. If this extreme data destabilises the model, causing the covariance structure of the observed data to be incorrectly estimated, the generated maps will be of little value, and the uncertainty estimates in particular will be misleading. Marchant and Lark [2007] propose a REML estimator for the covariance, which is shown to work on small data sets with a manual selection of the damping parameter in the robust likelihood. We show how this can be extended to allow treatment of large data sets together with an automated approach to all parameter estimation. The projected process kriging framework of Ingram et al. [2007] is extended to allow the use of robust likelihood functions, including the two component Gaussian and the Huber function. We show how our algorithm is further refined to reduce the computational complexity while at the same time minimising any loss of information. To show the benefits of this method, we use data collected from radiation monitoring networks across Europe. We compare our results to those obtained from traditional kriging methodologies and include comparisons with Box-Cox transformations of the data. We discuss the issue of whether to treat or ignore extreme values, making the distinction between the robust methods which ignore outliers and transformation methods which treat them as part of the (transformed) process. Using a case study, based on an extreme radiological event over a large area, we show how radiation data collected from monitoring networks can be analysed automatically and then used to generate reliable maps to inform decision making. We show the limitations of the methods and discuss potential extensions to remedy these.
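
A small illustration of why a robust (Huber-type) likelihood helps when a few sensors report extreme values: the Huber objective down-weights large residuals, so the fitted background level is barely moved by the outliers, unlike the least-squares (Gaussian) estimate. This is a toy for the robustness idea only, not the projected process kriging framework described above; all readings are simulated.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import huber

rng = np.random.default_rng(8)
readings = rng.normal(100.0, 5.0, 200)       # routine radiation readings (illustrative units)
readings[:3] = [950.0, 1200.0, 880.0]        # sporadic sensor malfunctions / extremes

gauss_fit = readings.mean()                  # least-squares / Gaussian-likelihood estimate
huber_fit = minimize_scalar(lambda m: huber(10.0, readings - m).sum(),
                            bounds=(0.0, 2000.0), method="bounded").x

print("Gaussian estimate:", round(gauss_fit, 1), "| Huber estimate:", round(huber_fit, 1))
```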

Relevance: 100.00%

Abstract:

The research carried out in this thesis was mainly concerned with the effects of large induction motors and their transient performance in power systems. Computer packages using the three-phase co-ordinate frame of reference were developed to simulate the induction motor transient performance. A technique using matrix algebra was developed to allow extension of the three-phase co-ordinate method to analyse asymmetrical and symmetrical faults on both sides of the three-phase delta-star transformer which is usually required when connecting large induction motors to the supply system. System simulation, applying these two techniques, was used to study the transient stability of a power system. The response of a typical system, loaded with a group of large induction motors, two three-phase delta-star transformers, a synchronous generator and an infinite system, was analysed. The computer software developed to study this system has the advantage that different types of fault at different locations can be studied by simple changes in input data. The research also involved investigating the possibility of using different integrating routines such as Runge-Kutta-Gill, Runge-Kutta-Fehlberg and Predictor-Corrector methods. This investigation enabled a reduction in computation time, which is necessary when solving the induction motor equations expressed in terms of the three-phase variables. The outcome of this investigation was utilised in analysing an introductory model (containing only minimal control action) of an isolated system having a significant induction motor load compared to the size of the generator energising the system.
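
A minimal sketch of a predictor-corrector integrator of the kind compared in the thesis (second-order Adams-Bashforth predictor with a trapezoidal Adams-Moulton corrector), written for a generic first-order ODE system; the damped-oscillator test system is illustrative, not the induction motor model.

```python
import numpy as np

def predictor_corrector(f, y0, t0, t1, h):
    """Integrate dy/dt = f(t, y) with an AB2 predictor and AM2 (trapezoidal) corrector."""
    ts = np.arange(t0, t1 + h, h)
    ys = [np.asarray(y0, dtype=float)]
    ys.append(ys[0] + h * f(ts[0], ys[0]))        # bootstrap the multistep scheme with Euler
    for i in range(1, len(ts) - 1):
        f_now, f_prev = f(ts[i], ys[i]), f(ts[i - 1], ys[i - 1])
        y_pred = ys[i] + h * (1.5 * f_now - 0.5 * f_prev)           # predictor
        y_corr = ys[i] + 0.5 * h * (f(ts[i + 1], y_pred) + f_now)   # corrector
        ys.append(y_corr)
    return ts, np.array(ys)

# Example: a damped oscillator as a stand-in test system.
def rhs(t, y):
    return np.array([y[1], -2.0 * y[1] - 25.0 * y[0]])

ts, ys = predictor_corrector(rhs, [1.0, 0.0], 0.0, 2.0, 0.01)
print("final state:", np.round(ys[-1], 4))
```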

Relevance: 100.00%

Abstract:

This thesis describes work completed on the application of H∞ controller synthesis to the design of controllers for single-axis, high-speed independent drive design examples. H∞ controller synthesis was used both in a single-controller format and in a self-tuning regulator, a type of adaptive controller. Three types of industrial design example were attempted using H∞ controller synthesis, both in simulation and on a Drives Test Facility at Aston University. The results were benchmarked against a Proportional, Integral and Derivative (PID) controller with velocity feedforward (VFF), the industrial standard for this application. An analysis of the differences between an H∞ and a PID with VFF controller was completed. A direct-form H∞ controller was determined for a limited class of weighting functions and plants, which shows the relationship between the weighting function, the nominal plant and the controller parameters. The direct-form controller was utilised in two ways. Firstly, it allowed the production of simple guidelines for the industrial design of H∞ controllers. Secondly, it was used as the controller modifier in a self-tuning regulator (STR). The STR had a controller modification time (including nominal model parameter estimation) of 8 ms. A Set-Point Gain Scheduling (SPGS) controller was developed and applied to an industrial design example. The applicability of each control strategy, PID with VFF, H∞, SPGS and STR, was investigated and a set of general guidelines for their use was determined. All controllers developed were implemented using standard industrial equipment.
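
A minimal sketch of the benchmark controller named above, a discrete PID with velocity feedforward (VFF), written generically; the gains, sample time and the way the feedforward term is summed are illustrative assumptions rather than the thesis's tuning.

```python
class PidWithVff:
    """Discrete PID position controller with a velocity feedforward term."""

    def __init__(self, kp, ki, kd, kvff, dt):
        self.kp, self.ki, self.kd, self.kvff, self.dt = kp, ki, kd, kvff, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, position_demand, velocity_demand, position_feedback):
        error = position_demand - position_feedback
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        # feedback terms plus a feedforward term proportional to the demanded velocity
        return (self.kp * error + self.ki * self.integral
                + self.kd * derivative + self.kvff * velocity_demand)

controller = PidWithVff(kp=5.0, ki=0.5, kd=0.01, kvff=1.0, dt=0.001)
print(controller.update(position_demand=1.0, velocity_demand=0.2, position_feedback=0.9))
```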