994 results for calibration method
Abstract:
Least-squares support vector machines (LS-SVM) were used as an alternative multivariate calibration method for the simultaneous quantification of some common adulterants found in powdered milk samples, using near-infrared spectroscopy. Excellent models were built with LS-SVM, as judged by their R², RMSECV and RMSEP values. LS-SVM showed superior performance for quantifying starch, whey and sucrose in powdered milk samples relative to partial least squares regression (PLSR). This study shows that it is possible to precisely determine the amounts of one or two common adulterants simultaneously in powdered milk samples using LS-SVM and NIR spectra.
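For readers unfamiliar with the technique, a minimal LS-SVM regression sketch is given below, assuming an RBF kernel and synthetic data in place of the study's NIR spectra; the hyperparameters gamma and sigma are illustrative placeholders, not the tuned values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    # Solve the LS-SVM dual linear system:
    # [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                      # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy usage: 54 synthetic "spectra" with 100 wavelengths each.
rng = np.random.default_rng(0)
X = rng.random((54, 100))
y = X[:, :10].sum(axis=1)                       # synthetic adulterant content
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, b, alpha, X[:5]))
```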
Abstract:
Computational model-based simulation methods were developed for the modelling of bioaffinity assays. Bioaffinity-based methods are widely used to quantify a biological substance in biological research and development and in routine clinical in vitro diagnostics. Bioaffinity assays are based on the high affinity and structural specificity of the binding between biomolecules. The simulation methods developed are based on a mechanistic assay model, which relies on chemical reaction kinetics and describes the formation of the bound component as a function of time from the initial binding interaction. The simulation methods focused on studying the behaviour and reliability of bioaffinity assays and the possibilities offered by modelling binding-reaction kinetics, such as predicting assay results even before the binding reaction has reached equilibrium. A rapid quantitative result from a clinical bioaffinity assay can be very significant: even the smallest elevation of a heart muscle marker, for example, reveals a cardiac injury. The simulation methods were used to identify critical error factors in rapid bioaffinity assays. A new kinetic calibration method was developed to calibrate a measurement system from kinetic measurement data using only one standard concentration. A node-based method was developed to model multi-component binding reactions, which have been a challenge for traditional numerical methods. The node-based method was also used to model protein adsorption as an example of nonspecific binding of biomolecules. These methods have been compared with experimental data from practice and can be utilized in in vitro diagnostics, drug discovery and medical imaging.
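The mechanistic assay model described here is, at its core, a binding-kinetics rate equation. A minimal sketch of a one-site reversible binding simulation is shown below, with assumed rate constants and concentrations rather than the thesis's calibrated values.

```python
import numpy as np
from scipy.integrate import solve_ivp

def binding_rate(t, y, k_on, k_off, a0, b0):
    # One-site reversible binding: d[AB]/dt = k_on*[A][B] - k_off*[AB],
    # with mass balance [A] = a0 - [AB], [B] = b0 - [AB].
    ab = y[0]
    return [k_on * (a0 - ab) * (b0 - ab) - k_off * ab]

k_on, k_off = 1.0e5, 1.0e-3      # assumed rate constants (1/(M*s), 1/s)
a0, b0 = 1.0e-9, 5.0e-9          # assumed initial concentrations (M)
sol = solve_ivp(binding_rate, (0.0, 600.0), [0.0],
                args=(k_on, k_off, a0, b0), dense_output=True)

# A pre-equilibrium readout (e.g. at t = 60 s) can be mapped back to analyte
# concentration, which is what makes rapid kinetic assays possible.
print(sol.sol(60.0)[0])
```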
Abstract:
To obtain the desired accuracy of a robot, two techniques are available. The first option would be to make the robot match the nominal mathematical model. In other words, the manufacturing and assembly tolerances of every part would be extremely tight so that all of the various parameters would match the "design" or "nominal" values as closely as possible. This method can satisfy most accuracy requirements, but the cost increases dramatically as the accuracy requirement rises. Alternatively, a more cost-effective solution is to build a manipulator with relaxed manufacturing and assembly tolerances. By modifying the mathematical model in the controller, the actual errors of the robot can be compensated. This is the essence of robot calibration. Simply put, robot calibration is the process of defining an appropriate error model and then identifying the various parameter errors that make the error model match the robot as closely as possible. This work focuses on the kinematic calibration of a 10 degree-of-freedom (DOF) redundant serial-parallel hybrid robot. The robot consists of a 4-DOF serial mechanism and a 6-DOF hexapod parallel manipulator. The redundant 4-DOF serial structure is used to enlarge the workspace, and the 6-DOF hexapod manipulator provides high load capability and stiffness for the whole structure. The main objective of the study is to develop a suitable calibration method to improve the accuracy of the redundant serial-parallel hybrid robot. To this end, a Denavit–Hartenberg (DH) hybrid error model and a Product-of-Exponentials (POE) error model are developed for error modeling of the proposed robot. Furthermore, two kinds of global optimization methods, the differential-evolution (DE) algorithm and the Markov chain Monte Carlo (MCMC) algorithm, are employed to identify the parameter errors of the derived error model. A measurement method based on a 3-2-1 wire-based pose estimation system is proposed and implemented in a SolidWorks environment to simulate the real experimental validations. Numerical simulations and SolidWorks prototype-model validations are carried out on the hybrid robot to verify the effectiveness, accuracy and robustness of the calibration algorithms.
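To illustrate the identification step, the sketch below applies SciPy's differential evolution to a toy two-link planar arm instead of the 10-DOF hybrid robot; the link-length errors, measurement poses and cost function are all synthetic assumptions.

```python
import numpy as np
from scipy.optimize import differential_evolution

def fk(q, lengths):
    # Forward kinematics of a planar 2R arm (end-effector x, y).
    l1, l2 = lengths
    x = l1 * np.cos(q[:, 0]) + l2 * np.cos(q[:, 0] + q[:, 1])
    y = l1 * np.sin(q[:, 0]) + l2 * np.sin(q[:, 0] + q[:, 1])
    return np.stack([x, y], axis=1)

rng = np.random.default_rng(0)
true_lengths = np.array([1.02, 0.97])          # "real" robot with link errors
nominal = np.array([1.00, 1.00])               # nominal model in the controller
q = rng.uniform(-np.pi, np.pi, (50, 2))        # joint angles at measurement poses
measured = fk(q, true_lengths)                 # stand-in for pose measurements

def cost(dl):
    # Sum of squared residuals between measurements and the error model.
    return np.sum((fk(q, nominal + dl) - measured) ** 2)

res = differential_evolution(cost, bounds=[(-0.1, 0.1)] * 2, seed=0)
print(res.x)                                   # identified errors, about [0.02, -0.03]
```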
Abstract:
This paper describes an electronic transducer for multiphase flow measurement. Its high sensitivity, good signal-to-noise ratio and accuracy are achieved through an electrical impedance sensor with a special guard technique. The transducer consists of a wide-bandwidth, high-slew-rate differentiator in which the lead inductance and stray capacitance effects are compensated. The sensor edge effect is eliminated by using a guard electrode based on the virtual ground potential of the operational amplifier. A theoretical model and a calibration method are also presented. The results obtained seem to confirm the validity of the proposed technique.
Abstract:
The content of this thesis is organized as follows. After an introductory first chapter, Chapter 2 is devoted to introducing, as simply as possible, some of the theories used in the first two articles. First, we discuss the key points in the construction of the stochastic integral with respect to semimartingales with a spatial parameter. We then describe the main results of risk-neutral valuation theory and, finally, give a brief description of an optimization method known as duality. Chapters 3 and 4 deal with the modelling of illiquidity and form the subject of two articles. The first proposes a continuous-time model for the structure and behaviour of the limit order book. The behaviour of the portfolio of an investor using market orders is derived, and conditions that rule out arbitrage opportunities are given. Using the generalized Itô formula, the portfolio value can also be written as a stochastic differential equation. A complete example of a market model is presented, together with a calibration method. In the second article, written in collaboration with Bruno Rémillard, we propose a similar model, this time in discrete time. The question of derivative pricing is studied, and explicit solutions for the prices of European put and call options are given. Conditions specific to this model that rule out arbitrage are also given. Using the dual method, we show that the price of European options can also be written as the optimization of an expectation over a set of probability measures. Chapter 5 contains the third article of the thesis and addresses a different topic. In this article, also written in collaboration with Bruno Rémillard, we propose a time-series forecasting method based on multivariate copulas. To better understand the performance gain this method provides, we use numerical experiments to study the effect of the strength and structure of dependence on the forecasts. Since copulas make it possible to separate the dependence structure from the marginal distributions, we study the impact of different marginal distributions on forecasting performance. Finally, we also study the effect of estimation errors on forecasting performance. In all cases, we compare the performance of forecasts obtained from a bivariate series with those from a univariate series, which illustrates the advantage of this method. With a more practical interest, we present a complete application to financial data.
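To make the copula-based forecasting idea concrete, the following sketch performs one-step-ahead forecasting of a toy univariate series through a Gaussian copula with empirical marginals; both choices are illustrative simplifications of the article's multivariate setting.

```python
import numpy as np
from scipy.stats import norm, rankdata

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=500))            # toy persistent series

# Pseudo-observations of (X_t, X_{t+1}) and their normal scores.
u = rankdata(x[:-1]) / len(x)
v = rankdata(x[1:]) / len(x)
z, w = norm.ppf(u), norm.ppf(v)
rho = np.corrcoef(z, w)[0, 1]                  # Gaussian-copula dependence

# Conditional median forecast of X_{t+1} given the last observation.
u_last = rankdata(x)[-1] / (len(x) + 1)
u_pred = norm.cdf(rho * norm.ppf(u_last))
print(np.quantile(x, u_pred))                  # back-transform via the marginal
```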
Abstract:
There are now considerable expectations that semi-distributed models are useful tools for supporting catchment water quality management. However, insufficient attention has been given to evaluating the uncertainties inherent to this type of model, especially those associated with the spatial disaggregation of the catchment. The Integrated Nitrogen in Catchments model (INCA) is subjected to an extensive regionalised sensitivity analysis in application to the River Kennet, part of the groundwater-dominated upper Thames catchment, UK. The main results are: (1) model output was generally insensitive to land-phase parameters, very sensitive to groundwater parameters, including initial conditions, and significantly sensitive to in-river parameters; (2) INCA was able to produce good fits simultaneously to the available flow, nitrate and ammonium in-river data sets; (3) representing parameters as heterogeneous over the catchment (206 calibrated parameters) rather than homogeneous (24 calibrated parameters) produced a significant improvement in fit to nitrate but no significant improvement to flow, and caused a deterioration in ammonium performance; (4) the analysis indicated that calibrating the flow-related parameters first and then calibrating the remaining parameters (as opposed to calibrating all parameters together) was not a sensible strategy in this case; (5) even the parameters to which the model output was most sensitive suffered from high uncertainty due to spatial inconsistencies in the estimated optimum values, parameter equifinality and the sampling error associated with the calibration method; (6) soil and groundwater nutrient and flow data are needed to reduce uncertainty in initial conditions, residence times and nitrogen transformation parameters, and long-term historic data are needed so that key responses to changes in land-use management can be assimilated. The results indicate the general difficulty of reconciling the questions which catchment nutrient models are expected to answer with typically limited data sets and limited knowledge about suitable model structures. The results demonstrate the importance of analysing semi-distributed model uncertainties prior to model application, and illustrate the value and limitations of using Monte Carlo-based methods for doing so. (c) 2005 Elsevier B.V. All rights reserved.
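A regionalised sensitivity analysis of the kind applied here can be sketched as follows; the three-parameter toy model and the behavioural acceptance band are stand-ins for INCA, intended only to show the Monte Carlo sampling and the behavioural versus non-behavioural Kolmogorov-Smirnov comparison.

```python
import numpy as np
from scipy.stats import ks_2samp

def toy_model(p):
    # Stand-in response: strongly driven by the "groundwater" parameter,
    # weakly by the "land-phase" one (mimicking the paper's finding).
    k_land, k_gw, k_river = p
    return 0.1 * k_land + 2.0 * k_gw + 0.8 * k_river

rng = np.random.default_rng(0)
samples = rng.uniform(0.0, 1.0, size=(10000, 3))   # Monte Carlo parameter draws
outputs = np.array([toy_model(p) for p in samples])

# "Behavioural" runs: output within an acceptance band around a target.
behavioural = np.abs(outputs - 1.5) < 0.2
for i, name in enumerate(["land-phase", "groundwater", "in-river"]):
    # A large KS distance between the behavioural and non-behavioural
    # parameter marginals flags a sensitive parameter.
    stat, _ = ks_2samp(samples[behavioural, i], samples[~behavioural, i])
    print(name, round(stat, 3))
```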
Abstract:
The thesis aims to elaborate the optimum trigger speed for vehicle activated signs (VAS) and to study the effectiveness of the VAS trigger speed on drivers' behaviour. Vehicle activated signs (VAS) are speed warning signs that are activated by an individual vehicle when the driver exceeds a speed threshold. The threshold which triggers the VAS is commonly based on the driver's speed and, accordingly, is called a trigger speed. At present, the trigger speed activating the VAS is usually set to a constant value and does not consider the fact that an optimal trigger speed might exist. The optimal trigger speed significantly impacts driver behaviour. In order to fulfil the aims of this thesis, systematic vehicle speed data were collected from field experiments that utilized Doppler radar. Furthermore, calibration methods for the radar used in the experiment were developed and evaluated to provide accurate data for the experiment. The calibration method was bidirectional, consisting of data cleaning and data reconstruction. The data-cleaning calibration performed better than the calibration based on the reconstructed data. To study the effect of trigger speed on driver behaviour, the collected data were analysed by both descriptive and inferential statistics. Both showed that the change in trigger speed had an effect on the vehicle mean speed and on the standard deviation of vehicle speed. When the trigger speed was set near the speed limit, the standard deviation was high. Therefore, the choice of trigger speed cannot be based solely on the speed limit at the proposed VAS location. The optimal trigger speeds for VAS were not considered in previous studies, and the relationship between the trigger value and its consequences under different conditions had not been clearly stated. The finding from this thesis is that the optimal trigger speed should be primarily based on lowering the standard deviation rather than lowering the mean speed of vehicles. Furthermore, the optimal trigger speed should be set near the 85th percentile speed, with the goal of lowering the standard deviation.
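The data-cleaning half of such a radar calibration might look like the sketch below; the plausibility thresholds are assumptions for illustration, not the rules developed in the thesis.

```python
import numpy as np

def clean_radar_records(speed_kmh, length_m):
    # Keep only physically plausible records; the thresholds here are
    # illustrative assumptions, not the thesis's actual cleaning rules.
    keep = ((speed_kmh > 5.0) & (speed_kmh < 180.0) &
            (length_m > 1.0) & (length_m < 25.0))
    return speed_kmh[keep], length_m[keep]

speed = np.array([42.0, 310.0, 67.5, 3.0, 88.2])   # 310 km/h is spurious
length = np.array([4.2, 4.5, 16.8, 0.3, 5.1])      # 0.3 m is spurious
print(clean_radar_records(speed, length))
```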
Abstract:
Vehicle activated signs (VAS) display a warning message when drivers exceed a particular threshold. VAS are often installed on local roads to display a warning message depending on the speed of the approaching vehicles. VAS are usually powered by electricity; however, battery and solar powered VAS are also commonplace. This thesis investigated the development of an automatic trigger speed for vehicle activated signs in order to influence driver behaviour, the effect of which was measured in terms of reduced mean speed and low standard deviation. A comprehensive understanding of the effectiveness of the VAS trigger speed on driver behaviour was established by systematically collecting data. Specifically, data on time of day, speed, length and direction of the vehicle were collected for this purpose, using Doppler radar installed at the road. A data-driven calibration method for the radar used in the experiment was also developed and evaluated. Results indicate that the trigger speed of the VAS had a variable effect on drivers' speed at different sites and at different times of the day. It is evident that the optimal trigger speed should be set near the 85th percentile speed in order to lower the standard deviation. In the case of battery and solar powered VAS, trigger speeds between the 50th and 85th percentiles offered the best compromise between safety and power consumption. Results also indicate that different classes of vehicles show differences in mean speed and standard deviation; on a highway, the mean speed of cars differs slightly from the mean speed of trucks, whereas a significant difference was observed between the vehicle classes on local roads. A differential trigger speed was therefore investigated for the sake of completeness. A data-driven approach using random forests was found to be appropriate for predicting trigger speeds for the respective types of vehicles and traffic conditions. The fact that the predicted trigger speed was consistently around the 85th percentile speed justifies the choice of the automatic model.
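A sketch of the random-forest prediction step is given below, trained on synthetic radar-like records where the learning target is the 85th percentile speed per time-of-day and vehicle-class bucket; the features and bucketing are assumptions, not the thesis's actual design.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
n = 2000
hour = rng.integers(0, 24, n)                  # time of day
length = rng.uniform(3.5, 18.0, n)             # vehicle length in metres
is_truck = (length > 7.0).astype(int)          # crude car/truck split
speed = 90.0 - 10.0 * is_truck + rng.normal(0.0, 6.0, n)

# Target: the 85th percentile speed within each (hour, class) bucket,
# used here as the trigger speed to learn.
target = np.empty(n)
for h in range(24):
    for c in (0, 1):
        m = (hour == h) & (is_truck == c)
        if m.any():
            target[m] = np.percentile(speed[m], 85)

X = np.column_stack([hour, length, speed])
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, target)
print(model.predict(X[:5]))                    # predicted trigger speeds
```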
Abstract:
This work proposes a method to determine the depth of objects in a scene using a combination of stereo vision and self-calibration techniques. By determining the relative distance between the visualized objects and a robot with a stereo head, it is possible to navigate in unknown environments. Stereo vision techniques supply a depth measure through the combination of two or more images of the same scene. To obtain depth estimates of the objects in the scene, a reconstruction of the scene geometry is necessary. Such reconstruction requires the relationship between the three-dimensional world coordinates and the two-dimensional image coordinates. Once the cameras' intrinsic parameters have been obtained, this relationship between coordinate systems can be established. These parameters can be obtained through geometric camera calibration, which is generally performed by correlating image features of a calibration pattern of known dimensions. Camera self-calibration allows the intrinsic parameters to be obtained without a known calibration pattern, making it possible to compute and update them while the robot moves through an unknown environment. In this work, a self-calibration method based on three-dimensional polar coordinates to represent image features is presented. This representation is determined by the relationship between image features and the cameras' horizontal and vertical opening angles. Using the polar coordinates, it is possible to geometrically reconstruct the scene. Through the combination of the proposed techniques, it is possible to compute a depth estimate of the scene objects, allowing robot navigation in an unknown environment.
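Once the scene geometry is reconstructed, depth follows from the standard rectified-stereo relation Z = fB/d. A minimal sketch is given below, with an assumed focal length and baseline.

```python
import numpy as np

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    # Pinhole stereo relation for rectified cameras: Z = f * B / d.
    d = np.asarray(x_left, float) - np.asarray(x_right, float)
    return focal_px * baseline_m / d

# Toy usage: image columns of two matched features in the left/right views.
print(depth_from_disparity([320, 400], [300, 390],
                           focal_px=700.0, baseline_m=0.12))
```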
Abstract:
This dissertation aims at the development of an experimental device to quantitatively determine the content of benzene, toluene and xylenes (BTX) in the atmosphere. BTX are extremely volatile solvents and therefore play an important role in atmospheric chemistry, being precursors in tropospheric ozone formation. In this work a new BTX standard gas in nitrogen was produced for stagnant systems. The aim of this dissertation is to develop a new, simpler and cheaper method to quantify and monitor BTX in air using solid phase microextraction/gas chromatography/mass spectrometry (SPME/GC/MS). The features of the proposed calibration method are presented in this dissertation. SPME sampling was carried out under non-equilibrium conditions using a Carboxen/PDMS fiber exposed for 10 min to standard gas mixtures. It was observed that the main parameters affecting the extraction process are sampling time and concentration. The results for the BTX multicomponent system studied showed a linear and a nonlinear range. In the nonlinear range, the effect of competition by selective adsorption is remarkable, with the affinity order p-xylene > toluene > benzene. This behavior represents a limitation of the method, though it is in accordance with the literature. Furthermore, this behavior does not prevent the application of the technique outside the nonlinear region to quantify the BTX content of the atmosphere.
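Within the linear range, quantification reduces to an external calibration curve. The sketch below fits such a curve to assumed standard concentrations and synthetic peak areas, then inverts it for an unknown sample.

```python
import numpy as np

# Assumed standard concentrations (ppbv) and synthetic peak areas within the
# linear range of the fiber response.
conc = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
area = np.array([1.1e4, 2.0e4, 4.2e4, 8.1e4, 1.58e5])

slope, intercept = np.polyfit(conc, area, 1)       # least-squares line
r2 = np.corrcoef(conc, area)[0, 1] ** 2
print(slope, intercept, r2)

# Quantify an unknown air sample from its measured peak area.
unknown_area = 5.3e4
print((unknown_area - intercept) / slope)
```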
Abstract:
This work combines the potential of near-infrared (NIR) spectroscopy and chemometrics to determine the content of diclofenac tablets without destruction of the sample, using ultraviolet spectroscopy, one of the official methods, as the reference. In the construction of the multivariate calibration models, several types of pre-processing of the NIR spectral data were studied, such as scatter correction and first derivative. The regression method used in the construction of the calibration models was PLS (partial least squares). The NIR spectroscopic data of a set of 90 tablets were divided into two sets (calibration and prediction): 54 samples were used for calibration and 36 for prediction, since the calibration method used was full cross-validation, which eliminates the need for a separate validation set. The models were evaluated by observing the values of the correlation coefficient R², the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP). The values predicted for the remaining 36 samples were consistent with those obtained by UV spectroscopy.
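A minimal sketch of this PLS-with-full-cross-validation workflow is shown below using scikit-learn, with synthetic spectra standing in for the 90 tablets; the number of latent variables is an arbitrary placeholder.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(0)
X = rng.normal(size=(54, 200))                       # 54 calibration "spectra"
y = X[:, :5].sum(axis=1) + rng.normal(0.0, 0.1, 54)  # synthetic drug content

# Full (leave-one-out) cross-validation on the calibration set.
pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
print(np.sqrt(np.mean((y - y_cv) ** 2)))             # RMSECV

# Fit on all 54 samples, then predict an external set (here synthetic).
pls.fit(X, y)
X_new = rng.normal(size=(36, 200))
print(pls.predict(X_new)[:3].ravel())
```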
Abstract:
Driven by the challenges involved in the development of new advanced materials with unusual drug delivery profiles capable of improving the therapeutic and toxicological properties of existing cancer chemotherapy, the one-pot sol-gel synthesis of flexible, transparent and insoluble urea-cross-linked polyether-siloxane hybrids has recently been developed. In this one-pot synthesis, the strong interaction between the antitumor cisplatin (CisPt) molecules and the ureasil-poly(propylene oxide) (PPO) hybrid matrix gives rise to the incorporation and release of an unknown CisPt-derived species, hindering the quantitative determination of the drug release pattern by the conventional UV-Vis absorption technique. In this article, we report the use of an original synchrotron radiation calibration method based on the combination of XAS and UV-Vis for the quantitative determination of the amount of Pt-based molecules released in water. Thanks to the combination of UV-Vis, XAS and Raman techniques, we demonstrated that both the CisPt molecules and the CisPt-derived species are loaded into a ureasil-PPO/ureasil-poly(ethylene oxide) (PEO) hybrid blend matrix. The experimentally determined molar extinction coefficient of the CisPt-derived species loaded into the ureasil-PPO hybrid matrix enabled the simultaneous time-resolved monitoring of each Pt species released from this hybrid blend matrix.
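Once the molar extinction coefficient of the released species is known, the UV-Vis quantitation is a direct application of the Beer-Lambert law, as in the sketch below; the coefficient value shown is an assumed placeholder, not the experimentally determined one.

```python
def concentration_from_absorbance(absorbance, epsilon, path_cm=1.0):
    # Beer-Lambert law: A = epsilon * l * c  =>  c = A / (epsilon * l).
    return absorbance / (epsilon * path_cm)

# Assumed molar extinction coefficient (M^-1 cm^-1) for the released species;
# the real value was determined experimentally in the study.
eps_pt = 120.0
print(concentration_from_absorbance(0.35, eps_pt))  # concentration in M
```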
Abstract:
The external detector method (EDM) is a widely used technique in fission track thermochronology (FTT) in which two different minerals are employed concomitantly: spontaneous tracks are observed in apatite and induced ones in the muscovite external detector. They show intrinsic differences in detection and etching properties that should be taken into account. In this work, new geometry factor values, g, in apatite were obtained by directly measuring the ρed/ρis ratios and independently determined [GQR]ed/is values through the measurement of projected lengths. Five mounts, two of which were large-area prismatic sections and three composed of randomly oriented pieces, were used to determine the g-values. A side effect of applying the EDM is that the value of the initial confined induced fission track length, L0, is not measured in routine analyses. The L0-value is an important parameter for quantifying with good confidence the degree of annealing of the spontaneous fission tracks in samples of unknown age, and is essential for accurate thermal history modeling. The impact of using arbitrary L0-values on the inference of sample thermal history is investigated and discussed. The measurement of the L0-value for each sample to be dated, using an extra irradiated apatite mount, is proposed. This extra mount can also be used for determining the g-value as an extension of the ρed/ρis ratio method. Eight apatite samples from crystalline basement, with grains at random orientation, were used to determine the g-values. The results are statistically in agreement with the values found for apatite samples (from Durango, Mexico) measured in prismatic section and also measured at random orientation. There was no observable variation in efficiency with crystal orientation, showing that it is relatively safe to use non-prismatic grains, especially in samples with a paucity of grains, as is the case for most basin samples. Implications for the ζ-calibration and for the calibration of the direct (spectrometer-based) fission-track dating are also discussed.
Abstract:
In this work, methodologies are proposed for optimizing the local shape parameter c of the RPIM (Radial Point Interpolation Method). With the presented techniques, it is possible to reduce the matrix-inversion problems common in meshless methods and also to guarantee a greater degree of freedom and precision in the use of the technique, since a semi-automatic definition of the most suitable shape factors for each support domain becomes possible. In addition, an algorithm based on Line Sweep is presented for the efficient generation of the support domains.
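One simple way to set a local shape parameter semi-automatically is to monitor the conditioning of the RBF moment matrix over each support domain, as in the sketch below; the multiquadric basis and the condition-number cap are illustrative assumptions, not the specific criterion proposed in the work.

```python
import numpy as np

def mq_moment_matrix(nodes, c):
    # Multiquadric RBF moment matrix over one support domain.
    d2 = (nodes[:, None] - nodes[None, :]) ** 2
    return np.sqrt(d2 + c ** 2)

nodes = np.linspace(0.0, 1.0, 9)          # 1-D support-domain nodes
candidates = np.geomspace(1e-3, 10.0, 50)

# Pick the largest shape parameter whose moment matrix is still safely
# invertible; the 1e12 condition-number cap is an illustrative threshold.
usable = [c for c in candidates
          if np.linalg.cond(mq_moment_matrix(nodes, c)) < 1e12]
print(max(usable) if usable else "no usable shape parameter")
```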
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)