965 results for linear calibration model
Abstract:
Anthropomorphic model observers are mathematical algorithms applied to images with the ultimate goal of predicting human signal detection and classification accuracy across a variety of backgrounds, image acquisitions, and display conditions. A limitation of current channelized model observers is their inability to handle irregularly shaped signals, which are common in clinical images, without a large number of directional channels. Here, we derive a new linear model observer based on convolution channels, which we refer to as the "Filtered Channel observer" (FCO), as an extension of the channelized Hotelling observer (CHO) and the nonprewhitening observer with an eye filter (NPWE). In analogy to the CHO, this linear model observer can take the form of a single template with an external noise term. To compare with human observers, we tested signals with irregular and asymmetrical shapes, spanning sizes from lesions down to microcalcifications, in 4-AFC breast tomosynthesis detection tasks with three different contrasts for each case. Whereas humans uniformly outperformed conventional CHOs, the FCO outperformed humans for every signal with only one exception. Additive internal noise in the models allowed us to degrade model performance and match human performance. However, no single internal noise component could match all the human performances across the signal shape, size, and contrast conditions. This suggests either that the internal noise varies across signals or that the model cannot entirely capture the human detection strategy. Nevertheless, the FCO model offers an efficient way to approximate human observer performance for non-symmetric signals.
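To make the linear-template idea concrete, here is a minimal sketch of a channelized Hotelling observer, the baseline that the FCO extends. The synthetic white-noise images, the toy signal profile, and the random channel matrix are illustrative assumptions, not the paper's tomosynthesis data or filtered channels:

```python
import numpy as np

rng = np.random.default_rng(0)
npix, nch, n = 64 * 64, 10, 500

# Hypothetical channel matrix; random channels stand in for the paper's
# directional/convolution channels.
U = rng.normal(size=(npix, nch))

signal = np.zeros(npix)
signal[:50] = 0.3                      # toy signal profile

g_absent = rng.normal(size=(n, npix))  # noise-only images
g_present = g_absent + signal          # signal-present images

v_a = g_absent @ U                     # channel outputs
v_p = g_present @ U

S = 0.5 * (np.cov(v_a.T) + np.cov(v_p.T))          # pooled covariance
w = np.linalg.solve(S, v_p.mean(0) - v_a.mean(0))  # Hotelling template

t_p, t_a = v_p @ w, v_a @ w            # decision variables
d_prime = (t_p.mean() - t_a.mean()) / np.sqrt(0.5 * (t_p.var() + t_a.var()))
print(f"detectability d' = {d_prime:.2f}")
```

Adding zero-mean Gaussian noise to the decision variables t_p and t_a is the degradation mechanism the abstract describes for matching human performance.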
Abstract:
This chapter presents possible uses and examples of Monte Carlo methods for the evaluation of uncertainties in the field of radionuclide metrology. The method is already well documented in GUM supplement 1, but here we present a more restrictive approach, where the quantities of interest calculated by the Monte Carlo method are estimators of the expectation and standard deviation of the measurand, and the Monte Carlo method is used to propagate the uncertainties of the input parameters through the measurement model. This approach is illustrated by an example of the activity calibration of a 103Pd source by liquid scintillation counting and the calculation of a linear regression on experimental data points. An electronic supplement presents some algorithms which may be used to generate random numbers with various statistical distributions, for the implementation of this Monte Carlo calculation method.
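As a sketch of the restricted Monte Carlo approach described here, where estimators of the expectation and standard deviation of the measurand are computed by propagating the input uncertainties through the measurement model, consider a simple model A = R/ε. The input distributions and values are illustrative assumptions, not the 103Pd calibration data:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 10**5  # number of Monte Carlo trials

# Hypothetical input quantities: count rate (Gaussian) and detection
# efficiency (uniform), standing in for the liquid scintillation inputs.
rate = rng.normal(1250.0, 5.0, M)   # s^-1
eff = rng.uniform(0.92, 0.96, M)    # dimensionless

activity = rate / eff               # measurement model A = R / eps

# Estimators of the expectation and standard deviation of the measurand.
print(f"A = {activity.mean():.1f} Bq, u(A) = {activity.std(ddof=1):.1f} Bq")
```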
Abstract:
Rosin is a natural product from pine forests and is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids; Ca- and Ca/Mg-resinates in particular find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics were studied in order to model the non-linear solution viscosity increase during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses, using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In the kinetic studies, two main reaction steps were observed during the syntheses: first, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C. Rosin oil is formed during the decarboxylation step, causing significant mass loss as it evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses, and different decarboxylation mechanisms were proposed for the free and the solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses over a wide resinate concentration region, a wide range of viscosity values, and different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Because the reaction temperature is lower than in the traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
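The abstract does not give the functional form of the two-parameter semi-empirical viscosity model, so the exponential form below is only an assumed stand-in showing how such a two-parameter fit might be obtained; the data points are invented for illustration:

```python
import numpy as np

# Toy data: solution viscosity vs. resinate concentration above c_crit.
# eta = a * exp(b * c) is an assumed form, not the thesis's actual model.
c = np.array([0.40, 0.45, 0.50, 0.55, 0.60])     # mass fraction
eta = np.array([0.08, 0.15, 0.35, 0.90, 2.60])   # Pa s (illustrative)

b, ln_a = np.polyfit(c, np.log(eta), 1)          # linearized fit
print(f"a = {np.exp(ln_a):.3g}, b = {b:.3g}")    # the two estimated parameters
```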
Abstract:
The aim of the thesis is to study the principles of the permanent magnet linear synchronous motor (PMLSM) and to develop a simulator model of a direct force controlled PMLSM. The basic motor model is described by the traditional two-axis equations. End effects, cogging force, and a friction model are also included in the final motor model. Direct thrust force control of the PMLSM is described and modelled. The full system model is verified by comparison with the data provided by the motor manufacturer.
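A minimal sketch of the traditional two-axis (dq) equations for a PMLSM, integrated numerically. The parameter values are hypothetical, and the end effects, cogging force, and friction that the thesis adds to the final model are omitted here:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical PMLSM parameters (not the manufacturer's data):
# resistance, d/q inductances, PM flux linkage, pole pitch, mover mass.
R, Ld, Lq, psi, tau, m = 2.0, 0.02, 0.02, 0.15, 0.05, 5.0

def pmlsm(t, x, vd=0.0, vq=50.0):
    id_, iq, v = x                       # d/q currents and mover speed
    we = np.pi / tau * v                 # electrical angular frequency
    did = (vd - R * id_ + we * Lq * iq) / Ld
    diq = (vq - R * iq - we * (Ld * id_ + psi)) / Lq
    F = 1.5 * np.pi / tau * (psi * iq + (Ld - Lq) * id_ * iq)  # thrust
    dv = F / m                           # no load or friction in this sketch
    return [did, diq, dv]

sol = solve_ivp(pmlsm, (0, 0.2), [0, 0, 0], max_step=1e-4)
print(f"final speed = {sol.y[2, -1]:.2f} m/s")
```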
Abstract:
This thesis concentrates on developing a practical local approach methodology, based on micromechanical models, for the analysis of ductile fracture of welded joints. Two major problems involved in the local approach have been studied in detail: the dilational constitutive relation reflecting the softening behaviour of the material, and the failure criterion associated with the constitutive equation. Firstly, considerable effort was devoted to the numerical integration and computer implementation of the non-trivial dilational Gurson-Tvergaard model. Considering the weaknesses of the widely used Euler forward integration algorithms, a family of generalized mid-point algorithms is proposed for the Gurson-Tvergaard model. Correspondingly, based on the decomposition of stresses into hydrostatic and deviatoric parts, an explicit seven-parameter expression for the consistent tangent moduli of the algorithms is presented. This explicit formula avoids any matrix inversion during numerical iteration and thus greatly facilitates the computer implementation of the algorithms and increases the efficiency of the code. The accuracy of the proposed algorithms and of other conventional algorithms has been assessed in a systematic manner in order to identify the best algorithm for this study. The accurate and efficient performance of the present finite element implementation of the proposed algorithms has been demonstrated by various numerical examples. It was found that the true mid-point algorithm (α = 0.5) is the most accurate one when the deviatoric strain increment is radial to the yield surface, and that it is very important to use the consistent tangent moduli in the Newton iteration procedure. Secondly, an assessment has been made of the consistency of current local failure criteria for ductile fracture: the critical void growth criterion, the constant critical void volume fraction criterion, and Thomason's plastic limit load failure criterion. Significant differences in the ductility predicted by the three criteria were found. By assuming that the void grows spherically and using the void volume fraction from the Gurson-Tvergaard model to calculate the current void matrix geometry, Thomason's failure criterion has been modified and a new failure criterion for the Gurson-Tvergaard model is presented. Comparison with Koplik and Needleman's finite element results shows that the new failure criterion is indeed fairly accurate. A novel feature of the new failure criterion is that a mechanism for void coalescence is incorporated into the constitutive model; hence material failure is a natural result of the development of macroscopic plastic flow and the microscopic internal necking mechanism. Under the new failure criterion, the critical void volume fraction is not a material constant; the initial void volume fraction and/or the void nucleation parameters essentially control the material failure. This feature is very desirable and makes the numerical calibration of the void nucleation parameter(s) possible and physically sound. Thirdly, a local approach methodology based on the above two major contributions has been built in ABAQUS via the user material subroutine UMAT and applied to welded T-joints. Using void nucleation parameters calibrated from simple smooth and notched specimens, it was found that the fracture behaviour of the welded T-joints can be well predicted with the present methodology. This application has shown how the damage parameters of both the base material and the heat affected zone (HAZ) material can be obtained in a step-by-step manner, and how useful and capable the local approach methodology is in the analysis of fracture behaviour and crack development, as well as in the structural integrity assessment of practical problems involving non-homogeneous materials. Finally, a procedure for the possible engineering application of the present methodology is suggested and discussed.
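For reference, the Gurson-Tvergaard yield function that underlies this constitutive model takes the standard form evaluated below; the stress values are illustrative, and the thesis's generalized mid-point stress update and consistent tangent moduli are not reproduced here:

```python
import numpy as np

def gurson_tvergaard(sig_eq, sig_m, sig_y, f, q1=1.5, q2=1.0):
    """Gurson-Tvergaard yield function; <= 0 means inside the yield surface."""
    return ((sig_eq / sig_y) ** 2
            + 2.0 * q1 * f * np.cosh(1.5 * q2 * sig_m / sig_y)
            - 1.0 - (q1 * f) ** 2)

# Illustrative state: von Mises stress 180 MPa, mean stress 250 MPa,
# matrix yield stress 300 MPa, void volume fraction 5%.
print(gurson_tvergaard(sig_eq=180.0, sig_m=250.0, sig_y=300.0, f=0.05))
```

In a generalized mid-point update, the plastic flow direction is evaluated at the intermediate state σ_{n+α} = (1 − α)σ_n + ασ_{n+1}; α = 0.5 gives the true mid-point rule that the thesis found most accurate.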
Abstract:
This technical note describes the construction of a low-cost optical detector. The device is composed of a high-sensitivity linear light sensor (model ILX554) and a microcontroller. The performance of the detector was demonstrated by recording emission and Raman spectra of several atomic systems, and the results reproduce those found in the literature.
Abstract:
In this work, a spectrophotometric methodology was applied to determine epinephrine (EP), uric acid (UA), and acetaminophen (AC) in pharmaceutical formulations and in spiked human serum, plasma, and urine using a multivariate approach. Multivariate calibration methods, such as partial least squares (PLS) and its derivative-based variants, were used to obtain a model for the simultaneous determination of EP, UA, and AC with good figures of merit; the mixture design covered the ranges 1.8 - 35.3, 1.7 - 16.8, and 1.5 - 12.1 µg mL-1, respectively. The second-derivative PLS model showed recoveries of 95.3 - 103.3%, 93.3 - 104.0%, and 94.0 - 105.5% for EP, UA, and AC, respectively.
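A minimal sketch of multivariate PLS calibration for a three-analyte mixture; the synthetic spectra (overlapping Gaussian bands) and all numbers are invented stand-ins for the study's UV-Vis data, using the concentration ranges quoted above:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)

# Synthetic stand-in spectra: 60 mixtures x 200 wavelengths, built from
# three overlapping Gaussian bands (one per analyte: EP, UA, AC).
wl = np.linspace(0, 1, 200)
bands = np.exp(-((wl[None, :] - np.array([[0.3], [0.5], [0.7]])) / 0.08) ** 2)
C = rng.uniform([1.8, 1.7, 1.5], [35.3, 16.8, 12.1], size=(60, 3))  # ug/mL
X = C @ bands + rng.normal(0, 0.002, (60, 200))                     # spectra

pls = PLSRegression(n_components=4)
C_pred = cross_val_predict(pls, X, C)                # cross-validated predictions
rmsecv = np.sqrt(((C - C_pred) ** 2).mean(axis=0))
print("RMSECV (EP, UA, AC):", np.round(rmsecv, 2))
```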
Abstract:
The Practical Stochastic Model is a simple and robust method for describing coupled chemical reactions. A connection between this stochastic method and a deterministic method was first established to understand how the parameters and variables describing concentration in the two methods are related. Two main concepts were needed to make this connection: the filling of compartments (or dilutions) and the rate of reaction enhancement. The parameters, variables, and time of the stochastic method were scaled with the size of the compartment and compared with the deterministic method. The deterministic approach was employed as an initial reference to achieve a consistent stochastic result. Finally, an independent, robust stochastic method was obtained, which could be compared with the Stochastic Simulation Algorithm developed by Gillespie (1977). The Practical Stochastic Model produced absolute values that were essential for describing non-linear chemical reactions with a simple structure, and it allowed a correct description of the chemical kinetics.
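For reference, here is a minimal sketch of the comparison method named above, Gillespie's Stochastic Simulation Algorithm, applied to a toy non-linear (bimolecular) reaction A + B → C; the rate constant and initial populations are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

def ssa(a0, b0, c1=0.005, t_end=10.0):
    """Gillespie SSA for the single reaction A + B -> C."""
    t, A, B, C = 0.0, a0, b0, 0
    while t < t_end:
        prop = c1 * A * B                 # propensity of A + B -> C
        if prop == 0:
            break
        t += rng.exponential(1.0 / prop)  # waiting time to next reaction
        A, B, C = A - 1, B - 1, C + 1     # fire the reaction
    return A, B, C

print(ssa(100, 80))
```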
Abstract:
A model to estimate damage caused by gray leaf spot of corn (Cercospora zeae-maydis) was developed from experimental field data gathered during the 2000/01 summer seasons and the second (January-sown) crop season of 2001 in southwestern Goiás state. Three corn hybrids were grown over two seasons and on two sites, resulting in 12 experimental plots. A disease intensity gradient (lesions per leaf) was generated by applying five different doses of the fungicide propiconazole three times over the season. From tasseling onward, disease intensity on the ear leaf (El) and on leaves El - 1, El - 2, El + 1, and El + 2 was evaluated weekly. A manual harvest at the physiological ripening stage was followed by grain drying and cleaning, and grain yield in kg/ha was estimated. Regression analysis between grain yield and all combinations of the number of lesions on each leaf type generated thirty linear equations representing the damage function. Before being used to estimate losses caused by different disease intensities at different corn growth stages, these models should first be validated. The damage coefficients may be used to determine the economic damage threshold.
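The damage function here is a set of simple linear regressions of yield on lesion counts. A minimal sketch with invented plot data (the study's actual measurements are not reproduced):

```python
import numpy as np

# Hypothetical plot data: lesions per ear leaf vs. grain yield (kg/ha).
lesions = np.array([5, 20, 45, 80, 120, 160])
yield_ = np.array([9100, 8800, 8200, 7500, 6800, 6100])

# Damage function: yield = b0 + b1 * lesions; b1 is the damage coefficient.
b1, b0 = np.polyfit(lesions, yield_, 1)
print(f"yield = {b0:.0f} {b1:+.1f} * lesions  (kg/ha per lesion)")
```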
Abstract:
Changes in the extinction coefficients (ε) of manganese phthalocyanine (Mn-Pc) were studied in different organic solvents and related to solvent polarity scales (Kosower's Z values, Dimroth's E_T values, donor numbers (DN), and linear solvation energy relationships (LSER), or linear free energy relationships (LFER)), to theoretical molecular orbital calculations, and to ligand/solvent coordination processes, in order to predict the molecular interaction with the medium and to identify the predominant intermolecular forces.
Abstract:
The aim of the present work was to provide a faster, simpler, and less expensive way to analyze the sulfur content of diesel samples than the standard methods currently used. Samples of diesel fuel with sulfur concentrations ranging from 400 to 2500 mg kg-1 were analyzed by two methodologies: X-ray fluorescence, according to ASTM D4294, and Fourier transform infrared spectrometry (FTIR). The spectral data obtained from FTIR were used to build multivariate calibration models by partial least squares (PLS). Four models were built in three different ways: 1) a model using the full spectrum (665 to 4000 cm-1), 2) two models using specific spectral regions, and 3) a model with variables selected by the classic stepwise variable selection method. The stepwise model and the model built with the spectral regions 665 - 856 cm-1 and 1145 - 2717 cm-1 gave the best results in the determination of sulfur content.
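A minimal sketch of stepwise-style variable selection on spectral data; the synthetic FTIR matrix is invented, and scikit-learn's forward SequentialFeatureSelector stands in for the classic F-test-based stepwise procedure:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.feature_selection import SequentialFeatureSelector

rng = np.random.default_rng(4)

# Synthetic stand-in: 40 diesel samples x 120 wavenumber variables, with
# sulfur (400-2500 mg/kg) influencing a few bands only.
S = rng.uniform(400, 2500, 40)
X = rng.normal(0, 1, (40, 120))
X[:, [10, 55, 90]] += S[:, None] / 500.0      # informative variables

# Forward selection of a small variable subset, scored by cross-validation.
sfs = SequentialFeatureSelector(LinearRegression(), n_features_to_select=5,
                                direction="forward", cv=5)
sfs.fit(X, S)
print("selected variable indices:", np.where(sfs.get_support())[0])
```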
Centralized Motion Control of a Linear Tooth Belt Drive: Analysis of the Performance and Limitations
Abstract:
A centralized robust position control for an electrically driven tooth belt drive is designed in this doctoral thesis. Both a cascaded control structure and a PID-based position controller are discussed. The performance and the limitations of the system are analyzed, and design principles for the mechanical structure and the control design are given. These design principles are also suitable for most motion control applications where mechanical resonance frequencies and control loop delays are present. One of the major challenges in the design of a controller for machinery applications is that the values of the parameters in the system model (parametric uncertainty) or the system model itself (non-parametric uncertainty) are seldom known accurately in advance. In this thesis, a systematic analysis of the parameter uncertainty of the linear tooth belt drive model is presented, and the effect of the variation of a single parameter on the performance of the total system is shown. The total variation of the model parameters is taken into account in the control design phase using Quantitative Feedback Theory (QFT). The thesis also introduces a new method to analyze reference feedforward controllers using QFT. The performance of the designed controllers is verified by experimental measurements, which confirm the control design principles given in this thesis.
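As a toy illustration of the PID-based position loop discussed above, here is a discrete-time sketch on a rigid-body plant; the gains, mass, and sample time are hypothetical, and the thesis's resonant belt model and QFT design are not reproduced:

```python
# Discrete PID position loop on a rigid-body stand-in for the belt drive.
dt, m = 1e-3, 2.0                       # sample time [s], mover mass [kg]
kp, ki, kd = 400.0, 50.0, 30.0          # hypothetical PID gains

x = v = integ = e_prev = 0.0
ref = 0.1                               # position reference [m]
for _ in range(3000):                   # simulate 3 s
    e = ref - x
    integ += e * dt
    force = kp * e + ki * integ + kd * (e - e_prev) / dt
    e_prev = e
    v += force / m * dt                 # plant: F = m * a
    x += v * dt
print(f"position after 3 s: {x:.4f} m")
```

With a resonant two-mass model and loop delay, gains this aggressive would need the kind of robustness analysis the thesis performs with QFT.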
Abstract:
Switched reluctance technology is probably best suited for industrial low-speed or zero-speed applications, where the power can be small but the torque, or the force in linear-movement cases, may be relatively high. Because of its simple structure, the SR motor is an interesting alternative for low-power applications where pneumatic or hydraulic linear drives are to be avoided. This study analyses the basic parts of an LSR motor, the two mover poles and one stator pole, which form the "basic pole pair" of linear-movement transversal-flux switched-reluctance motors. The static properties of the basic pole pair are modelled, and the basic design rules are derived. The models developed are validated with experiments: a one-sided, one-pole-pair transversal-flux switched-reluctance linear motor prototype is demonstrated and its static properties are measured. The modelling of the static properties is performed with FEM calculations. Two-dimensional models are accurate enough to capture the static key features for the basic dimensioning of LSR motors; three-dimensional models must be used to obtain the most accurate calculations of static traction force production. The developed dimensioning and modelling methods, which could be systematically validated by laboratory measurements, are the most significant contributions of this thesis.
Abstract:
Forest inventories are used to estimate forest characteristics and forest condition for many different applications: operational tree logging for the forest industry, forest health estimation, carbon balance estimation, land-cover and land-use analysis to avoid forest degradation, and so on. Recent inventory methods rely heavily on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs, or aerial laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjustable to the local conditions of the study area at hand, all data handling and parameter tuning should be objective and automated as far as possible, and the methods need to be robust when applied to different forest types. Since there generally are no comprehensive direct physical models connecting the remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of "black-box" type, connecting the independent auxiliary data to the dependent response data through arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of a model based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. Connecting the auxiliary data to the inventory parameters also requires field work, which is expensive in large study areas with dense forests and should therefore be minimized. To obtain cost-efficient inventories, field work can partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory: the mathematical model parameter definition steps are automated, and cost-efficiency is improved by setting up a procedure that utilizes databases in the estimation of the characteristics of new areas.
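One automated, objective way to select variables from hundreds of collinear candidates is cross-validated L1 regularization; this sketch with invented plot-level data illustrates the kind of selection step the thesis automates, not its actual method:

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)

# Synthetic stand-in: 150 field plots x 80 partly collinear remote sensing
# features; the response (e.g. stem volume) depends on a handful of them.
Z = rng.normal(size=(150, 80))
X = Z + 0.5 * np.roll(Z, 1, axis=1)          # induce collinearity
vol = X[:, [3, 17, 42]] @ [40.0, -25.0, 15.0] + rng.normal(0, 5, 150)

# Cross-validated LASSO: regularization strength tuned automatically.
lasso = LassoCV(cv=5).fit(X, vol)
print("selected features:", np.flatnonzero(np.abs(lasso.coef_) > 1e-6))
```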
Abstract:
A model to estimate yield loss caused by Asian soybean rust (ASR) (Phakopsora pachyrhizi) was developed by collecting data from field experiments during the 2009/10 and 2010/11 growing seasons in Passo Fundo, RS. The disease intensity gradient, evaluated at the phenological stages R5.3, R5.4, and R5.5 on the basis of leaflet incidence (LI) and the numbers of uredinia and lesions/cm2, was generated by applying azoxystrobin 60 g a.i./ha + cyproconazole 24 g a.i./ha + 0.5% of the adjuvant Nimbus. The first application occurred when LI = 25%, and the remaining ones followed at 10-, 15-, 20- and 25-day intervals. Harvest occurred at physiological maturity and was followed by grain drying and cleaning. Regression analysis between grain yield and the disease intensity assessment criteria generated 56 linear equations of the yield loss function. The greatest loss was observed at the earliest growth stage, and the yield loss coefficients ranged from 3.41 to 9.02 kg/ha per 1% of leaflet incidence, from 13.34 to 127.4 kg/ha per lesion/cm2 for lesion density, and from 5.53 to 110.0 kg/ha per uredinium/cm2 for uredinium density.