918 results for least squares method
Abstract:
Time-lapse geophysical measurements are widely used to monitor the movement of water and solutes through the subsurface. Yet commonly used deterministic least squares inversions typically suffer from relatively poor mass recovery, spread overestimation, and limited ability to appropriately estimate nonlinear model uncertainty. We describe herein a novel inversion methodology designed to reconstruct the three-dimensional distribution of a tracer anomaly from geophysical data and provide consistent uncertainty estimates using Markov chain Monte Carlo simulation. Posterior sampling is made tractable by using a lower-dimensional model space related both to the Legendre moments of the plume and to predefined morphological constraints. Benchmark results using cross-hole ground-penetrating radar travel time measurements during two synthetic water tracer application experiments involving increasingly complex plume geometries show that the proposed method not only conserves mass but also provides better estimates of plume morphology and posterior model uncertainty than deterministic inversion results.
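The low-dimensional parameterization rests on Legendre moments of the plume. As a hedged illustration (the grid, plume shape, and moment orders below are invented for the sketch, not the authors' actual setup), the moments of a 2-D anomaly can be computed with numpy:

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_moments(img, max_order):
    """Legendre moments lambda_{mn} of a 2-D field sampled on [-1, 1]^2."""
    ny, nx = img.shape
    x = np.linspace(-1.0, 1.0, nx)
    y = np.linspace(-1.0, 1.0, ny)
    dx, dy = x[1] - x[0], y[1] - y[0]
    lam = np.zeros((max_order + 1, max_order + 1))
    for m in range(max_order + 1):
        Pm = legendre.legval(y, np.eye(max_order + 1)[m])  # P_m on the y grid
        for n in range(max_order + 1):
            Pn = legendre.legval(x, np.eye(max_order + 1)[n])
            norm = (2 * m + 1) * (2 * n + 1) / 4.0          # orthogonality factor
            lam[m, n] = norm * dy * dx * np.sum(Pm[:, None] * Pn[None, :] * img)
    return lam

# hypothetical tracer anomaly: a single Gaussian blob
yy, xx = np.mgrid[-1:1:64j, -1:1:64j]
plume = np.exp(-((xx - 0.2) ** 2 + yy ** 2) / 0.1)
lam = legendre_moments(plume, max_order=2)
```

Low-order moments of this kind capture mass, centre, and spread of the anomaly, which is what makes a handful of them a workable model space for MCMC sampling.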
Abstract:
The analysis of multiexponential decays is challenging because of their complex nature. When analyzing these signals, not only the parameters, but also the orders of the models, have to be estimated. We present an improved spectroscopic technique specially suited for this purpose. The proposed algorithm combines an iterative linear filter with an iterative deconvolution method. A thorough analysis of the noise effect is presented. The performance is tested with synthetic and experimental data.
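For context, a plain nonlinear least-squares fit of a two-component decay (the baseline approach, not the authors' filter/deconvolution algorithm) looks like this with scipy; the amplitudes, time constants, and noise level are invented:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Two-component multiexponential decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
y = biexp(t, 3.0, 0.5, 1.0, 4.0) + 0.01 * rng.standard_normal(t.size)

# the model order (here 2) must be fixed before fitting
popt, pcov = curve_fit(biexp, t, y, p0=(2.0, 1.0, 2.0, 3.0))
```

The difficulty the abstract refers to is visible here: the fit requires a model order chosen in advance and a reasonable starting point, which is exactly what the proposed order-estimation technique addresses.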
Abstract:
Distance-based regression is a prediction method consisting of two steps: from the distances between observations we obtain latent variables, which become the regressors in an ordinary least squares linear model. The distances are computed from the original predictors using a suitable dissimilarity function. Since, in general, the regressors are related nonlinearly to the response, their selection with the usual F test is not possible. In this work we propose a solution to this predictor selection problem by defining generalized test statistics and adapting a nonparametric bootstrap method for the estimation of their p-values. A numerical example with automobile insurance data is included.
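The two-step scheme can be sketched in numpy: classical scaling of a distance matrix yields the latent variables, which then enter an OLS fit. All data below are synthetic placeholders; with a non-Euclidean dissimilarity only the distance computation would change.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((40, 3))                    # original predictors
y = X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(40)

# step 1: dissimilarities (here plain squared Euclidean distances)
D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)

# step 2: classical scaling -> latent variables (principal coordinates)
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                 # centering matrix
G = -0.5 * J @ D2 @ J                               # inner-product (Gram) matrix
vals, vecs = np.linalg.eigh(G)
keep = vals > 1e-8
Z = vecs[:, keep] * np.sqrt(vals[keep])             # latent coordinates

# step 3: ordinary least squares on the latent variables
A = np.column_stack([np.ones(n), Z])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
yhat = A @ beta
```

With Euclidean distances the latent variables span the same space as the centred predictors, so this fit coincides with OLS on X; the method becomes genuinely different (and the usual F test breaks down) once a nonlinear dissimilarity is used.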
Abstract:
This article presents an experimental study of the ability of several classifiers to perform multi-class classification of cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask forensic laboratories to determine the chemotype of a seized cannabis plant and then to conclude whether the plantation is legal or not. This classification is mainly performed when the plant is mature, as required by the EU official protocol, which makes the classification of cannabis seedlings a time-consuming and costly procedure. A previous study by the authors investigated this problem [1] and showed that it is possible to differentiate between drug-type (illegal) and fibre-type (legal) cannabis at an early stage of growth using gas chromatography interfaced with mass spectrometry (GC-MS), based on the relative proportions of eight major leaf compounds. The aims of the present work are, on the one hand, to continue the former work and optimize the methodology for the discrimination of drug- and fibre-type cannabis developed in the previous study and, on the other hand, to investigate the possibility of predicting illegal cannabis varieties. Seven classifiers for differentiating between cannabis seedlings are evaluated in this paper, namely Linear Discriminant Analysis (LDA), Partial Least Squares Discriminant Analysis (PLS-DA), Nearest Neighbour Classification (NNC), Learning Vector Quantization (LVQ), Radial Basis Function Support Vector Machines (RBF SVMs), Random Forest (RF) and Artificial Neural Networks (ANN). The performance of each method was assessed using the same analytical dataset, which consists of 861 samples split into drug- and fibre-type cannabis, with the drug-type cannabis made up of 12 varieties (i.e. 12 classes). The results show that linear classifiers are not able to manage the distribution of classes, in which some overlap areas exist, for either classification problem.
Unlike linear classifiers, NNC and RBF SVMs best differentiate cannabis samples for both 2-class and 12-class classifications, with average classification results of up to 99% and 98%, respectively. Furthermore, RBF SVMs correctly classified as drug-type cannabis an independent validation set consisting of cannabis plants coming from police seizures. For forensic casework, this study shows that the discrimination between cannabis samples at an early stage of growth is possible with fairly high classification performance, whether discriminating between cannabis chemotypes or between drug-type cannabis varieties.
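Of the seven classifiers, NNC is the simplest to sketch. The toy "chemotypes" below are synthetic Gaussian clusters standing in for the eight-compound GC-MS profiles; nothing here reproduces the study's actual data:

```python
import numpy as np

def nnc_predict(X_train, y_train, X_test):
    """1-nearest-neighbour classification by Euclidean distance."""
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=2)
    return y_train[d.argmin(axis=1)]

rng = np.random.default_rng(2)
drug = rng.normal(0.0, 0.3, size=(30, 8))    # hypothetical drug-type profiles
fibre = rng.normal(1.0, 0.3, size=(30, 8))   # hypothetical fibre-type profiles
X = np.vstack([drug, fibre])
y = np.array([0] * 30 + [1] * 30)

pred = nnc_predict(X[::2], y[::2], X[1::2])  # odd-indexed samples held out
accuracy = (pred == y[1::2]).mean()
```

Because NNC makes no linearity assumption about class boundaries, it copes with the overlapping class distributions that defeat the linear classifiers in the study.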
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate these using simulation studies. Our comparative analysis involves using methods including generalized least squares, spatial filters, wavelet revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures and using statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection seen with the above methods. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; hence extrapolation of these findings to other situations should be performed cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
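The generalized least squares family the comparison favours amounts to weighting the regression by an error covariance matrix. A minimal numpy sketch, assuming a known exponential spatial covariance (in practice the covariance parameters would themselves be estimated):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
coords = rng.uniform(0, 10, size=(n, 2))             # sampling locations
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
beta_true = np.array([2.0, 1.5])

# exponential covariance for spatially autocorrelated errors
d = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
Sigma = np.exp(-d / 2.0)
L = np.linalg.cholesky(Sigma + 1e-8 * np.eye(n))
y = X @ beta_true + L @ rng.standard_normal(n)

# GLS estimate: beta = (X' Sigma^-1 X)^-1 X' Sigma^-1 y
Si_X = np.linalg.solve(Sigma, X)
Si_y = np.linalg.solve(Sigma, y)
beta_gls = np.linalg.solve(X.T @ Si_X, X.T @ Si_y)
```

Setting Sigma to the identity recovers ordinary least squares, which is why OLS coefficient estimates remain adequate here while their Type I error rates do not.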
Abstract:
Intensity-modulated radiotherapy (IMRT) treatment plan verification by comparison with measured data requires having access to the linear accelerator and is time consuming. In this paper, we propose a method for monitor unit (MU) calculation and plan comparison for step and shoot IMRT based on the Monte Carlo code EGSnrc/BEAMnrc. The beamlets of an IMRT treatment plan are individually simulated using Monte Carlo and converted into absorbed dose to water per MU. The dose of the whole treatment can be expressed through a linear matrix equation of the MU and dose per MU of every beamlet. Due to the positivity of the absorbed dose and MU values, this equation is solved for the MU values using a non-negative least-squares fit optimization algorithm (NNLS). The Monte Carlo plan is formed by multiplying the Monte Carlo absorbed dose to water per MU with the Monte Carlo/NNLS MU. Several treatment plan localizations calculated with a commercial treatment planning system (TPS) are compared with the proposed method for validation. The Monte Carlo/NNLS MUs are close to the ones calculated by the TPS and lead to a treatment dose distribution which is clinically equivalent to the one calculated by the TPS. This procedure can be used as an IMRT QA and further development could allow this technique to be used for other radiotherapy techniques like tomotherapy or volumetric modulated arc therapy.
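The MU recovery step reduces to a standard non-negative least-squares problem. A sketch with scipy's nnls on an invented dose-per-MU matrix (a real beamlet matrix comes from the Monte Carlo simulation, not random numbers):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(4)
n_voxels, n_beamlets = 200, 10

# hypothetical matrix: column j holds beamlet j's absorbed dose to water per MU
D = rng.uniform(0.0, 1.0, size=(n_voxels, n_beamlets))
mu_true = rng.uniform(0.0, 50.0, size=n_beamlets)   # non-negative MU values
dose = D @ mu_true                                   # total plan dose

mu_fit, residual_norm = nnls(D, dose)                # enforces mu >= 0
```

Multiplying D by mu_fit then reproduces the plan dose; the non-negativity constraint is what keeps the fitted monitor units physically meaningful.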
Abstract:
PURPOSE: To compare different techniques for positive contrast imaging of susceptibility markers with MRI for three-dimensional visualization. As several different techniques have been reported, the choice of the suitable method depends on its properties with regard to the amount of positive contrast and the desired background suppression, as well as other imaging constraints needed for a specific application. MATERIALS AND METHODS: Six different positive contrast techniques are investigated for their ability to image a single susceptibility marker in vitro at 3 Tesla. The white marker method (WM), susceptibility gradient mapping (SGM), inversion recovery with on-resonant water suppression (IRON), frequency selective excitation (FSX), fast low flip-angle positive contrast SSFP (FLAPS), and iterative decomposition of water and fat with echo asymmetry and least-squares estimation (IDEAL) were implemented and investigated. RESULTS: The different methods were compared with respect to the volume of positive contrast, the product of volume and signal intensity, imaging time, and the level of background suppression. Quantitative results are provided, and strengths and weaknesses of the different approaches are discussed. CONCLUSION: The appropriate choice of positive contrast imaging technique depends on the desired level of background suppression, acquisition speed, and robustness against artifacts, for which in vitro comparative data are now available.
Abstract:
Drainage-basin and channel-geometry multiple-regression equations are presented for estimating design-flood discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years at stream sites on rural, unregulated streams in Iowa. Design-flood discharge estimates determined by Pearson Type-III analyses using data collected through the 1990 water year are reported for the 188 streamflow-gaging stations used in either the drainage-basin or channel-geometry regression analyses. Ordinary least-squares multiple-regression techniques were used to identify selected drainage-basin and channel-geometry regions. Weighted least-squares multiple-regression techniques, which account for differences in the variance of flows at different gaging stations and for variable lengths in station records, were used to estimate the regression parameters. Statewide drainage-basin equations were developed from analyses of 164 streamflow-gaging stations. Drainage-basin characteristics were quantified using a geographic-information-system (GIS) procedure to process topographic maps and digital cartographic data. The significant characteristics identified for the drainage-basin equations included contributing drainage area, relative relief, drainage frequency, and 2-year, 24-hour precipitation intensity. The average standard errors of prediction for the drainage-basin equations ranged from 38.6% to 50.2%. The GIS procedure expanded the capability to quantitatively relate drainage-basin characteristics to the magnitude and frequency of floods for stream sites in Iowa and provides a flood-estimation method that is independent of hydrologic regionalization. Statewide and regional channel-geometry regression equations were developed from analyses of 157 streamflow-gaging stations. Channel-geometry characteristics were measured on site and on topographic maps. 
Statewide and regional channel-geometry regression equations that are dependent on whether a stream has been channelized were developed on the basis of bankfull and active-channel characteristics. The significant channel-geometry characteristics identified for the statewide and regional regression equations included bankfull width and bankfull depth for natural channels unaffected by channelization, and active-channel width for stabilized channels affected by channelization. The average standard errors of prediction ranged from 41.0% to 68.4% for the statewide channel-geometry equations and from 30.3% to 70.0% for the regional channel-geometry equations. Procedures provided for applying the drainage-basin and channel-geometry regression equations depend on whether the design-flood discharge estimate is for a site on an ungaged stream, an ungaged site on a gaged stream, or a gaged site. When both a drainage-basin and a channel-geometry regression-equation estimate are available for a stream site, a procedure is presented for determining a weighted average of the two flood estimates.
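Regression equations of this kind are typically power laws fitted in log space. A hedged single-variable sketch with invented gaging-station data (the report's actual equations use several basin characteristics and weighted least squares rather than the OLS shown here):

```python
import numpy as np

rng = np.random.default_rng(7)
# hypothetical stations: drainage area (mi^2) and 100-year discharge (ft^3/s)
area = rng.uniform(10.0, 1000.0, size=50)
q100 = 120.0 * area ** 0.6 * np.exp(0.15 * rng.standard_normal(50))

# log Q = log a + b log A, fitted by ordinary least squares
A = np.column_stack([np.ones(area.size), np.log(area)])
coef, *_ = np.linalg.lstsq(A, np.log(q100), rcond=None)
a_hat, b_hat = np.exp(coef[0]), coef[1]
```

Weighted least squares, as used in the report, would additionally scale each station's residual by its record length and flow variance before solving the same log-linear system.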
Abstract:
A statewide study was conducted to develop regression equations for estimating flood-frequency discharges for ungaged stream sites in Iowa. Thirty-eight selected basin characteristics were quantified and flood-frequency analyses were computed for 291 streamflow-gaging stations in Iowa and adjacent States. A generalized-skew-coefficient analysis was conducted to determine whether generalized skew coefficients could be improved for Iowa. Station skew coefficients were computed for 239 gaging stations in Iowa and adjacent States, and an isoline map of generalized-skew-coefficient values was developed for Iowa using variogram modeling and kriging methods. The skew map provided the lowest mean square error for the generalized-skew-coefficient analysis and was used to revise generalized skew coefficients for flood-frequency analyses for gaging stations in Iowa. Regional regression analysis, using generalized least-squares regression and data from 241 gaging stations, was used to develop equations for three hydrologic regions defined for the State. The regression equations can be used to estimate flood discharges that have recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years for ungaged stream sites in Iowa. One-variable equations were developed for each of the three regions and multi-variable equations were developed for two of the regions. Two sets of equations are presented for two of the regions because one-variable equations are considered easy for users to apply and the predictive accuracies of multi-variable equations are greater. The standard error of prediction for the one-variable equations ranges from about 34 to 45 percent and for the multi-variable equations ranges from about 31 to 42 percent. A region-of-influence regression method was also investigated for estimating flood-frequency discharges for ungaged stream sites in Iowa.
A comparison of regional and region-of-influence regression methods, based on ease of application and root mean square errors, determined the regional regression method to be the better estimation method for Iowa. Techniques for estimating flood-frequency discharges for streams in Iowa are presented for determining (1) regional regression estimates for ungaged sites on ungaged streams; (2) weighted estimates for gaged sites; and (3) weighted estimates for ungaged sites on gaged streams. The technique for determining regional regression estimates for ungaged sites on ungaged streams requires determining which of four possible examples applies to the location of the stream site and its basin. Illustrations for determining which example applies to an ungaged stream site and for applying both the one-variable and multi-variable regression equations are provided for the estimation techniques.
Abstract:
BACKGROUND AND PURPOSE: Knowledge of cerebral blood flow (CBF) alterations in cases of acute stroke could be valuable in the early management of these cases. Among imaging techniques affording evaluation of cerebral perfusion, perfusion CT studies involve sequential acquisition of cerebral CT sections obtained in an axial mode during the IV administration of iodinated contrast material. They are thus very easy to perform in emergency settings. Perfusion CT values of CBF have proved to be accurate in animals, and perfusion CT affords plausible values in humans. The purpose of this study was to validate perfusion CT studies of CBF by comparison with the results provided by stable xenon CT, which have been reported to be accurate, and to evaluate acquisition and processing modalities of CT data, notably the possible deconvolution methods and the selection of the reference artery. METHODS: Twelve stable xenon CT and perfusion CT cerebral examinations were performed within an interval of a few minutes in patients with various cerebrovascular diseases. CBF maps were obtained from perfusion CT data by deconvolution using singular value decomposition and least mean square methods. The CBF values were compared with the stable xenon CT results in multiple regions of interest through linear regression analysis and bilateral t tests for matched variables. RESULTS: Linear regression analysis showed good correlation between perfusion CT and stable xenon CT CBF values (singular value decomposition method: R(2) = 0.79, slope = 0.87; least mean square method: R(2) = 0.67, slope = 0.83). Bilateral t tests for matched variables did not identify a significant difference between the two imaging methods (P >.1). Both deconvolution methods were equivalent (P >.1). The choice of the reference artery is a major concern and has a strong influence on the final perfusion CT CBF map.
CONCLUSION: Perfusion CT studies of CBF achieved with adequate acquisition parameters and processing lead to accurate and reliable results.
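The singular value decomposition deconvolution mentioned in the methods can be sketched as follows; the arterial input function, residue function, and truncation threshold are invented for the sketch, and the threshold would be raised substantially for noisy clinical data:

```python
import numpy as np
from scipy.linalg import toeplitz

dt = 1.0                                        # sampling interval (s)
t = np.arange(40) * dt
aif = ((t + dt) / 4.0) ** 2 * np.exp(-(t + dt) / 2.0)  # synthetic arterial input
cbf, mtt = 0.6, 4.0
k_true = cbf * np.exp(-t / mtt)                 # flow-scaled residue function
tissue = dt * np.convolve(aif, k_true)[: t.size]

# discrete convolution as a lower-triangular Toeplitz system: tissue = A @ k
A = dt * toeplitz(aif, np.r_[aif[0], np.zeros(t.size - 1)])

# truncated-SVD pseudoinverse; noise-free here, so the cutoff can be tiny
U, s, Vt = np.linalg.svd(A)
s_inv = np.where(s > 1e-8 * s[0], 1.0 / s, 0.0)
k_est = Vt.T @ (s_inv * (U.T @ tissue))
cbf_est = k_est.max()                           # CBF read off the residue peak
```

The study's observation about the reference artery maps directly onto this sketch: the AIF defines the system matrix A, so a poorly chosen artery distorts every voxel's recovered residue function.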
Abstract:
There has been a lack of quick, simple and reliable methods for the determination of nanoparticle size. An investigation of the size of hydrophobic (CdSe) and hydrophilic (CdSe/ZnS) quantum dots was performed using the maximum position of the corresponding fluorescence spectrum. It has been found that fluorescence spectroscopy is a simple and reliable methodology for estimating the size of both quantum dot types. For a given solution, the homogeneity of the size of the quantum dots is correlated with the relationship between the fluorescence maximum position (FMP) and the quantum dot size. This methodology can be extended to other fluorescent nanoparticles. The employment of evolving factor analysis and multivariate curve resolution-alternating least squares for decomposition of the series of quantum dot fluorescence spectra recorded by a specific measuring procedure reveals the number of quantum dot fractions having different diameters. The size of the quantum dots in a particular group is defined by the FMP of the corresponding component in the decomposed spectrum. These results show that a combination of fluorescence and an appropriate statistical method for decomposition of the emission spectra of nanoparticles may be a quick and trusted method for screening the inhomogeneity of their solutions.
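The multivariate curve resolution-alternating least squares step can be sketched with plain numpy on synthetic spectra; the two Gaussian emission bands and concentration profiles below are invented stand-ins for quantum dot fractions:

```python
import numpy as np

rng = np.random.default_rng(5)
wl = np.linspace(400.0, 700.0, 150)                 # emission wavelengths (nm)

def band(mu, sig):
    return np.exp(-0.5 * ((wl - mu) / sig) ** 2)

S_true = np.vstack([band(500, 20), band(580, 25)])  # two QD-fraction spectra
C_true = rng.uniform(0.0, 1.0, size=(25, 2))        # fraction concentrations
D = C_true @ S_true                                  # measured spectra series

# MCR-ALS: alternate least-squares updates, clipping to enforce non-negativity
C = rng.uniform(0.0, 1.0, size=(25, 2))             # random initial guess
for _ in range(300):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0.0, None)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0.0, None)

recon_err = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
```

The FMP of each resolved component (the argmax of each row of S) is what maps back to a quantum dot diameter in the procedure the abstract describes.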
Abstract:
The direct torque control (DTC) has become an accepted vector control method beside the current vector control. The DTC was first applied to asynchronous machines, and has later been applied also to synchronous machines. This thesis analyses the application of the DTC to permanent magnet synchronous machines (PMSM). In order to take full advantage of the DTC, the PMSM has to be properly dimensioned. Therefore the effect of the motor parameters is analysed taking the control principle into account. Based on the analysis, a parameter selection procedure is presented. The analysis and the selection procedure utilize nonlinear optimization methods. The key element of a direct torque controlled drive is the estimation of the stator flux linkage. Different estimation methods - a combination of current and voltage models and improved integration methods - are analysed. The effect of an incorrectly measured rotor angle in the current model is analysed, and an error detection and compensation method is presented. The dynamic performance of a previously presented sensorless flux estimation method is improved by enhancing the dynamic performance of the low-pass filter used and by adapting the correction of the flux linkage to torque changes. A method for the estimation of the initial angle of the rotor is presented. The method is based on measuring the inductance of the machine in several directions and fitting the measurements to a model. The model is nonlinear with respect to the rotor angle, and therefore a nonlinear least squares optimization method is needed in the procedure. A commonly used current vector control scheme is minimum current control. In the DTC the stator flux linkage reference is usually kept constant. Achieving the minimum current requires control of the reference. An on-line method to minimize the current by controlling the stator flux linkage reference is presented.
Also, the control of the reference above the base speed is considered. A new flux linkage estimator is introduced for the estimation of the parameters of the machine model. In order to utilize the flux linkage estimates in off-line parameter estimation, the integration methods are improved. An adaptive correction is used in the same way as in the estimation of the controller's stator flux linkage. The presented parameter estimation methods are then used in a self-commissioning scheme. The proposed methods are tested with a laboratory drive, which consists of commercial inverter hardware with modified software and several prototype PMSMs.
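The initial-angle step (measuring inductance in several directions and fitting a nonlinear model) can be sketched with scipy; the saliency model and parameter values here are illustrative assumptions, not the thesis's identified machine:

```python
import numpy as np
from scipy.optimize import least_squares

theta_r = 0.7                              # "true" rotor angle to recover (rad)
L0, dL = 5e-3, 1e-3                        # mean and saliency inductance (H)

# inductance measured in several excitation directions
angles = np.linspace(0.0, np.pi, 12, endpoint=False)
L_meas = L0 + dL * np.cos(2.0 * (angles - theta_r))

def residuals(p):
    l0, dl, th = p
    return l0 + dl * np.cos(2.0 * (angles - th)) - L_meas

# nonlinear least squares, since the model is nonlinear in the rotor angle
fit = least_squares(residuals, x0=[4e-3, 0.5e-3, 0.0])
```

Note that the cos(2·) saliency makes the model π-periodic in the angle, so a separate polarity check is still needed to resolve the magnet's orientation.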
Abstract:
The determination of zirconium-hafnium mixtures is one of the most critical problems in analytical chemistry, on account of the close similarity of their chemical properties. The spectrophotometric determination proposed by Yagodin et al. has found few practical applications due to the significant spectral interference in the 200-220 nm region. In this work we propose the use of a multivariate calibration method called partial least squares (PLS) for the colorimetric determination of these mixtures. Using PLS and 16 calibration mixtures, we obtained a model which permits the determination of zirconium and hafnium with an accuracy of about 1-2% and 10-20%, respectively. Using conventional univariate calibration, the inaccuracy of the determination is about 10-25% for zirconium and above 57% for hafnium.
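A compact NIPALS-style PLS1 sketch on synthetic overlapping spectra shows the idea; the two Gaussian bands stand in for the interfering Zr and Hf absorbances, and everything here (band positions, mixture design, noise level) is invented:

```python
import numpy as np

def pls1(X, y, n_comp):
    """NIPALS PLS1: regression vector for a single centred response."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)                   # weight vector
        t = Xc @ w                               # score vector
        tt = t @ t
        p = Xc.T @ t / tt                        # loading vector
        W.append(w); P.append(p); q.append(yc @ t / tt)
        Xc = Xc - np.outer(t, p)                 # deflate X and y
        yc = yc - q[-1] * t
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)       # regression vector

rng = np.random.default_rng(6)
wav = np.linspace(0.0, 1.0, 60)                  # normalized wavelength axis
s_zr = np.exp(-0.5 * ((wav - 0.45) / 0.15) ** 2) # strongly overlapping
s_hf = np.exp(-0.5 * ((wav - 0.55) / 0.15) ** 2) # pure-component spectra
c = rng.uniform(0.0, 1.0, size=(16, 2))          # 16 calibration mixtures
X = c @ np.vstack([s_zr, s_hf]) + 1e-4 * rng.standard_normal((16, 60))

B = pls1(X, c[:, 0], n_comp=2)
zr_pred = (X - X.mean(axis=0)) @ B + c[:, 0].mean()
```

A second PLS1 model built the same way (or a PLS2 model) would provide the hafnium predictions; univariate calibration at a single wavelength cannot separate such overlapped bands, which is the abstract's point.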
QSPR study of partition coefficients: quantum-mechanical descriptors and multivariate analysis
Abstract:
Quantum chemistry and multivariate analysis were used to estimate the partition coefficients between n-octanol and water for a series of 188 compounds, with q2 values of up to 0.86 in the cross-validation test. The quantum-mechanical descriptors were obtained with ab initio calculations, using the solvation effects of the Polarizable Continuum Method. Two different Hartree-Fock basis sets were used, together with two different ways of simulating solvent cavity formation. The results for each of the cases were analysed, and each proposed methodology is indicated for a particular case.