978 results for VARIABLE SELECTION
Abstract:
A model based on chemical structure was developed for the accurate prediction of the octanol/water partition coefficient (K_OW) of polychlorinated biphenyls (PCBs), molecules of environmental interest. Partial least squares (PLS) was used to build the regression model, with topological indices as molecular descriptors. Variable selection was performed by hierarchical cluster analysis (HCA). The modeling used experimental K_OW values measured for 30 PCBs by thin-layer chromatography retention time (TLC-RT). The resulting model (Q² = 0.990 and r² = 0.994) was then used to estimate log K_OW for the 179 PCB congeners whose K_OW has not yet been measured by the TLC-RT method. The results show that topological indices can be very useful for predicting K_OW.
Abstract:
Calibration transfer has received considerable attention in the recent literature, and several standardization methods have been proposed for transferring calibration models between instruments. The goal of this paper is to present a general review of calibration transfer techniques. Basic concepts are reviewed, along with the main advantages and drawbacks of each technique. A case study based on a set of 80 NIR spectra of maize samples recorded on two different instruments illustrates the main calibration transfer techniques (direct standardization, piecewise direct standardization, orthogonal signal correction and robust variable selection).
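Of the techniques named, direct standardization (DS) is the simplest to show in a few lines: spectra measured on a secondary instrument are mapped into the response space of the primary one via a least-squares transformation matrix estimated from a small set of matched transfer samples. The spectra below are simulated, not the paper's maize data.

```python
# Minimal direct-standardization (DS) sketch on simulated spectra.
# A real transfer set would contain the same physical samples
# measured on both instruments.
import numpy as np

rng = np.random.default_rng(1)
n_std, n_wl = 15, 50                      # transfer samples x wavelengths
S1 = rng.normal(size=(n_std, n_wl))       # primary-instrument spectra
drift = 0.9 * np.eye(n_wl)                # simulated inter-instrument difference
S2 = S1 @ drift + 0.01 * rng.normal(size=(n_std, n_wl))  # secondary spectra

F = np.linalg.pinv(S2) @ S1               # DS transformation matrix (least squares)
S2_corrected = S2 @ F                     # secondary spectra mapped to primary space
err = np.linalg.norm(S2_corrected - S1) / np.linalg.norm(S1)
print(f"relative residual after transfer: {err:.4f}")
```

Piecewise direct standardization applies the same idea within a sliding wavelength window, which makes the transform banded and better conditioned.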
Abstract:
QSAR modeling is a novel computer program developed to generate and validate QSAR or QSPR (quantitative structure-activity or structure-property relationship) models. With QSAR modeling, users can build partial least squares (PLS) regression models, perform variable selection with the ordered predictors selection (OPS) algorithm, and validate models using y-randomization and leave-N-out cross-validation. An additional new feature is outlier detection, carried out by simultaneous comparison of sample leverage with the respective Studentized residuals. The program was developed in Java version 6 and runs on any operating system that supports Java Runtime Environment version 6. The use of the program is illustrated. It is available for download at lqta.iqm.unicamp.br.
Abstract:
This study developed and validated a method for moisture determination in artisanal Minas cheese using near-infrared spectroscopy and partial least squares. The robustness of the model was ensured by broad sample diversity, realistic routine-analysis conditions, variable selection, outlier detection and analytical validation. The model covered 28.5-55.5% w/w, with a root mean square error of prediction of 1.6%. After its adoption, the stability of the method was confirmed over a period of two years through a control chart. Beyond this specific method, the study sought to provide an example of a multivariate metrological methodology with potential for application in several areas, including new aspects such as a more stringent evaluation of the linearity of multivariate methods.
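The long-term stability check via a control chart can be illustrated with a Shewhart-style chart: control-sample predictions are tracked against limits set at the baseline mean ± 3 standard deviations. The numbers below are invented for illustration, not the cheese-moisture data of the study.

```python
# Hedged sketch of a Shewhart-style control chart on control-sample
# moisture predictions (% w/w). All values are made up.
import numpy as np

rng = np.random.default_rng(7)
baseline = rng.normal(42.0, 1.6, size=25)    # initial in-control predictions
center, sd = baseline.mean(), baseline.std(ddof=1)
ucl, lcl = center + 3 * sd, center - 3 * sd  # upper/lower control limits

new_points = [41.2, 43.5, 42.8, 52.0]        # 52.0 simulates method drift
out_of_control = [x for x in new_points if not lcl <= x <= ucl]
print("out-of-control points:", out_of_control)
```

A point outside the limits signals that the multivariate method needs re-validation or model updating.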
Abstract:
The aim of this work was to provide a faster, simpler and less expensive way to analyze the sulfur content of diesel samples than the standard methods currently used. Diesel fuel samples with sulfur concentrations ranging from 400 to 2500 mg kg-1 were analyzed by two methodologies: X-ray fluorescence, according to ASTM D4294, and Fourier transform infrared spectrometry (FTIR). The FTIR spectral data were used to build multivariate calibration models by partial least squares (PLS). Four models were built in three different ways: 1) a model using the full spectrum (665 to 4000 cm-1); 2) two models using specific spectral regions; and 3) a model with variables chosen by the classical stepwise variable selection method. The stepwise model and the model built with the spectral regions between 665 and 856 cm-1 and between 1145 and 2717 cm-1 gave the best results for the determination of sulfur content.
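The classical stepwise selection used in the third approach can be sketched in its forward variant: at each step, add the variable that most improves cross-validated error, and stop when no candidate improves it. Data here are synthetic stand-ins for FTIR wavenumbers, and a plain linear model replaces the paper's PLS for brevity.

```python
# Hedged forward-stepwise variable selection sketch over synthetic
# "spectral" variables, scored by 5-fold cross-validated R².
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(60, 20))            # 60 samples x 20 mock wavenumbers
y = 3 * X[:, 4] - 2 * X[:, 11] + rng.normal(scale=0.3, size=60)

selected, remaining = [], list(range(X.shape[1]))
best_score = -np.inf
while remaining:
    scores = {j: cross_val_score(LinearRegression(),
                                 X[:, selected + [j]], y, cv=5).mean()
              for j in remaining}
    j_best = max(scores, key=scores.get)
    if scores[j_best] <= best_score + 1e-4:  # stop when no real improvement
        break
    best_score = scores[j_best]
    selected.append(j_best)
    remaining.remove(j_best)
print("selected variables:", sorted(selected))
```

On this toy data the procedure should recover the two informative variables (4 and 11) and then stop.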
Abstract:
Forest inventories are used to estimate forest characteristics and conditions for many different applications: operational tree logging for the forest industry, forest health assessment, carbon balance estimation, and land-cover and land-use analysis to avoid forest degradation, among others. Modern inventory methods rely heavily on remote sensing data combined with field sample measurements, which are used to produce estimates covering the whole area of interest. Remote sensing data from satellites, aerial photographs or airborne laser scanning are used, depending on the scale of the inventory. To be applicable in operational use, forest inventory methods need to be easily adjusted to the local conditions of the study area at hand; all data handling and parameter tuning should be as objective and automated as possible, and the methods need to be robust when applied to different forest types. Since there are generally no comprehensive direct physical models connecting remote sensing data from different sources to the forest parameters being estimated, the mathematical estimation models are of a "black-box" type, connecting the independent auxiliary data to the dependent response data with arbitrary linear or nonlinear models. To avoid redundant complexity and over-fitting of a model based on up to hundreds of possibly collinear variables extracted from the auxiliary data, variable selection is needed. Connecting the auxiliary data to the inventory parameters requires field work, which is expensive in larger study areas with dense forests and should therefore be minimized. To obtain cost-efficient inventories, field work could partly be replaced with information from previously measured sites stored in databases. The work in this thesis is devoted to the development of automated, adaptive computation methods for aerial forest inventory. The steps defining the mathematical model parameters are automated, and cost-efficiency is improved by setting up a procedure that uses databases in the estimation of the characteristics of new areas.
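The abstract does not name a specific selection method for pruning the hundreds of collinear remote-sensing variables; as one generic, automated illustration of the idea (not the thesis's own procedure), an L1-penalized lasso fit with cross-validated regularization discards redundant predictors on its own.

```python
# Hedged illustration only: lasso with cross-validated penalty as one
# automated way to prune many collinear auxiliary variables.
# The design matrix below is synthetic, built to be highly collinear.
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(6)
n, p = 100, 40
base = rng.normal(size=(n, 5))                       # 5 underlying factors
extras = base @ rng.normal(size=(5, p - 5)) \
         + 0.05 * rng.normal(size=(n, p - 5))        # collinear derived features
X = np.hstack([base, extras])
y = 2 * X[:, 0] - X[:, 2] + rng.normal(scale=0.3, size=n)  # mock forest response

lasso = LassoCV(cv=5).fit(X, y)
kept = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print(f"kept {kept.size} of {p} variables")
```

Any selector with a data-driven stopping rule would serve the same role of avoiding over-fitting without manual tuning.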
Abstract:
The simulations were implemented in Java.
Abstract:
This project aims to identify which concepts of health, disease, epidemiology and risk are applicable to companies in the oil and natural gas extraction sector in Colombia. Given the low predictive power of traditional financial analyses and their inadequacy for long-term investment and decision-making, as well as their failure to consider variables such as risk and future expectations, there is a need to adopt different perspectives and integrative models. This is particularly pertinent to the oil and natural gas extraction sector because of its growing foreign investment, which reached US$2,862 million in 2010, more than ten times its value in 2003. Multi-dimensional models could thus be developed, based on the concept of financial health together with epidemiological and statistical concepts. The term "health" and its adoption in the business sector is useful and conceptually coherent, revealing the presence of different interacting and interconnected subsystems or factors. It should also be noted that a multidimensional (multi-stage) model must take risk into account, and epidemiological analysis has proved useful for determining risk and integrating it into the system alongside related concepts such as the hazard ratio and relative risk. This will be examined through a theoretical-conceptual study that complements a previous study, contributing to the corporate finance project of the Management research line.
Abstract:
The aim of this study was to evaluate the potential of near-infrared reflectance spectroscopy (NIRS) as a rapid and non-destructive method to determine the soluble solids content (SSC), pH and titratable acidity of intact plums. Plum samples with total solids content ranging from 5.7 to 15%, pH from 2.72 to 3.84 and titratable acidity from 0.88 to 3.6% were collected from supermarkets in Natal, Brazil, and NIR spectra were acquired in the 714-2500 nm range. Several multivariate calibration techniques were compared with respect to different data pre-processing methods and variable selection algorithms, such as interval partial least squares (iPLS), genetic algorithm (GA), successive projections algorithm (SPA) and ordered predictors selection (OPS). Validation models for SSC, pH and titratable acidity had coefficients of correlation (R) of 0.95, 0.90 and 0.80, and root mean square errors of prediction (RMSEP) of 0.45 °Brix, 0.07 and 0.40%, respectively. From these results, it can be concluded that NIR spectroscopy can be used as a non-destructive alternative for measuring the SSC, pH and titratable acidity of plums.
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Assessment of resting energy expenditure in patients with sepsis with or without acute kidney injury
Abstract:
Graduate Program in Pathophysiology in Internal Medicine - FMB
Abstract:
Dimensionality reduction is employed in visual data analysis as a way to obtain reduced spaces for high-dimensional data or to map data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach that handles both dimensionality reduction and visualization of high-dimensional data while taking the user's input into account. It employs partial least squares (PLS), a statistical tool for retrieving latent spaces, focusing on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be exhibited using various existing visualization techniques. The training data are important for bringing the user's knowledge into the loop; however, this work also devises a strategy for calculating PLS reduced spaces when no training data are available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and is capable of working with small and unbalanced training sets.