946 results for Minimum Mean Square Error of Intensity Distribution
Abstract:
Forage species adapted to semi-arid conditions are an alternative to reduce the negative impact of seasonal forage supply on the ruminant production chain of the Brazilian Northeast and to lower the cost of concentrate feeding. Among these species, mesquite pods (Prosopis juliflora (Sw.) DC.) and forage cactus (Opuntia and Nopalea) stand out because they tolerate water deficit and produce during periods when forage supply is low, besides having good nutritional value and being well accepted by animals. However, because their composition varies, their use in animal diets requires detailed knowledge of that composition for the formulation of balanced diets; owing to the cost and time of laboratory analysis, farmers rarely analyze the chemical composition of feeds. Near-infrared reflectance spectroscopy (NIRS) is therefore an important alternative to traditional methods. The objective of this study was to develop and validate NIRS-based models to predict the chemical composition of mesquite pods and forage cactus, scanned on two spectrometer models and with different sample processing. Mesquite pod samples were collected in the states of Ceará, Bahia, Paraíba, and Pernambuco, and forage cactus samples in Ceará, Paraíba, and Pernambuco, either fresh (as received) or pre-dried and ground. Spectra were obtained on two NIR instruments, a Perten DA 7250 and a FOSS 5000. Samples were first scanned fresh on the Perten instrument, and a subset was selected for the calibration set with The Unscrambler 10.2 software, discarding redundant samples. The selected samples were then dried, ground, and scanned again on both the Perten and FOSS instruments. Reference values were obtained by methods traditionally used in animal nutrition laboratories for dry matter (DM), mineral matter (MM), organic matter (OM), crude protein (CP), ether extract (EE), neutral detergent fiber (NDF), acid detergent fiber (ADF), hemicellulose (HEM), and in vitro dry matter digestibility (IVDMD). Model performance was evaluated by the root mean square errors of calibration (RMSEC) and cross-validation (RMSECV), the coefficient of determination (R²), and the ratio of performance to deviation (RPD). Exploratory data analysis, using spectral pretreatments and principal component analysis (PCA), showed that the data sets were similar to each other, supporting the development of a single model per feed (mesquite pods and cactus) from all selected samples. The variation of the reference results for each parameter agreed with values reported in the literature. Models developed with sample preprocessing (pre-drying and grinding) were more robust than those built with fresh samples. The Perten instrument performed similarly to the FOSS instrument, although the latter covers a wider spectral range with smaller reading intervals. NIRS, combined with multivariate partial least squares (PLS) regression, proved reliable for predicting the chemical composition of mesquite pods and forage cactus.
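The abstract describes calibrating PLS models on NIR spectra and judging them by RMSECV, R² and RPD. The study used The Unscrambler; the sketch below is only an illustrative scikit-learn version of that evaluation, with hypothetical arrays X (spectra) and y (reference values such as CP).

```python
# Illustrative sketch: PLS calibration of NIR spectra against a reference assay,
# evaluated by RMSECV, R2 and RPD (scikit-learn; variable names are hypothetical).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def evaluate_pls(X, y, n_components=10, cv=10):
    """X: (n_samples, n_wavelengths) spectra; y: reference values (e.g., CP %)."""
    pls = PLSRegression(n_components=n_components)
    y_cv = cross_val_predict(pls, X, y, cv=cv).ravel()   # cross-validated predictions
    rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))           # root mean square error of CV
    r2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
    rpd = np.std(y, ddof=1) / rmsecv                      # ratio of performance to deviation
    return rmsecv, r2, rpd
```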
Abstract:
Discrete data arise in various research fields, typically when the observations are count data. I propose a robust and efficient parametric procedure for the estimation of discrete distributions. The estimation is done in two phases. First, a very robust, but possibly inefficient, estimate of the model parameters is computed and used to identify outliers. Then the outliers are either removed from the sample or given low weights, and a weighted maximum likelihood estimate (WML) is computed. The weights are determined via an adaptive process such that, if the data follow the model, asymptotically no observation is downweighted. I prove that the final estimator inherits the breakdown point of the initial one, and that its influence function at the model is the same as that of the maximum likelihood estimator, which strongly suggests that it is asymptotically fully efficient. The initial estimator is a minimum disparity estimator (MDE). MDEs can be shown to have full asymptotic efficiency, and some MDEs have very high breakdown points and very low bias under contamination. Several initial estimators are considered, and the performance of the WMLs based on each of them is studied. In a great variety of situations the WML substantially improves on the initial estimator, both in terms of finite-sample mean square error and in terms of bias under contamination. Moreover, the performance of the WML is rather stable under a change of the MDE, even if the MDEs have very different behaviors. Two applications of the WML to real data are considered; in both, the need for a robust estimator is clear: the maximum likelihood estimator is badly corrupted by the presence of a few outliers. The procedure is particularly natural in the discrete distribution setting, but could be extended to the continuous case, for which a possible procedure is sketched.
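To make the two-phase idea concrete, here is a heavily simplified sketch for a Poisson model. It is not the author's exact procedure: the initial fit is a grid-search minimum Hellinger distance estimate restricted to observed cells, outliers are flagged by a fixed Pearson-residual cutoff, and the weights are simply 0/1; the model, cutoff and grid are illustrative assumptions.

```python
# Heavily simplified sketch of the two-phase idea for a Poisson model (not the
# author's exact procedure): a minimum-Hellinger-distance initial fit, outlier
# flagging via Pearson residuals, then a 0/1-weighted maximum likelihood estimate.
import numpy as np
from scipy.stats import poisson

def two_phase_poisson(x, grid=None, cutoff=3.0):
    x = np.asarray(x)
    values, counts = np.unique(x, return_counts=True)
    d = counts / x.size                                   # empirical pmf on observed cells
    if grid is None:
        grid = np.linspace(max(0.05, 0.2 * x.mean()), 2.0 * x.mean(), 400)
    # Phase 1: robust initial estimate (minimum Hellinger distance over a grid).
    hellinger = [np.sum((np.sqrt(d) - np.sqrt(poisson.pmf(values, lam))) ** 2)
                 for lam in grid]
    lam0 = grid[int(np.argmin(hellinger))]
    # Flag cells whose standardized Pearson residual under the initial fit is large.
    f0 = poisson.pmf(values, lam0)
    pearson = (d - f0) / np.sqrt(f0 / x.size)
    bad_values = set(values[np.abs(pearson) > cutoff])
    w = np.array([0.0 if xi in bad_values else 1.0 for xi in x])
    # Phase 2: weighted MLE; for the Poisson this is the weighted sample mean.
    return np.sum(w * x) / np.sum(w), lam0, w
```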
Abstract:
In this paper we describe the results of a simulation study performed to elucidate the robustness of the Lindstrom and Bates (1990) approximation method under non-normality of the residuals in different situations. Concerning the fixed effects, the observed coverage probabilities and the true bias and mean square error values show that some aspects of this inferential approach are not completely reliable. When the true distribution of the residuals is asymmetrical, the true coverage is markedly lower than the nominal one. The best results are obtained for the skew-normal distribution, and not for the normal distribution. On the other hand, the results are partially reversed concerning the random effects. Soybean genotype data are used to illustrate the methods and to motivate the simulation scenarios.
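The criteria cited (coverage probability, bias, mean square error) are straightforward to compute from Monte Carlo replicates. The sketch below is only an illustration of those summaries; the arrays of estimates and confidence-interval limits are hypothetical, not the paper's simulation code.

```python
# Illustrative sketch: summarizing Monte Carlo replicates of a fixed-effect estimator
# by empirical coverage, bias and mean square error (arrays are hypothetical).
import numpy as np

def summarize_replicates(estimates, lower, upper, true_value):
    """estimates, lower, upper: point estimates and CI limits, one entry per replicate."""
    estimates = np.asarray(estimates)
    coverage = np.mean((np.asarray(lower) <= true_value) & (true_value <= np.asarray(upper)))
    bias = np.mean(estimates) - true_value
    mse = np.mean((estimates - true_value) ** 2)
    return coverage, bias, mse
```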
Abstract:
The Gram-Schmidt (GS) orthogonalisation procedure has been used to improve the convergence speed of least mean square (LMS) adaptive code-division multiple-access (CDMA) detectors. However, this algorithm updates two sets of parameters, namely the GS transform coefficients and the tap weights, simultaneously. Because of the additional adaptation noise introduced by the former, it is impossible to achieve the same performance as the ideal orthogonalised LMS filter, unlike the result implied in an earlier paper. The authors provide a lower bound on the minimum achievable mean squared error (MSE) as a function of the forgetting factor λ used in finding the GS transform coefficients, and propose a variable-λ algorithm to balance the conflicting requirements of good tracking and low misadjustment.
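For readers unfamiliar with the underlying filter, the following is a generic LMS tap-weight update with its squared-error learning curve. It is not the Gram-Schmidt orthogonalised detector or the variable-λ algorithm of the paper; the step size and array names are illustrative assumptions.

```python
# Generic LMS adaptive filter sketch (not the GS-orthogonalised CDMA detector itself):
# tap-weight update and the resulting squared-error learning curve.
import numpy as np

def lms(X, d, mu=0.01):
    """X: (n_samples, n_taps) input vectors; d: desired response; mu: step size."""
    n, taps = X.shape
    w = np.zeros(taps)
    mse = np.empty(n)
    for k in range(n):
        e = d[k] - X[k] @ w          # a priori error
        w = w + mu * e * X[k]        # LMS tap-weight update
        mse[k] = e ** 2              # squared error (average over runs for a learning curve)
    return w, mse
```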
Abstract:
Intensity modulated radiation therapy (IMRT) is a technique that delivers a highly conformal dose distribution to a target volume while attempting to maximally spare the surrounding normal tissues. IMRT is a common treatment modality for head and neck (H&N) cancers, and the presence of many critical structures in this region requires accurate treatment delivery. The Radiological Physics Center (RPC) acts as both a remote and on-site quality assurance agency that credentials institutions participating in clinical trials. To date, about 30% of all IMRT participants have failed the RPC's remote audit using the IMRT H&N phantom. The purpose of this project was to evaluate possible causes of H&N IMRT delivery errors observed by the RPC, specifically IMRT treatment plan complexity and the use of improper dosimetry data from machines that were thought to be matched but in reality were not. Eight H&N IMRT plans with a range of complexity, defined by total MU (1460-3466), number of segments (54-225), and modulation complexity score (MCS) (0.181-0.609), were created in Pinnacle v.8m. These plans were delivered to the RPC's H&N phantom on a single Varian Clinac. One of the IMRT plans (1851 MU, 88 segments, MCS = 0.469) was equivalent to the median H&N plan from 130 previous RPC H&N phantom irradiations. This average IMRT plan was also delivered on four matched Varian Clinac machines, and its dose distribution was calculated using a different 6 MV beam model. Radiochromic film and TLD within the phantom were used to analyze the dose profiles and absolute doses, respectively. The measured and calculated dose distributions were compared to evaluate the dosimetric accuracy. All deliveries met the RPC acceptance criteria of ±7% absolute dose difference and 4 mm distance-to-agreement (DTA). Additionally, gamma index analysis was performed for all deliveries using ±7%/4 mm and ±5%/3 mm criteria. Increasing the treatment plan complexity by varying the MU, the number of segments, or the MCS showed no clear trend toward an increase in dosimetric error as determined by the absolute dose difference, DTA, or gamma index. Varying the delivery machine as well as the beam model (a Clinac 6EX 6 MV beam model vs. a Clinac 21EX 6 MV model) also showed no clear trend toward increased dosimetric error using the same criteria.
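The ±7%/4 mm and ±5%/3 mm criteria refer to the gamma index, which combines a dose-difference and a distance-to-agreement test point by point. Below is a simplified one-dimensional sketch of that calculation for a measured and a calculated dose profile; it is illustrative only, not the RPC's analysis software, and assumes global normalization to the maximum calculated dose.

```python
# Simplified 1-D gamma-index sketch under global dose-difference / distance-to-agreement
# criteria (illustrative only).
import numpy as np

def gamma_1d(pos_m, dose_m, pos_c, dose_c, dd_percent=7.0, dta_mm=4.0):
    """Gamma value at every measured point; positions in mm, doses in the same units."""
    dd = dd_percent / 100.0 * dose_c.max()                # global dose-difference criterion
    gammas = np.empty(pos_m.size)
    for i, (rm, dm) in enumerate(zip(pos_m, dose_m)):
        dist2 = ((pos_c - rm) / dta_mm) ** 2
        dose2 = ((dose_c - dm) / dd) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))        # minimum over calculated points
    return gammas                                         # pass rate = np.mean(gammas <= 1)
```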
Abstract:
Technology changes rapidly over the years, continuously providing more computing options and making economic, interpersonal, and other transactions easier. However, the introduction of new technology pushes old Information and Communication Technology (ICT) products out of use. E-waste is defined here as the quantity of ICT products no longer in use, modeled as a bivariate function of the quantities sold and the probability that a given quantity of computers will be regarded as obsolete. In this paper, an e-waste generation model is presented and applied to the following regions: Western and Eastern Europe, Asia/Pacific, Japan/Australia/New Zealand, and North and South America. Furthermore, cumulative computer sales were retrieved for selected countries of these regions in order to compute obsolete computer quantities. To provide robust forecasts, a selection of forecasting models, namely (i) Bass, (ii) Gompertz, (iii) Logistic, (iv) Trend model, (v) Level model, (vi) AutoRegressive Moving Average (ARMA), and (vii) Exponential Smoothing, was applied, selecting for each country the model with the lowest in-sample error indices (Mean Absolute Error and Mean Square Error). As new technology does not diffuse in all regions of the world at the same speed, owing to different socio-economic factors, the lifespan distribution, which gives the probability that a certain quantity of computers is considered obsolete, is not adequately modeled in the literature. The time horizon for the forecasts is 2014-2030, and the results show a very sharp increase in the USA and the United Kingdom, due to decreasing computer lifespans and increasing sales.
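The model-selection step described here can be illustrated with two of the diffusion curves named in the abstract. The sketch below fits logistic and Gompertz curves to a hypothetical cumulative-sales series with scipy and keeps the one with the lower in-sample MSE; the starting values are assumptions, and the paper's full set of seven candidate models is not reproduced.

```python
# Sketch: fit logistic and Gompertz diffusion curves to cumulative computer sales and
# keep the one with the lower in-sample error (illustrative; data are hypothetical).
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, t0):
    return K / (1.0 + np.exp(-r * (t - t0)))

def gompertz(t, K, b, c):
    return K * np.exp(-b * np.exp(-c * t))

def best_model(t, sales):
    fits = {}
    for name, f, p0 in [("logistic", logistic, (2.0 * sales.max(), 0.5, t.mean())),
                        ("gompertz", gompertz, (2.0 * sales.max(), 5.0, 0.3))]:
        params, _ = curve_fit(f, t, sales, p0=p0, maxfev=10000)
        resid = sales - f(t, *params)
        fits[name] = {"params": params,
                      "mae": np.mean(np.abs(resid)),     # Mean Absolute Error
                      "mse": np.mean(resid ** 2)}        # Mean Square Error
    return min(fits.items(), key=lambda kv: kv[1]["mse"])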
Abstract:
This study aimed to describe and compare ventilation behavior during an incremental test using three mathematical models, and to compare the shape of the ventilation curve fitted by the best model between aerobically trained (TR) and untrained (UT) men. Thirty-five subjects underwent a treadmill test with 1 km·h⁻¹ increases every minute until exhaustion. Twenty-second ventilation averages were plotted against time and fitted by a bi-segmental regression model (2SRM), a three-segmental regression model (3SRM), and an exponential growth model (GEM). The residual sum of squares (RSS) and mean square error (MSE) were calculated for each model. Correlations were calculated between peak VO2 (VO2PEAK), peak speed (SpeedPEAK), the ventilatory threshold identified by the best model (VT2SRM), and the first derivative calculated for workloads below (moderate intensity) and above (heavy intensity) VT2SRM. The RSS and MSE for GEM were significantly higher (p < 0.01) than for 2SRM and 3SRM in the pooled data and in UT, but no significant difference was observed among the models in TR. In the pooled data, the first derivative at moderate intensities showed significant negative correlations with VT2SRM (r = -0.58; p < 0.01) and SpeedPEAK (r = -0.46; p < 0.05), while the first derivative at heavy intensities showed a significant negative correlation with VT2SRM (r = -0.43; p < 0.05). In the UT group the first derivative at moderate intensities showed significant negative correlations with VT2SRM (r = -0.65; p < 0.05) and SpeedPEAK (r = -0.61; p < 0.05), while in the TR group the first derivative at heavy intensities showed significant negative correlations with VT2SRM (r = -0.73; p < 0.01), SpeedPEAK (r = -0.73; p < 0.01) and VO2PEAK (r = -0.61; p < 0.05). Ventilation behavior during an incremental treadmill test tends to show only one threshold. UT subjects showed a slower ventilation increase at moderate intensities, while TR subjects showed a slower ventilation increase at heavy intensities.
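The bi-segmental model (2SRM) mentioned above can be fitted by a simple grid search over the breakpoint, with the breakpoint of the best-fitting model taken as the threshold. The sketch below is illustrative only, assuming a continuous "broken-stick" parameterization; it is not the exact fitting routine used in the study.

```python
# Sketch of a continuous two-segment ("broken-stick") regression fitted by grid search
# over the breakpoint; the breakpoint with minimum RSS is taken as the threshold
# (illustrative, not the study's exact routine).
import numpy as np

def two_segment_fit(t, ve):
    """t: time (or speed); ve: ventilation averages. Returns (breakpoint, coefficients, rss)."""
    best = (None, None, np.inf)
    for bp in t[2:-2]:                                    # keep a few points in each segment
        X = np.column_stack([np.ones_like(t), t, np.maximum(t - bp, 0.0)])
        beta, resid, *_ = np.linalg.lstsq(X, ve, rcond=None)
        rss = float(resid[0]) if resid.size else float(np.sum((ve - X @ beta) ** 2))
        if rss < best[2]:
            best = (bp, beta, rss)
    return best
```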
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation for the Integrated Master's degree in Information Systems Engineering and Management
Abstract:
Recently, the spin-echo full-intensity acquired localized (SPECIAL) spectroscopy technique was proposed to unite the advantages of short TEs on the order of milliseconds with full sensitivity, and was applied to in vivo rat brain. In the present study, SPECIAL was adapted and optimized for use on clinical platforms at 3T and 7T by combining interleaved water suppression (WS) and outer volume saturation (OVS), optimized sequence timing, and improved shimming using FASTMAP. High-quality single-voxel spectra of human brain were acquired at TEs at or below 6 ms on clinical 3T and 7T systems for six volunteers. Narrow linewidths (6.6 +/- 0.6 Hz at 3T and 12.1 +/- 1.0 Hz at 7T for water) and the high signal-to-noise ratio (SNR) of the artifact-free spectra enabled the quantification of a neurochemical profile consisting of 18 metabolites with Cramér-Rao lower bounds (CRLBs) below 20% at both field strengths. The enhanced sensitivity and increased spectral resolution at 7T compared to 3T allowed a two-fold reduction in scan time, increased precision of quantification for 12 metabolites, and the additional quantification of lactate with a CRLB below 20%. Improved sensitivity at 7T was also demonstrated by a 1.7-fold increase in average SNR (peak height divided by the root mean square of the noise) per unit time.
Abstract:
This paper conducts an empirical analysis of the relationship between wage inequality, employment structure, and returns to education in urban areas of Mexico during the past two decades (1987-2008). Applying Melly’s (2005) quantile regression based decomposition, we find that changes in wage inequality have been driven mainly by variations in educational wage premia. Additionally, we find that changes in employment structure, including occupation and firm size, have played a vital role. This evidence seems to suggest that the changes in wage inequality in urban Mexico cannot be interpreted in terms of a skill-biased change, but rather they are the result of an increasing demand for skills during that period.
Abstract:
The author studies the error and complexity of the discrete random walk Monte Carlo technique for radiosity, using both the shooting and gathering methods. The author shows that the shooting method exhibits a lower complexity than the gathering one and, under some constraints, has linear complexity. This is an improvement over a previous result that pointed to an O(n log n) complexity. The author gives and compares three unbiased estimators for each method, and obtains closed forms and bounds for their variances. The author also bounds the expected value of the mean square error (MSE). Some of the results obtained are also shown.
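For intuition, the sketch below shows a toy gathering-style random walk estimator for the radiosity system B = E + diag(ρ)FB, assuming a known form-factor matrix F whose rows sum to one (closed environment). It is illustrative only, not one of the three estimators analyzed in the paper; because each walk's score is unbiased, the MSE of the sample mean is estimated by the sample variance divided by the number of walks.

```python
# Toy sketch of a gathering-style random walk estimator for B = E + diag(rho) F B,
# with known form-factor matrix F (illustrative only).
import numpy as np

def gathering_walk(i, E, rho, F, rng):
    """One unbiased score for the radiosity B_i: absorb with probability 1 - rho."""
    score, cur = 0.0, i
    while True:
        score += E[cur]
        if rng.random() >= rho[cur]:                  # absorption ends the walk
            return score
        cur = rng.choice(E.size, p=F[cur])            # next patch drawn from form factors

def estimate_radiosity(i, E, rho, F, n_walks=10000, seed=0):
    rng = np.random.default_rng(seed)
    scores = np.array([gathering_walk(i, E, rho, F, rng) for _ in range(n_walks)])
    mean = scores.mean()
    mse_of_mean = scores.var(ddof=1) / n_walks        # unbiased estimator: MSE ~ variance of the mean
    return mean, mse_of_mean
```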
Abstract:
The estimation of unavailable soil variables from other, related, measured variables can be achieved with pedotransfer functions (PTF), mainly saving time and reducing cost. Large differences among soils, however, can yield undesirable results when applying this method. This study discusses the application of PTFs developed by several authors, using a variety of soils with different characteristics, to estimate the water content of two Brazilian lowland soils. Comparisons are made between PTF-estimated and field-measured data using statistical and geostatistical tools such as mean error, root mean square error, semivariograms, cross-validation, and the regression coefficient. The eight PTFs tested to estimate gravimetric soil water content (Ug) at tensions of 33 kPa and 1,500 kPa tended to overestimate Ug at 33 kPa and to underestimate Ug at 1,500 kPa. The PTFs were ranked according to their predictive performance and also with respect to their ability to describe the structure of the spatial variability of the measured values. Although none of the PTFs changed the distribution pattern of the data, all resulted in means and variances statistically different from those observed for the measured values. The PTFs with the best predictions of Ug at 33 kPa and at 1,500 kPa were not the ones that best reproduced the structure of spatial variability of these variables.
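The statistical part of the comparison (mean error, RMSE, regression coefficient) is simple to reproduce; the geostatistical part (semivariograms, cross-validation) is not shown here. The sketch below is illustrative only, with hypothetical arrays of measured and PTF-estimated water contents.

```python
# Sketch of agreement statistics used to rank PTFs: mean error, RMSE and the slope of
# the regression of estimated on measured values (illustrative).
import numpy as np

def ptf_agreement(measured, estimated):
    measured, estimated = np.asarray(measured), np.asarray(estimated)
    me = np.mean(estimated - measured)                        # mean error (bias)
    rmse = np.sqrt(np.mean((estimated - measured) ** 2))      # root mean square error
    slope, intercept = np.polyfit(measured, estimated, 1)     # regression coefficient
    return me, rmse, slope
```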
Abstract:
Pedotransfer functions (PTF) were developed to estimate the parameters (α, n, θr and θs) of the van Genuchten (1980) model describing soil water retention curves. The data came from various sources, mainly from studies conducted by universities in Northeast Brazil, by the Brazilian Agricultural Research Corporation (Embrapa), and by the São Francisco and Parnaíba river basin development corporation (Codevasf), totaling 786 retention curves, which were divided into two data sets: 85% for the development of the PTFs and 15% for testing and validation, treated as independent data. Besides general PTFs for all soils together, specific PTFs were developed for the soil classes Ultisols, Oxisols, Entisols, and Alfisols by multiple regression, using a stepwise procedure (forward and backward) to select the best predictors. Two types of PTFs were developed: the first included all predictors (bulk density and the proportions of sand, silt, clay, and organic matter), and the second only the proportions of sand, silt, and clay. The adequacy of the PTFs was evaluated with the correlation coefficient (R) and the Willmott index (d); for the moisture content at specific pressure heads, the root mean square error (RMSE) was used. The PTF prediction of the retention curve was relatively poor, except for the residual water content. The inclusion of organic matter as a predictor improved the prediction of the van Genuchten parameter α. The performance of the soil-class-specific PTFs was not better than that of the general PTFs. Except for the saturated water content estimated from particle size distribution, the tested models for predicting water content at specific pressure heads proved satisfactory. Predictions of water content at pressure heads more negative than -0.6 m using a PTF based on particle size distribution were only slightly inferior to those obtained with PTFs that include bulk density and organic matter content.
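For reference, the van Genuchten (1980) retention function and a direct least-squares fit of its parameters to measured retention pairs are sketched below. This is illustrative only: it assumes the common Mualem restriction m = 1 - 1/n, and the starting values and bounds are assumptions, not those of the study (which estimated the parameters from soil properties via regression PTFs rather than by fitting each curve here).

```python
# Sketch: the van Genuchten (1980) retention function (with m = 1 - 1/n) and a
# least-squares fit of its parameters to measured water-retention pairs
# (illustrative; units, starting values and bounds are assumptions).
import numpy as np
from scipy.optimize import curve_fit

def van_genuchten(h, theta_r, theta_s, alpha, n):
    """h: suction (taken positive, in the same length unit as 1/alpha)."""
    m = 1.0 - 1.0 / n
    return theta_r + (theta_s - theta_r) / (1.0 + (alpha * np.abs(h)) ** n) ** m

def fit_retention_curve(h, theta):
    p0 = (0.05, 0.45, 0.05, 1.5)                              # theta_r, theta_s, alpha, n
    bounds = ([0.0, 0.0, 1e-4, 1.01], [0.4, 0.8, 10.0, 10.0])
    params, _ = curve_fit(van_genuchten, h, theta, p0=p0, bounds=bounds)
    return dict(zip(["theta_r", "theta_s", "alpha", "n"], params))
```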