942 results for "mean-square error"
Abstract:
This work proposes the use of evolutionary computation to jointly solve the multiuser channel estimation (MuChE) and maximum-likelihood detection problems in direct-sequence code division multiple access (DS/CDMA) systems. The effectiveness of the proposed heuristic approach is demonstrated by comparing its performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results for a genetic algorithm (GA) applied to multipath DS/CDMA channel estimation and multi-user detection (MuD) show that the proposed genetic algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multi-user detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was jointly analyzed in terms of the number of operations needed to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
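The abstract reports nMSE without defining it; a common definition for channel estimates is the squared estimation error normalized by the squared norm of the true channel. A minimal sketch under that assumption (function name and data are illustrative):

```python
import numpy as np

def nmse(h_true, h_est):
    """Normalized mean square error between a true and an estimated
    channel vector: ||h_est - h_true||^2 / ||h_true||^2."""
    h_true = np.asarray(h_true, dtype=float)
    h_est = np.asarray(h_est, dtype=float)
    return np.sum((h_est - h_true) ** 2) / np.sum(h_true ** 2)

# A uniform 10% amplitude error on every tap yields nMSE = 0.01 (1%)
h = np.array([1.0, 0.5, 0.25])
print(nmse(h, 1.1 * h))  # 0.01
```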
Abstract:
Hydrological models featuring root water uptake usually do not include compensation mechanisms, whereby reduced uptake from dry layers is compensated by increased uptake from wetter layers. We developed a physically based root water uptake model with an implicit compensation mechanism. Based on an expression for the matric flux potential (M) as a function of the distance to the root, and assuming a depth-independent value of M at the root surface, uptake per layer is shown to be a function of layer bulk M, root surface M, and a weighting factor that depends on root length density and root radius. Actual transpiration can be calculated from the sum of layer uptake rates. The proposed reduction function (PRF) was built into the SWAP model, and predictions were compared to those made with the Feddes reduction function (FRF). Simulation results were tested against data from Canada (continuous spring wheat [Triticum aestivum L.]) and Germany (spring wheat, winter barley [Hordeum vulgare L.], sugarbeet [Beta vulgaris L.], winter wheat rotation). For the Canadian data, the root mean square error of prediction (RMSEP) for water content in the upper soil layers was very similar for FRF and PRF; for the deeper layers, RMSEP was smaller for PRF. For the German data, RMSEP was lower for PRF in the upper layers and similar for both models in the deeper layers. In conclusion, and subject to the properties of the data sets available for testing, the incorporation of the new reduction function into SWAP was successful, providing new capabilities for simulating compensated root water uptake without increasing the number of input parameters or degrading model performance.
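RMSEP as used above is the root mean square difference between predicted and observed water contents. A minimal sketch, with made-up example values:

```python
import numpy as np

def rmsep(observed, predicted):
    """Root mean square error of prediction."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return np.sqrt(np.mean((predicted - observed) ** 2))

# illustrative volumetric water contents (m3/m3)
obs = [0.30, 0.25, 0.20]
pred = [0.28, 0.26, 0.23]
print(rmsep(obs, pred))
```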
Abstract:
The use of remote sensing is necessary for monitoring forest carbon stocks at large scales. Optical remote sensing, although not the most suitable technique for directly estimating stand biomass, offers the advantage of providing large temporal and spatial datasets. In particular, information on canopy structure is encompassed in stand reflectance time series. This study focused on the example of Eucalyptus forest plantations, which have recently attracted much attention as a result of their high expansion rate in many tropical countries. Stand-scale time series of the Normalized Difference Vegetation Index (NDVI) were obtained from MODIS satellite data, after a procedure involving un-mixing and interpolation, for about 15,000 ha of plantations in southern Brazil. Comparing the planting date of the current rotation (and therefore the age of the stands) estimated from these time series with the true values provided by the company showed a root mean square error of 35.5 days. Age alone explained more than 82% of stand wood volume variability and 87% of stand dominant height variability. Age variables were combined with other variables derived from the NDVI time series and simple bioclimatic data by means of linear (stepwise) or nonlinear (Random Forest) regressions. The nonlinear regressions gave r² values of 0.90 for volume and 0.92 for dominant height, with an accuracy of about 25 m³/ha for volume (15% of the mean volume) and about 1.6 m for dominant height (8% of the mean height). The improvement obtained by including NDVI and bioclimatic data comes from the fact that the NDVI accumulated since the planting date integrates the interannual variability of leaf area index (LAI), light interception by the foliage, and growth variations due, for example, to seasonal water stress.
The accuracy of biomass and height predictions was strongly improved by using the NDVI integrated over the first two years after planting, which are critical for stand establishment. These results open perspectives for cost-effective monitoring of biomass at large scales in intensively managed plantation forests. (C) 2011 Elsevier Inc. All rights reserved.
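NDVI itself is computed from near-infrared and red reflectances as (NIR − red)/(NIR + red), and the cumulative NDVI discussed above can then be accumulated over the time series. A small sketch with illustrative reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectances."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# NDVI of a dense canopy (high NIR, low red reflectance)
print(ndvi(0.5, 0.1))  # 0.666...

# cumulative NDVI since planting, one value per compositing period
series = ndvi([0.2, 0.3, 0.4, 0.5], [0.15, 0.12, 0.11, 0.10])
print(np.cumsum(series))
```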
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing decoder complexity while providing low-complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information, or several times to refine its quality along the decoding process. In this paper, motion estimation is performed at the decoder to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information becomes available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
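The abstract does not give the combination rule. One plausible denoising-style rule (an assumption for illustration, not necessarily the paper's algorithm) weights each hypothesis by the inverse of its estimated virtual-channel noise variance:

```python
import numpy as np

def fuse_hypotheses(hyps, noise_vars):
    """Combine side-information hypotheses pixel-wise, weighting each by
    the inverse of its estimated virtual-channel noise variance.
    (Hypothetical fusion rule used here only as a sketch.)"""
    hyps = np.asarray(hyps, dtype=float)            # stack: (n_hyps, ...)
    w = 1.0 / np.asarray(noise_vars, dtype=float)   # reliability weights
    w = w / w.sum()                                 # normalize to sum to 1
    return np.tensordot(w, hyps, axes=1)

# two hypotheses of a 2-pixel block; the first is twice as reliable
h = [[10.0, 20.0], [14.0, 24.0]]
print(fuse_hypotheses(h, noise_vars=[1.0, 2.0]))  # weights 2/3 and 1/3
```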
Abstract:
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. Identifying this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error based approach to infer the signal subspace in hyperspectral imagery. The method, termed hyperspectral signal identification by minimum error, is eigendecomposition-based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated using simulated and real hyperspectral images.
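The full method compares signal and noise terms per eigendirection via a mean-square-error criterion; the toy sketch below keeps only the core idea of eigendecomposing the sample correlation matrix and retaining eigenvalues above the noise floor (the threshold factor, dimensions, and data are all illustrative assumptions):

```python
import numpy as np

def subspace_dim(data, noise_var):
    """Toy sketch: count eigenvalues of the sample correlation matrix
    that exceed a small multiple of the (assumed known) noise variance."""
    R = data @ data.T / data.shape[1]     # band correlation matrix
    eigvals = np.linalg.eigvalsh(R)       # ascending order
    return int(np.sum(eigvals > 5 * noise_var))

rng = np.random.default_rng(0)
S = rng.normal(size=(2, 500))                  # two independent sources
A = rng.normal(size=(5, 2))                    # mixing into 5 "bands"
X = A @ S + 0.01 * rng.normal(size=(5, 500))   # additive noise, sigma = 0.01
print(subspace_dim(X, noise_var=0.01 ** 2))
```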
Abstract:
OBJECTIVE: To evaluate the factorial and construct validity of the Brazilian version of the "Cuestionario para la Evaluación del Síndrome de Quemarse por el Trabajo" (CESQT). METHODS: The adaptation of the original Spanish questionnaire into Portuguese included the steps of translation, back-translation and semantic equivalence. Confirmatory factor analysis was performed using four-factor structural equation models, similar to the original CESQT structure. The sample consisted of 714 teachers working in educational institutions in the city of Porto Alegre, RS (Southern Brazil), and its metropolitan area, in 2008. The questionnaire has 20 items distributed across four subscales: Enthusiasm toward the job (5 items), Psychological exhaustion (4 items), Indolence (6 items) and Guilt (5 items). The model was analyzed with the LISREL 8 program. RESULTS: The fit measures indicated the adequacy of the hypothesized model: χ²(164) = 605.86 (p < 0.001), Goodness of Fit Index = 0.92, Adjusted Goodness of Fit Index = 0.90, Root Mean Square Error of Approximation = 0.062, Non-Normed Fit Index = 0.91, Comparative Fit Index = 0.92, Parsimony Normed Fit Index = 0.77. Cronbach's alpha was greater than 0.70 for all subscales. CONCLUSIONS: The results indicate that the CESQT has adequate factorial validity and internal consistency to assess burnout in Brazilian teachers.
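Cronbach's alpha, reported above for each subscale, can be computed from an item-score matrix as k/(k−1) · (1 − Σ item variances / variance of totals). A minimal sketch with toy scores:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# three perfectly parallel items give perfect internal consistency
scores = np.tile([[1.0], [2.0], [3.0], [4.0]], (1, 3))
print(cronbach_alpha(scores))  # 1.0
```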
Abstract:
OBJECTIVE: To perform a cross-cultural adaptation of the Portuguese version of the Maslach Burnout Inventory for students and to investigate its reliability, validity and cross-cultural invariance. METHODS: Face validation involved a multidisciplinary team. Content validation was performed. The Portuguese version was completed online in 2009 by 958 Brazilian and 556 Portuguese urban university students. Confirmatory factor analysis was carried out using χ²/df, the comparative fit index (CFI), the goodness of fit index (GFI) and the root mean square error of approximation (RMSEA) as fit indices. To verify the stability of the factor solution relative to the original English version, cross-validation was performed on two-thirds of the total sample and replicated on the remaining third. Convergent validity was estimated by the average variance extracted and composite reliability. Discriminant validity was assessed, and internal consistency was estimated by Cronbach's alpha coefficient. Concurrent validity was estimated by correlational analysis of the Portuguese version against the mean scores of the Copenhagen Burnout Inventory; divergent validity was assessed against the Beck Depression Inventory. Model invariance between the Brazilian and Portuguese samples was evaluated. RESULTS: The three-factor model of Exhaustion, Cynicism and Efficacy showed adequate fit (χ²/df = 8.498; CFI = 0.916; GFI = 0.902; RMSEA = 0.086). The factor structure was stable (λ: χ²dif = 11.383, p = 0.50; Cov: χ²dif = 6.479, p = 0.372; Residuals: χ²dif = 21.514, p = 0.121). Adequate convergent validity (AVE = 0.45-0.64; CR = 0.82-0.88), discriminant validity (ρ² = 0.06-0.33) and internal consistency (α = 0.83-0.88) were observed. Concurrent validity of the Portuguese version with the Copenhagen Inventory was adequate (r = 0.21-0.74).
Assessment of the instrument's divergent validity was hampered by the closeness of the theoretical concepts of the Exhaustion and Cynicism dimensions of the Portuguese version to the Beck scale. Instrument invariance between the Brazilian and Portuguese samples was not observed (λ: χ²dif = 84.768, p < 0.001; Cov: χ²dif = 129.206, p < 0.001; Residuals: χ²dif = 518.760, p < 0.001). CONCLUSIONS: The Portuguese version of the Maslach Burnout Inventory for students showed adequate reliability and validity, but its factor structure was not invariant across countries, indicating a lack of cross-cultural stability.
Abstract:
Final project for obtaining the Master's degree in Electronics and Telecommunications Engineering
Abstract:
The performance of the Weather Research and Forecasting (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by a significant wind energy resource. The numerical options tested were grid nudging and the integration time of the simulations. Since the goal is to simulate the near-surface wind, the physical parameterization schemes under evaluation were those concerning the boundary layer. The influence of local terrain complexity and of the simulation domain resolution on the model results was also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of root mean square error, standard deviation error and bias. Wind speed histograms and occurrence and energy wind roses were also used for model evaluation. Globally, the model reproduced the local wind regime accurately, despite a significant underestimation of the wind speed. The wind direction is reasonably simulated, especially in wind regimes with a clear dominant sector, but in the presence of low wind speeds the characterization of the wind direction (observed and simulated) is very subjective and led to larger deviations between simulations and observations. Within the tested options, the results show that using grid nudging in simulations with an integration time of no more than 2 days is the best numerical configuration, and that the parameterization set composed of the MM5, Yonsei University and Noah physical schemes is the most suitable for this site. Results were poorer at sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve model performance.
The results suggest that error minimization in wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest, together with the use of high-resolution terrain data, if available.
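The three evaluation metrics named above are related: for the error series e = simulated − observed, RMSE² = bias² + SDE². A small sketch with illustrative wind speeds:

```python
import numpy as np

def wind_errors(sim, obs):
    """Bias, standard deviation error (SDE) and RMSE of simulated vs
    observed wind speed; they satisfy RMSE^2 = bias^2 + SDE^2."""
    err = np.asarray(sim, dtype=float) - np.asarray(obs, dtype=float)
    bias = err.mean()
    sde = err.std()                      # population std of the error
    rmse = np.sqrt((err ** 2).mean())
    return bias, sde, rmse

# illustrative wind speeds in m/s
bias, sde, rmse = wind_errors([5.0, 6.0, 7.0], [4.0, 6.0, 8.0])
print(bias, sde, rmse)  # 0.0, 0.816..., 0.816...
```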
Abstract:
The behavior of robotic manipulators with backlash is analyzed. Based on the pseudo-phase plane, two indices are proposed to evaluate the effect of backlash on the robotic system: the root mean square error and the fractal dimension. For the dynamical analysis, the noisy signals captured from the system are filtered with wavelets. Several tests are carried out that demonstrate the coherence of the results.
Abstract:
In the present paper we assess the performance of information-theoretically inspired risk functionals in multilayer perceptrons, with reference to the two most popular ones, mean square error and cross-entropy. The recently proposed information-theoretic risks are: HS and HR2, respectively the Shannon and quadratic Rényi entropies of the error; ZED, a risk reflecting the error density at zero error; and EXP, a generalized exponential risk able to mimic a wide variety of risk functionals, including the information-theoretic ones. The experiments were carried out with multilayer perceptrons on 35 public real-world datasets, all following the same protocol. Statistical tests applied to the experimental results showed that the ubiquitous mean square error was the least interesting risk functional for multilayer perceptrons; namely, mean square error never achieved significantly better classification performance than competing risks. Cross-entropy and EXP were found by several tests to be significantly better than their competitors. Counts of significantly better and worse risks also showed the usefulness of HS and HR2 for some datasets.
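The two reference risks are standard; a minimal sketch of both functionals on a toy binary-classification output (the four proposed risks are not reproduced here):

```python
import numpy as np

def mse_risk(y, p):
    """Mean square error between targets y and network outputs p."""
    y, p = np.asarray(y, dtype=float), np.asarray(p, dtype=float)
    return np.mean((y - p) ** 2)

def cross_entropy_risk(y, p):
    """Binary cross-entropy between targets y in {0,1} and outputs p in (0,1)."""
    y, p = np.asarray(y, dtype=float), np.asarray(p, dtype=float)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y, p = [1.0, 0.0], [0.9, 0.1]
print(mse_risk(y, p))            # 0.01
print(cross_entropy_risk(y, p))  # -ln(0.9) ~ 0.105
```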
Abstract:
Dissertation for obtaining the Master's degree in Electrical Engineering, Energy branch
Abstract:
The development of biopharmaceutical manufacturing processes faces critical constraints, the major one being that these molecules are synthesized by living cells, whose behavior varies inherently due to their high sensitivity to small fluctuations in the cultivation environment. To speed up the development process and to control this critical manufacturing step, it is relevant to develop high-throughput and in situ monitoring techniques, respectively. Here, high-throughput mid-infrared (MIR) spectral analysis of dehydrated cell pellets and in situ near-infrared (NIR) spectral analysis of the whole culture broth were compared for monitoring plasmid production in recombinant Escherichia coli cultures. Good partial least squares (PLS) regression models were built from either MIR or NIR spectral data, yielding high coefficients of determination (R²) and low predictive errors (root mean square error, RMSE) for estimating host cell growth, plasmid production, carbon source consumption (glucose and glycerol), and by-product acetate production and consumption. The predictive errors for biomass, plasmid, glucose, glycerol, and acetate based on MIR data were 0.7 g/L, 9 mg/L, 0.3 g/L, 0.4 g/L, and 0.4 g/L, respectively, whereas for NIR data they were 0.4 g/L, 8 mg/L, 0.3 g/L, 0.2 g/L, and 0.4 g/L, respectively. The models obtained are robust, as they remain valid for cultivations conducted with different media compositions and different cultivation strategies (batch and fed-batch). Besides being conducted in situ with a sterilized fiber optic probe, NIR spectroscopy allows building PLS models for estimating plasmid, glucose, and acetate that are as accurate as those obtained from the high-throughput MIR setup, and better models for estimating biomass and glycerol, reducing the RMSE by 57% and 50%, respectively, compared to the MIR setup.
However, MIR spectroscopy could be a valid alternative for optimization protocols, given possible space constraints or the high costs associated with multi-fiber optic probes for multi-bioreactors. In this case, MIR could be conducted in a high-throughput manner, analyzing hundreds of culture samples in a rapid and automatic mode.
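A PLS model maps spectra to analyte concentrations and is judged by its RMSE, as above. The sketch below substitutes ordinary least squares for PLS to stay dependency-free; all data and variable names are synthetic and illustrative:

```python
import numpy as np

# Toy spectral calibration: least squares standing in for PLS (sketch only)
rng = np.random.default_rng(1)
true_coef = np.array([0.5, -0.2, 0.8])        # illustrative "pure" spectrum weights
spectra = rng.normal(size=(30, 3))            # 30 samples x 3 "wavenumbers"
conc = spectra @ true_coef                    # noise-free concentrations

coef, *_ = np.linalg.lstsq(spectra, conc, rcond=None)
rmse = np.sqrt(np.mean((spectra @ coef - conc) ** 2))
print(rmse)  # ~0 for noise-free data
```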
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.
Abstract:
Dissertation for obtaining the Master's degree in Mechanical Engineering