945 results for Mean Squared Error
Abstract:
Obesity is associated with increased sympathetic activity and higher mortality. Treatment of this condition is often frustrating. Roux-en-Y gastric bypass is currently the most effective technique for treatment of obesity. The aim of the present study is to assess the effects of this surgery on cardiac autonomic activity, including the influence of gender and age, through heart rate variability (HRV) analysis. The study group consisted of 71 obese patients undergoing gastric bypass. Time domain measures of HRV, obtained from 24-h Holter recordings, were evaluated before and 6 months after surgery, and the results were compared. The percentage of interval differences of successive normal sinus beats greater than 50 ms (pNN50) and the square root of the mean squared differences of successive normal sinus beat intervals (rMSSD) were used to estimate the short-term components of HRV, related to parasympathetic activity. The standard deviation of intervals between all normal sinus beats (SDNN) was related to overall HRV. SDNN, pNN50, and rMSSD showed a significant increase 6 months after surgery (p < 0.001, p = 0.001, and p = 0.002, respectively). Men presented a greater increase of SDNN than women (p = 0.006) during the follow-up. There was a difference in rMSSD evolution across age groups (p = 0.002); only younger patients presented a significant increase of rMSSD. Overall HRV increased 6 months after surgery, and this increase was more evident in men. Cardiac parasympathetic activity also increased, but only in younger patients.
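The time-domain HRV measures named in this abstract (SDNN, rMSSD, pNN50) can be sketched in Python from a list of normal-to-normal (NN) intervals; the interval values in the usage note are illustrative, not data from the study.

```python
import math

def hrv_time_domain(nn_ms):
    """Time-domain HRV measures from a sequence of normal-to-normal
    (NN) intervals in milliseconds.

    SDNN  - standard deviation of all NN intervals (overall HRV)
    rMSSD - square root of the mean squared successive differences
    pNN50 - percentage of successive differences greater than 50 ms
    """
    n = len(nn_ms)
    mean = sum(nn_ms) / n
    # Sample standard deviation of the NN intervals
    sdnn = math.sqrt(sum((x - mean) ** 2 for x in nn_ms) / (n - 1))
    # Successive differences between adjacent NN intervals
    diffs = [b - a for a, b in zip(nn_ms, nn_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    pnn50 = 100.0 * sum(1 for d in diffs if abs(d) > 50) / len(diffs)
    return sdnn, rmssd, pnn50
```

For example, `hrv_time_domain([800, 810, 790, 850, 800])` returns roughly (23.45, 40.62, 25.0).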
Abstract:
The objective of this study is to compare the accuracy of sonographic estimation of fetal weight of macrosomic babies in diabetic vs non-diabetic pregnancies. All babies weighing 4000 g or more at birth who had ultrasound scans performed within one week of delivery were included in this retrospective study. Pregnancies with diabetes mellitus were compared to those without diabetes mellitus. The mean simple error (actual birthweight - estimated fetal weight), mean standardised absolute error (absolute value of the simple error (g) / actual birthweight (kg)), and the percentage of estimated birthweights falling within 15% of the actual birthweight were compared between the two groups. There were 9516 deliveries during the study period. Of this total, 1211 (12.7%) babies weighed 4000 g or more. A total of 56 non-diabetic pregnancies and 19 diabetic pregnancies were compared. The average sonographic estimation of fetal weight in diabetic pregnancies was 8% less than the actual birthweight, compared to 0.2% in the non-diabetic group (p < 0.01). The estimated fetal weight was within 15% of the birthweight in 74% of the diabetic pregnancies, compared to 93% of the non-diabetic pregnancies (p < 0.05). In the diabetic group, 26.3% of the birthweights were underestimated by more than 15%, compared to 5.4% in the non-diabetic group (p < 0.05). In conclusion, the prediction accuracy of fetal weight estimation using standard formulae in macrosomic fetuses is significantly worse in diabetic pregnancies than in non-diabetic pregnancies. When sonographic fetal weight estimation is used to influence the mode of delivery for diabetic women, a more conservative cut-off needs to be considered.
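As a sketch, the three comparison metrics defined in this abstract can be computed as follows; the birthweights and estimates in the usage note are illustrative, not study data.

```python
def weight_estimation_errors(actual_g, estimated_g):
    """Per the abstract's definitions:
    simple error             = actual birthweight - estimated fetal weight (g)
    standardised abs. error  = |simple error| (g) / actual birthweight (kg)
    pct_within_15            = share of estimates within 15% of actual weight
    """
    simple = [a - e for a, e in zip(actual_g, estimated_g)]
    std_abs = [abs(s) / (a / 1000.0) for s, a in zip(simple, actual_g)]
    within = [abs(s) <= 0.15 * a for s, a in zip(simple, actual_g)]
    mean_simple = sum(simple) / len(simple)
    mean_std_abs = sum(std_abs) / len(std_abs)
    pct_within_15 = 100.0 * sum(within) / len(within)
    return mean_simple, mean_std_abs, pct_within_15
```

For instance, with actual weights [4000, 4200] g and estimates [4000, 3500] g, the mean simple error is 350 g and only half the estimates fall within 15% of the actual birthweight.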
Abstract:
Observations of accelerating seismic activity prior to large earthquakes in natural fault systems have raised hopes for intermediate-term earthquake forecasting. If this phenomenon does exist, what causes it to occur? Recent theoretical work suggests that the accelerating seismic release sequence is a symptom of increasing long-wavelength stress correlation in the fault region. A more traditional explanation, based on Reid's elastic rebound theory, argues that an accelerating sequence of seismic energy release could be a consequence of increasing stress in a fault system whose stress moment release is dominated by large events. Both of these theories are examined using two discrete models of seismicity: a Burridge-Knopoff block-slider model and an elastic continuum-based model. Both models display an accelerating release of seismic energy prior to large simulated earthquakes. In both models the rate of seismic energy release is correlated with the total root-mean-squared stress and the level of long-wavelength stress correlation. Furthermore, both models exhibit a systematic increase in the number of large events at high stress and high long-wavelength stress correlation levels. These results suggest that either explanation is plausible for the accelerating moment release in the models examined. A statistical model based on the Burridge-Knopoff block-slider is constructed which indicates that stress alone is sufficient to produce an accelerating release of seismic energy with time prior to a large earthquake.
Abstract:
Low-density lipoprotein oxidation is implicated in the development of atherosclerosis. Plasma susceptibility to oxidation may be used as a marker of low-density lipoprotein oxidation and thus predict atherosclerotic risk. In this study the authors investigated the relationship between plasma susceptibility to oxidation and exposure to automotive pollution in a group of automobile mechanics (n = 16) exposed to high levels of automotive pollution, vs. matched controls (n = 13). The authors induced plasma oxidation with a free radical initiator and determined susceptibility to oxidation by (1) change in absorbance at 234 nm, (2) lag time to conjugated diene formation, and (3) linear slope of the oxidation curve. Mechanics had significantly higher values (mean ± standard error) for change in absorbance (1.60 ± 0.05 vs. 1.36 ± 0.05; p < .002) and slope (1.6 × 10⁻³ ± 0.1 × 10⁻³ vs. 1.3 × 10⁻³ ± 0.1 × 10⁻³; p < .001), compared with controls. These results indicate that regular exposure to automotive pollutants increases plasma susceptibility to oxidation and may, in the long term, increase the risk of developing atherosclerosis.
Abstract:
Knowledge of the rainfall erosivity value (R) of a given location is essential for estimating soil losses with the Universal Soil Loss Equation, and is therefore of great importance in conservation planning. To obtain estimates of R for locations where it is unknown, an artificial neural network (ANN) was developed and its accuracy was compared with that of the "inverse distance to a power" (ID) interpolation method. Compared with the ID interpolation method, the ANN showed a lower mean relative error in estimating R and a better confidence index, classified as "Excellent"; it can therefore be used in planning land use, soil management, and conservation in the State of São Paulo.
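A minimal sketch of the "inverse distance to a power" interpolation the ANN was compared against, assuming Euclidean distances; the station coordinates and values in the usage note are illustrative, not data from the study.

```python
def idw(points, target, power=2.0):
    """Inverse-distance-weighted estimate at `target`.

    points: list of ((x, y), value) pairs for known stations
    target: (x, y) of the location to estimate
    power:  the distance exponent of the weighting
    """
    num = den = 0.0
    for (x, y), value in points:
        d2 = (x - target[0]) ** 2 + (y - target[1]) ** 2
        if d2 == 0.0:
            return value  # target coincides with a known station
        w = 1.0 / d2 ** (power / 2.0)  # w = 1 / d**power
        num += w * value
        den += w
    return num / den
```

For two equidistant stations with values 100 and 200, the midpoint estimate is 150, regardless of the power chosen.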
Abstract:
In distributed video coding, motion estimation is typically performed at the decoder to generate the side information, increasing the decoder complexity while providing low complexity encoding in comparison with predictive video coding. Motion estimation can be performed once to create the side information or several times to refine its quality along the decoding process. In this paper, motion estimation is performed at the decoder side to generate multiple side information hypotheses, which are adaptively and dynamically combined whenever additional decoded information is available. The proposed iterative side information creation algorithm is inspired by video denoising filters and requires some statistics of the virtual channel between each side information hypothesis and the original data. With the proposed denoising algorithm for side information creation, an RD performance gain of up to 1.2 dB is obtained for the same bitrate.
Abstract:
OBJECTIVE: To assess the factorial and construct validity of the Brazilian version of the "Cuestionario para la Evaluación del Síndrome de Quemarse por el Trabajo" (CESQT). METHODS: The adaptation of the original Spanish questionnaire into Portuguese included translation, back-translation, and semantic-equivalence steps. Confirmatory factor analysis was performed using four-factor structural equation models, similar to the original CESQT structure. The sample comprised 714 teachers working in educational institutions in the city of Porto Alegre, RS, Brazil, and its metropolitan area in 2008. The questionnaire has 20 items distributed across four subscales: Enthusiasm toward the job (5 items), Psychological exhaustion (4 items), Indolence (6 items), and Guilt (5 items). The model was analyzed with the LISREL 8 program. RESULTS: Fit measures indicated adequacy of the hypothesized model: χ²(164) = 605.86 (p < 0.001), Goodness of Fit Index = 0.92, Adjusted Goodness of Fit Index = 0.90, Root Mean Square Error of Approximation = 0.062, Non-Normed Fit Index = 0.91, Comparative Fit Index = 0.92, Parsimony Normed Fit Index = 0.77. Cronbach's alpha was greater than 0.70 for all subscales. CONCLUSIONS: The results indicate that the CESQT has adequate factorial validity and internal consistency for assessing burnout in Brazilian teachers.
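For reference, the RMSEA fit index reported in this abstract follows, in its common form, from the model χ², its degrees of freedom, and the sample size; a minimal sketch:

```python
import math

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation:
    RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))
    A perfectly fitting model (chi2 <= df) yields 0.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

With the abstract's values, `rmsea(605.86, 164, 714)` gives approximately 0.061, close to the reported 0.062; the exact figure depends on the formula variant the software uses (e.g. N vs. N − 1).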
Abstract:
OBJECTIVE: To carry out the cross-cultural adaptation of the Portuguese version of the Maslach Burnout Inventory for students and to investigate its reliability, validity, and cross-cultural invariance. METHODS: Face validation involved a multidisciplinary team, and content validation was performed. The Portuguese version was completed online in 2009 by 958 Brazilian and 556 Portuguese urban university students. Confirmatory factor analysis was carried out using χ²/df, the comparative fit index (CFI), the goodness of fit index (GFI), and the root mean square error of approximation (RMSEA) as fit indices. To verify the stability of the factor solution relative to the original English version, cross-validation was performed on 2/3 of the total sample and replicated on the remaining 1/3. Convergent validity was estimated by the average variance extracted and composite reliability. Discriminant validity was assessed, and internal consistency was estimated by Cronbach's alpha coefficient. Concurrent validity was estimated by correlational analysis between the Portuguese version and the mean scores of the Copenhagen Burnout Inventory; divergent validity was compared against the Beck Depression Inventory. Model invariance between the Brazilian and Portuguese samples was evaluated. RESULTS: The three-factor model of Exhaustion, Cynicism, and Efficacy showed adequate fit (χ²/df = 8.498; CFI = 0.916; GFI = 0.902; RMSEA = 0.086). The factor structure was stable (λ: χ²dif = 11.383, p = 0.50; Cov: χ²dif = 6.479, p = 0.372; Residuals: χ²dif = 21.514, p = 0.121). Adequate convergent validity (AVE = 0.45–0.64, CR = 0.82–0.88), discriminant validity (ρ² = 0.06–0.33), and internal consistency (α = 0.83–0.88) were observed. The concurrent validity of the Portuguese version with the Copenhagen inventory was adequate (r = 0.21–0.74).
Assessment of the instrument's divergent validity was hampered by the theoretical closeness of the Exhaustion and Cynicism dimensions of the Portuguese version to the Beck scale. Instrument invariance between the Brazilian and Portuguese samples was not observed (λ: χ²dif = 84.768, p < 0.001; Cov: χ²dif = 129.206, p < 0.001; Residuals: χ²dif = 518.760, p < 0.001). CONCLUSIONS: The Portuguese version of the Maslach Burnout Inventory for students showed adequate reliability and validity, but its factor structure was not invariant across countries, indicating a lack of cross-cultural stability.
Abstract:
Final project report submitted to obtain the degree of Master in Electronics and Telecommunications Engineering
Abstract:
The performance of the Weather Research and Forecast (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by its significant wind energy resource. The grid nudging and integration time of the simulations were the numerical options tested. Since the goal is to simulate the near-surface wind, the physical parameterization schemes regarding the boundary layer were the ones under evaluation. The influences of local terrain complexity and simulation domain resolution on the model results were also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error, and Bias. Wind speed histograms, occurrences, and energy wind roses were also used for model evaluation. Globally, the model accurately reproduced the local wind regime, despite a significant underestimation of the wind speed. The wind direction is reasonably simulated by the model, especially in wind regimes with a clear dominant sector; in the presence of low wind speeds, however, the characterization of the wind direction (observed and simulated) is very subjective and led to higher deviations between simulations and observations. Within the tested options, results show that the use of grid nudging in simulations whose integration time does not exceed 2 days is the best numerical configuration, and the parameterization set composed of the MM5–Yonsei University–Noah physical schemes is the most suitable for this site. Results were poorer in sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve the model performance.
Results suggest that error minimization in the wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest together with the use of high resolution terrain data, if available.
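The three verification statistics used in this abstract are linked by the decomposition SDE² = RMSE² − Bias²; a minimal sketch, with illustrative wind speeds rather than station data:

```python
import math

def verification_stats(observed, simulated):
    """Bias, RMSE, and Standard Deviation Error of simulated vs. observed
    values (e.g. wind speed in m/s)."""
    n = len(observed)
    errors = [s - o for o, s in zip(observed, simulated)]
    bias = sum(errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    # SDE is the spread of the error around its mean: SDE^2 = RMSE^2 - Bias^2
    sde = math.sqrt(max(rmse ** 2 - bias ** 2, 0.0))
    return bias, rmse, sde
```

A model that is uniformly 1 m/s too slow has Bias = −1, RMSE = 1, and SDE = 0: the error is pure offset, with no scatter.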
Abstract:
The behavior of robotic manipulators with backlash is analyzed. Based on the pseudo-phase plane, two indices are proposed to evaluate the backlash effect upon the robotic system: the root mean square error and the fractal dimension. For the dynamical analysis, the noisy signals captured from the system are filtered through wavelets. Several tests are developed that demonstrate the coherence of the results.
Abstract:
In the present paper we assess the performance of information-theoretic inspired risk functionals in multilayer perceptrons with reference to the two most popular ones, Mean Square Error and Cross-Entropy. The recently proposed information-theoretic inspired risks are: HS and HR2, respectively the Shannon and quadratic Rényi entropies of the error; ZED, a risk reflecting the error density at zero error; and EXP, a generalized exponential risk able to mimic a wide variety of risk functionals, including the information-theoretic ones. The experiments were carried out with multilayer perceptrons on 35 public real-world datasets, all performed according to the same protocol. The statistical tests applied to the experimental results showed that the ubiquitous mean square error was the least interesting risk functional for multilayer perceptrons; namely, mean square error never achieved a significantly better classification performance than competing risks. Cross-entropy and EXP were the risks found by several tests to be significantly better than their competitors. Counts of significantly better and worse risks also showed the usefulness of HS and HR2 for some datasets.
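For reference, the two baseline risk functionals the paper benchmarks against can be written per-batch as follows (binary targets t in {0, 1}, network outputs y in (0, 1)); the values in the usage note are illustrative, not results from the paper:

```python
import math

def mse_risk(targets, outputs):
    """Mean Square Error risk over a batch."""
    return sum((t - y) ** 2 for t, y in zip(targets, outputs)) / len(targets)

def cross_entropy_risk(targets, outputs):
    """Binary cross-entropy risk over a batch (outputs strictly in (0, 1))."""
    return -sum(t * math.log(y) + (1 - t) * math.log(1 - y)
                for t, y in zip(targets, outputs)) / len(targets)
```

For a confident, correct pair of predictions, e.g. targets [1, 0] and outputs [0.9, 0.1], MSE is 0.01 while cross-entropy is about 0.105; cross-entropy penalizes confident mistakes much more sharply than MSE, which is one reason the two risks train networks differently.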
Abstract:
Dissertation submitted to obtain the degree of Master in Electrical Engineering, Energy branch
Abstract:
The development of biopharmaceutical manufacturing processes presents critical constraints, the major one being that these molecules are synthesized by living cells, which show inherent behavior variability due to their high sensitivity to small fluctuations in the cultivation environment. To speed up the development process and to control this critical manufacturing step, it is relevant to develop high-throughput and in situ monitoring techniques, respectively. Here, high-throughput mid-infrared (MIR) spectral analysis of dehydrated cell pellets and in situ near-infrared (NIR) spectral analysis of the whole culture broth were compared to monitor plasmid production in recombinant Escherichia coli cultures. Good partial least squares (PLS) regression models were built, either based on MIR or NIR spectral data, yielding high coefficients of determination (R²) and low predictive errors (root mean square error, or RMSE) for estimating host cell growth, plasmid production, carbon source consumption (glucose and glycerol), and by-product acetate production and consumption. The predictive errors for biomass, plasmid, glucose, glycerol, and acetate based on MIR data were 0.7 g/L, 9 mg/L, 0.3 g/L, 0.4 g/L, and 0.4 g/L, respectively, whereas for NIR data the predictive errors obtained were 0.4 g/L, 8 mg/L, 0.3 g/L, 0.2 g/L, and 0.4 g/L, respectively. The models obtained are robust, as they are valid for cultivations conducted with different media compositions and with different cultivation strategies (batch and fed-batch). Besides being conducted in situ with a sterilized fiber optic probe, NIR spectroscopy allows building PLS models for estimating plasmid, glucose, and acetate that are as accurate as those obtained from the high-throughput MIR setup, and better models for estimating biomass and glycerol, with a 57% and 50% decrease in RMSE, respectively, compared to the MIR setup.
However, MIR spectroscopy could be a valid alternative in the case of optimization protocols, due to possible space constraints or high costs associated with the use of multi-fiber optic probes for multi-bioreactors. In this case, MIR could be conducted in a high-throughput manner, analyzing hundreds of culture samples in a rapid and automatic mode.
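The two figures of merit reported for the PLS calibration models (R² and RMSE) can be sketched as follows; the reference and predicted concentrations in the usage note are illustrative, not measurements from the study.

```python
import math

def rmse(reference, predicted):
    """Root mean square error between reference and predicted values."""
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(reference, predicted))
                     / len(reference))

def r_squared(reference, predicted):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean_r = sum(reference) / len(reference)
    ss_res = sum((r - p) ** 2 for r, p in zip(reference, predicted))
    ss_tot = sum((r - mean_r) ** 2 for r in reference)
    return 1.0 - ss_res / ss_tot
```

For reference concentrations [1.0, 2.0, 3.0] g/L predicted as [1.1, 2.1, 3.1] g/L, the RMSE is 0.1 g/L and R² is 0.985: a small systematic offset barely dents R² but shows up directly in the RMSE.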
Abstract:
Dissertation submitted in partial fulfillment of the requirements for the Degree of Master of Science in Geospatial Technologies.