942 results for mean square error
Resumo:
This research evaluated the quality of the management of Brazilian stock funds in the period from January 1997 to October 2006. The analysis was based on performance measures from Modern Portfolio Theory. In addition, this research evaluated the relevance of those performance measures. The sample of 21 funds was extracted from the 126 largest Brazilian stock funds because they were the only ones with quotas over the whole period. The monthly mean rate of return and the following indexes were calculated: total return, mean monthly return, Jensen Index, Treynor Index, Sharpe Index, Sortino Index, Market Timing and the Mean Quadratic Error. The initial analysis showed that the funds in the sample had different objectives and limitations. To make meaningful comparisons, the ANBID (National Association of Investment Banks) categories were used to classify the funds. The measured results were ranked. The positions of the funds in the rankings based on the mean monthly return and on the Jensen, Treynor, Sortino and Sharpe indexes were similar. All ten ACTIVE funds in this research were above the benchmark (IBOVESPA index) on those measures. Based on the CAPM, the managers of these funds achieved superior performance because they may have processed the available information in a superior way. The six funds in the ANBID INDEXED category took the first six positions in the ranking based on the Mean Quadratic Error. At the 5% significance level, none of the funds studied showed market-timing skill, that is, the ability to move the beta of their portfolios in the right direction to benefit from market movements.
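As an illustration of the measures listed in this abstract, here is a minimal Python sketch that computes the Sharpe, Treynor, Jensen and Sortino indexes and a mean quadratic error against the benchmark, assuming hypothetical monthly return series (all figures and variable names are illustrative, not taken from the thesis):

```python
import numpy as np

# Hypothetical monthly returns (decimal form) for one fund, the IBOVESPA
# benchmark and the risk-free rate over the same months.
fund = np.array([0.021, -0.013, 0.034, 0.008, -0.005, 0.017])
bench = np.array([0.018, -0.010, 0.030, 0.006, -0.008, 0.015])
rf = np.full_like(fund, 0.010)

excess_fund = fund - rf
excess_bench = bench - rf

# Beta of the fund relative to the benchmark (from the covariance matrix).
cov = np.cov(excess_fund, excess_bench)
beta = cov[0, 1] / cov[1, 1]

sharpe = excess_fund.mean() / fund.std(ddof=1)             # reward per unit of total risk
treynor = excess_fund.mean() / beta                        # reward per unit of systematic risk
jensen = excess_fund.mean() - beta * excess_bench.mean()   # CAPM alpha
downside = np.minimum(fund - rf, 0.0)
sortino = excess_fund.mean() / np.sqrt(np.mean(downside ** 2))  # penalizes downside deviations only
mqe = np.mean((fund - bench) ** 2)                         # mean quadratic error vs. the benchmark

print(sharpe, treynor, jensen, sortino, mqe)
```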
Resumo:
The objective of this study was to assess the electrical activity of the external oblique, rectus femoris, and the supra-umbilical and infra-umbilical portions of the rectus abdominis muscles during the execution of selected abdominal exercises performed in water. The sample consisted of 20 (twenty) women adapted to the aquatic environment, aged between 21 and 29 years. Electrical activity was recorded with surface electrodes, with the data-acquisition system previously adapted to avoid contact with the water. The trunk flexion exercise up to the seated position, performed on dry land, was used as the reference, and the root mean square (RMS) value of the ascending phase of this exercise was used to normalize the amplitude of the signal collected during the other exercises. Trunk and hip flexions were performed in water, in the horizontal position, with the upper limbs supported on a flotation tube and at a standard rhythm; two of these exercises were also performed at maximal speed. A two-way ANOVA was applied for each muscle, with exercise and phase as factors. To better understand activation in each phase, a one-way ANOVA was applied to the ascending-phase value of each muscle with exercise as the factor, and likewise to the descending-phase values only. Tukey's post hoc test was used to locate the differences. The data were also normalized in time, and graphs of the electromyographic activity of the muscles over the whole cycle of each exercise were presented. When the exercises were analyzed as a whole, the aquatic exercises at a standard rhythm showed lower activity than the reference exercise. When the ascending phase of the aquatic exercises at a standard rhythm was analyzed, the rectus abdominis in all exercises, and the external oblique in the exercises without support, were as effective as in the reference exercise, showing that the instability of the horizontal position and the resistance to movement compensate for the reduction in hydrostatic weight. In the descending phase, besides lower activity, the pattern of muscle activity changes in the aquatic exercises, possibly maintaining a stabilizing activity while another muscle group is responsible for the movement. Performing the exercise at maximal speed produced high electromyographic activity of the abdominal muscles in water and, for the rectus femoris, the range of motion was very important for its activation. Thus, trunk flexion at maximal speed is an exercise with high abdominal activity and low hip-flexor activity. The characteristics of the external forces acting on the body during abdominal exercises performed in water provide a unique situation of reduced hydrostatic weight, resistance to movement, relative support, and a tendency to rotate toward stable equilibrium.
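A minimal sketch of the RMS-based amplitude normalization described above, assuming hypothetical EMG windows (the signals and numbers are made up for illustration):

```python
import numpy as np

def rms(signal):
    """Root mean square amplitude of an EMG window."""
    signal = np.asarray(signal, dtype=float)
    return np.sqrt(np.mean(signal ** 2))

# Hypothetical EMG windows (in microvolts): the ascending phase of the dry-land
# reference trunk flexion and of one aquatic exercise, for the same muscle.
reference_ascending = np.random.default_rng(0).normal(0, 80, 2000)
aquatic_ascending = np.random.default_rng(1).normal(0, 55, 2000)

# Amplitude normalization: express the aquatic exercise as a percentage of the
# RMS obtained in the ascending phase of the reference exercise.
normalized = 100 * rms(aquatic_ascending) / rms(reference_ascending)
print(f"{normalized:.1f}% of the reference RMS")
```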
Resumo:
Convex combinations of long memory estimates using the same data observed at different sampling rates can decrease the standard deviation of the estimates, at the cost of inducing a slight bias. The convex combination of such estimates requires a preliminary correction for the bias observed at lower sampling rates, reported by Souza and Smith (2002). Through Monte Carlo simulations, we investigate the bias and the standard deviation of the combined estimates, as well as the root mean squared error (RMSE), which takes both into account. Comparing the results of the standard methods and their combined versions, the latter achieve lower RMSE for the two semi-parametric estimators under study (by about 30% on average for ARFIMA(0,d,0) series).
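A hedged Monte Carlo sketch of the idea: bias-correct the lower-sampling-rate estimate, form a convex combination, and compare bias, standard deviation and RMSE. The sampling distributions below are hypothetical stand-ins for the semi-parametric estimators studied, not an actual ARFIMA simulation:

```python
import numpy as np

rng = np.random.default_rng(42)
d_true = 0.3          # hypothetical true long-memory parameter
n_rep = 10_000        # Monte Carlo replications

# Hypothetical sampling distributions of an estimator of d: the full-rate
# estimate is roughly unbiased, the lower-rate estimate is noisier and biased.
d_full = d_true + rng.normal(0.00, 0.08, n_rep)
d_low = d_true + rng.normal(-0.04, 0.11, n_rep)

d_low_corrected = d_low + 0.04          # preliminary bias correction
w = 0.7                                  # convex-combination weight, 0 <= w <= 1
d_comb = w * d_full + (1 - w) * d_low_corrected

def summary(est):
    bias = est.mean() - d_true
    sd = est.std(ddof=1)
    rmse = np.sqrt(np.mean((est - d_true) ** 2))
    return bias, sd, rmse

print("full    :", summary(d_full))
print("combined:", summary(d_comb))
```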
Resumo:
Our focus is on information in expectation surveys that can now be built on thousands (or millions) of respondents on an almost continuous-time basis (big data) and in continuous macroeconomic surveys with a limited number of respondents. We show that, under standard microeconomic and econometric techniques, survey forecasts are an affine function of the conditional expectation of the target variable. This is true whether or not the survey respondent knows the data-generating process (DGP) of the target variable, or the econometrician knows the respondent's individual loss function. If the econometrician has a mean-squared-error risk function, we show that asymptotically efficient forecasts of the target variable can be built using Hansen's (Econometrica, 1982) generalized method of moments in a panel-data context, when N and T diverge or when T diverges with N fixed. Sequential asymptotic results are obtained using Phillips and Moon's (Econometrica, 1999) framework. Possible extensions are also discussed.
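As a rough illustration of the affine relationship between survey forecasts and the conditional expectation, the sketch below recovers the affine adjustment by a simple OLS projection of the realized target on the survey forecast (the paper itself works with GMM in a panel of respondents; all data here are simulated and hypothetical):

```python
import numpy as np

rng = np.random.default_rng(7)
T = 500

# Hypothetical setup: the survey forecast is an affine function of the
# conditional expectation of the target, plus idiosyncratic noise.
cond_exp = rng.normal(2.0, 1.0, T)
y_next = cond_exp + rng.normal(0.0, 0.5, T)                # realized target variable
survey = 0.8 * cond_exp + 0.3 + rng.normal(0.0, 0.2, T)    # reported survey forecast

# Under MSE risk, the affine distortion can be undone by projecting the
# realized target on the survey forecast (simple OLS stand-in for GMM).
X = np.column_stack([np.ones(T), survey])
a_hat, b_hat = np.linalg.lstsq(X, y_next, rcond=None)[0]
adjusted_forecast = a_hat + b_hat * survey
print(a_hat, b_hat)
```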
Resumo:
The increase in ultraviolet (UV) radiation at the surface, the high incidence of non-melanoma skin cancer (NMSC) on the coast of the Northeast of Brazil (NEB) and the reduction of total ozone were the motivation for the present study. The overall objective was to identify and understand the variability of UV, or the Ultraviolet Index (UV Index), in the capitals of the east coast of the NEB and to fit stochastic models to the UV Index time series in order to make predictions (interpolations) and forecasts/projections (extrapolations), followed by trend analysis. The methodology consisted of applying multivariate analysis (principal component analysis and cluster analysis), the Predictive Mean Matching method for filling gaps in the data, autoregressive distributed lag (ADL) models, and the Mann-Kendall test. Modeling via ADL consisted of parameter estimation, diagnostics, residual analysis, and evaluation of the quality of the predictions and forecasts via the mean squared error and the Pearson correlation coefficient. The results indicated that the annual variability of UV in the capital of Rio Grande do Norte (Natal) has a feature in September and October consisting of a stabilization/reduction of the UV Index because of the greater annual concentration of total ozone; the increased amount of aerosol during this period contributes to this event with lesser intensity. The application of cluster analysis to the east coast of the NEB showed that this event also occurs in the capitals of Paraíba (João Pessoa) and Pernambuco (Recife). Extreme UV events in the NEB were analyzed for the city of Natal and were associated with the absence of cloud cover and total ozone levels below the annual average; they did not occur over the entire region because of the uneven spatial distribution of these variables. The ADL(4, 1) model, fitted with UV Index and total ozone data for the period 2001-2012, produced a projection/extrapolation for the next 30 years (2013-2043) indicating, at the end of that period, an increase of approximately one unit in the UV Index, if total ozone maintains the downward trend observed in the study period.
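A minimal sketch of fitting an ADL(4, 1) model by least squares and evaluating it through the mean squared error and the Pearson correlation, using synthetic monthly UV Index and total ozone series (purely illustrative, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 144  # e.g., 12 years of monthly observations (hypothetical data)

ozone = 270 + 10 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 3, n)
uv = 12 - 0.02 * (ozone - 270) + rng.normal(0, 0.5, n)

p, q = 4, 1  # ADL(4, 1): 4 lags of the UV Index, 1 lag of total ozone
lagmax = max(p, q)
rows = []
for t in range(lagmax, n):
    rows.append([1.0] + [uv[t - i] for i in range(1, p + 1)]
                + [ozone[t - j] for j in range(1, q + 1)])
X = np.array(rows)
y = uv[lagmax:]

beta = np.linalg.lstsq(X, y, rcond=None)[0]
fitted = X @ beta

mse = np.mean((y - fitted) ** 2)              # mean squared error
pearson = np.corrcoef(y, fitted)[0, 1]        # Pearson correlation coefficient
print(mse, pearson)
```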
Resumo:
Among all the steps involved in leaf analysis, sampling remains the one most prone to errors. The present study aimed to determine the size of leaf samples and the variation of the sampling error for leaf collection in mango orchards. The experiment followed a completely randomized design, with six replicates and four treatments, which consisted of collecting one leaf at each of the four cardinal points of 5, 10, 20 and 40 plants. Based on the nutrient content results, the means, variances, standard errors of the means, the confidence interval for the mean and the percentage error relative to the mean were calculated, the latter through the semi-amplitude of the confidence interval expressed as a percentage of the mean. It was concluded that, for the chemical determination of macronutrients, 10 mango plants would be sufficient, collecting one leaf at each of the four cardinal points of the plant. For micronutrients, at least 20 plants would be necessary and, if Fe is considered, at least 30 plants would have to be sampled.
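A short sketch of the interval estimates described above: mean, standard error, 95% confidence interval and the sampling error as a percentage of the mean (the semi-amplitude of the interval), using hypothetical nutrient readings:

```python
import numpy as np
from scipy import stats

# Hypothetical nutrient readings (e.g., N in g/kg) from leaves pooled over the
# four cardinal points of the sampled mango trees.
readings = np.array([13.9, 14.6, 13.2, 15.1, 14.0, 14.4, 13.7, 14.9, 14.2, 13.8])
n = readings.size

mean = readings.mean()
se = readings.std(ddof=1) / np.sqrt(n)        # standard error of the mean
t_crit = stats.t.ppf(0.975, df=n - 1)         # 95% two-sided critical value
half_width = t_crit * se                      # semi-amplitude of the confidence interval

ci = (mean - half_width, mean + half_width)
percent_error = 100 * half_width / mean       # sampling error as a % of the mean
print(ci, percent_error)
```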
Resumo:
In order to assess the importance of high-resolution (signal-averaged) electrocardiography in the diagnosis of arrhythmogenic right ventricular cardiomyopathy of the Boxer, 20 dogs with no evidence of structural heart disease on Doppler echocardiographic evaluation were grouped according to the frequency of ventricular arrhythmias, assessed by 24-hour ambulatory electrocardiography, and submitted to high-resolution electrocardiographic examination. The variables evaluated were the duration of the filtered QRS complex, the duration of the low-amplitude signals (below 40 µV) in the last 40 milliseconds of the QRS complex, and the root mean square voltage of the last 40 milliseconds of the QRS complex (RMS40). No significant differences were observed between the groups for the variables studied. Therefore, the results of the present study suggest that high-resolution electrocardiography is not a useful tool to aid in the diagnosis of arrhythmogenic right ventricular cardiomyopathy in Boxer dogs that do not show evident myocardial changes or systolic dysfunction.
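A minimal sketch of the RMS40 variable, assuming a hypothetical filtered QRS complex and sampling rate (illustrative only; in practice the variable is computed by the signal-averaged ECG system):

```python
import numpy as np

def rms40(filtered_qrs, fs):
    """Root mean square voltage of the last 40 ms of the filtered QRS complex.

    filtered_qrs : 1-D array of filtered QRS voltages (microvolts)
    fs           : sampling frequency in Hz
    """
    n40 = int(round(0.040 * fs))        # number of samples in 40 ms
    terminal = filtered_qrs[-n40:]
    return np.sqrt(np.mean(terminal ** 2))

# Hypothetical filtered QRS complex sampled at 1000 Hz (80 ms body + 40 ms terminal part).
fs = 1000
qrs = np.concatenate([np.random.default_rng(5).normal(0, 300, 80),
                      np.random.default_rng(6).normal(0, 25, 40)])
print(f"RMS40 = {rms40(qrs, fs):.1f} uV")
```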
Resumo:
The objective of this study was to evaluate the use of probit and logit link functions for the genetic evaluation of early pregnancy using simulated data. The following simulation/analysis structures were constructed: logit/logit, logit/probit, probit/logit, and probit/probit. The percentages of precocious females were 5, 10, 15, 20, 25 and 30% and were adjusted based on a change in the mean of the latent variable. The parametric heritability (h²) was 0.40. Simulation and genetic evaluation were implemented in the R software. Heritability estimates (ĥ²) were compared with h² using the mean squared error. Pearson correlations between predicted and true breeding values, and the percentage of coincidence between the true and predicted rankings considering the 10% of bulls with the highest breeding values (TOP10), were calculated. The mean ĥ² values were under- and overestimated for all percentages of precocious females when the logit/probit and probit/logit models were used. In addition, the mean squared errors of these models were high when compared with those obtained with the probit/probit and logit/logit models. Considering ĥ², probit/probit and logit/logit were also superior to logit/probit and probit/logit, providing values close to the parametric heritability. Logit/probit and probit/logit presented low Pearson correlations, whereas the correlations obtained with probit/probit and logit/logit ranged from moderate to high. With respect to the TOP10 bulls, logit/probit and probit/logit presented much lower percentages than probit/probit and logit/logit. The genetic parameter estimates and predictions of breeding values obtained with the logit/logit and probit/probit models were similar. In contrast, the results obtained with probit/logit and logit/probit were not satisfactory. There is a need to compare the estimation and prediction ability of the logit and probit link functions.
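A small sketch of the comparison criteria used above: mean squared error of the heritability estimates, Pearson correlation between true and predicted breeding values, and the TOP10 coincidence percentage. The analyses in the study were run in R; the Python code and all numbers below are hypothetical stand-ins:

```python
import numpy as np

rng = np.random.default_rng(11)
h2_true = 0.40
n_bulls = 200

# Hypothetical true breeding values, predicted breeding values and replicate
# heritability estimates, standing in for the output of the threshold models.
tbv = rng.normal(0, 1, n_bulls)
ebv = 0.8 * tbv + rng.normal(0, 0.5, n_bulls)
h2_hat = rng.normal(0.38, 0.05, 50)

mse_h2 = np.mean((h2_hat - h2_true) ** 2)        # MSE of heritability estimates
pearson = np.corrcoef(tbv, ebv)[0, 1]            # correlation of true vs. predicted values

top = max(1, n_bulls // 10)                      # TOP10: best 10% of bulls
top_true = set(np.argsort(tbv)[::-1][:top])
top_pred = set(np.argsort(ebv)[::-1][:top])
coincidence = 100 * len(top_true & top_pred) / top
print(mse_h2, pearson, coincidence)
```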
Resumo:
Image compression consists in representing an image with a small amount of data without losing visual quality. Data compression matters when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components: red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reduce transmission, processing and storage time. Many applications depend on image data, which makes compression important: medical images, satellite images, sensors, etc. In this work a new compression method for color images is proposed. The method is based on a measure of the information in each band. The technique is called Self-Adaptive Compression (S.A.C.), and each band of the image is compressed with a different threshold in order to preserve information and obtain better results. SAC applies strong compression to highly redundant bands, that is, those with less information, and mild compression to bands with a larger amount of information. Two image transforms are used in this technique: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step is to convert the data into uncorrelated bands with PCA; the DCT is then applied to each band. The loss of data occurs when a threshold discards coefficients. This threshold is calculated from two elements: the PCA result and a user parameter, which defines the compression ratio. The system produces three different thresholds, one for each band of the image, proportional to its amount of information. For image reconstruction, the inverse DCT and inverse PCA are applied. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, and better results were obtained in terms of mean square error (MSE). Tests showed that SAC has better quality under strong compression, with two advantages: (a) because it is adaptive, it is sensitive to the image type, that is, it presents good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, that is, little human intervention is required.
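A hedged sketch of the general PCA-plus-DCT scheme described above, with one threshold per decorrelated band tied to the band's variance and a single user parameter. The thresholding rule here is a simplified stand-in, not the exact S.A.C. formulation:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
rgb = rng.random((64, 64, 3))                      # hypothetical RGB image in [0, 1]

# 1) PCA across bands: decorrelate the three color bands.
pixels = rgb.reshape(-1, 3)
mean = pixels.mean(axis=0)
cov = np.cov(pixels - mean, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]
pca_bands = ((pixels - mean) @ eigvec).reshape(rgb.shape)

# 2) DCT of each decorrelated band with a band-specific threshold: bands that
#    carry less variance (less information) are thresholded more aggressively.
user_tax = 0.02                                    # single user compression parameter
thresholds = user_tax / (eigval / eigval.sum())    # smaller variance -> larger threshold

compressed = np.empty_like(pca_bands)
for b in range(3):
    coeffs = dctn(pca_bands[:, :, b], norm="ortho")
    coeffs[np.abs(coeffs) < thresholds[b]] = 0.0   # lossy step: discard small coefficients
    compressed[:, :, b] = idctn(coeffs, norm="ortho")

# 3) Inverse PCA and mean square error against the original image.
restored = (compressed.reshape(-1, 3) @ eigvec.T + mean).reshape(rgb.shape)
mse = np.mean((rgb - restored) ** 2)
print(mse)
```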
Resumo:
The aim of this study is to create an artificial neural network (ANN) capable of modeling the transverse elasticity modulus (E2) of unidirectional composites. To that end, we used a dataset divided into two parts, one for training and the other for ANN testing. Three types of network architecture were developed: one with only two inputs, one with three inputs, and a third, mixed architecture combining an ANN with the model developed by Halpin-Tsai. After training the algorithm, the results demonstrate that the use of ANNs is quite promising, given that, when compared with the Halpin-Tsai mathematical model, higher correlation coefficient values and lower root mean square values were observed.
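For reference, a short sketch of the Halpin-Tsai estimate of the transverse modulus E2 against which the ANN is compared (the material properties are hypothetical; xi = 2 is a common choice for E2):

```python
def halpin_tsai_e2(e_fiber, e_matrix, v_fiber, xi=2.0):
    """Halpin-Tsai estimate of the transverse modulus E2 of a unidirectional ply.

    e_fiber, e_matrix : fiber and matrix moduli (same units, e.g. GPa)
    v_fiber           : fiber volume fraction (0-1)
    xi                : reinforcement geometry parameter (xi = 2 is common for E2)
    """
    eta = (e_fiber / e_matrix - 1.0) / (e_fiber / e_matrix + xi)
    return e_matrix * (1.0 + xi * eta * v_fiber) / (1.0 - eta * v_fiber)

# Hypothetical glass/epoxy ply with 60% fiber volume fraction.
print(halpin_tsai_e2(e_fiber=72.0, e_matrix=3.5, v_fiber=0.6))
```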
Resumo:
One of the current major concerns in engineering is the development of aircraft with low fuel consumption and high performance. Hence, airfoils with a high lift coefficient and a low drag coefficient, yielding high-efficiency airfoils, are studied and designed. When the efficiency increases, the aircraft's fuel consumption decreases, improving its performance. Therefore, this work aims to develop a tool for designing airfoils from desired characteristics, such as the lift and drag coefficients and the maximum efficiency, using an algorithm based on an Artificial Neural Network (ANN). For this, a database of aerodynamic characteristics of 300 airfoils was initially collected from the software XFoil. Then, using the software MATLAB, several network architectures, both modular and hierarchical, were trained with the back-propagation algorithm and the momentum rule. For data analysis, the cross-validation technique was used, selecting the network with the lowest Root Mean Square (RMS) value. The best result was obtained for a hierarchical architecture with two modules and one layer of hidden neurons. The airfoils produced by that network, in the regions of lowest RMS, were compared with the same airfoils imported into the software XFoil.
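A minimal sketch of selecting a model by cross-validated root mean square (RMS) error, using a simple linear least-squares model as a stand-in for the MATLAB neural networks (data, functions and dimensions are hypothetical):

```python
import numpy as np

def kfold_rms(X, y, fit, predict, k=5, seed=0):
    """Mean root-mean-square (RMS) error of a model over k cross-validation folds."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    rms_scores = []
    for f in range(k):
        test = folds[f]
        train = np.concatenate([folds[g] for g in range(k) if g != f])
        model = fit(X[train], y[train])
        err = y[test] - predict(model, X[test])
        rms_scores.append(np.sqrt(np.mean(err ** 2)))
    return float(np.mean(rms_scores))

# Stand-in model: linear least squares mapping aerodynamic targets
# (lift, drag, efficiency) to a scalar airfoil descriptor.
fit = lambda X, y: np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]
predict = lambda w, X: np.c_[np.ones(len(X)), X] @ w

rng = np.random.default_rng(1)
X = rng.random((300, 3))                            # 300 airfoils, 3 aerodynamic inputs
y = X @ np.array([0.5, -0.2, 0.8]) + rng.normal(0, 0.05, 300)
print(kfold_rms(X, y, fit, predict))
```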
Resumo:
The aim of the present study was to extract vegetable oil from brown linseed (Linum usitatissimum L.), determine its fatty acid levels and the antioxidant capacity of the extracted oil, and perform a rapid economic assessment of the SFE process for oil manufacture. The experiments were conducted in a bench-scale extractor capable of operating with carbon dioxide and co-solvents, following a 2³ factorial design with a central point in triplicate, with process yield as the response variable and pressure, temperature and percentage of co-solvent as independent variables. The yield (mass of extracted oil/mass of raw material used) ranged from 2.2% to 28.8%, with the best results obtained at 250 bar and 50 ºC, using 5% (v/v) ethanol as co-solvent. The influence of the variables on the extraction kinetics and on the composition of the linseed oil obtained was investigated. The extraction kinetic curves were fitted with different mathematical models available in the literature. The Martínez et al. (2003) model and the Simple Single Plate (SSP) model discussed by Gaspar et al. (2003) represented the experimental data with the lowest mean square errors (MSE). A manufacturing cost of US$ 17.85 per kg of oil was estimated for the production of linseed oil using the TECANALYSIS software and the method of Rosa and Meireles (2005). To establish comparisons with SFE, conventional extraction tests were conducted in a Soxhlet apparatus using petroleum ether; these tests gave mean yields of 35.2% for an extraction time of 5 h. All the oil samples were esterified and characterized in terms of their fatty acid (FA) composition using gas chromatography. The main fatty acids detected were palmitic (C16:0), stearic (C18:0), oleic (C18:1), linoleic (C18:2n-6) and α-linolenic (C18:3n-3). The FA contents obtained with Soxhlet extraction differed from those obtained with SFE, with higher percentages of saturated and monounsaturated FA with the Soxhlet technique using petroleum ether. With respect to the α-linolenic content (the main component of linseed oil), SFE performed better than Soxhlet extraction, obtaining percentages between 51.18% and 52.71%, whereas Soxhlet extraction gave 47.84%. The antioxidant activity of the oil was assessed in the β-carotene/linoleic acid system. The percentage of inhibition of the oxidative process reached 22.11% for the SFE oil, but only 6.09% for commercial (cold-pressed) oil, suggesting that the SFE technique better preserves the phenolic compounds present in the seed, which are likely responsible for the antioxidant nature of the oil. In vitro tests with the sample displaying the best antioxidant response were conducted in rat liver homogenate to investigate the inhibition of spontaneous lipid peroxidation, or autooxidation, of biological tissue. Linseed oil proved to be more efficient than fish oil (used as a standard) in decreasing lipid peroxidation in the liver tissue of Wistar rats, yielding results similar to those obtained with BHT (a synthetic antioxidant). This inhibitory capacity may be explained by the presence of phenolic compounds with antioxidant activity in the linseed oil. The results obtained indicate the need for more detailed studies, given the importance of linseed oil as one of the greatest sources of ω3 among vegetable oils.
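A short sketch of fitting an extraction kinetic curve and computing the mean square error, using a generic saturation model as a stand-in for the Martínez et al. (2003) and SSP models cited above (the data points are hypothetical):

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical SFE kinetic data: cumulative oil yield (%) versus extraction time (min).
t = np.array([0, 15, 30, 45, 60, 90, 120, 180], dtype=float)
yield_obs = np.array([0.0, 9.5, 16.2, 20.8, 23.6, 26.5, 27.8, 28.6])

# Generic saturation model used as a stand-in for the cited kinetic models:
# Y(t) = Y_inf * (1 - exp(-k t)).
def kinetic(t, y_inf, k):
    return y_inf * (1.0 - np.exp(-k * t))

params, _ = curve_fit(kinetic, t, yield_obs, p0=[30.0, 0.02])
mse = np.mean((yield_obs - kinetic(t, *params)) ** 2)   # mean square error of the fit
print(params, mse)
```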