819 results for Classification error rate
Abstract:
I start by presenting an explicit solution to Taylor's (2001) model, in order to illustrate the link between the target interest rate and the overnight interest rate prevailing in the economy. Next, I use Vector Auto Regressions to shed some light on the evolution of key macroeconomic variables after the Central Bank of Brazil increases the target interest rate by 1%. Point estimates show a four-year accumulated output loss ranging from 0.04% (whole sample, 1980:1-2004:2, quarterly data) to 0.25% (Post-Real data only), with a first-year peak output response between 0.04% and 1.0%, respectively. Prices decline between 2% and 4% over a 4-year horizon. The accumulated output response is found to be between 3.5 and 6 times higher after the Real Plan than when the whole sample is considered. The 95% confidence bands obtained using a bias-corrected bootstrap always include the null output response when the whole sample is used, but not when the data is restricted to the Post-Real period. Innovations to interest rates explain between 4.9% (whole sample) and 9.2% (post-Real sample) of the forecast error of GDP.
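The impulse-response exercise described above can be sketched in miniature. The snippet below fits a bivariate VAR(1) by OLS on simulated (not the paper's) data and traces the response of "output" to a unit "interest-rate" shock; all variable names and coefficients are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a bivariate system: variable 0 = interest rate, variable 1 = output
A = np.array([[0.8, 0.0],
              [-0.2, 0.7]])   # true VAR(1) coefficient matrix (illustrative)
T = 500
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(scale=0.1, size=2)

# Fit the VAR(1) by OLS: y_t = B y_{t-1} + e_t
B = np.linalg.lstsq(y[:-1], y[1:], rcond=None)[0].T

# Impulse response of output to a unit interest-rate shock
horizon = 16  # quarters
response = np.array([1.0, 0.0])
output_irf = [response[1]]
for _ in range(horizon):
    response = B @ response
    output_irf.append(response[1])
```

Papers like this one additionally bootstrap the fitted VAR to build confidence bands around `output_irf`; the point estimate above is only the first step.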
Abstract:
This research evaluated the quality of the management of Brazilian stock funds in the period from January 1997 to October 2006. The analysis was based on performance measures from Modern Portfolio Theory. In addition, this research evaluated the relevance of the performance measures. The sample of 21 funds was extracted from the 126 largest Brazilian stock funds because they were the only ones with quotas over the whole period. The monthly mean rate of return and the following indexes were calculated: total return, mean monthly return, Jensen Index, Treynor Index, Sharpe Index, Sortino Index, Market Timing and the Mean Quadratic Error. The initial analysis showed that the funds in the sample had different objectives and limitations. To make meaningful comparisons, the ANBID (National Association of Investment Banks) categories were used to classify the funds. The measured results were ranked. The positions of the funds in the rankings based on the mean monthly return and the indexes of Jensen, Treynor, Sortino and Sharpe were similar. All ten ACTIVE funds in this research were above the benchmark (IBOVESPA index) on the measures above. Based on the CAPM, the managers of these funds achieved superior performance because they might have processed the available information in a superior way. The six funds belonging to the ANBID classification of INDEXED took the first six positions in the ranking based on the Mean Quadratic Error. None of the researched funds showed market timing skill, i.e., the ability to move the beta of their portfolios in the right direction to benefit from market movements, at the 5% significance level.
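Several of the indexes listed above follow directly from CAPM-style formulas. As a minimal sketch (assuming simple monthly returns and a constant risk-free rate, which the study itself may not use), the Sharpe, Sortino, Treynor and Jensen measures can be written as:

```python
from statistics import mean, pstdev

def beta(returns, market):
    """CAPM beta: covariance of fund and market returns over market variance."""
    mr, mm = mean(returns), mean(market)
    cov = sum((r - mr) * (m - mm) for r, m in zip(returns, market))
    var = sum((m - mm) ** 2 for m in market)
    return cov / var

def sharpe(returns, rf):
    """Mean excess return per unit of total volatility."""
    ex = [r - rf for r in returns]
    return mean(ex) / pstdev(ex)

def sortino(returns, rf):
    """Mean excess return per unit of downside deviation only."""
    ex = [r - rf for r in returns]
    downside = [min(e, 0.0) ** 2 for e in ex]
    return mean(ex) / (mean(downside) ** 0.5)

def treynor(returns, market, rf):
    """Mean excess return per unit of systematic (beta) risk."""
    return (mean(returns) - rf) / beta(returns, market)

def jensen_alpha(returns, market, rf):
    """Excess of realized mean return over the CAPM-predicted return."""
    return mean(returns) - (rf + beta(returns, market) * (mean(market) - rf))
```

A fund that simply tracks the market has beta 1 and Jensen alpha 0, which is why the INDEXED funds cluster at the top of the Mean Quadratic Error ranking rather than the alpha-based ones.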
Abstract:
The paper aims to investigate, on empirical and theoretical grounds, Brazilian exchange rate dynamics under floating exchange rates. The empirical analysis examines the short- and long-term behavior of the exchange rate, interest rates (domestic and foreign) and country risk using econometric techniques such as variance decomposition, Granger causality, cointegration tests, error correction models, and a GARCH model to estimate exchange rate volatility. The empirical findings suggest that one can argue in favor of a certain degree of endogeneity of the exchange rate, and that flexible rates have not been able to insulate the Brazilian economy in the manner predicted by the literature, due both to the economy's own specificities (managed floating with the use of international reserves and domestic interest rates set according to an inflation target) and to externally determined variables such as the country risk. Another important outcome is the lack of a closer association between domestic and foreign interest rates since the new exchange regime was adopted. That is, from January 1999 to May 2004, US monetary policy had no significant impact on Brazilian exchange rate dynamics, which have been essentially endogenous, primarily when we consider the fiscal dominance expressed by the probability of default.
Abstract:
The paper assesses the impact of international relative prices and domestic expenditure variables on Brazil's foreign trade performance in the first half of the 1990s. It has been argued that the appreciation of the Real since 1994 has had a detrimental impact on the country's trade balance. However, using temporal precedence analysis, our results do not indicate that the trade balance is strongly affected by international relative prices, such as the exchange rate. Instead, domestic expenditure variables appear to be more powerful determinants of the country's trade performance in recent years. Granger and error correction causality techniques are used to determine temporal precedence between the trade balance and the exchange rate in the period under examination. Our findings shed light on the debate over the sustainability of recent exchange rate-anchored macroeconomic stabilisation programmes, a topic that has encouraged much debate among academics and practitioners.
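The temporal-precedence idea can be illustrated with a textbook Granger-style F-test: regress y on its own lags with and without lags of x, and compare the residual sums of squares. The sketch below runs on simulated data and only illustrates the technique, not the paper's implementation.

```python
import numpy as np

def granger_f_stat(y, x, lags=2):
    """F-statistic for the null 'lags of x do not help predict y':
    compares the SSR of y on its own lags (restricted model) against
    y on its own lags plus lags of x (unrestricted model).
    Illustrative textbook version; no p-value lookup is done here."""
    T = len(y)
    rows = range(lags, T)
    Y = np.array([y[t] for t in rows])
    X_r = np.array([[1.0] + [y[t - k] for k in range(1, lags + 1)]
                    for t in rows])
    X_u = np.array([[1.0] + [y[t - k] for k in range(1, lags + 1)]
                    + [x[t - k] for k in range(1, lags + 1)] for t in rows])

    def ssr(X):
        beta = np.linalg.lstsq(X, Y, rcond=None)[0]
        resid = Y - X @ beta
        return float(resid @ resid)

    ssr_r, ssr_u = ssr(X_r), ssr(X_u)
    n, k_u = len(Y), X_u.shape[1]
    return ((ssr_r - ssr_u) / lags) / (ssr_u / (n - k_u))

# Demo: y is driven by the first lag of x, so x should "Granger-cause" y
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.9 * x[t - 1] + 0.1 * rng.normal()
f_causal = granger_f_stat(y, x, lags=2)
```

A large F relative to the F(lags, n - k_u) critical value leads to rejecting the null, i.e., x temporally precedes y in the Granger sense.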
Abstract:
The real exchange rate is an important macroeconomic price and affects economic activity, interest rates, domestic prices, and trade and investment flows, among other variables. Methodologies have been developed in empirical exchange rate misalignment studies to evaluate whether a real effective exchange rate is overvalued or undervalued. There is a vast body of literature on the determinants of long-term real exchange rates and on empirical strategies to implement the equilibrium norms obtained from theoretical models. This study seeks to contribute to this literature by showing that it is possible to calculate the misalignment from a mixed-frequency cointegrated vector error correction framework. An empirical exercise using United States real exchange rate data is performed. The results suggest that the model with mixed-frequency data is preferred to the models with same-frequency variables.
Abstract:
Among the wastes generated during the exploration and production of oil, the water associated with the oil, called produced water, stands out due to various factors, including the volume generated, the salt content, and the presence of oil and chemicals. The chemical composition of this water is complex and depends strongly on the generating field, because it has been in contact with the geological formation for thousands of years. This work aims to characterize the hydrochemistry of the water produced in different zones of a field located in the Potiguar Basin. We collected 27 samples from six zones (400, 600, 400/600, 400/450/500, 350/400, A) of the producing field, called S, and measured 50 parameters divided among physical and chemical parameters, cations and anions. The hydrochemical characterization used ionic ratio calculations, hydrochemical classification diagrams (the Piper and Stiff diagrams), and statistics, which helped identify signature patterns for each production zone, including the zone that supplies the water injected into this field for secondary oil recovery. The ionic balance error was calculated to assess the quality of the analytical results, which was considered good because 89% of the samples were below 5% error. The hydrochemical diagrams classified the waters as sodium chloride, with the exception of the samples from zone A, from the injection well, which were classified as sodium bicarbonate. Through descriptive analysis and discriminant analysis it was possible to obtain a function that chemically differentiates the production zones; this function had a good classification hit rate of 85%.
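The ionic balance error used to screen the analyses is a standard charge-balance check on the cation and anion sums. A minimal sketch, with a hypothetical sodium-chloride sample (all concentrations in meq/L, values invented for illustration):

```python
def ionic_balance_error(cations_meq, anions_meq):
    """Charge-balance error (%) from the cation and anion sums in meq/L.
    |IBE| below ~5% is commonly taken as an acceptable analysis."""
    total_cat = sum(cations_meq.values())
    total_an = sum(anions_meq.values())
    return 100.0 * (total_cat - total_an) / (total_cat + total_an)

# Hypothetical sodium-chloride water sample (meq/L)
cations = {"Na+": 95.0, "Ca2+": 8.0, "Mg2+": 4.0, "K+": 1.0}
anions = {"Cl-": 100.0, "HCO3-": 5.0, "SO4^2-": 2.0}
ibe = ionic_balance_error(cations, anions)
```

In the study's terms, a sample passes the quality screen when `abs(ibe) < 5`, which held for 89% of the 27 samples.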
Abstract:
Objective: To describe onset features, classification and treatment of juvenile dermatomyositis (JDM) and juvenile polymyositis (JPM) from a multicentre registry. Methods: Inclusion criteria were onset age lower than 18 years and a diagnosis of any idiopathic inflammatory myopathy (IIM) by the attending physician. Bohan & Peter (1975) criteria categorisation was established by a scoring algorithm to define JDM and JPM based on clinical protocol data. Results: Of the 189 cases included, 178 were classified as JDM, 9 as JPM (19.8:1) and 2 did not fit the criteria; 6.9% had features of chronic arthritis and connective tissue disease overlap. Diagnosis classification agreement occurred in 66.1%. Median onset age was 7 years; median follow-up duration was 3.6 years. Malignancy was described in 2 (1.1%) cases. Muscle weakness occurred in 95.8%, heliotrope rash in 83.5% and Gottron plaques in 83.1%; 92% had at least one abnormal muscle enzyme result. Muscle biopsy, performed in 74.6%, was abnormal in 91.5%, and electromyography, performed in 39.2%, was abnormal in 93.2%. Logistic regression analysis was done in 66 cases with all parameters assessed, and only aldolase was significant as an independent variable for definite JDM (OR=5.4, 95%CI 1.2-24.4, p=0.03). Regarding treatment, 97.9% received steroids; 72% received in addition at least one of: methotrexate (75.7%), hydroxychloroquine (64.7%), cyclosporine A (20.6%), IV immunoglobulin (20.6%), azathioprine (10.3%) or cyclophosphamide (9.6%). In this series 24.3% developed calcinosis and the mortality rate was 4.2%. Conclusion: Evaluation of the predefined criteria set for a valid diagnosis indicated aldolase as the most important parameter associated with definite JDM; the steroid and methotrexate combination was the most frequently indicated treatment.
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
An accurate estimate of machining time is very important for predicting delivery time and manufacturing costs, and also for production process planning. Most commercial CAM software systems estimate the machining time in milling operations simply by dividing the entire tool path length by the programmed feed rate. This time estimate differs drastically from the real process time because the feed rate is not always constant, due to machine and computer numerical control (CNC) limitations. This study presents a practical mechanistic method for milling time estimation when machining free-form geometries. The method considers a variable called machine response time (MRT), which characterizes the real CNC machine's capacity to move at high feed rates in free-form geometries. MRT is a global performance feature which can be obtained for any type of CNC machine configuration by carrying out a simple test. To validate the methodology, a workpiece was used to generate NC programs for five different types of CNC machines. A practical industrial case study was also carried out to validate the method. The results indicated that MRT, and consequently the real machining time, depends on the CNC machine's potential; furthermore, the greater the MRT, the larger the difference between predicted and real milling time. The proposed method achieved an error range from 0.3% to 12% of the real machining time, whereas the CAM estimates erred by 211% to 1244%. The MRT-based process is also suggested as an instrument to help in machine tool benchmarking.
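The gap between the CAM-style estimate and an MRT-aware one can be sketched with a deliberately simplified model: assume each NC block takes at least one machine response time, so short free-form segments are bounded by the MRT rather than by the programmed feed. This toy model is not the paper's method, and all numbers are illustrative.

```python
def naive_cam_time(segment_lengths_mm, feed_mm_min):
    """CAM-style estimate: total tool path length / programmed feed rate."""
    return sum(segment_lengths_mm) / feed_mm_min  # minutes

def mrt_adjusted_time(segment_lengths_mm, feed_mm_min, mrt_min):
    """Simplified mechanistic estimate: each NC block takes at least one
    machine response time, so short segments are MRT-bound (toy model)."""
    return sum(max(length / feed_mm_min, mrt_min)
               for length in segment_lengths_mm)

# 2000 short blocks of 0.05 mm (typical of tessellated free-form paths)
segments = [0.05] * 2000
naive = naive_cam_time(segments, 2000.0)            # 100 mm at 2000 mm/min
adjusted = mrt_adjusted_time(segments, 2000.0, 0.0005)  # MRT = 30 ms/block
```

Here the naive estimate is 0.05 min while the MRT-bound one is 1.0 min, a 20x underestimate, of the same flavor as the 211% to 1244% CAM errors reported above.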
Abstract:
The objective of this study was to test a device developed to improve the functionality, accuracy and precision of the original technique for sweating rate measurements proposed by Schleger and Turner [Schleger AV, Turner HG (1965) Aust J Agric Res 16:92-106]. A device was built for this purpose and tested against the original Schleger and Turner technique. Testing was performed by measuring sweating rates in an experiment involving six Mertolenga heifers subjected to four different thermal levels in a climatic chamber. The device exhibited no functional problems and the results obtained with its use were more consistent than with the Schleger and Turner technique. There was no difference in the reproducibility of the two techniques (same accuracy), but measurements performed with the new device had lower repeatability, corresponding to lower variability and, consequently, to higher precision. When utilizing this device, there is no need for physical contact between the operator and the animal to maintain the filter paper discs in position. This has important advantages: the animals stay quieter, and several animals can be evaluated simultaneously. This is a major advantage because it allows more measurements to be taken in a given period of time, increasing the precision of the observations and diminishing the error associated with temporal hiatus (e.g., the solar angle during field studies). The new device has higher functional versatility when taking measurements in large-scale studies (many animals) under field conditions. The results obtained in this study suggest that the technique using the device presented here could represent an advantageous alternative to the original technique described by Schleger and Turner.
Abstract:
The internal genetic structure and outcrossing rate of a population of Araucaria angustifolia (Bert.) O. Kuntze were investigated using 16 allozyme loci. Estimates of the mean number of alleles per locus (1.6), percentage of polymorphic loci (43.8%), and expected genetic diversity (0.170) were similar to those obtained for other gymnosperms. The analysis of spatial autocorrelation demonstrated the presence of internal structure in the first distance classes (up to 70 m), suggesting the presence of family structure. The outcrossing rate was high (0.956), as expected for a dioecious species. However, it was different from unity, indicating outcrossing between related individuals and corroborating the presence of internal genetic structure. The results of this study have implications for the methodologies used in conservation collections and for the use or analysis of this forest species. © The American Genetic Association. 2006. All rights reserved.
Abstract:
Bit performance prediction has been a challenging problem for the petroleum industry. It is essential for reducing the costs associated with well planning and for drilling performance prediction, especially when rig leasing rates tend to follow project demand and barrel-price rises. A methodology to model and predict one of the drilling bit performance evaluators, the Rate of Penetration (ROP), is presented herein. As the parameters affecting the ROP are complex and their relationships not easily modeled, the application of a neural network is suggested. In the present work, a dynamic neural network based on the Auto-Regressive with Extra Input Signals (ARX) model is used to approach the ROP modeling problem. The network was applied to a real offshore oil field data set consisting of information from seven wells drilled with an equal-diameter bit.
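The ARX regressor structure underlying such a network can be illustrated with a linear least-squares stand-in: the same lagged-output and lagged-input regressors, but a linear map instead of the paper's neural one. All data below are simulated and illustrative.

```python
import numpy as np

def fit_arx(y, u, na=1, nb=1):
    """Least-squares fit of a linear ARX model
        y[t] = a1*y[t-1] + ... + a_na*y[t-na] + b1*u[t-1] + ... + b_nb*u[t-nb]
    A linear stand-in for a neural ARX network: the regressor layout is
    the standard ARX one, but the map from regressors to output is linear."""
    start = max(na, nb)
    rows = range(start, len(y))
    Phi = np.array([[y[t - i] for i in range(1, na + 1)]
                    + [u[t - j] for j in range(1, nb + 1)] for t in rows])
    Y = np.array([y[t] for t in rows])
    theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)
    return theta

# Demo: recover the coefficients of a noise-free ARX(1,1) process,
# with u playing the role of a drilling input and y the ROP
rng = np.random.default_rng(0)
u = rng.normal(size=100)
y = np.zeros(100)
for t in range(1, 100):
    y[t] = 0.5 * y[t - 1] + 0.3 * u[t - 1]
theta = fit_arx(y, u, na=1, nb=1)
```

A neural ARX replaces the linear map `Phi @ theta` with a trained nonlinear network over the same regressor vector, which is what lets it capture the complex ROP relationships mentioned above.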
Abstract:
The effect of snoring on the cardiovascular system is not well known. In this study we analyzed the differences in Heart Rate Variability (HRV) between light and heavy snorers. The experiments were done on full-night polysomnography (PSG) recordings with ECG and audio channels from a patient group (heavy snorers) and a control group (light snorers), gender- and age-paired, totalling 30 subjects. A Snoring Density (SND) feature of the audio signal was computed as the classification criterion, along with HRV features. A Mann-Whitney statistical test and Support Vector Machine (SVM) classification were performed to assess the correlation. The results of this study show that snoring has a close relationship with the HRV features, which can provide deeper insight into the physiological understanding of snoring. © 2011 CCAL.
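Typical time-domain HRV features such as SDNN and RMSSD are computed from the RR-interval series extracted from the ECG. The paper does not list its exact feature set, so the sketch below assumes these two standard choices:

```python
from statistics import mean, pstdev

def sdnn(rr_ms):
    """SDNN: standard deviation of the RR (NN) intervals, in ms."""
    return pstdev(rr_ms)

def rmssd(rr_ms):
    """RMSSD: root mean square of successive RR-interval differences, in ms.
    Reflects short-term, vagally mediated variability."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return mean(d * d for d in diffs) ** 0.5

rr = [812, 798, 805, 791, 820, 808]  # hypothetical RR intervals in ms
features = {"sdnn": sdnn(rr), "rmssd": rmssd(rr)}
```

Feature vectors like `features`, computed per subject, are what a Mann-Whitney test compares across the light- and heavy-snorer groups and what an SVM takes as input.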
Abstract:
The objectives of this study were to assess the interrater reproducibility of an instrument to classify pediatric patients with cancer; to verify the adequacy of the patient classification instrument for pediatric patients with cancer; and to propose changes to the instrument, allowing for the adjustments needed for pediatric oncology patients. A total of 34 pediatric inpatients of a Cancer Hospital were evaluated by teams of physicians, nurses and nursing technicians. The Kappa coefficient was used to rate the agreement between the scores, which revealed a moderate to high value for the objective classifications and a low value for the subjective ones. In conclusion, the instrument is reliable and reproducible; however, it is suggested that, to classify pediatric oncology patients, some items be complemented in order to reach an outcome more compatible with the reality of this specific population.
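The Kappa coefficient used here has a simple closed form: observed agreement corrected for the agreement expected by chance. A minimal two-rater sketch (rater labels are invented for illustration):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed proportion of items on which the raters agree
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal category frequencies
    pe = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
             for c in categories)
    return (po - pe) / (1 - pe)
```

Kappa is 1 for perfect agreement and 0 when agreement is no better than chance; the moderate-to-high versus low values reported above map onto the usual interpretation bands for this statistic.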
Abstract:
Semi-supervised learning is applied to classification problems where only a small portion of the data items is labeled. In these cases, the reliability of the labels is a crucial factor, because mislabeled items may propagate wrong labels to a large portion of, or even the entire, data set. This paper aims to address this problem by presenting a graph-based (network-based) semi-supervised learning method specifically designed to handle data sets with mislabeled samples. The method uses teams of walking particles, with competitive and cooperative behavior, for label propagation in the network constructed from the input data set. The proposed model is nature-inspired and incorporates features that make it robust to a considerable amount of mislabeled data items. Computer simulations show the performance of the method in the presence of different percentages of mislabeled data, in networks of different sizes and average node degrees. Importantly, these simulations reveal the existence of critical points of the mislabeled subset size, below which the network is free of wrong-label contamination, but above which the mislabeled samples start to propagate their labels to the rest of the network. Moreover, numerical comparisons have been made between the proposed method and other representative graph-based semi-supervised learning methods using both artificial and real-world data sets. Interestingly, the proposed method performs increasingly better than the others as the percentage of mislabeled samples grows. © 2012 IEEE.
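For contrast with the particle-competition model, the basic idea of spreading labels over a graph can be shown with a far simpler majority-vote propagation. This sketch is illustrative only: it trusts every seed label, so it lacks precisely the robustness to mislabeled seeds that the paper's method targets.

```python
def propagate_labels(adj, seeds, iters=20):
    """Minimal graph label propagation: each unlabeled node takes the
    majority label among its already-labeled neighbors, iterated until
    the label set stops growing (or `iters` passes elapse)."""
    labels = dict(seeds)  # node -> label for the labeled seed nodes
    for _ in range(iters):
        updated = dict(labels)
        for node, nbrs in adj.items():
            if node in labels:
                continue  # already-assigned labels stay fixed
            votes = {}
            for n in nbrs:
                if n in labels:
                    votes[labels[n]] = votes.get(labels[n], 0) + 1
            if votes:
                updated[node] = max(votes, key=votes.get)
        labels = updated
    return labels

# Two chains joined to seeds 0 (class "a") and 5 (class "b")
adj = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3, 5], 5: [4]}
result = propagate_labels(adj, {0: "a", 5: "b"})
```

In this toy version a single wrong seed contaminates its whole neighborhood unopposed; the particle-competition mechanism above exists exactly to let correctly labeled teams push back on such contamination.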