938 results for Non-parametric regression methods
Abstract:
OBJECTIVES To evaluate the diagnostic performance of seven non-invasive tests (NITs) of liver fibrosis and to assess fibrosis progression over time in HIV/HCV co-infected patients. METHODS Transient elastography (TE) and six blood tests were compared to histopathological fibrosis stage (METAVIR). Participants were followed over three years with NITs at yearly intervals. RESULTS The area under the receiver operating characteristic curve (AUROC) for significant fibrosis (≥ F2) in 105 participants was highest for TE (0.85), followed by FIB-4 (0.77), ELF-Test (0.77), APRI (0.76), Fibrotest (0.75), hyaluronic acid (0.70), and Hepascore (0.68). The AUROC for cirrhosis (F4) was 0.97 for TE, followed by FIB-4 (0.91), APRI (0.89), Fibrotest (0.84), Hepascore (0.82), ELF-Test (0.82), and hyaluronic acid (0.79). A three-year follow-up was completed by 87 participants, all on antiretroviral therapy; 20 patients completed HCV treatment (9 with sustained virologic response). TE, APRI and Fibrotest did not change significantly during follow-up. There was weak evidence for an increase of FIB-4 (mean increase: 0.22, p = 0.07). Forty-two participants had a second liver biopsy: among the 38 participants with F0-F3 at baseline, 10 were progressors (1-stage increase in fibrosis, 8 participants; 2-stage, 1; 3-stage, 1). Among progressors, the mean increase in TE was 3.35 kPa, in APRI 0.36, and in FIB-4 0.75. Fibrotest results did not change over 3 years. CONCLUSION TE was the best NIT for liver fibrosis staging in HIV/HCV co-infected patients. The APRI score, FIB-4 index, Fibrotest, and ELF-Test were less reliable. The routinely available APRI and FIB-4 performed as well as the more expensive tests. NITs did not change significantly during the three-year follow-up, suggesting slow liver disease progression in the majority of HIV/HCV co-infected persons on antiretroviral therapy.
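The two routinely available scores the abstract highlights, APRI and FIB-4, are simple arithmetic combinations of standard laboratory values. A minimal sketch using their published formulas; the input values below are illustrative, not patient data:

```python
import math

def apri(ast, ast_uln, platelets):
    """AST-to-Platelet Ratio Index:
    (AST / upper limit of normal for AST) / platelets (10^9/L) x 100."""
    return (ast / ast_uln) / platelets * 100.0

def fib4(age, ast, alt, platelets):
    """FIB-4 index:
    (age [years] x AST [U/L]) / (platelets [10^9/L] x sqrt(ALT [U/L]))."""
    return (age * ast) / (platelets * math.sqrt(alt))

# Illustrative values only
print(round(apri(80.0, 40.0, 150.0), 2))        # 1.33
print(round(fib4(50.0, 80.0, 60.0, 150.0), 2))  # 3.44
```

Both scores rise with transaminase levels and fall with platelet count, which is why they track fibrosis cheaply without imaging.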
Abstract:
This paper examines the mean-reverting property of real exchange rates. Earlier studies have generally not been able to reject the null hypothesis of a unit root in real exchange rates, especially for the post-Bretton Woods floating period. The results imply that long-run purchasing power parity does not hold. More recent studies, especially those using panel unit-root tests, have found more favorable results. However, Karlsson and Löthgren (2000) and others have recently pointed out several potential pitfalls of panel unit-root tests, so the panel unit-root test results are suggestive but far from conclusive. Moreover, consistent individual-country time series evidence that supports long-run purchasing power parity continues to be scarce. In this paper, we test for long memory using Lo's (1991) modified rescaled range test and the rescaled variance test of Giraitis, Kokoszka, Leipus, and Teyssière (2003). Our testing procedure provides a non-parametric alternative to the parametric tests commonly used in this literature. Our data set consists of monthly observations from April 1973 to April 2001 for the G-7 countries in the OECD. Our two tests find conflicting results when we use U.S. dollar real exchange rates. However, when non-U.S. dollar real exchange rates are used, we find only two cases out of fifteen where the null hypothesis of a unit root with short-term dependence can be rejected in favor of the alternative hypothesis of long-term dependence using the modified rescaled range test, and only one case when using the rescaled variance test. Our results therefore provide a contrast to the recent favorable panel unit-root test results.
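The classic rescaled range statistic underlying Lo's (1991) test can be sketched as follows. This is the original Hurst/Mandelbrot version; Lo's modification replaces the denominator with an autocovariance-adjusted (heteroskedasticity- and autocorrelation-consistent) standard deviation, which the sketch omits:

```python
import numpy as np

def rescaled_range(x):
    """Classic R/S statistic: the range of cumulative deviations from
    the mean, scaled by the sample standard deviation. Long memory
    inflates R/S relative to the ~sqrt(n) growth expected under
    short-term dependence."""
    x = np.asarray(x, dtype=float)
    dev = np.cumsum(x - x.mean())
    r = dev.max() - dev.min()
    return r / x.std()

# A perfectly alternating series has no long memory: R/S stays at 1
print(rescaled_range([1.0, -1.0] * 50))  # 1.0
```

The test statistic is this quantity normalized by sqrt(n) and compared against the distribution derived under the short-memory null.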
Abstract:
Here, a novel and efficient moving object detection strategy based on non-parametric modeling is presented. Whereas the foreground is modeled by combining color and spatial information, the background model is constructed exclusively with color information, resulting in a great reduction of the computational and memory requirements. The estimation of the background and foreground covariance matrices allows us to obtain compact moving regions while reducing the number of false detections. Additionally, the application of a tracking strategy provides a priori knowledge about the spatial position of the moving objects, which improves the performance of the Bayesian classifier.
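A color-only non-parametric background model of the kind described here is typically a kernel density estimate over each pixel's recent color samples. A minimal sketch with a Gaussian kernel; the function name and fixed bandwidth are illustrative choices, not the paper's:

```python
import numpy as np

def background_likelihood(pixel_rgb, samples_rgb, bandwidth=15.0):
    """Kernel density estimate of how well a pixel's current color is
    explained by its N most recent background samples (color only,
    no spatial term, as in the abstract's background model)."""
    diff = samples_rgb - pixel_rgb          # (N, 3) color differences
    sq = (diff ** 2).sum(axis=1)            # squared color distances
    k = np.exp(-0.5 * sq / bandwidth ** 2)  # unnormalized Gaussian kernel
    return k.mean()

samples = np.full((10, 3), 100.0)  # a stable gray background history
print(background_likelihood(np.array([100.0, 100.0, 100.0]), samples))  # 1.0
print(background_likelihood(np.array([255.0, 0.0, 0.0]), samples))      # ~0: foreground
```

Keeping the background model color-only means each pixel stores just a short sample history, which is the source of the memory savings the abstract claims.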
Abstract:
The use of seismic hysteretic dampers for passive control has been increasing rapidly in recent years for both new and existing buildings. In order to utilize hysteretic dampers within a structural system, it is of paramount importance to have simplified design procedures based upon knowledge gained from theoretical studies and validated with experimental results. Non-linear Static Procedures (NSPs) are presented as an alternative to the force-based methods more common nowadays. The application of NSPs to conventional structures is well established, yet there is a lack of experimental information on how NSPs apply to systems with hysteretic dampers. In this research, several shaking table tests were conducted on two single-bay, single-story 1:2 scale structures with and without hysteretic dampers. The maximum response of the structure with dampers in terms of lateral displacement and base shear obtained from the tests was compared with the predictions provided by three well-known NSPs: (1) the improved version of the Capacity Spectrum Method (CSM) from FEMA 440; (2) the improved version of the Displacement Coefficient Method (DCM) from FEMA 440; and (3) the N2 Method implemented in Eurocode 8. In general, the improved versions of the DCM and N2 methods are found to provide acceptable accuracy in prediction, but the CSM tends to underestimate the response.
Abstract:
In recent years, several moving object detection strategies based on non-parametric background-foreground modeling have been proposed. To combine both models and obtain the probability that a pixel belongs to the foreground, these strategies use Bayesian classifiers. However, these classifiers do not allow additional prior information to be exploited at different pixels. We therefore propose a novel and efficient alternative Bayesian classifier that is suitable for this kind of strategy and allows the use of any prior information. Additionally, we present an effective method to dynamically estimate the prior probability from the result of a particle filter-based tracking strategy.
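The per-pixel Bayes rule at the heart of such classifiers is short enough to state directly. A minimal sketch (function name ours) showing how a spatially varying prior, e.g. supplied by a tracker's predicted object position, shifts the posterior even when the likelihoods are unchanged:

```python
def posterior_foreground(lik_fg, lik_bg, prior_fg):
    """Bayes rule for one pixel:
    P(fg|x) = p(x|fg)P(fg) / (p(x|fg)P(fg) + p(x|bg)(1 - P(fg))).
    prior_fg may differ per pixel, e.g. higher inside a tracked region."""
    num = lik_fg * prior_fg
    den = num + lik_bg * (1.0 - prior_fg)
    return num / den

# Same likelihood ratio, different spatial priors
print(posterior_foreground(0.6, 0.4, 0.5))  # 0.6  (uninformative prior)
print(posterior_foreground(0.6, 0.4, 0.9))  # ~0.93 (tracker predicts object here)
```

With a flat prior of 0.5 this reduces to the classifier used by the earlier strategies; the contribution described in the abstract is precisely in letting prior_fg vary per pixel.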
Abstract:
Are the learning procedures of genetic algorithms (GAs) able to generate optimal architectures for artificial neural networks (ANNs) in high frequency data? In this experimental study, GAs are used to identify the best architecture for ANNs. Additional learning is undertaken by the ANNs to forecast daily excess stock returns. No ANN architectures were able to outperform a random walk, despite the finding of non-linearity in the excess returns. This failure is attributed to the absence of suitable ANN structures and further implies that researchers need to be cautious when making inferences from ANN results that use high frequency data.
Abstract:
To carry out an analysis of variance, several assumptions are made about the nature of the experimental data which have to be at least approximately true for the tests to be valid. One of the most important of these assumptions is that a measured quantity must be a parametric variable, i.e., a member of a normally distributed population. If the data are not normally distributed, then one method of approach is to transform the data to a different scale so that the new variable is more likely to be normally distributed. An alternative method, however, is to use a non-parametric analysis of variance. There are a limited number of such tests available but two useful tests are described in this Statnote, viz., the Kruskal-Wallis test and Friedman's analysis of variance.
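Both tests named above are available in SciPy; a minimal sketch on made-up data (the three groups are illustrative measurements, not from the Statnote):

```python
from scipy import stats

a = [2.9, 3.0, 2.5, 2.6, 3.2]
b = [3.8, 2.7, 4.0, 2.4, 3.1]
c = [2.8, 3.4, 3.7, 2.2, 2.0]

# Kruskal-Wallis: non-parametric one-way ANOVA for independent groups
h, p_kw = stats.kruskal(a, b, c)

# Friedman: non-parametric two-way ANOVA for related/repeated measures
# (here the i-th elements of a, b, c are treated as one block)
chi2, p_fr = stats.friedmanchisquare(a, b, c)

print(round(p_kw, 3), round(p_fr, 3))
```

Both work on ranks rather than raw values, which is what frees them from the normality assumption the passage discusses.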
Abstract:
Different types of numerical data can be collected in a scientific investigation and the choice of statistical analysis will often depend on the distribution of the data. A basic distinction between variables is whether they are ‘parametric’ or ‘non-parametric’. When a variable is parametric, the data come from a symmetrically shaped distribution known as the ‘Gaussian’ or ‘normal distribution’ whereas non-parametric variables may have a distribution which deviates markedly in shape from normal. This article describes several aspects of the problem of non-normality including: (1) how to test for two common types of deviation from a normal distribution, viz., ‘skew’ and ‘kurtosis’, (2) how to fit the normal distribution to a sample of data, (3) the transformation of non-normally distributed data and scores, and (4) commonly used ‘non-parametric’ statistics which can be used in a variety of circumstances.
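Point (1) above, testing a sample for skew and kurtosis, can be done directly with SciPy; the lognormal sample below is an illustrative stand-in for markedly non-normal data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(size=200)  # strongly right-skewed sample

print(stats.skewtest(x).pvalue)      # tiny p-value: significant skew
print(stats.kurtosistest(x).pvalue)  # tests excess kurtosis
print(stats.shapiro(x).pvalue)       # omnibus test of normality
```

A small p-value on any of these argues against analysing the raw data with parametric methods; the transformation and non-parametric options the article lists are then the usual remedies.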
Abstract:
If in a correlation test, one or both variables are small whole numbers, scores based on a limited scale, or percentages, a non-parametric correlation coefficient should be considered as an alternative to Pearson's 'r'. Kendall's τ and Spearman's r_s are similar tests, but the former should be considered if the analysis is to be extended to include partial correlations. If the data contain many tied values, then gamma should be considered as a suitable test.
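Both coefficients are in SciPy; a minimal sketch on the kind of data the passage describes (small whole-number scores with ties, values invented for illustration):

```python
from scipy import stats

scores_a = [1, 2, 2, 3, 3, 3, 4, 5]  # limited-scale scores with tied values
scores_b = [1, 1, 2, 2, 3, 4, 4, 5]

tau, p_tau = stats.kendalltau(scores_a, scores_b)  # Kendall's tau
rho, p_rho = stats.spearmanr(scores_a, scores_b)   # Spearman's r_s

print(round(tau, 2), round(rho, 2))
```

Both rank-based coefficients handle the ties here, though with this many ties the passage's suggestion of the gamma statistic would also apply.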
Abstract:
In some circumstances, there may be no scientific model of the relationship between X and Y that can be specified in advance and indeed the objective of the investigation may be to provide a 'curve of best fit' for predictive purposes. In such a case, the fitting of successive polynomials may be the best approach. There are various strategies to decide on the polynomial of best fit depending on the objectives of the investigation.
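One such strategy, fitting successive polynomials and penalizing each added degree, can be sketched as follows; the use of AIC as the selection criterion is one common choice among the strategies the passage alludes to, and the data are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 4.0, 40)
y = 1.0 + 0.5 * x - 0.3 * x**2 + rng.normal(scale=0.2, size=x.size)

# Fit polynomials of increasing degree; keep the degree whose extra
# parameters are justified by the reduction in residual variance (AIC).
best_deg, best_aic = None, np.inf
for deg in range(1, 6):
    coef = np.polyfit(x, y, deg)
    rss = np.sum((np.polyval(coef, x) - y) ** 2)
    aic = x.size * np.log(rss / x.size) + 2 * (deg + 1)
    if aic < best_aic:
        best_deg, best_aic = deg, aic

print(best_deg)  # degree selected by AIC
```

Without such a penalty, the residual sum of squares alone always favors the highest degree tried, which is exactly the overfitting a 'curve of best fit' for prediction must avoid.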
Abstract:
Practitioners assess the performance of entities in increasingly large and complicated datasets. Even if non-parametric models such as Data Envelopment Analysis were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the 'COOPER-framework', a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each phase describes the necessary steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis. The more experienced analyst benefits from a checklist that ensures important issues are not forgotten. In addition, the use of a standardized framework makes non-parametric assessments more reliable, more repeatable, more manageable, faster and less costly. © 2010 Elsevier B.V. All rights reserved.
Abstract:
The increasing intensity of global competition has led organizations to utilize various types of performance measurement tools for improving the quality of their products and services. Data envelopment analysis (DEA) is a methodology for evaluating and measuring the relative efficiencies of a set of decision making units (DMUs) that use multiple inputs to produce multiple outputs. All data in conventional DEA with input and/or output ratios assume the form of crisp numbers. However, the observed values in real-world problems are sometimes expressed as interval ratios. In this paper, we propose two new models: general and multiplicative non-parametric ratio models for DEA problems with interval data. The contributions of this paper are fourfold: (1) we consider input and output data expressed as interval ratios in DEA; (2) we address the gap in the DEA literature for problems not suitable or difficult to model with crisp values; (3) we propose two new DEA models for evaluating the relative efficiencies of DMUs with interval ratios; and (4) we present a case study involving 20 banks with three interval ratios, where the traditional indicators are mostly financial ratios, to demonstrate the applicability and efficacy of the proposed models. © 2011 Elsevier Inc.
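The crisp base case that the paper's interval-ratio models generalize is the standard CCR model. A minimal sketch of its input-oriented multiplier form as a linear program (function name and the two-DMU toy data are ours, not the paper's case study):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, j0):
    """Input-oriented CCR efficiency of DMU j0 (multiplier form):
    maximize u.y0  subject to  v.x0 = 1,  u.y_j - v.x_j <= 0 for all j,
    u, v >= 0.  X: (n_dmu, n_in) inputs, Y: (n_dmu, n_out) outputs."""
    n, m_in = X.shape
    m_out = Y.shape[1]
    # decision variables: [u (output weights), v (input weights)]
    c = np.concatenate([-Y[j0], np.zeros(m_in)])            # maximize u.y0
    A_eq = np.concatenate([np.zeros(m_out), X[j0]])[None, :]  # v.x0 = 1
    A_ub = np.hstack([Y, -X])                               # u.y_j - v.x_j <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m_out + m_in))
    return -res.fun

X = np.array([[2.0], [4.0]])  # one input: DMU 0 uses half as much
Y = np.array([[2.0], [2.0]])  # one output: both produce the same
print(ccr_efficiency(X, Y, 0))  # 1.0 (efficient)
print(ccr_efficiency(X, Y, 1))  # 0.5
```

The paper's contribution is to replace the crisp entries of X and Y with interval ratios; the underlying efficiency LP above is the common starting point.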