795 results for non-parametric statistics
Abstract:
The purpose of this paper is to analyse the relationship between the corporate governance system and technical efficiency in Italian manufacturing. We use a non-parametric frontier technique (DEA) to derive technical efficiency measures for a sample of Italian firms taken from nine manufacturing industries. These measures are then related to the characteristics of the corporate governance system. Two of these characteristics turn out to have a positive impact on technical efficiency: the percentage of the company shares owned by the largest shareholder and the fact that a firm belongs to a pyramidal group. Interestingly, a trade-off emerges between these influences, in the sense that one is stronger in industries where the other is weaker. Copyright © 2007 John Wiley & Sons, Ltd.
Abstract:
This paper analyses the relationship between industrial total factor productivity and public capital across the 20 Italian administrative regions. It adds to the existing literature in a number of ways: it analyses a longer period (1970-98); it allows for the role of human capital accumulation; it tests for the existence of a long-run relationship between total factor productivity and public capital (through previously suggested panel techniques) and for weak exogeneity of public capital; and it assesses the significance of public capital within a non-parametric set-up based on the Free Disposal Hull. The results confirm that public capital has a significant impact on the evolution of total factor productivity, particularly in the Southern regions. This impact is mainly ascribed to the core infrastructures (roads and airports, harbours, railroads, water and electricity, telecommunications). Core infrastructures are also weakly exogenous. © 2005 Regional Studies Association.
Abstract:
Data envelopment analysis (DEA) is a popular non-parametric technique for determining the efficiency of a homogeneous set of decision-making units (DMUs). In many practical cases, there is some doubt whether all the DMUs form a single group with a common efficiency distribution. The Mann-Whitney rank statistic has been used in DEA both to test whether two groups of DMUs come from a common efficiency distribution and to test whether the two groups have a common frontier, each of which is likely to have important but different policy implications for the management of the groups. In this paper it is demonstrated that where the Mann-Whitney rank statistic is used for the second of these purposes, it is likely to overestimate programmatic inefficiency, particularly for the smaller group. A new non-parametric statistic is proposed for comparing the efficient frontiers of two groups, which overcomes the problems we identify in the use of the Mann-Whitney rank statistic for this purpose. © 2005 Operational Research Society Ltd. All rights reserved.
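The Mann-Whitney rank statistic discussed in this abstract can be computed in a few lines. A minimal pure-Python sketch follows; the function name and the efficiency scores are illustrative, not taken from the paper:

```python
def mann_whitney_u(group_a, group_b):
    """Compute the Mann-Whitney U statistic for two samples.

    Pools the observations, assigns average ranks to ties, and returns
    U for group_a: the number of (a, b) pairs with a > b, counting
    ties as 1/2.
    """
    pooled = sorted(group_a + group_b)
    # Average rank for each distinct value (handles ties).
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # average of ranks i+1 .. j
        i = j
    rank_sum_a = sum(ranks[x] for x in group_a)
    n_a = len(group_a)
    return rank_sum_a - n_a * (n_a + 1) / 2

# DEA efficiency scores for two hypothetical groups of DMUs.
eff_a = [0.91, 0.84, 0.77, 0.95]
eff_b = [0.62, 0.70, 0.88]
u_stat = mann_whitney_u(eff_a, eff_b)  # → 10.0
```

Comparing U against its null distribution then tests whether the two groups share a common efficiency distribution; the paper's point is that this same statistic becomes biased when re-used to compare separately estimated frontiers.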
Abstract:
The aim in this paper is to replicate and extend the analysis of visual technical patterns by Lo et al. (2000) using data on the UK market. A non-parametric smoother is used to model a nonlinear trend in stock price series. Technical patterns, such as the 'head-and-shoulders' pattern, that are characterised by a sequence of turning points are identified in the smoothed data. Statistical tests are used to determine whether returns conditioned on the technical patterns are different from random returns and, in an extension to the analysis of Lo et al. (2000), whether they can outperform a market benchmark return. For the stocks in the FTSE 100 and FTSE 250 indices over the period 1986 to 2001, we find that technical patterns occur with frequencies that differ from one another, and in proportions that differ from the frequencies found in the US market. Our extended statistical testing indicates that UK stock returns are less influenced by technical patterns than was the case for US stock returns.
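The non-parametric smoothing step described above can be sketched with a Nadaraya-Watson kernel smoother, one standard choice for this kind of trend extraction (the function name, bandwidth, and price series below are illustrative assumptions, not the paper's data):

```python
import math

def kernel_smooth(xs, ys, bandwidth):
    """Nadaraya-Watson smoother with a Gaussian kernel.

    Returns the smoothed value at each point of xs: a weighted average
    of ys, with weights decaying with distance in x. Turning points
    (and hence patterns like head-and-shoulders) are then read off the
    smoothed curve rather than the noisy raw series.
    """
    smoothed = []
    for x0 in xs:
        weights = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
        total = sum(weights)
        smoothed.append(sum(w * y for w, y in zip(weights, ys)) / total)
    return smoothed

# A short noisy "price" series; the smoother recovers the underlying trend.
prices = [100, 103, 101, 106, 104, 109, 107]
trend = kernel_smooth(list(range(len(prices))), prices, bandwidth=2.0)
```

The bandwidth controls the trade-off between fidelity and smoothness; too small and noise survives as spurious turning points, too large and genuine patterns are smoothed away.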
Abstract:
Fare, Grosskopf, Norris and Zhang developed a non-parametric productivity index, the Malmquist index, using data envelopment analysis (DEA). The Malmquist index is a measure of productivity progress (or regress) and it can be decomposed into components such as 'efficiency catch-up' and 'technology change'. However, the Malmquist index and its components are based on two periods of time, which can capture only part of the impact of investment in long-lived assets. The effects of lags in the investment process on the capital stock have been ignored in the current model of the Malmquist index. This paper extends the recent dynamic DEA models introduced by Emrouznejad and Thanassoulis and by Emrouznejad to a dynamic Malmquist index. This paper shows that the dynamic productivity results for Organisation for Economic Cooperation and Development countries should reflect reality better than those based on the conventional model.
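For reference, the standard two-period Malmquist index that the paper extends can be written in terms of period-\(t\) distance functions \(D^{t}\) (this is the usual textbook formulation, not the paper's own notation):

```latex
M\left(x^{t+1},y^{t+1},x^{t},y^{t}\right)
  = \left[
      \frac{D^{t}\left(x^{t+1},y^{t+1}\right)}{D^{t}\left(x^{t},y^{t}\right)}
      \cdot
      \frac{D^{t+1}\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\left(x^{t},y^{t}\right)}
    \right]^{1/2}
  = \underbrace{\frac{D^{t+1}\left(x^{t+1},y^{t+1}\right)}
                     {D^{t}\left(x^{t},y^{t}\right)}}_{\text{efficiency catch-up}}
    \cdot
    \underbrace{\left[
      \frac{D^{t}\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\left(x^{t+1},y^{t+1}\right)}
      \cdot
      \frac{D^{t}\left(x^{t},y^{t}\right)}{D^{t+1}\left(x^{t},y^{t}\right)}
    \right]^{1/2}}_{\text{technology change}}
```

Because each distance function compares the observation of only two adjacent periods against the frontier, the index cannot see investment lags; the dynamic extension discussed in the abstract is intended to address exactly this limitation.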
Abstract:
The efficiency literature, using both parametric and non-parametric methods, has focused mainly on cost efficiency analysis rather than on profit efficiency. In for-profit organisations, however, the measurement of profit efficiency and its decomposition into technical and allocative efficiency is particularly relevant. In this paper a newly developed method is used to measure profit efficiency and to identify the sources of any shortfall in profitability (technical and/or allocative inefficiency). The method is applied to a set of Portuguese bank branches, first assuming a long-run and then a short-run profit maximisation objective. In the long run, most of the scope for profit improvement of bank branches lies in becoming more allocatively efficient. In the short run, most of the profit gain can be realised through higher technical efficiency. © 2003 Elsevier B.V. All rights reserved.
Abstract:
This paper develops a productivity index applicable when producers are cost minimisers and input prices are known. The index is inspired by the Malmquist index as extended to productivity measurement. The index developed here is defined in terms of input cost rather than input quantity distance functions. Hence, productivity change is decomposed into overall efficiency and cost technical change. Furthermore, overall efficiency change is decomposed into technical and allocative efficiency change and cost technical change into a part capturing shifts of input quantities and shifts of relative input prices. These decompositions provide a clearer picture of the root sources of productivity change. They are illustrated here in a sample of hospitals; results are computed using non-parametric mathematical programming. © 2003 Elsevier B.V. All rights reserved.
Abstract:
When testing the difference between two groups, if previous data indicate non-normality, then either transform the data (if they comprise percentages, integers or scores) or use a non-parametric test. If there is uncertainty over whether the data are normally distributed, deviations from normality are likely to be small if the data are measurements to three significant figures. Unless there is clear evidence that the distribution is non-normal, it is more efficient to use the conventional t-tests. It is poor statistical practice to carry out both the parametric and non-parametric tests on a set of data and then choose the result that is most convenient to the investigator!
Abstract:
There may be circumstances where it is necessary for microbiologists to compare variances rather than means, e.g., in analysing data from experiments to determine whether a particular treatment alters the degree of variability, or in testing the assumption of homogeneity of variance prior to other statistical tests. All of the tests described in this Statnote have their limitations. Bartlett's test may be too sensitive, but Levene's and the Brown-Forsythe tests also have problems. We would recommend the use of the variance-ratio test to compare two variances and the careful application of Bartlett's test if there are more than two groups. Considering that these tests are not particularly robust, it should be remembered that the homogeneity of variance assumption is usually the least important of those considered when carrying out an ANOVA. If there is concern about this assumption, and especially if the other assumptions of the analysis are also unlikely to be met (e.g., lack of normality or non-additivity of treatment effects), then it may be better either to transform the data or to carry out a non-parametric test on the data.
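The variance-ratio test recommended above for two groups is straightforward to compute. A minimal sketch, with hypothetical colony-count data standing in for real measurements:

```python
def variance_ratio(sample_a, sample_b):
    """Variance-ratio (F) test statistic for two samples.

    Returns F = larger sample variance / smaller sample variance,
    using the unbiased (n - 1 denominator) variance. The statistic is
    then compared against the F distribution with the corresponding
    degrees of freedom to judge homogeneity of variance.
    """
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    va, vb = var(sample_a), var(sample_b)
    return max(va, vb) / min(va, vb)

# Hypothetical colony counts from control and treated plates.
control = [12, 15, 11, 14, 13]
treated = [8, 20, 5, 22, 10]
f_stat = variance_ratio(control, treated)  # → 22.8
```

An F statistic this far above 1 would suggest the treatment changed the variability, subject to the usual caveat in the abstract that such tests are sensitive to non-normality.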
Abstract:
1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant, and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r.
2. The use of r assumes that a bivariate normal distribution is present, and this assumption should be examined prior to the study. If Pearson's r is not appropriate, then a non-parametric correlation coefficient such as Spearman's rs may be used.
3. A significant correlation should not be interpreted as indicating causation, especially in observational studies, in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables.
4. In studies of measurement error, there are problems in using r as a test of reliability, and the 'intra-class correlation coefficient' should be used as an alternative.
A correlation test provides only limited information as to the relationship between two variables. Fitting a regression line to the data using the method known as 'least squares' provides much more information, and the methods of regression and their application in optometry will be discussed in the next article.
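The three quantities discussed above can be sketched together in a few lines of pure Python; the data are illustrative, and the rank routine assumes untied observations for simplicity:

```python
def pearson_r(xs, ys):
    """Pearson's product-moment correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def spearman_rs(xs, ys):
    """Spearman's rank correlation: Pearson's r applied to the ranks."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0.0] * len(vals)
        for rank, i in enumerate(order, start=1):
            r[i] = float(rank)
        return r  # no tie correction: assumes untied data
    return pearson_r(ranks(xs), ranks(ys))

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
r = pearson_r(x, y)     # strength of the linear association
r_squared = r ** 2      # proportion of the variance in y accounted for by x
rs = spearman_rs(x, y)  # non-parametric (monotone) alternative to r
```

Reporting r_squared alongside r, as point 1 recommends, makes clear how much of the variation the linear model actually explains.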
Abstract:
Distributed Brillouin sensing of strain and temperature works by making spatially resolved measurements of the position of the measurand-dependent extremum of the resonance curve associated with the scattering process in the weakly nonlinear regime. Typically, measurements of backscattered Stokes intensity (the dependent variable) are made at a number of predetermined fixed frequencies covering the design measurand range of the apparatus and combined to yield an estimate of the position of the extremum. The measurand can then be found because its relationship to the position of the extremum is assumed known. We present analytical expressions relating the relative error in the extremum position to experimental errors in the dependent variable. This is done for two cases: (i) a simple non-parametric estimate of the mean based on moments and (ii) the case in which a least squares technique is used to fit a Lorentzian to the data. The question of statistical bias in the estimates is discussed and in the second case we go further and present for the first time a general method by which the probability density function (PDF) of errors in the fitted parameters can be obtained in closed form in terms of the PDFs of the errors in the noisy data.
Abstract:
Citation information: Armstrong RA, Davies LN, Dunne MCM & Gilmartin B. Statistical guidelines for clinical studies of human vision. Ophthalmic Physiol Opt 2011, 31, 123-136. doi: 10.1111/j.1475-1313.2010.00815.x ABSTRACT: Statistical analysis of data can be complex, and different statisticians may disagree as to the correct approach, leading to conflict between authors, editors, and reviewers. The objective of this article is to provide some statistical advice for contributors to optometric and ophthalmic journals, to provide advice specifically relevant to clinical studies of human vision, and to recommend statistical analyses that could be used in a variety of circumstances. In submitting an article in which quantitative data are reported, authors should describe clearly the statistical procedures that they have used and justify each stage of the analysis. This is especially important if more complex or 'non-standard' analyses have been carried out. The article begins with some general comments relating to data analysis concerning sample size and 'power', hypothesis testing, parametric and non-parametric variables, 'bootstrap methods', one and two-tail testing, and the Bonferroni correction. More specific advice is then given with reference to particular statistical procedures that can be used on a variety of types of data. Where relevant, examples of correct statistical practice are given with reference to recently published articles in the optometric and ophthalmic literature.
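Of the general topics listed in this abstract, the Bonferroni correction is simple enough to sketch directly; the p-values below are hypothetical:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for multiple comparisons.

    With m tests, each is judged against alpha / m; equivalently, each
    p-value is multiplied by m (capped at 1) and compared with alpha.
    Returns (adjusted p-values, list of rejected-null flags).
    """
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    rejected = [p_adj < alpha for p_adj in adjusted]
    return adjusted, rejected

# Four hypothetical comparisons from a single vision study.
adj, rej = bonferroni([0.004, 0.03, 0.02, 0.6])
# Only the first comparison survives the correction.
```

The correction controls the family-wise error rate at the cost of power, which is why guidelines such as this article's recommend stating explicitly when and how it was applied.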
Abstract:
Common approaches to IP-traffic modelling have featured the use of stochastic models, based on the Markov property, which can be classified into black-box and white-box models according to the approach used for modelling traffic. White-box models are simple to understand and transparent, and have a physical meaning attributed to each of the associated parameters. To exploit this key advantage, this thesis explores the use of simple classic continuous-time Markov models based on a white-box approach to model not only the network traffic statistics but also the source behaviour with respect to the network and application. The thesis is divided into two parts. The first part focuses on the use of simple Markov and semi-Markov traffic models, starting from the simplest two-state model and moving up to n-state models with Poisson and non-Poisson statistics. The thesis then introduces the convenient-to-use, mathematically derived Gaussian Markov models, which are used to model the measured network IP traffic statistics. As one of its most significant contributions, the thesis establishes the significance of second-order density statistics, revealing that, in contrast to first-order density, they carry much more unique information on traffic sources and behaviour. The thesis then exploits the use of Gaussian Markov models to model these unique features and finally shows how the use of simple classic Markov models, coupled with second-order density statistics, provides an excellent tool for capturing maximum traffic detail, which in itself is the essence of good traffic modelling. The second part of the thesis studies the ON-OFF characteristics of VoIP traffic with reference to accurate measurements of the ON and OFF periods, made from a large multilingual database of over 100 hours' worth of VoIP call recordings.
The impact of the language, prosodic structure and speech rate of the speaker on the statistics of the ON-OFF periods is analysed, and relevant conclusions are presented. Finally, an ON-OFF VoIP source model with log-normal transitions is contributed as an ideal candidate for modelling VoIP traffic, and the results of this model are compared with those of previously published work.
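An ON-OFF source with log-normally distributed holding times, of the kind contributed by the thesis, can be simulated in a few lines. A minimal sketch; the function name and parameter values are illustrative assumptions, not the thesis's fitted parameters:

```python
import random

def simulate_on_off(n_cycles, on_mu, on_sigma, off_mu, off_sigma, seed=0):
    """Simulate an ON-OFF VoIP-style source with log-normal holding times.

    Alternates between talkspurt (ON) and silence (OFF) periods whose
    durations are drawn from log-normal distributions, matching the
    log-normal-transition model described above.
    Returns a list of (state, duration) pairs; durations are in seconds.
    """
    rng = random.Random(seed)  # seeded for reproducible traces
    periods = []
    for _ in range(n_cycles):
        periods.append(("ON", rng.lognormvariate(on_mu, on_sigma)))
        periods.append(("OFF", rng.lognormvariate(off_mu, off_sigma)))
    return periods

# One hundred talkspurt/silence cycles with illustrative parameters.
trace = simulate_on_off(100, on_mu=0.0, on_sigma=0.6, off_mu=-0.5, off_sigma=0.8)
```

Fitting the four log-normal parameters per language or speech-rate class would let such a generator reproduce the measured ON-OFF statistics the thesis analyses.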
Abstract:
The standard reference clinical score quantifying average Parkinson's disease (PD) symptom severity is the Unified Parkinson's Disease Rating Scale (UPDRS). At present, UPDRS is determined by the subjective clinical evaluation of the patient's ability to adequately cope with a range of tasks. In this study, we extend recent findings that UPDRS can be objectively assessed to clinically useful accuracy using simple, self-administered speech tests, without requiring the patient's physical presence in the clinic. We apply a wide range of known speech signal processing algorithms to a large database (approx. 6000 recordings from 42 PD patients, recruited to a six-month, multi-centre trial) and propose a number of novel, nonlinear signal processing algorithms which reveal pathological characteristics in PD more accurately than existing approaches. Robust feature selection algorithms select the optimal subset of these algorithms, which is fed into non-parametric regression and classification algorithms, mapping the signal processing algorithm outputs to UPDRS. We demonstrate rapid, accurate replication of the UPDRS assessment with clinically useful accuracy (about 2 UPDRS points difference from the clinicians' estimates, p < 0.001). This study supports the viability of frequent, remote, cost-effective, objective, accurate UPDRS telemonitoring based on self-administered speech tests. This technology could facilitate large-scale clinical trials into novel PD treatments.
Abstract:
This paper investigates the role of absorptive capacity in the diffusion of global technology with sector and firm heterogeneity. We construct the FDI-intensity weighted global R&D stock for each industry and link it to Chinese firm-level panel data relating to 53,981 firms over the period 2001-2005. Non-parametric frontier analysis is employed to explore how absorptive capacity affects technical change and catch-up in the presence of global knowledge spillovers. We find that R&D activities and training at individual firms serve as an effective source of absorptive capability. The contribution of absorptive capacity varies according to the type of FDI and the extent of openness.