57 results for non-parametric estimation


Relevance: 80.00%

Abstract:

Färe, Grosskopf, Norris and Zhang developed a non-parametric productivity index, the Malmquist index, using data envelopment analysis (DEA). The Malmquist index measures productivity progress (or regress) and can be decomposed into components such as 'efficiency catch-up' and 'technology change'. However, the Malmquist index and its components are based on only two periods of time, so they can capture only part of the impact of investment in long-lived assets; the effects of lags in the investment process on the capital stock are ignored in the current formulation of the index. This paper extends the recent dynamic DEA models introduced by Emrouznejad and Thanassoulis and by Emrouznejad to a dynamic Malmquist index. The paper shows that the resulting dynamic productivity results for Organisation for Economic Co-operation and Development (OECD) countries should reflect reality better than those based on the conventional model.
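For orientation, the standard two-period output-oriented Malmquist index that the paper builds on, with its classical decomposition into efficiency catch-up and technology change (Färe et al.), can be written as follows; the distance-function notation D_o^s is assumed here for illustration, and values of M_o above one indicate productivity progress:

```latex
M_o \;=\; \left[
  \frac{D_o^{t}(x^{t+1},y^{t+1})}{D_o^{t}(x^{t},y^{t})}\,
  \frac{D_o^{t+1}(x^{t+1},y^{t+1})}{D_o^{t+1}(x^{t},y^{t})}
\right]^{1/2}
\;=\;
\underbrace{\frac{D_o^{t+1}(x^{t+1},y^{t+1})}{D_o^{t}(x^{t},y^{t})}}_{\text{efficiency catch-up}}
\;\times\;
\underbrace{\left[
  \frac{D_o^{t}(x^{t+1},y^{t+1})}{D_o^{t+1}(x^{t+1},y^{t+1})}\,
  \frac{D_o^{t}(x^{t},y^{t})}{D_o^{t+1}(x^{t},y^{t})}
\right]^{1/2}}_{\text{technology change}}
```

The dynamic extension discussed in the paper replaces this two-period comparison with one that carries capital formed by lagged investment; its exact form is not given in the abstract.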

Relevance: 80.00%

Abstract:

The efficiency literature, using both parametric and non-parametric methods, has focused mainly on cost efficiency analysis rather than on profit efficiency. In for-profit organisations, however, the measurement of profit efficiency and its decomposition into technical and allocative efficiency is particularly relevant. In this paper a newly developed method is used to measure profit efficiency and to identify the sources of any shortfall in profitability (technical and/or allocative inefficiency). The method is applied to a set of Portuguese bank branches, first assuming a long-run and then a short-run profit maximisation objective. In the long run, most of the scope for profit improvement of bank branches lies in becoming more allocatively efficient; in the short run, most of the profit gain can be realised through higher technical efficiency. © 2003 Elsevier B.V. All rights reserved.
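The paper's profit decomposition itself is not reproduced in the abstract. As a reference point, the classical Farrell-style decomposition of overall cost efficiency into technical and allocative parts, which profit efficiency analysis generalises, reads as follows (notation assumed: C(y,w) is minimum cost, w'x observed cost, D_i the input distance function):

```latex
\underbrace{\frac{C(y,w)}{w^{\top}x}}_{\text{overall (cost) efficiency}}
\;=\;
\underbrace{\frac{1}{D_i(x,y)}}_{\text{technical efficiency}}
\;\times\;
\underbrace{\frac{C(y,w)\,D_i(x,y)}{w^{\top}x}}_{\text{allocative efficiency}}
```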

Relevance: 80.00%

Abstract:

Traditionally, geostatistical algorithms are contained within specialist GIS and spatial statistics software. Such packages are often expensive, with relatively complex user interfaces and steep learning curves, and cannot be easily integrated into more complex process chains. In contrast, Service Oriented Architectures (SOAs) promote interoperability and loose coupling within distributed systems, typically using XML (eXtensible Markup Language) and Web services. Web services provide a mechanism for a user to discover and consume a particular process, often as part of a larger process chain, with minimal knowledge of how it works. Wrapping current geostatistical algorithms with a Web service layer would thus increase their accessibility, but raises several complex issues. This paper discusses a solution to providing interoperable, automatic geostatistical processing through the use of Web services, developed in the INTAMAP project (INTeroperability and Automated MAPping). The project builds upon Open Geospatial Consortium standards for describing observations, typically used within sensor webs, and employs Geography Markup Language (GML) to describe the spatial aspect of the problem domain. The interpolation service is thus extremely flexible, being able to support a range of observation types, and can cope with issues such as change of support and differing error characteristics of sensors (by utilising descriptions of the observation process provided by SensorML). XML is accepted as the de facto standard for describing Web services, due to its expressive capabilities, which allow automatic discovery and consumption by 'naive' users. Any XML schema employed must therefore be capable of describing every aspect of a service and its processes. However, no schema currently exists that can define the complex uncertainties and modelling choices that are often present within geostatistical analysis. We present a solution to this problem, developing a family of XML schemata to enable the description of a full range of uncertainty types, from simple statistics, such as the kriging mean and variance, through to probability distributions and non-parametric models, such as realisations from a conditional simulation. By employing these schemata within a Web Processing Service (WPS) we demonstrate a prototype moving towards a truly interoperable geostatistical software architecture.
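The simplest outputs the uncertainty schemata must describe are the kriging mean and variance. Purely as an illustration of where those two numbers come from, and not as INTAMAP code, here is a minimal ordinary-kriging sketch in Python (numpy); the exponential covariance model and its parameters are assumptions made for the example.

```python
import numpy as np

def exp_cov(h, sill=1.0, rng=10.0):
    """Exponential covariance model (illustrative parameter choices)."""
    return sill * np.exp(-h / rng)

def ordinary_kriging(xy, z, xy0, sill=1.0, rng=10.0):
    """Ordinary-kriging mean and variance at a single target point.

    xy  : (n, 2) observation locations
    z   : (n,)   observed values
    xy0 : (2,)   target location
    """
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)  # pairwise distances
    K = np.empty((n + 1, n + 1))
    K[:n, :n] = exp_cov(d, sill, rng)
    K[n, :] = 1.0                 # Lagrange row/column enforcing the
    K[:, n] = 1.0                 # unbiasedness constraint (weights sum to 1)
    K[n, n] = 0.0
    k = np.append(exp_cov(np.linalg.norm(xy - xy0, axis=1), sill, rng), 1.0)
    w = np.linalg.solve(K, k)     # kriging weights plus Lagrange multiplier
    return w[:n] @ z, sill - w @ k   # (kriging mean, kriging variance)

xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
z = np.array([1.0, 2.0, 3.0])
mean, var = ordinary_kriging(xy, z, np.array([0.5, 0.5]))
```

The pair (mean, var) is exactly the kind of 'simple statistic' output the schemata encode; conditional-simulation realisations would instead be encoded as one of the non-parametric uncertainty types.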

Relevance: 80.00%

Abstract:

This paper develops a productivity index applicable when producers are cost minimisers and input prices are known. The index is inspired by the Malmquist index as extended to productivity measurement, but is defined in terms of input cost rather than input quantity distance functions. Hence, productivity change is decomposed into overall efficiency change and cost technical change. Furthermore, overall efficiency change is decomposed into technical and allocative efficiency change, and cost technical change into a part capturing shifts of input quantities and a part capturing shifts of relative input prices. These decompositions provide a clearer picture of the root sources of productivity change. They are illustrated here on a sample of hospitals; results are computed using non-parametric mathematical programming. © 2003 Elsevier B.V. All rights reserved.
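The algebraic form of the cost-based index is not given in the abstract. Writing E^s(y,x,w) = C^s(y,w)/w'x for cost efficiency measured against the period-s technology, a sketch of the top-level decomposition, by direct analogy with the Malmquist template above and with all notation assumed, is:

```latex
CM \;=\;
\underbrace{\frac{E^{t+1}(y^{t+1},x^{t+1},w^{t+1})}{E^{t}(y^{t},x^{t},w^{t})}}_{\text{overall efficiency change}}
\;\times\;
\underbrace{\left[
  \frac{E^{t}(y^{t},x^{t},w^{t})}{E^{t+1}(y^{t},x^{t},w^{t})}\,
  \frac{E^{t}(y^{t+1},x^{t+1},w^{t+1})}{E^{t+1}(y^{t+1},x^{t+1},w^{t+1})}
\right]^{1/2}}_{\text{cost technical change}}
```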

Relevance: 80.00%

Abstract:

When testing the difference between two groups, if previous data indicate non-normality, then either transform the data (if they comprise percentages, integers or scores) or use a non-parametric test. If it is uncertain whether the data are normally distributed, deviations from normality are likely to be small if the data are measurements recorded to three significant figures. Unless there is clear evidence that the distribution is non-normal, it is more efficient to use the conventional t-test. It is poor statistical practice to carry out both the parametric and non-parametric tests on a set of data and then choose the result that is most convenient to the investigator!
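A minimal sketch of this decision in Python (scipy), assuming two independent groups of simulated placeholder data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
a = rng.normal(10.0, 2.0, size=15)   # group 1 (simulated placeholder data)
b = rng.normal(12.0, 2.0, size=15)   # group 2 (simulated placeholder data)

# Check the normality assumption first (Shapiro-Wilk test).
normal = stats.shapiro(a).pvalue > 0.05 and stats.shapiro(b).pvalue > 0.05

if normal:
    stat, p = stats.ttest_ind(a, b)      # conventional two-sample t-test
else:
    # Percentages can often be arcsine-square-root transformed first;
    # otherwise fall back on the non-parametric Mann-Whitney U test.
    stat, p = stats.mannwhitneyu(a, b)
print(p)
```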

Relevance: 80.00%

Abstract:

There may be circumstances where it is necessary for microbiologists to compare variances rather than means, e.g., in analysing data from experiments to determine whether a particular treatment alters the degree of variability, or in testing the assumption of homogeneity of variance prior to other statistical tests. All of the tests described in this Statnote have their limitations: Bartlett's test may be too sensitive, while Levene's and the Brown-Forsythe tests also have problems. We would recommend the variance-ratio test for comparing two variances, and the careful application of Bartlett's test if there are more than two groups. Given that these tests are not particularly robust, it should be remembered that the homogeneity of variance assumption is usually the least important of those considered when carrying out an ANOVA. If there is concern about this assumption, and especially if the other assumptions of the analysis are also unlikely to be met (e.g., lack of normality or non-additivity of treatment effects), then it may be better either to transform the data or to carry out a non-parametric test.
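As a hedged sketch, the tests discussed here are all available in Python (scipy); the data below are simulated placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
g1, g2, g3 = (rng.normal(0.0, s, 20) for s in (1.0, 1.5, 2.0))

# Two groups: variance-ratio (F) test, larger variance in the numerator.
v1, v2 = g1.var(ddof=1), g2.var(ddof=1)
F = max(v1, v2) / min(v1, v2)
p_f = 2 * stats.f.sf(F, len(g1) - 1, len(g2) - 1)   # two-tailed p-value

# More than two groups: Bartlett's test (sensitive to non-normality) ...
stat_b, p_b = stats.bartlett(g1, g2, g3)
# ... or Levene's test; center='median' gives the Brown-Forsythe variant.
stat_l, p_l = stats.levene(g1, g2, g3, center='median')
```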

Relevance: 80.00%

Abstract:

Pearson's correlation coefficient ('r') is one of the most widely used of all statistics. Nevertheless, care is needed in interpreting the results because, with large numbers of observations, quite small values of 'r' become significant and the X variable may account for only a small proportion of the variance in Y. Hence, 'r squared' should always be calculated and included in a discussion of the significance of 'r'. The use of 'r' also assumes that the data follow a bivariate normal distribution (see Statnote 17), and this assumption should be examined prior to the study. If the data do not conform to such a distribution, the use of a non-parametric correlation coefficient should be considered. A significant correlation should not be interpreted as indicating 'causation', especially in observational studies, in which the two variables may be correlated because of their mutual correlations with other confounding variables.
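For example, in Python (scipy) both coefficients, and the 'r squared' that should accompany Pearson's r, are one-liners; the data here are illustrative only:

```python
import numpy as np
from scipy import stats

x = np.array([1.2, 2.4, 3.1, 4.8, 5.0, 6.3, 7.7, 8.1])
y = np.array([2.0, 2.9, 3.8, 5.2, 5.1, 6.9, 8.4, 8.0])

r, p = stats.pearsonr(x, y)
print(f"r = {r:.3f}, r^2 = {r**2:.3f}, p = {p:.4f}")   # always report r^2 too

# If bivariate normality is doubtful, use a rank-based (non-parametric)
# coefficient such as Spearman's instead.
rs, p_s = stats.spearmanr(x, y)
```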

Relevance: 80.00%

Abstract:

1. Pearson's correlation coefficient only tests whether the data fit a linear model. With large numbers of observations, quite small values of r become significant, and the X variable may account for only a minute proportion of the variance in Y. Hence, the value of r squared should always be calculated and included in a discussion of the significance of r.
2. The use of r assumes that a bivariate normal distribution is present, and this assumption should be examined prior to the study. If Pearson's r is not appropriate, a non-parametric correlation coefficient such as Spearman's rs may be used.
3. A significant correlation should not be interpreted as indicating causation, especially in observational studies, in which there is a high probability that the two variables are correlated because of their mutual correlations with other variables.
4. In studies of measurement error, there are problems in using r as a test of reliability, and the 'intra-class correlation coefficient' should be used as an alternative.
A correlation test provides only limited information as to the relationship between two variables. Fitting a regression line to the data using the method known as 'least squares' provides much more information, and the methods of regression and their application in optometry will be discussed in the next article; a brief preview is sketched below.
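As a preview of that least-squares fit, a minimal Python sketch (scipy), with illustrative data only:

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])    # illustrative data
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1, 12.2])

fit = stats.linregress(x, y)        # ordinary least-squares regression line
print(fit.slope, fit.intercept)     # fitted line: y = intercept + slope * x
print(fit.rvalue ** 2)              # r^2: proportion of variance in y explained
```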

Relevance: 80.00%

Abstract:

This thesis examines the dynamics of firm-level financing and investment decisions for six Southeast Asian countries. The study provides empirical evidence on the impact of changes in firm-level financing decisions during the period of financial liberalization, considering the debt and equity financing decisions of a set of non-financial firms. The empirical results show that, in response to banking sector and stock market liberalization, firms in Indonesia, Pakistan, and South Korea adjust towards optimal debt and equity ratios faster than firms in the other countries. In addition, contrary to the widely held belief that firms adjust their financial ratios to industry levels, the results indicate that industry factors do not significantly affect the speed of capital structure adjustment. The study also shows that non-linear estimation methods are more appropriate than linear methods for capturing changes in capital structure. The empirical results further show that international stock market integration has significantly reduced the equity risk premium as well as the firm-level cost of equity capital: stock market liberalization is thus associated with a decrease in firms' cost of equity capital, and developments in securities market infrastructure have also reduced it. However, with increased integration there is the possibility of capital outflows from emerging markets, which might reverse this pattern of decreasing cost of capital.
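The estimating equation is not spelled out in the abstract; speed-of-adjustment results of this kind are conventionally obtained from a partial-adjustment model of the form (all notation here assumed):

```latex
D_{it} - D_{i,t-1} \;=\; \lambda\,\bigl(D^{*}_{it} - D_{i,t-1}\bigr) + \varepsilon_{it}
```

where D*_{it} is firm i's target (optimal) debt or equity ratio and λ ∈ [0, 1] is the speed of adjustment, with values closer to one indicating faster convergence to the target.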

Relevance: 80.00%

Abstract:

This book is aimed primarily at microbiologists who are undertaking research and who require a basic knowledge of statistics to analyse their experimental data. Computer software employing a wide range of data analysis methods is widely available to experimental scientists. The availability of this software, however, makes it essential that investigators understand the basic principles of statistics. Statistical analysis of data can be complex, with many different methods of approach, each of which applies in a particular experimental circumstance. Hence, it is possible to apply an incorrect statistical method to data and to draw the wrong conclusions from an experiment. The purpose of this book, which has its origin in a series of articles published in the Society for Applied Microbiology journal 'The Microbiologist', is to present the basic logic of statistics as clearly as possible and thereby to dispel some of the myths that often surround the subject. The 28 'Statnotes' deal with various topics that are likely to be encountered, including the nature of variables, the comparison of means of two or more groups, non-parametric statistics, analysis of variance, correlating variables, and more complex methods such as multiple linear regression and principal components analysis. In each case, the relevant statistical method is illustrated with examples drawn from experiments in microbiological research. The text incorporates a glossary of the most commonly used statistical terms, and there are two appendices designed to aid the investigator in the selection of the most appropriate test.

Relevance: 80.00%

Abstract:

Citation information: Armstrong RA, Davies LN, Dunne MCM & Gilmartin B. Statistical guidelines for clinical studies of human vision. Ophthalmic Physiol Opt 2011, 31, 123-136. doi: 10.1111/j.1475-1313.2010.00815.x ABSTRACT: Statistical analysis of data can be complex, and different statisticians may disagree as to the correct approach, leading to conflict between authors, editors, and reviewers. The objective of this article is to provide some statistical advice for contributors to optometric and ophthalmic journals, to provide advice specifically relevant to clinical studies of human vision, and to recommend statistical analyses that could be used in a variety of circumstances. In submitting an article in which quantitative data are reported, authors should describe clearly the statistical procedures they have used and justify each stage of the analysis. This is especially important if more complex or 'non-standard' analyses have been carried out. The article begins with some general comments relating to data analysis concerning sample size and 'power', hypothesis testing, parametric and non-parametric variables, 'bootstrap methods', one- and two-tail testing, and the Bonferroni correction. More specific advice is then given with reference to particular statistical procedures that can be used on a variety of types of data. Where relevant, examples of correct statistical practice are given with reference to recently published articles in the optometric and ophthalmic literature.
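As one concrete instance of the procedures the guidelines cover, the Bonferroni correction for multiple testing can be applied in Python (statsmodels); the p-values below are illustrative:

```python
from statsmodels.stats.multitest import multipletests

pvals = [0.012, 0.030, 0.260, 0.041]            # illustrative raw p-values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
# Bonferroni multiplies each p-value by the number of tests (capped at 1),
# controlling the family-wise error rate at alpha.
print(reject, p_adj)
```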

Relevance: 80.00%

Abstract:

The research is concerned with the measurement of residents' evaluations of the environmental quality of residential areas, reflecting the increased attention being given to residents' values in planning decisions affecting the residential environment. The work was undertaken in co-operation with a local authority which was in the process of revising its housing strategy, and in particular its priorities for improvement action. The study critically examines the existing evidence on environmental values and their relationship to the environment, and points to a number of methodological and conceptual deficiencies. The research strategy developed on the basis of this review was constrained by the need to keep any survey methods simple, so that they could easily be repeated, when necessary, by the sponsoring authority. A basic perception model was assumed, and a social survey was carried out to measure residents' responses to different environmental conditions. The data were assumed to have only ordinal properties, necessitating the extensive use of non-parametric statistics. Residents' expressions of satisfaction with the component elements of the environment (ranging from convenience to upkeep and privacy) were successfully related to 'objective' measures of the environment. However, the survey evidence did not justify the use of the 'objective' variables as environmental standards. A method of using the social survey data directly as an aid to decision-making is discussed. Alternative models of the derivation of overall satisfaction with the environment are tested, and the values implied by the additive model are compared with residents' preferences as measured directly in the survey. Residents' overall satisfaction with the residential environment was most closely related to their satisfaction with the "Appearance" and the "Reputation" of their areas; by contrast, the most important directly measured preference was "Friendliness of area". The differences point to the need to define concepts used in social research clearly in operational terms, and to take care in the use of values 'measured' by different methods.

Relevance: 80.00%

Abstract:

The standard reference clinical score quantifying average Parkinson's disease (PD) symptom severity is the Unified Parkinson's Disease Rating Scale (UPDRS). At present, UPDRS is determined by the subjective clinical evaluation of the patient's ability to adequately cope with a range of tasks. In this study, we extend recent findings that UPDRS can be objectively assessed to clinically useful accuracy using simple, self-administered speech tests, without requiring the patient's physical presence in the clinic. We apply a wide range of known speech signal processing algorithms to a large database (approx. 6000 recordings from 42 PD patients, recruited to a six-month, multi-centre trial) and propose a number of novel, nonlinear signal processing algorithms which reveal pathological characteristics in PD more accurately than existing approaches. Robust feature selection algorithms select the optimal subset of these algorithms, which is fed into non-parametric regression and classification algorithms, mapping the signal processing algorithm outputs to UPDRS. We demonstrate rapid, accurate replication of the UPDRS assessment with clinically useful accuracy (about 2 UPDRS points difference from the clinicians' estimates, p < 0.001). This study supports the viability of frequent, remote, cost-effective, objective, accurate UPDRS telemonitoring based on self-administered speech tests. This technology could facilitate large-scale clinical trials into novel PD treatments.
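The abstract does not name the specific regressors used. As a hedged sketch of the feature-selection-then-non-parametric-regression pipeline it describes, here is an illustration in Python (scikit-learn), in which the random-forest regressor, the univariate selector, and the placeholder feature matrix are all assumptions rather than the authors' method:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: rows = recordings, columns = speech features; y = UPDRS.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = rng.uniform(0.0, 100.0, size=200)

model = make_pipeline(
    SelectKBest(f_regression, k=10),          # keep the 10 strongest features
    RandomForestRegressor(n_estimators=200, random_state=0),
)
scores = cross_val_score(model, X, y, scoring="neg_mean_absolute_error", cv=5)
print(-scores.mean())                         # mean absolute error, in UPDRS points
```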

Relevance: 80.00%

Abstract:

This paper investigates the role of absorptive capacity in the diffusion of global technology, allowing for sector and firm heterogeneity. We construct the FDI-intensity weighted global R&D stock for each industry and link it to Chinese firm-level panel data covering 53,981 firms over the period 2001-2005. Non-parametric frontier analysis is employed to explore how absorptive capacity affects technical change and catch-up in the presence of global knowledge spillovers. We find that R&D activities and training at individual firms serve as effective sources of absorptive capacity. The contribution of absorptive capacity varies according to the type of FDI and the extent of openness.
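The construction of the weighted stock is not detailed in the abstract; an FDI-intensity-weighted global R&D stock of this general kind takes a weighted-sum form, with all notation below assumed for illustration:

```latex
S_{jt} \;=\; \sum_{c} \omega_{cjt}\, R_{ct},
\qquad
\omega_{cjt} \;=\; \frac{FDI_{cjt}}{\sum_{c'} FDI_{c'jt}}
```

where R_{ct} is the R&D stock of source country c and ω_{cjt} is the FDI intensity linking industry j to that country.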

Relevance: 80.00%

Abstract:

This paper analyzes the performance of Dutch drinking water utilities before and after the introduction of sunshine regulation, which involves publication of the performance of utilities but no formal price regulation. By decomposing profit change into its economic drivers, our results suggest that, in the Dutch political and institutional context, sunshine regulation was effective in improving the productivity of publicly organised services. Nevertheless, while sunshine regulation did bring about a moderate reduction in water prices, sustained and substantial economic profits suggest that it may not have the potential to fully align output prices with economic costs in the long run. In methodological terms, the DEA-based profit decomposition is extended to robust and conditional non-parametric efficiency measures, so as to better account for both uncertainty and differences in operating environment between utilities.
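The paper's robust and conditional efficiency measures extend the basic model, but the DEA building block itself is a small linear programme. As an illustration only (not the authors' code), here is an input-oriented, constant-returns DEA efficiency score in Python (scipy), with a toy three-utility example:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, j0):
    """Input-oriented CCR efficiency of unit j0 (columns = units)."""
    m, n = X.shape                        # m inputs, n decision-making units
    s = Y.shape[0]                        # s outputs
    c = np.r_[1.0, np.zeros(n)]           # minimise theta; vars = [theta, lambdas]
    A_in = np.hstack([-X[:, [j0]], X])    # inputs:  X @ lam <= theta * x_j0
    A_out = np.hstack([np.zeros((s, 1)), -Y])   # outputs: Y @ lam >= y_j0
    b = np.r_[np.zeros(m), -Y[:, j0]]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=b,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                        # theta* in (0, 1]; 1 = efficient

X = np.array([[2.0, 4.0, 6.0],            # input 1 for three toy utilities
              [3.0, 1.0, 3.0]])           # input 2
Y = np.array([[1.0, 1.0, 1.0]])           # a single output
print([round(dea_ccr_input(X, Y, j), 3) for j in range(3)])
# Units 0 and 1 come out efficient; unit 2 scores about 0.56.
```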