980 results for Stochastic Frontier Models
Abstract:
This study employs stochastic frontier analysis to examine Malaysian commercial banks during 1996-2002, focusing in particular on the impact of Islamic banking on performance. We derive both net and gross efficiency estimates, demonstrating that differences in operating characteristics explain much of the difference in outputs between Malaysian banks. We also decompose productivity change into efficiency, technical, and scale change using a generalised Malmquist productivity index. On average, Malaysian banks experience mildly decreasing returns to scale and annual productivity change of 2.37 percent, with the latter driven primarily by technical change, which has declined over time. Our gross efficiency estimates suggest that Islamic banking is associated with higher input requirements. In addition, our productivity estimates indicate that the potential for full-fledged Islamic banks and conventional banks with Islamic banking operations to overcome the output disadvantages associated with Islamic banking is relatively limited. Merged banks are found to have higher input usage and lower productivity change, suggesting that bank mergers have not contributed positively to bank performance. Finally, our results suggest that while the East Asian financial crisis had an interim output-increasing effect in 1998, the crisis had a continuing negative impact on output performance by increasing the volume of non-performing loans.
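For reference, a hedged notational sketch of the kind of decomposition a generalised Malmquist productivity index delivers (notation mine, not the paper's exact derivation): in logs, productivity change between periods t and t+1 splits additively into efficiency change, technical change, and scale change,

\[
\ln M_{t,t+1} \;=\; \underbrace{\bigl(\ln TE_{t+1}-\ln TE_{t}\bigr)}_{\text{efficiency change}} \;+\; \underbrace{TC_{t,t+1}}_{\text{technical change}} \;+\; \underbrace{SC_{t,t+1}}_{\text{scale change}},
\]

where TE is the estimated technical efficiency relative to the frontier, TC measures the shift of the frontier between the two periods, and SC vanishes under constant returns to scale.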
Abstract:
In efficiency studies using the stochastic frontier approach, the main focus is on explaining inefficiency in terms of exogenous variables and on computing the marginal effect of each of these determinants. Although inefficiency is estimated by its mean conditional on the composed error term (the Jondrow et al., 1982 estimator), the marginal effects are computed from the unconditional mean of inefficiency (Wang, 2002). In this paper we derive the marginal effects based on the Jondrow et al. estimator and use the bootstrap method to compute confidence intervals for the marginal effects.
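For reference, a sketch of the Jondrow et al. (1982) estimator in the normal-half-normal production frontier y_i = x_i'β + v_i − u_i (notation mine, not necessarily the paper's): with ε_i = v_i − u_i, λ = σ_u/σ_v and σ² = σ_u² + σ_v², the point estimate of inefficiency is the conditional mean

\[
E[u_i \mid \varepsilon_i] \;=\; \frac{\sigma\lambda}{1+\lambda^{2}}\left[\frac{\phi(\varepsilon_i\lambda/\sigma)}{1-\Phi(\varepsilon_i\lambda/\sigma)} \;-\; \frac{\varepsilon_i\lambda}{\sigma}\right].
\]

The marginal effects discussed above would then be obtained by differentiating this conditional mean, rather than the unconditional mean E[u_i], with respect to the exogenous determinants, with bootstrap resampling used to attach confidence intervals.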
Abstract:
Optimal design for parameter estimation in Gaussian process regression models with input-dependent noise is examined. The motivation stems from the area of computer experiments, where computationally demanding simulators are approximated using Gaussian process emulators to act as statistical surrogates. In the case of stochastic simulators, which produce a random output for a given set of model inputs, repeated evaluations are useful, supporting the use of replicate observations in the experimental design. The findings are also applicable to the wider context of experimental design for Gaussian process regression and kriging. Designs are proposed with the aim of minimising the variance of the Gaussian process parameter estimates. A heteroscedastic Gaussian process model is presented which allows for an experimental design technique based on an extension of Fisher information to heteroscedastic models. It is empirically shown that the error of the approximation of the parameter variance by the inverse of the Fisher information is reduced as the number of replicated points is increased. Through a series of simulation experiments on both synthetic data and a systems biology stochastic simulator, optimal designs with replicate observations are shown to outperform space-filling designs both with and without replicate observations. Guidance is provided on best practice for optimal experimental design for stochastic response models. © 2013 Elsevier Inc. All rights reserved.
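As a hedged sketch of the design logic described above (notation mine): for a Gaussian process with covariance parameters θ and a design ξ = {x_1, …, x_n}, possibly containing replicates, the covariance of the maximum likelihood estimates is approximated by the inverse Fisher information, and one common optimality criterion picks the design that makes this inverse small:

\[
\operatorname{Var}\bigl(\hat{\theta}\bigr) \;\approx\; \mathcal{I}(\theta;\xi)^{-1},\qquad
\mathcal{I}(\theta;\xi)_{jk} \;=\; \tfrac{1}{2}\operatorname{tr}\!\left(K_{\xi}^{-1}\frac{\partial K_{\xi}}{\partial\theta_j}K_{\xi}^{-1}\frac{\partial K_{\xi}}{\partial\theta_k}\right),\qquad
\xi^{*} \;=\; \arg\min_{\xi}\operatorname{tr}\,\mathcal{I}(\theta;\xi)^{-1},
\]

where K_ξ is the covariance matrix of the observations under the heteroscedastic model, including the input-dependent noise term. The empirical finding above is that the left-hand approximation improves as the number of replicated design points grows.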
Abstract:
This dissertation analyzes hospital efficiency using various econometric techniques. The first essay provides additional and recent evidence on the presence of contract management behavior in the U.S. hospital industry. Unlike previous studies, which focus on either an input-demand equation or the cost function of the firm, this paper estimates the two jointly using a system of nonlinear equations. Moreover, it addresses the longitudinal problem of institutions adopting contract management in different years by creating a matched control group of non-adopters with the same longitudinal distribution as the group under study. The estimation procedure finds that labor, and not capital, is the preferred input in U.S. hospitals regardless of managerial contract status, with institutions that adopt contract management benefiting from lower labor inefficiencies than the simulated non-contract adopters. These results suggest that while there is a propensity for expense-preference behavior towards the labor input, contract-managed firms are able to introduce efficiencies over conventional, owner-controlled firms. Using data for the years 1998 through 2007, the second essay investigates the production technology and cost efficiency of Florida hospitals. A stochastic frontier multiproduct cost function is estimated in order to test for economies of scale, economies of scope, and relative cost efficiencies. The results suggest that small-sized hospitals experience economies of scale, while large and medium-sized institutions do not. The empirical findings show that Florida hospitals enjoy significant scope economies, regardless of size. Lastly, the evidence suggests that there is a link between hospital size and relative cost efficiency. The results of the study imply that state policy makers should focus on increasing hospital scale for smaller institutions while facilitating the expansion of multiproduct production for larger hospitals. The third and final essay employs a two-stage approach to analyzing the efficiency of hospitals in the state of Florida. In the first stage, the Banker, Charnes, and Cooper model of Data Envelopment Analysis is employed in order to derive overall technical efficiency scores for each non-specialty hospital in the state. Additionally, input slacks are calculated and reported in order to identify the factors of production that each hospital may be over-utilizing. In the second stage, we employ a Tobit regression model in order to analyze the effects that a number of structural, managerial, and environmental factors may have on a hospital's efficiency. The results indicate that most non-specialty hospitals in the state are operating away from the efficient production frontier. The results also indicate that the structural make-up, managerial choices, and level of competition Florida hospitals face have an impact on their overall technical efficiency.
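A minimal sketch of the first-stage calculation in the third essay: input-oriented Banker-Charnes-Cooper (variable-returns-to-scale) DEA efficiency scores obtained by solving one linear program per unit with scipy. The function name and the toy data are illustrative assumptions, not the dissertation's data or code.

    import numpy as np
    from scipy.optimize import linprog

    def bcc_input_efficiency(X, Y):
        """Input-oriented BCC DEA scores. X: (n, m) inputs, Y: (n, s) outputs."""
        n, m = X.shape
        s = Y.shape[1]
        scores = np.zeros(n)
        for o in range(n):
            # Decision variables: [theta, lambda_1, ..., lambda_n]; minimise theta.
            c = np.r_[1.0, np.zeros(n)]
            # Input constraints: sum_j lambda_j * x_ij - theta * x_io <= 0
            A_in = np.hstack([-X[o].reshape(m, 1), X.T])
            # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
            A_out = np.hstack([np.zeros((s, 1)), -Y.T])
            # Convexity constraint (variable returns to scale): sum_j lambda_j = 1
            A_eq = np.r_[0.0, np.ones(n)].reshape(1, -1)
            res = linprog(c,
                          A_ub=np.vstack([A_in, A_out]),
                          b_ub=np.r_[np.zeros(m), -Y[o]],
                          A_eq=A_eq, b_eq=[1.0],
                          bounds=[(None, None)] + [(0, None)] * n,
                          method="highs")
            scores[o] = res.x[0]
        return scores

    # Toy example: 5 hospitals, 2 inputs (beds, staff), 1 output (discharges)
    X = np.array([[100, 300], [120, 280], [90, 350], [150, 400], [80, 260]], float)
    Y = np.array([[500], [520], [480], [530], [470]], float)
    print(bcc_input_efficiency(X, Y))  # fully efficient hospitals score 1.0

The input slacks the abstract also reports would be recovered from the optimal solution's constraint slacks or from a second slack-maximising LP.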
Abstract:
In Brazil, the National Agency of Electric Energy (ANEEL) is the energy regulator. Tariff reviews are among its main tasks; they set prices at a level that covers efficient operating costs and provides an appropriate return on distributors' investments. The changes in the procedures used to define efficient costs, and the several studies on the methodologies employed to regulate this segment, reflect the challenge regulators face in choosing the best methodological strategy. In this context, this research proposes a benchmarking evaluation applied to the national regulatory system for establishing the efficient operating costs of electricity distribution utilities. The model is formulated to promote the development of the electricity market, in line with government policies and to the benefit of society. To conduct this research, an integration of Data Envelopment Analysis (DEA) with Stochastic Frontier Analysis (SFA) is adopted in a three-stage procedure that corrects efficiency for environmental effects: (i) a DEA evaluation measures the operating-cost slacks of the utilities, with environmental variables omitted; (ii) the slacks calculated in the first stage are regressed on a set of environmental variables by means of SFA, and operating costs are adjusted to account for environmental impacts and statistical noise; and (iii) the performance of the electricity distribution utilities is reassessed by means of DEA. Based on this methodology it is possible to obtain a performance evaluation expressed exclusively in terms of management efficiency, in which the effects of the operating environment and statistical noise are controlled for.
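A schematic sketch of the three-stage procedure (for illustration only; dea_cost_slacks and sfa_regress are hypothetical placeholder callables standing in for a DEA solver and an SFA estimator, not ANEEL's or the author's code):

    import numpy as np

    def three_stage_adjustment(opex, outputs, env, dea_cost_slacks, sfa_regress):
        """Three-stage DEA/SFA/DEA adjustment of operating costs (schematic).

        opex:    (n,) observed operating costs of the utilities
        outputs: (n, k) outputs (e.g. energy delivered, consumers, network length)
        env:     (n, p) environmental variables
        dea_cost_slacks, sfa_regress: user-supplied solvers (hypothetical names).
        """
        # Stage (i): DEA on costs and outputs with environmental variables omitted.
        slacks = dea_cost_slacks(opex, outputs)           # (n,) cost slacks

        # Stage (ii): SFA regression of the slacks on environmental variables,
        # separating environmental effects and statistical noise from management.
        env_effect, noise = sfa_regress(slacks, env)      # fitted components, (n,) each

        # Adjust every utility's costs up to the least favourable environment and
        # the unluckiest noise draw, so that only managerial efficiency differs.
        opex_adj = opex + (env_effect.max() - env_effect) + (noise.max() - noise)

        # Stage (iii): reassess performance by DEA on the adjusted costs.
        return dea_cost_slacks(opex_adj, outputs)

The upward adjustment in stage (ii) follows the usual Fried-Lovell-type levelling of the playing field; the exact adjustment used by the author may differ.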
Abstract:
This study evaluates the cost efficiency of municipalities in the state of Rio Grande do Norte in their spending on basic education in 2011, and analyzes the determinants of their inefficiency. To this end, two methodological approaches were used: (i) a stochastic cost frontier, and (ii) data envelopment analysis (DEA), which identifies the efficient frontier of the municipalities non-parametrically. The results show that the municipalities under review achieved low efficiency scores under the stochastic cost frontier, whereas under the DEA method they achieved higher scores, with nineteen of them reaching full efficiency. The results suggest that a significant portion of the Potiguar municipalities should review their administrative practices, especially the way resources are allocated. Regarding the determinants of efficiency, the two methods produced distinct results.
Abstract:
In the current study, we compared the technical efficiency of smallholder rice farmers with and without credit in northern Ghana using data from a farm household survey. We fitted a stochastic frontier production function to input and output data to measure technical efficiency. We addressed self-selection into credit participation using propensity score matching and found that mean efficiency did not differ between credit users and non-users. Credit-participating households had an efficiency of 63.0 percent compared to 61.7 percent for non-participants. The results indicate significant inefficiencies in production and thus considerable scope for improving farmers' technical efficiency through better use of available resources at the current level of technology. Apart from labour and capital, all the conventional farm inputs had a significant effect on rice production. The determinants of efficiency included the respondent's age, sex, educational status, distance to the nearest market, herd ownership, access to irrigation and specialisation in rice production. From a policy perspective, we recommend that credit be channelled to farmers who demonstrate the need for it and show the commitment to improve their production through external financing. Such a screening mechanism will ensure that credit goes to the farmers who need it to improve their technical efficiency.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
The main purpose of this study is to present an alternative benchmarking approach that can be used by national regulators of utilities. It is widely known that the lack of sizeable data sets limits the choice of the benchmarking method and the specification of the model used to set price controls within incentive-based regulation. Ill-posed frontier models are a problem that some national regulators have been facing. Maximum entropy estimators are useful for estimating such ill-posed models, in particular models with small sample sizes, collinearity and non-normal errors, as well as models in which the number of parameters to be estimated exceeds the number of observations available. The empirical study uses a data set employed by the Portuguese regulator of the electricity sector to set the parameters for the electricity distribution companies in the 2012-2014 regulatory period. DEA and maximum entropy methods are applied and the efficiency results are compared.
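A hedged sketch of the generalized maximum entropy idea for an ill-posed linear frontier-type model y = Xβ + e (notation mine, following the usual Golan-Judge-Miller formulation rather than the paper's exact specification): each coefficient and each error term is reparameterised as a convex combination of known support points, and the support weights are chosen to maximise entropy subject to the data constraints,

\[
\beta_k=\sum_{m} z_{km}\,p_{km},\qquad e_i=\sum_{j} v_{j}\,w_{ij},\qquad
\max_{p,\,w}\;\Bigl\{-\sum_{k,m}p_{km}\ln p_{km}-\sum_{i,j}w_{ij}\ln w_{ij}\Bigr\}
\]

subject to \(y_i=\sum_k x_{ik}\sum_m z_{km}p_{km}+\sum_j v_j w_{ij}\), \(\sum_m p_{km}=1\) and \(\sum_j w_{ij}=1\). Because the optimisation is over probabilities on fixed supports rather than over the coefficients directly, the estimator remains well defined even when the number of parameters exceeds the number of observations, which is the ill-posed setting referred to above.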
Abstract:
In this PhD thesis a new firm level conditional risk measure is developed. It is named Joint Value at Risk (JVaR) and is defined as a quantile of a conditional distribution of interest, where the conditioning event is a latent upper tail event. It addresses the problem of how risk changes under extreme volatility scenarios. The properties of JVaR are studied based on a stochastic volatility representation of the underlying process. We prove that JVaR is leverage consistent, i.e. it is an increasing function of the dependence parameter in the stochastic representation. A feasible class of nonparametric M-estimators is introduced by exploiting the elicitability of quantiles and the stochastic ordering theory. Consistency and asymptotic normality of the two stage M-estimator are derived, and a simulation study is reported to illustrate its finite-sample properties. Parametric estimation methods are also discussed. The relation with the VaR is exploited to introduce a volatility contribution measure, and a tail risk measure is also proposed. The analysis of the dynamic JVaR is presented based on asymmetric stochastic volatility models. Empirical results with S&P500 data show that accounting for extreme volatility levels is relevant to better characterize the evolution of risk. The work is complemented by a review of the literature, where we provide an overview on quantile risk measures, elicitable functionals and several stochastic orderings.
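A notational sketch of the definition described above (hedged; the thesis's exact formulation may differ): writing Y for the variable of interest, V for the latent volatility and F_V for its distribution function, the JVaR at levels (α, β) is the α-quantile of Y conditional on the upper tail event of the volatility,

\[
\mathrm{JVaR}_{\alpha,\beta}(Y) \;=\; \inf\Bigl\{\, y \;:\; \Pr\bigl(Y \le y \mid V > F_V^{-1}(\beta)\bigr) \ge \alpha \,\Bigr\},
\]

so that leverage consistency, as stated above, means this quantity is an increasing function of the dependence parameter in the underlying stochastic volatility representation.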
Abstract:
With each directed acyclic graph (this includes some D-dimensional lattices) one can associate some Abelian algebras that we call directed Abelian algebras (DAAs). On each site of the graph one attaches a generator of the algebra. These algebras depend on several parameters and are semisimple. Using any DAA, one can define a family of Hamiltonians which give the continuous time evolution of a stochastic process. The calculation of the spectra and ground-state wave functions (stationary state probability distributions) is an easy algebraic exercise. If one considers D-dimensional lattices and chooses Hamiltonians linear in the generators, in finite-size scaling the Hamiltonian spectrum is gapless with a critical dynamic exponent z = D. One possible application of the DAA is to sandpile models. In the paper we present this application, considering one- and two-dimensional lattices. In the one-dimensional case, when the DAA conserves the number of particles, the avalanches belong to the random walker universality class (critical exponent σ_τ = 3/2). We study the local density of particles inside large avalanches, showing a depletion of particles at the source of the avalanche and an enrichment at its end. In two dimensions we did extensive Monte Carlo simulations and found σ_τ = 1.780 ± 0.005.
Abstract:
The majority of past and current individual-tree growth modelling methodologies have failed to characterise and incorporate structured stochastic components. Rather, they have relied on deterministic predictions or have added an unstructured random component to predictions. In particular, spatial stochastic structure has been neglected, despite being present in most applications of individual-tree growth models. Spatial stochastic structure (also called spatial dependence or spatial autocorrelation) eventuates when spatial influences such as competition and micro-site effects are not fully captured in models. Temporal stochastic structure (also called temporal dependence or temporal autocorrelation) eventuates when a sequence of measurements is taken on an individual-tree over time, and variables explaining temporal variation in these measurements are not included in the model. Nested stochastic structure eventuates when measurements are combined across sampling units and differences among the sampling units are not fully captured in the model. This review examines spatial, temporal, and nested stochastic structure and instances where each has been characterised in the forest biometry and statistical literature. Methodologies for incorporating stochastic structure in growth model estimation and prediction are described. Benefits from incorporation of stochastic structure include valid statistical inference, improved estimation efficiency, and more realistic and theoretically sound predictions. It is proposed in this review that individual-tree modelling methodologies need to characterise and include structured stochasticity. Possibilities for future research are discussed. (C) 2001 Elsevier Science B.V. All rights reserved.
Abstract:
This paper does two things. First, it presents alternative approaches to the standard methods of estimating productive efficiency using a production function. It favours a parametric approach (viz. the stochastic production frontier approach) over a nonparametric approach (e.g. data envelopment analysis); and, further, one that provides a statistical explanation of efficiency, as well as an estimate of its magnitude. Second, it illustrates the favoured approach (i.e. the ‘single stage procedure’) with estimates of two models of explained inefficiency, using data from the Thai manufacturing sector, after the crisis of 1997. Technical efficiency is modelled as being dependent on capital investment in three major areas (viz. land, machinery and office appliances) where land is intended to proxy the effects of unproductive, speculative capital investment; and both machinery and office appliances are intended to proxy the effects of productive, non-speculative capital investment. The estimates from these models cast new light on the five-year long, post-1997 crisis period in Thailand, suggesting a structural shift from relatively labour intensive to relatively capital intensive production in manufactures from 1998 to 2002.
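As a hedged sketch of the kind of single-stage specification favoured above (in the spirit of Battese and Coelli, 1995; the notation and functional form are illustrative, not necessarily the paper's):

\[
\ln y_i \;=\; x_i'\beta + v_i - u_i,\qquad v_i \sim N(0,\sigma_v^2),\qquad u_i \sim N^{+}\!\bigl(z_i'\delta,\;\sigma_u^2\bigr),
\]

where z_i would collect the three capital-investment variables (land, machinery and office appliances), so that the frontier parameters β and the inefficiency determinants δ are estimated jointly by maximum likelihood in a single stage, giving both an estimate of the magnitude of inefficiency and a statistical explanation of it.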
Abstract:
This paper develops methods for Stochastic Search Variable Selection (currently popular with regression and Vector Autoregressive models) for Vector Error Correction models where there are many possible restrictions on the cointegration space. We show how this allows the researcher to begin with a single unrestricted model and either do model selection or model averaging in an automatic and computationally efficient manner. We apply our methods to a large UK macroeconomic model.
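For context, a hedged sketch of the standard SSVS device the paper extends (a George-McCulloch-type mixture prior; notation mine): each coefficient, or each free element describing a possible restriction on the cointegration space, gets a latent indicator γ_j and a two-component normal prior,

\[
\beta_j \mid \gamma_j \;\sim\; (1-\gamma_j)\,N\bigl(0,\tau_{0j}^{2}\bigr) \;+\; \gamma_j\,N\bigl(0,\tau_{1j}^{2}\bigr),\qquad
\gamma_j \sim \mathrm{Bernoulli}(q_j),\qquad \tau_{0j}^{2}\ll\tau_{1j}^{2},
\]

so that MCMC sampling over (β, γ) moves automatically between restricted and unrestricted specifications; model selection keeps a high-posterior configuration of γ, while model averaging integrates over all visited configurations, which is what makes starting from a single unrestricted model computationally practical.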
Abstract:
BACKGROUND: In vitro aggregating brain cell cultures containing all types of brain cells have been shown to be useful for neurotoxicological investigations. The cultures are used for the detection of nervous system-specific effects of compounds by measuring multiple endpoints, including changes in enzyme activities. Concentration-dependent neurotoxicity is determined at several time points. METHODS: A Markov model was set up to describe the dynamics of brain cell populations exposed to potentially neurotoxic compounds. Brain cells were assumed to be either in a healthy or a stressed state, with only stressed cells being susceptible to cell death. Cells could switch between these states or die, with concentration-dependent transition rates. Since cell numbers were not directly measurable, intracellular lactate dehydrogenase (LDH) activity was used as a surrogate. Assuming that changes in cell numbers are proportional to changes in intracellular LDH activity, stochastic enzyme activity models were derived. Maximum likelihood and least squares regression techniques were applied for estimation of the transition rates. Likelihood ratio tests were performed to test hypotheses about the transition rates. Simulation studies were used to investigate the performance of the transition rate estimators and to analyze the error rates of the likelihood ratio tests. The stochastic time-concentration activity model was applied to intracellular LDH activity measurements after 7 and 14 days of continuous exposure to propofol. The model describes transitions from healthy to stressed cells and from stressed cells to death. RESULTS: The model predicted that propofol would affect stressed cells more than healthy cells. Increasing propofol concentration from 10 to 100 μM reduced the mean waiting time for transition to the stressed state by 50%, from 14 to 7 days, whereas the mean duration to cellular death decreased more dramatically, from 2.7 days to 6.5 hours. CONCLUSION: The proposed stochastic modeling approach can be used to discriminate between different biological hypotheses regarding the effect of a compound on the transition rates. The effects of different compounds on the transition rate estimates can be quantitatively compared. Data can be extrapolated at late measurement time points to investigate whether costly and time-consuming long-term experiments could be eliminated.
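A minimal sketch of the three-state healthy → stressed → dead Markov model described above, interpreting the mean waiting times quoted in the results as the reciprocals of the two transition rates (an assumption about their exact meaning) and taking intracellular LDH activity as proportional to the number of surviving cells:

    import numpy as np
    from scipy.linalg import expm

    def surviving_fraction(rate_hs, rate_sd, times):
        """Expected surviving (healthy + stressed) fraction at each time in days.

        rate_hs: healthy -> stressed transition rate (per day)
        rate_sd: stressed -> dead transition rate (per day)
        """
        # Generator matrix over the states [healthy, stressed, dead]
        Q = np.array([[-rate_hs, rate_hs, 0.0],
                      [0.0, -rate_sd, rate_sd],
                      [0.0, 0.0, 0.0]])
        p0 = np.array([1.0, 0.0, 0.0])        # all cells start healthy
        # Intracellular LDH activity is assumed proportional to this fraction
        return np.array([(p0 @ expm(Q * t))[:2].sum() for t in times])

    # Rates implied by the quoted mean waiting times (10 uM vs 100 uM propofol)
    low_dose = surviving_fraction(rate_hs=1 / 14.0, rate_sd=1 / 2.7, times=[7, 14])
    high_dose = surviving_fraction(rate_hs=1 / 7.0, rate_sd=24.0 / 6.5, times=[7, 14])
    print(low_dose, high_dose)

The same machinery extends to concentration-dependent rates and is the kind of calculation that underlies the likelihood ratio comparisons and extrapolations described in the abstract.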