851 results for statistical methods


Relevance: 60.00%

Abstract:

This paper presents a metafrontier production function model for firms in different groups having different technologies. The metafrontier model enables the calculation of comparable technical efficiencies for firms operating under different technologies. The model also enables the technology gaps to be estimated for firms under different technologies relative to the potential technology available to the industry as a whole. The metafrontier model is applied in the analysis of panel data on garment firms in five different regions of Indonesia, assuming that the regional stochastic frontier production function models have technical inefficiency effects with the time-varying structure proposed by Battese and Coelli (1992).
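
The decomposition behind this framework can be sketched with a couple of lines of arithmetic. The function name and the numbers below are illustrative, not taken from the paper; the identity used is the standard one relating group-frontier efficiency, the technology gap ratio, and metafrontier efficiency: TE* = TE x TGR.

```python
def metafrontier_te(group_te: float, tgr: float) -> float:
    # Technical efficiency measured against the metafrontier:
    # efficiency against the firm's own group frontier, scaled by the
    # group's technology gap ratio (TGR <= 1).
    return group_te * tgr

# A firm 80% efficient against its regional frontier, in a region whose
# best practice reaches 90% of the industry-wide metafrontier:
print(round(metafrontier_te(0.80, 0.90), 2))  # 0.72
```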

Relevance: 60.00%

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.

Relevance: 60.00%

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative, unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for the estimation of this probability and its subsequent use in forming a decision rule. The rule can also be formed to take the false negative rate into account.
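
The modification of the Benjamini-Hochberg rule by the prior probability that a gene is not differentially expressed can be sketched in a few lines. This is a generic illustration, not the authors' code: the Storey-type pi0 estimator and the simulated p-values below are assumptions of the sketch.

```python
import numpy as np

def estimate_pi0(pvals, lam=0.5):
    # Crude estimate of the proportion of non-differentially expressed
    # genes: uniform null p-values land above `lam` at rate (1 - lam).
    return min(1.0, float(np.mean(pvals > lam)) / (1 - lam))

def bh_reject(pvals, q=0.05, pi0=1.0):
    # Benjamini-Hochberg step-up rule; supplying pi0 < 1 relaxes the
    # thresholds, countering the conservativeness noted in the abstract.
    n = len(pvals)
    order = np.argsort(pvals)
    thresh = q * np.arange(1, n + 1) / (n * pi0)
    passed = np.nonzero(pvals[order] <= thresh)[0]
    k = passed[-1] + 1 if passed.size else 0
    reject = np.zeros(n, dtype=bool)
    reject[order[:k]] = True
    return reject

rng = np.random.default_rng(7)
pvals = np.concatenate([rng.uniform(size=900),       # true nulls
                        rng.beta(0.05, 1.0, 100)])   # differentially expressed
pi0 = estimate_pi0(pvals)
print(pi0, bh_reject(pvals).sum(), bh_reject(pvals, pi0=pi0).sum())
```

Because pi0 <= 1 only enlarges the thresholds, the adjusted rule always rejects at least as many genes as the plain step-up procedure.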

Relevance: 60.00%

Abstract:

Filaments of galaxies are the dominant feature of modern large-scale redshift surveys. They can account for up to perhaps half of the baryonic mass budget of the Universe, and their distribution and abundance can help constrain cosmological models. However, there remains no single, definitive way to detect and describe filaments, or to define their extent. This work examines a number of physically motivated, as well as statistical, methods that can be used to define filaments, and assesses their relative merits.

Relevance: 60.00%

Abstract:

The estimated parameters of output distance functions frequently violate the monotonicity, quasi-convexity and convexity constraints implied by economic theory, leading to estimated elasticities and shadow prices that are incorrectly signed, and ultimately to perverse conclusions concerning the effects of input and output changes on productivity growth and relative efficiency levels. We show how a Bayesian approach can be used to impose these constraints on the parameters of a translog output distance function. Implementing the approach involves the use of a Gibbs sampler with data augmentation. A Metropolis-Hastings algorithm is also used within the Gibbs sampler to simulate observations from truncated pdfs. Our methods are developed for the case where panel data are available and technical inefficiency effects are assumed to be time-invariant. Two models, a fixed effects model and a random effects model, are developed and applied to panel data on 17 European railways. We observe significant changes in estimated elasticities and shadow price ratios when regularity restrictions are imposed.
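
The "Metropolis-Hastings within Gibbs" step for truncated pdfs can be illustrated in isolation. The target below (a normal truncated to the positive half-line) and all tuning values are stand-ins; a real implementation would target the constrained full conditionals of the distance-function parameters.

```python
import numpy as np

def mh_truncated_normal(mu, sigma, lower, n_draws, x0, step=0.5, rng=None):
    # Random-walk Metropolis-Hastings draws from a Normal(mu, sigma)
    # truncated to [lower, inf): the kind of constrained full conditional
    # that appears once regularity restrictions are imposed. x0 must
    # already satisfy the constraint.
    rng = np.random.default_rng() if rng is None else rng

    def log_target(x):
        return -0.5 * ((x - mu) / sigma) ** 2 if x >= lower else -np.inf

    x, draws = x0, []
    for _ in range(n_draws):
        prop = x + step * rng.standard_normal()
        # Proposals outside the truncation region have log density -inf
        # and are always rejected.
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        draws.append(x)
    return np.array(draws)

draws = mh_truncated_normal(0.0, 1.0, 0.0, 4000, x0=0.5,
                            rng=np.random.default_rng(1))
print(draws.mean())  # should sit near sqrt(2/pi) ~ 0.80 for this target
```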

Relevance: 60.00%

Abstract:

Background: Regression to the mean (RTM) is a statistical phenomenon that can make natural variation in repeated data look like real change. It happens when unusually large or small measurements tend to be followed by measurements that are closer to the mean.

Methods: We give some examples of the phenomenon, and discuss methods to overcome it at the design and analysis stages of a study.

Results: The effect of RTM in a sample becomes more noticeable with increasing measurement error and when follow-up measurements are only examined on a sub-sample selected using a baseline value.

Conclusions: RTM is a ubiquitous phenomenon in repeated data and should always be considered as a possible cause of an observed change. Its effect can be alleviated through better study design and use of suitable statistical methods.
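
A minimal simulation makes the phenomenon concrete: with a stable underlying quantity and independent measurement error at each visit, selecting subjects on an extreme baseline makes the follow-up look like a real change even though nothing changed. All numbers below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
true_value = rng.normal(100, 10, n)            # stable underlying quantity
baseline   = true_value + rng.normal(0, 8, n)  # measurement error at baseline
follow_up  = true_value + rng.normal(0, 8, n)  # independent error at follow-up

# Screen on an extreme baseline value, as a study might.
selected = baseline > 110
drop = baseline[selected].mean() - follow_up[selected].mean()
overall = baseline.mean() - follow_up.mean()
print(f"apparent change in the selected sub-sample: {drop:.1f}")
print(f"change in the whole sample: {overall:.1f}")
```

The whole-sample change is essentially zero, while the baseline-selected sub-sample shows a sizeable apparent improvement, purely from measurement error.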

Relevance: 60.00%

Abstract:

Two stochastic production frontier models are formulated within the generalized production function framework popularized by Zellner and Revankar (Rev. Econ. Stud. 36 (1969) 241) and Zellner and Ryu (J. Appl. Econometrics 13 (1998) 101). This framework is convenient for parsimonious modeling of a production function with returns to scale specified as a function of output. Two alternatives for introducing the stochastic inefficiency term and the stochastic error are considered. In the first, the errors are added to an equation of the form h(log y, theta) = log f(x, beta), where y denotes output, x is a vector of inputs and (theta, beta) are parameters. In the second, the equation h(log y, theta) = log f(x, beta) is solved for log y to yield a solution of the form log y = g[theta, log f(x, beta)], and the errors are added to this equation. The latter alternative is novel, but it is needed to preserve the usual definition of firm efficiency. The two alternative stochastic assumptions are considered in conjunction with two returns to scale functions, making a total of four models. A Bayesian framework for estimating all four models is described. The techniques are applied to USDA state-level data on agricultural output and four inputs. Posterior distributions for all parameters, for firm efficiencies and for the efficiency rankings of firms are obtained. The sensitivity of the results to the returns to scale specification and to the stochastic specification is examined.
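
The "solve for log y" step can be made concrete with the classic Zellner-Revankar form h(log y, theta) = log y + theta*y. This particular h is an assumption for illustration (the abstract does not reproduce its two returns-to-scale functions); solving log y + theta*y = log f for y leads to the Lambert W function.

```python
import numpy as np
from scipy.special import lambertw

def solve_output(log_f, theta):
    # Solve log y + theta*y = log_f for y:
    #   y * exp(theta*y) = f  =>  theta*y * exp(theta*y) = theta*f
    #   =>  theta*y = W(theta*f)  =>  y = W(theta*f) / theta
    return np.real(lambertw(theta * np.exp(log_f))) / theta

theta, log_f = 0.3, 1.2
y = solve_output(log_f, theta)
print(np.log(y) + theta * y)  # ~1.2, recovering log_f
```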

Relevance: 60.00%

Abstract:

Chambers and Quiggin (2000) use state-contingent representations of risky production technologies to establish important theoretical results concerning producer behavior under uncertainty. Unfortunately, perceived problems in the estimation of state-contingent models have limited the usefulness of the approach in policy formulation. We show that fixed and random effects state-contingent production frontiers can be conveniently estimated in a finite mixtures framework. An empirical example is provided. Compared to conventional estimation approaches, we find that estimating production frontiers in a state-contingent framework produces significantly different estimates of elasticities, firm technical efficiencies, and other quantities of economic interest.

Relevance: 60.00%

Abstract:

The country-product-dummy (CPD) method, originally proposed in Summers (1973), has recently been revisited in its weighted formulation to handle a variety of data-related situations (Rao and Timmer, 2000, 2003; Heravi et al., 2001; Rao, 2001; Aten and Menezes, 2002; Heston and Aten, 2002; Deaton et al., 2004). The CPD method is also increasingly being used in the context of hedonic modelling, rather than for its original purpose in Summers (1973) of filling holes in price data. However, among practitioners the CPD method is seen as a black box due to its regression formulation. The main objective of the paper is to establish the equivalence of purchasing power parities and international prices derived from the application of the weighted-CPD method with those arising out of the Rao-system for multilateral comparisons. A major implication of this result is that the weighted-CPD method would then be a natural method of aggregation at all levels of aggregation within the context of international comparisons.
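
The regression formulation that practitioners treat as a black box is small enough to write out. The toy prices below are invented for illustration: log prices are regressed on country and product dummies, and exponentiating the country-dummy coefficient recovers the PPP of country B relative to the base country A (this sketch is unweighted; the weighted variant discussed above only changes the least-squares weights).

```python
import numpy as np

countries = ["A", "A", "B", "B"]
products  = ["rice", "fuel", "rice", "fuel"]
prices    = np.array([1.0, 2.0, 3.0, 6.0])  # country B prices are 3x country A

# Design matrix: intercept + dummy for country B + dummy for product "fuel"
# (country A and "rice" are the omitted base categories).
X = np.column_stack([
    np.ones(4),
    [c == "B" for c in countries],
    [p == "fuel" for p in products],
]).astype(float)
coef, *_ = np.linalg.lstsq(X, np.log(prices), rcond=None)

ppp_B = np.exp(coef[1])  # PPP of country B relative to country A
print(round(float(ppp_B), 2))  # 3.0
```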

Relevance: 60.00%

Abstract:

Background: Pain is defined as both a sensory and an emotional experience. Acute postoperative tooth extraction pain is assessed and treated as a physiological (sensory) pain, while chronic pain is a biopsychosocial problem. The purpose of this study was to assess whether psychological and social changes occur in the acute pain state.

Methods: A biopsychosocial pain questionnaire was completed by 438 subjects (165 males, 273 females) with acute postoperative pain at 24 hours following the surgical extraction of teeth and compared with 273 subjects (78 males, 195 females) with chronic orofacial pain. The statistical analysis used k-means cluster analysis.

Results: Three clusters were identified in the acute pain group: 'unaffected', 'disabled' and 'depressed, anxious and disabled'. Psychosocial effects showed 24.8 per cent feeling 'distress/suffering' and 15.1 per cent 'sad and depressed'. Females reported higher pain intensity and more distress, depression and inadequate medication for pain relief (p
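
The k-means cluster analysis can be sketched as plain Lloyd iterations. The two-dimensional scores below are synthetic stand-ins for the questionnaire data, and the fixed starting centers are a simplification (standard k-means uses random or k-means++ starts).

```python
import numpy as np

def kmeans(X, init_centers, iters=100):
    # Plain Lloyd's algorithm with user-supplied starting centers:
    # assign each point to its nearest center, recompute centers as
    # cluster means, repeat until the centers stop moving.
    centers = np.asarray(init_centers, dtype=float)
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centers[None, :], axis=2)
        labels = dists.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0)
                        for j in range(len(centers))])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Synthetic two-score data (e.g. pain intensity vs distress -- made up,
# not the study's questionnaire): three well-separated groups of 30.
rng = np.random.default_rng(3)
groups = [rng.normal(c, 0.5, size=(30, 2)) for c in [(0, 0), (8, 8), (16, 0)]]
X = np.vstack(groups)
labels, centers = kmeans(X, init_centers=[X[0], X[30], X[60]])
```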

Relevance: 60.00%

Abstract:

A combination of uni- and multiplex PCR assays targeting 58 virulence genes (VGs) associated with Escherichia coli strains causing intestinal and extraintestinal disease in humans and other mammals was used to analyze the VG repertoire of 23 commensal E. coli isolates from healthy pigs and 52 clinical isolates associated with porcine neonatal diarrhea (ND) and postweaning diarrhea (PWD). The relationship between the presence and absence of VGs was interrogated using three statistical methods. According to the generalized linear model, 17 of 58 VGs were found to be significant (P < 0.05) in distinguishing between commensal and clinical isolates. Nine of the 17 genes, represented by iha, hlyA, aidA, east1, aah, fimH, iroN(E. coli), traT, and saa, have not been previously identified as important VGs in clinical porcine isolates in Australia. The remaining eight VGs code for fimbriae (F4, F5, F18, and F41) and toxins (STa, STb, LT, and Stx2), normally associated with porcine enterotoxigenic E. coli. Agglomerative hierarchical algorithm analysis grouped E. coli strains into subclusters based primarily on their serogroup. Multivariate analyses of clonal relationships based on the 17 VGs were collapsed into two-dimensional space by principal coordinate analysis. PWD clones were distributed in two quadrants, separated from ND and commensal clones, which tended to cluster within one quadrant. Clonal subclusters within quadrants were highly correlated with serogroups. These methods of analysis provide different perspectives in our attempts to understand how commensal and clinical porcine enterotoxigenic E. coli strains have evolved and are engaged in the dynamic process of losing or acquiring VGs within the pig population.

Relevance: 60.00%

Abstract:

Purpose: This study evaluated the predictive validity of three previously published ActiGraph energy expenditure (EE) prediction equations developed for children and adolescents.

Methods: A total of 45 healthy children and adolescents (mean age: 13.7 +/- 2.6 yr) completed four 5-min activity trials (normal walking, brisk walking, easy running, and fast running) in an indoor exercise facility. During each trial, participants wore an ActiGraph accelerometer on the right hip. EE was monitored breath by breath using the Cosmed K4b2 portable indirect calorimetry system. Differences and associations between measured and predicted EE were assessed using dependent t-tests and Pearson correlations, respectively. Classification accuracy was assessed using percent agreement, sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve.

Results: None of the equations accurately predicted mean energy expenditure during each of the four activity trials. Each equation, however, accurately predicted mean EE in at least one activity trial. The Puyau equation accurately predicted EE during slow walking. The Trost equation accurately predicted EE during slow running. The Freedson equation accurately predicted EE during fast running. None of the three equations accurately predicted EE during brisk walking. The equations exhibited fair to excellent classification accuracy with respect to activity intensity, with the Trost equation exhibiting the highest classification accuracy and the Puyau equation exhibiting the lowest.

Conclusions: These data suggest that the three accelerometer prediction equations do not accurately predict EE on a minute-by-minute basis in children and adolescents during overground walking and running. The equations may be useful, however, for estimating participation in moderate and vigorous activity.
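
The classification accuracy measures reported above (percent agreement, sensitivity, specificity) reduce to counts from a 2x2 table of predicted versus measured intensity categories. The binary labels below are invented for illustration, not the study's data.

```python
import numpy as np

def classification_stats(predicted, actual):
    # Percent agreement, sensitivity and specificity for a binary
    # intensity classification (e.g. vigorous vs not vigorous).
    predicted = np.asarray(predicted, bool)
    actual = np.asarray(actual, bool)
    tp = np.sum(predicted & actual)    # true positives
    tn = np.sum(~predicted & ~actual)  # true negatives
    fp = np.sum(predicted & ~actual)   # false positives
    fn = np.sum(~predicted & actual)   # false negatives
    return {
        "agreement": (tp + tn) / len(actual),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

stats = classification_stats([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1])
print(stats)  # agreement 4/6, sensitivity 2/3, specificity 2/3
```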

