977 results for Input Distance Function
Abstract:
This article uses a semiparametric smooth coefficient model (SPSCM) to estimate TFP growth and its components (scale and technical change). The SPSCM is derived from a nonparametric specification of the production technology represented by an input distance function (IDF), using a growth formulation. The functional coefficients of the SPSCM come naturally from the model and are fully flexible in the sense that no functional form of the underlying production technology is used to derive them. Another advantage of the SPSCM is that it can estimate bias (input and scale) in technical change in a fully flexible manner. We also use a translog IDF framework to estimate the TFP growth components. A panel of U.S. electricity generating plants for the period 1986–1998 is used for this purpose. Comparing estimated TFP growth results from both the parametric and semiparametric models against Divisia TFP growth, we conclude that the SPSCM performs best in tracking the temporal behavior of TFP growth.
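The Divisia TFP growth used as the benchmark above is commonly computed in its discrete Törnqvist form: revenue-share-weighted output growth minus cost-share-weighted input growth. The sketch below is illustrative only; the variable names and figures are hypothetical, not the article's data.

```python
# Tornqvist approximation of Divisia TFP growth between two periods:
# share-weighted output growth minus share-weighted input growth.
import numpy as np

def tornqvist_tfp_growth(y0, y1, r0, r1, x0, x1, s0, s1):
    out_growth = np.sum(0.5 * (r0 + r1) * (np.log(y1) - np.log(y0)))
    in_growth = np.sum(0.5 * (s0 + s1) * (np.log(x1) - np.log(x0)))
    return out_growth - in_growth

# Hypothetical plant: one output, three inputs (fuel, labor, capital)
y0, y1 = np.array([100.0]), np.array([104.0])
r0 = r1 = np.array([1.0])                  # single output: revenue share is 1
x0 = np.array([50.0, 30.0, 20.0]); x1 = np.array([51.0, 30.0, 20.5])
s0 = s1 = np.array([0.5, 0.3, 0.2])        # cost shares
print(tornqvist_tfp_growth(y0, y1, r0, r1, x0, x1, s0, s1))  # ~0.024
```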
Abstract:
In this paper we estimate a translog output distance function for a balanced panel of state-level data for the Australian dairy processing sector. We estimate a fixed effects specification employing Bayesian methods, with and without the imposition of monotonicity and curvature restrictions. Our results indicate that Tasmania and Victoria are the most technically efficient states, with New South Wales being the least efficient. The imposition of theoretical restrictions marginally affects the results, especially with respect to estimates of technical change and industry deregulation. Importantly, our bias estimates show changes in both input use and output mix that result from deregulation. Specifically, we find that deregulation has positively biased the production of butter, cheese and powders.
Abstract:
The estimated parameters of output distance functions frequently violate the monotonicity, quasi-convexity and convexity constraints implied by economic theory, leading to estimated elasticities and shadow prices that are incorrectly signed, and ultimately to perverse conclusions concerning the effects of input and output changes on productivity growth and relative efficiency levels. We show how a Bayesian approach can be used to impose these constraints on the parameters of a translog output distance function. Implementing the approach involves the use of a Gibbs sampler with data augmentation. A Metropolis-Hastings algorithm is also used within the Gibbs sampler to simulate observations from truncated pdfs. Our methods are developed for the case where panel data are available and technical inefficiency effects are assumed to be time-invariant. Two models, a fixed effects model and a random effects model, are developed and applied to panel data on 17 European railways. We observe significant changes in estimated elasticities and shadow price ratios when regularity restrictions are imposed.
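As a hedged illustration of the sampling machinery described above (not the authors' code; the target density here is a stand-in), a Metropolis-Hastings step inside a Gibbs sweep can enforce an inequality restriction by rejecting proposals that fall outside the admissible region:

```python
# Random-walk Metropolis-Hastings step for a conditional density truncated
# to (lo, hi), of the kind embedded inside a Gibbs sampler to impose
# regularity (inequality) restrictions on a parameter.
import numpy as np

rng = np.random.default_rng(0)

def mh_step_truncated(theta, log_density, lo, hi, step=0.1):
    """One MH update targeting log_density restricted to (lo, hi)."""
    prop = theta + step * rng.standard_normal()
    if not (lo < prop < hi):
        return theta                      # proposal violates the constraint
    if np.log(rng.uniform()) < log_density(prop) - log_density(theta):
        return prop
    return theta

# Example: sample a N(0,1) conditional truncated to (0, 1)
log_dens = lambda t: -0.5 * t**2
draws, theta = [], 0.5
for _ in range(5000):
    theta = mh_step_truncated(theta, log_dens, 0.0, 1.0)
    draws.append(theta)
print(np.mean(draws))                     # ~0.46 for N(0,1) on (0,1)
```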
Abstract:
A long-standing challenge of content-based image retrieval (CBIR) systems is the definition of a suitable distance function to measure the similarity between images in an application context in a way that complies with the human perception of similarity. In this paper, we present a new family of distance functions, called attribute concurrence influence distances (AID), which serve to retrieve images by similarity. These distances address an important aspect of the psychophysical notion of similarity in comparisons of images: the effect of concurrent variations in the values of different image attributes. The AID functions allow for comparisons of feature vectors by choosing one of two parameterized expressions: one targeting weak attribute concurrence influence and the other targeting strong concurrence influence. This paper presents the mathematical definition and implementation of the AID family for a two-dimensional feature space and its extension to any dimension. The composition of the AID family with the L_p distance family is considered in order to propose a procedure for determining the best distance for a specific application. Experimental results involving several sets of medical images demonstrate that, taking the perception of the specialist in the field (the radiologist) as reference, the AID functions perform better than the general distance functions commonly used in CBIR.
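The abstract does not reproduce the AID expressions, so only the L_p (Minkowski) family with which the AID functions are composed is sketched here, over illustrative feature vectors:

```python
# L_p (Minkowski) distance between two image feature vectors (p >= 1).
import numpy as np

def minkowski(u, v, p):
    return np.sum(np.abs(np.asarray(u) - np.asarray(v)) ** p) ** (1.0 / p)

u, v = [0.2, 0.7, 0.1], [0.5, 0.4, 0.1]
print(minkowski(u, v, 1))   # Manhattan: 0.6
print(minkowski(u, v, 2))   # Euclidean: ~0.424
```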
Abstract:
This thesis deals with distance transforms, a fundamental issue in image processing and computer vision. Two new distance transforms for gray-level images are presented, and as a new application, they are applied to gray-level image compression. Both new distance transforms extend the well-known distance transform algorithm developed by Rosenfeld, Pfaltz and Lay. With some modification, their algorithm, which calculates a distance transform on binary images with a chosen kernel, has been made to calculate a chessboard-like distance transform with integer values (DTOCS) and a real-valued distance transform (EDTOCS) on gray-level images. Both the DTOCS and the EDTOCS require only two passes over the gray-level image and are extremely simple to implement. Only two image buffers are needed: the original gray-level image and the binary image which defines the region(s) of calculation. No other image buffers are needed even if more than one iteration round is performed. For large neighborhoods and complicated images, the two-pass distance algorithm has to be applied to the image more than once, typically 3 to 10 times. Different types of kernels can be adopted. It is important to notice that no other existing transform calculates the same kind of distance map as the DTOCS. All other gray-weighted distance function algorithms (GRAYMAT, etc.) find the minimum path joining two points by the smallest sum of gray levels, or weight the distance values directly by the gray levels in some manner. The DTOCS does not weight them that way: it gives a weighted version of the chessboard distance map in which the weights are not constant but are the gray-value differences of the original image. The difference between the DTOCS map and other distance transforms for gray-level images is shown. The difference between the DTOCS and the EDTOCS is that the EDTOCS calculates these gray-level differences in a different way: it propagates local Euclidean distances inside a kernel. Analytical derivations of some results concerning the DTOCS and the EDTOCS are presented. Distance transforms are commonly used for feature extraction in pattern recognition and learning, but their use in image compression is very rare; this thesis introduces a new application area for them. Three new image compression algorithms based on the DTOCS and one based on the EDTOCS are presented. Control points, i.e. points considered fundamental for the reconstruction of the image, are selected from the gray-level image using the DTOCS and the EDTOCS. The first group of methods selects the maxima of the distance image as control points, and the second group compares the DTOCS distance to the binary-image chessboard distance. The effect of applying threshold masks of different sizes along the threshold boundaries is studied. The time complexity of the compression algorithms is analyzed both analytically and experimentally, and it is shown to be independent of the number of control points, i.e. of the compression ratio. A new morphological image decompression scheme, the 8 kernels' method, is also presented, and several decompressed images are shown. The best results are obtained using the Delaunay triangulation. The obtained image quality equals that of the DCT images with a 4 x 4
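A hedged sketch of the two-pass scheme described above, in the spirit of the DTOCS: the exact per-step cost is not spelled out in the abstract, so it is assumed here to be the absolute gray-value difference plus one (a weighted chessboard step), with the two passes repeated until the map converges, as the abstract suggests.

```python
# Two-pass chamfer-style distance transform on a gray-level image.
# Forward pass scans top-left to bottom-right using upper-left neighbors;
# backward pass scans in reverse using lower-right neighbors.
import numpy as np

def two_pass_graylevel_dt(gray, seeds, max_iters=10):
    """gray: 2-D array; seeds: boolean mask where distance is zero."""
    d = np.where(seeds, 0.0, np.inf)
    h, w = gray.shape
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # upper-left neighbors
    bwd = [(1, 1), (1, 0), (1, -1), (0, 1)]       # lower-right neighbors
    for _ in range(max_iters):
        prev = d.copy()
        for ys, xs, offs in ((range(h), range(w), fwd),
                             (range(h - 1, -1, -1), range(w - 1, -1, -1), bwd)):
            for i in ys:
                for j in xs:
                    for di, dj in offs:
                        ni, nj = i + di, j + dj
                        if 0 <= ni < h and 0 <= nj < w:
                            # assumed cost: gray-value difference plus one
                            step = abs(float(gray[i, j]) - float(gray[ni, nj])) + 1.0
                            d[i, j] = min(d[i, j], d[ni, nj] + step)
        if np.array_equal(prev, d):               # converged: stop iterating
            break
    return d
```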
Abstract:
In for-profit organizations, efficiency measurement with reference to the potential for profit augmentation is particularly important, as is its decomposition into technical and allocative components. Different profit efficiency approaches can be found in the literature to measure and decompose overall profit efficiency. In this paper, we highlight some problems within existing approaches and propose a new measure of profit efficiency based on a geometric mean of the input/output adjustments needed for maximizing profits. Overall profit efficiency is calculated through this efficiency measure and is decomposed into its technical and allocative components. Technical efficiency is calculated based on a non-oriented geometric distance function (GDF) that is able to incorporate all the sources of inefficiency, while allocative efficiency is retrieved residually. We also define a measure of profitability efficiency which complements profit efficiency in that it makes it possible to retrieve the scale efficiency of a unit as a component of its profitability efficiency. In addition, the measure of profitability efficiency allows for a dual profitability interpretation of the GDF measure of technical efficiency. The concepts introduced in the paper are illustrated using a numerical example.
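The GDF itself is not reproduced in the abstract; as a hedged sketch of the general form only (the paper's exact definition may differ), a non-oriented geometric-mean measure of this type, with input contractions $\theta_i$ and output expansions $\beta_r$ that project the unit onto the frontier, can be written as

$$\mathrm{GDF} \;=\; \frac{\left(\prod_{i=1}^{m} \theta_i\right)^{1/m}}{\left(\prod_{r=1}^{s} \beta_r\right)^{1/s}},$$

so that $\mathrm{GDF} = 1$ indicates a technically efficient unit and smaller values indicate inefficiency.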
Abstract:
The paper investigates the efficiency of a sample of Islamic and conventional banks in 10 countries that operate Islamic banking for the period 1996–2002, using an output distance function approach. We obtain measures of efficiency after allowing for environmental influences such as country macroeconomic conditions, accessibility of banking services and bank type. While these factors are assumed to directly influence the shape of the technology, we assume that country dummies and bank size directly influence technical inefficiency. The parameter estimates highlight that during the sample period, Islamic banking appears to be associated with higher input usage. Furthermore, by allowing for bank size and international differences in the underlying inefficiency distributions, we are also able to demonstrate statistically significant differences in inefficiency related to these factors even after controlling for specific environmental characteristics and Islamic banking. Thus, for example, our results suggest that Sudan and Yemen have relatively higher inefficiency while Bahrain and Bangladesh have lower estimated inefficiency. Except for Sudan, where banks exhibit relatively strong returns to scale, most sample banks exhibit very slight returns to scale, although Islamic banks are found to have moderately higher returns to scale than conventional banks. While this suggests that Islamic banks may benefit from increased scale, we would emphasize that our results suggest that identifying and overcoming the factors that cause Islamic banks to have relatively low potential outputs for given input usage levels will be the key challenge for Islamic banking in the coming decades.
Abstract:
The paper investigates the efficiency of a sample of Islamic and conventional banks in 10 countries that operate Islamic banking for the period 1996 to 2002, using an output distance function approach. We obtain measures of efficiency after allowing for environmental influences such as country macroeconomic conditions, accessibility of banking services and bank type. While these factors are assumed to directly influence the shape of the technology, we assume that country dummies directly influence technical inefficiency. The parameter estimates highlight that during the sample period, Islamic banking appears to be associated with higher input usage. Furthermore, by allowing for international differences in the underlying inefficiency distributions, we are also able to demonstrate statistically significant differences in efficiency across countries even after controlling for specific environmental characteristics and Islamic banking. Thus, for example, our results suggest that Sudan and Yemen have relatively higher inefficiency while Iran and Malaysia have lower estimated inefficiency. Except for Sudan, where banks exhibit relatively strong returns to scale, most sample banks exhibit very slight returns to scale, although Islamic banks are found to have moderately higher returns to scale than conventional banks. However, while this suggests that Islamic banks may benefit from increased scale, we would emphasize that our results suggest that identifying and overcoming the factors that cause Islamic banks to have relatively high input requirements will be the key challenge for Islamic banking in the coming decades.
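Both versions of this paper estimate the output distance function as a stochastic frontier. A hedged sketch of the usual estimating equation (imposing linear homogeneity in outputs by normalizing on one output $y_M$; the papers' exact specification may differ):

$$-\ln y_{M,it} \;=\; \mathrm{TL}\!\left(x_{it},\; y_{it}/y_{M,it};\,\alpha,\beta,\gamma\right) \;+\; v_{it} \;+\; u_{it}, \qquad u_{it} = -\ln D_O \ge 0,$$

where $\mathrm{TL}(\cdot)$ collects the first- and second-order log terms of a translog function, $v_{it}$ is statistical noise, and $u_{it}$ is the one-sided inefficiency term, here modeled as a function of country dummies (and, in the first version, bank size).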
An improved conflicting evidence combination approach based on a new supporting probability distance
Abstract:
To avoid the counter-intuitive results of the classical Dempster combination rule when dealing with highly conflicting information, many improved combination methods have been developed by modifying the basic probability assignments (BPAs) of bodies of evidence (BOEs) using some measure of the degree of conflict or uncertainty, such as Jousselme's distance, the pignistic probability distance and the ambiguity measure. However, if BOEs contain non-singleton elements and the differences among their BPAs are larger than 0.5, current conflict measures have limitations in describing the interrelationship among the conflicting BOEs and may even lead to wrong combination results. To solve this problem, a new distance function, called the supporting probability distance, is proposed to characterize the differences among BOEs. The new distance captures how much each focal element is supported by the other focal elements in the BOEs. A new combination rule based on the supporting probability distance is also proposed for combining conflicting evidence: the credibility and discounting factor of each BOE are generated from the supporting probability distance, and the weighted BOEs are combined directly using Dempster's rule. Analytical results of numerical examples show that the new distance has a better capability of describing the interrelationships among BOEs, especially for highly conflicting BOEs containing non-singleton elements, and that the proposed combination method has better applicability and effectiveness than existing methods.
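The supporting probability distance itself is new to the paper and is not reproduced here; the final combination step, however, is classical Dempster's rule applied to the discounted BPAs. A minimal sketch for two bodies of evidence, with focal elements represented as frozensets:

```python
# Classical Dempster's rule of combination for two BPAs given as
# {frozenset: mass} dicts: intersect focal elements, accumulate the
# conflict mass K, and renormalize by (1 - K).
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    if conflict >= 1.0:
        raise ValueError("total conflict: rule undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.9, B: 0.1}
m2 = {A: 0.1, B: 0.9}
print(dempster_combine(m1, m2))  # high conflict (K = 0.82), renormalized
```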
Abstract:
We propose a novel skeleton-based approach to gait recognition using our Skeleton Variance Image. The core of our approach consists of employing the screened Poisson equation to construct a family of smooth distance functions associated with a given shape. The screened Poisson distance function approximation absorbs shape boundary perturbations and is relatively stable under them, which allows us to define a rough shape skeleton. We demonstrate that our Skeleton Variance Image is a powerful gait cycle descriptor, leading to a significant improvement over the existing state-of-the-art gait recognition rate.
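One common construction of such a smooth distance (a hedged sketch; the paper's exact formulation may differ) solves the screened Poisson equation and applies a log transform:

$$\Delta u - \lambda^2 u = 0 \ \text{ in } \Omega, \qquad u = 1 \ \text{ on } \partial\Omega, \qquad d_\lambda(x) = -\tfrac{1}{\lambda}\ln u(x),$$

where $d_\lambda$ converges to the true distance to the boundary as $\lambda \to \infty$, while smaller $\lambda$ yields smoother fields that are more stable under boundary perturbations.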
Abstract:
Standard tools for the analysis of economic problems involving uncertainty, including risk premiums, certainty equivalents and the notions of absolute and relative risk aversion, are developed without making specific assumptions on functional form beyond the basic requirements of monotonicity, transitivity, continuity, and the presumption that individuals prefer certainty to risk. Individuals are not required to display probabilistic sophistication. The approach relies on the distance and benefit functions to characterize preferences relative to a given state-contingent vector of outcomes. The distance and benefit functions are used to derive absolute and relative risk premiums and to characterize preferences exhibiting constant absolute risk aversion (CARA) and constant relative risk aversion (CRRA). A generalization of the notion of Schur-concavity is presented. If preferences are generalized Schur-concave, the absolute and relative risk premiums are generalized Schur-convex, and the certainty equivalents are generalized Schur-concave.
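In the standard notation this approach builds on (a hedged restatement, not the paper's exact formulas), with $e(y)$ the certainty equivalent of a state-contingent outcome vector $y$ and $\bar{y}$ its mean, the absolute risk premium and the functional-form-free characterizations of CARA and CRRA are

$$\pi_a(y) = \bar{y} - e(y), \qquad \text{CARA: } e(y + \delta\mathbf{1}) = e(y) + \delta \ \ \forall\,\delta, \qquad \text{CRRA: } e(\lambda y) = \lambda\, e(y) \ \ \forall\,\lambda > 0,$$

so CARA preferences leave the absolute risk premium unchanged under translations of the outcome vector, and CRRA preferences scale the certainty equivalent proportionally.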
Abstract:
The present study investigated the influence of wrinkles on facial age judgments. In Experiment 1, preadolescents, young adults, and middle-aged adults made categorical age judgments for male and female faces. The qualitative (type of wrinkle) and quantitative (density of wrinkles and depth of furrows) contributions of wrinkles were analyzed. Results indicated that the greater the number of wrinkles and the depth of furrows, the older a face was rated. The roles of the gender of the face and the age of the participants were discussed. In Experiment 2, participants performed relative age judgments by comparing pairs of faces. Results revealed that the number of wrinkles had more influence on the perceived facial age than the type of wrinkle. An MDS analysis showed the main dimensions on which participants based their judgments, namely, the number of wrinkles and the depth of furrows. We conclude that the quantitative component is more likely to increase perceived facial age. Nevertheless, other variables, such as the gender of the face and the age of the participants, also seem to be involved in the age estimation process.
Abstract:
The early effects of a clinical dose of cisplatin (100 mg/m²) on distortion-product otoacoustic emission (DPOAE) thresholds, and the relationship between DPOAE threshold shifts and changes in plasma concentrations of filterable and total platinum (Pt) following infusion of cisplatin in a dog model, were investigated. The DPOAE thresholds (based on the input-output function) were measured 2 days before a single high-dose cisplatin administration and compared with measurements recorded 2 and 4 days after infusion. The results revealed DPOAE thresholds to be elevated by 4 days after the administration of cisplatin. However, this elevation could not be correlated with plasma concentrations of filterable and total Pt, which showed little variation across animals over the 48-hour postinfusion period. The present study demonstrated that DPOAE thresholds have the potential to be used as an indicator of cisplatin-induced ototoxicity, and that cisplatin-induced ototoxicity could not be explained by plasma Pt kinetics in individual animals.
Abstract:
A continuous random variable is expanded as a sum of a sequence of uncorrelated random variables. These variables are principal dimensions in continuous scaling on a distance function, as an extension of classical scaling on a distance matrix. For a particular distance, these dimensions are principal components. Some properties are then studied and an inequality is obtained. Diagonal expansions are considered from the same continuous scaling point of view, by means of the chi-square distance. The geometric dimension of a bivariate distribution is defined and illustrated with copulas. It is shown that the dimension can have the power of the continuum.
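The finite analogue of this construction is classical (Torgerson) scaling on a distance matrix, which the abstract extends to the continuous case. A minimal sketch with illustrative names:

```python
# Classical (Torgerson) scaling: double-center the squared distances and
# take the leading eigenvectors as principal dimensions.
import numpy as np

def classical_scaling(D, k=2):
    """D: (n, n) symmetric distance matrix; returns n x k coordinates."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]         # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))

# Example: points on a line are recovered up to sign and translation
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)
print(classical_scaling(D, k=1))
```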
Abstract:
The medium-term objective of this work is to explore some formulations of the problems of shape identification and surface recognition from point measurements. These problems have several important applications in medical imaging, biometrics, automated access security, and the identification of Lagrangian coherent structures in fluid mechanics, for example the problem of identifying the characteristics of the right hand or the face from one population to another, or the follow-up of a surgery from data generated by a 3-D scanner. The objective of this thesis is to prepare the ground by reviewing the various mathematical tools available for treating geometry as an optimization or identification variable. For surface identification, we explore the use of distance or oriented distance functions and of level sets, as in S. Osher and R. Fedkiw; for surface comparison, we present the constructions of the Courant metrics by A. M. Micheletti in 1972 and the point of view of R. Azencott and A. Trouvé in 1995, which consists in generating deformations of a reference surface via a family of diffeomorphisms. Emphasis is placed on the underlying mathematical foundations, which we have tried to clarify where necessary, and, where appropriate, on the exploration of other avenues.
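As a toy illustration of the oriented distance function mentioned above (not taken from the thesis), the oriented distance b_Omega(x) = d(x, Omega) - d(x, complement of Omega) is negative inside the shape, zero on the boundary, and positive outside; for a disc it has a closed form:

```python
# Oriented (signed) distance function for a disc of radius r centred at c:
# it reduces to ||x - c|| - r.
import numpy as np

def oriented_distance_disc(x, c, r):
    return np.linalg.norm(np.asarray(x) - np.asarray(c)) - r

c, r = (0.0, 0.0), 1.0
print(oriented_distance_disc((2.0, 0.0), c, r))   #  1.0 (outside)
print(oriented_distance_disc((0.5, 0.0), c, r))   # -0.5 (inside)
```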