18 results for Context data
at University of Connecticut - USA
Abstract:
Lovell and Rouse (LR) have recently proposed a modification of the standard DEA model that overcomes the infeasibility problem often encountered in computing super-efficiency. In the LR procedure one appropriately scales up the observed input vector (scales down the output vector) of the relevant super-efficient firm, thereby usually creating an inefficient surrogate for it. An alternative procedure proposed in this paper uses the directional distance function introduced by Chambers, Chung, and Färe and the resulting Nerlove-Luenberger (NL) measure of super-efficiency. Because the directional distance function combines features of both an input-oriented and an output-oriented model, it generally leads to a more complete ranking of the observations than either of the oriented models. An added advantage of this approach is that the NL super-efficiency measure is unique and does not depend on any arbitrary choice of a scaling parameter. A data set on international airlines from Coelli, Perelman, and Grifell-Tatjé (2002) is utilized in an illustrative empirical application.
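For readers unfamiliar with the notation, a sketch of the directional distance function of Chambers, Chung, and Färe and of the kind of exclude-the-evaluated-firm LP that underlies super-efficiency scores is given below; the symbols and the VRS assumption are illustrative and are not taken from the paper itself.

```latex
% Directional distance function (Chambers, Chung, and F\"are):
\vec{D}(x, y; g_x, g_y) = \max \{\, \beta : (x - \beta g_x,\; y + \beta g_y) \in T \,\}
% Illustrative super-efficiency LP for firm k with direction g = (x_k, y_k),
% excluding firm k from the reference set (VRS technology assumed):
\begin{aligned}
\max_{\beta,\,\lambda} \quad & \beta \\
\text{s.t.} \quad & \textstyle\sum_{j \neq k} \lambda_j y_{rj} \;\ge\; (1+\beta)\, y_{rk} \quad \forall r, \\
& \textstyle\sum_{j \neq k} \lambda_j x_{ij} \;\le\; (1-\beta)\, x_{ik} \quad \forall i, \\
& \textstyle\sum_{j \neq k} \lambda_j = 1, \qquad \lambda_j \ge 0 .
\end{aligned}
% A negative optimal beta signals a super-efficient firm (it lies beyond the
% frontier spanned by the remaining firms); the NL super-efficiency score is
% reported as a function of this beta.
```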
Abstract:
We are confident of many of the judgements we make as to what sorts of alterations the members of nature’s kinds can survive, and what sorts of events mark the ends of their existences. But is our confidence based on empirical observation of nature’s kinds and their members? Conventionalists deny that we can learn empirically which properties are essential to the members of nature’s kinds. Judgements of sameness in kind between members, and of numerical sameness of a member across time, merely project our conventions of individuation. Our confidence is warranted because apart from those conventions there are no phenomena of kind-sameness or of numerical sameness across time. There is just “stuff” displaying properties. This paper argues that conventionalists can assign no properties to the “stuff” beyond immediate phenomenal properties. Consequently they cannot explain how each of us comes to be able to wield “our conventions”.
Abstract:
This research examines the site and situation characteristics of community trails as landscapes promoting physical activity. Trail segment and neighborhood characteristics for six trails in urban, suburban, and exurban towns in northeastern Massachusetts were assessed from primary Global Positioning System (GPS) data and from secondary Census and land use data integrated in a geographic information system (GIS). Correlations between neighborhood characteristics (street and housing density, land use mix, and sociodemographic composition) and trail segment characteristics and amenities measure the degree to which trail segment attributes are associated with the characteristics of the surrounding neighborhood.
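As an illustration of the kind of correlation analysis described, the following is a minimal Python sketch with entirely hypothetical variable names and values; it is not the study's actual data or coding.

```python
import pandas as pd

# Hypothetical trail-segment table; column names and values are illustrative.
segments = pd.DataFrame({
    "street_density": [12.1, 8.4, 15.3, 5.2],
    "housing_density": [410, 290, 520, 150],
    "land_use_mix":    [0.62, 0.48, 0.71, 0.35],
    "median_income":   [58000, 72000, 51000, 83000],
    "amenity_count":   [5, 2, 7, 1],          # benches, signage, etc.
    "segment_quality": [3.8, 2.9, 4.2, 2.1],  # audit score
})

# Correlate neighborhood characteristics with trail-segment attributes.
neighborhood = ["street_density", "housing_density", "land_use_mix", "median_income"]
trail = ["amenity_count", "segment_quality"]
print(segments[neighborhood + trail].corr().loc[neighborhood, trail])
```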
Abstract:
Digital terrain models (DTM) typically contain large numbers of postings, from hundreds of thousands to billions. Many algorithms that run on DTMs require topological knowledge of the postings, such as finding nearest neighbors, finding the posting closest to a chosen location, etc. If the postings are arranged irregularly, topological information is costly to compute and to store. This paper offers a practical approach to organizing and searching irregularly spaced data sets by presenting a collection of efficient algorithms (O(N), O(lg N)) that compute important topological relationships with only a simple supporting data structure. These relationships include finding the postings within a window, locating the posting nearest a point of interest, finding the neighborhood of postings nearest a point of interest, and ordering the neighborhood counter-clockwise. These algorithms depend only on two sorted arrays of two-element tuples, each holding a planimetric coordinate and an integer identification number indicating which posting the coordinate belongs to. There is one array for each planimetric coordinate (eastings and northings). These two arrays cost minimal overhead to create and store but permit the data to remain arranged irregularly.
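The supporting data structure is simple enough to sketch. Below is a minimal Python illustration, with made-up sample postings and simplified function names, of how sorted (coordinate, id) arrays support a window query and counter-clockwise ordering of a neighborhood; it is not the paper's implementation.

```python
import bisect
import math

# One sorted array of (coordinate, posting_id) tuples per planimetric axis.
postings = [(3.2, 7.9), (1.5, 2.1), (4.8, 6.0), (2.9, 3.3), (0.7, 5.5)]  # (easting, northing)
east_idx = sorted((e, i) for i, (e, _) in enumerate(postings))

def window_query(e_min, e_max, n_min, n_max):
    """Postings inside an axis-aligned window: O(lg N) binary searches on the
    easting array bound the candidate range, then a filter on northings."""
    lo = bisect.bisect_left(east_idx, (e_min, -1))
    hi = bisect.bisect_right(east_idx, (e_max, len(postings)))
    return [i for _, i in east_idx[lo:hi] if n_min <= postings[i][1] <= n_max]

def order_ccw(ids, center):
    """Order a neighborhood of postings counter-clockwise around a point."""
    cx, cy = center
    return sorted(ids, key=lambda i: math.atan2(postings[i][1] - cy, postings[i][0] - cx))

hits = window_query(1.0, 3.5, 2.0, 8.0)
print(hits, order_ccw(hits, (2.5, 5.0)))
```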
Abstract:
Many datasets used by economists and other social scientists are collected by stratified sampling. The sampling scheme used to collect the data induces a probability distribution on the observed sample that differs from the target or underlying distribution for which inference is to be made. If this effect is not taken into account, subsequent statistical inference can be seriously biased. This paper shows how to do efficient semiparametric inference in moment restriction models when data from the target population is collected by three widely used sampling schemes: variable probability sampling, multinomial sampling, and standard stratified sampling.
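To make the bias and its correction concrete, here is a minimal numerical sketch of inverse-probability weighting under variable probability sampling; this is a textbook device rather than the paper's semiparametric estimator, and all numbers are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target population: estimate the mean of y via the moment E[y - theta] = 0.
y_pop = rng.normal(10.0, 2.0, size=100_000)

# Variable probability sampling: retention probability depends on y itself,
# so the raw sample mean is biased for the population mean.
p_keep = np.clip(0.1 + 0.08 * (y_pop - y_pop.min()), 0.0, 1.0)
kept = rng.random(y_pop.size) < p_keep
y_s, p_s = y_pop[kept], p_keep[kept]

naive = y_s.mean()                              # biased
ipw   = np.sum(y_s / p_s) / np.sum(1.0 / p_s)   # inverse-probability weighted
print(f"population {y_pop.mean():.3f}  naive {naive:.3f}  weighted {ipw:.3f}")
```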
Abstract:
Despite the extensive work on currency mismatches, research on the determinants and effects of maturity mismatches is scarce. In this paper I show that emerging market maturity mismatches are negatively affected by capital inflows and price volatilities. Furthermore, I find that banks with low maturity mismatches are more profitable during crisis periods but less profitable otherwise. The latter result implies that banks face a tradeoff between higher returns and risk; hence the channeling of short-term capital into long-term loans is caused by cronyism and implicit guarantees rather than by the depth of the financial market. The positive relationship between maturity mismatches and price volatility, on the other hand, shows that the banks of countries with high exchange rate and interest rate volatilities cannot, or choose not to, hedge themselves. These results follow from a panel regression on a data set I constructed by merging bank-level data with aggregate data, which is advantageous over traditional studies that focus only on aggregate data.
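As a rough illustration of the data construction and estimation strategy (not the paper's actual variables or specification), the following Python sketch merges hypothetical bank-level and aggregate tables and fits a simple fixed-effects regression.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical bank-level and country-level (aggregate) tables; all names
# and values are illustrative.
banks = pd.DataFrame({
    "bank": ["A", "A", "B", "B", "C", "C"],
    "country": ["TR", "TR", "TR", "TR", "MX", "MX"],
    "year": [2000, 2001, 2000, 2001, 2000, 2001],
    "maturity_mismatch": [0.42, 0.51, 0.35, 0.40, 0.55, 0.61],
})
macro = pd.DataFrame({
    "country": ["TR", "TR", "MX", "MX"],
    "year": [2000, 2001, 2000, 2001],
    "capital_inflows": [3.1, 4.2, 2.0, 2.4],   # % of GDP
    "fx_volatility":   [0.08, 0.12, 0.05, 0.06],
})

# Merge bank-level observations with aggregate (country-year) data.
panel = banks.merge(macro, on=["country", "year"], how="left")

# Pooled regression with country fixed effects as a stand-in for a panel
# specification relating mismatches to inflows and volatility.
fit = smf.ols("maturity_mismatch ~ capital_inflows + fx_volatility + C(country)",
              data=panel).fit()
print(fit.params)
```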
Abstract:
A problem frequently encountered in Data Envelopment Analysis (DEA) is that the total number of inputs and outputs included tends to be too large relative to the sample size. One way to counter this problem is to combine several inputs (or outputs) into (meaningful) aggregate variables, thereby reducing the dimension of the input (or output) vector. A direct effect of input aggregation is to reduce the number of constraints. This, in turn, alters the optimal value of the objective function. In this paper, we show how a statistical test proposed by Banker (1993) may be applied to test the validity of a specific way of aggregating several inputs. An empirical application using data from Indian manufacturing for the year 2002-03 is included as an example of the proposed test.
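To see why aggregation changes the LP, the sketch below sets up a generic input-oriented, constant-returns DEA program in Python with made-up data; each input contributes one constraint row, so combining inputs shrinks the constraint set. This is an illustration only, not the paper's model or Banker's (1993) test itself.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up data: rows are firms, columns are inputs/outputs.
X = np.array([[2.0, 4.0], [3.0, 2.0], [5.0, 6.0], [4.0, 3.0]])  # firms x inputs
Y = np.array([[1.0], [1.2], [2.0], [1.5]])                      # firms x outputs
n, m = X.shape
s = Y.shape[1]

def dea_score(k):
    """Input-oriented CRS efficiency score of firm k."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input rows (one per input):  sum_j lambda_j x_ij - theta * x_ik <= 0
    A_in = np.c_[-X[k].reshape(m, 1), X.T]
    # Output rows (one per output): -sum_j lambda_j y_rj <= -y_rk
    A_out = np.c_[np.zeros((s, 1)), -Y.T]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[k]],
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

print([round(dea_score(k), 3) for k in range(n)])
```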
Abstract:
This paper extends the existing research on real estate investment trust (REIT) operating efficiencies. We estimate a stochastic-frontier panel-data model specifying a translog cost function, covering 1995 to 2003. The results disagree with previous research in that we find little evidence of scale economies and some evidence of scale diseconomies. Moreover, we also generally find smaller inefficiencies than those shown by other REIT studies. Contrary to previous research, the results also show that self-management of a REIT is associated with more inefficiency when we measure output with assets. When we use revenue to measure output, self-management is associated with less inefficiency. Also contrary to previous research, higher leverage is associated with more efficiency. The results further suggest that inefficiency increases over time in three of our four specifications.
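For reference, a generic translog stochastic cost frontier of the kind estimated in such studies takes the form sketched below; the symbols are illustrative and not the paper's exact specification.

```latex
% Generic translog stochastic cost frontier (firm i, period t; symbols illustrative):
\ln C_{it} = \alpha_0 + \sum_k \alpha_k \ln w_{kit} + \beta_y \ln y_{it}
  + \tfrac{1}{2}\sum_k \sum_l \gamma_{kl} \ln w_{kit} \ln w_{lit}
  + \tfrac{1}{2}\beta_{yy} (\ln y_{it})^2
  + \sum_k \rho_{ky} \ln w_{kit} \ln y_{it} + v_{it} + u_{it}
% v is statistical noise, u >= 0 is cost inefficiency; scale economies are
% read off the cost elasticity  \partial \ln C / \partial \ln y.
```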
Abstract:
This paper evaluates inflation targeting and assesses its merits by comparing alternative targets in a macroeconomic model. We use European aggregate data to evaluate the performance of alternative policy rules under alternative inflation targets in terms of output losses. We employ two major alternative policy rules, forward-looking and spontaneous adjustment, and three alternative inflation targets: zero percent, two percent, and four percent inflation rates. The simulation findings suggest that forward-looking rules contribute to macroeconomic stability and increase monetary policy credibility. The superiority of a positive inflation target, in terms of output losses, emerges for the aggregate data. The same methodology, when applied to individual countries, however, suggests that country-specific flexible inflation targeting can improve employment prospects in Europe.
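As a point of reference, a forward-looking (Taylor-type) rule and a quadratic output-loss criterion of the generic form used in this literature are sketched below; the notation is illustrative and not the paper's calibration.

```latex
% Illustrative forward-looking rule: the policy rate responds to expected
% inflation deviations from the target pi* and to the output gap.
i_t = \bar r + \pi^* + \phi_\pi \left( E_t\,\pi_{t+1} - \pi^* \right) + \phi_y\, \tilde y_t
% Output losses under a given target are commonly summarized by a quadratic loss:
L = E \left[ (\pi_t - \pi^*)^2 + \lambda\, \tilde y_t^{\,2} \right]
```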
Abstract:
The Indian textiles industry is now at a crossroads with the phasing out of the quota regime that prevailed under the Multi-Fiber Agreement (MFA) until the end of 2004. In the face of a full integration of the textiles sector into the WTO, maintaining and enhancing productive efficiency is a precondition for the competitiveness of Indian firms in the newly liberalized world market. In this paper we use data obtained from the Annual Survey of Industries for a number of years to measure the levels of technical efficiency in the Indian textiles industry at the firm level. We use both a grand frontier applicable to all firms and a group frontier specific to firms from any individual state, ownership, or organization type in order to evaluate their efficiencies. This permits us to separately identify how locational, proprietary, and organizational characteristics of a firm affect its performance.
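One common way to relate group and grand frontiers in this literature is the metafrontier-style decomposition sketched below; the notation is illustrative, and the paper's own measures may differ.

```latex
% A firm's efficiency against the grand frontier factors into its efficiency
% against its own group's frontier and a technology gap ratio (TGR):
TE^{\text{grand}}(x, y) \;=\; TE^{\text{group}}(x, y) \times TGR(x, y),
\qquad TGR(x, y) \;=\; \frac{TE^{\text{grand}}(x, y)}{TE^{\text{group}}(x, y)} \;\le\; 1
% so performance relative to peers is separated from the gap between the
% group frontier (state, ownership, or organization type) and the grand frontier.
```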
Abstract:
We present a framework for fitting multiple random walks to animal movement paths consisting of ordered sets of step lengths and turning angles. Each step and turn is assigned to one of a number of random walks, each characteristic of a different behavioral state. Behavioral state assignments may be inferred purely from movement data or may include the habitat type in which the animals are located. Switching between different behavioral states may be modeled explicitly using a state transition matrix estimated directly from data, or switching probabilities may take into account the proximity of animals to landscape features. Model fitting is undertaken within a Bayesian framework using the WinBUGS software. These methods allow for identification of different movement states using several properties of observed paths and lead naturally to the formulation of movement models. Analysis of relocation data from elk released in east-central Ontario, Canada, suggests a biphasic movement behavior: elk are either in an "encamped" state in which step lengths are small and turning angles are high, or in an "exploratory" state, in which daily step lengths are several kilometers and turning angles are small. Animals encamp in open habitat (agricultural fields and opened forest), but the exploratory state is not associated with any particular habitat type.
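To illustrate the kind of biphasic movement process described, here is a toy Python simulation of a two-state switching random walk governed by a state transition matrix; all parameter values are invented, and this is not the authors' WinBUGS model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two behavioral states: 0 = "encamped" (short steps, wide turns),
# 1 = "exploratory" (long steps, small turns). Values are illustrative.
P = np.array([[0.85, 0.15],    # state transition matrix (rows = current state)
              [0.10, 0.90]])
step_scale = [0.05, 3.0]       # mean step length per state (km)
turn_kappa = [0.5, 8.0]        # von Mises concentration: low = wide turning angles

state, heading, pos = 0, 0.0, np.zeros(2)
track = [pos.copy()]
for _ in range(200):
    state = rng.choice(2, p=P[state])                 # switch behavioral state
    heading += rng.vonmises(0.0, turn_kappa[state])   # draw a turning angle
    step = rng.exponential(step_scale[state])         # draw a step length
    pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
    track.append(pos.copy())

track = np.array(track)
print(track[:5])
```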