920 results for nonparametric inference
Abstract:
This paper empirically estimates and analyzes various efficiency scores of Indian banks during 1997-2003 using data envelopment analysis (DEA). During the 1990s India's financial sector underwent a process of gradual liberalization aimed at strengthening and improving the operational efficiency of the financial system. It is observed, nonetheless, that Indian banks are still not much differentiated in terms of input- or output-oriented technical efficiency and cost efficiency. However, they differ sharply with respect to revenue and profit efficiencies. The results provide interesting insight into the empirical correlates of the efficiency scores of Indian banks. Bank size, ownership, and stock-exchange listing are among the factors found to have a positive impact on average profit efficiency and, to some extent, on revenue efficiency scores. Finally, we observe that the median efficiency scores of Indian banks in general, and of bigger banks in particular, improved considerably during the post-reform period.
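For readers unfamiliar with DEA, the following is a minimal sketch of the input-oriented, constant-returns-to-scale (CCR) model that underlies technical-efficiency scores of this kind. The toy bank data, the input/output choices, and the `dea_input_efficiency` helper are illustrative assumptions, not the paper's actual specification.

```python
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, o):
    """Input-oriented CRS (CCR) efficiency score of unit o.

    X: (n_units, n_inputs) input matrix, Y: (n_units, n_outputs) output matrix.
    Solves: min theta  s.t.  X'lam <= theta * x_o,  Y'lam >= y_o,  lam >= 0.
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                  # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                          # sum_j lam_j * x_ij <= theta * x_oi
        A_ub.append(np.r_[-X[o, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                          # sum_j lam_j * y_rj >= y_or
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[o, r])
    bounds = [(None, None)] + [(0, None)] * n   # theta free, lambdas >= 0
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical data: 4 banks, inputs = (staff, deposits), output = loans
X = np.array([[5.0, 40.0], [8.0, 55.0], [6.0, 45.0], [9.0, 70.0]])
Y = np.array([[30.0], [40.0], [32.0], [50.0]])
print([round(dea_input_efficiency(X, Y, o), 3) for o in range(len(X))])
```

Each unit receives a score theta <= 1, with theta = 1 meaning that no convex combination of peer banks can produce its outputs with proportionally fewer inputs.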
Abstract:
In this paper we use the 2004-05 Annual Survey of Industries data to estimate the levels of cost efficiency of Indian manufacturing firms in the various states, and also to obtain state-level measures of industrial organization (IO) efficiency. The empirical results show the presence of considerable cost inefficiency in a majority of the states. Further, we find that, on average, Indian firms are too small; consolidating them to attain the optimal scale would further enhance efficiency and lower average cost.
Abstract:
The Indian textiles industry now stands at a crossroads with the phasing out of the quota regime that prevailed under the Multi-Fibre Arrangement (MFA) until the end of 2004. In the face of the full integration of the textiles sector into the WTO, maintaining and enhancing productive efficiency is a precondition for the competitiveness of Indian firms in the newly liberalized world market. In this paper we use data obtained from the Annual Survey of Industries for a number of years to measure the levels of technical efficiency in the Indian textiles industry at the firm level. We use both a grand frontier applicable to all firms and a group frontier specific to firms from any individual state, ownership, or organization type in order to evaluate their efficiencies. This permits us to identify separately how locational, proprietary, and organizational characteristics of a firm affect its performance.
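The abstract does not spell out how the two frontiers are combined, but the standard metafrontier decomposition (an assumption on our part about the setup) relates the two efficiency scores through a technology gap ratio:

```latex
\[
  TE^{\mathrm{grand}}(x,y) \;=\; TE^{\mathrm{group}}(x,y)\,\times\, TGR(x,y),
  \qquad 0 < TGR(x,y) \le 1 ,
\]
```

so a firm can be fully efficient within its own state, ownership, or organization group (TE^group = 1) and still lie below the grand frontier whenever its group's technology lags (TGR < 1).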
Abstract:
Widely publicized reports of fresh MBAs receiving multiple job offers with six-figure annual salaries leave a long-lasting impression of the high quality of selected business schools. While such spectacular placement success rightly deserves recognition, one should not lose sight of the resources expended to accomplish this result. In this study, we employ a measure of Pareto-Koopmans global efficiency to evaluate the efficiency levels of the MBA programs in Business Week's top-rated list. We compute input- and output-oriented radial and non-radial efficiency measures for comparison. Among the three tier groups, schools from a higher tier are on average more efficient than those from lower tiers, although variations in efficiency levels do occur within the same tier, and these variations persist across the different measures of efficiency.
Abstract:
This paper develops a nonparametric method for obtaining the minimum of the long-run average cost curve of a firm, which defines its capacity output. This provides a benchmark for measuring capacity utilization at the firm's observed output level. In the case of long-run constant returns to scale, the minimum of the short-run average cost curve is determined instead to measure short-run capacity utilization. An empirical application measures yearly rates of capacity utilization in U.S. manufacturing over the period 1968-1998. Nonparametric determination of the short-run average cost curve under variable returns to scale using an iterative search procedure is described in an appendix to the paper.
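In outline (the notation here is ours, not the paper's), capacity output is the output level at which the average cost curve attains its minimum, and capacity utilization compares observed output with that benchmark:

```latex
\[
  y^{*} \;=\; \operatorname*{arg\,min}_{y>0}\ \frac{C(y)}{y},
  \qquad
  CU \;=\; \frac{y^{\mathrm{obs}}}{y^{*}} ,
\]
```

with CU < 1 signalling excess capacity; under long-run constant returns to scale the same construction is applied to the short-run average cost curve.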
Abstract:
In Part One, the foundations of Bayesian inference are reviewed and the technicalities of the Bayesian method are illustrated. Part Two applies a Bayesian meta-analysis program, the Confidence Profile Method (CPM), to clinical trial data and evaluates the merits of using Bayesian meta-analysis for overviews of clinical trials. The Bayesian meta-analysis produced results similar to the classical ones because of the large sample size, together with the use of a non-preferential prior probability distribution. These results were anticipated by the explanations in Part One of the mechanics of the Bayesian approach.
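A minimal sketch of why a near-flat prior reproduces the classical pooled estimate, using a conjugate normal-normal fixed-effect model; the trial effects and variances below are hypothetical, and this is a generic illustration, not the Confidence Profile Method itself.

```python
import numpy as np

def bayes_pooled_effect(effects, variances, prior_mean=0.0, prior_var=1e6):
    """Conjugate normal-normal pooling of trial effect estimates.

    With a nearly flat prior (huge prior_var), the posterior mean tends to
    the classical inverse-variance weighted estimate, which illustrates why
    the Bayesian and classical overviews agree for large samples."""
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    post_prec = 1.0 / prior_var + np.sum(1.0 / variances)   # posterior precision
    post_mean = (prior_mean / prior_var
                 + np.sum(effects / variances)) / post_prec
    return post_mean, 1.0 / post_prec                       # mean, variance

# Hypothetical log-odds-ratio estimates and variances from three trials
theta = [-0.30, -0.10, -0.25]
v = [0.04, 0.02, 0.05]
bayes_mean, _ = bayes_pooled_effect(theta, v)
classical = np.sum(np.asarray(theta) / v) / np.sum(1.0 / np.asarray(v))
print(bayes_mean, classical)   # nearly identical
```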
Abstract:
Embedded context management in resource-constrained devices (e.g. mobile phones, autonomous sensors, or smart objects) imposes special requirements in terms of lightness for data modelling and reasoning. In this paper, we explore the state of the art in data representation and reasoning tools for embedded mobile reasoning and propose a light inference system (LIS) that aims to simplify embedded inference processes by offering a set of functionalities that avoid redundancy in context management operations. The system is part of a service-oriented mobile software framework, conceived to facilitate the creation of context-aware applications; it decouples sensor data acquisition and context processing from the application logic. LIS, composed of several modules, encapsulates existing lightweight tools for ontology data management and rule-based reasoning, and it is ready to run on Java-enabled handheld devices. Data management and reasoning processes are designed to handle a general ontology that enables communication among framework components. Both the applications running on top of the framework and the framework components themselves can configure the rule and query sets in order to retrieve the information they need from LIS. In order to test the features of LIS in a real application scenario, an 'Activity Monitor' has been designed and implemented: a personal, health-persuasive application that provides feedback on the user's lifestyle, combining data from physical and virtual sensors. In this use case, LIS is used to evaluate the user's activity level in a timely manner, to decide on the convenience of triggering notifications, and to determine the best interface or channel through which to deliver these context-aware alerts.
Abstract:
In this work, we propose Seasonal Dynamic Factor Analysis (SeaDFA), an extension of Nonstationary Dynamic Factor Analysis through which one can deal with dimensionality reduction in vectors of time series in such a way that both common and specific components are extracted. Furthermore, the common factors are able to capture not only regular dynamics (stationary or not) but also seasonal ones, by following a multiplicative seasonal VARIMA(p, d, q) × (P, D, Q)_s model. Additionally, a bootstrap procedure that does not need a backward representation of the model is proposed, making it possible to draw inferences for all the parameters of the model. A bootstrap scheme developed for forecasting incorporates the uncertainty due to parameter estimation, allowing enhanced coverage of forecasting intervals. A challenging application is provided: the new model and bootstrap scheme are applied to an innovative subject in electricity markets, the computation of long-term point forecasts and prediction intervals of electricity prices. Several appendices with technical details, an illustrative example, and an additional table are available online as Supplementary Materials.
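The following is a much-simplified sketch of a residual bootstrap that, like the scheme described here, folds parameter-estimation uncertainty into forecast intervals. It uses a plain AR(1) rather than the SeaDFA model, and all names and data are our own illustrative assumptions.

```python
import numpy as np

def ar1_fit(y):
    """OLS fit of y_t = c + phi * y_{t-1} + e_t; returns (c, phi) and residuals."""
    X = np.column_stack([np.ones(len(y) - 1), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return beta, y[1:] - X @ beta

def bootstrap_forecast_interval(y, h=1, B=999, alpha=0.10, seed=0):
    """Residual-bootstrap h-step forecast interval for an AR(1).

    Each replicate rebuilds a pseudo-series and re-estimates (c, phi), so
    the interval reflects parameter uncertainty as well as innovation noise."""
    rng = np.random.default_rng(seed)
    (c, phi), resid = ar1_fit(y)
    fcasts = np.empty(B)
    for b in range(B):
        e = rng.choice(resid, size=len(y) - 1, replace=True)
        y_b = np.empty_like(y)
        y_b[0] = y[0]
        for t in range(1, len(y)):                 # rebuild a pseudo-series
            y_b[t] = c + phi * y_b[t - 1] + e[t - 1]
        (c_b, phi_b), resid_b = ar1_fit(y_b)       # re-estimated parameters
        f = y[-1]
        for _ in range(h):                         # simulate h steps ahead
            f = c_b + phi_b * f + rng.choice(resid_b)
        fcasts[b] = f
    return np.quantile(fcasts, [alpha / 2, 1 - alpha / 2])

# Hypothetical daily log-price series
y = 3.0 + 0.5 * np.cumsum(np.random.default_rng(1).normal(0.0, 0.1, 300))
print(bootstrap_forecast_interval(y, h=7))
```

Intervals built this way are typically wider, and closer to nominal coverage, than intervals that treat the estimated parameters as known.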
Abstract:
We propose an analysis for detecting procedures and goals that are deterministic (i.e., that produce at most one solution, at most once), or predicates whose clause tests are mutually exclusive (which implies that at most one of their clauses will succeed), even if they are not deterministic. The analysis takes advantage of the pruning operator to improve the detection of mutual exclusion and determinacy. It also supports arithmetic equations and disequations, as well as equations and disequations on terms, for which we give a complete satisfiability-testing algorithm w.r.t. the available type information. Information about determinacy can be used for program debugging and optimization, resource consumption and granularity control, abstraction-carrying code, etc. We have implemented the analysis and integrated it into the CiaoPP system, which also automatically infers the mode and type information that our analysis takes as input. Experiments performed on this implementation show that the analysis is fairly accurate and efficient.
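As a toy illustration of the mutual-exclusion test, restricted here to simple interval guards on a single numeric variable (the paper's satisfiability algorithm covers full arithmetic and term (dis)equations, which this sketch does not attempt):

```python
import math

# Represent clause guards on one variable X as half-open intervals of the
# values that satisfy them: 'X < k' -> (-inf, k), 'X >= k' -> [k, +inf).
def satisfying_interval(op, k):
    return {"<": (-math.inf, k), ">=": (k, math.inf)}[op]

def mutually_exclusive(test_a, test_b):
    """Two clause tests are mutually exclusive iff their conjunction is
    unsatisfiable, i.e. the satisfying intervals do not intersect."""
    lo_a, hi_a = satisfying_interval(*test_a)
    lo_b, hi_b = satisfying_interval(*test_b)
    return max(lo_a, lo_b) >= min(hi_a, hi_b)

# abs/2-style clauses guarded by 'X < 0' and 'X >= 0' can never both
# succeed, so at most one clause applies: the predicate is deterministic.
print(mutually_exclusive(("<", 0), (">=", 0)))    # True
print(mutually_exclusive(("<", 5), (">=", 0)))    # False: both hold on [0, 5)
```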
Abstract:
When mapping is formulated in a Bayesian framework, the need to specify a prior for the environment arises naturally. So far, however, the use of a particular structure prior has been coupled with working with a particular representation. We describe a system that supports inference with multiple priors while keeping the same dense representation. The priors are rigorously described by the user in a domain-specific language. Even though we work very close to the measurement space, we are able to represent structure constraints with the same expressivity as methods based on geometric primitives. This approach allows the intrinsic degrees of freedom of the environment's shape to be recovered. Experiments with simulated and real data sets are presented.
Abstract:
The properties of data and activities in business processes can be used to greatly facilitate several relevant tasks performed at design- and run-time, such as fragmentation, compliance checking, or top-down design. Business processes are often described using workflows. We present an approach for mechanically inferring business domain-specific attributes of workflow components (including data items, activities, and elements of sub-workflows), taking as starting point known attributes of workflow inputs and the structure of the workflow. We achieve this by modeling these components as concepts and applying sharing analysis to a Horn clause-based representation of the workflow. The analysis is applicable to workflows featuring complex control and data dependencies, embedded control constructs, such as loops and branches, and embedded component services.
Abstract:
Abstract is not available.
Abstract:
Abstract is not available.
Abstract:
RDB2RDF systems generate RDF from relational databases, operating in two different manners: materializing the database content into RDF or acting as virtual RDF datastores that transform SPARQL queries into SQL. In the former, inferences on the RDF data (taking into account the ontologies that they are related to) are normally done by the RDF triple store where the RDF data is materialised, and hence the results of the query answering process depend on the store. In the latter, existing RDB2RDF systems do not normally perform such inferences at query time. This paper shows how the algorithm used in the REQUIEM system, focused on handling run-time inferences for query answering, can be adapted to handle such inferences for query answering in combination with RDB2RDF systems.
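A toy illustration of the query-rewriting idea: an atomic class query is expanded with the subclasses entailed by the ontology, and each rewriting is translated to SQL over the mapped tables. The class names, tables, and mappings below are hypothetical, and real REQUIEM rewritings handle far richer description-logic axioms than the bare subclass hierarchy shown here.

```python
# Hypothetical ontology and R2RML-style table mappings
SUBCLASSES = {"Teacher": ["Professor", "Lecturer"]}   # Professor, Lecturer subClassOf Teacher
TABLE_FOR = {"Teacher": "teacher", "Professor": "professor", "Lecturer": "lecturer"}

def rewrite(cls):
    """Expand an atomic class query into itself plus all entailed subclasses,
    so instance inferences happen at query time rather than in the store."""
    out, todo = [], [cls]
    while todo:
        c = todo.pop()
        out.append(c)
        todo.extend(SUBCLASSES.get(c, []))
    return out

def to_sql(cls):
    """Translate the rewritten union of atomic queries into SQL over the
    mapped relational tables."""
    return "\nUNION ALL\n".join(f"SELECT id FROM {TABLE_FOR[c]}" for c in rewrite(cls))

# SPARQL 'SELECT ?x WHERE { ?x rdf:type :Teacher }' becomes:
print(to_sql("Teacher"))
```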