922 results for Data Envelopment Analysis (DEA), scale efficiency, technical efficiency
Direct and Indirect Measures of Capacity Utilization: A Nonparametric Analysis of U.S. Manufacturing
Abstract:
We measure the capacity output of a firm as the maximum amount producible by a firm given a specific quantity of the quasi-fixed input and an overall expenditure constraint for its choice of variable inputs. We compute this indirect capacity utilization measure for the total manufacturing sector in the US as well as for a number of disaggregated industries, for the period 1970-2001. We find considerable variation in capacity utilization rates both across industries and over years within industries. Our results suggest that the expenditure constraint was binding, especially in periods of high interest rates.
Abstract:
To date, big data applications have focused on the store-then-process paradigm. In this paper we describe an initiative to deal with big data applications over continuous streams of events. In many emerging applications, the volume of data being streamed is so large that the traditional 'store-then-process' paradigm is either not suitable or too inefficient. Moreover, soft real-time requirements may severely constrain the engineering solutions. Many scenarios fit this description. In network security for cloud data centres, for instance, very high volumes of IP packets and of events from sensors at firewalls, network switches, routers and servers must be analyzed so that attacks are detected in minimal time, in order to limit the effect of the malicious activity on the IT infrastructure. Similarly, in the fraud department of a credit card company, payment requests must be processed online, and as quickly as possible, in order to provide meaningful results in real time. An ideal system would detect fraud during the authorization process, which lasts hundreds of milliseconds, and deny the payment authorization, minimizing the damage to the user and to the credit card company.
Abstract:
Abstract interpretation-based data-flow analysis of logic programs is at this point relatively well understood from the point of view of general frameworks and abstract domains. On the other hand, comparatively little attention has been given to the problems which arise when the analysis of a full, practical dialect of the Prolog language is attempted, and only a few solutions to these problems have been proposed to date. Such problems relate to dealing correctly with all builtins, including meta-logical and extra-logical predicates, with dynamic predicates (where the program is modified during execution), and with the absence of certain program text during compilation. Existing proposals for dealing with such issues generally restrict in one way or another the classes of programs which can be analyzed if the information from the analysis is to be used for program optimization. This paper attempts to fill this gap by considering a full dialect of Prolog, essentially following the recently proposed ISO standard, pointing out the problems that may arise in the analysis of such a dialect, and proposing a combination of known and novel solutions that together allow the correct analysis of arbitrary programs using the full power of the language.
Abstract:
We present a novel analysis for relating the sizes of terms and subterms occurring at different argument positions in logic predicates. We extend and enrich the concept of sized type as a representation that incorporates structural (shape) information and allows expressing both lower and upper bounds on the size of a set of terms and their subterms at any position and depth; for example, expressing bounds on the length of lists of numbers together with bounds on the values of all of their elements. The analysis is developed using abstract interpretation, and the novel abstract operations are based on setting up and solving recurrence relations between sized types. It has been integrated, together with novel resource usage and cardinality analyses, into the abstract interpretation framework of the Ciao preprocessor, CiaoPP, in order to assess both the accuracy of the new size analysis and its usefulness in the resource usage estimation application. We show that the proposed sized types are a substantial improvement over the previous size analyses present in CiaoPP, and also benefit the resource analysis considerably, allowing the inference of equal or better bounds than comparable state-of-the-art systems.
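The sized types this abstract describes can be pictured, very roughly, as paired intervals: bounds on the length of a list plus bounds on the values of its elements. The following Python sketch is purely illustrative (it is not CiaoPP's actual representation; the names `Interval`, `SizedList`, and `concat` are invented here) and shows one abstract operation, concatenation, where lengths add and element bounds are joined:

```python
from typing import NamedTuple

class Interval(NamedTuple):
    lo: int  # lower bound
    hi: int  # upper bound

class SizedList(NamedTuple):
    length: Interval  # bounds on the list's length
    elems: Interval   # bounds on every element's value

def concat(a: SizedList, b: SizedList) -> SizedList:
    """Abstract concatenation: lengths add, element intervals are widened."""
    return SizedList(
        Interval(a.length.lo + b.length.lo, a.length.hi + b.length.hi),
        Interval(min(a.elems.lo, b.elems.lo), max(a.elems.hi, b.elems.hi)),
    )

xs = SizedList(Interval(2, 4), Interval(0, 9))   # 2-4 numbers, each in [0, 9]
ys = SizedList(Interval(1, 1), Interval(5, 20))  # exactly 1 number in [5, 20]
print(concat(xs, ys))  # length in [3, 5], elements in [0, 20]
```

In the actual analysis such operations are applied over recurrence relations between sized types rather than closed intervals, but the widening/combination intuition is the same.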
Abstract:
Sentiment and Emotion Analysis strongly depend on quality language resources, especially sentiment dictionaries. These resources are usually scattered, heterogeneous and limited to specific domains of application by simple algorithms. The EUROSENTIMENT project addresses these issues by 1) developing a common language resource representation model for sentiment analysis, and APIs for sentiment analysis services based on established Linked Data formats (lemon, Marl, NIF and ONYX), and 2) creating a Language Resource Pool (a.k.a. LRP) that makes available to the community existing scattered language resources and services for sentiment analysis in an interoperable way. In this paper we describe the available language resources and services in the LRP and some sample applications that can be developed on top of the EUROSENTIMENT LRP.
Abstract:
We present PIPE3D, an analysis pipeline based on the FIT3D fitting tool, developed to explore the properties of the stellar populations and ionized gas of integral field spectroscopy (IFS) data. PIPE3D was created to provide coherent, simple-to-distribute, and comparable data products, independently of the origin of the data, focused on the data of the most recent IFU surveys (e.g., CALIFA, MaNGA, and SAMI) and the latest generation of IFS instruments (e.g., MUSE). In this article we describe the different steps involved in the analysis of the data, illustrating them by showing the data products derived for NGC 2916, observed by CALIFA and P-MaNGA. As a practical example of the pipeline we present the complete set of data products derived for the 200 datacubes that comprise the V500 setup of the CALIFA Data Release 2 (DR2), making them freely available online. Finally, we explore the hypothesis that the properties of the stellar populations and ionized gas of galaxies at the effective radius are representative of the overall average ones, finding that this is indeed the case.
Abstract:
Currently there is an overwhelming number of scientific publications in the Life Sciences, especially in Genetics and Biotechnology. This huge amount of information is structured in corporate Data Warehouses (DW) or in Biological Databases (e.g. UniProt, RCSB Protein Data Bank, CEREALAB or GenBank), whose main drawback is the cost of keeping them up to date, which causes them to become obsolete easily. However, these Databases are the main tool for enterprises when they want to update their internal information, for example when a plant-breeding enterprise needs to enrich its genetic information (internal structured Database) with recently discovered genes related to specific phenotypic traits (external unstructured data) in order to choose the desired parentals for breeding programs. In this paper, we propose to complement the internal information with external data from the Web using Question Answering (QA) techniques. We go a step further by providing a complete framework for integrating unstructured and structured information, combining traditional Database and DW architectures with QA systems. The great advantage of our framework is that decision makers can instantly compare internal data with external data from competitors, allowing them to make quick strategic decisions based on richer data.
Abstract:
This article considers questions of handling unbalanced data. Probabilistic neural networks (PNN) and multilayer perceptrons (MLP) are used as classification models. The problem of estimating model performance in the case of an unbalanced training set is addressed, and several methods (a clustering approach and a boosting approach) are considered useful for dealing with unbalanced input data.
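The abstract names a clustering approach and a boosting approach without detailing either. As a minimal, hypothetical illustration of the underlying rebalancing idea (not the paper's actual method), the sketch below applies the simplest relative of those techniques, random undersampling of the majority class; the function `undersample` and the toy 90/10 dataset are invented for this example:

```python
import random
from collections import Counter

def undersample(X, y, seed=0):
    """Randomly undersample every class down to the size of the rarest class.

    A stand-in for more elaborate balancing schemes (e.g. cluster-based
    undersampling, which keeps one representative per majority cluster).
    """
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n_min = min(len(items) for items in by_class.values())
    Xb, yb = [], []
    for label, items in by_class.items():
        for xi in rng.sample(items, n_min):  # keep n_min samples per class
            Xb.append(xi)
            yb.append(label)
    return Xb, yb

# Toy unbalanced set: 90 negatives, 10 positives
X = [[float(i)] for i in range(100)]
y = [0] * 90 + [1] * 10
Xb, yb = undersample(X, y)
print(Counter(yb))  # both classes now have 10 samples each
```

Balancing the training set this way makes plain accuracy a more meaningful performance estimate, which is the evaluation problem the abstract raises.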
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.
Abstract:
Transportation Systems Center, Cambridge, Mass.
Abstract:
[1] Skylounge Project final report.--[2] Skylounge legal, technical, and financial supplementary study.
Abstract:
Prepared in cooperation with Soil Conservation Service, U.S. Geological Survey, N.C. Dept. of Natural and Economic Resources, and N.C. State University Agricultural Experiment Station.
Abstract:
"September 6, 1989."
Abstract:
"Contract A-636."
Abstract:
Mode of access: Internet.