12 results for Efficient Production Scale

at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain


Relevance: 40.00%

Abstract:

In microeconomic analysis, functions with diminishing returns to scale (DRS) are frequently employed. Various properties of increasing quasiconcave aggregator functions with DRS are derived. Furthermore, duality in the classical sense, as well as of a new type, is studied for such aggregator functions in production and consumer theory. In particular, representation theorems for direct and indirect aggregator functions are obtained; these involve only small sets of generator functions. The study is carried out in the contemporary framework of abstract convexity and abstract concavity.
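For orientation only, the two properties named above can be written as follows; these are standard textbook definitions in illustrative notation, not the paper's exact axioms.

```latex
% Illustrative textbook definitions, not the paper's exact axioms.
% Diminishing returns to scale: scaling all inputs by \lambda > 1
% raises output less than proportionally.
\[
  f(\lambda x) < \lambda f(x), \qquad x \in \mathbb{R}^{n}_{+},\; \lambda > 1 .
\]
% Quasiconcavity: every upper level set of f is convex, equivalently
\[
  f\bigl(\alpha x + (1-\alpha)y\bigr) \;\ge\; \min\{f(x), f(y)\},
  \qquad \alpha \in [0,1].
\]
```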

Relevance: 40.00%

Abstract:

We characterize the sharing rule for which a contribution mechanism achieves efficiency in a cooperative production setting when agents are heterogeneous. The sharing rule bears no resemblance to those considered in the previous literature. We also show, for a large class of sharing rules, that if the Nash equilibrium yields efficient allocations, then the production function displays constant returns to scale, a case in which cooperation in production is useless.
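For reference, a minimal sketch in hypothetical notation (not the paper's mechanism): n agents contribute inputs x_i, output is f of the total contribution, and agent i receives a share s_i of it. Budget balance and the constant-returns-to-scale case mentioned above read:

```latex
% Hypothetical notation for illustration only.
% Budget balance of a sharing rule (the whole output is distributed):
\[
  \sum_{i=1}^{n} s_i(x_1,\dots,x_n) = 1 .
\]
% Constant returns to scale, the case in which efficient Nash
% equilibria arise and cooperation in production adds nothing:
\[
  f(\lambda x) = \lambda f(x) \qquad \text{for all } \lambda > 0 .
\]
```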

Relevance: 40.00%

Abstract:

Background: Enzymatic biodiesel is becoming an increasingly popular topic in the bioenergy literature because of its potential to overcome the problems posed by chemical processes. However, the high cost of the enzymatic process remains the main drawback for its industrial application, mostly because of the high price of refined oils. Unfortunately, low-cost substrates such as crude soybean oil often yield a product that barely meets the final required biodiesel specifications and needs an additional pretreatment for gum removal. To reduce costs and make the enzymatic process more efficient, we developed an innovative system for enzymatic biodiesel production that combines a lipase with two phospholipases. This allows enzymatic degumming and transesterification to be performed in a single step, using crude soybean oil as feedstock and converting part of the phospholipids into biodiesel. Since the two processes had never been studied together, an accurate analysis of the different reaction components and conditions was carried out.

Results: Crude soybean oil, used as a low-cost feedstock, is characterized by a high content of phospholipids (900 ppm of phosphorus). Nevertheless, after the combined activity of the different phospholipases and the liquid lipase Callera Trans L, a complete transformation into fatty acid methyl esters (FAMEs >95%) and a good reduction of phosphorus (P <5 ppm) were achieved. The combination of enzymes made it possible to avoid the acid treatment required for gum removal, the consequent caustic neutralization, and the high temperature commonly used in degumming systems, making the overall process more eco-friendly and higher yielding. Once the conditions were established, the process was also tested with different vegetable oils with variable phosphorus contents.

Conclusions: The use of the liquid lipase Callera Trans L in biodiesel production can provide numerous and sustainable benefits. Besides reducing the costs derived from enzyme immobilization, the lipase can be used in combination with other enzymes such as phospholipases for gum removal, thus allowing the use of much cheaper, non-refined oils. The possibility of performing degumming and transesterification in a single tank brings a great efficiency increase in the new era of enzymatic biodiesel production at industrial scale.

Relevance: 40.00%

Abstract:

A crucial step for understanding how lexical knowledge is represented is to describe the relative similarity of lexical items, and how it influences language processing. Previous studies of the effects of form similarity on word production have reported conflicting results, notably within and across languages. The aim of the present study was to clarify this empirical issue to provide specific constraints for theoretical models of language production. We investigated the role of phonological neighborhood density in a large-scale picture naming experiment using fine-grained statistical models. The results showed that increasing phonological neighborhood density has a detrimental effect on naming latencies, and re-analyses of independently obtained data sets provide supplementary evidence for this effect. Finally, we reviewed a large body of evidence concerning phonological neighborhood density effects in word production, and discussed the occurrence of facilitatory and inhibitory effects in accuracy measures. The overall pattern shows that phonological neighborhood generates two opposite forces, one facilitatory and one inhibitory. In cases where speech production is disrupted (e.g. certain aphasic symptoms), the facilitatory component may emerge, but inhibitory processes dominate in efficient naming by healthy speakers. These findings are difficult to accommodate in terms of monitoring processes, but can be explained within interactive activation accounts combining phonological facilitation and lexical competition.
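As a rough, hypothetical illustration of the kind of fine-grained statistical model mentioned above (not the authors' actual analysis; all variable names and the simulated data are invented), a mixed-effects regression of naming latencies on phonological neighborhood density could look like this:

```python
# Illustrative sketch only (not the authors' analysis): a mixed-effects
# model of naming latencies (rt) as a function of phonological
# neighborhood density (pnd), with random intercepts per participant.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_items = 30, 100
data = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_items),
    "pnd": np.tile(rng.integers(0, 30, n_items), n_participants),
    "log_frequency": np.tile(rng.normal(3, 1, n_items), n_participants),
})
# Simulated latencies: slower naming with denser neighborhoods.
data["rt"] = (600 + 2.0 * data["pnd"] - 15 * data["log_frequency"]
              + rng.normal(0, 40, len(data)))

model = smf.mixedlm("rt ~ pnd + log_frequency", data,
                    groups=data["participant"])
result = model.fit()
print(result.summary())  # a positive pnd coefficient = slower naming
```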

Relevance: 30.00%

Abstract:

The project aims at advancing the state of the art in the use of context information for the classification of image and video data. The use of context in image classification has been shown to be of great importance for improving the performance of current object recognition systems. In this project we proposed the concept of Multi-scale Feature Labels as a general and compact method to exploit local and global context. Extracting features from the discriminative probability or classification-confidence label field is highly novel. Moreover, the use of a multi-scale representation of the feature labels leads to a compact and efficient description of the context. A further goal of the project has been to provide a general-purpose method and prove its suitability in different image/video analysis problems. The two-year project generated 5 journal publications (plus 2 under submission), 10 conference publications (plus 2 under submission) and one patent (plus 1 pending). Of these publications, a considerable number make use of the main result of this project to improve results in the detection and/or segmentation of objects.
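A minimal sketch of the general idea, under the assumption that multi-scale feature labels pool a per-pixel classification-confidence map at several scales; this is not the project's implementation, and the function and variable names are invented.

```python
# Illustrative sketch: multi-scale context features built from a
# per-pixel classification-confidence map (H, W, C).
import numpy as np
from scipy.ndimage import zoom

def multiscale_label_features(prob_map, scales=(1, 2, 4, 8)):
    """Return an (H, W, C * len(scales)) array of context features."""
    h, w, _ = prob_map.shape
    feats = []
    for s in scales:
        # Downsample to capture progressively more global context...
        coarse = zoom(prob_map, (1.0 / s, 1.0 / s, 1), order=1)
        # ...then upsample back so every pixel gets a feature vector.
        up = zoom(coarse, (h / coarse.shape[0], w / coarse.shape[1], 1),
                  order=1)
        feats.append(up[:h, :w, :])
    return np.concatenate(feats, axis=2)

# Example: 3-class confidence map for a 64x64 image.
probs = np.random.rand(64, 64, 3)
probs /= probs.sum(axis=2, keepdims=True)
print(multiscale_label_features(probs).shape)  # (64, 64, 12)
```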

Relevance: 30.00%

Abstract:

In today's competitive markets, the importance of good scheduling strategies in manufacturing companies leads to the need for efficient methods to solve complex scheduling problems. In this paper, we study two production scheduling problems with sequence-dependent setup times. Setup times are one of the most common complications in scheduling problems and are usually associated with cleaning operations and changing tools and shapes in machines. The first problem considered is single-machine scheduling with release dates, sequence-dependent setup times and delivery times, where the performance measure is the maximum lateness. The second problem is a job-shop scheduling problem with sequence-dependent setup times, where the objective is to minimize the makespan. We present several priority dispatching rules for both problems, followed by a study of their performance. Finally, conclusions and directions for future research are presented.
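As an illustration of a priority dispatching rule for the single-machine variant (not necessarily one of the rules studied in the paper; the instance data below are invented), the sketch schedules, whenever the machine is free, the released job with the largest delivery time and accounts for sequence-dependent setups.

```python
# Illustrative dispatching rule for a single machine with release dates
# r[j], processing times p[j], delivery times q[j], and sequence-dependent
# setup times s[i][j]. Not the paper's implementation; data are made up.
def dispatch_largest_delivery_time(r, p, q, s):
    """Greedy schedule; returns (sequence, max over jobs of C_j + q_j)."""
    unscheduled = set(range(len(p)))
    t, last, seq, obj = 0, None, [], 0
    while unscheduled:
        ready = [j for j in unscheduled if r[j] <= t]
        if not ready:                       # idle until the next release
            t = min(r[j] for j in unscheduled)
            continue
        j = max(ready, key=lambda k: q[k])  # largest delivery time first
        setup = 0 if last is None else s[last][j]
        t += setup + p[j]                   # completion time of job j
        obj = max(obj, t + q[j])            # delivery-completion criterion
        seq.append(j)
        last = j
        unscheduled.remove(j)
    return seq, obj

# Tiny made-up instance with 3 jobs.
r = [0, 1, 3]; p = [4, 2, 3]; q = [5, 8, 2]
s = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]
print(dispatch_largest_delivery_time(r, p, q, s))
```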

Relevance: 30.00%

Abstract:

Under team production, those who monitor individual productivity are usually the only ones compensated with a residual that varies with the performance of the team. This pattern is efficient, as is shown by the prevalence of conventional firms, except for small teams and when specialized monitoring is ineffective. Profit sharing in repeated team production, however, induces all team members to take disciplinary action against underperformers through switching and separation decisions. Such action provides effective self-enforcement when the markets for team members are competitive, even for large teams using specialized monitoring. The traditional share system of fishing firms shows that, for this competition to provide powerful enough incentives, the costs of switching teams and measuring team productivity must be low. Risk allocation may constrain the organizational design defined by the use of a share system; it does not account for its existence, however.

Relevance: 30.00%

Abstract:

This article investigates the main sources of heterogeneity in regional efficiency. We estimate a translog stochastic frontier production function for the Spanish regions over the period 1964-1996 in order to measure and explain changes in technical efficiency. Our results confirm that regional inefficiency is significantly and positively correlated with the ratio of public capital to private capital. The proportion of service industries in private capital, the proportion of public capital devoted to transport infrastructure, industrial specialization, and spatial spillovers from transport infrastructure in neighbouring regions contributed significantly to improving regional efficiency.
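For reference, the generic translog stochastic frontier specification takes the form below; the inputs and indices are illustrative, not the article's exact specification.

```latex
% Generic translog stochastic frontier for region i in year t, with
% inputs x_j (e.g. labour, private capital, public capital); the
% article's exact specification is not reproduced here.
\[
  \ln y_{it} \;=\; \beta_0
  + \sum_{j} \beta_j \ln x_{j,it}
  + \tfrac{1}{2} \sum_{j} \sum_{k} \beta_{jk} \ln x_{j,it} \ln x_{k,it}
  + v_{it} - u_{it},
\]
% where v_{it} is symmetric noise and u_{it} >= 0 captures technical
% inefficiency, so that technical efficiency is TE_{it} = \exp(-u_{it}).
```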

Relevance: 30.00%

Abstract:

PRECON S.A. is a manufacturing company dedicated to producing prefabricated concrete parts for several industries, such as rail transportation and agriculture. Recently, PRECON signed a contract with RENFE, the Spanish national rail transportation company, to manufacture pre-stressed concrete sleepers for sidings of the new high-speed train (AVE) lines. The scheduling problem associated with the manufacturing process of the sleepers is very complex, since it involves several constraints and objectives. The constraints are related to production capacity, the quantity of available moulds, demand satisfaction and other operational constraints. The two main objectives are maximizing the usage of the manufacturing resources and minimizing mould movements. We developed a deterministic crowding genetic algorithm for this multiobjective problem. The algorithm has proved to be a powerful and flexible tool for solving the large-scale instance of this complex real-world scheduling problem.
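A minimal generic sketch of the deterministic crowding replacement scheme (the encoding, operators, and fitness below are invented placeholders, not the algorithm actually used for the PRECON/RENFE problem): parents are paired, offspring are generated, and each offspring competes for survival with its most similar parent.

```python
# Generic deterministic-crowding step for a list-encoded schedule.
# Illustrative only: crossover, mutation, and fitness are placeholders.
import random

def distance(a, b):
    """Similarity proxy: number of positions where two schedules differ."""
    return sum(x != y for x, y in zip(a, b))

def deterministic_crowding_step(pop, fitness, crossover, mutate):
    random.shuffle(pop)
    new_pop = []
    for p1, p2 in zip(pop[0::2], pop[1::2]):
        c1, c2 = crossover(p1, p2)
        c1, c2 = mutate(c1), mutate(c2)
        # Pair each child with its most similar parent...
        if distance(p1, c1) + distance(p2, c2) <= distance(p1, c2) + distance(p2, c1):
            pairs = [(p1, c1), (p2, c2)]
        else:
            pairs = [(p1, c2), (p2, c1)]
        # ...and keep the better of the two (maximizing fitness).
        for parent, child in pairs:
            new_pop.append(child if fitness(child) >= fitness(parent) else parent)
    return new_pop

if __name__ == "__main__":
    def fitness(sched):              # placeholder: prefer sorted schedules
        return -sum(abs(i - v) for i, v in enumerate(sched))
    def crossover(a, b):             # placeholder: swap tails
        cut = len(a) // 2
        return a[:cut] + b[cut:], b[:cut] + a[cut:]
    def mutate(sched):               # placeholder: swap two positions
        sched = list(sched)
        i, j = random.sample(range(len(sched)), 2)
        sched[i], sched[j] = sched[j], sched[i]
        return sched
    pop = [random.sample(range(6), 6) for _ in range(8)]
    for _ in range(50):
        pop = deterministic_crowding_step(pop, fitness, crossover, mutate)
    print(max(pop, key=fitness))
```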

Relevance: 30.00%

Abstract:

A sequential, weakly efficient two-auction game with entry costs, interdependence between objects, two potential bidders and the IPV assumption is presented here in order to give some theoretical predictions on the effects of geographical scale economies on local service privatization performance. It is shown that the seller of the first object profits from this interdependence. The interdependence externality raises effective competition for the first object, expressed as the probability of having more than one final bidder. Moreover, if there is more than one final bidder in the first auction, the seller extracts the entire bidder's expected future surplus differential between having won the first auction and having lost it. The consequences for the seller of the second object are less clear, reflecting the contradictory nature of the two main effects of object interdependence. On the one hand, the first-auction winner becomes "stronger", so that expected payments rise in a competitive environment. On the other hand, the first-auction loser becomes relatively "weaker", hence (probably) reducing effective competition for the second object. Additionally, some contributions to static auction theory with entry costs and asymmetric bidders are presented in the appendix.
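As an illustration of the surplus-extraction argument, in hypothetical notation that is not the paper's model: if S^win and S^lose denote a bidder's expected surplus in the second auction after winning or losing the first one, then willingness to pay for the first object rises to

```latex
% Hypothetical notation for illustration only: v_1 is the stand-alone
% value of the first object; S^{win} and S^{lose} are the bidder's
% expected continuation surpluses after winning or losing it.
\[
  w_1 \;=\; v_1
  + \mathbb{E}\bigl[S^{\mathrm{win}}\bigr]
  - \mathbb{E}\bigl[S^{\mathrm{lose}}\bigr],
\]
% so with effective competition the first seller captures the expected
% future surplus differential E[S^{win}] - E[S^{lose}].
```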

Relevance: 30.00%

Abstract:

Background: Research in epistasis or gene-gene interaction detection for human complex traits has grown over the last few years. It has been marked by promising methodological developments, improved translation efforts of statistical epistasis to biological epistasis, and attempts to integrate different omics information sources into the epistasis screening to enhance power. The quest for gene-gene interactions poses severe multiple-testing problems. In this context, the maxT algorithm is one technique to control the false-positive rate. However, the memory needed by this algorithm rises linearly with the number of hypothesis tests. Gene-gene interaction studies therefore require an amount of memory proportional to the square of the number of SNPs, and a genome-wide epistasis search would require terabytes of memory. Hence, cache problems are likely to occur, increasing the computation time. In this work we present a new version of maxT that requires an amount of memory independent of the number of genetic effects to be investigated. This algorithm was implemented in C++ in our epistasis screening software MBMDR-3.0.3. We evaluate the new implementation in terms of memory efficiency and speed using simulated data, and illustrate the software on real-life data for Crohn's disease.

Results: In the case of a binary (affected/unaffected) trait, the parallel workflow of MBMDR-3.0.3 analyzes all gene-gene interactions in a dataset of 100,000 SNPs typed on 1,000 individuals within 4 days and 9 hours, using 999 permutations of the trait to assess statistical significance, on a cluster composed of 10 blades, each containing four Quad-Core AMD Opteron(tm) 2352 processors at 2.1 GHz. In the case of a continuous trait, a similar run takes 9 days. Our program found 14 SNP-SNP interactions with a multiple-testing corrected p-value below 0.05 on real-life Crohn's disease (CD) data.

Conclusions: Our software is the first implementation of the MB-MDR methodology able to solve large-scale SNP-SNP interaction problems within a few days, without using much memory, while adequately controlling the type I error rates. A new implementation to reach genome-wide epistasis screening is under construction. In the context of Crohn's disease, MBMDR-3.0.3 could identify epistasis involving regions that are well known in the field and could be explained from a biological point of view. This demonstrates the power of our software to find relevant phenotype-genotype higher-order associations.
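A minimal sketch of the memory idea described above, not the MBMDR-3.0.3 implementation: instead of storing a permutation-by-test matrix of statistics, it suffices to keep one number per permutation, the maximum statistic over all SNP pairs, so memory does not grow with the number of interaction tests. The pairwise statistic and the data below are invented.

```python
# Illustrative maxT-style permutation scheme that keeps only one number
# per permutation (the maximum statistic over all SNP pairs), so memory
# is independent of the number of interaction tests. A simplification,
# not the MB-MDR / MBMDR-3.0.3 implementation.
import numpy as np
from itertools import combinations

def interaction_stat(g1, g2, y):
    """Toy pairwise statistic: |correlation between y and g1*g2|."""
    x = g1 * g2
    if x.std() == 0:
        return 0.0
    return abs(np.corrcoef(x, y)[0, 1])

def maxT_adjusted_pvalues(genotypes, y, n_perm=999, seed=0):
    rng = np.random.default_rng(seed)
    pairs = list(combinations(range(genotypes.shape[1]), 2))
    observed = np.array([interaction_stat(genotypes[:, i], genotypes[:, j], y)
                         for i, j in pairs])
    # One maximum per permutation: O(n_perm) memory, independent of len(pairs).
    perm_max = np.empty(n_perm)
    for b in range(n_perm):
        y_perm = rng.permutation(y)
        perm_max[b] = max(interaction_stat(genotypes[:, i], genotypes[:, j], y_perm)
                          for i, j in pairs)
    # Adjusted p-value: fraction of permutation maxima at least as large.
    return [(pairs[k], (1 + np.sum(perm_max >= observed[k])) / (n_perm + 1))
            for k in range(len(pairs))]

# Tiny simulated example: 50 individuals, 6 SNPs, binary trait.
rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(50, 6)).astype(float)
y = rng.integers(0, 2, size=50).astype(float)
for pair, pval in maxT_adjusted_pvalues(G, y, n_perm=99):
    print(pair, round(pval, 3))
```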

Relevance: 30.00%

Abstract:

Peer-reviewed