6 results for Optimal Scale

in Aston University Research Archive


Relevance:

70.00%

Abstract:

This paper surveys the literature on scale and scope economies in the water and sewerage industry. The magnitude of scale and scope economies determines the cost-efficient configuration of any industry. In the case of a regulated sector, reliable estimates of these economies are relevant to inform reform proposals that promote vertical (un)bundling and mergers. The empirical evidence allows some general conclusions. First, there is considerable evidence for the existence of vertical scope economies between upstream water production and distribution. Second, there is only mixed evidence on the existence of (dis)economies of scope between water and sewerage activities. Third, economies of scale exist up to a certain output level, and diseconomies of scale arise if the company increases its size beyond this level. However, the optimal scale of utilities also appears to vary considerably between countries. Finally, we briefly consider the implications of our findings for water pricing and point to several directions for necessary future empirical research on the measurement of these economies and on explaining their cross-country variation.
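For reference, scope and scale economies of this kind are typically measured from an estimated cost function; a standard formulation (generic, not tied to any particular study surveyed here) is:

SC = \frac{C(q_1, 0) + C(0, q_2) - C(q_1, q_2)}{C(q_1, q_2)}, \qquad SE = \frac{C(q)}{q \, \partial C / \partial q}

where C is the cost function and q_1, q_2 denote, for example, water and sewerage outputs. SC > 0 indicates economies of scope, SE > 1 (SE < 1) indicates economies (diseconomies) of scale, and the optimal scale is the output level at which SE = 1.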

Relevance:

30.00%

Abstract:

When composing stock portfolios, managers frequently choose among hundreds of stocks. The stocks' risk properties are analyzed with statistical tools, and managers try to combine them to meet the investors' risk profiles. A recently developed tool for performing such optimization is called full-scale optimization (FSO). This methodology is very flexible with respect to investor preferences, but because of computational limitations it has until now been infeasible to use when many stocks are considered. We apply the artificial intelligence technique of differential evolution to solve FSO-type stock selection problems with 97 assets. Differential evolution finds the optimal solutions by self-learning from randomly drawn candidate solutions. We show that this search technique makes large-scale problems computationally feasible and that the solutions retrieved are stable. The study also gives further merit to the FSO technique, as it shows that its solutions suit investor risk profiles better than portfolios retrieved with traditional methods.
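As an illustration of the approach (a minimal sketch, not the authors' implementation), the following Python code applies differential evolution to an FSO-style portfolio problem; the return data, the kinked loss-averse utility and all parameter values are hypothetical:

import numpy as np

rng = np.random.default_rng(0)
T, N = 250, 10                       # hypothetical: 250 periods, 10 assets
returns = rng.normal(0.0005, 0.01, size=(T, N))

def utility(portfolio_returns, kink=0.0, penalty=2.0):
    # Kinked utility: returns below the kink are penalised more heavily (loss aversion).
    r = portfolio_returns
    return np.where(r >= kink, r, kink + penalty * (r - kink)).mean()

def fitness(weights):
    w = np.abs(weights)
    w = w / w.sum()                  # long-only weights summing to one
    return utility(returns @ w)

def differential_evolution(fitness, dim, pop_size=40, F=0.6, CR=0.9, iters=500):
    pop = rng.random((pop_size, dim))
    fit = np.array([fitness(x) for x in pop])
    for _ in range(iters):
        for i in range(pop_size):
            others = [j for j in range(pop_size) if j != i]
            a, b, c = pop[rng.choice(others, 3, replace=False)]
            mutant = a + F * (b - c)                       # mutation
            cross = rng.random(dim) < CR
            trial = np.where(cross, mutant, pop[i])        # crossover
            f_trial = fitness(trial)
            if f_trial > fit[i]:                           # greedy selection
                pop[i], fit[i] = trial, f_trial
    best = pop[np.argmax(fit)]
    return np.abs(best) / np.abs(best).sum(), fit.max()

weights, best_utility = differential_evolution(fitness, dim=N)
print(weights.round(3), best_utility)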

Relevance:

30.00%

Abstract:

PURPOSE: To examine the optimum time at which fluorescein patterns of gas permeable lenses (GPs) should be evaluated. METHODS: Aligned, 0.2mm steep and 0.2mm flat GPs were fitted to 17 patients (aged 20.6±1.1 years, 10 male). Fluorescein was applied to their upper temporal bulbar conjunctiva with a moistened fluorescein strip. Digital slit lamp images (CSO, Italy) at 10× magnification of the fluorescein pattern, viewed with blue light through a yellow filter, were captured every 15s. Fluorescein intensity in the central, mid-peripheral and edge regions of the superior, inferior, temporal and nasal quadrants of the lens was graded subjectively on a +2 to -2 scale and objectively using ImageJ software on the simultaneously captured images. RESULTS: Both subjectively graded and image-analysed fluorescein intensity changed with time (p<0.001) and lens region (centre, mid-periphery and edge: p<0.05), and there was an interaction between lens region and lens fit (p<0.001). For edge band width, there was a significant effect of time (F=118.503, p<0.001) and lens fit (F=5.1249, p=0.012). The expected alignment, flat and steep fitting patterns could be seen from approximately 30 to 180s by subjective observation and from 15 to 105s in the captured images. CONCLUSION: Although the stability of fluorescein intensity can start to decline in as little as 45s after fluorescein instillation, the diagnostic pattern of an alignment, steep or flat fit is seen in each meridian by subjective observation from about 30s to 3min, indicating that this is the most appropriate time window in which to evaluate GP lenses in clinical practice.
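As an illustration only (a sketch, not the study's ImageJ workflow), objective grading of this kind can be approximated by averaging green-channel intensity within annular lens regions; the image array, lens centre and radius below are hypothetical:

import numpy as np

def region_intensities(image, lens_centre, lens_radius):
    # Mean green-channel intensity in central, mid-peripheral and edge annuli.
    h, w, _ = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - lens_centre[0], xx - lens_centre[1]) / lens_radius
    green = image[..., 1].astype(float)
    return {
        "centre": green[r <= 0.33].mean(),
        "mid_periphery": green[(r > 0.33) & (r <= 0.8)].mean(),
        "edge": green[(r > 0.8) & (r <= 1.0)].mean(),
    }

# Hypothetical 8-bit RGB frame standing in for a captured slit-lamp image.
frame = np.random.default_rng(1).integers(0, 256, size=(480, 640, 3), dtype=np.uint8)
print(region_intensities(frame, lens_centre=(240, 320), lens_radius=180))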

Relevance:

30.00%

Abstract:

GraphChi is the first reported disk-based graph engine that can handle billion-scale graphs on a single PC efficiently. GraphChi is able to execute several advanced data mining, graph mining and machine learning algorithms on very large graphs. With the novel technique of parallel sliding windows (PSW) to load subgraphs from disk to memory for vertex and edge updates, it can achieve data processing performance close to, and even better than, that of mainstream distributed graph engines. The GraphChi authors note, however, that its memory is not utilized effectively with large datasets, which leads to suboptimal computational performance. In this paper, motivated by the concepts of 'pin' from TurboGraph and 'ghost' from GraphLab, we propose a new memory utilization mode for GraphChi, called Part-in-memory mode, to improve the performance of GraphChi algorithms. The main idea is to pin a fixed part of the data in memory during the whole computation. Part-in-memory mode is implemented with only about 40 additional lines of code on top of the original GraphChi engine. Extensive experiments are performed with large real datasets (including a Twitter graph with 1.4 billion edges). The preliminary results show that the Part-in-memory memory management approach reduces GraphChi's running time by up to 60% for the PageRank algorithm. Interestingly, a larger portion of data pinned in memory does not always lead to better performance when the whole dataset cannot fit in memory: there exists an optimal portion of data to keep in memory to achieve the best computational performance.
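A conceptual sketch of the pinning idea is given below (in Python rather than GraphChi's native engine code, and not the authors' implementation); the graph, pin fraction and simplified PageRank loop are hypothetical:

import numpy as np
import tempfile, os

num_vertices = 1_000
edges = np.random.default_rng(2).integers(0, num_vertices, size=(5_000, 2))  # hypothetical edge list
pin_fraction = 0.4                                   # portion of vertex data kept in RAM

# Backing store standing in for on-disk vertex data.
path = os.path.join(tempfile.mkdtemp(), "ranks.dat")
disk_ranks = np.memmap(path, dtype=np.float64, mode="w+", shape=(num_vertices,))
disk_ranks[:] = 1.0 / num_vertices

pin_count = int(pin_fraction * num_vertices)
pinned = dict(enumerate(disk_ranks[:pin_count]))     # vertices 0..pin_count-1 stay in memory

def read_rank(v):
    return pinned[v] if v < pin_count else disk_ranks[v]

def write_rank(v, value):
    if v < pin_count:
        pinned[v] = value                            # no disk traffic for pinned vertices
    else:
        disk_ranks[v] = value

out_degree = np.bincount(edges[:, 0], minlength=num_vertices).clip(min=1)
for _ in range(10):                                  # simplified PageRank iterations
    incoming = np.zeros(num_vertices)
    for src, dst in edges:
        incoming[dst] += read_rank(src) / out_degree[src]
    for v in range(num_vertices):
        write_rank(v, 0.15 / num_vertices + 0.85 * incoming[v])

print(read_rank(0))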

Relevance:

30.00%

Abstract:

Human mesenchymal stem cell (hMSC) therapies have the potential to revolutionise the healthcare industry and replicate the success of the therapeutic protein industry; however, for this to be achieved there is a need to apply key bioprocess engineering principles and adopt a quantitative approach for large-scale, reproducible hMSC bioprocess development. Here we provide a quantitative analysis of the changes in concentration of glucose, lactate and ammonium with time during hMSC monolayer culture over 4 passages, under 100% and 20% dissolved oxygen (dO2), where either a 100%, 50% or 0% growth medium exchange was performed after 72h in culture. Yield coefficients, specific growth rates (h-1) and doubling times (h) were calculated for all cases. The 100% dO2 flasks outperformed the 20% dO2 flasks with respect to cumulative cell number, with the latter consuming more glucose and producing more lactate and ammonium. Furthermore, the 100% and 50% medium exchange conditions resulted in similar cumulative cell numbers, whilst the 0% condition yielded significantly lower numbers. Cell immunophenotype and multipotency were not affected by the experimental culture conditions. This study demonstrates the importance of determining optimal culture conditions for hMSC expansion and highlights the potential cost saving of making only a 50% medium exchange, which may prove significant for large-scale bioprocessing. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
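For reference, the specific growth rate and doubling time reported here follow the standard exponential-growth relations (standard definitions, not values from the study):

\mu = \frac{\ln X_2 - \ln X_1}{t_2 - t_1}, \qquad t_d = \frac{\ln 2}{\mu}

where X_1 and X_2 are viable cell numbers at times t_1 and t_2; yield coefficients are computed analogously as ratios of change, e.g. Y_{\mathrm{lac}/\mathrm{glc}} = \Delta[\mathrm{lactate}] / (-\Delta[\mathrm{glucose}]).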

Relevance:

30.00%

Abstract:

Renewable energy forms have been widely adopted in recent decades, reflecting a "green" shift in energy production. A major driver behind this turn to renewable energy production is the EU directives that set the Union's targets for energy production from renewable sources, greenhouse gas emissions and increases in energy efficiency. All member countries are obliged to apply harmonized legislation and practices and to restructure their energy production networks in order to meet these targets. Towards the fulfilment of the EU 20-20-20 targets, Greece follows a strategy that promotes the construction of large-scale Renewable Energy Source plants. In this paper, we present an optimal design of the Greek renewable energy production network by applying a 0-1 Weighted Goal Programming model that considers social, environmental and economic criteria. In the absence of a panel of experts, a Data Envelopment Analysis (DEA) approach is used to filter the best of the possible network structures, seeking maximum technical efficiency. A Super-Efficiency DEA model is then used to narrow down these solutions and identify the best one. The results show that, in order to achieve maximum efficiency, the social and environmental criteria must be weighted more heavily than the economic ones.
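For orientation, a generic 0-1 weighted goal programming formulation of this kind of plant-siting problem (a sketch, not the paper's exact model) can be written as:

\min \sum_i \left( w_i^{+} d_i^{+} + w_i^{-} d_i^{-} \right) \quad \text{s.t.} \quad f_i(\mathbf{x}) + d_i^{-} - d_i^{+} = g_i, \qquad x_j \in \{0,1\}, \qquad d_i^{+}, d_i^{-} \ge 0

where x_j indicates whether candidate plant j is built, f_i aggregates the i-th social, environmental or economic criterion, g_i is its target, d_i^{+} and d_i^{-} are the over- and under-achievement deviations, and w_i^{\pm} are the goal weights; DEA then scores each feasible configuration and super-efficiency DEA ranks the efficient ones.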