997 results for volume optimisation


Relevance:

20.00%

Abstract:

BACKGROUND: Previous investigation showed that the volume-time curve technique could be an alternative for endotracheal tube (ETT) cuff management. However, the clinical impact of applying the volume-time curve has not been documented. The purpose of this study was to compare the occurrence and intensity of sore throat, cough, and thoracic pain, and pulmonary function, between 2 techniques for ETT cuff management after coronary artery bypass grafting: the volume-time curve technique and the minimal occlusive volume (MOV) technique. METHODS: A total of 450 subjects were randomized into 2 groups for cuff management after intubation: an MOV group (n = 222) and a volume-time curve group (n = 228). We measured cuff pressure before extubation and performed spirometry 24 h before and after surgery. We graded sore throat and cough on a 4-point scale at 1, 24, 72, and 120 h after extubation, and assessed thoracic pain at 24 h after extubation, quantifying pain on a 10-point scale. RESULTS: Compared with the MOV group, the volume-time curve group presented significantly lower cuff pressure (30.9 +/- 2.8 vs 37.7 +/- 3.4 cm H2O), a lower incidence and intensity of sore throat (1 h, 23.7 vs 51.4%; 24 h, 18.9 vs 40.5%, P < .001) and cough (1 h, 19.3 vs 48.6%; 24 h, 18.4 vs 42.3%, P < .001), less thoracic pain (5.2 +/- 1.8 vs 7.1 +/- 1.7), and better preservation of FVC (49.5 +/- 9.9 vs 41.8 +/- 12.9%, P = .005) and FEV1 (46.6 +/- 1.8 vs 38.6 +/- 1.4%, P = .005). CONCLUSIONS: Subjects who received the volume-time curve technique for ETT cuff management presented a significantly lower incidence and severity of sore throat and cough, less thoracic pain, and less impairment of pulmonary function than subjects who received the MOV technique during the first 24 h after coronary artery bypass grafting.

Relevance:

20.00%

Abstract:

Peron, N., Cox, S.J., Hutzler, S. and Weaire, D. (2007) Steady drainage in emulsions: corrections for surface Plateau borders and a model for high aqueous volume fraction. The European Physical Journal E - Soft Matter, 22: 341-351. Sponsorship: This research was supported by the European Space Agency (14914/02/NL/SH, 14308/00/NL/SG; AO-99-031 CCN 002; MAP Project AO-99-075) and Science Foundation Ireland (RFP 05/RFP/PHY0016). SJC acknowledges support from EPSRC (EP/D071127/1).

Relevance:

20.00%

Abstract:

R. Daly, Q. Shen and S. Aitken. Using ant colony optimisation in learning Bayesian network equivalence classes. Proceedings of the 2006 UK Workshop on Computational Intelligence, pages 111-118.

Relevance:

20.00%

Abstract:

M. Galea and Q. Shen. Simultaneous ant colony optimisation algorithms for learning linguistic fuzzy rules. In A. Abraham, C. Grosan and V. Ramos (Eds.), Swarm Intelligence in Data Mining, pages 75-99.

Relevance:

20.00%

Abstract:

The role of renewable energy in power systems is becoming more significant due to the increasing cost of fossil fuels and climate change concerns. However, the inclusion of Renewable Energy Generators (REG), such as wind power, has created additional problems for power system operators because of the variability and lower predictability of output of most REGs, with the Economic Dispatch (ED) problem being particularly difficult to resolve. In previous papers we reported on the inclusion of wind power in the ED calculations. The simulation was performed using a system model with wind power as an intermittent source, and the results were compared with those of the Direct Search Method (DSM) for similar cases. In this paper we report on our continuing investigations into using Genetic Algorithms (GA) for ED in an independent power system with a significant amount of wind energy in its generator portfolio. The results demonstrate, in line with previous reports in the literature, the effectiveness of GA when measured against a benchmark technique such as DSM.
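A minimal sketch of how a GA can be set up for this kind of economic dispatch problem is shown below. The three-unit cost curves, generator limits, wind forecast and GA settings are illustrative assumptions, not values from the paper, and wind is simply treated as negative load; the power-balance constraint is handled with a quadratic penalty.

```python
# Sketch only: GA for economic dispatch with wind treated as negative load.
# Cost coefficients, limits and GA settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-unit system: cost(P) = a + b*P + c*P^2  (P in MW, cost in $/h)
a = np.array([100.0, 120.0, 80.0])
b = np.array([20.0, 18.0, 22.0])
c = np.array([0.050, 0.060, 0.045])
p_min = np.array([10.0, 10.0, 10.0])
p_max = np.array([150.0, 200.0, 120.0])

demand = 300.0          # system load (MW)
wind = 60.0             # forecast wind output (MW), dispatched first
net_demand = demand - wind

def fitness(pop):
    """Fuel cost plus a quadratic penalty on the power-balance violation."""
    cost = (a + b * pop + c * pop**2).sum(axis=1)
    imbalance = pop.sum(axis=1) - net_demand
    return cost + 1e3 * imbalance**2

def ga_dispatch(pop_size=80, generations=300, mut_rate=0.1):
    pop = rng.uniform(p_min, p_max, size=(pop_size, len(a)))
    for _ in range(generations):
        fit = fitness(pop)
        # Tournament selection: pick the better of two random individuals.
        i, j = rng.integers(pop_size, size=(2, pop_size))
        parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
        # Arithmetic crossover between pairs of selected parents.
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        # Gaussian mutation, then clip to generator limits.
        mask = rng.random(children.shape) < mut_rate
        children = children + mask * rng.normal(0, 5.0, children.shape)
        children = np.clip(children, p_min, p_max)
        # Elitism: carry over the best individual of the previous generation.
        children[0] = pop[np.argmin(fit)]
        pop = children
    return pop[np.argmin(fitness(pop))]

best = ga_dispatch()
print("Dispatch (MW):", np.round(best, 1), "sum =", round(best.sum(), 1))
```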

Relevance:

20.00%

Abstract:

http://www.archive.org/details/memorialvolumeof00andeuoft

Relevance:

20.00%

Abstract:

http://www.archive.org/details/memorialvolumeof00andeiala

Relevance:

20.00%

Abstract:

We propose a new characterization of protein structure based on the natural tetrahedral geometry of the β carbon and a new geometric measure of structural similarity, called visible volume. In our model, the side-chains are replaced by an ideal tetrahedron, the orientation of which is fixed with respect to the backbone and corresponds to the preferred rotamer directions. Visible volume is a measure of the non-occluded empty space surrounding each residue position after the side-chains have been removed. It is a robust, parameter-free, locally-computed quantity that accounts for many of the spatial constraints that are of relevance to the corresponding position in the native structure. When computing visible volume, we ignore the nature of both the residue observed at each site and the ones surrounding it. We focus instead on the space that, together, these residues could occupy. By doing so, we are able to quantify a new kind of invariance beyond the apparent variations in protein families, namely, the conservation of the physical space available at structurally equivalent positions for side-chain packing. Corresponding positions in native structures are likely to be of interest in protein structure prediction, protein design, and homology modeling. Visible volume is related to the degree of exposure of a residue position and to the actual rotamers in native proteins. In this article, we discuss the properties of this new measure, namely, its robustness with respect to both crystallographic uncertainties and naturally occurring variations in atomic coordinates, and the remarkable fact that it is essentially independent of the choice of the parameters used in calculating it. We also show how visible volume can be used to align protein structures, to identify structurally equivalent positions that are conserved in a family of proteins, and to single out positions in a protein that are likely to be of biological interest. These properties qualify visible volume as a powerful tool in a variety of applications, from the detailed analysis of protein structure to homology modeling, protein structural alignment, and the definition of better scoring functions for threading purposes.
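The following is only an illustrative sketch, not the authors' implementation: a ray-casting estimate of the non-occluded volume within a cutoff sphere around a single residue position after the side-chain has been removed. The coordinates, probe radius and cutoff used here are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's algorithm): estimate the volume visible
# from a residue position before rays are blocked by neighbouring atoms.
import numpy as np

rng = np.random.default_rng(1)

def visible_volume(origin, atoms, cutoff=6.0, clash=2.8, n_dirs=500):
    """Approximate volume visible from `origin` before any ray comes within
    `clash` angstroms of another atom, integrated over random directions."""
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    rel = atoms - origin                          # atom positions relative to origin
    free_len = np.full(n_dirs, cutoff)
    for p in rel:
        t = dirs @ p                              # distance along each ray to closest approach
        d2 = (p @ p) - t**2                       # squared perpendicular miss distance
        blocked = (t > 0) & (d2 < clash**2)
        entry = t[blocked] - np.sqrt(clash**2 - d2[blocked])
        free_len[blocked] = np.minimum(free_len[blocked], np.clip(entry, 0, None))
    # Each ray carries a cone of solid angle 4*pi/n_dirs; cone volume = Omega*L^3/3.
    return (4 * np.pi / (3 * n_dirs)) * np.sum(free_len**3)

# Toy example with a handful of randomly placed neighbouring atoms.
origin = np.zeros(3)
neighbours = rng.uniform(-5.0, 5.0, size=(20, 3))
print("visible volume estimate (A^3):", round(visible_volume(origin, neighbours), 1))
```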

Relevance:

20.00%

Abstract:

The objective of this paper is to investigate the effect of the pad size ratio between the chip and board ends of a solder joint, in combination with the available solder volume, on the shape of that joint. The shape of the solder joint is correlated with its reliability and is therefore of importance. For low-density chip bond pad applications, Flip Chip (FC) manufacturing costs can be kept down by using larger board pads suitable for solder application. Using the “Surface Evolver” software package, the solder joint shapes associated with different sizes/shapes of solder preform and chip/board pad ratios are predicted. In this case a so-called Flip-Chip Over Hole (FCOH) assembly format has been used. The assembly trials involved the deposition of lead-free 99.3Sn0.7Cu solder on the board side, followed by reflow, an underfill process and back die encapsulation. Pad offsets that occurred during assembly were taken into account in the Surface Evolver shape predictions, which accurately matched the real assemblies. Overall, good correlation was found between the simulated and the fabricated solder joint shapes. Solder preforms were found to give better control over the solder volume. Reflow simulation of commercially available solder preform volumes suggests that, for a fixed stand-off height and chip-board pad ratio, the solder volume and the surface tension determine the shape of the joint.
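As a rough illustration of how solder volume, pad ratio and stand-off height interact, the sketch below approximates the joint as a conical frustum between the two pads. This is only a back-of-envelope geometric check, not a Surface Evolver energy-minimisation model, and the pad radius and stand-off values are assumed.

```python
# Back-of-envelope sketch (not a Surface Evolver model): approximate the solder
# joint as a conical frustum between the chip pad and the (larger) board pad.
import math

def frustum_volume(r_chip, r_board, standoff):
    """Volume of a conical frustum of height `standoff` between circular pads
    of radii `r_chip` and `r_board` (lengths in mm, volume in mm^3)."""
    return math.pi * standoff / 3.0 * (r_chip**2 + r_chip * r_board + r_board**2)

r_chip = 0.050            # chip-side pad radius (mm), assumed
standoff = 0.080          # target stand-off height (mm), assumed
for ratio in (1.0, 1.5, 2.0, 3.0):        # board pad : chip pad size ratio
    v = frustum_volume(r_chip, ratio * r_chip, standoff)
    print(f"pad ratio {ratio:.1f}: ~{v * 1e6:.0f} x 10^-6 mm^3 of solder")
```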

Relevance:

20.00%

Abstract:

In this PhD study, mathematical modelling and optimisation of granola production have been carried out. Granola is an aggregated food product used in breakfast cereals and cereal bars. It is a baked, crispy food product, typically incorporating oats, other cereals and nuts bound together with a binder, such as honey, water and oil, to form a structured unit aggregate. In this work, the design and operation of two parallel processes to produce aggregate granola products were investigated: i) a high shear mixing granulation stage (in a designated granulator) followed by drying/toasting in an oven; ii) a continuous fluidised bed followed by drying/toasting in an oven. In addition, the particle breakage of granola during pneumatic conveying, for product from both the high shear granulator (HSG) and the fluidised bed granulator (FBG) processes, was examined. Products were pneumatically conveyed in a purpose-built conveying rig designed to mimic product conveying and packaging. Three different conveying rig configurations were employed: a straight pipe, a rig consisting of two 45° bends, and one with a 90° bend. The least breakage occurred in the straight pipe, while the most breakage occurred in the 90° bend pipe; lower levels of breakage were observed in the two 45° bend configuration than in the 90° bend configuration. In general, increasing the impact angle increases the degree of breakage. Additionally, of the granules produced in the HSG, those produced at 300 rpm had the lowest breakage rates, while those produced at 150 rpm had the highest. This effect clearly shows the importance of shear history (during granule production) on breakage rates during subsequent processing. In terms of the FBG, no single operating parameter was deemed to have a significant effect on breakage during subsequent conveying. A population balance model was developed to analyse the particle breakage occurring during pneumatic conveying. The population balance equations (PBEs) that govern this breakage process were solved using discretisation, with the Markov chain method used for their solution. This study found that increasing the air velocity (by increasing the air pressure to the rig) results in increased breakage of granola aggregates. Furthermore, the analysis carried out in this work shows that a greater degree of breakage of granola aggregates occurs as the bend angle increases.
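A minimal sketch of a discretised breakage population balance treated as a Markov chain is given below. The size grid, selection function and daughter distribution are illustrative assumptions rather than the thesis model, and each chain simply tracks the fate of one initial granule through successive impacts along the conveying line.

```python
# Sketch only: discretised breakage population balance as a Markov chain.
# Size classes are states; per step a granule survives or drops to a finer class.
import numpy as np

n_classes = 6                                # 0 = finest, 5 = coarsest
sizes = np.geomspace(0.25, 8.0, n_classes)   # representative sizes (mm), assumed

# Selection function: per-step breakage probability, assumed to grow with size.
S = 0.05 * sizes / sizes.max()
S[0] = 0.0                                   # finest class does not break further

# Column-stochastic transition matrix: P[i, j] = P(class j -> class i).
P = np.zeros((n_classes, n_classes))
for j in range(n_classes):
    P[j, j] = 1.0 - S[j]                     # survive unbroken
    if j > 0:
        P[:j, j] = S[j] / j                  # fragments land uniformly in finer classes

n = np.zeros(n_classes)
n[-1] = 1.0                                  # all material starts in the coarsest class
for _ in range(50):                          # e.g. 50 impacts along the conveying line
    n = P @ n

print("fraction in each size class after conveying:", np.round(n, 3))
```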

Relevance:

20.00%

Abstract:

This study considered the optimisation of granola breakfast cereal manufacturing processes by wet granulation and pneumatic conveying. Granola is an aggregated food product used as a breakfast cereal and in cereal bars. Processing of granola involves mixing the dry ingredients (typically oats, nuts, etc.) followed by the addition of a binder, which can contain honey, water and/or oil. In this work, the design and operation of two parallel wet granulation processes to produce aggregate granola products were investigated: a) a high shear mixing granulation process followed by drying/toasting in an oven; b) a continuous fluidised bed followed by drying/toasting in an oven. In high shear granulation, the influence of process parameters on key granule aggregate quality attributes, such as granule size distribution and the textural properties of granola, was investigated. The experimental results show that the impeller rotational speed is the single most important process parameter influencing the physical and textural properties of granola; binder addition rate and wet massing time also have significant effects on granule properties. Increasing the impeller speed and wet massing time increases the median granule size and correlates positively with density. The combination of high impeller speed and low binder addition rate resulted in granules with the highest levels of hardness and crispness. In the fluidised bed granulation process, the effects of nozzle air pressure and binder spray rate on key aggregate quality attributes were studied. The experimental results show that a decrease in nozzle air pressure leads to a larger mean granule size, and that the combination of the lowest nozzle air pressure and the lowest binder spray rate results in granules with the highest levels of hardness and crispness. Overall, the high shear granulation process led to larger, denser, less porous and stronger (less likely to break) aggregates than the fluidised bed process. The study also examined the particle breakage of granola during pneumatic conveying, for product from both the high shear granulation and the fluidised bed granulation processes. Products were pneumatically conveyed in a purpose-built conveying rig designed to mimic product conveying and packaging. Three different conveying rig configurations were employed: a straight pipe, a rig consisting of two 45° bends, and one with a 90° bend. Particle breakage increases with applied pressure drop, and the 90° bend pipe results in more attrition at all conveying velocities than the other pipe geometries. Additionally, of the granules produced in the high shear granulator, those produced at the highest impeller speed, while being the largest, also have the lowest levels of proportional breakage, while smaller granules produced at the lowest impeller speed have the highest levels of breakage. This effect clearly shows the importance of shear history (during granule production) on breakage during subsequent processing. In terms of the fluidised bed granulation, no single operating parameter was deemed to have a significant effect on breakage during subsequent conveying. Finally, a simple power law breakage model based on process input parameters was developed for both manufacturing processes. It was found suitable for predicting the breakage of granola breakfast cereal at various applied air velocities and for a number of pipe configurations, taking shear history into account.
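A power law breakage model of this kind lends itself to a simple log-log regression. The sketch below shows the idea by fitting breakage = k·v^n against air velocity; the data points are invented purely to demonstrate the fit and are not measurements from the study.

```python
# Sketch only: fit a power-law breakage model breakage = k * v**n by linear
# regression in log-log space. The data points below are invented.
import numpy as np

velocity = np.array([10.0, 15.0, 20.0, 25.0, 30.0])   # air velocity (m/s), assumed
breakage = np.array([0.02, 0.05, 0.09, 0.15, 0.22])   # mass fraction broken, assumed

# log(breakage) = log(k) + n*log(v)  ->  straight-line fit.
n_exp, log_k = np.polyfit(np.log(velocity), np.log(breakage), 1)
k = np.exp(log_k)
print(f"fitted model: breakage ~= {k:.3e} * v^{n_exp:.2f}")

# Predict breakage at an unseen velocity within the fitted range.
v_new = 22.0
print(f"predicted breakage at {v_new} m/s: {k * v_new**n_exp:.3f}")
```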

Relevance:

20.00%

Abstract:

A massive change is currently taking place in the manner in which power networks are operated. Traditionally, power networks consisted of large power stations controlled from centralised locations. The trend in modern power networks is for generated power to be produced by a diverse array of energy sources spread over a large geographical area. As a result, controlling these systems from a centralised controller is impractical, and future power networks will instead be controlled by a large number of intelligent distributed controllers which must work together to coordinate their actions. Smart Grid is the umbrella term for this combination of power systems, artificial intelligence, and communications engineering. This thesis focuses on the application of optimal control techniques to Smart Grids, with a particular focus on iterative distributed MPC. A novel convergence and stability proof for iterative distributed MPC based on the Alternating Direction Method of Multipliers (ADMM) is derived. The performance of distributed MPC, centralised MPC, and an optimised PID controller is then compared on a highly interconnected, nonlinear, MIMO testbed based on a part of the Nordic power grid. Finally, a novel tuning algorithm is proposed for iterative distributed MPC which simultaneously optimises both closed-loop performance and the communication overhead associated with the desired control.
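The coordination mechanism behind ADMM-based iterative distributed control can be illustrated with a minimal consensus example: each "controller" solves a small local problem and the agents iterate to agreement on a shared variable. The quadratic local costs below merely stand in for the local MPC subproblems and are purely illustrative.

```python
# Minimal consensus-ADMM sketch: local quadratic subproblems plus averaging and
# dual updates. The costs stand in for local MPC problems; values are invented.
import numpy as np

# Local objectives f_i(x) = 0.5*a_i*(x - c_i)^2 for three "controllers".
a = np.array([1.0, 2.0, 4.0])
c = np.array([3.0, -1.0, 2.0])

rho = 1.0                       # ADMM penalty parameter
x = np.zeros(3)                 # local copies of the shared variable
z = 0.0                         # global (consensus) variable
u = np.zeros(3)                 # scaled dual variables

for _ in range(100):
    # Local step: argmin_x 0.5*a_i*(x - c_i)^2 + (rho/2)*(x - z + u_i)^2
    x = (a * c + rho * (z - u)) / (a + rho)
    # Global step: average of local estimates (plus duals).
    z = np.mean(x + u)
    # Dual step: accumulate the consensus error.
    u = u + x - z

# Analytic minimiser of the summed quadratics, for comparison.
print("ADMM consensus value:", round(z, 4), " exact:", round(np.sum(a * c) / np.sum(a), 4))
```

The printed exact value shows that the iterates converge to the minimiser of the summed objective, which is the property the convergence proof formalises in the distributed MPC setting.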

Relevance:

20.00%

Abstract:

This thesis describes the optimisation of chemoenzymatic methods in asymmetric synthesis. Modern synthetic organic chemistry has experienced enormous growth in biocatalytic methodologies; enzymatic transformations and whole-cell bioconversions have become generally accepted synthetic tools for asymmetric synthesis. Biocatalysts are exceptional catalysts, combining broad substrate scope with high regio-, enantio- and chemoselectivities, enabling the resolution of organic substrates with superb efficiency and selectivity. In this study three biocatalytic applications in enantioselective synthesis were explored, and perhaps the most significant outcome of this work is the excellent enantioselectivity achieved through optimisation of reaction conditions, improving the synthetic utility of the biotransformations. The first chapter presents a summary of the literature discussing the stereochemical control of baker’s yeast (Saccharomyces cerevisiae) mediated reduction of ketones by the introduction of sulfur moieties, and sets the work of Chapter 2 in context. The focus of the second chapter was the synthesis and biocatalytic resolution of (±)-trans-2-benzenesulfonyl-3-n-butylcyclopentanone. For the first time the practical limitations of this resolution have been addressed, providing synthetically useful quantities of enantiopure synthons for application in the total synthesis of both enantiomers of 4-methyloctanoic acid, the aggregation pheromone of rhinoceros beetles of the genus Oryctes. The unique aspect of this enantioselective synthesis was the overall regio- and enantioselective introduction of the methyl group to the octanoic acid chain. This work is part of an ongoing research programme in our group focussed on baker’s yeast mediated kinetic resolution of 2-keto sulfones. The third chapter describes hydrolase-catalysed kinetic resolutions leading to a series of 3-aryl alkanoic acids. Hydrolysis of the ethyl esters with a series of hydrolases was undertaken to identify biocatalysts that yield the corresponding acids in highly enantioenriched form. Contrary to literature reports describing a complete loss of efficiency, and accordingly of enantioselection, upon kinetic resolution of sterically demanding 3-arylalkanoic acids, the highest reported enantiopurities of these acids (up to >98% ee) were achieved in this study through optimisation of reaction conditions. Steric and electronic effects on the efficiency and enantioselectivity of the biocatalytic transformation were also explored. Furthermore, a novel approach to determining the absolute stereochemistry of the enantiopure 3-aryl alkanoic acids was investigated through a combination of co-crystallisation and X-ray diffraction linked with chiral HPLC analysis. The fourth chapter focused on the development of a biocatalytic protocol for the asymmetric Henry reaction. Efficient kinetic resolution in hydrolase-mediated transesterification of cis- and trans-β-nitrocyclohexanol derivatives was achieved. The combination of a base-catalysed intramolecular Henry reaction with hydrolase-mediated kinetic resolution, with a view to selective acetylation of a single stereoisomer, was investigated. While dynamic kinetic resolution in the intramolecular Henry reaction was not achieved, significant progress was made on each of the individual elements and, significantly, the feasibility of this process has been demonstrated.
The final chapter contains the full experimental details, including spectroscopic and analytical data for all compounds synthesised in this project; details of the chiral HPLC analysis are included in the appendix. The data for the crystal structures are contained on the attached CD.

Relevance:

20.00%

Abstract:

With the proliferation of mobile wireless communication and embedded systems, energy efficiency has become a major design constraint. The dissipated energy is often expressed as the product of power dissipation and the input-output delay. Most electronic design automation techniques focus on optimising only one of these parameters, either power or delay. Industry-standard design flows integrate systematic methods for optimising either area or timing, while for power optimisation one often employs heuristics which are characteristic of a specific design. In this work we answer three questions in our quest to provide a systematic approach to joint power and delay optimisation. The first question of our research is: how to build a design flow which incorporates academic and industry-standard design flows for power optimisation? To address this question, we use a reference design flow provided by Synopsys and integrate academic tools and methodologies into this flow. The proposed design flow is used as a platform for analysing some novel algorithms and methodologies for optimisation in the context of digital circuits. The second question we answer is: is it possible to apply a systematic approach to power optimisation in the context of combinational digital circuits? The starting point is the selection of a suitable data structure which can easily incorporate information about delay, power and area, and which then allows optimisation algorithms to be applied. In particular we address the implications of systematic power optimisation methodologies and the potential degradation of other (often conflicting) parameters such as area or the delay of the implementation. Finally, the third question which this thesis attempts to answer is: is there a systematic approach to multi-objective optimisation of delay and power? A delay-driven power and power-driven delay optimisation is proposed in order to obtain balanced delay and power values. This implies that each power optimisation step is constrained not only by the decrease in power but also by the increase in delay; similarly, each delay optimisation step is governed not only by the decrease in delay but also by the increase in power. The goal is multi-objective optimisation of digital circuits where the two conflicting objectives are power and delay.
The logic synthesis and optimisation methodology is based on AND-Inverter Graphs (AIGs), which represent the functionality of the circuit. The switching activities and arrival times of circuit nodes are annotated onto an AND-Inverter Graph under zero-delay and non-zero-delay models. We then introduce several reordering rules which are applied to the AIG nodes to minimise the switching power or longest path delay of the circuit at the pre-technology-mapping level. The academic Electronic Design Automation (EDA) tool ABC is used for the manipulation of AND-Inverter Graphs. We have implemented various combinatorial optimisation algorithms often used in Electronic Design Automation, such as Simulated Annealing and Uniform Cost Search. Simulated Annealing (SA) is a probabilistic metaheuristic for locating a good approximation to the global optimum of a given function in a large search space. We used SA to decide probabilistically between moving from one optimised solution to another, such that dynamic power is optimised under given delay constraints and delay is optimised under given power constraints, obtaining a good approximation to the globally optimal solution under the energy constraint. Uniform Cost Search (UCS) is a tree search algorithm used for traversing or searching a weighted tree, tree structure, or graph. We used UCS to search within the AIG network for a specific AIG node order for the application of the reordering rules. After the reordering rules have been applied, the AIG network is mapped to an AIG netlist using specific library cells. Our approach combines network restructuring, AIG node reordering, dynamic power and longest path delay estimation and optimisation, and finally technology mapping to an AIG netlist. A set of MCNC benchmark circuits and large combinational circuits of up to 100,000 gates have been used to validate our methodology. Comparisons for power and delay optimisation are made with the best synthesis scripts used in ABC. Reductions of 23% in power and 15% in delay are achieved with minimal overhead, compared with the best known ABC results. Our approach has also been applied to a number of processors with combinational and sequential components, and significant savings are achieved.
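The simulated annealing step can be illustrated with a small sketch: candidate configurations are generated by local reordering moves and accepted with the Metropolis rule, with the delay constraint handled as a penalty. The cost model below is a toy surrogate, not an AIG power/delay evaluation, and all numeric settings are assumptions.

```python
# Sketch only: simulated annealing over node orderings, minimising a toy
# "power" surrogate with the delay limit enforced as a penalty term.
import math
import random

random.seed(0)

N_NODES = 30
DELAY_LIMIT = 40.0

def evaluate(order):
    """Toy surrogate for power/delay of a node ordering (assumed model)."""
    power = sum((pos + 1) * 0.1 for pos, node in enumerate(order) if node % 2 == 0)
    delay = max(abs(pos - node) for pos, node in enumerate(order)) + 15.0
    return power, delay

def cost(order):
    power, delay = evaluate(order)
    return power + 10.0 * max(0.0, delay - DELAY_LIMIT)   # penalise delay violations

def anneal(t0=5.0, cooling=0.995, steps=5000):
    order = list(range(N_NODES))
    best, best_cost = order[:], cost(order)
    current_cost, t = best_cost, t0
    for _ in range(steps):
        i, j = random.sample(range(N_NODES), 2)
        candidate = order[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]   # local reordering move
        delta = cost(candidate) - current_cost
        if delta < 0 or random.random() < math.exp(-delta / t):   # Metropolis acceptance
            order, current_cost = candidate, current_cost + delta
            if current_cost < best_cost:
                best, best_cost = order[:], current_cost
        t *= cooling
    return best, best_cost

_, final_cost = anneal()
print("best penalised cost found:", round(final_cost, 3))
```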

Relevance:

20.00%

Abstract:

In the European Union, milk production has been restricted by milk quotas under the Common Agricultural Policy (CAP) since 1984. However, due to recent changes in the CAP, milk quotas will be abolished by 2015. The European dairy sector will therefore soon face an opportunity, for the first time in a generation, to expand. Numerous studies have shown that milk production in Ireland will increase significantly post quotas (Laepple and Hennessy, 2010; Donnellan and Hennessy, 2007; Lips and Reider, 2005). The research in this thesis explored milk transport and dairy product processing in the Irish dairy processing sector in the context of milk quota removal and expansion by 2020. In this study a national milk transport model was developed for the Irish dairy industry; the model was used to examine different efficiency factors in milk transport and to estimate milk transport costs post milk quota abolition. Secondly, the impact of different milk supply profiles on milk transport costs was investigated using the milk transport model. Current processing capacity in Ireland was compared against future supply, and it was concluded that existing processing capacity would not be sufficient to process the additional milk. Thirdly, the milk transport model was used to identify the least-cost locations (based on transport costs) at which to process the additional milk supply in 2020. Finally, an optimisation model was developed to identify the optimum configuration for the Irish dairy processing sector in 2020, taking cognisance of increasing transport costs and decreasing processing costs.
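The least-cost allocation underlying such a transport model can be sketched as a classic transportation LP, in which regional milk supply is assigned to candidate processing sites so that total haulage cost is minimised. The regional supplies, plant capacities and per-unit haulage costs below are invented purely for illustration and do not come from the thesis.

```python
# Sketch only: a transportation LP allocating regional milk supply to plants
# at minimum haulage cost. All supplies, capacities and costs are invented.
import numpy as np
from scipy.optimize import linprog

supply = np.array([400.0, 300.0, 250.0])          # million litres per region
capacity = np.array([500.0, 300.0, 350.0])        # million litres per plant
# cost[i, j]: haulage cost of moving milk from region i to plant j (EUR/1000 L)
cost = np.array([[4.0, 7.0, 9.0],
                 [6.0, 3.0, 5.0],
                 [8.0, 6.0, 2.0]])

n_regions, n_plants = cost.shape
c = cost.ravel()                                   # decision variables x[i, j], row-major

# Equality: all milk produced in each region must be shipped somewhere.
A_eq = np.zeros((n_regions, n_regions * n_plants))
for i in range(n_regions):
    A_eq[i, i * n_plants:(i + 1) * n_plants] = 1.0
b_eq = supply

# Inequality: shipments into each plant must not exceed its capacity.
A_ub = np.zeros((n_plants, n_regions * n_plants))
for j in range(n_plants):
    A_ub[j, j::n_plants] = 1.0
b_ub = capacity

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None), method="highs")
print("total transport cost:", round(res.fun, 1))
print("shipment plan (regions x plants):\n", res.x.reshape(n_regions, n_plants).round(1))
```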