981 results for optimisation model


Relevance: 30.00%

Abstract:

Ad hoc wireless sensor networks (WSNs) are formed from self-organising configurations of distributed, energy-constrained, autonomous sensor nodes. The service lifetime of such sensor nodes depends on the power supply and on the energy consumption, which is typically dominated by the communication subsystem. One of the key challenges in unlocking the potential of such data-gathering sensor networks is conserving energy so as to maximise their post-deployment active lifetime. This thesis describes research carried out on the continued development of the novel energy-efficient Optimised grids algorithm, which increases WSN lifetime and improves QoS parameters, yielding higher throughput and lower latency and jitter for the next generation of WSNs. Based on the relationship between range and traffic, the Optimised grids algorithm provides a robust, traffic-dependent, energy-efficient grid size that minimises cluster-head energy consumption in each grid and balances energy use throughout the network. Efficient spatial reuse allows the Optimised grids algorithm to improve network QoS parameters. The most important advantage of this model is that it can be applied to all one- and two-dimensional traffic scenarios in which the traffic load may fluctuate due to sensor activity. During traffic fluctuations the Optimised grids algorithm can be used to re-optimise the wireless sensor network, bringing further benefits in energy reduction and improvements in QoS parameters. As idle energy becomes dominant at lower traffic loads, the new Sleep Optimised grids model incorporates sleep-energy and idle-energy duty cycles that can be implemented to achieve further network lifetime gains in all wireless sensor network models. Another key advantage of the Optimised grids algorithm is that it can be implemented alongside existing energy-saving protocols such as GAF, LEACH, SMAC and TMAC to further extend network lifetimes and improve QoS parameters. The Optimised grids algorithm does not interfere with these protocols, but creates an overlay that optimises the grid sizes, and hence the transmission range, of wireless sensor nodes.
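The thesis's energy model is not given in the abstract, but the range-traffic trade-off it describes can be illustrated with the standard first-order radio model: small grids mean many cheap hops, large grids mean few expensive ones, and an intermediate grid size minimises the total relaying energy. The constants and the 1-D field below are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# First-order radio model constants (illustrative, not from the thesis)
E_ELEC = 50e-9    # J/bit spent in TX/RX electronics per hop
E_AMP = 100e-12   # J/bit/m^2 amplifier energy, path-loss exponent 2
ALPHA = 2

def relay_energy(g, field_len=200.0):
    """Total energy (J) to relay one bit across a 1-D field of cluster
    heads spaced g metres apart: field_len/g hops, each costing
    receive + transmit electronics plus distance-dependent amplification."""
    hops = field_len / g
    return hops * (2 * E_ELEC + E_AMP * g ** ALPHA)

# Brute-force search for the energy-optimal grid size
candidates = np.linspace(5.0, 100.0, 1901)
best = candidates[np.argmin([relay_energy(g) for g in candidates])]
# Analytically, d/dg = 0 gives g* = sqrt(2 * E_ELEC / E_AMP) ~= 31.6 m
```

With these constants the search lands near 31.6 m; under real, fluctuating traffic the thesis's algorithm would recompute such an optimum per grid.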

Relevance: 30.00%

Abstract:

The aim of these studies was to compare the effect of liposome composition on the physico-chemical characteristics and transfection efficacy of cationic liposomes both in vitro and in vivo. A comparison of four commonly used cationic lipids showed 3β-N-(dimethylaminoethyl)carbamate (DC-Chol) to promote the highest transfection levels in cells in vitro, with levels at least six times higher than those of 1,2-di-O-octadecenyl-3-trimethylammonium propane (DOTMA), 1,2-dioleoyl-3-trimethylammonium-propane (DOTAP) and dimethyldioctadecylammonium (DDA), and approximately twice as efficient as dipalmitoyl-trimethylammonium-propane (DPTAP). To establish the role of the helper lipid, DC-Chol liposomes were formulated in combination with either 1,2-dioleoyl-sn-glycero-3-phosphatidylethanolamine (DOPE) or cholesterol (Chol) (1:1 molar ratio), with and without the addition of phosphatidylcholine (PC). The choice of helper lipid incorporated within the bilayer was found to influence the formation of complexes, their resultant structure and their transfection efficiency in vitro, with SUV-DNA complexes containing optimum levels of DOPE giving higher transfection than those containing cholesterol. The inclusion of PC within the formulation also reduced transfection efficiency in vitro. However, when administered in vivo, SUV-DNA complexes composed of PC:Chol:DC-Chol at a molar ratio of 16:8:4 μmol/ml were the most effective at inducing splenocyte proliferation upon exposure to antigen in comparison to control spleens. These results demonstrate that there is no in vitro/in vivo correlation between the transfection efficacies of these liposome formulations, and that in vitro transfection in the above cell model cannot be taken as a reliable indicator of the in vivo efficacy of DNA vaccines.

Relevance: 30.00%

Abstract:

In this thesis, standard algorithms are used to carry out the optimisation of cold-formed steel purlins of zed, channel and sigma sections, which are assumed to be simply supported and subjected to a gravity load. For the zed, channel and sigma sections, local buckling, distortional buckling and lateral-torsional buckling are each considered. Local buckling is assessed according to BS 5950-5:1998 and EN 1993-1-3:2006. Distortional buckling is calculated by the direct strength method, with the elastic distortional buckling moment obtained from three available approaches: Hancock (1995), Schafer and Pekoz (1998), and Yu (2005). For lateral-torsional buckling, models based on BS 5950-5:1998, AISI and the analytical model of Li (2004) are investigated. Programming codes are written for the optimisation of channel, zed and sigma beams, and the full study has been coded into a computer-based analysis program in MATLAB.
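As a sketch of the direct strength method step mentioned above: once the elastic distortional buckling moment Mcrd is obtained from one of the three cited approaches, the nominal strength follows from a slenderness-based curve. The 0.673/0.22/0.5 constants below are the commonly quoted AISI DSM beam values and should be verified against the governing code edition before use.

```python
import math

def dsm_distortional(My, Mcrd):
    """Nominal distortional strength Mnd from yield moment My and elastic
    distortional buckling moment Mcrd (commonly quoted AISI DSM constants)."""
    lam = math.sqrt(My / Mcrd)        # distortional slenderness
    if lam <= 0.673:
        return My                     # stocky: full yield moment governs
    r = (Mcrd / My) ** 0.5
    return (1.0 - 0.22 * r) * r * My  # slender: post-buckling reduction

# e.g. My = 10 kNm, Mcrd = 2.5 kNm -> lam = 2.0, Mnd = 4.45 kNm
```

In an optimisation loop such as the thesis describes, this curve would be evaluated for each candidate section geometry.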

Relevance: 30.00%

Abstract:

Analysis of the use of ICT in the aerospace industry has prompted the detailed investigation of an inventory-planning problem. There is a special class of inventory, consisting of expensive repairable spares for use in support of aircraft operations. These items, called rotables, are not well served by conventional theory and systems for inventory management. The context of the problem, the aircraft maintenance industry sector, is described in order to convey some of its special characteristics in the context of operations management. A literature review is carried out to seek existing theory that can be applied to rotable inventory and to identify a potential gap into which newly developed theory could contribute. Current techniques for rotable planning are identified in industry and the literature: these methods are modelled and tested using inventory and operational data obtained in the field. In the expectation that current practice leaves much scope for improvement, several new models are proposed. These are developed and tested on the field data for comparison with current practice. The new models are revised following testing to give improved versions. The best model developed and tested here comprises a linear programming optimisation, which finds an optimal level of inventory for multiple test cases, reflecting changing operating conditions. The new model offers an inventory plan that is up to 40% less expensive than that determined by current practice, while maintaining required performance.
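The thesis's models and field data are not reproduced here, but the shape of a linear-programming rotable plan can be sketched: find the cheapest stock vector that covers every operating condition, mirroring the "optimal level of inventory for multiple test cases" idea. Costs and demand scenarios below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

unit_cost = np.array([120.0, 45.0, 80.0])   # k$ per spare of each rotable type
# demand[s, i]: spares of type i tied up in operating scenario s (hypothetical)
demand = np.array([[3, 7, 2],
                   [5, 4, 4],
                   [2, 6, 3]])

n = unit_cost.size
# Minimise cost subject to x >= demand[s] for every scenario s,
# expressed as -x <= -demand[s]; integrality is relaxed for the sketch.
A_ub = -np.vstack([np.eye(n)] * demand.shape[0])
b_ub = -demand.reshape(-1)
res = linprog(c=unit_cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * n)
stock = np.round(res.x).astype(int)   # -> [5 7 4], total cost 1235 k$
```

A real rotable model would add repair turnaround times and service-level targets; the LP skeleton stays the same.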

Relevance: 30.00%

Abstract:

Liposomes, owing to their biphasic character and their diversity in design, composition and construction, offer a dynamic and adaptable technology for enhancing drug solubility. Starting with equimolar egg-phosphatidylcholine (PC)/cholesterol liposomes, the influence of liposomal composition and surface charge on the incorporation and retention of a model poorly water-soluble drug, ibuprofen, was investigated. Both the incorporation and the release of ibuprofen were influenced by the lipid composition of the multi-lamellar vesicles (MLV), with inclusion of the long-alkyl-chain lipid dilignoceroyl phosphatidylcholine (C24PC) resulting in enhanced ibuprofen incorporation efficiency and retention. The cholesterol content of the liposome bilayer was also shown to influence ibuprofen incorporation, with maximum incorporation efficiency achieved when 4 μmol of cholesterol was present in the MLV formulation. Addition of the anionic lipid dicetylphosphate (DCP) reduced ibuprofen drug loading, presumably due to electrostatic repulsion between the carboxyl group of ibuprofen and the anionic head-group of DCP. In contrast, the addition of 2 μmol of the cationic lipid stearylamine (SA) to the liposome formulation (PC:Chol, 16 μmol:4 μmol) increased ibuprofen incorporation efficiency by approximately 8%. However, further increases of the SA content to 4 μmol and above reduced incorporation by almost 50% compared with liposome formulations excluding the cationic lipid. Environmental scanning electron microscopy (ESEM) was used to follow dynamically the changes in liposome morphology during dehydration, providing an alternative assay of liposome stability. ESEM analysis clearly demonstrated that ibuprofen incorporation improved the stability of PC:Chol liposomes, as evidenced by an increased resistance to coalescence during dehydration. These findings suggest a positive interaction between amphiphilic ibuprofen molecules and the bilayer structure of the liposome.
© 2004 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

This paper presents a goal programming model to optimise the deployment of pyrolysis plants in Punjab, India. Punjab has an abundance of waste straw, and pyrolysis can convert this waste into alternative bio-fuels, which will facilitate the provision of valuable energy services and reduce open-field burning. A goal programming model is outlined and demonstrated in two case-study applications: small-scale operations in villages and large-scale deployment across Punjab's districts. To design the supply chain, optimal decisions on the location, size and number of plants, the downstream energy applications and the feedstocks processed are made simultaneously, based on stakeholder requirements for capital cost, payback period and the production cost of bio-oil and electricity. The model combines quantitative data obtained from primary research with qualitative data gathered from farmers and potential investors. The Punjab district of Fatehgarh Sahib is found to be the ideal location in which to initially utilise pyrolysis technology. We conclude that goal programming is an improvement over the more conventional methods used in the literature for project planning in the field of bio-energy. The model and findings developed from this study will be particularly valuable to investors, plant developers and municipalities interested in waste-to-energy in India and elsewhere. © 2014 Elsevier Ltd. All rights reserved.
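The paper's actual goals and data are not reproduced here, but the mechanics of goal programming can be sketched: each stakeholder target (capital budget, bio-oil output, ...) becomes a soft constraint with deviation variables, and only the unwanted deviations are penalised. All coefficients below are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

# Decision variables: x1, x2 = capacities of two plant types, plus
# deviation variables u1, v1 (under/over capital budget) and
# u2, v2 (under/over bio-oil target). All values are hypothetical.
# Capital goal: 2*x1 + 5*x2 + u1 - v1 = 20   (budget of 20 M$)
# Output goal:  1*x1 + 3*x2 + u2 - v2 = 12   (target of 12 kt bio-oil)
A_eq = np.array([[2.0, 5.0, 1.0, -1.0, 0.0, 0.0],
                 [1.0, 3.0, 0.0, 0.0, 1.0, -1.0]])
b_eq = np.array([20.0, 12.0])

# Penalise only overspend (v1) and output shortfall (u2)
c = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 0.0])

res = linprog(c=c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
# Both goals are attainable here, so the optimum drives the
# penalised deviations to zero.
```

Weighting the deviations differently is how stakeholder priorities (e.g. payback period over capital cost) enter the model.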

Relevance: 30.00%

Abstract:

Lyophilisation, or freeze drying, is the preferred dehydration method for pharmaceuticals liable to thermal degradation. Most biologics are unstable in aqueous solution, and freeze drying may be used to prolong their shelf life. Lyophilisation is, however, expensive, and considerable work has been aimed at reducing its cost. This thesis is motivated by the potential cost savings foreseen with the adoption of a cost-efficient bulk-drying approach for large and small molecules. Initial studies identified formulations that adapted well to bulk drying and to further powder-handling requirements downstream in production. Low-cost techniques were used to disrupt large dried cakes into powder, while the effect of carrier-agent concentration on powder flowability was investigated using standard pharmacopoeia methods. This revealed the superiority of crystalline mannitol over amorphous sucrose matrices and established that the cohesive, very poorly flowing nature of freeze-dried powders was a potential barrier to success. Powder characterisation showed that increased powder densification was mainly responsible for significant improvements in flow behaviour, and an initial bulking-agent concentration of 10-15 %w/v was recommended. Further optimisation studies evaluated the effects of freezing rate and thermal treatment on powder flow behaviour. Slow cooling (0.2 °C/min) with a -25 °C annealing hold (2 h) provided adequate mechanical strength and densification at 0.5-1 M mannitol concentrations. Stable bulk powders require transfer into either final vials or intermediate storage closures. The targeted dosing of powder formulations using volumetric and gravimetric powder-dispensing systems was evaluated using Immunoglobulin G (IgG), Lactate Dehydrogenase (LDH) and beta-galactosidase models. Final protein content uniformity in dosed vials was assessed using activity and protein-recovery assays, drawing conclusions from deviations and pharmacopoeia acceptance values. A correlation between very poor flowability (p<0.05), solute concentration, dosing time and accuracy was revealed. LDH and IgG lyophilised in 0.5 M and 1 M mannitol passed pharmacopoeia acceptance-value criteria (0.1-4), while formulations with micro-collapse showed the best dose accuracy (0.32-0.4% deviation). Bulk mannitol content above 0.5 M provided no additional benefit to dosing accuracy or content uniformity of dosed units. This study identified important considerations, including the type of protein, annealing, the cake-disruption process, the physical form of the phases present and humidity control, and recommended gravimetric transfer as optimal for dispensing powder. Dosing lyophilised powders from bulk was demonstrated to be practical, time-efficient and economical, and met regulatory requirements in the cases studied. Finally, the use of a new non-destructive technique, X-ray micro-computed tomography (MCT), was explored for cake and particle characterisation. These studies demonstrated good correlation with traditional gas porosimetry (R² = 0.93) and with morphology studies using microscopy. Flow characterisation from sample sizes of less than 1 mL was demonstrated using three-dimensional X-ray quantitative image analysis. A platinum-mannitol dispersion model revealed a relationship between freezing rate, ice-nucleation sites and variations in homogeneity between the top and bottom segments of a formulation.
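The pharmacopoeia acceptance-value criterion mentioned above works roughly as follows for a ten-unit content-uniformity test; the k = 2.4 constant and the 98.5-101.5% reference band are the commonly quoted USP-style values for n = 10 and should be checked against the current chapter. The assay data are invented for illustration.

```python
import statistics

def acceptance_value(contents, k=2.4, lo=98.5, hi=101.5):
    """USP-style acceptance value AV = |M - mean| + k*s for content
    uniformity (percent of label claim); smaller is better, and the
    usual pass criterion is AV <= 15."""
    mean = statistics.fmean(contents)
    s = statistics.stdev(contents)        # sample standard deviation
    m = min(max(mean, lo), hi)            # reference value M clamped to band
    return abs(m - mean) + k * s

vials = [99.0, 101.0, 100.5, 98.8, 100.2, 99.6, 100.9, 99.3, 100.0, 100.7]
av = acceptance_value(vials)              # ~1.91, comfortably below 15
```

This is how dosed vials in the study would be judged against an acceptance value from deviation statistics.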

Relevance: 30.00%

Abstract:

A numerical model for studying the performance of polymer optical fibre-based interferometric sensors is presented. The strain sensitivity of Fabry-Perot and two-beam interferometric sensors is investigated by varying the physical and optical properties corresponding to frequently used wavelengths. The developed model was used to identify the regimes in which these devices offer enhanced performance over their silica counterparts when used for stress sensing. © (2015) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

Relevance: 30.00%

Abstract:

This paper describes the use of a formal optimisation procedure to optimise a plug-in hybrid electric bus in two case studies, each targeting a different performance criterion: minimum journey cost and maximum battery life. The approach is to choose a commercially available vehicle and seek to improve its performance by varying key design parameters. Central to this approach is the development of a representative backward-facing model of the vehicle in MATLAB/Simulink, along with appropriate optimisation objective and penalty functions, the penalty functions being the margin by which a particular design fails to meet the performance specification. The model is validated against data collected from an actual vehicle and is used to estimate the vehicle performance parameters in a model-in-the-loop process within an optimisation routine. For the purposes of this paper, the journey cost or battery life over a drive cycle is optimised while other performance indices are met or exceeded. Among the available optimisation methods, Powell's method and simulated annealing are adopted. The results show this method to be a valid alternative modelling approach to vehicle powertrain optimisation. © 2012 IEEE.
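The paper's Simulink model cannot be reproduced here, but the penalty-function idea pairs naturally with simulated annealing and can be sketched with a toy stand-in: a hypothetical journey-cost surface over battery size and final-drive ratio, penalised by the margin by which an acceleration specification is missed. All functions and constants are invented.

```python
from scipy.optimize import dual_annealing

def journey_cost(x):
    battery_kwh, drive_ratio = x
    return 0.6 * battery_kwh + 2.0 * drive_ratio   # toy cost surface

def accel_time(x):
    battery_kwh, drive_ratio = x
    return 300.0 / (battery_kwh * drive_ratio)     # toy acceleration time, s

SPEC = 10.0   # performance specification: accelerate within 10 s

def objective(x):
    # Penalty = margin by which the design misses the specification
    return journey_cost(x) + 1e3 * max(0.0, accel_time(x) - SPEC)

bounds = [(10.0, 60.0), (1.0, 10.0)]               # battery kWh, drive ratio
res = dual_annealing(objective, bounds, seed=42)
# The optimum sits on the constraint boundary (battery_kwh * ratio ~= 30)
```

In the paper's model-in-the-loop setup, `objective` would invoke the validated vehicle model rather than these toy formulae.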

Relevance: 30.00%

Abstract:

The multiple-input multiple-output (MIMO) technique can be used to improve the performance of ad hoc networks. Various medium access control (MAC) protocols with multiple contention slots have been proposed to exploit spatial multiplexing and increase the transport throughput of MIMO ad hoc networks. However, the existence of multiple request-to-send/clear-to-send (RTS/CTS) contention slots represents a severe overhead that limits the improvement in transport throughput achieved by spatial multiplexing. In addition, when the number of contention slots is fixed, the efficiency of RTS/CTS contention is affected by the transmitting power of the network nodes. In this study, a joint optimisation scheme over both the transmitting power and the number of contention slots for maximising transport throughput is presented. This includes the establishment of an analytical model of a simplified MAC protocol with multiple contention slots, the derivation of transport throughput as a function of both transmitting power and the number of contention slots, and an optimisation process based on the derived transport throughput formula. The analytical results, verified by simulation, show that much higher transport throughput can be achieved using the proposed joint optimisation scheme than in the non-optimised cases and the results previously reported.
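The study's analytical throughput model is not reproduced here; the sketch below uses an invented stand-in with the same qualitative trade-offs (more slots raise the handshake success probability but consume channel time; more power helps each handshake but suppresses spatial reuse) and locates the joint optimum by exhaustive search.

```python
import math

def transport_throughput(power, slots):
    """Toy stand-in for the paper's model (all shaping constants invented)."""
    q = 1.0 - math.exp(-0.8 * power)        # per-slot RTS/CTS success prob.
    p_win = 1.0 - (1.0 - q) ** slots        # success in at least one slot
    overhead = max(0.0, 1.0 - 0.06 * slots) # contention slots consume airtime
    reuse = 1.0 / (1.0 + 0.5 * power)       # high power blocks concurrent links
    return p_win * overhead * reuse

powers = [0.2 * k for k in range(1, 51)]    # candidate powers 0.2 .. 10.0
slot_counts = range(1, 13)                  # candidate numbers of slots
best = max(((transport_throughput(p, m), p, m)
            for p in powers for m in slot_counts))
throughput_opt, power_opt, slots_opt = best
```

The paper derives its optimum analytically from the throughput formula; the exhaustive search here just illustrates that a joint (power, slots) optimum exists.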

Relevance: 30.00%

Abstract:

From 1992 to 2012, 4.4 billion people were affected by disasters, with almost 2 trillion USD in damages and 1.3 million people killed worldwide. The increasing threat of disasters stresses the need to provide solutions for the challenges faced by disaster managers, such as the logistical deployment of the resources required to provide relief to victims. The location of emergency facilities, stock prepositioning, evacuation, inventory management, resource allocation and relief distribution have been identified as directly affecting the relief provided to victims during a disaster. Managing these factors appropriately is critical to reducing suffering. Disaster management commonly attracts several organisations working alongside each other and sharing resources to cope with the emergency. Coordinating these agencies is a complex task, but there is little research considering multiple organisations, and none that actually optimises the number of actors required to avoid shortages and convergence. The aim of this research is to develop a system for disaster management based on a combination of optimisation techniques and geographical information systems (GIS) to aid multi-organisational decision-making. An integrated decision system was created comprising a cartographic model implemented in GIS to discard floodable facilities, combined with two models focused on optimising the decisions regarding the location of emergency facilities, stock prepositioning, the allocation of resources and relief distribution, along with the number of actors required to perform these activities. Three in-depth case studies in Mexico were conducted, gathering information from different organisations. The cartographic model proved to reduce the risk of selecting unsuitable facilities. The preparedness and response models showed the capacity to optimise the decisions and the number of organisations required for logistical activities, pointing towards an excess of actors involved in all cases. The system as a whole demonstrated its capacity to provide integrated support for disaster preparedness and response, and revealed room for improvement for Mexican organisations in flood management.
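The thesis combines GIS screening with optimisation models; the facility-location step alone can be illustrated with a tiny maximal-covering sketch. Sites, coordinates and populations below are invented, and flood-prone candidates are assumed already discarded by the cartographic model.

```python
from itertools import combinations
from math import dist

# Hypothetical communities: (x, y) position in km and population to serve
communities = {"A": ((0, 0), 500), "B": ((4, 0), 300),
               "C": ((8, 0), 400), "D": ((4, 3), 200)}
# Candidate emergency-facility sites (flood-safe after GIS screening)
sites = {"s1": (1, 0), "s2": (4, 0), "s3": (7, 0), "s4": (4, 2)}
RADIUS = 3.0          # service radius, km
N_FACILITIES = 2      # facilities we can afford to open

def covered_population(chosen):
    """Total population within RADIUS of at least one chosen facility."""
    return sum(pop for (xy, pop) in communities.values()
               if any(dist(xy, sites[s]) <= RADIUS for s in chosen))

best_sites = max(combinations(sites, N_FACILITIES), key=covered_population)
# -> ('s1', 's3'), covering 1200 of the 1400 people
```

The thesis's models additionally optimise stock levels, distribution and the number of participating organisations on top of this location decision.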

Relevance: 30.00%

Abstract:

The Model for Prediction Across Scales (MPAS) is a novel set of Earth system simulation components and consists of an atmospheric model, an ocean model and a land-ice model. Its distinct features are the use of unstructured Voronoi meshes and C-grid discretisation to address shortcomings of global models on regular grids and of limited-area models nested in a forcing data set, with respect to parallel scalability, numerical accuracy and physical consistency. This concept allows one to include the feedback of regional land-use information on weather and climate at local and global scales in a consistent way, which is impossible to achieve with traditional limited-area modelling approaches. Here, we present an in-depth evaluation of MPAS with regard to the technical aspects of performing model runs and scalability for three medium-size meshes on four different high-performance computing (HPC) sites with different architectures and compilers. We uncover model limitations and identify new aspects of model optimisation that are introduced by the use of unstructured Voronoi meshes. We further demonstrate the model performance of MPAS in terms of its capability to reproduce the dynamics of the West African monsoon (WAM) and its associated precipitation in a pilot study. Constrained by available computational resources, we compare 11-month runs for two meshes with observations and a reference simulation from the Weather Research and Forecasting (WRF) model. We show that MPAS can reproduce the atmospheric dynamics on global and local scales in this experiment, but identify a precipitation excess for the West African region. Finally, we conduct extreme scaling tests on a global 3 km mesh with more than 65 million horizontal grid cells on up to half a million cores. We discuss necessary modifications of the model code to improve its parallel performance, both in general and specific to the HPC environment. We confirm good scaling (70 % parallel efficiency or better) of the MPAS model and provide numbers on the computational requirements for experiments with the 3 km mesh. In doing so, we show that global, convection-resolving atmospheric simulations with MPAS are within reach of current and next generations of high-end computing facilities.
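Parallel efficiency in such scaling tests is conventionally computed relative to the smallest run; the timings below are invented for illustration, not MPAS results.

```python
def parallel_efficiency(base_cores, base_time, cores, time):
    """Relative speed-up per core versus the smallest (base) run:
    1.0 is perfect scaling, 0.7 matches the 70 % threshold quoted above."""
    return (base_cores * base_time) / (cores * time)

base = (1024, 800.0)                        # cores, seconds per model day
runs = [(2048, 410.0), (4096, 215.0), (8192, 120.0)]
effs = [parallel_efficiency(*base, c, t) for c, t in runs]
# -> roughly [0.98, 0.93, 0.83]; all above the 0.7 threshold
```

Efficiency typically decays as core counts grow, which is why the 70 % floor at half a million cores is a strong result.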

Relevance: 30.00%

Abstract:

Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of the design variables [1]. The approach in this paper links the adjoint surface sensitivities (gradient of objective function with respect to the surface movement) with the parametric design velocities (movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables or the parameterisation scheme used for the model to be optimised plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, preserving the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly in downstream applications, including manufacturing and process planning.
This paper presents an approach to optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the "Parametric Design Velocity" is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advancement in capability and robustness over that described by Robinson et al. [3]. The process can be easily integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous ("real value") parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, so long as it has an API which provides access to the values of the parameters controlling the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation procedure involves calculating the geometric movement along the normal direction between two discrete representations of the original and perturbed geometries. Parametric design velocities can then be linked directly with adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
The optimisation of a flow problem is presented, in which the power dissipated by the flow in an automotive air duct is to be reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost-function gradients. A line-search algorithm is then used to update the design variables and proceed with the optimisation process.
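The gradient assembly described above can be sketched in a few lines: finite-difference the surface mesh before and after a parameter perturbation, project the movement onto the face normals to obtain design velocities, then contract them with the adjoint surface sensitivities over the surface. The arrays below are tiny invented stand-ins for real CAD/CFD data.

```python
import numpy as np

def design_velocity(x_base, x_perturbed, normals, dp):
    """Normal boundary movement per unit parameter change, one value per
    surface point: V = ((x' - x) . n) / dp."""
    return ((x_perturbed - x_base) * normals).sum(axis=1) / dp

def cad_gradient(adjoint_sens, velocity, area):
    """dJ/dp ~= sum over the surface of (dJ/dx_n) * V * dS."""
    return float((adjoint_sens * velocity * area).sum())

# Three surface points that move 0.01 along z for a 0.1 parameter step
x0 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
x1 = x0 + np.array([0.0, 0.0, 0.01])
normals = np.tile([0.0, 0.0, 1.0], (3, 1))
vel = design_velocity(x0, x1, normals, dp=0.1)      # -> [0.1, 0.1, 0.1]
grad = cad_gradient(np.array([2.0, 3.0, 4.0]), vel, np.full(3, 0.5))
# grad = 0.1 * 0.5 * (2 + 3 + 4) = 0.45
```

In practice the two discrete geometries come from re-exporting the CAD model after perturbing one feature parameter, exactly as the finite-difference procedure above describes.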

Relevance: 30.00%

Abstract:

Steady-state computational fluid dynamics (CFD) simulations are an essential tool in the design process of centrifugal compressors. Whilst global parameters, such as pressure ratio and efficiency, can be predicted with reasonable accuracy, the accurate prediction of detailed compressor flow fields is a much more significant challenge. Much of the inaccuracy is associated with the incorrect selection of turbulence model. The need for a quick turnaround in simulations during the design optimisation process also demands that the turbulence model selected be robust and numerically stable, with short simulation times. In order to assess the accuracy of a number of turbulence model predictions, the current study used an exemplar open CFD test case, the centrifugal compressor 'Radiver', to compare the results of three eddy-viscosity models and two Reynolds-stress-type models. The turbulence models investigated were (i) the Spalart-Allmaras (SA) model, (ii) the Shear Stress Transport (SST) model, (iii) a modification of the SST model denoted SST-curvature correction (SST-CC), (iv) the Reynolds stress model of Speziale, Sarkar and Gatski (RSM-SSG), and (v) the turbulence-frequency-formulated Reynolds stress model (RSM-ω). Each was found to be in good agreement with the experiments (below 2% discrepancy) with respect to total-to-total parameters at three different operating conditions. At off-design conditions, however, local flow-field differences were observed between the models, with the SA model giving particularly poor predictions of local flow structures. The SST-CC model better predicted the curved rotating flows in the impeller, while the RSM-ω was better for the wake and separated flow in the diffuser. The SST model showed reasonably stable, robust and time-efficient capability to predict both global and local flow features.