Abstract:
The increasing integration of Renewable Energy Resources (RER) and the role of Electric Energy Storage (EES) in distribution systems have created interest in energy management strategies. EES has become a suitable resource for managing energy consumption and generation in the smart grid. Optimal scheduling of EES can also maximize a retailer's profit by introducing energy time-shift opportunities. This paper proposes a new strategy for scheduling EES that reduces the impact of electricity market price and load uncertainty on retailers' profit. The proposed strategy optimizes the cost of purchasing energy with the objective of minimizing surplus energy cost under hedging contracts. A case study demonstrates the impact of the proposed strategy on retailers' financial benefit.
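The time-shift mechanism the abstract relies on can be sketched with a toy heuristic (an illustration only, not the paper's scheduling model; all numbers are invented): buy extra energy in the cheapest hour to charge the EES, then discharge it in the most expensive hour, covering the round-trip efficiency loss.

```python
# Toy sketch of EES energy time-shift for a retailer (not the paper's model).
# Charge in the cheapest hour, discharge in the dearest; the round-trip
# efficiency loss is covered by buying slightly more than is discharged.

def time_shift_cost(prices, demand, capacity, efficiency=0.9):
    """Return (baseline cost, cost with a single charge/discharge shift)."""
    baseline = sum(p * d for p, d in zip(prices, demand))
    cheap = min(range(len(prices)), key=prices.__getitem__)
    dear = max(range(len(prices)), key=prices.__getitem__)
    # Discharge cannot exceed the demand in the expensive hour.
    discharged = min(capacity * efficiency, demand[dear])
    stored = discharged / efficiency  # energy bought to cover losses
    shifted = baseline + prices[cheap] * stored - prices[dear] * discharged
    return baseline, shifted
```

With hourly prices [20, 50, 90, 30] per MWh, a flat 1 MWh demand and a 1 MWh store, the shift lowers the purchase cost whenever the price spread between the charge and discharge hours exceeds the efficiency penalty.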
Abstract:
This paper presents the results of a research project examining the capabilities and challenges of two distinct but not mutually exclusive approaches to in-service bridge assessment: visual inspection and installed monitoring systems. In this study, the intended functionality of each approach was evaluated on its ability to identify potential structural damage and to provide decision-making support. Inspection and monitoring are compared in terms of their functional performance, cost, and barriers (real and perceived) to implementation. Both methods have strengths and weaknesses across the metrics analyzed, and a hybrid evaluation technique that adopts both approaches is likely to optimize the efficiency of condition assessment and ultimately lead to better decision making.
Abstract:
Purpose: Traditional construction planning relies upon the critical path method (CPM) and bar charts. Both methods suffer from visualization and timing issues that could be addressed by 4D technology geared specifically to the needs of the construction industry. This paper proposes a new construction planning approach based on simulation using a game engine.
Design/methodology/approach: A 4D automatic simulation tool was developed and a case study was carried out. The proposed tool was used to simulate and optimize the plans for the installation of a temporary platform for piling in a civil construction project in Hong Kong. The tool simulated the construction process with three variables: (1) equipment, (2) site layout and (3) schedule. Through this, the construction team was able to repeatedly simulate a range of options.
Findings: The results indicate that the proposed approach provides a user-friendly 4D simulation platform for the construction industry and that the simulation can identify the solution sought by the construction team. The paper also identifies directions for further development of 4D technology as an aid in construction planning and decision-making.
Research limitations/implications: The tests of the tool are limited to a single case study, and further research is needed to test the use of game engines for planning in different construction projects to verify its effectiveness. Future research could also explore alternative game engines and compare their performance and results.
Originality/value: The authors propose the use of a game engine to simulate the construction process based on resources, working space and construction schedule. The developed tool can be used by end-users without simulation experience.
Abstract:
Intermittent microwave convective drying (IMCD) is an advanced technology that improves both energy efficiency and food quality in drying. Modelling of IMCD is essential to understand the physics of this advanced drying process and to optimize the microwave power level and intermittency during drying. However, there is still a lack of modelling studies dedicated to IMCD. In this study, a mathematical model for IMCD was developed and validated with experimental data. The model showed that the interior temperature of the material was higher than the surface in IMCD, and that the temperatures fluctuated and redistributed due to the intermittency of the microwave power. This redistribution of temperature could significantly contribute to the improvement of product quality during IMCD. Limitations when using Lambert's Law for microwave heat generation were identified and discussed.
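Lambert's Law, whose limitations the study discusses, models the microwave power flux as decaying exponentially with depth, giving a volumetric heat generation q(z) = 2*alpha*P0*exp(-2*alpha*z). A minimal sketch of this term and an on/off intermittency schedule (illustrative constants, not the paper's validated model):

```python
import math

# Illustrative sketch (not the paper's validated IMCD model): Lambert's-law
# heat generation plus an on/off microwave schedule mimicking intermittency.
# alpha is the field attenuation constant; the values used are made up.

def lambert_heat(z, p0, alpha):
    """Volumetric heat generation at depth z: q(z) = 2*alpha*p0*exp(-2*alpha*z)."""
    return 2.0 * alpha * p0 * math.exp(-2.0 * alpha * z)

def surface_power(t, period_on, period_off, p0):
    """Incident power flux at time t under on/off cycling of the magnetron."""
    cycle = period_on + period_off
    return p0 if (t % cycle) < period_on else 0.0
```

The temperature redistribution the model reports occurs during the off periods, when `surface_power` is zero and conduction evens out the internal hot spots.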
Abstract:
Close attention to technical quality or image optimization in transthoracic echocardiography (TTE) is important for the acquisition of high-quality diagnostic images and to ensure that measurements are accurately performed. For this purpose, the echocardiographer must be familiar with all the controls on the ultrasound machine that can be manipulated to optimize the two-dimensional (2D) images, color Doppler images, and spectral Doppler traces...
Abstract:
A novel and economical experimental technique has been developed to assess industrial aerosol deposition in various idealized porous channel configurations. This examination of aerosol penetration in porous channels will assist engineers in better optimizing designs for various engineering applications. Deposition patterns differ with porosity owing to the geometric configuration of the channel and the superficial inlet velocity. Interestingly, two configurations of similar porosity are found to exhibit significantly higher deposition fractions. Inertial impaction is pronounced at the leading edge of all obstacles, whereas particle build-up is observed at the trailing edge of the obstructions. A qualitative analysis shows that the numerical results are in good agreement with the experimental results.
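The inertial impaction observed at obstacle leading edges is conventionally characterized by the particle Stokes number; a brief sketch with illustrative values (not data from the study):

```python
# Hedged sketch: the Stokes number, the standard dimensionless group governing
# the inertial impaction reported at obstacle leading edges. All inputs below
# are illustrative, not measurements from the study.

def stokes_number(rho_p, d_p, velocity, mu, d_c):
    """Stk = rho_p * d_p**2 * U / (18 * mu * D_c).

    rho_p: particle density (kg/m^3), d_p: particle diameter (m),
    velocity: superficial inlet velocity (m/s), mu: gas viscosity (Pa.s),
    d_c: characteristic obstacle size (m). Stk >> 1 favours impaction.
    """
    return rho_p * d_p ** 2 * velocity / (18.0 * mu * d_c)
```

For a 1 µm unit-density particle at 1 m/s approaching a 1 mm obstacle in air, Stk is of order 10^-3, so such particles largely follow the streamlines; larger or faster particles impact.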
Abstract:
Computational fluid dynamics (CFD) and particle image velocimetry (PIV) are commonly used techniques to evaluate flow characteristics in the development stage of blood pumps. The CFD technique allows rapid changes to pump parameters to optimize pump performance without having to construct a costly prototype model. These techniques were used in the construction of a bi-ventricular assist device (BVAD), which combines the functions of an LVAD and an RVAD in a compact unit. The BVAD consists of two separate chambers with similar impellers, volutes, and inlet and outlet sections. To achieve the required flow characteristics of an average flow rate of 5 l/min at different pressure heads (left: 100 mmHg; right: 20 mmHg), the impellers were set at different rotating speeds. From the CFD results, a six-blade impeller design was adopted for the development of the BVAD. It was also observed that the fluid can flow smoothly through the pump with minimal shear stress and area of stagnation, factors related to haemolysis and thrombosis. Based on the compatible Reynolds number, the flow through the model was calculated for the left and right pumps. As it was not possible to have both the left and right chambers in the experimental model, the left and right pumps were tested separately.
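The Reynolds-number compatibility mentioned for the experimental model can be illustrated by a generic similarity calculation (assumed values and a rotational Reynolds number definition, not the authors' actual procedure or data):

```python
# Illustrative Reynolds-number matching (not the authors' actual numbers):
# to test a scaled pump model or a substitute fluid, the rotational Reynolds
# number Re = rho * N * D**2 / mu is kept equal between prototype and model.

def matched_speed(n_proto, d_proto, rho_proto, mu_proto,
                  d_model, rho_model, mu_model):
    """Model rotating speed that preserves Re = rho * N * D^2 / mu."""
    re = rho_proto * n_proto * d_proto ** 2 / mu_proto
    return re * mu_model / (rho_model * d_model ** 2)
```

Doubling the model impeller diameter at the same fluid properties cuts the required test speed by a factor of four, since Re scales with D squared.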
Abstract:
Carbon nanostructures (CNs) are amongst the most promising biorecognition nanomaterials owing to their unprecedented optical, electrical and structural properties. As such, CNs may be harnessed to tackle the detrimental public health and socio-economic adversities associated with neurodegenerative diseases (NDs); in particular, CNs may be tailored for the specific determination of biomarkers indicative of NDs. However, realizing such a biosensor presents a significant technological challenge: CNs of outstanding quality must be fabricated uniformly to enable highly sensitive detection of biomarkers suspended in complex biological environments. Notably, the versatility of plasma-based techniques for the synthesis and surface modification of CNs may be embraced to optimize biorecognition performance and capabilities. This review surveys recent advances in CN-based biosensors and highlights the benefits of plasma-processing techniques in enabling, enhancing and tailoring the performance and fabrication of CNs, towards biosensors with unparalleled performance for the early diagnosis of NDs, via energy-efficient, environmentally benign and inexpensive approaches.
Abstract:
Objective: To review the outcome of acute liver failure (ALF) and the effect of liver transplantation in children in Australia. Methodology: A retrospective review was conducted of all paediatric patients referred with acute liver failure between 1985 and 2000 to the Queensland Liver Transplant Service, a paediatric liver transplant centre based at the Royal Children's Hospital, Brisbane, and one of three paediatric transplant centres in Australia. Results: Twenty-six patients were referred with ALF. Four patients did not require transplantation and recovered with medical therapy, while two were excluded because of irreversible neurological changes and died. Of the 20 patients considered for transplant, three refused for social and/or religious reasons, leaving 17 patients listed for transplantation. One patient recovered spontaneously and one died before receiving a transplant. Fifteen patients were transplanted, of whom 40% (6/15) were < 2 years old. Sixty-seven per cent (10/15) survived more than 1 month after transplantation and 40% (6/15) survived more than 6 months. There were only four long-term survivors after transplant for ALF (27%). Overall, 27% (6/22) of patients referred with ALF survived. Of the 16 patients who died, 44% (7/16) died from neurological causes, mostly cerebral oedema, although two patients transplanted for valproate hepatotoxicity died from neurological disease despite good graft function. Conclusions: Irreversible neurological disease remains a major cause of death in children with ALF. We recommend better patient selection and early referral and transfer to a transplant centre before the onset of irreversible neurological disease to optimize the outcome of children transplanted for ALF.
Abstract:
Wireless ad hoc networks transmit information from a source to a destination via multiple hops in order to save energy and thus increase the lifetime of battery-operated nodes. The energy savings can be especially significant in cooperative transmission schemes, where several nodes cooperate during one hop to forward the information to the next node along a route to the destination. Finding the best multi-hop transmission policy in such a network, which determines the nodes involved in each hop, is a very important problem, but also a very difficult one, especially when the physical wireless channel behavior is to be accounted for and exploited. We model this optimization problem for randomly fading channels as a decentralized control problem: the channel observations available at each node define the information structure, while the control policy is defined by the power and phase of the signal transmitted by each node. In particular, we consider the problem of computing an energy-optimal cooperative transmission scheme in a wireless network for two different channel fading models: (i) slow fading channels, where the channel gains of the links remain the same over a large number of transmissions, and (ii) fast fading channels, where the channel gains of the links change quickly from one transmission to another. For slow fading, we consider a factored class of policies (corresponding to local cooperation between nodes) and show that the computation of an optimal policy in this class is equivalent to a shortest path computation on an induced graph, whose edge costs can be computed in a decentralized manner using only locally available channel state information (CSI). For fast fading, both CSI acquisition and data transmission consume energy; hence, we need to jointly optimize over both, and we cast this as a large stochastic optimization problem.
We then jointly optimize over a set of CSI functions of the local channel states and a corresponding factored class of control policies.
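The slow-fading result (an optimal factored policy found via a shortest path on an induced graph) can be sketched with Dijkstra's algorithm, taking edge energy here as a target SNR divided by channel gain; the graph, gains and cost model are invented for illustration:

```python
import heapq

# Hedged sketch of the slow-fading reduction: route selection as a shortest
# path, with per-link transmit energy modeled as target_snr / gain (so weaker
# links cost more energy). The topology and gains below are invented.

def min_energy_route(gains, source, dest, target_snr=1.0):
    """Dijkstra over edge energies target_snr / gain. gains: {(u, v): gain}."""
    adj = {}
    for (u, v), g in gains.items():
        adj.setdefault(u, []).append((v, target_snr / g))
    best = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        cost, u = heapq.heappop(heap)
        if u == dest:
            return cost
        if cost > best.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nc = cost + w
            if nc < best.get(v, float("inf")):
                best[v] = nc
                heapq.heappush(heap, (nc, v))
    return float("inf")
```

In the decentralized setting described above, each node would evaluate only the costs of its own outgoing edges from local CSI; the sketch centralizes that bookkeeping for brevity.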
Abstract:
Purpose: The post-illumination pupil response (PIPR) has been quantified in the literature by four metrics. The spectral sensitivity of only one metric is known, and this study quantifies the other three. To optimize the measurement of the PIPR in humans, we also determine the stimulus protocol producing the largest PIPR, the duration of the PIPR, and the metric(s) with the lowest coefficient of variation.
Methods: The consensual pupil light reflex (PLR) was measured with a Maxwellian view pupillometer (35.6° diameter stimulus).
- Experiment 1: Spectral sensitivity of four PIPR metrics [plateau, 6 s, area under curve (AUC) early and late recovery] was determined from a criterion PIPR (n = 2 participants) to a 1 s pulse at five wavelengths (409-592 nm) and fitted with a Vitamin A nomogram (λmax = 482 nm).
- Experiment 2: The PLR was measured in five healthy participants [29 to 42 years (mean = 32.6 years)] as a function of three stimulus durations (1 s, 10 s, 30 s), five irradiances spanning low to high melanopsin excitation levels (retinal irradiance: 9.8 to 14.8 log quanta.cm-2.s-1), and two wavelengths, one with high (465 nm) and one with low (637 nm) melanopsin excitation. Intra- and inter-individual coefficients of variation (CV) were calculated.
Results: The melanopsin (opn4) photopigment nomogram adequately described the spectral sensitivity derived from all four PIPR metrics. The largest PIPR amplitude was observed with 1 s short wavelength pulses (retinal irradiance ≥ 12.8 log quanta.cm-2.s-1). Of the four PIPR metrics, the plateau and 6 s PIPR showed the least intra- and inter-individual CV (≤ 0.2). The maximum duration of the sustained PIPR was 83.4 ± 48.0 s (mean ± SD) for 1 s pulses and 180.1 ± 106.2 s for 30 s pulses (465 nm; 14.8 log quanta.cm-2.s-1).
Conclusions: All current PIPR metrics provide a direct measure of intrinsic melanopsin retinal ganglion cell function.
To measure progressive changes in melanopsin function in disease, we recommend that the intrinsic melanopsin response be measured using a 1 s pulse with high melanopsin excitation and that the PIPR be analyzed with the plateau and/or 6 s metrics. Given that the PIPR can sustain constriction for as long as 3 minutes, our PIPR duration data provide a baseline for the selection of inter-stimulus intervals between consecutive pupil testing sequences.
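The plateau and 6 s metrics recommended above can be illustrated on a pupil-diameter trace. The window choices and percentage convention below are assumptions for illustration, not the authors' exact definitions:

```python
# Hedged illustration of two PIPR metrics: the 6 s PIPR is taken here as the
# constriction 6 s after light offset, and the plateau PIPR as the mean
# constriction over a late redilation window, both as % of baseline diameter.
# The window bounds are assumptions, not the study's definitions.

def pipr_metrics(times, diameters, baseline, offset_t,
                 plateau_window=(10.0, 30.0)):
    """Return (six_second_pipr, plateau_pipr) in % constriction from baseline."""
    def pct(d):
        return 100.0 * (baseline - d) / baseline
    # diameter at the sample closest to 6 s after stimulus offset
    six_idx = min(range(len(times)),
                  key=lambda i: abs(times[i] - (offset_t + 6.0)))
    six = pct(diameters[six_idx])
    lo, hi = offset_t + plateau_window[0], offset_t + plateau_window[1]
    window = [pct(d) for t, d in zip(times, diameters) if lo <= t <= hi]
    plateau = sum(window) / len(window)
    return six, plateau
```

A larger value of either metric indicates a stronger sustained melanopsin-driven constriction.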
Abstract:
Fluid bed granulation is a key pharmaceutical process which improves many of the powder properties required for tablet compression. The fluid bed granulation process comprises dry mixing, wetting and drying phases. Granules of high quality can be obtained by understanding and controlling the critical process parameters through timely measurements. Process analytical technologies (PAT) encompass physical process measurements and particle size data of a fluid bed granulator, analysed in an integrated manner. Recent regulatory guidelines strongly encourage the pharmaceutical industry to apply scientific and risk management approaches to the development of a product and its manufacturing process. The aim of this study was to utilise PAT tools to increase the process understanding of fluid bed granulation and drying. Inlet air humidity levels and granulation liquid feed affect powder moisture during fluid bed granulation, and moisture influences many process, granule and tablet qualities. The approach in this thesis was to identify sources of variation that are mainly related to moisture, to determine correlations and relationships, and to utilise the PAT and design space concepts for fluid bed granulation and drying. Monitoring the material behaviour in a fluidised bed has traditionally relied on the observational ability and experience of an operator. There has been a lack of good criteria for characterising material behaviour during the spraying and drying phases, even though the entire performance of the process and end-product quality depend on it. The granules were produced in an instrumented bench-scale Glatt WSG5 fluid bed granulator. The effects of inlet air humidity and granulation liquid feed on the temperature measurements at different locations of the fluid bed granulator system were determined. This revealed dynamic changes in the measurements and enabled identification of the optimal sites for process control.
The moisture originating from the granulation liquid and inlet air affected the temperature of the mass and the pressure difference over the granules. Moreover, the effects of inlet air humidity and granulation liquid feed rate on granule size were evaluated, and compensatory techniques were used to optimize particle size. Various end-point indication techniques for drying were compared. The ∆T method, which is based on thermodynamic principles, eliminated the effects of humidity variations and gave the most precise estimate of the drying end-point. The influence of fluidisation behaviour on drying end-point detection was determined; the feasibility of the ∆T method, and thus the similarity of end-point moisture contents, was found to depend on the variation in fluidisation between manufacturing batches. A novel parameter describing the behaviour of material in a fluid bed was developed. It was calculated from the process air flow rate and turbine fan speed and compared with the fluidisation behaviour and the particle size results. Design space process trajectories for smooth fluidisation, based on the fluidisation parameters, were determined; with this design space it is possible to avoid both excessive fluidisation and improper fluidisation with bed collapse. Furthermore, various process phenomena and failure modes were observed with the in-line particle size analyser: both rapid increases and decreases in granule size could be monitored in a timely manner. The fluidisation parameter and the pressure difference over the filters were also found to reflect particle size once the granules had formed. The various physical parameters evaluated in this thesis give valuable information on fluid bed process performance and increase process understanding.
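A ∆T-style end-point check can be sketched as follows. This is a hedged illustration of the general thermodynamic principle only, not the thesis's exact criterion: evaporative cooling holds the product below the inlet air temperature while free moisture remains, so a small, stable inlet-minus-product temperature difference can flag the drying end-point.

```python
# Hedged sketch of a Delta-T style drying end-point check (not the thesis's
# exact criterion): flag the end-point when the inlet-minus-product
# temperature difference stays below a threshold for several samples.

def drying_endpoint(inlet_temps, product_temps, threshold=2.0, hold=3):
    """Index where inlet - product first stays below `threshold` for `hold` samples."""
    run = 0
    for i, (ti, tp) in enumerate(zip(inlet_temps, product_temps)):
        run = run + 1 if (ti - tp) < threshold else 0
        if run >= hold:
            return i - hold + 1  # first sample of the qualifying run
    return None  # end-point not reached in the record
```

The `hold` requirement guards against flagging a transient convergence caused by a brief fluidisation disturbance.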
Abstract:
This thesis discusses the use of sub- and supercritical fluids as the medium in extraction and chromatography. Super- and subcritical extraction was used to separate essential oils from the herbal plant Angelica archangelica. The effect of extraction parameters was studied, and sensory analyses of the extracts were performed by an expert panel; the results of the sensory analyses were compared to the analytically determined contents of the extracts. Sub- and supercritical fluid chromatography (SFC) was used to separate and purify high-value pharmaceuticals. Chiral SFC was used to separate the enantiomers of racemic mixtures of pharmaceutical compounds, with very low (cryogenic) temperatures applied to substantially enhance the separation efficiency. The thermodynamic aspects affecting the resolving ability of chiral stationary phases are briefly reviewed. The process production rate, a key factor in industrial chromatography, was optimized by empirical multivariate methods. A general linear model was used to optimize the separation of omega-3 fatty acid ethyl esters from esterified fish oil by reversed-phase SFC. The chiral separation of racemic mixtures of guaifenesin and ferulic acid dimer ethyl ester was optimized using the response surface method with three variables at a time. It was found that by optimizing four variables (temperature, load, flow rate and modifier content), the production rate of the chiral resolution of racemic guaifenesin by cryogenic SFC could be increased severalfold compared with published results for a similar application. A novel pressure-compensated design of industrial high-pressure chromatographic column was introduced, using technology developed in building the deep-sea submersibles Mir 1 and Mir 2. A demonstration SFC plant was built, and the immunosuppressant drug cyclosporine A was purified to meet the requirements of the US Pharmacopoeia.
A smaller semi-pilot-scale column of similar design was used for the cryogenic chiral separation of the aromatase inhibitor Finrozole for use in its phase 2 development.
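The response-surface optimization used for the chiral separations can be illustrated in miniature (one variable instead of three, with made-up data): fit a quadratic to the measured production rate and take its stationary point as the predicted optimum setting.

```python
# Toy response-surface sketch (one variable for brevity; the thesis optimized
# three at a time, and the data here are invented): least-squares quadratic
# fit y = b0 + b1*x + b2*x^2, optimum at the stationary point x* = -b1/(2*b2).

def quadratic_optimum(xs, ys):
    """Fit a quadratic via the 3x3 normal equations and return x*."""
    s = [sum(x ** k for x in xs) for k in range(5)]            # power sums S0..S4
    t = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # Augmented normal-equation matrix [A | t], solved by Gauss-Jordan.
    a = [[s[0], s[1], s[2], t[0]],
         [s[1], s[2], s[3], t[1]],
         [s[2], s[3], s[4], t[2]]]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(a[r][col]))  # partial pivot
        a[col], a[piv] = a[piv], a[col]
        for r in range(3):
            if r != col:
                f = a[r][col] / a[col][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    b0, b1, b2 = (a[i][3] / a[i][i] for i in range(3))
    return -b1 / (2.0 * b2)
```

With a concave response (b2 < 0) the stationary point is the predicted maximum of the production rate; in practice the fitted optimum is then confirmed experimentally.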
Abstract:
An analytical method is proposed to optimize the small-signal optical gain of CO2-N2 gasdynamic lasers (GDL) employing two-dimensional (2D) wedge nozzles. Following our earlier work, the equations governing the steady, inviscid, quasi-one-dimensional flow in the wedge nozzle of the GDL are reduced to a universal form so that their solutions depend on a single unifying parameter. These equations are solved numerically to obtain similar solutions for the various flow quantities, which are subsequently used to optimize the small-signal gain. The corresponding optimum values, such as reservoir pressure and temperature and 2D nozzle area ratio, are also predicted and graphed for a wide range of laser gas compositions, with either H2O or He as the catalyst. The large number of graphs presented may be used to obtain the optimum small-signal gain for a wide range of laser compositions without further computation.