Abstract:
Ethylene-propylene rubber (EPR) functionalised with glycidyl methacrylate (GMA) (f-EPR) during melt processing in the presence of a co-monomer, such as trimethylolpropane triacrylate (Tris), was used to promote compatibilisation in blends of polyethylene terephthalate (PET) and f-EPR, and their characteristics were compared with those of PET/f-EPR reactive blends in which the f-EPR was functionalised with GMA via a conventional free-radical melt reaction (in the absence of a co-monomer). Binary blends of PET and f-EPR (with the two types of f-EPR prepared either in the presence or absence of the co-monomer) at various compositions (80/20, 60/40 and 50/50 w/w%) were prepared in an internal mixer. The blends were evaluated by their rheology (changes in torque during melt processing and blending, reflecting melt viscosity, and their melt flow rate), morphology (scanning electron microscopy, SEM), dynamic mechanical properties (DMA), Fourier transform infrared (FTIR) analysis, and a solubility (Molau) test. The reactive blends (PET/f-EPR) showed a marked increase in melt viscosity in comparison with the corresponding physical (PET/EPR) blends (higher torque during melt blending), the extent of which depended on the amount of homopolymerised GMA (poly-GMA) present and the level of GMA grafting in the f-EPR. This increase is most probably accounted for by a reaction between the epoxy groups of GMA and the hydroxyl/carboxyl end groups of PET. Morphological examination by SEM showed a large improvement in phase dispersion, indicating reduced interfacial tension and compatibilisation, in both reactive blends, with the Tris-GMA-based blends showing an even finer morphology (these blends are characterised by the absence of poly-GMA and a higher level of grafted GMA in their f-EPR component in comparison with the conventional GMA-based blends).
Examination of the DMA results for the reactive blends at different compositions showed that in both cases there was a smaller separation between the glass transition temperatures (Tg) than in the corresponding physical blends, which pointed to some interaction or chemical reaction between f-EPR and PET. The DMA results also showed that the shifts in the Tgs of the Tris-GMA-based blends were slightly larger than those of the conventional GMA-based blends. However, the overall tendency of the Tgs to approach each other was not significantly different in the two cases (e.g. at a 60/40 ratio the former blend shifted by up to 4.5 °C in each direction, whereas in the latter blend the shifts were about 3 °C). These results suggest that in these blends the SEM and DMA analyses are probing morphological details that cannot be directly correlated. Evidence for the formation of an in situ graft copolymer between the f-EPR and PET during reactive blending was clearly provided by FTIR analysis of the separated phases of the Tris-GMA-based reactive blends, and a positive Molau test pointed to graft copolymerisation at the interface. A mechanism for the interfacial reaction during the reactive blending process is proposed.
Abstract:
The re-entrant flow shop scheduling problem (RFSP) is NP-hard and has attracted the attention of both researchers and industry. Current approaches attempt to minimize the makespan of the RFSP without considering the interdependency between resource constraints and re-entrant probability. This paper proposes a multi-level genetic algorithm (GA) that includes the correlated re-entrant possibility and production mode in a multi-level chromosome encoding. A repair operator is incorporated into the algorithm to revise infeasible solutions by resolving resource conflicts. With the objective of minimizing the makespan, ANOVA is used to fine-tune the parameter settings of the GA. Experiments show that the proposed approach is more effective at finding near-optimal schedules than a simulated annealing algorithm for both small-size and large-size problems. © 2013 Published by Elsevier Ltd.
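For context on the makespan objective, the standard completion-time recurrence for an ordinary (non-re-entrant) permutation flow shop can be sketched as follows. The `makespan` helper and the 3-job, 2-machine processing times are illustrative inventions, not the paper's RFSP encoding or its chromosome representation:

```python
def makespan(order, p):
    """Completion time of the last job on the last machine, using
    C[j][m] = p[j][m] + max(C[j-1][m], C[j][m-1])."""
    n_machines = len(p[0])
    prev = [0] * n_machines  # completion times of the previous job on each machine
    for j in order:
        cur = [0] * n_machines
        ready = 0  # completion of this job on the previous machine
        for m in range(n_machines):
            cur[m] = max(prev[m], ready) + p[j][m]
            ready = cur[m]
        prev = cur
    return prev[-1]

# Illustrative data: 3 jobs x 2 machines, p[j][m] = processing time
p = [[3, 2], [1, 4], [2, 2]]
print(makespan([0, 1, 2], p))  # -> 11
print(makespan([1, 0, 2], p))  # -> 9 (a better sequence, as a GA would seek)
```

A GA such as the one proposed in the paper searches over such job orders (plus the re-entrant and resource-assignment levels) for the sequence minimizing this value.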
Abstract:
* The research is partly supported by the INTAS 04-77-7173 project, http://www.intas.be
Abstract:
Clogging is the main operational problem associated with horizontal subsurface flow constructed wetlands (HSSF CWs). The measurement of saturated hydraulic conductivity has proven to be a suitable technique for assessing clogging within HSSF CWs. The vertical and horizontal distribution of hydraulic conductivity was assessed in two full-scale HSSF CWs using two different in situ permeameter methods (falling head (FH) and constant head (CH)). Horizontal hydraulic conductivity profiles showed that the two methods are correlated by a power function (FH = CH^0.7821, r^2 = 0.76) within the recorded range of hydraulic conductivities (0-70 m/day). However, the FH method provided lower values of hydraulic conductivity than the CH method (one to three times lower). Despite the discrepancies between the magnitudes of the readings, the relative distribution of clogging obtained via the two methods was similar. Therefore, both methods are useful for exploring the general distribution of clogging and, especially, for assessing clogged areas originating from preferential flow paths within full-scale HSSF CWs. Discrepancies between the methods (in both magnitude and pattern) arose from the vertical hydraulic conductivity profiles under highly clogged conditions. It is believed that this can be attributed to procedural differences between the methods, such as the method of permeameter insertion (twisting versus hammering). Results from both methods suggest that clogging develops along the shortest distance between water input and output. The results also show that the design and maintenance of inlet distributors and outlet collectors appear to have a great influence on the pattern of clogging, and hence on the asset lifetime of HSSF CWs. © Springer Science+Business Media B.V. 2011.
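The reported power-law correlation between the two permeameter methods can be applied directly to translate a constant-head reading into the equivalent falling-head estimate. A minimal sketch, with the exponent taken from the abstract and the CH values invented for illustration:

```python
def fh_from_ch(ch_m_per_day, exponent=0.7821):
    """Falling-head conductivity estimated from a constant-head reading
    via the fitted power function FH = CH**0.7821 (r^2 = 0.76)."""
    return ch_m_per_day ** exponent

# Within the reported 0-70 m/day range, FH comes out lower than CH,
# consistent with the one-to-three-fold discrepancy noted above.
for ch in (5.0, 20.0, 60.0):
    print(f"CH = {ch:4.0f} m/day -> FH ~ {fh_from_ch(ch):5.1f} m/day")
```

Note this is only the horizontal-profile correlation; the abstract states the methods disagree in both magnitude and pattern for vertical profiles under heavy clogging, where no such conversion is reported.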
Abstract:
The scope of this paper is to present a Pulse Width Modulation (PWM) based method for Active Power (AP) and Reactive Power (RP) measurement as applied in power meters. The aim of the material presented is twofold: first, to present a realization methodology for the proposed algorithm, and second, to verify the algorithm's robustness and validity. The method takes advantage of the fact that the frequencies present in a power line lie in a specific fundamental frequency range (centred on 50 Hz or 60 Hz) and that, when harmonics are present, the dominant frequencies in the power-line spectrum can be specified on the basis of the fundamental. In contrast to a number of existing methods, the presented method requires neither a time delay nor a shifting of the input signal; the π/2 delay of the current signal with respect to the voltage signal required by many existing measurement techniques likewise does not apply to the PWM method.
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires a choice of initial sample number (N0), number of replicates (M), and number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H2, is a useful measure of the accuracy of a Monte Carlo (MC) simulation and can be related directly to N0, M, and n. Asymptotic approximations of H2 are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, is found to follow a power-law relationship, C = a·M·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. The number of bins n must be chosen to balance accuracy and resolution. For fixed n, the product M × N0 determines the accuracy of the MC prediction; if b > 1, the optimal solution strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
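The squared Hellinger distance used as the accuracy measure has a standard closed form for discrete (binned) distributions, H2 = 1 − Σ_i √(p_i q_i). A minimal sketch, with the example histograms invented for illustration (the paper's own H2 compares MC-reconstructed and reference particle size distributions):

```python
import math

def squared_hellinger(p, q):
    """H2 = 1 - sum_i sqrt(p_i * q_i) for two distributions given as
    (possibly unnormalised) histogram counts over the same n bins."""
    sp, sq = sum(p), sum(q)
    return 1.0 - sum(math.sqrt((pi / sp) * (qi / sq)) for pi, qi in zip(p, q))

print(squared_hellinger([10, 20, 30], [10, 20, 30]))  # identical -> ~0.0
print(squared_hellinger([1, 0, 0], [0, 0, 1]))        # disjoint  -> 1.0
```

H2 ranges from 0 (identical histograms) to 1 (no overlap), which is what makes it convenient as a single accuracy number to relate to N0, M, and n.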
Abstract:
Incorporating the Material Balance Principle (MBP) into industrial and agricultural performance measurement systems with pollutant factors has been on the rise in recent years. Many conventional methods of performance measurement have proven incompatible with material flow conditions. This study addresses the issue of eco-efficiency measurement adjusted for pollution, taking into account material flow conditions and the MBP requirements, in order to provide 'real' measures of performance that can serve as guides when making policies. We develop a new approach that integrates a slacks-based measure into the Malmquist-Luenberger Index through a material balance condition reflecting the conservation of matter. This model is compared with a similar model that incorporates the MBP using the trade-off approach to measure productivity and eco-efficiency trends of power plants. Results reveal similar findings for both models, substantiating the robustness and applicability of the model proposed in this paper.
Abstract:
We present a comprehensive study of the power output characteristics of random distributed feedback Raman fiber lasers. The calculated optimal slope efficiency of backward-wave generation in the one-arm configuration is shown to be as high as ∼90% for a 1 W threshold. Nevertheless, in real applications the presence of even a small reflection at the fiber ends can appreciably deteriorate the power performance. The developed numerical model describes the experimental data well. © 2012 Optical Society of America.
Abstract:
More than 165 induction times of butyl paraben-ethanol solutions in a batch moving-fluid oscillatory baffled crystallizer with various amplitudes (1-9 mm) and frequencies (1.0-9.0 Hz) have been determined to study the effect of the operating conditions on nucleation. The induction time decreases with increasing amplitude and frequency at power densities below about 500 W/m³; however, a further increase of the frequency and amplitude leads to an increase of the induction time. The interfacial energies and pre-exponential factors for both homogeneous and heterogeneous nucleation are determined by classical nucleation theory at an oscillatory frequency of 2.0 Hz and amplitudes of 3 or 5 mm, both with and without net flow. To capture the shear rate conditions in oscillatory flow crystallizers, a large eddy simulation approach within a computational fluid dynamics framework is applied. Under ideal conditions the shear rate distribution shows spatial and temporal periodicity and radial symmetry. The spatial distributions of the shear rate indicate an increase of the average and maximum shear rate values with increasing amplitude and frequency. In continuous operation, net flow enhances the shear rate at most time points, promoting nucleation. The mechanism of the shear rate influence on nucleation is discussed.
Abstract:
For micro gas turbines (MGTs) of around 1 kW or less, a commercially suitable recuperator must be used to produce a thermal efficiency suitable for UK Domestic Combined Heat and Power (DCHP). This paper uses computational fluid dynamics (CFD) to investigate a recuperator design based on a helically coiled pipe-in-pipe heat exchanger which utilises industry-standard stock materials and manufacturing techniques. A suitable mesh strategy was established by geometrically modelling separate boundary-layer volumes to satisfy y+ near-wall conditions. A higher mesh density was then used to resolve the core flow. A coiled pipe-in-pipe recuperator solution for a 1 kW MGT DCHP unit was established within a volume envelope suitable for a domestic wall-hung boiler. Using a low MGT pressure ratio (necessitated by using a turbocharger oil-cooled journal bearing platform) meant the unit size was larger than anticipated. Raising the MGT pressure ratio from 2.15 to 2.5 could significantly reduce the recuperator volume. Dimensional reasoning confirmed the existence of optimum pipe diameter combinations for minimum pressure drop. Maximum heat exchanger effectiveness was achieved using an optimum (minimum pressure drop) pipe combination with a large pipe length, as opposed to a large pressure drop pipe combination with a shorter pipe length. © 2011 Elsevier Ltd. All rights reserved.
Abstract:
In this study, the authors investigate the outage-optimal relay strategy under outdated channel state information (CSI) in a decode-and-forward cooperative communication system. They first confirm mathematically that minimising the outage probability under outdated CSI is equivalent to minimising the outage probability conditioned on the outdated CSI of all the decodable relays' links. They then propose a multiple-relay strategy with optimised transmitting power allocation (MRS-OTPA) that minimises this conditional outage probability. It is shown that this MRS is a generalised relay approach for achieving outage optimality under outdated CSI. To reduce the complexity, they also propose an MRS with equal transmitting power allocation (MRS-ETPA) that achieves near-optimal outage performance. It is proved that full spatial diversity, which is achievable under ideal CSI, can still be achieved under outdated CSI through MRS-OTPA and MRS-ETPA. Finally, the outage performance and diversity order of MRS-OTPA and MRS-ETPA are evaluated by simulation.
Abstract:
Insulated-gate bipolar transistor (IGBT) power modules find widespread use in numerous power conversion applications where their reliability is of significant concern. Standard IGBT modules are fabricated for general-purpose applications, while few are designed for bespoke applications. However, the conventional design of IGBTs can be improved by multiobjective optimization techniques. This paper proposes a novel design method that considers die-attach solder failures induced by short power cycling and baseplate solder fatigue induced by thermal cycling, which are among the major failure mechanisms of IGBTs. The thermal resistance is calculated analytically, and the plastic work is obtained with a high-fidelity finite-element model that has been validated experimentally. The objective of minimizing the plastic work and the constraint functions are formulated by a surrogate model. The nondominated sorting genetic algorithm-II is used to search for the Pareto-optimal solutions and the best design. The result is an effective approach to optimizing the physical structure of power electronic modules, taking account of historical environmental and operational conditions in the field.
Abstract:
The classical economic order quantity (EOQ) model has two types of costs: ordering and inventory holding costs. In this paper we investigate the effect of purchasing activity on the cash flow of a firm. In the analysis we use a cash-flow identity similar to that used in inventory modeling. In our approach we analyze the purchasing and ordering process with discounted costs. The cost function of the model consists of linear cash-holding costs, a linear opportunity cost of spending cash, and linear interest costs. We present the optimal solution of the proposed model and illustrate it with a numerical example.
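For context, the classical undiscounted model that this paper generalises has the familiar square-root solution. A minimal sketch with made-up cost data; the discounted cash-flow variant studied in the paper does not reduce to this closed form:

```python
import math

def eoq(demand_rate, ordering_cost, holding_cost):
    """Classical economic order quantity: Q* = sqrt(2*K*D / h),
    where D is annual demand, K the fixed cost per order, and
    h the holding cost per unit per year."""
    return math.sqrt(2.0 * ordering_cost * demand_rate / holding_cost)

# Illustrative data: D = 1200 units/year, K = 50 per order, h = 3 per unit-year
print(eoq(1200, 50, 3))  # -> 200.0
```

The paper's contribution is to replace the holding-cost term with cash-holding, opportunity, and interest costs under discounting, which changes the optimal order quantity away from this baseline.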
Abstract:
This dissertation develops a new figure of merit to measure the similarity (or dissimilarity) of Gaussian distributions through a novel concept that relates the Fisher distance to the percentage of data overlap. The derivations are expanded to provide a generalized mathematical platform for determining an optimal separating boundary of Gaussian distributions in multiple dimensions. Real-world data used for implementation and in carrying out feasibility studies were provided by Beckman-Coulter. Although the data used are flow cytometric in nature, the mathematics are general in their derivation and apply to other types of data as long as their statistical behavior approximates Gaussian distributions. Because this new figure of merit is heavily based on the statistical nature of the data, a new filtering technique is introduced to accommodate the accumulation process involved with histogram data. When data are accumulated into a frequency histogram, they are inherently smoothed in a linear fashion, since an averaging effect takes place as the histogram is generated. This new filtering scheme addresses data accumulated in the uneven resolution of the channels of the frequency histogram. The qualitative interpretation of flow cytometric data is currently a time-consuming and imprecise method for evaluating histogram data. The method presented here offers a broader spectrum of capabilities in the analysis of histograms, since the figure of merit derived in this dissertation integrates within its mathematics both a measure of similarity and the percentage of overlap between the distributions under analysis.
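The notion of percentage overlap between two Gaussians can be made concrete in one dimension as the integral of the pointwise minimum of the two densities. A minimal numerical sketch; the midpoint-rule integration and the example parameters are illustrative, not the dissertation's multi-dimensional formulation or its Fisher-distance relation:

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a normal distribution with mean mu and std dev sigma."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def overlap_1d(mu1, s1, mu2, s2, n=20000):
    """Fraction of probability mass shared by two 1-D Gaussians:
    the integral of min(f1(x), f2(x)) over x, by the midpoint rule."""
    lo = min(mu1 - 6 * s1, mu2 - 6 * s2)
    hi = max(mu1 + 6 * s1, mu2 + 6 * s2)
    dx = (hi - lo) / n
    return dx * sum(
        min(gauss_pdf(lo + (i + 0.5) * dx, mu1, s1),
            gauss_pdf(lo + (i + 0.5) * dx, mu2, s2))
        for i in range(n)
    )

# Equal-variance case separated by d: overlap = 2 * Phi(-d/2)
print(round(overlap_1d(0.0, 1.0, 4.0, 1.0), 4))  # ~0.0455
```

For the equal-variance case, the crossing point of the two densities (the optimal separating boundary) is simply the midpoint of the means; in general and in higher dimensions the boundary is quadratic, which is where the dissertation's generalized platform comes in.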
Abstract:
The introduction of phase change material fluids and nanofluids in micro-channel heat sink design can significantly increase the cooling capacity of the heat sink because of the unique features of these two kinds of fluids. To better assist the design of a high-performance micro-channel heat sink using phase change fluids and nanofluids, the heat transfer enhancement mechanism behind the flow of such fluids must be completely understood. A detailed parametric study is conducted to further investigate the heat transfer enhancement of the phase change material particle suspension flow, using the two-phase non-thermal-equilibrium model developed by Hao and Tao (2004). The parametric study is conducted under normal conditions with Reynolds numbers of Re = 90-600 and phase change material particle concentrations of ϵp ≤ 0.25, as well as extreme conditions of very low Reynolds numbers (Re < 50) and high phase change material particle concentrations (ϵp = 50%-70%) in slurry flow. Using two newly defined parameters, the effectiveness factor ϵeff and the performance index PI, it is found that there exists an optimal relation between the channel design parameters L and D, the particle volume fraction ϵp, the Reynolds number Re, and the wall heat flux qw. The influences of the particle volume fraction ϵp, the particle size dp, and the particle viscosity μp on the phase change material suspension flow are investigated and discussed. The model was validated against available experimental data. The conclusions will assist designers in making decisions that relate to the design or selection of a micro-pump suitable for micro- or mini-scale heat transfer devices. To understand the heat transfer enhancement mechanism of nanofluid flow at the particle level, the lattice Boltzmann method is used because of its mesoscopic features and its many numerical advantages. By using a two-component lattice Boltzmann model, the heat transfer enhancement of the nanofluid is analyzed, incorporating the different forces acting on the nanoparticles into the model. It is found that the nanofluid has better heat transfer enhancement at low Reynolds numbers, and that the Brownian motion effect of the nanoparticles is weakened by an increase of flow speed.