896 results for Benchmark Criteria
Abstract:
Hospitals are critical elements of health care systems, and analysing their capacity to do work is an important topic. To perform a system-wide analysis of public hospital resources and capacity, a multi-objective optimization (MOO) approach has been proposed. This approach identifies the theoretical capacity of the entire hospital and facilitates sensitivity analyses, for example of the patient case mix. Such an analysis is necessary because competition for hospital resources, for example between different entities, strongly influences what work can be done. The MOO approach has been extensively tested on a real-life case study and shown to be of significant worth. Within this MOO approach, the epsilon-constraint method is utilized; however, for solving real-life applications with a large number of competing objectives, it was necessary to devise new and improved algorithms. In addition, a separable programming approach was developed to identify the best solution. Multiple optimal solutions are also obtained via iterative refinement and re-solution of the model.
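A minimal sketch of the epsilon-constraint idea mentioned above, applied to a hypothetical two-objective linear program; the objectives, constraints, and numbers are illustrative stand-ins, not the hospital capacity model from the paper:

```python
# Hypothetical epsilon-constraint sweep for a two-objective LP
# (illustrative data only; not the hospital capacity model itself).
import numpy as np
from scipy.optimize import linprog

# Objectives to maximize: f1 = x1 + 2*x2 (e.g. "patients treated"),
#                         f2 = 3*x1 + x2 (e.g. "theatre utilisation").
c1 = np.array([1.0, 2.0])
c2 = np.array([3.0, 1.0])

# Shared resource constraints: x1 + x2 <= 10, 2*x1 + x2 <= 15, x >= 0.
A_ub = np.array([[1.0, 1.0], [2.0, 1.0]])
b_ub = np.array([10.0, 15.0])

pareto = []
for eps in np.linspace(0.0, 25.0, 11):
    # Maximize f1 subject to f2 >= eps  (written as -f2 <= -eps).
    res = linprog(-c1,
                  A_ub=np.vstack([A_ub, -c2]),
                  b_ub=np.append(b_ub, -eps),
                  bounds=[(0, None), (0, None)])
    if res.success:
        x = res.x
        pareto.append((eps, c1 @ x, c2 @ x))

for eps, f1, f2 in pareto:
    print(f"eps={eps:5.1f}  f1={f1:6.2f}  f2={f2:6.2f}")
```

Sweeping the bound eps on the second objective and re-optimizing the first traces out candidate Pareto-optimal solutions; with many competing objectives one bound per secondary objective is needed, which is why improved algorithms become necessary.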
Abstract:
This article analyzes the effect, on the design of composite structures, of devising a new failure envelope by combining the most commonly used failure criteria for composite laminates. The failure criteria considered for the study are the maximum stress and Tsai-Wu criteria. In addition to these popular phenomenological failure criteria, a micromechanics-based criterion, the failure mechanism-based failure criterion, is also considered. The failure envelopes obtained from these criteria are superimposed on one another, and a new envelope is constructed from the lowest absolute values of the strengths they predict; the result is therefore termed the most conservative failure envelope. A minimum-weight design of composite laminates is performed using genetic algorithms, and the effect of stacking sequence on the minimum laminate weight is also studied. Results for the different failure envelopes are compared, and the conservative design is evaluated against designs obtained using a single failure criterion. The design approach is recommended for structures in which composites are the key load-carrying members, such as helicopter rotor blades.
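The "most conservative envelope" construction can be illustrated with a small sketch that evaluates each criterion at a given ply stress state and keeps whichever predicts failure first; the strength values and the simplified plane-stress forms of the criteria below are illustrative assumptions, not the paper's data:

```python
# A minimal sketch of combining failure criteria into a "most conservative"
# envelope by taking, at each stress state, the criterion that predicts
# failure first. Strengths are illustrative values for a generic ply.
import math

# Ply strengths (MPa): Xt/Xc longitudinal, Yt/Yc transverse, S in-plane shear.
Xt, Xc, Yt, Yc, S = 1500.0, 1200.0, 50.0, 200.0, 70.0

def max_stress_index(s1, s2, t12):
    """Max-stress failure index (>= 1 means failure)."""
    i1 = s1 / Xt if s1 >= 0 else -s1 / Xc
    i2 = s2 / Yt if s2 >= 0 else -s2 / Yc
    return max(i1, i2, abs(t12) / S)

def tsai_wu_index(s1, s2, t12):
    """Tsai-Wu failure index for plane stress (>= 1 means failure)."""
    F1, F2 = 1/Xt - 1/Xc, 1/Yt - 1/Yc
    F11, F22, F66 = 1/(Xt*Xc), 1/(Yt*Yc), 1/S**2
    F12 = -0.5 * math.sqrt(F11 * F22)          # common empirical choice
    return (F1*s1 + F2*s2 + F11*s1**2 + F22*s2**2
            + F66*t12**2 + 2*F12*s1*s2)

def conservative_index(s1, s2, t12):
    """Most conservative envelope: whichever criterion fails first."""
    return max(max_stress_index(s1, s2, t12), tsai_wu_index(s1, s2, t12))

# Example ply-axis stress state (MPa):
print(conservative_index(800.0, -30.0, 20.0))
```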
Abstract:
We discuss constrained and semi-constrained versions of the next-to-minimal supersymmetric extension of the Standard Model (NMSSM) in which a singlet Higgs superfield is added to the two doublet superfields that are present in the minimal extension (MSSM). This leads to a richer Higgs and neutralino spectrum and allows for many interesting phenomena that are not present in the MSSM. In particular, light Higgs particles are still allowed by current constraints and could appear as decay products of the heavier Higgs states, rendering their search rather difficult at the LHC. We propose benchmark scenarios which address the new phenomenological features, consistent with present constraints from colliders and with the dark matter relic density, and with (semi-)universal soft terms at the GUT scale. We present the corresponding spectra for the Higgs particles, their couplings to gauge bosons and fermions and their most important decay branching ratios. A brief survey of the search strategies for these states at the LHC is given.
Abstract:
Oxygen transfer rate and the corresponding power requirement to operate the rotor are vital for the design and scale-up of surface aerators. The present study develops a simulation or scale-up criterion correlating the oxygen transfer coefficient and the power number along with a parameter governing theoretical power per unit volume (X, defined as F^(4/3)R^(1/3), where F and R are the impeller's Froude and Reynolds numbers, respectively). Based on such scale-up criteria, design considerations are developed to save energy while designing square-tank surface aerators. It is demonstrated that energy can be saved substantially if the aeration tanks are run at relatively higher input powers. It is also demonstrated that smaller tanks are more energy-conservative and economical than large tanks when aerating the same volume of water while maintaining a constant input power in all the tanks irrespective of their size. An example illustrating how energy can be reduced while designing different-sized aerators is given. The results presented have wide application in biotechnology and bioengineering, with particular emphasis on the design of appropriate surface aeration systems.
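For illustration, a short sketch of the scale-up parameter under the usual rotor-based definitions F = N^2 D / g and R = N D^2 / nu; the exponents follow the dimensional reading X = F^(4/3) R^(1/3) of the garbled expression in the indexed abstract, and the numerical values are arbitrary:

```python
# Sketch of the non-dimensional scale-up parameter X, assuming
# F = N^2 D / g (Froude) and R = N D^2 / nu (Reynolds) for the rotor,
# and reading the exponents as X = F^(4/3) * R^(1/3).
g = 9.81          # m/s^2
nu = 1.0e-6       # kinematic viscosity of water, m^2/s

def scale_up_parameter(N, D):
    """X for rotational speed N (rev/s) and rotor diameter D (m)."""
    F = N**2 * D / g          # Froude number
    R = N * D**2 / nu         # Reynolds number
    return F**(4/3) * R**(1/3)

print(scale_up_parameter(N=2.0, D=0.3))   # illustrative values only
```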
Abstract:
In this paper, a new strategy for scaling burners based on "mild combustion" is evolved and applied to scaling a burner from 3 kW to 150 kW at a high heat release rate of 5 MW/m^3. Existing scaling methods (constant velocity, constant residence time, and Cole's procedure [Proc. Combust. Inst. 28 (2000) 1297]) are found to be inadequate for mild combustion burners: the constant-velocity approach leads to reduced heat release rates at large sizes, and the constant-residence-time approach results in unacceptable levels of pressure drop across the system. To achieve mild combustion at high heat release rates at all scales, a modified approach with high recirculation is adopted in the present studies. Major geometrical dimensions are scaled as D ~ Q^(1/3) with an air injection velocity of ~100 m/s (Δp ~ 600 mm water gauge). Using CFD support, the position of the air injection holes is selected to enhance the recirculation rates. The precise role of the secondary air is to increase the recirculation rates and burn up the residual CO downstream. Measurements of temperature and oxidizer concentrations inside the 3 kW and 150 kW burners and a jet flame are used to distinguish the combustion process in these burners. The burner can be used for a wide range of fuels, with LPG and producer gas as extremes. Up to 8 dB of noise-level reduction is observed in comparison with the conventional combustion mode. Exhaust NO emissions below 26 and 3 ppm and temperatures of 1710 and 1520 K were measured for LPG and producer gas, respectively, when the burner is operated at stoichiometry. (c) 2004 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
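A tiny sketch of the D ~ Q^(1/3) scaling rule described above (constant volumetric heat release); the reference dimension assumed for the 3 kW burner is a placeholder, not a value from the paper:

```python
# Sketch of the D ~ Q^(1/3) geometric scaling used to go from the 3 kW
# burner to 150 kW at (roughly) constant volumetric heat release.
def scaled_dimension(D_ref_m, Q_ref_kW, Q_new_kW):
    """Scale a characteristic burner dimension so that Q/D^3 stays constant."""
    return D_ref_m * (Q_new_kW / Q_ref_kW) ** (1.0 / 3.0)

D_ref = 0.10                                 # assumed 3 kW reference dimension, m
print(scaled_dimension(D_ref, 3.0, 150.0))   # ~0.37 m for the 150 kW burner
```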
Abstract:
Different seismic hazard components pertaining to Bangalore city, namely soil overburden thickness, effective shear-wave velocity, factor of safety against liquefaction potential, peak ground acceleration at the seismic bedrock, site response in terms of amplification factor, and predominant frequency, have been individually evaluated. The overburden thickness distribution, predominantly in the range of 5-10 m across the city, has been estimated through a sub-surface model built from geotechnical bore-log data. The effective shear-wave velocity distribution, established through Multi-channel Analysis of Surface Waves (MASW) surveys and subsequent dispersion analysis, exhibits site class D (180-360 m/s), site class C (360-760 m/s), and site class B (760-1500 m/s) in compliance with the National Earthquake Hazards Reduction Program (NEHRP) nomenclature. The peak ground acceleration has been estimated through a deterministic approach, based on a maximum credible earthquake of M_w = 5.1 assumed to nucleate from the closest active seismic source (the Mandya-Channapatna-Bangalore Lineament). The 1-D site response factor, computed at each borehole through geotechnical analysis across the study region, ranges from an amplification of about one to as high as four. Correspondingly, the predominant frequency estimated from the Fourier spectrum lies mostly in the range of 3.5-5.0 Hz. The soil liquefaction hazard has been assessed in terms of the factor of safety against liquefaction potential using standard penetration test data and the underlying soil properties, which indicate that 90% of the study region is non-liquefiable. The spatial distributions of the different hazard entities are placed on a GIS platform and subsequently integrated through the analytic hierarchy process. The resulting deterministic hazard map shows high hazard coverage in the western areas. The microzonation thus achieved is envisaged as a first-cut assessment of the site-specific hazard, laying out a framework for higher-order seismic microzonation as well as a useful decision-support tool in overall land-use planning and hazard management. (C) 2010 Elsevier Ltd. All rights reserved.
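A minimal helper reflecting the NEHRP shear-wave-velocity ranges quoted in the abstract; the boundaries for classes A and E are taken from the standard NEHRP definitions rather than from the paper:

```python
# NEHRP site class from average shear-wave velocity (m/s); ranges for
# classes B, C and D follow the values given in the abstract.
def nehrp_site_class(vs30_m_per_s):
    if vs30_m_per_s >= 1500:
        return "A"
    if vs30_m_per_s >= 760:
        return "B"
    if vs30_m_per_s >= 360:
        return "C"
    if vs30_m_per_s >= 180:
        return "D"
    return "E"

for v in (250, 500, 900):
    print(v, "m/s ->", nehrp_site_class(v))
```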
Abstract:
Criteria for the L2-stability of linear and nonlinear time-varying feedback systems are given. These are conditions in the time domain involving the solution of certain associated matrix Riccati equations and permitting the use of a very general class of L2-operators as multipliers.
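As a loose illustration only, and far simpler than the time-varying criteria of the paper, the following sketch integrates a matrix Riccati differential equation for a hypothetical time-invariant plant and checks that the solution remains bounded and positive semidefinite; all matrices are made-up examples:

```python
# Toy illustration: integrate a matrix Riccati differential equation
# (reversed-time form) from P = 0 for an assumed stable 2x2 plant and
# verify the solution stays bounded and positive semidefinite.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # illustrative system matrix
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

def riccati_rhs(tau, p_flat):
    P = p_flat.reshape(2, 2)
    dP = A.T @ P + P @ A - P @ B @ B.T @ P + Q   # reversed-time Riccati ODE
    return dP.ravel()

sol = solve_ivp(riccati_rhs, (0.0, 10.0), np.zeros(4), max_step=0.01)
P_final = sol.y[:, -1].reshape(2, 2)
eigvals = np.linalg.eigvalsh(0.5 * (P_final + P_final.T))
print("P eigenvalues:", eigvals)
print("bounded positive semidefinite:", bool(eigvals.min() >= -1e-9))
```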
Abstract:
The aim of the study was to explore the importance of evaluating leadership criteria in Finland at the leader/subordinate levels of the insurance industry. The overall purpose of the thesis is tackled and analyzed from two different perspectives:
- by examining the importance of the leadership criteria and style of Finnish insurance business leaders and their subordinates;
- by examining the opinions of insurance business leaders regarding leadership criteria in two culturally different countries: the US and Finland.
This thesis consists of three published articles that scrutinise the focal phenomena both theoretically and empirically. The main results of the study do not lend support to the existence of a universal model of leadership criteria in the insurance business; rather, any such model seems to rest on the specific organizational and cultural circumstances of the country in question. The leadership criteria appear quite stable irrespective of the comparatively short research period (3–5 years) and the hierarchical level (subordinate/leader). Leaders have major difficulties in changing their leadership style; in fact, in order to bring about an efficient organizational change in a company, the leader has to be changed. The cultural dimensions (cooperation and monitoring) identified by Finnish subordinates were mostly in line with those of their managers, while placing greater emphasis on the monitoring of employees, which from their point of view could be seen as another element of managers' optimizing/efficiency requirements. In the Finnish surveys, a strong emphasis on cooperation and mutual trust is apparent among both subordinates and managers; the basic problem remains how to emphasize and balance these in real life so that both parties are happy to work together on a common basis. The American surveys suggest, hypothetically, that in a soft market period (buyer's market) managers employ a more relationship-oriented leadership style and correspondingly adapt to a more task-oriented approach in a hard market phase (seller's market). To improve business performance, Finnish insurance managers could probably concentrate more on task-oriented items such as reviewing, budgeting, monitoring and goal-orientation. The study also suggests that the social safety net of the European welfare state ideology has so far shielded the culture-specific sense of social responsibility of Finnish managers from the hazards of free competition and globalization.
Abstract:
In this two-part series of papers, a generalized non-orthogonal amplify-and-forward (GNAF) protocol, which generalizes several known cooperative diversity protocols, is proposed. Transmission in the GNAF protocol comprises two phases: the broadcast phase and the cooperation phase. In the broadcast phase, the source broadcasts its information to the relays as well as to the destination. In the cooperation phase, the source and the relays together transmit a space-time code in a distributed fashion. The GNAF protocol relaxes the constraints imposed on the code structure by the protocol of Jing and Hassibi. In Part I of this paper, a code design criterion is obtained and it is shown that the GNAF protocol is both delay efficient and coding-gain efficient. Moreover, the GNAF protocol enables the use of sphere decoders at the destination with non-exponential maximum-likelihood (ML) decoding complexity. In Part II, several low-decoding-complexity code constructions are studied and a lower bound on the diversity-multiplexing gain tradeoff of the GNAF protocol is obtained.
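A toy numerical sketch of the two-phase structure (broadcast, then cooperation) for a single amplify-and-forward relay with BPSK; this only illustrates the protocol skeleton, not the distributed space-time code design or the exact GNAF signalling of the paper:

```python
# Toy two-phase amplify-and-forward illustration with one relay and BPSK.
import numpy as np

rng = np.random.default_rng(0)
cg = lambda *shape: (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

h_sd, h_sr, h_rd = cg(3)      # source->dest, source->relay, relay->dest channels
noise_std = 0.1
s = np.array([1.0, -1.0])     # two BPSK information symbols

# Broadcast phase: source transmits; relay and destination both listen.
y_r  = h_sr * s + noise_std * cg(2)
y_d1 = h_sd * s + noise_std * cg(2)

# Cooperation phase: relay amplifies and forwards its observation while the
# source repeats its symbols (a trivial "distributed code" for illustration).
g = 1.0 / np.sqrt(np.abs(h_sr) ** 2 + noise_std ** 2)     # relay gain
y_d2 = h_sd * s + h_rd * (g * y_r) + noise_std * cg(2)

# Exhaustive detection over the 4 BPSK pairs, using both phases
# (the colouring of the forwarded relay noise is ignored for simplicity).
best, best_metric = None, np.inf
for s1 in (+1.0, -1.0):
    for s2 in (+1.0, -1.0):
        cand = np.array([s1, s2])
        m = (np.linalg.norm(y_d1 - h_sd * cand) ** 2
             + np.linalg.norm(y_d2 - h_sd * cand - h_rd * g * (h_sr * cand)) ** 2)
        if m < best_metric:
            best, best_metric = cand, m
print("sent", s, "detected", best)
```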
Abstract:
An algorithm for optimal allocation of reactive power in an AC/DC system using FACTS devices, with the objective of improving the voltage profile and the voltage stability of the system, is presented. The technique attempts to fully utilize the reactive power sources in the system to improve voltage stability and the voltage profile, as well as to meet the reactive power requirements at the AC-DC terminals so as to facilitate smooth operation of the DC links. The method involves successive solution of steady-state power flows and optimization of the reactive power control variables, together with a Unified Power Flow Controller (UPFC), using a linear programming technique. The proposed method has been tested on a real-life equivalent 96-bus AC system and a two-terminal DC system under normal and contingency conditions.
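A schematic single step of the successive linear-programming idea: adjust reactive-power control settings to reduce voltage deviations through an assumed linearised sensitivity matrix; all matrices, targets and limits below are illustrative, not data from the 96-bus study:

```python
# One schematic LP step of successive linear programming for reactive-power
# control: choose control adjustments du that bring bus voltage corrections
# S @ du close to a desired correction vector, within control limits.
import numpy as np
from scipy.optimize import linprog

S = np.array([[0.04, 0.01],                  # assumed dV/d(control) sensitivities
              [0.02, 0.05],
              [0.01, 0.03]])
dV_target = np.array([0.03, -0.02, 0.04])    # desired voltage corrections (p.u.)

# Minimise sum of slacks e with  -e <= S @ du - dV_target <= e,
# and control moves limited to +/- 0.5 of their range.
n_b, n_c = S.shape
c = np.concatenate([np.zeros(n_c), np.ones(n_b)])
A_ub = np.block([[S, -np.eye(n_b)], [-S, -np.eye(n_b)]])
b_ub = np.concatenate([dV_target, -dV_target])
bounds = [(-0.5, 0.5)] * n_c + [(0, None)] * n_b

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
du, slack = res.x[:n_c], res.x[n_c:]
print("control adjustments:", du)
print("residual |dV| bounds:", slack)
```

In the full scheme, each such LP step would be followed by a fresh AC/DC power-flow solution and re-linearisation until the voltage profile converges.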