941 results for power-function modelling


Relevance:

30.00%

Publisher:

Abstract:

The aim of this paper is to suggest a simple methodology that renewable power generators can use to bid in Spanish markets in order to minimize the cost of their imbalances. As is known, the optimal bid depends on the probability distribution function of the energy to be produced, on the probability distribution function of the future system imbalance, and on its expected cost. We assume simple methods for estimating each of these quantities and, using actual data from 2014, we test the potential economic benefit for a wind generator of using our optimal bid instead of simply the expected power generation. We find evidence that Spanish wind generators' savings would range from 7% to 26%.
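Under linear, asymmetric per-MWh imbalance penalties, the bid that minimises expected imbalance cost is a quantile of the production distribution (a newsvendor-type result). A minimal sketch follows; the penalty values and the gamma-shaped forecast ensemble are illustrative assumptions, not figures from the paper:

```python
import numpy as np

def optimal_bid(production_samples, penalty_short, penalty_long):
    """Optimal energy bid under asymmetric linear imbalance penalties.

    With a per-MWh cost `penalty_short` for under-delivery and
    `penalty_long` for over-delivery, the expected imbalance cost is
    minimised by bidding the penalty_short / (penalty_short + penalty_long)
    quantile of the production distribution.
    """
    q = penalty_short / (penalty_short + penalty_long)
    return np.quantile(production_samples, q)

# Hypothetical forecast ensemble for a wind farm (MWh); numbers are invented.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=4.0, scale=10.0, size=10_000)

bid = optimal_bid(samples, penalty_short=30.0, penalty_long=10.0)
expected = samples.mean()
print(f"expected production: {expected:.1f} MWh, optimal bid: {bid:.1f} MWh")
```

When under-delivery is penalised more heavily than over-delivery, the optimal bid sits above the expected production; with symmetric penalties it collapses to the median.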


The purpose of this research is to explore the use of modelling in the field of Purchasing and Supply Management (P/SM). We are particularly interested in identifying the specific areas of P/SM where there are opportunities for the use of modelling-based methods. The paper starts with an overview of the main types of modelling and provides a categorisation of the main P/SM research themes. Our research shows that there are many opportunities for using descriptive, predictive and prescriptive modelling approaches in all areas of P/SM research, from those focused on the actual function from a purely operational and execution perspective (e.g. purchasing processes and behaviour) to those focused on the organisational level from a more strategic perspective (e.g. strategy and policy). We conclude that future P/SM research needs to explore the value of modelling not just at the functional or operational level, but also at the organisational and strategic levels. We also acknowledge that while using empirical results to inform and improve models has advantages, there are also drawbacks, relating to the value, practical relevance and generalisability of modelling-based approaches.


This paper deals with monolithic decoupled XYZ compliant parallel mechanisms (CPMs) for multi-function applications, which can be fabricated monolithically without assembly and have the capability of kinetostatic decoupling. First, the conceptual design of monolithic decoupled XYZ CPMs is presented using identical spatial compliant multi-beam modules based on a decoupled 3-PPPR parallel kinematic mechanism. Three types of application are described in principle: motion/positioning stages, force/acceleration sensors and energy-harvesting devices. Kinetostatic and dynamic modelling is then conducted to capture the displacement of any stage under loads acting at any stage, and the natural frequency, with comparisons against FEA results. Finally, the performance characteristics for motion-stage applications are investigated in detail to show how changes in the geometrical parameters affect them, which provides initial optimal estimations. Results show that smaller beam thicknesses and larger cubic-stage dimensions improve the performance characteristics, excluding natural frequency, under allowable conditions. To improve the natural frequency, a stiffness-enhanced monolithic decoupled configuration can be adopted, achieved by employing more beams in the spatial modules or by reducing the mass of each cubic stage. In addition, an isotropic variation with a different motion range along each axis and the same payload in each leg is proposed. A redundant design for monolithic fabrication is also introduced, which overcomes the drawback of monolithic fabrication that a failed compliant beam is difficult to replace, and so extends the CPM's life.
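A rough sense of the quoted trends (thinner beams lower stiffness, which eases motion but lowers natural frequency; more beams or a lighter stage raises it) can be had from textbook fixed-guided beam formulas. All dimensions and material properties below are assumptions for the sketch, not the paper's values:

```python
import math

# Illustrative one-axis kinetostatic estimate for a compliant stage.
E = 69e9          # Young's modulus of aluminium, Pa (assumed material)
L = 0.030         # beam length, m
w = 0.010         # beam width, m
t = 0.0008        # beam thickness, m
n_beams = 4       # parallel beams supporting the stage
m_stage = 0.050   # moving stage mass, kg

I = w * t**3 / 12.0                 # second moment of area
k_beam = 12.0 * E * I / L**3        # fixed-guided beam bending stiffness
k_total = n_beams * k_beam          # beams act in parallel
f_n = math.sqrt(k_total / m_stage) / (2.0 * math.pi)

print(f"axis stiffness: {k_total:.0f} N/m, natural frequency: {f_n:.1f} Hz")

# k scales with t^3, so thinner beams are much softer; f_n scales with
# sqrt(n_beams / m_stage), matching the stiffness-enhancement strategies
# (more beams, lighter stages) described in the paper.
```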


Background: As the global population ages, studying cognitive impairments, including dementia, one of the leading causes of disability in old age worldwide, is of fundamental importance to public health. Since retirement is a major transition in older age, a focus on the complex impacts of the duration, timing, and voluntariness of retirement on health is important for future policy changes. Longer retirement periods, as well as leaving the workforce early, have been associated with poorer health, including reduced cognitive functioning. These associations are hypothesized to differ based on gender, on pre-retirement educational and occupational experiences, and on post-retirement social factors and health conditions. Methods: A cross-sectional study is conducted to determine the relationship between the duration and timing of retirement and cognitive function, using data from the five sites of the International Mobility in Aging Study (IMIAS). Cognitive function is assessed using Leganes Cognitive Test (LCT) scores in 2012. Data are analyzed using multiple linear regressions. Analyses are also done separately by site/region (Canada, Latin America, and Albania). Robustness checks are done with an analysis of cognitive change from 2012 to 2014 and of the effect of voluntariness of retirement on cognitive function. An instrumental variable (IV) approach is also applied to the cross-sectional and longitudinal analyses as a robustness check to address the potential endogeneity of the retirement variable. Results: Descriptive statistics highlight differences between men and women, as well as between sites. In the linear regression analysis, there was no relationship between the timing or duration of retirement and cognitive function in 2012 when adjusting for site/region. There was no association between retirement characteristics and cognitive function in site/region-stratified analyses.
In the IV analysis, longer retirement and on-time or late retirement were associated with lower cognitive function among men, while there was no relationship between retirement characteristics and cognitive function among women. Conclusions: While the results of the thesis suggest a negative effect of retirement on cognitive function, especially among men, the relationship remains uncertain. A lack of power prevents drawing conclusions for the site/region-specific and site-adjusted analyses in both the linear and IV regressions.
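The IV strategy can be illustrated with a hand-rolled two-stage least squares on simulated data. The instrument, effect sizes and sample below are invented for the sketch and do not come from IMIAS; the point is only that when an unobserved confounder (here, health) drives both retirement duration and cognition, OLS is biased while 2SLS recovers the causal slope:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

# Purely illustrative data: unobserved health `u` affects both retirement
# duration and cognition, so an OLS regression of one on the other is biased.
z = rng.binomial(1, 0.5, n).astype(float)   # instrument, e.g. pension eligibility
u = rng.normal(size=n)                      # unobserved confounder
duration = 5.0 + 3.0 * z + 1.5 * u + rng.normal(size=n)
cognition = 30.0 - 0.5 * duration - 2.0 * u + rng.normal(size=n)

def two_sls(y, x_endog, z_instr):
    """Two-stage least squares with one endogenous regressor."""
    Z = np.column_stack([np.ones_like(z_instr), z_instr])
    # First stage: project the endogenous regressor onto the instrument.
    x_hat = Z @ np.linalg.lstsq(Z, x_endog, rcond=None)[0]
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    # Second stage: regress the outcome on the fitted values.
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

beta_ols = np.linalg.lstsq(np.column_stack([np.ones(n), duration]),
                           cognition, rcond=None)[0]
beta_iv = two_sls(cognition, duration, z)
print(f"OLS slope: {beta_ols[1]:.2f}, IV slope: {beta_iv[1]:.2f} (true -0.50)")
```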


Due to the variability and stochastic nature of wind power systems, accurate wind power forecasting plays an important role in developing reliable and economic power system operation and control strategies. As wind variability is stochastic, Gaussian Process regression has recently been introduced to capture the randomness of wind energy. However, the disadvantages of Gaussian Process regression include its computational complexity and its inability to adapt to time-varying time-series systems. A variant Gaussian Process for time-series forecasting is introduced in this study to address these issues. This new method is shown to reduce computational complexity and increase prediction accuracy. It is further proved that the forecasting result converges as the number of available data points approaches infinity. Furthermore, a teaching-learning-based optimization (TLBO) method is used to train the model and to accelerate the learning rate. The proposed modelling and optimization method is applied to forecast both the wind power generation of Ireland and that of a single wind farm, to show the effectiveness of the proposed method.
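A baseline Gaussian Process regression (not the paper's variant) can be written in a few lines of NumPy; the O(n³) Cholesky solve below is precisely the computational cost that motivates faster variants. The kernel, hyperparameters and toy series are assumptions for the sketch:

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0, sigma_f=1.0):
    """Squared-exponential covariance between two 1-D input sets."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return sigma_f**2 * np.exp(-0.5 * d2 / length_scale**2)

def gp_predict(x_train, y_train, x_test, noise=0.1, **kern):
    """Posterior mean and variance of GP regression (O(n^3) baseline)."""
    K = rbf_kernel(x_train, x_train, **kern) + noise**2 * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_test, **kern)
    K_ss = rbf_kernel(x_test, x_test, **kern)
    L = np.linalg.cholesky(K)                       # the O(n^3) step
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v**2, axis=0)      # latent-function variance
    return mean, var

# Toy wind-power-like series: smooth signal plus noise (synthetic data).
rng = np.random.default_rng(2)
x = np.linspace(0, 10, 80)
y = np.sin(x) + 0.1 * rng.normal(size=x.size)
x_new = np.array([10.5, 11.0])
mean, var = gp_predict(x, y, x_new, noise=0.1, length_scale=1.0)
print(mean, var)
```

Note how the predictive variance grows as the test points move beyond the training window, which is what makes GP forecasts attractive for quantifying wind uncertainty.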


The choice of model used to study human respiratory syncytial virus (RSV) infection is extremely important. RSV is a human pathogen that is exquisitely adapted to infection of human hosts. Rodent models, such as mice and cotton rats, are semi-permissive to RSV infection and do not faithfully reproduce the hallmarks of RSV disease in humans. Furthermore, immortalized airway-derived cell lines, such as HEp-2, BEAS-2B, and A549 cells, poorly represent the complexity of the respiratory epithelium. The development of well-differentiated primary pediatric airway epithelial cell (WD-PAEC) models allows us to simulate several hallmarks of RSV infection of infant airways. These models therefore represent important additions to RSV pathogenesis modeling in human-relevant tissues. The following protocols describe how to culture and differentiate both bronchial and nasal primary pediatric airway epithelial cells and how to use these cultures to study RSV cytopathogenesis.


Adjoint methods have proven to be an efficient way of calculating the gradient of an objective function with respect to a shape parameter for optimisation, with a computational cost nearly independent of the number of design variables [1]. The approach in this paper links the adjoint surface sensitivities (the gradient of the objective function with respect to surface movement) with the parametric design velocities (the movement of the surface due to a CAD parameter perturbation) in order to compute the gradient of the objective function with respect to the CAD variables.
For a successful implementation of shape optimisation strategies in practical industrial cases, the choice of design variables, i.e. the parameterisation scheme used for the model to be optimised, plays a vital role. Where the goal is to base the optimisation on a CAD model, the choices are to use a NURBS geometry generated from CAD modelling software, where the positions of the NURBS control points are the optimisation variables [2], or to use the feature-based CAD model with all of its construction history, preserving the design intent [3]. The main advantage of using the feature-based model is that the optimised model produced can be used directly for downstream applications, including manufacturing and process planning.
This paper presents an approach to optimisation based on the feature-based CAD model, which uses the CAD parameters defining the features in the model geometry as the design variables. In order to capture the CAD surface movement with respect to a change in a design variable, the "Parametric Design Velocity" is calculated, defined as the movement of the CAD model boundary in the normal direction due to a change in the parameter value.
The approach presented here for calculating the design velocities represents an advancement, in terms of capability and robustness, over that described by Robinson et al. [3]. The process can easily be integrated into most industrial optimisation workflows and is immune to the topology and labelling issues highlighted by other CAD-based optimisation processes. It considers every continuous ("real value") parameter type as an optimisation variable, and it can be adapted to work with any CAD modelling software, as long as it has an API which provides access to the values of the parameters which control the model shape and allows the model geometry to be exported. To calculate the movement of the boundary, the methodology employs finite differences on the shape of the 3D CAD models before and after the parameter perturbation. The implementation involves calculating the geometrical movement along the normal direction between two discrete representations of the original and perturbed geometries. Parametric design velocities can then be linked directly with adjoint surface sensitivities to extract the gradients used in a gradient-based optimisation algorithm.
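The link between design velocities and surface sensitivities amounts to a finite difference followed by a discrete surface integral. A sketch with entirely synthetic geometry and sensitivity data (in the real pipeline these would come from the CAD system's API and the adjoint solver):

```python
import numpy as np

# For each surface point i with unit normal n_i, the design velocity of
# parameter p is Vp_i = (x_i(p + dp) - x_i(p)) . n_i / dp, and the objective
# gradient is dJ/dp ~ sum_i (dJ/dx_i) * Vp_i * A_i over the surface mesh.

def design_velocity(x_base, x_pert, normals, dp):
    """Normal boundary movement per unit parameter change (finite difference)."""
    return np.einsum('ij,ij->i', x_pert - x_base, normals) / dp

def gradient(surface_sens, v_p, areas):
    """Chain rule: adjoint surface sensitivity dotted with design velocity."""
    return np.sum(surface_sens * v_p * areas)

rng = np.random.default_rng(3)
n_pts = 200
x_base = rng.normal(size=(n_pts, 3))            # synthetic surface points
normals = rng.normal(size=(n_pts, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
dp = 1e-3
x_pert = x_base + dp * 0.5 * normals            # synthetic perturbed surface
areas = np.full(n_pts, 1.0 / n_pts)             # per-point surface areas
sens = rng.normal(size=n_pts)                   # synthetic adjoint sensitivities

v_p = design_velocity(x_base, x_pert, normals, dp)
dJ_dp = gradient(sens, v_p, areas)
print(f"dJ/dp = {dJ_dp:.4f}")
```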
A flow optimisation problem is presented, in which the power dissipation of the flow in an automotive air duct is reduced by changing the parameters of the CAD geometry created in CATIA V5. The flow sensitivities are computed with the continuous adjoint method for laminar and turbulent flow [4] and are combined with the parametric design velocities to compute the cost-function gradients. A line-search algorithm is then used to update the design variables and proceed with the optimisation.


This paper presents the first multi-vector energy analysis of the interconnected energy systems of Great Britain (GB) and Ireland. Both systems share a common high penetration of wind power but have significantly different security-of-supply outlooks. Ireland is heavily dependent on gas imports from GB, giving significance to the interconnected aspect of the methodology, in addition to the gas and power interactions analysed. A fully realistic unit commitment and economic dispatch model coupled to an energy-flow model of the gas supply network is developed. Extreme weather events driving increased domestic gas demand and low wind power output were utilised to increase gas supply network stress. Decreased wind profiles had a larger impact on system security than high domestic gas demand: the GB energy system was resilient during high-demand periods, but gas network stress limited the ramping capability of localised generating units. Additionally, gas-system entry-node congestion in the Irish system was shown to deliver a 40% increase in short-run costs for generators. Gas storage was shown to reduce the impact of high-demand-driven congestion, delivering a 14% reduction in total generation costs in the period studied and reducing electricity imports from GB, thus contributing significantly to security of supply.
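The mechanism whereby gas network stress constrains the electricity system can be caricatured with a simple merit-order dispatch: a cap on deliverable gas limits what gas-fired units can generate, forcing more expensive plant online or leaving demand unserved. The unit data, costs and gas cap below are invented, and the paper's model is a full unit commitment, not this simplification:

```python
# Minimal merit-order economic dispatch with a gas supply cap (illustrative).

def dispatch(units, demand, gas_cap):
    """units: list of (name, capacity_MW, cost_per_MWh, fuel)."""
    schedule, gas_used = {}, 0.0
    for name, cap, cost, fuel in sorted(units, key=lambda u: u[2]):
        if demand <= 0:
            break
        avail = cap
        if fuel == 'gas':
            avail = min(avail, gas_cap - gas_used)  # network stress limits gas
        out = min(avail, demand)
        if out > 0:
            schedule[name] = out
            demand -= out
            if fuel == 'gas':
                gas_used += out
    return schedule, demand  # residual demand > 0 means unserved energy

units = [('wind', 800, 0.0, 'wind'),
         ('ccgt1', 400, 45.0, 'gas'),
         ('ccgt2', 400, 50.0, 'gas'),
         ('coal', 500, 60.0, 'coal')]

# A low-wind, gas-constrained hour: wind delivers 200 MW, gas capped at 500 MW.
low_wind = [('wind', 200, 0.0, 'wind')] + units[1:]
sched, unserved = dispatch(low_wind, demand=1300, gas_cap=500)
print(sched, unserved)
```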


Thermoelectric materials are being revisited for various applications, including power generation. The direct conversion of temperature differences into electric voltage, and vice versa, is known as the thermoelectric effect. Possible applications of thermoelectric materials include eco-friendly refrigeration, electric power generation from waste heat, infrared sensors, temperature-controlled seats and portable picnic coolers. Thermoelectric materials are also extensively researched as an alternative to compression-based refrigeration, which utilises the principle of Peltier cooling. The performance characteristic of a thermoelectric material, termed the figure of merit (ZT), is a function of several transport coefficients: the electrical conductivity (σ), the thermal conductivity (κ) and the Seebeck coefficient (S) of the material. It is expressed as ZT = S²σT/κ, where T is the absolute temperature. A large Seebeck coefficient, high electrical conductivity and low thermal conductivity are necessary to realise a high-performance thermoelectric material. The best-known thermoelectric materials are phonon-glass electron-crystal (PGEC) systems, in which phonons are scattered within the unit cell by the rattling structure while electrons are scattered little, as in crystals, to obtain a high electrical conductivity. A survey of the literature reveals that correlated semiconductors and Kondo insulators containing rare-earth or transition-metal ions are potential thermoelectric materials. The structural, magnetic and charge-transport properties of manganese oxides with the general formula RE1−xAExMnO3 (RE = rare earth; AE = Ca, Sr, Ba) are determined solely by the mixed-valence (3+/4+) state of the Mn ions. In strongly correlated electron systems, magnetism and charge-transport properties are strongly coupled.
Within the area of strongly correlated electron systems, the study of manganese oxides, widely known as manganites, which exhibit unique magnetoelectric transport properties, is an active area of research. Strongly correlated systems such as perovskite manganites, characterised by their narrow localised bands and hopping conduction, have been found to be good candidates for thermoelectric applications. Manganites represent a highly correlated electron system and exhibit a variety of phenomena such as charge, orbital and magnetic ordering, colossal magnetoresistance and the Jahn-Teller effect. The strong interdependence between the magnetic order parameters and the transport coefficients in manganites has generated much research interest in their thermoelectric properties. Here, large thermal motion, or rattling, of rare-earth atoms with localised magnetic moments is believed to be responsible for the low thermal conductivity of these compounds. The 4f levels in these compounds, lying near the Fermi energy, create a large density of states at the Fermi level, and hence these compounds are likely to exhibit a fairly large Seebeck coefficient.
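Once the transport coefficients are known, the figure of merit ZT = S²σT/κ is a one-line computation. The values below are order-of-magnitude assumptions typical of a Bi2Te3-like material at room temperature, not measurements from this work:

```python
def figure_of_merit(S, sigma, kappa, T):
    """Thermoelectric figure of merit ZT = S^2 * sigma * T / kappa.

    S: Seebeck coefficient (V/K), sigma: electrical conductivity (S/m),
    kappa: thermal conductivity (W/(m.K)), T: absolute temperature (K).
    """
    return S**2 * sigma * T / kappa

# Assumed, Bi2Te3-like room-temperature values for illustration only.
zt = figure_of_merit(S=200e-6, sigma=1.0e5, kappa=1.5, T=300.0)
print(f"ZT = {zt:.2f}")
```

The S² dependence is why a large Seebeck coefficient matters most, while the κ in the denominator rewards the low thermal conductivity that rattling structures provide.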


In the past, many papers have shown that coating cutting tools often yields decreased wear rates and reduced coefficients of friction. Although different theories have been proposed, covering areas such as hardness theory, diffusion-barrier theory, thermal-barrier theory and reduced-friction theory, most have not dealt with the question of how and why coating tool substrates with hard materials such as titanium nitride (TiN), titanium carbide (TiC) and aluminium oxide (Al2O3) transforms the performance and life of cutting tools. This project discusses the complex interrelationship between the thermal-barrier function and the relatively low sliding friction coefficient of TiN on an undulating tool surface, and presents the results of an investigation into the cutting characteristics and performance of EDMed surface-modified carbide cutting-tool inserts. The tool inserts were coated with TiN by physical vapour deposition (PVD). PVD coating is also known as ion plating, the general term for a coating method in which the film is created by attracting ionised metal vapour (here, titanium) and ionised gas onto a negatively biased substrate surface. PVD coating was chosen because it is carried out at a temperature of no more than 500°C, whereas the chemical vapour deposition (CVD) process takes place at a very high temperature of about 850°C and in two stages of heating the substrates; the high temperatures involved in CVD affect the strength of the tool substrates. In this study, comparative cutting tests using TiN-coated control specimens with no EDM surface structures and TiN-coated EDMed tools with a crater-like surface topography were carried out on mild steel grade EN-3. Various cutting speeds were investigated, up to 40% above the tool manufacturer's recommended speed. Fifteen minutes of cutting were carried out for each insert at each speed investigated.
Conventional tool inserts normally have a tool life of approximately 15 minutes of cutting. After every five cuts (passes), microscopic pictures of the tool wear profiles were taken in order to monitor the progressive wear on the rake face and on the flank of the insert. The power load was monitored for each cut using an on-board meter on the CNC machine, to establish the power needed at each stage of operation; the spindle drive of the machine is an 11 kW motor. The results confirmed the advantages, at all speeds investigated, of cutting with EDMed coated inserts, in terms of reduced tool wear and lower power loads. Moreover, the surface finish on the workpiece was consistently better for the EDMed inserts. The thesis discusses the relevance of the finite element method to the analysis of metal cutting processes, so that metal machinists can design, manufacture and deliver tools to the market quickly and on time without going through a trial-and-error approach for new products. Improvements in manufacturing technologies require better knowledge of how to model metal cutting processes. Computational models have great value in reducing, or even eliminating, the number of experiments traditionally used for tool design, process selection, machinability evaluation and chip-breakage investigations. In this work, theoretical and experimental investigations of metal machining were given special attention. Finite element analysis (FEA) was used to predict tool wear and coating deformations during machining. Particular attention was devoted to the complicated mechanisms usually associated with metal cutting, such as interfacial friction, heat generated by friction, severe strain in the cutting region, and high strain rates.
It is therefore concluded that a roughened contact surface comprising peaks and valleys coated with hard materials (TiN) provides wear-resisting properties, as the coating becomes entrapped in the valleys and helps reduce friction at the chip-tool interface. The contributions to knowledge are: a. A wear-resisting surface structure for application in contact surfaces and structures in metal cutting and forming tools, with the ability to give a wear-resisting surface profile. b. A technique for designing tools with a roughened surface comprising peaks and valleys covered in a conformal coating of a material such as TiN or TiC: a wear-resisting structure whose surface-roughness profile is composed of valleys that entrap residual coating material during wear, thereby enabling the entrapped coating material to give improved wear resistance. c. Knowledge for increased tool life through wear resistance, hardness and chemical stability at high temperatures, owing to reduced friction at the tool-chip and work-tool interfaces due to the tool coating, which leads to reduced heat generation at the cutting zones. d. The finding that undulating surface topographies on cutting tips tend to hold coating materials longer in the valleys, thus giving enhanced protection to the tool; the tool can cut 40% faster and last 60% longer than conventional tools on the market today.
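The trade-off between cutting speed and tool life is classically summarised by Taylor's tool life equation, V·Tⁿ = C. The constants below are assumed for illustration (they are not fitted to the thesis's data), but the sketch shows why a 40% speed increase is demanding: with a typical carbide exponent, it cuts tool life by roughly a factor of four, which is what a more wear-resistant coating must compensate for:

```python
# Taylor's tool life equation: V * T^n = C, with V the cutting speed (m/min)
# and T the tool life (min). n and C are assumed values for a carbide insert.

def tool_life(V, n, C):
    """Tool life in minutes for cutting speed V, from V * T^n = C."""
    return (C / V) ** (1.0 / n)

n, C = 0.25, 350.0          # assumed Taylor constants (illustrative)
V_ref = 150.0               # hypothetical recommended speed, m/min
for V in (V_ref, 1.4 * V_ref):
    print(f"V = {V:.0f} m/min -> T = {tool_life(V, n, C):.1f} min")
```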


1. Genome-wide association studies (GWAS) enable detailed dissections of the genetic basis for organisms' ability to adapt to a changing environment. In long-term studies of natural populations, individuals are often marked at one point in their life and then repeatedly recaptured. It is therefore essential that a method for GWAS accommodates this repeated sampling. In a GWAS, the effects of thousands of single-nucleotide polymorphisms (SNPs) need to be fitted, and any model development is constrained by the computational requirements. A method is therefore required that can fit a highly hierarchical model while remaining computationally fast enough to be useful. 2. Our method fits fixed SNP effects in a linear mixed model that can include both random polygenic effects and permanent environmental effects. In this way, the model can correct for population structure and model repeated measures. The covariance structure of the linear mixed model is first estimated and subsequently used in a generalized least squares setting to fit the SNP effects. The method was evaluated in a simulation study based on observed genotypes from a long-term study of collared flycatchers in Sweden. 3. The method presented here was successful in estimating permanent environmental effects from simulated repeated-measures data. Additionally, we found that, especially for variable phenotypes with large between-year variation, the repeated-measurements model has a substantial increase in power compared with a model using average phenotypes as the response. 4. The method is available in the R package RepeatABEL. It increases the power of GWAS with repeated measures, especially for long-term studies of natural populations, and the R implementation is expected to facilitate modelling of longitudinal data for studies of both animal and human populations.
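The second step of the approach, fitting a SNP effect by generalized least squares once the covariance structure of the repeated measures is known, can be sketched on simulated data. This is an illustration of the idea only, not the RepeatABEL implementation (which is in R and estimates the variance components by REML in the first step; here the simulated variances are simply plugged in):

```python
import numpy as np

rng = np.random.default_rng(4)
n_ind, n_rep = 300, 3
n_obs = n_ind * n_rep
ids = np.repeat(np.arange(n_ind), n_rep)     # each individual measured 3 times

snp = rng.binomial(2, 0.3, n_ind).astype(float)   # genotypes coded 0/1/2
pe = rng.normal(0.0, 1.0, n_ind)                  # permanent environmental effect
y = 0.4 * snp[ids] + pe[ids] + rng.normal(0.0, 1.0, n_obs)

# Covariance of observations: permanent-environment variance (1.0) shared
# within an individual, plus residual variance (1.0) on the diagonal.
V = np.where(ids[:, None] == ids[None, :], 1.0, 0.0) + np.eye(n_obs)

# GLS: beta = (X' V^-1 X)^-1 X' V^-1 y, fitted per SNP.
X = np.column_stack([np.ones(n_obs), snp[ids]])
Vinv_X = np.linalg.solve(V, X)
Vinv_y = np.linalg.solve(V, y)
beta = np.linalg.solve(X.T @ Vinv_X, X.T @ Vinv_y)
print(f"estimated SNP effect: {beta[1]:.2f} (true 0.40)")
```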


A new method for evaluating the efficiency of parabolic trough collectors, called the Rapid Test Method, is being investigated at the Solar Institut Jülich. The basic concept is to carry out measurements under stagnation conditions, which allows a fast and inexpensive process because no working fluid is required. With this approach, the temperature reached by the inner wall of the receiver is taken as the stagnation temperature and hence as the average temperature inside the collector. This leads to a systematic error, which can be rectified through the introduction of a correction factor. A model of the collector is simulated with COMSOL Multiphysics to study the size of the correction factor depending on collector geometry and working conditions. The resulting values are compared with experimental data obtained at a test rig at the Solar Institut Jülich. These results do not match the simulated ones; consequently, it was not possible to verify the model. The reliability of both the COMSOL Multiphysics model and the measurements is analysed. The influence of the correction factor on the Rapid Test Method is also studied, as well as the possibility of neglecting it by measuring the receiver's inner-wall temperature where it receives the least solar radiation. The last two chapters analyse the specific heat capacity as a function of pressure and temperature, and present some considerations about the uncertainties in the efficiency curve obtained with the Rapid Test Method.
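For orientation, a generic quasi-steady collector efficiency model shows how stagnation (zero useful output) ties the stagnation temperature to the optical efficiency and the loss coefficients. The coefficients below are assumed for illustration and are unrelated to the collectors tested at Jülich:

```python
# Generic collector efficiency curve: eta = eta0 - a1*dT/G - a2*dT^2/G,
# with dT the receiver-ambient temperature difference (K) and G the
# irradiance (W/m^2). eta0, a1, a2 are assumed illustrative coefficients.

def efficiency(dT, G, eta0=0.75, a1=1.5, a2=0.01):
    """Quasi-steady efficiency as a function of dT and irradiance G."""
    return eta0 - a1 * dT / G - a2 * dT**2 / G

# At stagnation, efficiency drops to zero: all absorbed energy is lost.
# Step up dT until the curve crosses zero to find the stagnation point.
G = 900.0
dT = 0.0
while efficiency(dT, G) > 0.0:
    dT += 0.5
print(f"stagnation dT ~ {dT:.1f} K at G = {G:.0f} W/m^2")
```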


My thesis is concerned with health policies designed to encourage the supply of health services. Access to health services is a major problem undermining the health systems of most industrialised countries. In Quebec, the median waiting time between a referral from a general practitioner and an appointment with a specialist was 7.3 weeks in 2012, compared with 2.9 weeks in 1993, despite the increase in the number of physicians over the same period. For policy-makers observing the rise in waiting times for health care, it is important to understand the structure of physicians' labour supply and how it affects the supply of health services. In this context, I consider two main policies. First, I estimate how physicians respond to monetary incentives and use the estimated parameters to examine how compensation policies can be used to shape the short-run supply of health services. Second, I examine how physicians' productivity is affected by their experience, through the mechanism of learning-by-doing, and use the estimated parameters to find the number of inexperienced physicians needed to replace a retiring experienced physician while keeping the supply of health services constant. My thesis develops and applies economic and statistical methods to measure physicians' responses to monetary incentives and to estimate their productivity profile (measuring the variation in physicians' productivity over the course of their careers), using panel data on Quebec physicians drawn from surveys and administrative records. The data contain information on each physician's labour supply, the different types of services provided, and their prices.
These data cover a period during which the Quebec government changed the relative prices of health services. I develop and estimate a structural labour-supply model that allows physicians to be multitasking. In my model, physicians choose the number of hours worked as well as the allocation of those hours across the different services provided, with service prices set by the government. The model yields an income equation that depends on hours worked and on a price index representing the marginal return to hours worked when those hours are allocated optimally across services. The price index depends on the prices of the services provided and on the parameters of the service-production technology, which determine how physicians respond to changes in relative prices. I apply the model to panel data on the remuneration of Quebec physicians merged with time-use data for the same physicians. I use the model to examine two dimensions of the supply of health services. First, I analyse the use of monetary incentives to induce physicians to change their production of the different services. Although previous studies have often compared physician behaviour across compensation systems, there is relatively little information on how physicians respond to changes in the prices of health services. Current debates in Canadian health-policy circles have focused on the importance of income effects in determining physicians' responses to increases in the prices of health services.
My work contributes to this debate by identifying and estimating the substitution and income effects resulting from changes in the relative prices of health services. Second, I analyse how experience affects physicians' productivity. This has an important implication for the recruitment of physicians to meet the growing demand of an ageing population, particularly when the most experienced (most productive) physicians retire. In the first essay, I estimate the income function conditional on hours worked, using the method of instrumental variables to control for the possible endogeneity of hours worked. As instruments, I use indicator variables for physicians' ages, the marginal tax rate, the stock-market return, and the square and cube of that return. I show that this yields a lower bound on the direct price elasticity, making it possible to test whether physicians respond to monetary incentives. The results show that the lower bounds of the price elasticities of service supply are significantly positive, suggesting that physicians do respond to incentives. A change in relative prices leads physicians to allocate more working hours to the service whose price has increased. In the second essay, I estimate the full model, unconditional on hours worked, analysing the variation in physicians' hours worked, the volume of services provided, and physicians' income. To do so, I use the simulated method of moments estimator. The results show that the direct substitution price elasticities are large and significantly positive, reflecting a tendency for physicians to increase the volume of the service whose price has risen the most.
The cross-substitution price elasticities are also large but negative. Moreover, there is an income effect associated with fee increases. I use the estimated parameters of the structural model to simulate a general 32% increase in service prices. The results show that physicians would reduce their total hours worked (average elasticity of -0.02) as well as their clinical hours worked (average elasticity of -0.07). They would also reduce the volume of services provided (average elasticity of -0.05). Third, I exploit the natural link between the income of a fee-for-service physician and his or her productivity to establish the productivity profile of physicians. To do so, I modify the specification of the model to account for the relationship between a physician's productivity and experience. I estimate the income equation using unbalanced panel data, correcting for the non-random nature of the missing observations with a selection model. The results suggest that the productivity profile is an increasing and concave function of experience. Moreover, this profile is robust to the use of effective experience (the quantity of services produced) as a control variable and to the removal of the parametric assumption. In addition, one more year of experience increases a physician's service production by CAD 1,003. I use the estimated parameters of the model to compute the replacement ratio: the number of inexperienced physicians needed to replace one experienced physician. This replacement ratio is 1.2.
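The notion of a price elasticity of labour supply can be illustrated with a simple log-log regression on simulated data. This toy estimator is unrelated to the thesis's structural simulated-method-of-moments approach, and all numbers are invented; it only shows how a small negative elasticity, like the -0.05 volume response quoted above, would be read off a regression slope:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 2_000

# Simulated log price index and log hours; in a log-log specification the
# slope is directly the price elasticity of labour supply.
log_price = rng.normal(0.0, 0.3, n)
true_elasticity = -0.05                      # assumed small negative response
log_hours = 3.5 + true_elasticity * log_price + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), log_price])
beta = np.linalg.lstsq(X, log_hours, rcond=None)[0]
print(f"estimated hours-price elasticity: {beta[1]:.3f}")
```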


Once the preserve of university academics and research laboratories with high-powered and expensive computers, the power of sophisticated mathematical fire models has now arrived on the desktop of the fire safety engineer. It is a revolution made possible by parallel advances in PC technology and fire modelling software. But while the tools have proliferated, there has not been a corresponding transfer of knowledge and understanding of the discipline from expert to general user. It is a serious shortfall of which the lack of suitable engineering courses dealing with the subject is symptomatic, if not the cause. The computational vehicles to run the models and an understanding of fire dynamics are not enough to exploit these sophisticated tools. Too often, they become 'black boxes' producing magic answers in exciting three-dimensional colour graphics and client-satisfying 'virtual reality' imagery. As well as a fundamental understanding of the physics and chemistry of fire, the fire safety engineer must have at least a rudimentary understanding of the theoretical basis underpinning fire models in order to appreciate their limitations and capabilities. The five-day short course "Principles and Practice of Fire Modelling", run by the University of Greenwich, attempts to bridge the divide between the expert and the general user, providing participants with the expertise they need to understand the results of mathematical fire modelling. The course and its associated textbook, "Mathematical Modelling of Fire Phenomena", are aimed at students and professionals with a wide and varied background, offering a friendly guide through the unfamiliar terrain of mathematical modelling. Concepts and techniques are introduced and demonstrated in seminars, and those attending also gain experience in using the methods during hands-on tutorial and workshop sessions.
On completion of this short course, participants should: be familiar with the concepts of zone and field modelling; be familiar with zone and field model assumptions; understand the capabilities and limitations of modelling software packages for zone and field modelling; be able to select and use the most appropriate mathematical software and demonstrate its use in compartment fire applications; and be able to interpret model predictions. The result is that the fire safety engineer is empowered to realise the full value of mathematical models in predicting fire development and determining the consequences of fire under a variety of conditions. This in turn enables him or her to design and implement safety measures which can potentially control, or at the very least reduce, the impact of fire.