989 results for Thermal modelling
Abstract:
A sound statistical methodology is presented for modelling the correspondence between the characteristics of individuals, their thermal environment, and their thermal sensation. The proposed methodology substantially improves on the one developed by P.O. Fanger by formulating a more general and more precise model of thermal comfort. It enables the model to be estimated from a sample of data in which all the comfort parameters vary at the same time, which is not possible with the approach adopted by Fanger. Moreover, the present model remains valid when thermal conditions are far from optimum. (C) 1997 Elsevier Science Ltd.
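As a minimal illustration of estimating a comfort model from data in which all parameters vary simultaneously, one could regress thermal sensation votes on several predictors at once. The linear form, the predictors, and the data below are assumptions for illustration, not the authors' actual model:

```python
import numpy as np

def fit_sensation_model(X, votes):
    """Ordinary least squares fit of thermal sensation votes on
    environmental/personal predictors (columns of X); all predictors
    may vary at once. Returns coefficients including an intercept."""
    A = np.column_stack([np.ones(len(X)), X])  # prepend intercept column
    coef, *_ = np.linalg.lstsq(A, votes, rcond=None)
    return coef

def predict_sensation(coef, x):
    """Predicted sensation for one observation x."""
    return coef[0] + np.dot(coef[1:], x)
```

Unlike a model calibrated with one parameter varied at a time, such a regression uses all observations jointly.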
Abstract:
It can be assumed that the composition of Mercury’s thin gas envelope (exosphere) is related to the composition of the planet’s crustal materials. If this relationship is true, then inferences regarding the bulk chemistry of the planet might be made from a thorough exospheric study. The most vexing of all unsolved problems is the uncertainty in the source of each component. Historically, it has been believed that H and He come primarily from the solar wind, while Na and K originate from volatilized materials partitioned between Mercury’s crust and meteoritic impactors. The processes that eject atoms and molecules into the exosphere of Mercury are generally considered to be thermal vaporization, photon-stimulated desorption (PSD), impact vaporization, and ion sputtering. Each of these processes has its own temporal and spatial dependence. The exosphere is strongly influenced by Mercury’s highly elliptical orbit and rapid orbital speed. As a consequence, the surface undergoes large fluctuations in temperature and experiences differences of insolation with longitude. We will discuss these processes but focus more on the expected surface composition and on solar wind particle sputtering, which releases material such as Ca and other elements from the surface minerals, and discuss the relevance of composition modelling.
Abstract:
In recent research, both soil (root-zone) and air temperature have been used as predictors for the treeline position worldwide. In this study, we intended to (a) test the proposed temperature limitation at the treeline, and (b) investigate the effect of season length on both heat sum and mean temperature variables in the Swiss Alps. As soil temperature data are available for a limited number of sites only, we developed an air-to-soil transfer model (ASTRAMO). The model predicts daily mean root-zone temperatures (10 cm below the surface) at the treeline exclusively from daily mean air temperatures. Calibrated with paired air and root-zone temperature measurements at nine treeline sites in the Swiss Alps, it incorporates time lags to account for the damping effect between air and soil temperatures as well as the temporal autocorrelation typical for such chronological data sets. Based on the measured and modeled root-zone temperatures, we analyzed the suitability of the thermal treeline indicators seasonal mean temperature and degree-days to describe the Alpine treeline position. The root-zone indicators were then compared to the respective indicators based on measured air temperatures, with all indicators calculated for two different indicator period lengths. For both temperature types (root-zone and air) and both indicator periods, seasonal mean temperature was the indicator with the lowest variation across all treeline sites. The resulting indicator values were 7.0 °C ± 0.4 SD (short indicator period) and 7.1 °C ± 0.5 SD (long indicator period) for root-zone temperature, and 8.0 °C ± 0.6 SD (short) and 8.8 °C ± 0.8 SD (long) for air temperature. Generally, a higher variation was found for all air-based treeline indicators than for the root-zone temperature indicators.
Despite this, we showed that treeline indicators calculated from both air and root-zone temperatures can be used to describe the Alpine treeline position.
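Both indicators can be computed from a daily mean temperature series. A minimal sketch (the degree-day threshold default is a placeholder; the study's actual threshold and season definitions are not reproduced here):

```python
def seasonal_mean(daily_temps):
    """Seasonal mean temperature over the indicator period (°C)."""
    return sum(daily_temps) / len(daily_temps)

def degree_days(daily_temps, threshold=0.0):
    """Degree-day heat sum: accumulated excess of the daily mean
    temperature above a threshold (threshold value is illustrative)."""
    return sum(t - threshold for t in daily_temps if t > threshold)
```

The two indicator period lengths in the study simply correspond to passing longer or shorter slices of the daily series to these functions.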
Abstract:
Rosin is a natural product from pine forests, and it is used as a raw material in resinate syntheses. Resinates are polyvalent metal salts of rosin acids; Ca- and Ca/Mg-resinates in particular find wide application in the printing ink industry. In this thesis, analytical methods were applied to increase general knowledge of resinate chemistry, and the reaction kinetics was studied in order to model the non-linear solution viscosity increase during resinate syntheses by the fusion method. Solution viscosity in toluene is an important quality factor for resinates to be used in printing inks. The concept of a critical resinate concentration, c_crit, was introduced to define an abrupt change in the dependence of viscosity on resinate concentration in the solution. The concept was then used to explain the non-linear solution viscosity increase during resinate syntheses. A semi-empirical model with two estimated parameters was derived for the viscosity increase on the basis of apparent reaction kinetics. The model was used to control the viscosity and to predict the total reaction time of the resinate process. The kinetic data from the complex reaction media were obtained by acid value titration and by FTIR spectroscopic analyses using a conventional calibration method to measure the resinate concentration and the concentration of free rosin acids. A multivariate calibration method was successfully applied to build partial least squares (PLS) models for monitoring acid value and solution viscosity in both the mid-infrared (MIR) and near-infrared (NIR) regions during the syntheses. The calibration models can be used for on-line resinate process monitoring. In the kinetic studies, two main reaction steps were observed during the syntheses. First, a fast irreversible resination reaction occurs at 235 °C, and then a slow thermal decarboxylation of rosin acids starts to take place at 265 °C.
Rosin oil is formed during the decarboxylation reaction step causing significant mass loss as the rosin oil evaporates from the system while the viscosity increases to the target level. The mass balance of the syntheses was determined based on the resinate concentration increase during the decarboxylation reaction step. A mechanistic study of the decarboxylation reaction was based on the observation that resinate molecules are partly solvated by rosin acids during the syntheses. Different decarboxylation mechanisms were proposed for the free and solvating rosin acids. The deduced kinetic model supported the analytical data of the syntheses in a wide resinate concentration region, over a wide range of viscosity values and at different reaction temperatures. In addition, the application of the kinetic model to the modified resinate syntheses gave a good fit. A novel synthesis method with the addition of decarboxylated rosin (i.e. rosin oil) to the reaction mixture was introduced. The conversion of rosin acid to resinate was increased to the level necessary to obtain the target viscosity for the product at 235 °C. Due to a lower reaction temperature than in traditional fusion synthesis at 265 °C, thermal decarboxylation is avoided. As a consequence, the mass yield of the resinate syntheses can be increased from ca. 70% to almost 100% by recycling the added rosin oil.
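The abstract does not state the functional form of the two-parameter model; one hedged sketch consistent with an abrupt change in concentration dependence at c_crit is a piecewise-exponential relation (purely illustrative, not the thesis model):

```python
import math

def solution_viscosity(c, c_crit, eta0, a, b):
    """Illustrative two-regime viscosity model: below c_crit viscosity
    grows slowly with resinate concentration c; above c_crit it grows
    much more steeply. eta0, a, b are fitted parameters. The curve is
    continuous at c_crit by construction."""
    if c <= c_crit:
        return eta0 * math.exp(a * c)
    return eta0 * math.exp(a * c_crit) * math.exp(b * (c - c_crit))
```

With b much larger than a, the model reproduces the abrupt viscosity rise once the concentration passes the critical value.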
Abstract:
The behavior of nuclear power plants must be known in all operational situations. Thermal hydraulics computer applications are used to simulate the behavior of the plants, and these applications must be validated before they can be used reliably, by comparing simulation results against experimental results. In this thesis, a model of the PWR PACTEL steam generator was prepared with the TRAC/RELAP Advanced Computational Engine (TRACE) computer application. The simulation results can be compared against the results of the Advanced Process Simulator (APROS) analysis software in the future. The development of the model of the PWR PACTEL vertical steam generator is introduced in this thesis, and loss-of-feedwater transient simulation examples were carried out with the model.
Abstract:
The objective of this work was to develop and validate a mathematical model to estimate the duration of the cotton (Gossypium hirsutum L. r. latifolium Hutch.) cycle in the State of Goiás, Brazil, by applying the method of growing degree-days (GD) and considering, simultaneously, its time-space variation. The model was developed as a linear combination of elevation, latitude, longitude, and a Fourier series of time variation. The model parameters were adjusted by multiple linear regression to the observed GD accumulated with air temperature in the range of 15°C to 40°C. The minimum and maximum temperature records used to calculate the GD were obtained from 21 meteorological stations, with data spanning from 8 to 20 years of observation. The coefficient of determination resulting from the comparison between the estimated and calculated GD along the year was 0.84. Model validation was done by comparing estimated and measured crop cycles in the period from cotton germination to the stage when 90 percent of bolls were opened in commercial crop fields. Comparative results showed that the model performed very well, as indicated by a Pearson correlation coefficient of 0.90 and a Willmott agreement index of 0.94, resulting in a performance index of 0.85.
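The GD accumulation with the 15 °C to 40 °C temperature range can be sketched as follows. This uses the common averaging formulation with both cardinal temperatures clipped to the stated range; the paper's exact clipping convention is an assumption:

```python
def daily_growing_degrees(t_min, t_max, t_base=15.0, t_upper=40.0):
    """Daily growing degree contribution: mean of the daily minimum and
    maximum temperatures, each clipped to [t_base, t_upper], minus the
    base temperature."""
    t_min_c = min(max(t_min, t_base), t_upper)
    t_max_c = min(max(t_max, t_base), t_upper)
    return (t_min_c + t_max_c) / 2.0 - t_base

def accumulated_gd(records):
    """Accumulate GD over a sequence of (t_min, t_max) daily records."""
    return sum(daily_growing_degrees(lo, hi) for lo, hi in records)
```

Summing the daily values from germination onward gives the heat sum that the model relates to elevation, latitude, longitude, and time of year.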
Abstract:
Hydrogen stratification and atmosphere mixing are very important phenomena in nuclear reactor containments when severe accidents are studied and simulated. Hydrogen generation, distribution and accumulation in certain parts of the containment may pose a great risk of a pressure increase induced by hydrogen combustion and thus challenge the integrity of the NPP containment. The accurate prediction of hydrogen distribution is important with respect to the safety design of an NPP. Modelling methods typically used for containment analyses include both lumped parameter and field codes. The lumped parameter method is universally used in containment codes because of its versatility, flexibility and simplicity. It allows fast, full-scale simulations in which different containment geometries with the relevant engineering safety features can be modelled. Lumped parameter gas stratification and mixing modelling methods are presented and discussed in this master’s thesis. Experimental research is widely used in containment analyses. The HM-2 experiment on hydrogen stratification and mixing, conducted at the THAI facility in Germany, is calculated with the APROS lumped parameter containment package and the APROS 6-equation thermal hydraulic model. The main purpose was to study whether the convection term included in the momentum conservation equation of the 6-equation model gives notable advantages compared to the simplified lumped parameter approach. Finally, a simple containment test case (a high steam release into a narrow steam generator room inside a large dry containment) was calculated with both APROS models. In this case, the aim was to determine the extreme containment conditions in which the effect of the convection term was expected to be largest. Calculation results showed that both the APROS containment package and the 6-equation model could model the hydrogen stratification in the THAI test well if the vertical nodalisation was dense enough.
However, in more complicated cases, numerical diffusion may distort the results. The calculation of light gas stratification could probably be improved by applying a second-order discretisation scheme to the modelling of gas flows. If the gas flows are relatively high, the convection term of the momentum equation is necessary to model the pressure differences between adjacent nodes reasonably.
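The numerical diffusion mentioned above can be demonstrated in a few lines: a first-order upwind scheme advecting a sharp concentration front smears it over neighbouring nodes, much as a coarse nodalisation smears a light-gas stratification. This is a generic illustration, not APROS code:

```python
def upwind_advect(profile, courant, steps):
    """Advect a 1-D concentration profile with the first-order upwind
    scheme (positive velocity, inflow boundary held fixed). First-order
    upwinding introduces numerical diffusion, which smears sharp
    gradients such as a stratified light-gas front."""
    u = list(profile)
    for _ in range(steps):
        u = [u[0]] + [u[i] - courant * (u[i] - u[i - 1])
                      for i in range(1, len(u))]
    return u
```

Running this on a step profile shows intermediate values appearing where the initial front was sharp; a second-order scheme would keep the front much steeper.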
Abstract:
Innovative gas cooled reactors, such as the pebble bed reactor (PBR) and the gas cooled fast reactor (GFR), offer higher efficiency and new application areas for nuclear energy. Numerical methods were applied and developed to analyse the specific features of these reactor types with fully three-dimensional calculation models. In the first part of this thesis, the discrete element method (DEM) was used for a physically realistic modelling of the packing of fuel pebbles in PBR geometries, and methods were developed for utilising the DEM results in subsequent reactor physics and thermal-hydraulics calculations. In the second part, the flow and heat transfer for a single gas cooled fuel rod of a GFR were investigated with computational fluid dynamics (CFD) methods. An in-house DEM implementation was validated and used for packing simulations, in which the effect of several parameters on the resulting average packing density was investigated. The restitution coefficient was found to have the most significant effect. The results can be utilised in further work to obtain a pebble bed with a specific packing density. The packing structures of selected pebble beds were also analysed in detail, and local variations in the packing density were observed, which should be taken into account especially in reactor core thermal-hydraulic analyses. Two open source DEM codes were used to produce stochastic pebble bed configurations to add realism and improve the accuracy of criticality calculations performed with the Monte Carlo reactor physics code Serpent. Russian ASTRA criticality experiments were calculated. Pebble beds corresponding to the experimental specifications within measurement uncertainties were produced in the DEM simulations and successfully exported into the subsequent reactor physics analysis. With the developed approach, two typical issues in Monte Carlo reactor physics calculations of pebble bed geometries were avoided.
A novel method was developed and implemented as a MATLAB code to calculate the porosities in the cells of a CFD calculation mesh constructed over a pebble bed obtained from the DEM simulations. The code was further developed to distribute power and temperature data accurately between the discrete-based reactor physics and continuum-based thermal-hydraulics models to enable coupled reactor core calculations. The developed method was also found useful for analysing sphere packings in general. CFD calculations were performed to investigate the pressure losses and heat transfer in three-dimensional air cooled smooth and rib-roughened rod geometries, housed inside a hexagonal flow channel representing a sub-channel of a single fuel rod of a GFR. The CFD geometry represented the test section of the L-STAR experimental facility at Karlsruhe Institute of Technology, and the calculation results were compared to the corresponding experimental results. Knowledge was gained of the adequacy of various turbulence models and of the modelling requirements and issues related to this specific application. The obtained pressure loss results were in relatively good agreement with the experimental data. Heat transfer in the smooth rod geometry was somewhat underpredicted, which can partly be explained by unaccounted heat losses and uncertainties. In the rib-roughened geometry, heat transfer was severely underpredicted by the realisable k-epsilon turbulence model used. An additional calculation with a v2-f turbulence model showed a significant improvement in the heat transfer results, which is most likely due to the better performance of that model in separated flow problems. Further investigations are suggested before using CFD to draw conclusions about the heat transfer performance of rib-roughened GFR fuel rod geometries.
It is suggested that the viewpoints of numerical modelling be included in the planning of experiments to ease the challenging model construction and simulations and to avoid introducing additional sources of uncertainty. To facilitate the use of advanced calculation approaches, multi-physical aspects of the experiments should also be considered and documented in reasonable detail.
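The porosity-per-mesh-cell idea described above can be sketched simply. The thesis implements this geometrically in MATLAB; the sketch below instead estimates each cell's porosity by Monte Carlo point sampling over a DEM-style sphere list (everything here is illustrative):

```python
import random

def cell_porosity(cell_min, cell_max, spheres, samples=20000, seed=1):
    """Estimate the fluid porosity of an axis-aligned mesh cell as the
    fraction of uniformly sampled points that fall outside every pebble.
    spheres: list of ((x, y, z), radius) from a packing simulation."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        p = [rng.uniform(lo, hi) for lo, hi in zip(cell_min, cell_max)]
        if any(sum((pi - ci) ** 2 for pi, ci in zip(p, c)) <= r * r
               for c, r in spheres):
            inside += 1
    return 1.0 - inside / samples
```

Evaluating this for every cell of a CFD mesh laid over the pebble bed yields the porosity field used to couple the discrete packing to a continuum thermal-hydraulics model.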
Abstract:
Gasification of biomass is an efficient process to produce liquid fuels, heat and electricity. It is interesting especially for the Nordic countries, where raw material for the process is readily available. The thermal reactions of light hydrocarbons are a major challenge for industrial applications: at elevated temperatures, light hydrocarbons react spontaneously to form compounds of higher molecular weight. In this thesis, this phenomenon was studied through a literature survey, experimental work and a modeling effort. The literature survey revealed that the change in tar composition is likely caused by kinetic entropy. The role of the surface material is deemed to be an important factor in the reactivity of the system. The experimental results were in accordance with previous publications on the subject. The novelty of the experimental work lies in the time interval used for the measurements combined with an industrially relevant temperature interval. The aspects covered in the modeling include the screening of possible numerical approaches, the testing of optimization methods and kinetic modelling. No significant numerical issues were observed, so the calculation routines used are adequate for the task. Evolutionary algorithms gave better performance and a better fit than conventional iterative methods such as the Simplex and Levenberg-Marquardt methods. Three models were fitted to the experimental data. The LLNL model was used as a reference against which the two other models were compared. A compact model that included all the observed species was developed. The parameter estimation performed on that model gave a slightly poorer fit to the experimental data than the LLNL model, but the difference was barely significant. The third tested model concentrated on the decomposition of hydrocarbons and included a theoretical description of the formation of a carbon layer on the reactor walls. Its fit to the experimental data was extremely good.
Based on the simulation results and literature findings, it is likely that the surface coverage of carbonaceous deposits is a major factor in thermal reactions.
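As an illustration of why an evolutionary optimiser can be attractive for kinetic parameter estimation, the sketch below fits a rate constant for a stand-in first-order decay model with a minimal mutation-and-selection loop. The thesis fits far richer mechanisms; the model, data and settings here are illustrative only:

```python
import math
import random

def simulate_decay(k, times, c0=1.0):
    """First-order decomposition c(t) = c0 * exp(-k t), a simple
    stand-in for the hydrocarbon kinetics studied in the thesis."""
    return [c0 * math.exp(-k * t) for t in times]

def fit_rate_evolutionary(times, data, generations=60, pop_size=20, seed=0):
    """Minimal evolutionary fit of the rate constant k: repeatedly
    mutate the current best estimate with Gaussian noise and keep any
    candidate that lowers the sum of squared errors. Unlike gradient
    methods, this needs no derivatives of the model."""
    rng = random.Random(seed)

    def sse(k):
        return sum((m - c) ** 2
                   for m, c in zip(simulate_decay(k, times), data))

    best = rng.uniform(0.0, 5.0)
    for _ in range(generations):
        for _ in range(pop_size):
            cand = abs(best + rng.gauss(0.0, 0.2))
            if sse(cand) < sse(best):
                best = cand
    return best
```

For multimodal objectives of real kinetic networks, population-based variants of this idea avoid the local minima that trap Simplex or Levenberg-Marquardt starts.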
Abstract:
This thesis concentrates on the validation of the generic thermal hydraulic computer code TRACE against the challenges of the VVER-440 reactor type. The code’s capability to model the VVER-440 geometry and the thermal hydraulic phenomena specific to this reactor design has been examined and demonstrated to be acceptable. The main challenge in VVER-440 thermal hydraulics appeared in the modelling of the horizontal steam generator; the major difficulty here is not in the code physics or numerics but in the formulation of a representative nodalization structure. Another VVER-440 specialty, the hot leg loop seals, challenges system codes functionally in general, but proved readily representable. Computer code models have to be validated against experiments to achieve confidence in them. When a new computer code is to be used for nuclear power plant safety analysis, it must first be validated against a large variety of different experiments, and the validation process has to cover both the code itself and the code input. Uncertainties of different natures are identified in the different phases of the validation procedure and can even be quantified. This thesis presents a novel approach to input model validation and uncertainty evaluation in the different stages of the computer code validation procedure. It also demonstrates that in safety analysis there are inevitably significant uncertainties that are not statistically quantifiable; they need to be, and can be, addressed by other, less simplistic means, ultimately relying on the competence of the analysts and the capability of the community to support the experimental verification of analytical assumptions. This method essentially complements the commonly used uncertainty assessment methods, which are usually based on statistical methods alone.
Electromagnetic and thermal design of a multilevel converter with high power density and reliability
Abstract:
Electric energy demand has been growing constantly as the global population increases. To avoid electric energy shortage, renewable energy sources and energy conservation are emphasized all over the world. The role of power electronics in energy saving and development of renewable energy systems is significant. Power electronics is applied in wind, solar, fuel cell, and micro turbine energy systems for the energy conversion and control. The use of power electronics introduces an energy saving potential in such applications as motors, lighting, home appliances, and consumer electronics. Despite the advantages of power converters, their penetration into the market requires that they have a set of characteristics such as high reliability and power density, cost effectiveness, and low weight, which are dictated by the emerging applications. In association with the increasing requirements, the design of the power converter is becoming more complicated, and thus, a multidisciplinary approach to the modelling of the converter is required. In this doctoral dissertation, methods and models are developed for the design of a multilevel power converter and the analysis of the related electromagnetic, thermal, and reliability issues. The focus is on the design of the main circuit. The electromagnetic model of the laminated busbar system and the IGBT modules is established with the aim of minimizing the stray inductance of the commutation loops that degrade the converter power capability. The circular busbar system is proposed to achieve equal current sharing among parallel-connected devices and implemented in the non-destructive test set-up. In addition to the electromagnetic model, a thermal model of the laminated busbar system is developed based on a lumped parameter thermal model. The temperature and temperature-dependent power losses of the busbars are estimated by the proposed algorithm. 
The Joule losses produced by non-sinusoidal currents flowing through the busbars in the converter are estimated taking into account the skin and proximity effects, which have a strong influence on the AC resistance of the busbars. The lifetime estimation algorithm was implemented to investigate the influence of the cooling solution on the reliability of the IGBT modules. As efficient cooling solutions have a low thermal inertia, they cause excessive temperature cycling of the IGBTs. Thus, a reliability analysis is required when selecting the cooling solutions for a particular application. The control of the cooling solution based on the use of a heat flux sensor is proposed to reduce the amplitude of the temperature cycles. The developed methods and models are verified experimentally by a laboratory prototype.
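A lumped parameter thermal model of the kind described can be sketched as a single thermal node with temperature-dependent Joule losses. The one-node simplification and all parameter values are illustrative; the dissertation's busbar model uses a full network of nodes and measured loss data:

```python
def busbar_temperature(p_loss_func, r_th, c_th, t_amb, dt, steps):
    """One-node lumped thermal model of a busbar:
    C dT/dt = P(T) - (T - T_amb) / R_th, integrated by explicit Euler.
    p_loss_func(T) returns the temperature-dependent Joule loss in W."""
    t = t_amb
    history = []
    for _ in range(steps):
        p = p_loss_func(t)
        t += dt * (p - (t - t_amb) / r_th) / c_th
        history.append(t)
    return history

def joule_loss(t, i_rms=400.0, r20=1.0e-4, alpha=0.004):
    """Joule loss with copper's resistivity temperature coefficient;
    skin and proximity effects are assumed folded into the effective
    AC resistance r20 (all values illustrative)."""
    return i_rms ** 2 * r20 * (1.0 + alpha * (t - 20.0))
```

The coupling between losses and temperature is why the estimation has to iterate: warmer busbars have higher resistance and therefore higher losses.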
Abstract:
The hazards associated with major accident hazard (MAH) industries are fire, explosion and toxic gas releases. Of these, toxic gas release is the worst, as it has the potential to cause extensive fatalities. Qualitative and quantitative hazard analyses are essential for the identification and quantification of the hazards associated with chemical industries. This research work presents the results of a consequence analysis carried out to assess the damage potential of the hazardous material storages in an industrial area of central Kerala, India. A survey carried out in the major accident hazard (MAH) units in the industrial belt revealed that the major hazardous chemicals stored by the various industrial units are ammonia, chlorine, benzene, naphtha, cyclohexane, cyclohexanone and LPG. The damage potential of the above chemicals is assessed using consequence modelling. Modelling of pool fires for naphtha, cyclohexane, cyclohexanone, benzene and ammonia is carried out using the TNO model. Vapor cloud explosion (VCE) modelling of LPG, cyclohexane and benzene is carried out using the TNT equivalent model. Boiling liquid expanding vapor explosion (BLEVE) modelling of LPG is also carried out. Dispersion modelling of toxic chemicals like chlorine, ammonia and benzene is carried out using the ALOHA air quality model. Threat zones for the different hazardous storages are estimated based on the consequence modelling. The distance covered by the threat zone was found to be maximum for a chlorine release from a chlor-alkali industry located in the area. The results of the consequence modelling are useful for the estimation of individual risk and societal risk in the above industrial area. Vulnerability assessment is carried out using probit functions for toxic, thermal and pressure loads. Individual and societal risks are also estimated at different locations.
Mapping of the threat zones due to different incident outcome cases from the different MAH industries is done with the help of ArcGIS. Fault Tree Analysis (FTA) is an established technique for hazard evaluation. This technique has the advantage of being both qualitative and quantitative, if the probabilities and frequencies of the basic events are known. However, it is often difficult to estimate precisely the failure probability of the components due to insufficient data or the vague characteristics of the basic events. It has been reported that the availability of failure probability data pertaining to local conditions is surprisingly limited in India. This thesis outlines the generation of failure probability values for the basic events that lead to the release of chlorine from the storage and filling facility of a major chlor-alkali industry located in the area, using expert elicitation and proven fuzzy logic. Sensitivity analysis has been done to evaluate the percentage contribution of each basic event that could lead to a chlorine release. Two-dimensional fuzzy fault tree analysis (TDFFTA) has been proposed for balancing the hesitation factor involved in expert elicitation.
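Probit vulnerability functions of the general form used in such consequence studies map a toxic load to a fatality probability. A generic sketch: the coefficients a, b and n are chemical-specific and must be taken from published probit tables (the values used in examples here are arbitrary, not from the thesis):

```python
import math

def probit_lethality(concentration_ppm, minutes, a, b, n):
    """Toxic-load probit model: Y = a + b * ln(C^n * t).
    The fatality probability is the standard normal CDF evaluated at
    (Y - 5), computed here via the error function."""
    y = a + b * math.log((concentration_ppm ** n) * minutes)
    return 0.5 * (1.0 + math.erf((y - 5.0) / math.sqrt(2.0)))
```

Analogous probit expressions with thermal dose or peak overpressure as the load give the thermal and pressure vulnerabilities mentioned above.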
Abstract:
A data centre is a centralized repository, either physical or virtual, for the storage, management and dissemination of data and information organized around a particular body, and it is a nerve centre of the present IT revolution. Data centres are expected to serve uninterruptedly round the year, and in doing so they consume enormous amounts of energy. Tremendous growth in demand from the IT industry has made it customary to develop newer technologies for the better operation of data centres. Energy conservation activities in data centres concentrate mainly on the air conditioning system, since it is the major mechanical sub-system and consumes a considerable share of the total power consumption of the data centre. The data centre energy matrix is best represented by the power utilization efficiency (PUE), defined as the ratio of the total facility power to the IT equipment power. Its value is always greater than one, and a large PUE indicates that the sub-systems draw more power from the facility and that the data centre performs poorly from the standpoint of energy conservation. PUE values of 1.4 to 1.6 are achievable by proper design and management techniques. Optimizing the air conditioning system offers an enormous opportunity to bring down the PUE value. The air conditioning system can be optimized by two approaches, namely thermal management and air flow management. Thermal management systems have been introduced by some companies, but they are highly sophisticated and costly and have not yet become common practice.
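The PUE definition above is a single ratio and can be stated directly in code (a trivial sketch; the metric itself, not any implementation from the text):

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power utilization efficiency: total facility power divided by IT
    equipment power. Always >= 1 in practice; lower is better."""
    if it_equipment_kw <= 0:
        raise ValueError("IT equipment power must be positive")
    return total_facility_kw / it_equipment_kw
```

For example, a facility drawing 800 kW in total for 500 kW of IT load has a PUE of 1.6, at the upper edge of the 1.4 to 1.6 band described as achievable.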
Abstract:
Upgrading two widely used standard plastics, polypropylene (PP) and high density polyethylene (HDPE), and generating a variety of useful engineering materials based on their blends have been the main objectives of this study. The upgrading was effected by using nanomodifiers and/or fibrous modifiers. PP and HDPE were selected for modification due to their attractive inherent properties and wide spectrum of use. Blending is an engineered method of producing new materials with tailor-made properties that combine the advantages of both constituents: PP has high tensile and flexural strength, while HDPE acts as an impact modifier in the resultant blend. Hence, an optimized blend of PP and HDPE was selected as the matrix material for upgrading. Nanokaolinite clay and E-glass fibre were chosen for modifying the PP/HDPE blend. In the first stage of the work, the mechanical, thermal, morphological, rheological, dynamic mechanical and crystallization characteristics of polymer nanocomposites prepared with the PP/HDPE blend and different surface-modified nanokaolinite clays were analyzed. In the second stage, the effect of the simultaneous inclusion of nanokaolinite clay (both N100A and N100) and short glass fibres was investigated. The presence of the nanofiller increased the properties of the hybrid composites to a greater extent than those of the microcomposites. In the last stage, micromechanical modeling of both the nano and hybrid composites was carried out to analyze the behavior of the composites under load-bearing conditions. These theoretical analyses indicate that the polymer-nanoclay interfacial characteristics partially converge to a state of perfect interfacial bonding (Takayanagi model) with an iso-stress (Reuss IROM) response. In the case of the hybrid composites, the experimental data follow the trend of the Halpin-Tsai model.
This implies that the matrix and filler experience varying amounts of strain, and that the interfacial adhesion between filler and matrix, and also between the two fillers, plays a vital role in determining the modulus of the hybrid composites. A significant observation from this study is that the high fibre loading required for efficient reinforcement of polymers can be substantially reduced by the presence of the nanofiller together with a much lower fibre content in the composite. Hybrid composites with both nanokaolinite clay and micron-sized E-glass fibre as reinforcements in a PP/HDPE matrix will generate a novel class of high-performance, cost-effective engineering materials.
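The micromechanical bounds mentioned above (Halpin-Tsai and the iso-stress Reuss rule) are standard closed-form expressions and can be sketched directly. The shape factor and moduli in the test are illustrative values, not data from the thesis:

```python
def halpin_tsai_modulus(e_matrix, e_filler, volume_fraction, xi):
    """Halpin-Tsai estimate of the composite modulus. xi is the shape
    factor (often taken as about twice the fibre aspect ratio for
    aligned short fibres)."""
    ratio = e_filler / e_matrix
    eta = (ratio - 1.0) / (ratio + xi)
    return e_matrix * (1.0 + xi * eta * volume_fraction) / \
        (1.0 - eta * volume_fraction)

def reuss_modulus(e_matrix, e_filler, volume_fraction):
    """Inverse rule of mixtures (Reuss, iso-stress) lower bound."""
    return 1.0 / (volume_fraction / e_filler +
                  (1.0 - volume_fraction) / e_matrix)
```

The Halpin-Tsai prediction always lies between the Reuss (iso-stress) and Voigt (iso-strain) bounds, which is why fitting experimental moduli against these curves indicates the quality of interfacial load transfer.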