21 results for Realistic design fire conditions
in Aston University Research Archive
Abstract:
This thesis investigates the behaviour of a concrete frame structure under localised fire scenarios by implementing a constitutive model in a finite-element computer program. The investigation covered material properties at elevated temperature, a description of the computer program, and thermal and structural analyses. Transient thermal material properties were employed to achieve realistic results. The finite-element package ANSYS is used in the present analyses to examine the effect of fire on the concrete frame under five different fire scenarios. In addition, a report on the full-scale BRE Cardington concrete building, designed to Eurocode 2 and BS 8110 and subjected to a realistic compartment fire, is also presented. The transient analyses of the present model add specific heat to the base value for dry concrete at temperatures of 100°C and 200°C. A combined convective-radiative heat transfer coefficient and transient thermal expansion are also considered in the analyses. For the analyses with transient strains included, the constitutive model is based on the empirical formulae of the full thermal strain-stress model proposed by Li and Purkiss (2005). Comparisons between the models with and without transient strains are also discussed. The results indicate that the behaviour of the complete structure differs significantly from that of individual isolated members designed by current methods. Although the current tabulated design procedures are conservative when the performance of the entire building is considered, the beneficial and detrimental effects of thermal expansion in complete structures should be taken into account. Developing new fire engineering methods from the study of complete structures, rather than from individual isolated member behaviour, is therefore essential.
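For orientation, the combined convective-radiative heat transfer at a fire-exposed surface is conventionally written in the Eurocode (EN 1991-1-2) form; the notation below is the standard one and is given as background rather than quoted from the thesis:

\[
\dot h_{\mathrm{net}} = \alpha_c \,(\Theta_g - \Theta_m) + \Phi\, \varepsilon_m \varepsilon_f\, \sigma \left[(\Theta_r + 273)^4 - (\Theta_m + 273)^4\right],
\]

where \(\alpha_c\) is the convective coefficient, \(\Theta_g\), \(\Theta_r\) and \(\Theta_m\) the gas, radiant and member surface temperatures in °C, \(\varepsilon_m\) and \(\varepsilon_f\) the member and fire emissivities, \(\Phi\) a configuration factor, and \(\sigma\) the Stefan-Boltzmann constant.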
Abstract:
Product reliability and environmental performance have become critical elements of a product's specification and design. To obtain a high level of confidence in the reliability of a design, it is customary to test it under realistic conditions in a laboratory. The objective of this work is to examine the feasibility of designing mechanical test rigs which exhibit prescribed dynamical characteristics; the product is then attached to the rig, and excitation applied to the rig transmits representative vibration levels into the product. The philosophical considerations made at the outset of the project are discussed, as they form the basis for the resulting design methodologies. An initial attempt to identify the parameters of a test rig directly from the spatial model derived during system identification is shown to be incapable of yielding a feasible test rig design. A finite-dimensional optimal design methodology is therefore developed which identifies the parameters of a discrete spring/mass system that is dynamically similar to a point coordinate on a continuous structure. This methodology is incorporated within another procedure which derives a structure comprising a continuous element and a discrete system, and is used to obtain point-coordinate similarity for two planes of motion, validated by experimental tests. A limitation of this approach is that multi-coordinate similarity cannot be achieved, owing to interaction between the discrete system and the continuous element at points away from the coordinate of interest. This work highlights the importance of the continuous element, and a design methodology is developed for continuous structures, based upon distributed-parameter optimal design techniques, which allows an initially poor design estimate to be moved in a feasible direction towards an acceptable design solution. Cumulative damage theory is used to provide a quantitative method of assessing the quality of dynamic similarity. It is shown that the combination of modal analysis techniques and cumulative damage theory provides a feasible design synthesis methodology for representative test rigs.
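`Cumulative damage theory' here presumably means a Palmgren-Miner type linear summation (an assumption on our part; the abstract does not name the rule), in which the damage accumulated by a stress history is

\[
D = \sum_i \frac{n_i}{N_i},
\]

where \(n_i\) is the number of cycles experienced at stress amplitude level \(i\) and \(N_i\) the number of cycles to failure at that amplitude. Comparing the damage accumulated on the rig with that accumulated in service then gives a quantitative measure of dynamic similarity of the kind referred to above.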
Abstract:
The research is concerned with the application of computer simulation to study the performance of reinforced concrete columns in a fire environment. The effect of three different concrete constitutive models, incorporated in the simulation, on the structural response of reinforced concrete columns exposed to fire is investigated. The material models differ mainly in their formulation of the mechanical properties of concrete. The simulation results clearly illustrate that a more realistic response of a reinforced concrete column exposed to fire is given by a constitutive model that includes transient creep, or an equivalent strain effect. The relative effect of the three concrete material models is assessed through a parametric study, carried out using the results from a series of analyses on columns heated on three sides, which produces substantial thermal gradients. Three different loading conditions were applied to the column: axial loading, and eccentric loading inducing moments in either the same sense or the opposite sense to those induced by the thermal gradient. An axially loaded column heated on four sides was also considered. The modelling technique adopted separates the thermal and structural responses into two distinct computer programs. A finite element heat transfer analysis determines the thermal response of the columns when exposed to the ISO 834 furnace environment; the temperature distribution histories obtained are then used as input to a structural response program. The effect of spalling on the structural behaviour of reinforced concrete columns is also investigated. The potential problems of spalling are generally recognised, but there has been no real investigation into its effect on the fire resistance of reinforced concrete members. To address this, a method has been developed to model concrete columns exposed to fire which incorporates the effect of spalling. A total of 224 computer simulations were undertaken, varying the amount of concrete lost during a specified period of exposure to fire. An array of six spalling percentages was chosen for one range of simulations, while a two-stage progressive spalling regime was used for a second range. Quantifying the reduction in fire resistance of the columns against the amount of spalling, the heating and loading patterns, and the time at which the concrete spalls indicates that the amount of spalling is the most significant variable in the reduction of fire resistance.
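A constitutive model `with transient creep' in this context typically decomposes the total concrete strain as follows (a standard formulation, e.g. Anderberg and Thelandersson, given here as background rather than quoted from the thesis):

\[
\varepsilon_{\mathrm{tot}} = \varepsilon_{\mathrm{th}}(T) + \varepsilon_{\sigma}(\sigma, T) + \varepsilon_{\mathrm{tr}}(\sigma, T),
\]

where \(\varepsilon_{\mathrm{th}}\) is the free thermal strain, \(\varepsilon_{\sigma}\) the instantaneous stress-related strain, and \(\varepsilon_{\mathrm{tr}}\) the transient strain that develops only under simultaneous load and first heating. Omitting the \(\varepsilon_{\mathrm{tr}}\) term is what makes the simpler constitutive models respond less realistically.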
Abstract:
Background: The optimisation and scale-up of process conditions leading to high yields of recombinant proteins is an enduring bottleneck in the post-genomic sciences. Typical experiments rely on varying selected parameters through repeated rounds of trial-and-error optimisation. To rationalise this, several groups have recently adopted the 'design of experiments' (DoE) approach frequently used in industry. Studies have focused on parameters such as medium composition, nutrient feed rates and induction of expression in shake flasks or bioreactors, as well as oxygen transfer rates in micro-well plates. In this study we wanted to generate a predictive model that described small-scale screens and to test its scalability to bioreactors.

Results: Here we demonstrate how the use of a DoE approach in a multi-well mini-bioreactor permitted the rapid establishment of high-yielding production-phase conditions that could be transferred to a 7 L bioreactor. Using green fluorescent protein secreted from Pichia pastoris, we derived a predictive model of protein yield as a function of the three most commonly varied process parameters: temperature, pH and the percentage of dissolved oxygen in the culture medium. Importantly, when yield was normalised to culture volume and density, the model was scalable from mL to L working volumes. By increasing pre-induction biomass accumulation, model-predicted yields were further improved. Yield improvement was most significant, however, on varying the fed-batch induction regime to minimise methanol accumulation, so that the productivity of the culture increased throughout the whole induction period. These findings suggest the importance of matching the rate of protein production with the host metabolism.

Conclusion: We demonstrate how a rational, stepwise approach to recombinant protein production screens can reduce process development time.
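As an illustration of the modelling step, a full second-order (response-surface) model in three factors can be fitted by ordinary least squares. The sketch below uses synthetic data and coded factor levels; it shows the general technique, not the paper's actual design, data or coefficients.

```python
# Minimal sketch: fit a second-order response-surface model of protein yield
# as a function of temperature, pH and % dissolved oxygen (DoE style).
# All data below are synthetic/assumed, not taken from the paper.
import numpy as np
from itertools import product

# Face-centred central composite design in coded units (-1, 0, +1): 15 runs
corners = np.array(list(product([-1.0, 1.0], repeat=3)))           # 8 factorial runs
axial = np.array([s * e for e in np.eye(3) for s in (-1.0, 1.0)])  # 6 axial runs
X = np.vstack([corners, axial, np.zeros((1, 3))])                  # + centre point

# Synthetic "measured" yields: a smooth surface with an interior optimum plus noise
rng = np.random.default_rng(0)
y = (1.0 - 0.30 * X[:, 0]**2 - 0.20 * X[:, 1]**2 - 0.10 * X[:, 2]**2
     + 0.10 * X[:, 1] + rng.normal(0.0, 0.02, len(X)))

def model_matrix(X):
    """Intercept, linear, two-way interaction and quadratic terms (10 columns)."""
    t, p, d = X.T
    return np.column_stack([np.ones(len(X)), t, p, d,
                            t * p, t * d, p * d, t**2, p**2, d**2])

beta, *_ = np.linalg.lstsq(model_matrix(X), y, rcond=None)  # least-squares fit

# Predict yield at a new (coded) operating point
x_new = np.array([[0.2, 0.5, -0.1]])
print("predicted yield:", model_matrix(x_new) @ beta)
```

A stationary point of the fitted quadratic then suggests candidate high-yield conditions to verify at larger scale.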
Abstract:
Eukaryotic membrane proteins cannot be produced in a reliable manner for structural analysis. Consequently, researchers still rely on trial-and-error approaches, which most often yield insufficient amounts. This means that membrane protein production is recognized by biologists as the primary bottleneck in contemporary structural genomics programs. Here, we describe a study to examine the reasons for successes and failures in recombinant membrane protein production in yeast, at the level of the host cell, by systematically quantifying cultures in high-performance bioreactors under tightly-defined growth regimes. Our data show that the most rapid growth conditions of those chosen are not the optimal production conditions. Furthermore, the growth phase at which the cells are harvested is critical: we show that it is crucial to grow cells under tightly-controlled conditions and to harvest them prior to glucose exhaustion, just before the diauxic shift. The differences in membrane protein yields that we observe under different culture conditions are not reflected in corresponding changes in the mRNA levels of FPS1, but can instead be related to the differential expression of genes involved in membrane protein secretion and yeast cellular physiology. Copyright © 2005 The Protein Society.
Abstract:
Fire Service personnel face risk on a daily basis, frequently working in extremely hazardous conditions—and the severity of the danger faced can fluctuate rapidly. The Fire Service has therefore become extremely experienced at managing dynamic risks. The aim of this article is to review techniques used in the UK fire service to attenuate the effects of risk and to discuss these with respect to organisations which experience dynamic risk in other fields—even if in less dramatic conditions.
Abstract:
Modern distributed control systems comprise a set of processors interconnected by a suitable communication network. For use in real-time control environments, such systems must be deterministic and generate specified responses within critical timing constraints. They should also be sufficiently robust to survive predictable events such as communication or processor faults. This thesis considers the problem of coordinating and synchronizing a distributed real-time control system under normal and abnormal conditions. Distributed control systems need to periodically coordinate the actions of several autonomous sites, and the type of coordination required is often the all-or-nothing property of an atomic action. Atomic commit protocols have been used to achieve this atomicity in distributed database systems, which are not subject to deadlines. This thesis addresses the problem of applying time constraints to atomic commit protocols so that decisions can be made within a deadline, and proposes a modified protocol suitable for real-time applications. The thesis also addresses the problem of ensuring that atomicity is provided even if processor or communication failures occur. Previous work has considered the design of atomic commit protocols for use in non-time-critical distributed database systems; in a distributed real-time control system, however, a fault must not allow stringent timing constraints to be violated. This thesis proposes commit protocols, using synchronous communications, which can be made resilient to a single processor or communication failure and still satisfy deadlines. Previous formal models used to design commit protocols have had adequate state coverability but have omitted timing properties; they also assumed that sites communicated asynchronously and omitted the communications from the model. Timed Petri nets are used in this thesis to specify and design the proposed protocols, which are analysed for consistency and timeliness. The communication system is also modelled within the Petri net specifications so that communication failures can be included in the analysis. Analysis of the timed Petri net and the associated reachability tree is used to show that the proposed protocols always terminate consistently and satisfy timing constraints. Finally, the applications of this work are described. Two types of application are considered: real-time databases and real-time control systems. It is shown that it may be advantageous to use synchronous communications in distributed database systems, especially if predictable response times are required. Emphasis is given to the application of the developed commit protocols to real-time control systems; using the same analysis techniques as were used for the design of the protocols, it can be shown that the overall system performs as expected, both functionally and temporally.
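As a concrete illustration of deadline-constrained atomicity, the sketch below shows a coordinator that aborts (the safe, consistent outcome) whenever the voting phase would overrun its deadline. It is a toy single-process model under assumed names (Participant, timed_commit), not the thesis's protocol or its Petri-net specification.

```python
# Toy sketch of a deadline-constrained atomic commit (coordinator side).
# Illustrative only: names and structure are assumptions, not the thesis's protocol.
import time

class Participant:
    """Stand-in for a remote site; delay models communication + processing time."""
    def __init__(self, name, vote_yes=True, delay=0.01):
        self.name, self.vote_yes, self.delay = name, vote_yes, delay

    def vote(self):
        time.sleep(self.delay)
        return self.vote_yes

def timed_commit(participants, deadline_s):
    start = time.monotonic()
    # Phase 1: collect votes, but never allow the decision to miss its deadline
    for p in participants:
        if time.monotonic() - start > deadline_s:
            return "ABORT"   # deadline would be violated: fail safe and consistent
        if not p.vote():
            return "ABORT"   # any NO vote aborts the atomic action
    # Phase 2: every vote arrived in time and was YES
    return "COMMIT"

print(timed_commit([Participant("A"), Participant("B")], deadline_s=0.1))
```

The key property illustrated is that both outcomes, commit and abort, leave all sites consistent; the deadline only ever forces the conservative choice.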
Abstract:
National meteorological offices are largely concerned with synoptic-scale forecasting, where weather predictions are produced for a whole country 24 hours ahead. In practice, many local organisations (such as emergency services, construction industries, forestry, farming, and sports) require only local, short-term, bespoke weather predictions and warnings. This thesis shows that these less demanding requirements do not need exceptional computing power and can be met by a modern desk-top system which monitors site-specific ground conditions (such as temperature, pressure, wind speed and direction) augmented with above-ground information from satellite images to produce `nowcasts'. The emphasis in this thesis is on the design of such a real-time system for nowcasting. Local site-specific conditions are monitored using a custom-built, stand-alone, Motorola 6809-based sub-system. Above-ground information is received from the METEOSAT 4 geostationary satellite using a sub-system based on commercially available equipment. The information is ephemeral and must be captured in real time; the nowcasting system handles the data as a transparent task using the limited capabilities of the PC system. The ground data produce a time series of measurements at a specific location, representing the past-to-present atmospheric conditions of the site, from which much information can be extracted. The novel approach adopted in this thesis is to construct stochastic models based on the AutoRegressive Integrated Moving Average (ARIMA) technique. The satellite images contain features (such as cloud formations) which evolve dynamically and may be subject to movement, growth, distortion, bifurcation, superposition, or elimination between images. Extracting a weather feature, following its motion and predicting its future evolution involves algorithms for normalisation, partitioning, filtering, image enhancement, and correlation of multi-dimensional signals in different domains. To limit the processing requirements, the analysis concentrates on an `area of interest'; only a small fraction of the total image then needs to be processed, leading to a major saving in time. The thesis also proposes an extension of an existing manual cloud classification technique to the automatic classification of a cloud feature over the `area of interest' for nowcasting.
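A minimal sketch of the ARIMA idea on a ground-data series is shown below, using the statsmodels library on synthetic temperatures; the series and the (1, 1, 1) model order are assumptions for illustration, not values from the thesis.

```python
# Minimal ARIMA nowcasting sketch on a synthetic site-temperature series.
# Series and model order are illustrative assumptions, not from the thesis.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(1)
hours = np.arange(96)
temps = 15.0 + 0.05 * hours + rng.normal(0.0, 0.3, hours.size)  # synthetic data

fit = ARIMA(temps, order=(1, 1, 1)).fit()  # AR(1), first difference, MA(1)
print(fit.forecast(steps=6))               # 'nowcast' of the next six readings
```

In the thesis's setting, a model of this kind would be re-estimated continuously as new ground measurements arrive, giving short-horizon predictions that complement the satellite-image analysis.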
Abstract:
SINNMR (Sonically Induced Narrowing of the Nuclear Magnetic Resonance spectra of solids) is a novel technique being developed to enable the routine study of solids by nuclear magnetic resonance spectroscopy. SINNMR aims to narrow the broad resonances characteristic of solid-state NMR by inducing rapid incoherent motion of solid particles suspended in a support medium, using high-frequency ultrasound in the range 2-10 MHz. The widths of the normal broad resonances from solids are due to incomplete averaging of several components of the total spin Hamiltonian, caused by the restrictions placed on molecular motion within a solid. At present, Magic Angle Spinning (MAS) NMR is the classical solid-state technique used to reduce line broadening, but this has associated problems, not least of which is the appearance of many spinning sidebands, which confuse the spectra. It is hoped that SINNMR will offer a simple alternative, particularly as it does not produce spinning sidebands. The fundamental question of whether the use of ultrasound within a cryo-magnet will cause quenching has been investigated with success: even under the most extreme conditions of power, frequency and irradiation time, the magnet does not quench. The objective of this work is to design and construct a SINNMR probe for use in a superconducting cryo-magnet NMR spectrometer. A cell for such a probe has been constructed and incorporated into an adapted high-resolution broadband probe. The cell has been shown to be capable of causing cavitation at up to 10 MHz, by running a series of ultrasonic reactions within it and observing the reaction products. It was found that the ultrasound heated the sample to unacceptable temperatures, which necessitated the incorporation of temperature stabilisation devices. Work has been performed on the narrowing of the solid-state 23Na spectrum of tri-sodium phosphate using high-frequency ultrasound, and on the signal enhancement and T1 reduction of a liquid mixture and a pure compound using ultrasound. Some preliminary bench experiments have been completed on a novel ultrasonic device designed to help minimise sample heating; the concept involves passing the ultrasound through a temperature-stabilised, liquid-filled funnel terminated by a drum skin that transmits the ultrasound into the sample. Bench experiments have shown that the acoustic attenuation of this device is low and that cavitation in the liquid beyond it is still possible.
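The narrowing mechanism invoked here can be summarised by the standard motional-narrowing estimate from NMR theory (general background, not a formula quoted from the thesis): in the fast-motion limit the residual linewidth is approximately

\[
\delta\omega \approx \langle \Delta\omega^2 \rangle \, \tau_c,
\]

where \(\langle \Delta\omega^2 \rangle\) is the rigid-lattice second moment of the resonance and \(\tau_c\) the correlation time of the particle motion. The faster the ultrasonically induced incoherent motion (the smaller \(\tau_c\)), the narrower the observed line, which is the effect SINNMR seeks to exploit without the sidebands introduced by coherent sample spinning.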
Abstract:
The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high-efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial-scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high-efficiency cyclones for char removal, and a disk-and-doughnut quench column combined with a wet-walled electrostatic precipitator, mounted directly on top, for liquids collection. To aid design and scale-up calculations, detailed mathematical modelling of the reaction system was undertaken, enabling sizes, efficiencies and operating conditions to be determined. A modular approach was taken because of the iterative nature of some of the design methodologies, with the output of one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, the specification of the reactor capacity, cyclone design, quench column design, and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525°C, with a relative velocity of 12.1 m/s between the heated surface and the reacting biomass particle. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply; the reactor is capable of far higher throughput, but this would require a new feeder and drive motor. Modelling showed that the reactor is capable of a throughput of approximately 30 kg/hr. This should be considered in future work, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted to keep the vapour residence time in the electrostatic precipitator above one second; operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned because of equipment failure. This does not detract from the efficiency of the reactor and product collection system design. The maximum total liquid yield was 64.9% on a dry-wood-feed basis. It is considered that the liquid yield would have been higher had there been more development time to overcome certain operational difficulties, and had longer runs been attempted to offset the product losses that occur when collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient: modelling determined a liquid collection efficiency above 99% on a mass basis, and this was validated by a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit, which enabled mass measurement of the condensable product leaving the unit and confirmed a collection efficiency in excess of 99% on a mass basis.
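To see how the measured ablation rate bounds reactor throughput, a back-of-the-envelope estimate can be made. In the sketch below only the 0.63 mm/s rate comes from the abstract; the contact area and wood density are assumed purely for illustration.

```python
# Back-of-the-envelope link between ablation rate and reactor throughput.
# Only the ablation rate is from the abstract; area and density are assumed.
ablation_rate = 0.63e-3   # m/s, pine at 525 degC (from the abstract)
contact_area = 4.0e-3     # m^2, assumed total particle/hot-surface contact area
wood_density = 500.0      # kg/m^3, typical dry pine (assumed)

mass_rate = ablation_rate * contact_area * wood_density  # kg/s ablated
print(f"throughput ~ {mass_rate * 3600:.1f} kg/hr")      # ~4.5 kg/hr here
```

Scaling the effective contact area (more blades, a larger rotor) is then the lever that lifts the modelled capacity towards the 30 kg/hr figure quoted above.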
Abstract:
The objective of this study has been to enable a greater understanding of the biomass gasification process through the development and use of process and economic models. A new theoretical equilibrium model of gasification is described using the operating condition called the adiabatic carbon boundary. This represents an ideal gasifier working at the point where the carbon in the feedstock is completely gasified. The model can be used as a `target' against which the results of real gasifiers can be compared, but it does not simulate the results of real gasifiers. A second model has been developed which uses a stagewise approach in order to model fluid bed gasification, and its results have indicated that pyrolysis and the reactions of pyrolysis products play an important part in fluid bed gasifiers. Both models have been used in sensitivity analyses: the biomass moisture content and gasifying agent composition were found to have the largest effects on performance, whilst pressure and heat loss had lesser effects. Correlations have been produced to estimate the total installed capital cost of gasification systems and have been used in an economic model of gasification. This has been used in a sensitivity analysis to determine the factors which most affect the profitability of gasification. The most important influences on gasifier profitability have been found to be feedstock cost, product selling price and throughput. Given the economic conditions of late 1985, refuse gasification for the production of producer gas was found to be viable at throughputs of about 2.5 tonnes/h dry basis and above, in the metropolitan counties of the United Kingdom. At this throughput and above, the largest element of product gas cost is the feedstock cost, the cost element which is most variable.
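The shape of the economic result can be illustrated with a toy unit-cost model: fixed charges are spread over throughput, so above some scale the feedstock term dominates the product gas cost. All numbers below are placeholders, not the study's 1985 figures.

```python
# Toy product-gas cost model: fixed charges spread over throughput.
# All values are illustrative placeholders, not the study's 1985 data.
def gas_cost_per_gj(feedstock_cost_per_t, throughput_tph,
                    fixed_charges_per_h=55.0, gas_yield_gj_per_t=2.0):
    """Unit gas cost: feedstock plus per-tonne share of capital/operating charges."""
    fixed_per_tonne = fixed_charges_per_h / throughput_tph
    return (feedstock_cost_per_t + fixed_per_tonne) / gas_yield_gj_per_t

for tph in (0.5, 1.0, 2.5, 5.0):
    cost = gas_cost_per_gj(feedstock_cost_per_t=20.0, throughput_tph=tph)
    print(f"{tph:>4} t/h -> {cost:6.2f} per GJ")
```

At small throughputs the fixed-charge term dominates; beyond roughly the break-even scale the feedstock cost becomes the largest element, mirroring the study's finding.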
Abstract:
This work is concerned with the assessment of a newer version of the spout-fluid bed, in which the gas is supplied from a common plenum and the distributor controls the operational phenomenon. The main body of the work therefore deals with the effect of distributor design on the mixing and segregation of solids in a spout-fluid bed. The effects of distributor design in the conventional fluidised bed, and of variation of the gas inlet diameter in a spouted bed, were also briefly investigated for comparison. Large particles were selected for study because they are becoming increasingly important in industrial fluidised beds but have not been thoroughly investigated. The mean particle diameters of the fractions ranged from 550 to 2400 μm, and their specific gravities from 0.97 to 2.45. Only work carried out with binary systems is reported here. The effects of air velocity, particle properties, bed height, the relative amounts of jetsam and flotsam, and initial conditions on the steady-state concentration profiles were assessed with selected distributors. The work is divided into three sections: Sections I and II deal with the fluidised bed and spouted bed systems, while Section III covers the development of the spout-fluid bed and its behaviour with reference to distributor design, showing how the benefits of both spouting and fluidising phenomena can be exploited. In the fluidisation zone, better mixing is achieved by distributors which produce a large initial bubble diameter. Some common features exist between the behaviour of unidensity jetsam-rich systems and different-density flotsam-rich systems. The shape factor does not appear to have an effect as long as it is restricted to the minor component; in the case of the major component, however, particle shape significantly affects the final results. Studies of aspect ratio showed that there is a maximum (1.5) above which slugging occurs and the effect of distributor design is nullified. A mixing number was developed for unidensity spherical-particle systems, which proved extremely useful in quantifying the variation in mixing and segregation with changes in distributor design.
Abstract:
It is well established that hydrodynamic journal bearings are responsible for self-excited vibrations and have the effect of lowering the critical speeds of rotor systems. The forces within the oil-film wedge, generated by the vibrating journal, may be represented by displacement and velocity coefficients, thus allowing the dynamical behaviour of the rotor to be analysed both for stability purposes and for anticipating the response to unbalance. However, information describing these coefficients is sparse, misleading, and very often not applicable to industrial-type bearings. Results of a combined analytical and experimental investigation into the hydrodynamic oil-film coefficients operating in the laminar region are therefore presented, the analysis being applied to a 120-degree partial journal bearing having a 5.0 in. diameter journal and an L/D ratio of 1.0. The theoretical analysis shows that for this popular type of bearing, the eight linearised coefficients do not accurately describe the behaviour of the vibrating journal based on the theory of small perturbations, because they are masked by the presence of nonlinearity. A method is developed using the second-order terms of the Taylor expansion, whereby design charts are provided which predict the twenty-eight force coefficients for an aligned journal and for varying amounts of journal misalignment. The resulting non-linear equations of motion are solved using a modified Newton-Raphson method to obtain the whirl trajectories, thus providing a physical appreciation of the bearing characteristics under dynamically loaded conditions.
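The count of twenty-eight coefficients can be seen from a second-order Taylor expansion of each oil-film force component in the four perturbation variables (this reconstruction of the counting is ours, for orientation, not quoted from the thesis):

\[
F_x \approx F_{x0} + \sum_{i=1}^{4} \frac{\partial F_x}{\partial q_i}\,\Delta q_i + \frac{1}{2} \sum_{i=1}^{4} \sum_{j=i}^{4} \frac{\partial^2 F_x}{\partial q_i \, \partial q_j}\,\Delta q_i\,\Delta q_j, \qquad \mathbf{q} = (x,\, y,\, \dot x,\, \dot y),
\]

which gives four first-order and ten distinct second-order coefficients per force component, i.e. fourteen for each of \(F_x\) and \(F_y\) and twenty-eight in total, compared with the eight stiffness and damping coefficients of the conventional linearised description.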
Abstract:
The work reported in this thesis is concerned with improving and expanding the assistance given to the designer by the computer in the design of cold-formed sections. The main contributions have been in four areas, which have consequently led to a fifth: the development of a methodology to optimise designs. This methodology can be considered an `Expert Design System' for cold-formed sections. A different method of determining the section properties of profiles was introduced, using the properties of line and circular elements, and graphics were introduced to show the outline of the profile on screen. The analysis of beam loading has been expanded to loading conditions in which the number of supports, point loads, and uniformly distributed loads can be specified by the designer; the profile can then be checked for suitability for the specified type of loading. Artificial Intelligence concepts have been introduced to give the designer decision support from the computer, in combination with the computer-aided design facilities. The more complex decision support was implemented through production rules, with all the support based on the British Standards. A method has been introduced by which the appropriate use of stiffeners can be determined, and the stiffeners consequently designed by the designer. Finally, a methodology was developed by which the designer is given assistance from the computer without being constrained by it: the computer advises the designer of possible methods of improving the design, but allows the designer to reject that advice and analyse the profile accordingly. The methodology enables optimisation to be achieved by the designer designing a variety of profiles for a particular loading and determining which one is best suited.
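The line-element idea for section properties can be sketched briefly: idealise the thin-walled profile as straight segments of uniform thickness and integrate along each line. The channel dimensions and thickness below are assumed for illustration, and the circular corner elements are omitted for brevity.

```python
# Minimal sketch of the line-element method for thin-walled section properties.
# Profile geometry and thickness are illustrative assumptions.

def line_props(segments, t):
    """Area, centroid height and second moment I_x of a thin-walled profile
    idealised as straight line elements of uniform thickness t."""
    A = first_moment = 0.0
    for (x1, y1), (x2, y2) in segments:
        L = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        A += t * L
        first_moment += t * L * (y1 + y2) / 2        # element first moment about y=0
    ybar = first_moment / A
    Ix = 0.0
    for (x1, y1), (x2, y2) in segments:
        L = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
        a, b = y1 - ybar, y2 - ybar
        Ix += t * L * (a * a + a * b + b * b) / 3    # integral of y^2 along the line
    return A, ybar, Ix

# Plain channel: two 40 mm flanges and a 100 mm web, t = 2 mm (assumed dimensions)
channel = [((40, 0), (0, 0)), ((0, 0), (0, 100)), ((0, 100), (40, 100))]
print(line_props(channel, t=2.0))   # area (mm^2), centroid (mm), I_x (mm^4)
```

Each profile shape reduces to a list of segments, which is what makes the method convenient inside an interactive design program of the kind described.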
Abstract:
A detailed literature survey confirmed cold roll-forming to be a complex and little-understood process. In spite of its growing value, the process remains largely un-automated, with few principles used in the set-up of the rolling mill. This work concentrates on experimental investigations of operating conditions in order to gain a scientific understanding of the process. The operating conditions are: inter-pass distance, roll load, roll speed, and horizontal roll alignment. Fifty tests were carried out under varied operating conditions, measuring section quality and longitudinal straining to give a picture of bending. A channel section was chosen for its simplicity and compatibility with previous work. Quality was measured in terms of vertical bow, twist and cross-sectional geometric accuracy, and a complete method of classifying quality has been devised. The longitudinal strain profile was recorded by strain gauges attached to the strip surface at five locations. Parameter control is shown to be important in allowing consistency in section quality. At present, rolling mills are constructed with large tolerances on operating conditions; by reducing the variability in parameters, section consistency is maintained and mill down-time is reduced. Roll load, alignment and differential roll speed are all shown to affect quality, and can be used to control it. Set-up time is reduced by improving the design of the mill so that parameter values can be measured and set without the need for judgement by eye. Parameter values can be guided by models of the process, although elements of experience remain unavoidable. Despite increased parameter control, section quality is variable, if only because of variability in strip material properties; parameters must therefore be changed during rolling, ideally by closed-loop feedback control. Future work lies in overcoming the problems connected with this control.