19 results for DESIGN BASIS ACCIDENTS
in the Aston University Research Archive
Abstract:
Distributed digital control systems provide alternatives to conventional, centralised digital control systems. Typically, a modern distributed control system will comprise a multi-processor or network of processors, a communications network, an associated set of sensors and actuators, and the systems and applications software. This thesis addresses the problem of how to design robust decentralised control systems, such as those used to control event-driven, real-time processes in time-critical environments. Emphasis is placed on studying the dynamical behaviour of a system and identifying ways of partitioning the system so that it may be controlled in a distributed manner. A structural partitioning technique is adopted which makes use of natural physical sub-processes in the system, which are then mapped into the software processes to control the system. However, communications are required between the processes because of the disjoint nature of the distributed (i.e. partitioned) state of the physical system. The structural partitioning technique, and recent developments in the theory of potential controllability and observability of a system, are the basis for the design of controllers. In particular, the method is used to derive a decentralised estimate of the state vector for a continuous-time system. The work is also extended to derive a distributed estimate for a discrete-time system. Emphasis is also given to the role of communications in the distributed control of processes and to the partitioning technique necessary to design distributed and decentralised systems with resilient structures. A method is presented for the systematic identification of necessary communications for distributed control. It is also shown that the structural partitions can be used directly in the design of software fault-tolerant concurrent controllers. In particular, the structural partition can be used to identify the boundary of the conversation which can be used to protect a specific part of the system. In addition, for certain classes of system, the partitions can be used to identify processes which may be dynamically reconfigured in the event of a fault. These methods should be of use in the design of robust distributed systems.
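As a hedged illustration of the kind of decentralised estimation described, the sketch below simulates a linear discrete-time system split into two structural partitions, each with its own local observer that receives the neighbouring partition's state estimate over a (simulated) communications link. The system matrices, measurement structure and observer gains are illustrative assumptions, not values taken from the thesis.

```python
import numpy as np

# Illustrative 4-state discrete-time plant, partitioned into two 2-state subsystems.
A = np.array([[0.90, 0.10, 0.00, 0.05],
              [0.00, 0.80, 0.10, 0.00],
              [0.05, 0.00, 0.85, 0.10],
              [0.00, 0.05, 0.00, 0.90]])
A11, A12 = A[:2, :2], A[:2, 2:]   # subsystem 1 and its coupling to subsystem 2
A21, A22 = A[2:, :2], A[2:, 2:]   # subsystem 2 and its coupling to subsystem 1
C1 = np.array([[1.0, 0.0]])       # each subsystem measures its first local state
C2 = np.array([[1.0, 0.0]])
L1 = np.array([[0.5], [0.1]])     # local observer gains (chosen so A11-L1*C1, A22-L2*C2 are stable)
L2 = np.array([[0.5], [0.1]])

rng = np.random.default_rng(0)
x = np.array([1.0, -0.5, 0.8, 0.3])          # true state
xh1, xh2 = np.zeros(2), np.zeros(2)          # local (decentralised) estimates

for k in range(40):
    y1 = C1 @ x[:2]
    y2 = C2 @ x[2:]
    # Each local observer uses its own measurement plus the *communicated*
    # estimate of the neighbouring partition to account for the coupling terms.
    xh1_new = A11 @ xh1 + A12 @ xh2 + (L1 @ (y1 - C1 @ xh1)).ravel()
    xh2_new = A22 @ xh2 + A21 @ xh1 + (L2 @ (y2 - C2 @ xh2)).ravel()
    x = A @ x + 0.01 * rng.standard_normal(4)  # small process noise
    xh1, xh2 = xh1_new, xh2_new
    if k % 10 == 0:
        err = np.linalg.norm(x - np.concatenate([xh1, xh2]))
        print(f"step {k:2d}  estimation error = {err:.4f}")
```

The communicated terms A12·x̂2 and A21·x̂1 are the inter-partition exchanges that a systematic identification of necessary communications would flag for this partitioning.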
Abstract:
The objective of this work was to design, construct and commission a new ablative pyrolysis reactor and a high efficiency product collection system. The reactor was to have a nominal throughput of 10 kg/hr of dry biomass and be inherently scalable up to an industrial scale application of 10 tonnes/hr. The whole process consists of a bladed ablative pyrolysis reactor, two high efficiency cyclones for char removal and a disk and doughnut quench column combined with a wet walled electrostatic precipitator, which is directly mounted on top, for liquids collection. In order to aid design and scale-up calculations, detailed mathematical modelling was undertaken of the reaction system, enabling sizes, efficiencies and operating conditions to be determined. Specifically, a modular approach was taken due to the iterative nature of some of the design methodologies, with the output from one module being the input to the next. Separate modules were developed for the determination of the biomass ablation rate, specification of the reactor capacity, cyclone design, quench column design and electrostatic precipitator design. These models enabled a rigorous design protocol to be developed, capable of specifying the required reactor and product collection system size for specified biomass throughputs, operating conditions and collection efficiencies. The reactor proved capable of generating an ablation rate of 0.63 mm/s for pine wood at a temperature of 525 °C with a relative velocity between the heated surface and reacting biomass particle of 12.1 m/s. The reactor achieved a maximum throughput of 2.3 kg/hr, which was the maximum the biomass feeder could supply. The reactor is capable of being operated at a far higher throughput, but this would require a new feeder and drive motor to be purchased. Modelling showed that the reactor is capable of achieving a reactor throughput of approximately 30 kg/hr. This is an area that should be considered in the future, as the reactor is currently operating well below its theoretical maximum. Calculations show that the current product collection system could operate efficiently up to a maximum feed rate of 10 kg/hr, provided the inert gas supply was adjusted accordingly to keep the vapour residence time in the electrostatic precipitator above one second. Operation above 10 kg/hr would require some modifications to the product collection system. Eight experimental runs were documented and considered successful; more were attempted but had to be abandoned due to equipment failure. This does not detract from the fact that the reactor and product collection system design was extremely efficient. The maximum total liquid yield was 64.9% on a dry wood fed basis. It is considered that the liquid yield would have been higher had there been sufficient development time to overcome certain operational difficulties, and if longer operating runs had been attempted to offset product losses occurring due to the difficulties in collecting all available product from a large-scale collection unit. The liquids collection system was highly efficient, and modelling determined a liquid collection efficiency of above 99% on a mass basis. This was validated by a dry ice/acetone condenser and a cotton wool filter downstream of the collection unit, which enabled mass measurements of the condensable product exiting the collection unit and confirmed that the collection efficiency was in excess of 99% on a mass basis.
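As a rough, hedged illustration of the kind of capacity calculation the design modules perform, the snippet below estimates reactor throughput from an ablation rate of the order reported (0.63 mm/s); the wood density and heated-surface contact area are illustrative assumptions, not the thesis's design figures.

```python
# Back-of-envelope ablative capacity estimate (illustrative assumptions marked below).
ablation_rate = 0.63e-3      # m/s, surface regression rate reported for pine at 525 C
wood_density = 500.0         # kg/m^3, assumed bulk density of pine (illustrative)
contact_area = 0.02          # m^2, assumed particle/heated-surface contact area (illustrative)

mass_rate = ablation_rate * wood_density * contact_area   # kg/s of wood converted
throughput_kg_hr = mass_rate * 3600.0
print(f"Estimated throughput: {throughput_kg_hr:.1f} kg/hr")

# Inverting the estimate: contact area needed for a 30 kg/hr target throughput.
target = 30.0 / 3600.0       # kg/s
area_needed = target / (ablation_rate * wood_density)
print(f"Contact area for 30 kg/hr: {area_needed*1e4:.0f} cm^2")
```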
Abstract:
The objective of this study has been to enable a greater understanding of the biomass gasification process through the development and use of process and economic models. A new theoretical equilibrium model of gasification is described using the operating condition called the adiabatic carbon boundary. This represents an ideal gasifier working at the point where the carbon in the feedstock is completely gasified. The model can be used as a 'target' against which the results of real gasifiers can be compared, but it does not simulate the results of real gasifiers. A second model has been developed which uses a stagewise approach in order to model fluid bed gasification, and its results have indicated that pyrolysis and the reactions of pyrolysis products play an important part in fluid bed gasifiers. Both models have been used in sensitivity analyses: the biomass moisture content and gasifying agent composition were found to have the largest effects on performance, whilst pressure and heat loss had lesser effects. Correlations have been produced to estimate the total installed capital cost of gasification systems and have been used in an economic model of gasification. This has been used in a sensitivity analysis to determine the factors which most affect the profitability of gasification. The most important influences on gasifier profitability have been found to be feedstock cost, product selling price and throughput. Given the economic conditions of late 1985, refuse gasification for the production of producer gas was found to be viable at throughputs of about 2.5 tonnes/h dry basis and above, in the metropolitan counties of the United Kingdom. At this throughput and above, the largest element of product gas cost is the feedstock cost, the cost element which is most variable.
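The economic side of the study lends itself to a simple worked sketch. The function below computes a product gas cost from throughput, feedstock cost and a power-law installed-capital correlation, and sweeps the two variables the abstract identifies as most influential; all coefficients (reference capital cost, scaling exponent, gas energy yield, capital recovery factor) are illustrative assumptions, not the thesis's 1985 correlations.

```python
def gas_cost_per_GJ(throughput_tph, feed_cost_per_t,
                    cap_ref=2.0e6, ref_tph=2.5, exponent=0.7,
                    crf=0.13, hours=8000.0, gj_per_tonne=10.0,
                    other_opex_frac=0.05):
    """Illustrative product-gas cost model (all coefficients are assumptions)."""
    capital = cap_ref * (throughput_tph / ref_tph) ** exponent   # power-law scale-up
    annual_capital = crf * capital                               # annualised capital charge
    annual_feed = feed_cost_per_t * throughput_tph * hours       # feedstock bill
    annual_opex = other_opex_frac * capital                      # other operating costs
    annual_gj = throughput_tph * hours * gj_per_tonne            # gas energy produced
    return (annual_capital + annual_feed + annual_opex) / annual_gj

# Sensitivity sweep: feedstock cost and throughput dominate, as the abstract reports.
for tph in (1.0, 2.5, 5.0):
    for feed in (0.0, 10.0, 20.0):           # currency units per tonne (illustrative)
        print(f"{tph:4.1f} t/h, feed {feed:5.1f}/t -> "
              f"{gas_cost_per_GJ(tph, feed):5.2f} per GJ")
```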
Abstract:
The concept of shallow fluidized bed boilers is defined and a preliminary working design for a gas-fired package boiler has been produced. Those areas of the design requiring further study have been specified. Experimental investigations concerning these areas have been carried out. A two-dimensional, conducting paper analog has been developed for the specific purpose of evaluating sheet fins. The analog has been generalised and is presented as a simple means of simulating the general, two-dimensional Helmholtz equation. By recording the transient response of spherical, calorimetric probes when plunged into heated air-fluidized beds, heat transfer coefficients have been measured at bed temperatures up to 1100°C. A correlation fitting all the data to within ±10% has been obtained. A model of heat transfer to surfaces immersed in high temperature beds has been proposed. The model solutions are, however, only in qualitative agreement with the experimental data. A simple experimental investigation has revealed that the effective radial thermal conductivities of shallow fluidized beds are an order of magnitude lower than the axial conductivities. These must, consequently, be taken into account when considering heat transfer to surfaces immersed within fluidized beds. Preliminary work on pre-mixed gas combustion and some further qualitative experiments have been used as the basis for discussing the feasibility of combusting heavy fuel oils within shallow beds. The use of binary beds, within which the fuel could be both gasified and subsequently burnt, is proposed. Finally, the consequences of the experimental studies for the initial design are considered, and suggestions for further work are made.
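The conducting-paper analog is described as simulating the general two-dimensional Helmholtz equation, which for a convecting sheet fin takes the form ∇²θ = m²θ. A minimal finite-difference counterpart of that analog is sketched below; the fin dimensions and the fin parameter m are illustrative assumptions, not the thesis's geometry.

```python
import numpy as np

# Solve d2T/dx2 + d2T/dy2 - m^2 * T = 0 on a rectangular sheet fin:
# T = 1 (normalised excess temperature) along the fin base, insulated on the other edges.
m = 30.0                 # fin parameter, 1/m (illustrative)
Lx, Ly = 0.05, 0.03      # fin dimensions, m (illustrative)
nx, ny = 51, 31
h = Lx / (nx - 1)        # uniform grid spacing (Ly chosen to match)

T = np.zeros((ny, nx))
T[:, 0] = 1.0            # fin base held at the root excess temperature

for _ in range(20000):   # Jacobi iteration of the Helmholtz difference equation
    Tp = np.pad(T, 1, mode='edge')                    # 'edge' padding = insulated edges
    nb = Tp[1:-1, :-2] + Tp[1:-1, 2:] + Tp[:-2, 1:-1] + Tp[2:, 1:-1]
    T_new = nb / (4.0 + (m * h) ** 2)
    T_new[:, 0] = 1.0                                 # re-impose the base condition
    if np.max(np.abs(T_new - T)) < 1e-7:
        T = T_new
        break
    T = T_new

# Fin efficiency = heat actually convected / heat if the whole fin sat at base temperature,
# which for this normalisation is just the average excess temperature over the sheet.
print(f"Fin efficiency estimate: {T.mean():.3f}")
```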
Abstract:
Changes in modern structural design have created a demand for products which are light but possess high strength. The objective is a reduction in fuel consumption and weight of materials to satisfy both economic and environmental criteria. Cold roll forming has the potential to fulfil this requirement. The bending process is controlled by the shape of the profile machined on the periphery of the rolls. A CNC lathe can machine complicated profiles to a high standard of precision, but the expertise of a numerical control programmer is required. A computer program was developed during this project, using the expert system concept, to calculate tool paths and consequently to expedite the procurement of the machine control tapes whilst removing the need for a skilled programmer. Codifying the expertise of a human and encapsulating that knowledge within a computer memory removes the dependency on highly trained people, whose services can be costly, inconsistent and unreliable. A successful cold roll forming operation, where the product is geometrically correct and free from visual defects, is not easy to attain. The geometry of the sheet after travelling through the rolling mill depends on the residual strains generated by the elastic-plastic deformation. Accurate evaluation of the residual strains can provide the basis for predicting the geometry of the section. A study of geometric and material non-linearity, yield criteria, material hardening and stress-strain relationships was undertaken in this research project. The finite element method was chosen to provide a mathematical model of the bending process and, to ensure an efficient manipulation of the large stiffness matrices, the frontal solution was applied. A series of experimental investigations provided data to compare with corresponding values obtained from the theoretical modelling. A computer simulation, capable of predicting that a design will be satisfactory prior to the manufacture of the rolls, would allow effort to be concentrated on devising an optimum design where costs are minimised.
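To make the residual-strain argument concrete, here is a hedged one-dimensional sketch: an elastic-perfectly-plastic strip bent to a given curvature, its bending moment computed, and the elastic springback subtracted to give the residual curvature on unloading. This is the textbook idealisation, not the elastic-plastic finite element model with the frontal solver that the thesis develops; material values are illustrative.

```python
# Elastic-perfectly-plastic springback estimate for a bent strip (per unit width).
E = 210e9          # Young's modulus, Pa (illustrative mild steel)
sigma_y = 300e6    # yield stress, Pa (illustrative)
t = 1.5e-3         # sheet thickness, m
R = 0.010          # forming radius at the neutral axis, m

kappa = 1.0 / R                        # applied curvature
c = sigma_y / (E * kappa)              # half-depth of the remaining elastic core
assert c < t / 2, "section has not yielded; springback is total"

# Bending moment per unit width for an elastic core of half-depth c:
M = sigma_y * (t**2 / 4.0 - c**2 / 3.0)
I = t**3 / 12.0                        # second moment of area per unit width
kappa_residual = kappa - M / (E * I)   # elastic springback removed on unloading

print(f"elastic core half-depth : {c*1e3:.3f} mm")
print(f"residual radius         : {1.0/kappa_residual*1e3:.2f} mm "
      f"(formed at {R*1e3:.1f} mm)")
print(f"retained surface strain : {kappa_residual*t/2*100:.2f} %")
```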
Abstract:
Product reliability and its environmental performance have become critical elements within a product's specification and design. To obtain a high level of confidence in the reliability of the design, it is customary to test the design under realistic conditions in a laboratory. The objective of the work is to examine the feasibility of designing mechanical test rigs which exhibit prescribed dynamical characteristics. The design is then attached to the rig and excitation is applied to the rig, which then transmits representative vibration levels into the product. The philosophical considerations made at the outset of the project are discussed, as they form the basis for the resulting design methodologies. An attempt is made to identify the parameters of a test rig directly from the spatial model derived during the system identification process. It is shown to be impossible to identify a feasible test rig design using this technique. A finite dimensional optimal design methodology is developed which identifies the parameters of a discrete spring/mass system which is dynamically similar to a point coordinate on a continuous structure. This design methodology is incorporated within another procedure which derives a structure comprising a continuous element and a discrete system. This methodology is used to obtain point coordinate similarity for two planes of motion, which is validated by experimental tests. A limitation of this approach is that it is impossible to achieve multi-coordinate similarity, due to an interaction between the discrete system and the continuous element at points away from the coordinate of interest. During the work the importance of the continuous element is highlighted and a design methodology is developed for continuous structures. The design methodology is based upon distributed parameter optimal design techniques and allows an initial poor design estimate to be moved in a feasible direction towards an acceptable design solution. Cumulative damage theory is used to provide a quantitative method of assessing the quality of dynamic similarity. It is shown that the combination of modal analysis techniques and cumulative damage theory provides a feasible design synthesis methodology for representative test rigs.
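A hedged sketch of the finite-dimensional part of that idea: take a target point receptance (here a two-mode modal sum standing in for a measurement on the continuous structure) and fit the parameters of a two-degree-of-freedom spring/mass chain so its driving-point receptance matches the target over a frequency band. The target modes, damping values and optimiser settings are illustrative assumptions, not the thesis's methodology.

```python
import numpy as np
from scipy.optimize import minimize

w = 2 * np.pi * np.linspace(5.0, 200.0, 150)          # rad/s frequency grid

# Target: two-mode point receptance standing in for the continuous structure.
wn = 2 * np.pi * np.array([40.0, 120.0])              # modal frequencies (illustrative)
phi2 = np.array([2.0e-3, 1.0e-3])                     # (mode shape)^2 / modal mass
eta = 0.02                                            # structural damping loss factor
H_target = sum(p / (wr**2 - w**2 + 1j * eta * wr**2) for p, wr in zip(phi2, wn))

def receptance(params, w):
    """Driving-point receptance at mass 1 of a grounded 2-DOF spring/mass chain."""
    m1, m2, k1, k2 = np.exp(params)                   # log-parameters keep values positive
    H = np.empty_like(w, dtype=complex)
    for i, wi in enumerate(w):
        K = np.array([[k1 + k2, -k2], [-k2, k2]]) * (1 + 1j * eta)
        M = np.array([[m1, 0.0], [0.0, m2]])
        H[i] = np.linalg.inv(K - wi**2 * M)[0, 0]
    return H

def cost(params):
    H = receptance(params, w)
    return np.sum(np.abs(np.log(np.abs(H)) - np.log(np.abs(H_target)))**2)

x0 = np.log([1.0, 1.0, 1.0e5, 1.0e5])                 # rough starting design
res = minimize(cost, x0, method="Nelder-Mead", options={"maxiter": 2000})
m1, m2, k1, k2 = np.exp(res.x)
print(f"fitted rig parameters: m1={m1:.3f} kg  m2={m2:.3f} kg  "
      f"k1={k1:.3e} N/m  k2={k2:.3e} N/m   residual={res.fun:.3e}")
```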
Abstract:
The pneumonia caused by Pneumocystis carinii is ultimately responsible for the death of many acquired immunodeficiency syndrome (AIDS) patients. Large doses of trimethoprim and pyrimethamine in combination with a sulphonamide and/or pentamidine suppress the infection but produce serious side-effects and seldom prevent recurrence after treatment withdrawal. However, the partial success of the aforementioned antifolates, and also of trimetrexate used alone, does suggest dihydrofolate reductase (DHFR) as a target for the development of antipneumocystis agents. From the DHFR inhibitory activities of 3'-substituted pyrimethamine analogues it was suggested that the 3'-(3'',3''-dimethyltriazen-1''-yl) substituent may be responsible for the greater activity against the P. carinii enzyme than the mammalian enzyme. Crystallographic and molecular modelling studies revealed considerable geometrical and electronic differences between the triazene and the chemically related formamidine functions that may account for the differences in DHFR inhibitory profiles. Structural and electronic parameters calculated for a series of 3'-(3'',3''-disubstitutedtriazen-1''-yl) pyrimethamine analogues did not correlate with the DHFR inhibitory activities. However, in vitro screening against P. carinii DHFR revealed that the 3''-hydroxyethyl-3''-benzyl analogue was the most active and selective. Models of the active sites of human and P. carinii DHFRs were constructed using DHFR sequence and structural homology data which had identified key residues involved in substrate and cofactor binding. Low energy conformations of the 3'',3''-dimethyl and 3''-hydroxyethyl-3''-benzyl analogues, determined from nuclear magnetic resonance studies and theoretical calculations, were docked by superimposing the diaminopyrimidine fragment onto a previously docked pyrimethamine analogue. Enzyme kinetic data supported the 3''-hydroxyethyl-3''-benzyl moiety being located in the NADPH binding groove. The 3''-benzyl substituent was able to locate to within 3 Å of a valine residue in the active site of P. carinii DHFR, thereby producing a hydrophobic contact. The equivalent residue in human DHFR is threonine, which is more hydrophilic and less likely to be involved in such a contact. This difference may account for the greater inhibitory activity this analogue has for P. carinii DHFR and provide a basis for future drug design. From an in vivo model of PCP in immunosuppressed rats it was established that the 3''-hydroxyethyl-3''-benzyl analogue was able to reduce the P. carinii burden more effectively with increasing doses, without causing any visible signs of toxicity. However, equivalent doses were not as effective as pentamidine, a current treatment of choice for Pneumocystis carinii pneumonia.
Abstract:
This thesis describes work done exploring the application of expert system techniques to the domain of designing durable concrete. The nature of concrete durability design is described and some problems from the domain are discussed. Some related work on expert systems in concrete durability is described. Various implementation languages are considered, PROLOG and OPS5, and rejected in favour of a shell, CRYSTAL3 (later CRYSTAL4). Criteria for useful expert system shells in the domain are discussed, and CRYSTAL4 is evaluated in the light of these criteria. Modules in various sub-domains (mix design, sulphate attack, steel corrosion and alkali-aggregate reaction) are developed and organised under a BLACKBOARD system (called DEX). Extensions to the CRYSTAL4 modules are considered for different knowledge representations. These include LOTUS123 spreadsheets implementing models incorporating some of the mathematical knowledge in the domain. Design databases are used to represent tabular design knowledge. Hypertext representations of the original building standards texts are proposed as a tool for providing a well-structured and extensive justification/help facility. A standardised approach to module development is proposed, using hypertext development as a structured basis for expert systems development. Some areas of deficient domain knowledge are highlighted, particularly in the use of data from mathematical models and in gaps and inconsistencies in the original knowledge source, the Digests.
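A minimal sketch of the blackboard organisation described (the module names follow the sub-domains listed above, but the rules themselves are invented placeholders, not the DEX knowledge bases):

```python
# Minimal blackboard-style control loop: each knowledge source inspects the shared
# blackboard and, if its trigger is satisfied, posts a conclusion back onto it.
blackboard = {
    "cement_content": 320,       # kg/m^3  (illustrative design inputs)
    "water_cement_ratio": 0.55,
    "sulphate_class": 2,
    "conclusions": [],
}

def post(bb, msg):
    """Post a conclusion once; duplicate postings are ignored."""
    if msg not in bb["conclusions"]:
        bb["conclusions"].append(msg)

def mix_design(bb):
    if bb["water_cement_ratio"] > 0.50:
        post(bb, "mix_design: reduce w/c ratio to 0.50")
        bb["water_cement_ratio"] = 0.50

def sulphate_attack(bb):
    if bb["sulphate_class"] >= 2 and bb["cement_content"] < 330:
        post(bb, "sulphate_attack: raise cement content to 330 kg/m^3")
        bb["cement_content"] = 330

def steel_corrosion(bb):
    if bb["water_cement_ratio"] <= 0.50:
        post(bb, "steel_corrosion: w/c ratio acceptable for reinforcement protection")

knowledge_sources = [mix_design, sulphate_attack, steel_corrosion]

# Simple control strategy: keep firing sources until a pass adds nothing new.
changed = True
while changed:
    before = len(blackboard["conclusions"])
    for ks in knowledge_sources:
        ks(blackboard)
    changed = len(blackboard["conclusions"]) > before

for line in blackboard["conclusions"]:
    print(line)
```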
Abstract:
This thesis is a theoretical study of the accuracy and usability of models that attempt to represent the environmental control system of buildings in order to improve environmental design. These models have evolved from crude representations of a building and its environment through to an accurate representation of the dynamic characteristics of the environmental stimuli on buildings. Each generation of models has had its own particular influence on built form. This thesis analyses the theory, structure and data of such models in terms of their accuracy of simulation and therefore their validity in influencing built form. The models are also analysed in terms of their compatibility with the design process and hence their ability to aid designers. The conclusions are that such models are unlikely to improve environmental performance since: (a) the models can only be applied to a limited number of building types; (b) they can only be applied to a restricted number of the characteristics of a design; (c) they can only be employed after many major environmental decisions have been made; (d) the data used in models is inadequate and unrepresentative; and (e) models do not account for occupant interaction in environmental control. It is argued that further improvements in the accuracy of simulation of environmental control will not significantly improve environmental design. This is based on the premise that strategic environmental decisions are made at the conceptual stages of design, whereas models influence the detailed stages of design. It is hypothesised that if models are to improve environmental design it must be through the analysis of building typologies, which provides a method of feedback between models and the conceptual stages of design. Field studies are presented to describe a method by which typologies can be analysed, and a theoretical framework is described which provides a basis for further research into the implications of the morphology of buildings on environmental design.
Abstract:
The thesis deals with the background, development and description of a mathematical stock control methodology for use within an oil and chemical blending company, where demand and replenishment lead-times are generally non-stationary. The stock control model proper relies on, as input, adaptive forecasts of demand determined for an economical forecast/replenishment period precalculated on an individual stock-item basis. The control procedure is principally that of the continuous review, reorder level type, where the reorder level and reorder quantity 'float', that is, each changes in accordance with changes in demand. Two versions of the Methodology are presented: a cost minimisation version and a service level version. Realising the importance of demand forecasts, four recognised variations of the Trigg and Leach adaptive forecasting routine are examined. A fifth variation, developed during this work, is proposed as part of the stock control methodology. The results of testing the cost minimisation version of the Methodology with historical data, by means of a computerised simulation, are presented together with a description of the simulation used. In addition, the performance of the Methodology compares favourably with a rule-of-thumb approach considered by the Company as an interim solution for reducing stock levels. The contribution of the work to the field of scientific stock control is felt to be significant for the following reasons: (1) the Methodology is designed specifically for use with non-stationary demand and for this reason alone appears to be unique; (2) the Methodology is unique in its approach, and the cost-minimisation version is shown to work successfully with the demand data presented; (3) the Methodology and the thesis as a whole fill an important gap between complex mathematical stock control theory and practical application. A brief description of a computerised order processing/stock monitoring system, designed and implemented as a prerequisite for the Methodology's practical operation, is presented as an appendix.
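A hedged sketch of the core of such a methodology: Trigg and Leach adaptive exponential smoothing (the smoothing constant set from the tracking signal) feeding a floating reorder level of the continuous-review kind described. The demand series, lead time and safety factor are illustrative; the thesis's own fifth variation and cost-minimisation rules are not reproduced here.

```python
import numpy as np

def trigg_leach(demand, gamma=0.2):
    """Adaptive exponential smoothing with the smoothing constant set from the
    Trigg & Leach tracking signal (|smoothed error / smoothed absolute error|)."""
    F = demand[0]                    # initial forecast
    E, M = 0.0, 1e-9                 # smoothed error and smoothed absolute error
    forecasts, mads = [], []
    for d in demand:
        forecasts.append(F)
        e = d - F
        E = gamma * e + (1 - gamma) * E
        M = gamma * abs(e) + (1 - gamma) * M
        alpha = min(abs(E / M), 1.0)          # adaptive smoothing constant
        F = F + alpha * e
        mads.append(M)
    return np.array(forecasts), np.array(mads)

# Non-stationary demand: a step change partway through the series (illustrative).
rng = np.random.default_rng(1)
demand = np.concatenate([rng.poisson(20, 30), rng.poisson(45, 30)]).astype(float)

forecasts, mads = trigg_leach(demand)

# Floating reorder level: expected lead-time demand plus a MAD-based safety margin.
lead_time, k = 2.0, 2.0                       # periods, safety factor (illustrative)
reorder_level = lead_time * forecasts + k * mads * np.sqrt(lead_time)
print(f"final forecast      : {forecasts[-1]:.1f} units/period")
print(f"final reorder level : {reorder_level[-1]:.1f} units")
```

Because the forecast and MAD are recomputed each period, the reorder level 'floats' with demand, which is the behaviour the abstract describes for non-stationary items.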
Abstract:
The present scarcity of operational knowledge-based systems (KBS) has been attributed, in part, to the inadequate consideration given to user interface design during development. From a human factors perspective the problem has stemmed from an overall lack of user-centred design principles. Consequently, the integration of human factors principles and techniques is seen as a necessary and important precursor to ensuring the implementation of KBS which are useful to, and usable by, the end-users for whom they are intended. Focussing upon KBS work taking place within commercial and industrial environments, this research set out to assess both the extent to which human factors support was presently being utilised within development, and the future path for human factors integration. The assessment consisted of interviews conducted with a number of commercial and industrial organisations involved in KBS development, and a set of three detailed case studies of individual KBS projects. Two of the studies were carried out within a collaborative Alvey project, involving the Interdisciplinary Higher Degrees Scheme (IHD) at the University of Aston in Birmingham, BIS Applied Systems Ltd (BIS), and the British Steel Corporation. This project, which had provided the initial basis and funding for the research, was concerned with the application of KBS to the design of commercial data processing (DP) systems. The third study stemmed from involvement in a KBS project being carried out by the Technology Division of the Trustee Savings Bank Group plc. The preliminary research highlighted poor human factors integration. In particular, there was a lack of early consideration of end-user requirements definition and user-centred evaluation. Instead, concentration was given to the construction of the knowledge base and prototype evaluation with the expert(s). In response to this identified problem, a set of methods was developed, aimed at encouraging developers to consider user interface requirements early on in a project. These methods were then applied in the two further projects, and their uptake within the overall development process was monitored. Experience from the two studies demonstrated that early consideration of user interface requirements was both feasible and instructive for guiding future development work. In particular, it was shown that a user interface prototype could be used as a basis for capturing requirements at the functional (task) level and at the interface dialogue level. Extrapolating from this experience, a KBS life-cycle model is proposed which incorporates user interface design (and within that, user evaluation) as a largely parallel, rather than subsequent, activity to knowledge base construction. Further to this, there is a discussion of several key elements which can be seen as inhibiting the integration of human factors within KBS development. These elements stem from characteristics of present KBS development practice; from constraints within the commercial and industrial development environments; and from the state of existing human factors support.
Abstract:
Lean is usually associated with the ‘operations’ of a manufacturing enterprise; however, there is a growing awareness that these principles may be transferred readily to other functions and sectors. The application to knowledge-based activities such as engineering design is of particular relevance to UK plc. Hence, the purpose of this study has been to establish the state of the art, in terms of the adoption of Lean in new product development, by carrying out a systematic review of the literature. The authors' findings confirm the view that Lean can be applied beneficially away from the factory; that an understanding and definition of value is key to success; that a set-based (or Toyota methodology) approach to design is favoured, together with the strong leadership of a chief engineer; and that successful implementation requires organisation-wide changes to systems, practices, and behaviour. On this basis it is felt that this review paper provides a useful platform for further research in this topic.
Abstract:
Manufacturing systems that are heavily dependent upon direct workers have an inherent complexity that the system designer is often ill-equipped to understand. This complexity is due to the interactions that cause variations in the performance of the workers. Variation in human performance can be explained by many factors; however, one important factor that is not currently considered in any detail during the design stage is the physical working environment. This paper presents the findings of ongoing research investigating human performance within manufacturing systems. It sets out to identify the form of the relationships that exist between changes in physical working environment variables and operator performance. These relationships can provide managers with a decision basis when designing and managing manufacturing systems and their environments.
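One simple way to express "the form of the relationship" between an environmental variable and operator performance is a fitted curve; the sketch below fits a quadratic (inverted-U) model of output rate against workplace temperature. The data points are invented for illustration only and carry no empirical weight.

```python
import numpy as np

# Invented observations: workplace temperature (deg C) vs. relative operator output.
temperature = np.array([14, 16, 18, 20, 22, 24, 26, 28, 30], dtype=float)
performance = np.array([0.82, 0.88, 0.94, 0.98, 1.00, 0.97, 0.92, 0.85, 0.78])

# Fit an inverted-U (quadratic) form: performance = a*T^2 + b*T + c.
a, b, c = np.polyfit(temperature, performance, deg=2)
t_best = -b / (2 * a)                    # temperature at which the fitted curve peaks
print(f"fitted form: perf = {a:.4f}*T^2 + {b:.3f}*T + {c:.2f}")
print(f"fitted optimum temperature: {t_best:.1f} deg C")
```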
Abstract:
In this letter, we analyze and develop the required basis for a precise grating design in a scheme based on two oppositely chirped fiber Bragg gratings, and apply it to several numerically simulated examples. We obtain the interesting result that the broader the bandwidth of the reshaped pulse, the shorter the gratings required.
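For context only, a hedged sketch of the textbook delay/dispersion bookkeeping for a single linearly chirped fiber Bragg grating (round-trip delay spread across the chirp bandwidth); the letter's actual design rules for the oppositely chirped pair are not reproduced here, and the numbers are illustrative.

```python
# Linearly chirped FBG: light at different wavelengths reflects from different
# positions, so the round-trip delay 2*n_eff*L/c is spread across the chirp bandwidth.
c = 3.0e8          # m/s
n_eff = 1.45       # effective index of the fibre mode (illustrative)
L = 0.10           # grating length, m (illustrative)
d_lambda = 2.0e-9  # chirp bandwidth, m (2 nm, illustrative)

delay = 2 * n_eff * L / c              # total round-trip delay across the grating
dispersion = delay / d_lambda          # seconds per metre of wavelength
print(f"round-trip delay : {delay*1e12:.0f} ps")
print(f"dispersion       : {dispersion*1e12*1e-9:.0f} ps/nm")
```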
Abstract:
Distributed network utility maximization (NUM) is receiving increasing interest for cross-layer optimization problems in multihop wireless networks. Traditional distributed NUM algorithms rely heavily on feedback information between different network elements, such as traffic sources and routers. Because of the distinct features of multihop wireless networks, such as time-varying channels and dynamic network topology, the feedback information is usually inaccurate, which represents a major obstacle to the application of distributed NUM to wireless networks. The questions to be answered include whether a distributed NUM algorithm can converge with inaccurate feedback and how to design effective distributed NUM algorithms for wireless networks. In this paper, we first use the infinitesimal perturbation analysis technique to provide an unbiased gradient estimate of the aggregate rate of traffic sources at the routers, based on locally available information. On the basis of that, we propose a stochastic approximation algorithm to solve the distributed NUM problem with inaccurate feedback. We then prove that the proposed algorithm can converge to the optimum solution of distributed NUM with perfect feedback under certain conditions. The proposed algorithm is applied to the joint rate and media access control problem for wireless networks. Numerical results demonstrate the convergence of the proposed algorithm. © 2013 John Wiley & Sons, Ltd.
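A hedged sketch of the general idea (not the paper's IPA-based estimator): dual-decomposition NUM in which each link updates its price from a noisy measurement of the aggregate source rate, with Robbins-Monro diminishing step sizes standing in for the stochastic-approximation machinery. The topology, utilities and noise level are illustrative assumptions.

```python
import numpy as np

# Toy network: 2 links, 3 log-utility sources. Source routes (which links each uses):
routes = np.array([[1, 0],    # source 0 uses link 0
                   [0, 1],    # source 1 uses link 1
                   [1, 1]])   # source 2 uses both links
capacity = np.array([1.0, 2.0])

rng = np.random.default_rng(0)
prices = np.ones(2)                     # link "congestion prices" (dual variables)
a0 = 0.5                                # step-size scale

for k in range(1, 20001):
    # Sources maximise log(x) - x * (sum of prices on their path) => x = 1 / path price.
    path_price = routes @ prices
    x = 1.0 / np.maximum(path_price, 1e-6)
    # Each link observes a *noisy* aggregate rate (stand-in for an unbiased
    # gradient estimate obtained from locally available information).
    noisy_rate = routes.T @ x + 0.1 * rng.standard_normal(2)
    # Robbins-Monro price update with diminishing step size a0/k.
    prices = np.maximum(prices + (a0 / k) * (noisy_rate - capacity), 1e-6)

print("final source rates :", np.round(x, 3))
print("link loads         :", np.round(routes.T @ x, 3), "(capacities", capacity, ")")
# For this toy instance the optimum is approximately x = [0.58, 1.58, 0.42].
```

The diminishing step sizes are what let the iteration average out the feedback noise; with a fixed step the prices would keep fluctuating around the optimum instead of settling.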