17 results for Design procedures
in Aston University Research Archive
Abstract:
There is a great deal of literature about the initial stages of innovative design, the process whereby a completely new product is conceived, invented and developed. In industry, however, the continuing success of a company is more often achieved by improving or developing existing designs to maintain their marketability. Unfortunately, this process of design by evolution is less well documented. This thesis reports the way in which this process was improved for the sponsoring company. The improvements were achieved by implementing a new form of computer aided design (C.A.D.) system. The advent of this system enabled the company both to shorten the design and development time and to review the principles underlying the existing design procedures. C.A.D. was a new venture for the company, and care had to be taken to ensure that the new procedures were compatible with the existing design office environment. In particular, they had to be acceptable to the design office staff. The C.A.D. system that was produced guides the designer from the draft specification to the first prototype layout. The computer presents the consequences of the designer's decisions clearly and fully, often by producing charts and sketches. The C.A.D. system and the necessary peripheral facilities were implemented, monitored and maintained. The system structure was left sufficiently flexible for maintenance to be undertaken quickly and effectively. The problems encountered during implementation are well documented in this thesis.
Abstract:
Investigations into the modelling techniques that depict the transport of discrete phases (gas bubbles or solid particles) and model biochemical reactions in a bubble column reactor are discussed here. The mixture model was used to calculate gas-liquid, solid-liquid and gas-liquid-solid interactions. Multiphase flow is a difficult phenomenon to capture, particularly in bubble columns where the major driving force is caused by the injection of gas bubbles. The gas bubbles cause a large density difference to occur that results in transient multi-dimensional fluid motion. Standard design procedures do not account for the transient motion, due to the simplifying assumptions of steady plug flow. Computational fluid dynamics (CFD) can assist in expanding the understanding of complex flows in bubble columns by characterising the flow phenomena for many geometrical configurations. Therefore, CFD has a role in the education of chemical and biochemical engineers, providing examples of flow phenomena that many engineers may not experience, even through experimentation. The performance of the mixture model was investigated for three domains (plane, rectangular and cylindrical) and three flow models (laminar, k-ε turbulence and the Reynolds stresses). This investigation raised many questions about how gas-liquid interactions are captured numerically. To answer some of these questions, the analogy between thermal convection in a cavity and gas-liquid flow in bubble columns was invoked. This involved modelling the buoyant motion of air in a narrow cavity for a number of turbulence schemes. The difference in density was caused by a temperature gradient that acted across the width of the cavity. Multiple vortices were obtained when the Reynolds stresses were utilised with the addition of a basic flow profile after each time step.
To implement the three-phase models, an alternative mixture model was developed and compared against a commercially available mixture model for three turbulence schemes. The scheme where just the Reynolds stresses model was employed predicted the transient motion of the fluids quite well for both mixture models. Solid-liquid and then alternative formulations of the gas-liquid-solid model were compared against one another. The alternative form of the mixture model was found to perform particularly well for both gas and solid phase transport when calculating two- and three-phase flow. The improvement in the solutions obtained was a result of the inclusion of the Reynolds stresses model and differences in the mixture models employed. The differences between the alternative mixture models were found in the volume fraction equation (flux and deviatoric stress tensor terms) and the viscosity formulation for the mixture phase.
Abstract:
Most pavement design procedures incorporate reliability to account for the effect of design-input uncertainty and variability on predicted performance. The load and resistance factor design (LRFD) procedure, which delivers an economical section while considering design-input variability separately, has been recognised as an effective tool for incorporating reliability into design procedures. This paper presents a new reliability-based calibration in LRFD format for a mechanics-based fatigue cracking analysis framework. It employs a two-component reliability analysis methodology that utilises a central composite design-based response surface approach and a first-order reliability method. The reliability calibration was achieved based on a number of field pavement sections that have well-documented performance histories and high-quality field and laboratory data. The effectiveness of the developed LRFD procedure was evaluated by performing pavement designs for various target reliabilities and design conditions. The results show excellent agreement between the target and actual reliabilities. Furthermore, it is clear from the results that more design features need to be included in the reliability calibration to minimise the deviation of the actual reliability from the target reliability.
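The first-order reliability method mentioned in this abstract can be illustrated with a minimal sketch. For a linear limit state g = R − S with independent normally distributed resistance R and load effect S, the reliability index β has a closed form and the probability of failure is Φ(−β); the numbers below are hypothetical, not taken from the paper:

```python
from math import erf, sqrt

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def form_reliability(mu_r, sigma_r, mu_s, sigma_s):
    """First-order reliability for a linear limit state g = R - S,
    with independent normal resistance R and load effect S.
    Returns (reliability index beta, probability of failure)."""
    beta = (mu_r - mu_s) / sqrt(sigma_r**2 + sigma_s**2)
    return beta, norm_cdf(-beta)

# Hypothetical fatigue-cracking margin: allowable vs predicted damage.
beta, pf = form_reliability(mu_r=1.0, sigma_r=0.15, mu_s=0.6, sigma_s=0.1)
```

In the full framework a response surface stands in for the mechanistic model, but the β-to-failure-probability step works the same way.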
Abstract:
This work reports the development of a mathematical model and distributed, multivariable computer control for a pilot plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained tracked the experimental results of the plant well. A distributed-computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application has revealed a limiting condition: the plant matrix should be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study, which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method treats directly the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures. One disadvantage that offsets these advantages to some degree, however, is the relatively complicated algebra that must be employed in working out all but the simplest problems.
Mathematical algorithms and computer software have been developed to treat some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorisation, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and other similar algebraic design methods can be solved easily.
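In the scalar case, the Bezout identity mentioned above can be computed with the extended Euclidean algorithm for polynomials, yielding x and y such that a·x + b·y = gcd(a, b). The coefficient-list representation and tolerances below are illustrative choices, not the thesis software:

```python
# Extended Euclid for scalar polynomials, giving the Bezout identity
# a*x + b*y = gcd(a, b). Polynomials are coefficient lists, highest
# degree first, e.g. [1.0, 0.0, -1.0] represents s^2 - 1.

def trim(p):
    """Drop leading near-zero coefficients."""
    while len(p) > 1 and abs(p[0]) < 1e-12:
        p = p[1:]
    return p

def add(p, q):
    n = max(len(p), len(q))
    p = [0.0] * (n - len(p)) + list(p)
    q = [0.0] * (n - len(q)) + list(q)
    return trim([a + b for a, b in zip(p, q)])

def mul(p, q):
    r = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def divmod_poly(num, den):
    """Polynomial long division: returns (quotient, remainder)."""
    num, den = trim(list(num)), trim(list(den))
    q = [0.0] * max(1, len(num) - len(den) + 1)
    while len(num) >= len(den) and any(abs(c) > 1e-12 for c in num):
        d = len(num) - len(den)
        coef = num[0] / den[0]
        q[len(q) - 1 - d] = coef           # coefficient of s^d
        num = trim(add(num, mul([-coef] + [0.0] * d, den)))
    return q, num

def xgcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g."""
    old_r, r = trim(list(a)), trim(list(b))
    old_x, x = [1.0], [0.0]
    old_y, y = [0.0], [1.0]
    while any(abs(c) > 1e-9 for c in r):
        q, rem = divmod_poly(old_r, r)
        old_r, r = r, rem
        old_x, x = x, add(old_x, mul([-1.0], mul(q, x)))
        old_y, y = y, add(old_y, mul([-1.0], mul(q, y)))
    return old_r, old_x, old_y

# Coprime pair a = s + 1, b = s: Bezout gives 1*(s+1) + (-1)*s = 1.
g, x, y = xgcd([1.0, 1.0], [1.0, 0.0])
```

The matrix-fraction and spectral-factorisation machinery of the thesis generalises this scalar computation to polynomial matrices.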
A study of load support and other criteria appropriate to the selection of industrial conveyor belts
Abstract:
A study of conveying practice demonstrates that belt conveyors provide a versatile and much-used method of transporting bulk materials, but a review of belting manufacturers' design procedures shows that belt design and selection rules are often based on experience with all-cotton belts no longer in common use, and are not completely relevant to modern synthetic constructions. In particular, provision of the property "load support", which was not critical with cotton belts, is shown to determine the outcome of most belt selection exercises and to lead to gross over-specification of other design properties in many cases. The results of an original experimental investigation into this property, carried out to determine the belt and conveyor parameters that affect it, show the major role that belt stiffness plays in its provision; the basis for a belt stiffness test relevant to service conditions is given. A proposal for a more rational method of specifying load support data results from the work, but correlation of the test results with service performance is necessary before the absolute load support capability required from a belt for given working conditions can be quantified. A study to attain this correlation is the major proposal for future work resulting from the present investigation, but a full review of the literature on conveyor design and a study of present practice within the belting industry demonstrate other, less critical, factors that could profitably be investigated. It is suggested that the most suitable method of studying these would be a rational data collection system to provide information on various facets of belt service behaviour; a basis for such a system is proposed. In addition to the work above, proposals for simplifying the present belt selection methods are made and a strain transducer suitable for use in future experimental investigations is developed.
Abstract:
This thesis encompasses an investigation of the behaviour of a concrete frame structure under localised fire scenarios, carried out by implementing a constitutive model in a finite-element computer program. The investigation included properties of materials at elevated temperature, a description of the computer program, and thermal and structural analyses. Transient thermal properties of materials have been employed in this study to achieve reasonable results. The finite-element package ANSYS is utilised in the present analyses to examine the effect of fire on the concrete frame under five different fire scenarios. In addition, a report on a full-scale BRE Cardington concrete building, designed to Eurocode 2 and BS 8110 and subjected to a realistic compartment fire, is also presented. The transient analyses of the present model included additional specific heat over the base value of dry concrete at temperatures of 100°C and 200°C. The combined convective-radiative heat transfer coefficient and transient thermal expansion have also been considered in the analyses. For the analyses with transient strains included, the constitutive model based on an empirical formula in the full thermal strain-stress model proposed by Li and Purkiss (2005) is employed. Comparisons between the models with and without transient strains included are also discussed. The results of the present study indicate that the behaviour of the complete structure is significantly different from the behaviour of the individual isolated members on which current design methods are based. Although the current tabulated design procedures are conservative when the entire building performance is considered, it should be noted that the beneficial and detrimental effects of thermal expansion in complete structures should be taken into account. Therefore, developing new fire engineering methods from the study of complete structures rather than from individual isolated member behaviour is essential.
Abstract:
This paper proposes a methodological scheme for photovoltaic (PV) simulator design. With the advantages of a digital controller system, linear interpolation is proposed for precise curve fitting with higher computational efficiency. A novel control strategy that directly handles two different duty cycles is proposed and implemented to achieve full-range operation, including short circuit (SC) and open circuit (OC) conditions. Systematic design procedures for both hardware and algorithm are explained, and a prototype is built. Experimental results confirm accurate steady-state performance under different load conditions, including SC and OC. This low-power apparatus can be adopted for PV education and research on a limited budget.
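The linear-interpolation curve fitting described here can be sketched as a piecewise-linear I-V lookup that a digital controller might use to generate its current reference across the full operating range, from short circuit to open circuit; the tabulated curve below is illustrative, not taken from the paper:

```python
# Piecewise-linear I-V lookup for a PV simulator reference generator.
# The tabulated curve is a hypothetical example, not measured data.

IV_TABLE = [  # (voltage in V, current in A), short circuit to open circuit
    (0.0, 3.0), (10.0, 2.95), (15.0, 2.7), (18.0, 2.0), (20.0, 0.0),
]

def reference_current(v):
    """Interpolate the reference current for a commanded voltage."""
    if v <= IV_TABLE[0][0]:
        return IV_TABLE[0][1]          # short-circuit end
    for (v0, i0), (v1, i1) in zip(IV_TABLE, IV_TABLE[1:]):
        if v <= v1:
            t = (v - v0) / (v1 - v0)   # linear interpolation factor
            return i0 + t * (i1 - i0)
    return 0.0                          # beyond open circuit
```

A lookup of this form needs only one division and one multiplication per sample, which is the computational-efficiency argument made for interpolation over closed-form diode-equation fitting.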
Abstract:
The literature on the potential use of liquid ammonia as a solvent for the extraction of aromatic hydrocarbons from mixtures with paraffins, and the application of reflux, has been reviewed. Reference is made to extractors suited to this application. A pilot scale extraction plant was designed, comprising a 5 cm diameter by 125 cm high, 50-stage Rotating Disc Contactor with 2 external settlers. Provision was made for operation with, or without, reflux at a pressure of 10 bar and ambient temperature. The solvent recovery unit consisted of an evaporator, compressor and condenser in a refrigeration cycle. Two systems were selected for study, Cumene-n-Heptane-Ammonia and Toluene-Methylcyclohexane-Ammonia. Equilibrium data for the first system were determined experimentally in a specially-designed equilibrium bomb. A technique was developed to withdraw samples under pressure for analysis by chromatography and titration. The extraction plant was commissioned with a kerosine-water system; detailed operating procedures were developed based on a Hazard and Operability Study. Experimental runs were carried out with both ternary ammonia systems. With the system Toluene-Methylcyclohexane-Ammonia, the extraction plant and the solvent recovery facility operated satisfactorily, and safely, in accordance with the operating procedures. Experimental data gave reasonable agreement with theory. Recommendations are made for further work with the plant.
Abstract:
This thesis describes the design and engineering of a pressurised biomass gasification test facility. A detailed examination of the major elements within the plant has been undertaken in relation to specification of equipment, evaluation of options and final construction. The retrospective project assessment was developed from consideration of relevant literature and theoretical principles. The literature review includes a discussion on legislation and applicable design codes. From this analysis, each of the necessary equipment units was reviewed and important design decisions and procedures highlighted and explored. Particular emphasis was placed on examination of the stringent demands of the ASME VIII design codes. The inter-relationship of functional units was investigated and areas of deficiency, such as biomass feeders and gas cleaning, have been commented upon. Finally, plant costing was summarized in relation to the plant design and proposed experimental programme. The main conclusion drawn from the study is that pressurised gasification of biomass is far more difficult and expensive to support than atmospheric gasification. A number of recommendations have been made regarding future work in this area.
Abstract:
Gas absorption, the removal of one or more constituents from a gas mixture, is widely used in chemical processes. In many gas absorption processes the gas mixture is already at high pressure, and in recent years organic solvents have been developed for the process of physical absorption at high pressure followed by low pressure regeneration of the solvent and recovery of the absorbed gases. Until now the discovery of new solvents has usually been by expensive and time-consuming trial-and-error laboratory tests. This work describes a new approach, whereby a solvent is selected from considerations of its molecular structure by applying recently published methods of predicting gas solubility from the molecular groups which make up the solvent molecule. The removal of the acid gases carbon dioxide and hydrogen sulfide from methane or hydrogen was used as a commercially important example. After a preliminary assessment to identify promising molecular groups, more than eighty new solvent molecules were designed and evaluated by predicting gas solubility. The other important physical properties were also predicted by appropriate theoretical procedures, and a commercially promising new solvent was chosen to have a high solubility for acid gases, a low solubility for methane and hydrogen, a low vapour pressure, and a low viscosity. The solvent chosen, of molecular structure CH3-CO-CH2-CH2-CO-CH3, was tested in the laboratory and shown to have physical properties, except for vapour pressure, close to those predicted. That is, gas solubilities were within 10% of, but lower than, the predicted values; viscosity was within 10% of, but higher than, the predicted value; and the vapour pressure was significantly lower than predicted. A computer program was written to predict gas solubility in the new solvent at the high pressures (25 bar) used in practice. This is based on the group contribution method of Skold-Jorgensen (1984).
Before using this with the new solvent, acetonylacetone, the method was shown to be sufficiently accurate by comparing predicted values of gas solubility with experimental solubilities from the literature for 14 systems at up to 50 bar. A test of the commercial potential of the new solvent was made by means of two design studies which compared the size of plant and the approximate relative costs of absorbing acid gases with the new solvent against other commonly used solvents: refrigerated methanol (Rectisol process) and the dimethyl ether of polyethylene glycol (Selexol process). Both studies showed, in terms of capital and operating cost, a significant advantage for plant designed for the new solvent process.
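The group contribution idea underlying both the solvent screening and the solubility program can be sketched in a few lines: a molecular property is estimated as the sum of contributions from the functional groups making up the molecule. The group values below are illustrative placeholders, not the published Skold-Jorgensen (1984) parameters:

```python
# Group-contribution property estimation sketch. The contribution
# values are ILLUSTRATIVE placeholders for a generic solvent property,
# not the published Skold-Jorgensen (1984) parameters.

GROUP_CONTRIBUTION = {  # hypothetical per-group contributions
    "CH3": 1.0,
    "CH2": 0.8,
    "CO":  2.5,
}

def estimate_property(groups):
    """Sum group contributions for a molecule given as {group: count}."""
    return sum(GROUP_CONTRIBUTION[g] * n for g, n in groups.items())

# Acetonylacetone, CH3-CO-CH2-CH2-CO-CH3: two CH3, two CH2, two CO.
acetonylacetone = {"CH3": 2, "CH2": 2, "CO": 2}
value = estimate_property(acetonylacetone)
```

Because each candidate molecule is just a multiset of groups, screening over eighty designed solvents reduces to evaluating sums like this against the selection criteria.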
Abstract:
This research investigates the general user interface problems in using networked services. Some of the problems are: users have to recall machine names and procedures to invoke networked services; interactions with some of the services are by means of menu-based interfaces which are quite cumbersome to use; and inconsistencies exist between the interfaces for different services because they were developed independently. These problems have to be removed so that users can use the services effectively. A prototype system has been developed to help users interact with networked services. This consists of software which gives the user an easy and consistent interface with the various services. The prototype is based on a graphical user interface and includes the following applications: Bath Information & Data Services; electronic mail; a file editor. The prototype incorporates an online help facility to assist users of the system. The prototype can be divided into two parts: the user interface part, which manages interaction with the user, and the communication part, which enables communication with networked services to take place. The implementation is carried out using an object-oriented approach in which both the user interface part and the communication part are objects. The essential characteristics of object-orientation (abstraction, encapsulation, inheritance and polymorphism) can all contribute to the better design and implementation of the prototype. The Smalltalk Model-View-Controller (MVC) methodology has been the framework for the construction of the prototype user interface. The purpose of the development was to study the effectiveness of users' interaction with networked services. Having completed the prototype, test users were requested to use the system to evaluate its effectiveness. The evaluation of the prototype is based on observation, i.e. observing the way users use the system, and on the opinion ratings given by the users.
Recommendations to improve the prototype further are given based on the results of the evaluation.
Abstract:
This work attempts to create a systemic design framework for man-machine interfaces which is self-consistent, compatible with other concepts, and applicable to real situations. This is tackled by examining the current architecture of computer applications packages. The treatment in the main is philosophical and theoretical, and analyses the origins, assumptions and current practice of the design of applications packages. It proposes that the present form of packages is fundamentally contradictory to the notion of packaging itself, because, as an indivisible ready-to-implement solution, current package architecture displays the following major disadvantages. First, it creates problems as a result of user-package interactions, in which the designer tries to mould all potential individual users, no matter how diverse they are, into one model. This is worsened by the minute provision, if any, of important properties such as flexibility, independence and impartiality. Second, it displays a rigid structure that reduces the variety and/or multi-use of the component parts of such a package. Third, it dictates specific hardware and software configurations, which probably reduces the number of degrees of freedom of its user. Fourth, it increases the dependence of its user upon its supplier through inadequate documentation and understanding of the package. Fifth, it tends to cause a degeneration of the design expertise of data processing practitioners. In view of this understanding, an alternative methodological design framework which is consistent both with the systems approach and with the role of a package in its likely context is proposed. The proposition is based upon an extension of the identified concept of the hierarchy of holons, which facilitates the examination of the complex relationships of a package with its two principal environments.
First, the user's characteristics and decision-making practices and procedures, implying an examination of the user's M.I.S. network; second, the software environment and its influence upon a package regarding support, control and operation of the package. The framework is built gradually as the discussion advances around the central theme of compatible M.I.S., software and model design. This leads to the formation of an alternative package architecture based upon the design of a number of independent, self-contained small parts. This is believed to constitute the nucleus around which not only can packages be designed more effectively, but which is also applicable to the design of many other man-machine systems.
Abstract:
Concurrent engineering and design for manufacture and assembly strategies have become pervasive in a wide array of industrial settings. These strategies have generally focused on product and process design issues based on capability concerns. The strategies have historically been justified using cost savings calculations focusing on easily quantifiable costs such as raw material savings or manufacturing or assembly operations no longer required. It is argued herein that neither the focus of the strategies nor the means of justification is adequate. Product and process design strategies should include both capability and capacity concerns, and justification procedures should include the financial effects that the product and process changes would have on the entire company. The authors of this paper take this more holistic view of the problem and examine an innovative new design strategy using a comprehensive enterprise simulation tool. The results indicate that both the design strategy and the simulator show promise for further industrial use. © 2001 Elsevier Science B.V. All rights reserved.
Abstract:
This paper explores the use of the optimization procedures in SAS/OR software, with application to contemporary logistics distribution network design, using an integrated multiple criteria decision making approach. Unlike traditional optimization techniques, the proposed approach, combining the analytic hierarchy process (AHP) and goal programming (GP), considers both quantitative and qualitative factors. In the integrated approach, AHP is used to determine the relative importance weightings or priorities of alternative warehouses with respect to both deliverer-oriented and customer-oriented criteria. Then, a GP model incorporating the constraints of system, resource, and AHP priority is formulated to select the best set of warehouses without exceeding the limited available resources. To facilitate the use of the integrated multiple criteria decision making approach by SAS users, an ORMCDM code was implemented in the SAS programming language. The SAS macro developed in this paper selects the chosen variables from a SAS data file and constructs sets of linear programming models based on the selected GP model. An example is given to illustrate how one could use the code to design the logistics distribution network.
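The AHP weighting step described above can be sketched (in Python rather than SAS) as a power iteration that approximates the principal eigenvector of a pairwise comparison matrix; the comparison values below are illustrative, not taken from the paper:

```python
# AHP priority weights via power iteration on a positive reciprocal
# pairwise comparison matrix, a minimal sketch of the weighting step.

def ahp_weights(matrix, iterations=100):
    """Approximate the principal eigenvector of a positive reciprocal
    matrix and normalise it to sum to 1."""
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iterations):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w

# Three hypothetical candidate warehouses compared on one criterion:
# warehouse 1 is moderately preferred (3) over 2 and strongly (5) over 3.
comparisons = [
    [1.0,       3.0,       5.0],
    [1.0 / 3.0, 1.0,       3.0],
    [1.0 / 5.0, 1.0 / 3.0, 1.0],
]
weights = ahp_weights(comparisons)
```

In the integrated approach, weights like these then enter the GP model as priority coefficients, so warehouse selection respects both the qualitative rankings and the hard resource constraints.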