15 results for Design methods
in Aston University Research Archive
Abstract:
The object of this research was to investigate the behaviour of birdcage scaffolding as used in falsework structures, to assess the suitability of existing design methods and to make recommendations for a set of design rules. Since excessive deflection is as undesirable in a structure as total collapse, the project was divided into two sections. These were to determine the ultimate vertical and horizontal load-carrying capacity, and also the deflection characteristics, of any falsework. Theoretical analyses were therefore developed to ascertain the ability of the individual standards to resist vertical load, and of the bracing to resist horizontal load. Furthermore, a model was evolved which would predict the horizontal deflection of a scaffold under load using strain energy methods. These models were checked by three series of experiments. The first was on individual standards under vertical load only. The second series was carried out on full-scale falsework structures loaded vertically and horizontally to failure. Finally, experiments were conducted on scaffold couplers to provide additional verification of the method of predicting deflections. This thesis gives the history of the project and an introduction to the field of scaffolding. It details the experiments conducted, the theories developed and the correlation between theory and experiment. Finally, it makes recommendations for a design method to be employed by scaffolding designers.
Abstract:
The Internet is becoming an increasingly important portal to health information and a means of promoting health in user populations. As the most frequent users of online health information, young women are an important target population for e-health promotion interventions. Health-related websites have traditionally been generic in design, resulting in poor user engagement and limited impact on health behaviour change. Mounting evidence suggests that the most effective health promotion communication strategies are collaborative in nature, fully engaging target users throughout the development process. Participatory design approaches to interface development enable researchers to better identify the needs and expectations of users, thus increasing user engagement in, and promoting behaviour change via, online health interventions. This article introduces participatory design methods applicable to online health intervention design and presents an argument for the use of such methods in the development of e-health applications targeted at young women.
Abstract:
Design methods and tools are generally best learned and developed experientially [1]. Finding appropriate vehicles for delivering these to students is becoming increasingly challenging, especially when considering only those that will enthuse, intrigue and inspire. This paper traces the development of different eco-car design-and-build projects which competed in the Shell Eco-Marathon. The cars provided opportunities for experiential learning through a formal learning cycle of CDIO (Conceive, Design, Implement, Operate) or the more traditional understand, explore, create, validate, with both teams developing a functional finished prototype. Lessons learned were applied in the design of a third and fourth eco-car using experimental techniques with bio-composites, combining knowledge of fibre-reinforced composite materials and adhesives with the plywood construction techniques of the two teams. The paper discusses the importance of applying materials and techniques to a real-world problem. It also explores how the eco-car, in comparing traditional materials and construction techniques with high-tech composite materials, is an ideal teaching, learning and assessment vehicle for technical design techniques.
Abstract:
The development of a Laser Doppler Anemometer technique to measure the velocity distribution in a commercial plate heat exchanger is described. Detailed velocity profiles are presented and a preliminary investigation is reported on flow behaviour through a single cell in the channel matrix. The objective of the study was to extend previous investigations of plate heat exchanger flow patterns in the laminar range, with the eventual aim of establishing the effect of flow patterns on heat transfer performance, thus leading to improved plate heat exchanger design and design methods. Accurate point velocities were obtained by laser anemometry in a perspex replica of the metal channel. Oil was used as the circulating liquid, with a refractive index matched to that of the perspex so that the laser beams were not distorted. Cell-by-cell velocity measurements over a range of Reynolds numbers up to ten showed significant liquid mal-distribution. Local cell velocities were found to be as high as twenty-seven times the average velocity, contrary to the previously held belief of four times. The degree of mal-distribution varied across the channel as well as in the vertical direction, and depended on the upward or downward direction of flow. At Reynolds numbers less than one, flow zig-zagged from one side of the channel to the other in wave form, but increases in Reynolds number improved liquid distribution. A detailed examination of selected cells showed velocity variations in different directions, together with variation within individual cells. Experimental results are also reported on the flow split when passing through a single cell in a section of a channel. These observations were used to explain mal-distribution in the perspex channel itself.
Abstract:
Symbiotic design methods aim to take into account technical, social and organizational criteria simultaneously. Over the years, many symbiotic methods have been developed and applied in various countries. Nevertheless, the diagnosis that only technical criteria receive attention in the design of production systems is still made repeatedly. Examples of symbiotic approaches are presented at three different levels: technical systems, organizations, and the process. From these, discussion points are generated concerning the character of the approaches, the importance of economic motives, the impact of national environments, the necessity of a guided design process, the use of symbiotic methods, and the roles of participants in the design process.
Abstract:
This work reports the development of a mathematical model and distributed, multivariable computer control for a pilot-plant double-effect climbing-film evaporator. A distributed-parameter model of the plant has been developed and the time-domain model transformed into the Laplace domain. The model has been further transformed into an integral domain conforming to an algebraic ring of polynomials, to eliminate the transcendental terms which arise in the Laplace domain due to the distributed nature of the plant model. This has made possible the application of linear control theories to a set of linear partial differential equations. The models obtained have tracked the experimental results of the plant well. A distributed computer network has been interfaced with the plant to implement digital controllers in a hierarchical structure. A modern multivariable Wiener-Hopf controller has been applied to the plant model. The application has revealed a limiting condition: that the plant matrix should be positive-definite along the infinite frequency axis. A new multivariable control theory has emerged from this study which avoids the above limitation. The controller has the structure of the modern Wiener-Hopf controller, but with a unique feature enabling a designer to specify the closed-loop poles in advance and to shape the sensitivity matrix as required. In this way, the method treats directly the interaction problems found in chemical processes, with good tracking and regulation performance. The ability of analytical design methods to determine once and for all whether a given set of specifications can be met is one of their chief advantages over conventional trial-and-error design procedures. One disadvantage that offsets these advantages to some degree, however, is the relatively complicated algebra that must be employed in working out all but the simplest problems.
Mathematical algorithms and computer software have been developed to treat some of the mathematical operations defined over the integral domain, such as matrix fraction description, spectral factorization, the Bezout identity, and the general manipulation of polynomial matrices. Hence, the design problems of Wiener-Hopf-type controllers and other similar algebraic design methods can be solved easily.
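As an illustration of one such operation, the following sketch computes the Bezout identity for scalar polynomials via the extended Euclidean algorithm. This is illustrative only, not the thesis software: the thesis works over polynomial matrices, whereas this sketch is scalar, and all function names are hypothetical.

```python
# Scalar polynomial Bezout identity u*a + v*b = gcd(a, b), computed with
# the extended Euclidean algorithm. Coefficient lists are highest-degree
# first. A minimal sketch only; the thesis treats polynomial *matrices*.

TOL = 1e-9

def ptrim(p):
    """Drop leading near-zero coefficients."""
    i = 0
    while i < len(p) - 1 and abs(p[i]) < TOL:
        i += 1
    return p[i:]

def pmul(a, b):
    """Polynomial product by coefficient convolution."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    """Polynomial sum, padding the shorter coefficient list."""
    n = max(len(a), len(b))
    a = [0.0] * (n - len(a)) + list(a)
    b = [0.0] * (n - len(b)) + list(b)
    return ptrim([x + y for x, y in zip(a, b)])

def psub(a, b):
    return padd(a, [-y for y in b])

def pdiv(a, b):
    """Polynomial long division: returns (quotient, remainder)."""
    a = ptrim([float(x) for x in a])
    b = ptrim([float(x) for x in b])
    if len(a) < len(b):
        return [0.0], a
    q = [0.0] * (len(a) - len(b) + 1)
    r = list(a)
    for i in range(len(q)):
        c = r[i] / b[0]
        q[i] = c
        for j in range(len(b)):
            r[i + j] -= c * b[j]
    rem = r[len(q):] or [0.0]
    return ptrim(q), ptrim(rem)

def pegcd(a, b):
    """Extended Euclid: returns (g, u, v) with u*a + v*b = g."""
    r0, r1 = ptrim([float(x) for x in a]), ptrim([float(x) for x in b])
    u0, u1, v0, v1 = [1.0], [0.0], [0.0], [1.0]
    while any(abs(c) > TOL for c in r1):
        q, r = pdiv(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, psub(u0, pmul(q, u1))
        v0, v1 = v1, psub(v0, pmul(q, v1))
    return r0, u0, v0

# Example: a = x^2 - 1 and b = x - 2 are coprime, so g is a non-zero
# constant: g = [3.0], u = [1.0], v = [-1.0, -2.0], i.e.
# 1*(x^2 - 1) + (-x - 2)*(x - 2) = 3.
g, u, v = pegcd([1.0, 0.0, -1.0], [1.0, -2.0])
```

In the controller-design setting, solving such an identity (there, over polynomial matrices) is what yields the compensator parameters once the plant has been expressed as a matrix fraction description.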
Abstract:
Packed beds have many industrial applications and are increasingly used in the process industries due to their low pressure drop. With the introduction of more efficient packings, novel packing materials (e.g. adsorbents) and new applications (e.g. flue gas desulphurisation), the aspect ratio (height to diameter) of such beds is decreasing. Obtaining uniform gas distribution in such beds is of crucial importance in minimising operating costs and optimising plant performance. Since to some extent a packed bed acts as its own distributor, the importance of obtaining uniform gas distribution has increased as aspect ratios decrease. There is no rigorous design method for distributors, due to a limited understanding of the fluid flow phenomena and in particular of the effect of the bed base/free fluid interface. This study is based on a combined theoretical and modelling approach. The starting point is the Ergun equation, which is used to determine the pressure drop over a bed where the flow is uni-directional. This equation has been applied in a vectorial form so that it can be applied to maldistributed and multi-directional flows, and has been realised in the Computational Fluid Dynamics code PHOENICS. The use of this equation and its application have been verified by modelling experimental measurements of maldistributed gas flows where there is no free fluid/bed base interface. A novel, two-dimensional experiment has been designed to investigate the fluid mechanics of maldistributed gas flows in shallow packed beds. The flow through the outlet of the duct below the bed can be controlled, permitting a rigorous investigation. The results from this apparatus provide useful insights into the fluid mechanics of flow in and around a shallow packed bed and show the critical effect of the bed base. The PHOENICS/vectorial Ergun equation model has been adapted to model this situation.
The model has been improved by the inclusion of spatial voidage variations in the bed and the prescription of a novel bed base boundary condition. This boundary condition is based on the logarithmic law for velocities near walls without restricting the velocity at the bed base to zero and is applied within a turbulence model. The flow in a curved bed section, which is three-dimensional in nature, is examined experimentally. The effect of the walls and the changes in gas direction on the gas flow are shown to be particularly significant. As before, the relative amounts of gas flowing through the bed and duct outlet can be controlled. The model and improved understanding of the underlying physical phenomena form the basis for the development of new distributors and rigorous design methods for them.
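The vectorial use of the Ergun equation described above can be sketched as follows: the standard scalar Ergun pressure drop is recast so that the pressure gradient is a resistance vector aligned with the local superficial velocity. This is an illustrative sketch under the standard Ergun coefficients (150 viscous, 1.75 inertial), not the PHOENICS implementation; the fluid and packing values in the example are arbitrary.

```python
import math

# Vectorial form of the Ergun equation (a sketch, not the PHOENICS code):
#   grad(P) = -(A*mu + B*rho*|u|) * u
# where u is the superficial velocity vector, so the flow resistance acts
# along the local flow direction, allowing multi-directional flows.

def ergun_coefficients(eps, dp):
    """Viscous (A) and inertial (B) resistance coefficients for bed
    voidage eps and particle diameter dp (m)."""
    A = 150.0 * (1.0 - eps) ** 2 / (eps ** 3 * dp ** 2)
    B = 1.75 * (1.0 - eps) / (eps ** 3 * dp)
    return A, B

def pressure_gradient(u, eps, dp, mu, rho):
    """Pressure-gradient vector (Pa/m) for superficial velocity u (m/s),
    fluid viscosity mu (Pa s) and density rho (kg/m^3)."""
    A, B = ergun_coefficients(eps, dp)
    speed = math.hypot(*u)
    k = A * mu + B * rho * speed
    return [-k * ui for ui in u]

# Arbitrary illustrative values: an air-like fluid in a 5 mm packing,
# with a two-dimensional (maldistributed) superficial velocity.
grad = pressure_gradient(u=[0.4, 0.3], eps=0.4, dp=0.005,
                         mu=1.8e-5, rho=1.2)
# Each gradient component has the opposite sign of the corresponding
# velocity component: the resistance opposes the local flow direction.
```

In the uni-directional limit this reduces to the familiar scalar Ergun pressure drop, which is why the vectorial form could be verified against measurements of maldistributed flows before the bed base boundary condition was added.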
Abstract:
This work studies the development of polymer membranes for the separation of hydrogen and carbon monoxide from a syngas produced by the partial oxidation of natural gas. The CO product is then used for the large-scale manufacture of acetic acid by reaction with methanol. A method of economic evaluation has been developed for the process as a whole, and a comparison is made between separation of the H2/CO mixture by a membrane system and the conventional method of cryogenic distillation. Costs are based on bids obtained from suppliers for several different specifications for the purity of the CO fed to the acetic acid reactor. When the purity of the CO is set at that obtained by cryogenic distillation, it is shown that the membrane separator offers only a marginal cost advantage. Cost parameters for the membrane separation systems have been defined in terms of effective selectivity and cost permeability. These new parameters, obtained from an analysis of the bids, are then used in a procedure which defines the optimum degree of separation and recovery of carbon monoxide for a minimum cost of manufacture of acetic acid. It is shown that a significant cost reduction is achieved with a membrane separator at the optimum process conditions. A method of "targeting" the properties of new membranes has been developed. This involves defining the properties of new (hypothetical, yet to be developed) membranes such that their use for the hydrogen/carbon monoxide separation will produce a reduced cost of acetic acid manufacture. The use of the targeting method is illustrated in the development of new membranes for the separation of hydrogen and carbon monoxide. The selection of polymeric materials for new membranes is based on molecular design methods which predict the polymer properties from the molecular groups making up the polymer molecule. Two approaches have been used. One method develops the analogy between gas solubility in liquids and that in polymers.
The UNIFAC group contribution method is then used to predict gas solubility in liquids. In the second method the polymer Permachor number, developed by Salame, has been correlated with hydrogen and carbon monoxide permeabilities. These correlations are used to predict the permeabilities of gases through polymers. Materials have been tested for hydrogen and carbon monoxide permeabilities and improvements in expected economic performance have been achieved.
Abstract:
The main theme of this project is the study of neural networks for the control of uncertain and non-linear systems. This involves the control of continuous-time, discrete-time, hybrid and stochastic systems with input, state or output constraints, while ensuring good performance. A great part of this project is devoted to opening frontiers between several mathematical and engineering approaches in order to tackle complex but very common non-linear control problems. The objectives are: 1. To design and develop procedures for neural-network-enhanced self-tuning adaptive non-linear control systems; 2. To design, as a general procedure, a neural network generalised minimum variance self-tuning controller for non-linear dynamic plants (integration of neural network mapping with generalised minimum variance self-tuning controller strategies); 3. To develop a software package to evaluate control system performance using Matlab, Simulink and the Neural Network toolbox. An adaptive control algorithm utilising a recurrent network as a model of a partially unknown non-linear plant with unmeasurable state is proposed. It appears that structured recurrent neural networks can provide conveniently parameterised dynamic models of many non-linear systems for use in adaptive control. Properties of static neural networks which enabled successful design of stable adaptive control in the state feedback case are also identified. A survey of the existing results is presented which puts them in a systematic framework, showing their relation to classical self-tuning adaptive control and the application of neural control to SISO/MIMO systems. Simulation results demonstrate that the self-tuning design methods may be practically applicable to a reasonably large class of unknown linear and non-linear dynamic control systems.
Abstract:
This thesis encompasses an investigation of the behaviour of a concrete frame structure under localised fire scenarios, carried out by implementing a constitutive model in a finite-element computer program. The investigation covered the properties of materials at elevated temperature, a description of the computer program, and thermal and structural analyses. Transient thermal properties of materials have been employed in this study to achieve reasonable results. The finite-element package ANSYS is utilized in the present analyses to examine the effect of fire on the concrete frame under five different fire scenarios. In addition, a report on the full-scale BRE Cardington concrete building, designed to Eurocode 2 and BS 8110 and subjected to a realistic compartment fire, is also presented. The transient analyses of the present model included additional specific heat over the base value of dry concrete at temperatures of 100°C and 200°C. The combined convective-radiative heat transfer coefficient and transient thermal expansion have also been considered in the analyses. For the analyses with transient strains included, a constitutive model based on the empirical formula in the full thermal strain-stress model proposed by Li and Purkiss (2005) is employed. Comparisons between the models with and without transient strains included are also discussed. Results of the present study indicate that the behaviour of the complete structure is significantly different from the behaviour of the individual isolated members on which current design methods are based. Although the current tabulated design procedures are conservative when the entire building performance is considered, it should be noted that the beneficial and detrimental effects of thermal expansion in complete structures should be taken into account. Therefore, developing new fire engineering methods from the study of complete structures rather than from individual isolated member behaviour is essential.
Abstract:
PURPOSE: To examine whether objective performance of near tasks is improved with various electronic vision enhancement systems (EVES) compared with the subject's own optical magnifier. DESIGN: Experimental study, randomized, within-patient design. METHODS: This was a prospective study, conducted in a hospital ophthalmology low-vision clinic. The patient population comprised 70 sequential visually impaired subjects. The magnifying devices examined were: patient's optimum optical magnifier; magnification and field-of-view matched mouse EVES with monitor or head-mounted display (HMD) viewing; and stand EVES with monitor viewing. The tasks performed were: reading speed and acuity; time taken to track from one column of print to the next; follow a route map, and locate a specific feature; and identification of specific information from a medicine label. RESULTS: Mouse EVES with HMD viewing caused lower reading speeds than stand EVES with monitor viewing (F = 38.7, P < .001). Reading with the optical magnifier was slower than with the mouse or stand EVES with monitor viewing at smaller print sizes (P < .05). The column location task was faster with the optical magnifier than with any of the EVES (F = 10.3, P < .001). The map tracking and medicine label identification task was slower with the mouse EVES with HMD viewing than with the other magnifiers (P < .01). Previous EVES experience had no effect on task performance (P > .05), but subjects with previous optical magnifier experience were significantly slower at performing the medicine label identification task with all of the EVES (P < .05). CONCLUSIONS: Although EVES provide objective benefits to the visually impaired in reading speed and acuity, together with some specific near tasks, some can be performed just as fast using optical magnification. © 2003 by Elsevier Inc. All rights reserved.
Abstract:
Ongoing advances in technology are increasing the scope for enhancing and supporting older adults' daily living. The digital divide between older and younger adults raises concerns, however, about the suitability of technological solutions for older adults, especially for those with impairments. Taking older adults with Age-Related Macular Degeneration (AMD) as a case study, we used user-centred and participatory design approaches to develop an assistive mobile app for self-monitoring of their food intake [12,13]. In this paper we report the findings of a longitudinal field evaluation of our app, conducted to investigate how it was received and adopted by older adults with AMD and its impact on their lives. Demonstrating the benefit of applying inclusive design methods to technology for older adults, our findings reveal how use of the app raises participants' awareness of and facilitates self-monitoring of diet, encourages positive (diet) behaviour change, and supports learning.
Abstract:
The correction of presbyopia and restoration of true accommodative function to the ageing eye is the focus of much ongoing research and clinical work. A range of accommodating intraocular lenses (AIOLs) implanted during cataract surgery has been developed and they are designed to change either their position or shape in response to ciliary muscle contraction to generate an increase in dioptric power. Two main design concepts exist. First, axial shift concepts rely on anterior axial movement of one or two optics creating accommodative ability. Second, curvature change designs are designed to provide significant amplitudes of accommodation with little physical displacement. Single-optic devices have been used most widely, although the true accommodative ability provided by forward shift of the optic appears limited and recent findings indicate that alternative factors such as flexing of the optic to alter ocular aberrations may be responsible for the enhanced near vision reported in published studies. Techniques for analysing the performance of AIOLs have not been standardised and clinical studies have reported findings using a wide range of both subjective and objective methods, making it difficult to gauge the success of these implants. There is a need for longitudinal studies using objective methods to assess long-term performance of AIOLs and to determine if true accommodation is restored by the designs available. While dual-optic and curvature change IOLs are designed to provide greater amplitudes of accommodation than is possible with single-optic devices, several of these implants are in the early stages of development and require significant further work before human use is possible. A number of challenges remain and must be addressed before the ultimate goal of restoring youthful levels of accommodation to the presbyopic eye can be achieved.
Abstract:
The key to the correct application of ANOVA is careful experimental design and matching the correct analysis to that design. The following points should therefore be considered before designing any experiment: 1. In a single-factor design, ensure that the factor is identified as a 'fixed' or 'random effect' factor. 2. In more complex designs with more than one factor, there may be a mixture of fixed and random effect factors present, so ensure that each factor is clearly identified. 3. Where replicates can be grouped or blocked, the advantages of a randomised blocks design should be considered. There should be evidence, however, that blocking can sufficiently reduce the error variation to counter the loss of DF compared with a fully randomised design. 4. Where different treatments are applied sequentially to a patient, the advantages of a three-way design in which the different orders of the treatments are included as an 'effect' should be considered. 5. Combining different factors to make a more efficient experiment and to measure possible factor interactions should always be considered. 6. The effect of 'internal replication' should be taken into account in a factorial design when deciding the number of replications to be used. Where possible, each error term of the ANOVA should have at least 15 DF. 7. Consider carefully whether a particular factorial design can be considered to be a split-plot or a repeated-measures design. If such a design is appropriate, consider how to continue the analysis bearing in mind the problem of using post hoc tests in this situation.
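For the simplest case in point 1, a single fixed-effect factor, the ANOVA computation can be sketched from first principles. This is an illustrative worked example with made-up data, not drawn from the article:

```python
# One-way fixed-effect ANOVA computed from first principles.
# Illustrative only; the three treatment groups below are made-up data.

def one_way_anova(groups):
    """Return (F, df_between, df_within) for a list of groups of
    observations (group sizes need not be equal)."""
    all_obs = [x for g in groups for x in g]
    n, k = len(all_obs), len(groups)
    grand_mean = sum(all_obs) / n
    # Between-groups sum of squares: spread of the group means about
    # the grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups (error) sum of squares: spread inside each group.
    ss_within = sum((x - sum(g) / len(g)) ** 2
                    for g in groups for x in g)
    df_between, df_within = k - 1, n - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

# Three treatment groups of three replicates each (made-up data).
F, df1, df2 = one_way_anova([[1, 2, 3], [2, 3, 4], [6, 7, 8]])
# Here F = 21.0 with (2, 6) degrees of freedom. Note df_within is only
# 6, well below the roughly 15 DF per error term recommended in point 6,
# so a real experiment of this shape would need more replicates.
```

The F ratio is then compared against the F distribution with (df_between, df_within) degrees of freedom; the fixed-vs-random distinction in points 1 and 2 changes which mean square forms the denominator in multi-factor designs, not this single-factor computation.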
Abstract:
DUE TO COPYRIGHT RESTRICTIONS ONLY AVAILABLE FOR CONSULTATION AT ASTON UNIVERSITY LIBRARY WITH PRIOR ARRANGEMENT