33 results for finite difference methods
in Aston University Research Archive
Abstract:
Recent developments in aerostatic thrust bearings have included: (a) the porous aerostatic thrust bearing containing a porous pad and (b) the inherently compensated compliant surface aerostatic thrust bearing containing a thin elastomer layer. Both these developments have been reported to improve the bearing load capacity compared to conventional aerostatic thrust bearings with rigid surfaces. This development is carried one stage further in a porous and compliant aerostatic thrust bearing incorporating both a porous pad and an opposing compliant surface. The thin elastomer layer forming the compliant surface is bonded to a rigid backing and is made of a soft, rubber-like material. Such a bearing is studied experimentally and theoretically under steady-state operating conditions. A mathematical model is presented to predict the bearing performance. The model includes a simplified solution of the elasticity equations for deflections of the compliant surface, and account is also taken of deflections in the porous pad due to the pressure difference across its thickness. The lubrication equations for flow in the porous pad and bearing clearance are solved by numerical finite difference methods. An iteration procedure is used to couple deflections of the compliant surface and porous pad with solutions to the lubrication equations. Experimental results and the theoretically predicted bearing performance are in good agreement. However, these results show that the performance of the porous and compliant aerostatic thrust bearing is lower than that of a porous aerostatic thrust bearing with a rigid surface in place of the compliant surface. This is attributed to the recess formed in the bearing clearance by deflections of the compliant surface and its effect on flow through the porous pad.
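The coupled iteration described above can be illustrated with a much reduced model. The Python sketch below solves an axisymmetric, isothermal gas lubrication equation by finite differences and re-solves it each time a local compliance law updates the film thickness; a central supply pressure stands in for the porous feeding, and the compliance value, pressures and geometry are illustrative assumptions rather than the thesis data.

```python
import numpy as np

# Minimal sketch of a pressure-deflection iteration for a compliant-surface
# gas bearing: finite differences on d/dr( r h^3 dP/dr ) = 0 with P = p^2,
# a local compliance law for the elastomer, and under-relaxed coupling.
# All values (pressures, h0, k_compliance) are illustrative assumptions.

R, N = 0.05, 101                        # pad radius [m], radial grid points
r = np.linspace(1e-3, R, N)             # start away from r = 0
p_supply, p_amb = 2.0e5, 1.0e5          # supply / ambient pressures [Pa]
h0 = 20e-6                              # undeformed clearance [m]
k_compliance = 2e-11                    # local elastic compliance [m/Pa]

p = np.linspace(p_supply, p_amb, N)     # initial pressure guess
for outer in range(100):
    # deflection of the compliant surface forms a shallow recess under pressure
    h = h0 + k_compliance * (p - p_amb)

    # tridiagonal finite-difference system for P = p^2 at interior nodes
    w_e = (r[1:-1] + r[2:]) / 2 * ((h[1:-1] + h[2:]) / 2) ** 3
    w_w = (r[1:-1] + r[:-2]) / 2 * ((h[1:-1] + h[:-2]) / 2) ** 3
    A = np.diag(-(w_e + w_w)) + np.diag(w_e[:-1], 1) + np.diag(w_w[1:], -1)
    b = np.zeros(N - 2)
    b[0] -= w_w[0] * p_supply**2        # Dirichlet condition at the supply
    b[-1] -= w_e[-1] * p_amb**2         # Dirichlet condition at the pad edge
    P = np.linalg.solve(A, b)

    p_new = np.concatenate(([p_supply], np.sqrt(P), [p_amb]))
    if np.max(np.abs(p_new - p)) < 1.0: # converged to within 1 Pa
        break
    p = 0.5 * (p + p_new)               # under-relax the coupling iteration

dr = r[1] - r[0]
load = 2 * np.pi * np.sum((p - p_amb) * r) * dr   # approximate load capacity
print(f"outer iterations: {outer + 1}, load capacity ≈ {load:.1f} N")
```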
Abstract:
This thesis demonstrates that the use of finite elements need not be confined to space alone, but that they may also be used in the time domain. It is shown that finite element methods may be used successfully to obtain the response of systems to applied forces, including, for example, the accelerations in a tall structure subjected to an earthquake shock. It is further demonstrated that at least one of these methods may be considered a practical alternative to more usual methods of solution. A detailed investigation of the accuracy and stability of finite element solutions is included, and methods of application to both single- and multi-degree-of-freedom systems are described. Solutions using two different temporal finite elements are compared with those obtained by conventional methods, and a comparison of computation times for the different methods is given. The application of finite element methods to distributed systems is described, using both separate discretizations in space and time and a combined space-time discretization. The inclusion of both viscous and hysteretic damping is shown to add little to the difficulty of the solution. Temporal finite elements are also seen to be of considerable interest when applied to non-linear systems, both when the system parameters are time-dependent and when they are functions of displacement. Solutions are given for many different examples, and the computer programs used for the finite element methods are included in an Appendix.
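As a point of reference for the temporal finite element solutions described above, the sketch below implements one of the "conventional methods" for the same class of problem: explicit central-difference time stepping for a single-degree-of-freedom system under base acceleration. System properties and ground motion are illustrative, not taken from the thesis.

```python
import numpy as np

# Central-difference time stepping for m*u'' + c*u' + k*u = -m*a_g(t),
# i.e. the relative response of a single-degree-of-freedom system to a
# base acceleration a_g. All numerical values are illustrative.

m, c, k = 1.0, 0.5, 40.0            # mass, damping, stiffness (assumed units)
dt, n_steps = 0.01, 2000            # dt well below the stability limit 2/omega_n
t = np.arange(n_steps) * dt
a_g = 0.3 * np.sin(2 * np.pi * 1.0 * t)   # toy harmonic ground acceleration

u = np.zeros(n_steps)
v0 = 0.0
a0 = (-m * a_g[0] - c * v0 - k * u[0]) / m
u_prev = u[0] - dt * v0 + 0.5 * dt**2 * a0      # fictitious displacement u_{-1}

for i in range(n_steps - 1):
    f = -m * a_g[i]
    # m*(u_{i+1}-2u_i+u_{i-1})/dt^2 + c*(u_{i+1}-u_{i-1})/(2dt) + k*u_i = f_i
    u_next = (f - (k - 2 * m / dt**2) * u[i]
              - (m / dt**2 - c / (2 * dt)) * u_prev) / (m / dt**2 + c / (2 * dt))
    u_prev, u[i + 1] = u[i], u_next

print(f"peak relative displacement ≈ {np.abs(u).max():.4f}")
```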
Abstract:
Background: Electrosurgery units are widely employed in modern surgery. Advances in technology have enhanced the safety of these devices; nevertheless, accidental burns are still regularly reported. This study focuses on possible causes of sacral burns as a complication of the use of electrosurgery. Burns are caused by local densifications of the current, but the actual pathway of current within the patient's body is unknown. Numerical electromagnetic analysis can help in understanding the issue. Methods: To this aim, an accurate heterogeneous model of the human body (including seventy-seven different tissues), the electrosurgery electrodes, the operating table and the mattress was built to resemble a typical surgery condition. The patient lies supine on the mattress with the active electrode placed onto the thorax and the return electrode on his back. Common operating frequencies of electrosurgery units were considered. Finite Difference Time Domain electromagnetic analysis was carried out to compute the spatial distribution of current density within the patient's body. A differential analysis, changing the electrical properties of the operating table from a conductor to an insulator, was also performed. Results: Results revealed that distributed capacitive coupling between the patient's body and the conductive operating table offers an alternative path to the electrosurgery current. The patient's anatomy, the positioning and the different electromagnetic properties of tissues promote a densification of the current at the head and sacral region. In particular, high values of current density were located behind the sacral bone and beneath the skin. This did not occur in the case of a non-conductive operating table. Conclusion: Results of the simulation highlight the role played by capacitive coupling between the return electrode and the conductive operating table. The concentration of current density may result in an undesired rise in temperature, giving rise to burns in body regions far from the electrodes. This outcome is concordant with the type of surgery-related sacral burns reported in the literature. Such burns cannot be immediately detected after surgery, but appear later and can be confused with bedsores. In addition, the dosimetric analysis suggests that reducing the capacitive coupling between the return electrode and the operating table can decrease or avoid this problem. © 2013 Bifulco et al.; licensee BioMed Central Ltd.
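To make the Finite Difference Time Domain step concrete, the sketch below gives a minimal one-dimensional Yee update loop in free space; the study itself uses a three-dimensional heterogeneous body model at electrosurgery frequencies, so the grid, time step and source here are illustrative only.

```python
import numpy as np

# Minimal 1-D FDTD (Yee) update loop in free space; illustrative only, not
# the heterogeneous 3-D body model of the study.

c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
nz, n_steps = 400, 900
dz = 1e-3                           # 1 mm cells
dt = dz / (2 * c0)                  # Courant number 0.5

Ex = np.zeros(nz)                   # electric field at integer nodes
Hy = np.zeros(nz - 1)               # magnetic field at half nodes

for n in range(n_steps):
    # H update from the spatial difference of E
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
    # E update from the spatial difference of H (interior nodes only)
    Ex[1:-1] += dt / (eps0 * dz) * (Hy[1:] - Hy[:-1])
    # soft Gaussian pulse source injected at one node
    Ex[50] += np.exp(-((n - 60) / 20.0) ** 2)

print(f"max |Ex| after {n_steps} steps: {np.abs(Ex).max():.3e}")
```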
Abstract:
The aim of this work has been to investigate the behaviour of a continuous rotating annular chromatograph (CRAC) under a combined biochemical reaction and separation duty. Two biochemical reactions have been employed, namely the inversion of sucrose to glucose and fructose in the presence of the enzyme invertase and the saccharification of liquefied starch to maltose and dextrin using the enzyme maltogenase. Simultaneous biochemical reaction and separation has been successfully carried out for the first time in a CRAC by inverting sucrose to fructose and glucose using the enzyme invertase and continuously collecting pure fractions of glucose and fructose from the base of the column. The CRAC was made of two concentric cylinders which form an annulus 140 cm long by 1.2 cm wide, giving an annular space of 14.5 dm³. The ion exchange resin used was an industrial grade calcium form Dowex 50W-X4 with a mean diameter of 150 microns. The mobile phase used was deionised and deaerated water and contained the appropriate enzyme. The annular column was slowly rotated at speeds of up to 240° h⁻¹ while the sucrose substrate was fed continuously through a stationary feed pipe to the top of the resin bed. A systematic investigation of the factors affecting the performance of the CRAC under simultaneous biochemical reaction and separation conditions was carried out by employing a factorial experimental procedure. The main factors affecting the performance of the system were found to be the feed rate, feed concentration and eluent rate. Results from the experiments indicated that complete conversion could be achieved for feed concentrations of up to 50% w/v sucrose and at feed throughputs of up to 17.2 kg sucrose per m³ resin per hour. The second enzymic reaction, namely the saccharification of liquefied starch to maltose employing the enzyme maltogenase, has also been successfully carried out on a CRAC. Results from the experiments using soluble potato starch showed that conversions of up to 79% were obtained for a feed concentration of 15.5% w/v at a feed flowrate of 400 cm³/h. The product maltose obtained was over 95% pure. Mathematical modelling and computer simulation of the sucrose inversion system have been carried out. A finite difference method was used to solve the partial differential equations and the simulation results showed good agreement with the experimental results obtained.
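The kind of finite-difference model referred to in the last sentence can be sketched for a single component: an explicit discretisation of a one-dimensional advection-dispersion-reaction equation for sucrose, with a Michaelis-Menten sink representing enzymatic inversion. All parameter values below are illustrative assumptions, not the thesis data.

```python
import numpy as np

# Explicit finite-difference sketch of dC/dt = -u dC/dz + D d2C/dz2 - r(C)
# for sucrose flowing down the bed, with a Michaelis-Menten sink representing
# inversion by invertase. Parameter values are illustrative.

L, nz = 1.4, 141                    # bed length [m], grid points
dz = L / (nz - 1)
u, D = 1e-4, 1e-7                   # interstitial velocity [m/s], dispersion [m2/s]
vmax, Km = 5e-3, 30.0               # Michaelis-Menten constants (assumed)
dt, n_steps = 1.0, 16000            # satisfies the CFL and diffusion limits here

C = np.zeros(nz)                    # sucrose concentration [kg/m3]
C_feed = 100.0

for step in range(n_steps):
    adv = -u * (C[1:-1] - C[:-2]) / dz                   # upwind convection
    disp = D * (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2    # central dispersion
    rxn = -vmax * C[1:-1] / (Km + C[1:-1])               # Michaelis-Menten sink
    C[1:-1] += dt * (adv + disp + rxn)
    C[0] = C_feed                                        # continuous feed at inlet
    C[-1] = C[-2]                                        # zero-gradient outlet

print(f"sucrose conversion at the outlet: {1 - C[-1] / C_feed:.2%}")
```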
Abstract:
The objective of this work has been to study the behaviour and performance of a batch chromatographic column under simultaneous bioreaction and separation conditions for several carbohydrate feedstocks. Four bioreactions were chosen, namely the hydrolysis of sucrose to glucose and fructose using the enzyme invertase, the hydrolysis of inulin to fructose and glucose using inulinase, the hydrolysis of lactose to glucose and galactose using lactase, and the isomerization of glucose to fructose using glucose isomerase. The chromatographic columns employed were jacketed glass columns ranging from 1 m to 2 m long, with internal diameters ranging from 0.97 cm to 1.97 cm. The stationary phase used was a cation exchange resin (PUROLITE PCR-833) in the Ca²⁺ form for the hydrolysis reactions and the Mg²⁺ form for the isomerization reaction. The mobile phase used was a dilute enzyme solution which was continuously pumped through the chromatographic bed. The substrate was injected at the top of the bed as a pulse. The effects of pulse size (the amount of substrate solution introduced into the system, expressed as a percentage of the total empty column volume, %TECV), pulse concentration, eluent flowrate and eluent enzyme activity were investigated. For the sucrose-invertase system, complete conversion of substrate was achieved for pulse sizes and pulse concentrations of up to 20% TECV and 60% w/v, respectively. Products with purity above 90% were obtained. The enzyme consumption was 45% of the amount theoretically required to produce the same amount of product as in a conventional batch reactor. A throughput of 27 kg sucrose/m³ resin/h was achieved. A systematic investigation of the factors affecting the performance of the batch chromatographic bioreactor-separator was carried out by employing a factorial experimental procedure. The main factors affecting the performance of the system were the flowrate and enzyme activity. For the inulin-inulinase system, total conversions were also obtained for pulse sizes of up to 20% TECV and a pulse concentration of 10% w/v. Fructose-rich fractions with 100% purity, representing up to 99.4% of the total fructose generated, were obtained with an enzyme consumption of 32% of the amount theoretically required to produce the same amount of product in a conventional batch reactor. The hydrolysis of lactose by lactase was studied in the glass columns and also in an SCCR-S unit adapted for batch operation, in co-operation with Dr. Shieh, a fellow researcher in the Chemical Engineering and Applied Chemistry Department at Aston University. By operating at lactose feed concentrations of up to 30% w/v, complete conversions were obtained and the purities of the products generated were above 90%. An enzyme consumption of 48% of the amount theoretically required to produce the same amount of product in a conventional batch reactor was achieved. For the glucose-glucose isomerase system, which involves a reversible reaction, the separation obtained with the stationary phase conditioned in the magnesium form was very poor, although the conversion obtained was comparable with those of conventional batch reactors. By working with a mixed pulse of enzyme and substrate, up to 82.5% of the fructose generated was recovered at 100% purity. Mathematical modelling and computer simulation of the batch chromatographic bioreaction-separation has been performed on a personal computer. A finite difference method was used to solve the partial differential equations and the simulation results showed good agreement with the experimental results.
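A companion sketch of the separation side of the bioreactor-separator: a pulse of the two product sugars migrating through the bed with different linear retention on the resin (fructose retained more strongly than glucose on the Ca²⁺ form), modelled here by retardation factors. Reaction is omitted and all values are illustrative assumptions.

```python
import numpy as np

# Pulse migration of two products with different linear retention, sketched
# as advection-dispersion with retardation factors R (fructose held longer
# than glucose). All numerical values are illustrative assumptions.

L, nz = 2.0, 201
dz = L / (nz - 1)
u, D = 2e-4, 2e-7                        # velocity [m/s], dispersion [m2/s]
retardation = {"glucose": 1.3, "fructose": 2.0}
dt = 1.0
n_steps = int(0.7 * L * retardation["glucose"] / (u * dt))   # glucose travels ~0.7 L

profiles = {}
for name, R in retardation.items():
    C = np.zeros(nz)
    C[:10] = 60.0                        # pulse (% w/v) at the top of the bed
    u_eff, D_eff = u / R, D / R          # retained species move more slowly
    for step in range(n_steps):
        adv = -u_eff * (C[1:-1] - C[:-2]) / dz
        disp = D_eff * (C[2:] - 2 * C[1:-1] + C[:-2]) / dz**2
        C[1:-1] += dt * (adv + disp)
        C[0], C[-1] = 0.0, C[-2]         # clean eluent in, zero-gradient out
    profiles[name] = C

for name, C in profiles.items():
    print(f"{name} band centre ≈ {np.argmax(C) * dz:.2f} m down the bed")
```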
Abstract:
This thesis is concerned with the experimental and theoretical investigation into the compression bond of column longitudinal reinforcement in the transfer of axial load from a reinforced concrete column to a base. Experimental work includes twelve tests with square twisted bars and twenty-four tests with ribbed bars. The effects of bar size, anchorage length in the base, plan area of the base, provision of base tensile reinforcement, links around the column bars in the base, plan area of the column and concrete compressive strength were investigated in the tests. The tests indicated that the strength of the compression anchorage of deformed reinforcing steel in concrete is primarily dependent on the concrete strength and the resistance to bursting which may be available within the anchorage. The tests without concreted columns showed that, owing to the large containment over the bars in the foundation, failure occurred through breakdown of bond followed by slip of the column bars along the anchorage length. The experimental work showed that the bar size, the stress in the bar, the anchorage length, the provision of transverse steel and the concrete compressive strength significantly affect the bond stress at failure. The ultimate bond stress decreases as the anchorage length is increased, while it increases with each of the remaining parameters. Tests with concreted columns also indicated that a section of the column contributed to the bond length in the foundation by acting as an extra anchorage length. The theoretical work is based on the Mindlin equation (3), an analytical method used in conjunction with finite difference calculus. The theory is used to plot the distribution of bond stress in the elastic and elastic-plastic stages of behaviour. The theory is also used to plot the load-vertical displacement relationship of the column bars in the anchorage length, and to determine the theoretical failure load of the foundation. The theoretical solutions are in good agreement with the experimental results, and the distribution of bond stress is shown to be significantly influenced by the bar stiffness factor K. A comparison of the experimental results with current codes shows that the bond stresses currently used are low and, in particular, CP110 (56) specifies very conservative design bond stresses.
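A much simplified, elastic-stage analogue of the bond stress distribution analysis can be written with finite differences: treat the slip s(x) of the anchored bar as governed by s'' = λ²s with λ² = πdk/(E_s A), a loaded end at x = 0 and a force-free end at x = L, and recover the bond stress as τ = k·s. This is a hedged illustration, not the Mindlin-equation model of the thesis, and all values are assumed.

```python
import numpy as np

# Elastic bond-slip sketch: finite differences on s'' = lam2 * s for the slip
# s(x) of a bar anchored over length L, with E_s*A*s'(0) = -P at the loaded
# end and s'(L) = 0 at the far end; bond stress tau(x) = k_bond * s(x).
# Bar size, anchorage length, bond modulus and load are illustrative.

d, L = 0.025, 0.40                  # bar diameter [m], anchorage length [m]
E_s = 200e9                         # steel modulus [Pa]
A = np.pi * d**2 / 4
k_bond = 40e9                       # bond modulus [Pa per metre of slip] (assumed)
P = 150e3                           # anchored force [N]
lam2 = np.pi * d * k_bond / (E_s * A)

n = 81
h = L / (n - 1)
x = np.linspace(0, L, n)

M = np.zeros((n, n))                # finite-difference system M s = b
b = np.zeros(n)
for i in range(1, n - 1):
    M[i, i - 1 : i + 2] = [1 / h**2, -2 / h**2 - lam2, 1 / h**2]
M[0, 0:2] = [-1 / h, 1 / h]; b[0] = -P / (E_s * A)     # s'(0) = -P/(E_s*A)
M[-1, -2:] = [-1 / h, 1 / h]                           # s'(L) = 0

s = np.linalg.solve(M, b)
tau = k_bond * s
print(f"peak bond stress ≈ {tau.max() / 1e6:.1f} MPa at the loaded end, "
      f"decaying to {tau.min() / 1e6:.1f} MPa at the far end")
```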
Abstract:
Particulate solids are complex redundant systems which consist of discrete particles. The interactions between the particles are complex and have been the subject of many theoretical and experimental investigations. Investigations of particulate material have been restricted by the lack of quantitative information on the mechanisms occurring within an assembly. Laboratory experimentation is limited, as information on the internal behaviour can only be inferred from measurements on the assembly boundary or from the use of intrusive measuring devices. In addition, comparisons between test data are uncertain because of the difficulty in reproducing exact replicas of physical systems. Nevertheless, theoretical and technological advances require more detailed material information. Numerical simulation, however, affords access to information on every particle, and hence to the micro-mechanical behaviour within an assembly, and can replicate desired systems. To simulate material behaviour accurately it is necessary to incorporate realistic interaction laws. This research programme used the finite difference simulation program 'BALL', developed by Cundall (1971), which employed linear spring force-displacement laws, so it was necessary to incorporate more realistic interaction laws. The research programme was therefore primarily concerned with the implementation of the normal force-displacement law of Hertz (1882) and the tangential force-displacement laws of Mindlin and Deresiewicz (1953). Within this thesis the contact mechanics theories employed in the program are developed and the adaptations which were necessary to incorporate these laws are detailed. Verification of the new contact force-displacement laws was achieved by simulating a quasi-static oblique contact and single-particle oblique impact. Applications of the program to the simulation of large assemblies of particles are given, and the problems in undertaking quasi-static shear tests, along with the results from two successful shear tests, are described.
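The two contact laws named above take a compact closed form for a single sphere-sphere contact, which the sketch below evaluates: the Hertz (1882) normal force F_n = (4/3) E* √R* δ^{3/2} and the initial no-slip tangential stiffness k_t = 8 G* a from Mindlin and Deresiewicz (1953). Material properties and the overlap are illustrative, and the full load-history-dependent incremental tangential law used in the thesis is not reproduced here.

```python
import numpy as np

# Hertz normal contact and initial (no-slip) Mindlin tangential stiffness for
# a single sphere-sphere contact. Material values are illustrative.

E, nu = 70e9, 0.3                   # Young's modulus [Pa], Poisson ratio
G = E / (2 * (1 + nu))
R1 = R2 = 1e-3                      # particle radii [m]

# effective properties for two identical spheres
E_star = 1.0 / ((1 - nu**2) / E + (1 - nu**2) / E)
G_star = 1.0 / ((2 - nu) / G + (2 - nu) / G)
R_star = 1.0 / (1 / R1 + 1 / R2)

def hertz_normal(delta):
    """Normal force and contact radius for overlap delta (Hertz)."""
    a = np.sqrt(R_star * delta)                      # contact radius
    F_n = (4.0 / 3.0) * E_star * np.sqrt(R_star) * delta**1.5
    return F_n, a

def mindlin_tangential_stiffness(a):
    """Initial tangential stiffness (Mindlin & Deresiewicz, no-slip limit)."""
    return 8.0 * G_star * a

delta = 1e-6                                         # 1 micron overlap
F_n, a = hertz_normal(delta)
k_t = mindlin_tangential_stiffness(a)
print(f"F_n = {F_n:.3f} N, contact radius = {a * 1e6:.2f} um, k_t = {k_t:.3e} N/m")
```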
Abstract:
An initial review of the subject emphasises the need for improved fuel efficiency in vehicles and the possible role of aluminium in reducing weight. The problems of formability in manufacture generally, and of aluminium in particular, are discussed in the light of published data. A range of thirteen commercially available sheet aluminium alloys has been compared with respect to the mechanical properties that affect forming processes and behaviour in service. Four alloys were selected for detailed comparison. The formability and strength of these were investigated in terms of the underlying mechanisms of deformation as well as the microstructural characteristics of the alloys, including texture, particle dispersion, grain size and composition. In overall terms, good combinations of strength and ductility are achievable with alloys of the 2xxx and 6xxx series; some specific alloys are notably better than others. The strength of formed components is affected by paint baking in the final stages of manufacture. Generally, alloys of the 6xxx family are strengthened while 2xxx and 5xxx alloys become weaker, although some anomalous behaviour exists. Work hardening of these alloys appears to show rather abrupt decreases over certain strain ranges, which is probably responsible for the relatively low strains at which both diffuse and local necking occur. Using data obtained from extended-range tensile tests, the strain distribution in more complex shapes can be successfully modelled using finite element methods. Sheet failure during forming occurs by abrupt shear fracture in many instances. This condition is favoured by states of biaxial tension, surface defects in the form of fine scratches and certain types of crystallographic texture. The measured limit strains of the materials can be understood on the basis of the attainment of a critical shear stress for fracture.
Abstract:
Analysis of covariance (ANCOVA) is a useful method of ‘error control’, i.e., it can reduce the size of the error variance in an experimental or observational study. An initial measure obtained before the experiment, which is closely related to the final measurement, is used to adjust the final measurements, thus reducing the error variance. When this method is used to reduce the error term, the X variable must not itself be affected by the experimental treatments, because part of the treatment effect would then also be removed. Hence, the method can only be safely used when X is measured before an experiment. A further limitation of the analysis is that only the linear effect of X on Y is removed, and it is possible that Y could be a curvilinear function of X. A question often raised is whether ANCOVA should be used routinely in experiments rather than a randomized blocks or split-plot design, which may also reduce the error variance. The answer to this question depends on the relative precision of the different methods in each scenario. Considerable judgment is often required to select the best experimental design and statistical help should be sought at an early stage of an investigation.
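A minimal simulated example of the ‘error control’ role described above, using the statsmodels formula interface: fitting the final measurement with and without the baseline covariate shows the reduction in residual error while the treatment comparison is preserved. The data and effect sizes are made up for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Toy ANCOVA illustration: the baseline covariate x, measured before
# treatment, absorbs part of the error variance when the final measurement
# y is compared between groups. Data below are simulated, not real.

rng = np.random.default_rng(3)
n_per_group = 20
group = np.repeat(["control", "treated"], n_per_group)
x = rng.normal(50, 10, size=2 * n_per_group)          # pre-treatment measure
effect = np.where(group == "treated", 4.0, 0.0)       # true treatment effect
y = 5 + 0.8 * x + effect + rng.normal(0, 3, size=2 * n_per_group)

df = pd.DataFrame({"y": y, "x": x, "group": group})

anova_fit = smf.ols("y ~ C(group)", data=df).fit()          # no covariate
ancova_fit = smf.ols("y ~ x + C(group)", data=df).fit()     # x as covariate

print("residual SD without covariate:", round(np.sqrt(anova_fit.mse_resid), 2))
print("residual SD with covariate   :", round(np.sqrt(ancova_fit.mse_resid), 2))
print(ancova_fit.params)
```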
Abstract:
During the last decade, microfabrication of photonic devices by means of intense femtosecond (fs) laser pulses has emerged as a novel technology. A common requirement for the production of these devices is that the refractive index modification pitch size should be smaller than the inscribing wavelength. This can be achieved by making use of the nonlinear propagation of intense fs laser pulses. Such nonlinear propagation is an extremely complicated phenomenon featuring complex multiscale spatiotemporal dynamics of the laser pulses. We have utilized a principal approach based on finite difference time domain (FDTD) modeling of the full set of Maxwell's equations coupled to the conventional Drude model for the generated plasma. Nonlinear effects are included, such as self-phase modulation and multiphoton absorption. Such an approach resolves most problems related to the inscription of subwavelength structures, when the paraxial approximation is not applicable to correctly describe the creation of, and scattering from, the structures. In a representative simulation of the inscription process, the signature of degenerate four-wave mixing has been found. © 2012 Optical Society of America.
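The coupling of the FDTD field update to the Drude model for the generated plasma is commonly handled with an auxiliary differential equation for the plasma current, dJ/dt + γJ = ε₀ωₚ²E. The one-dimensional sketch below shows that coupling only; the nonlinear terms (self-phase modulation, multiphoton absorption) and the 3-D geometry of the cited work are omitted, and the plasma parameters are illustrative assumptions.

```python
import numpy as np

# 1-D FDTD with a Drude plasma current handled via an auxiliary differential
# equation (ADE): dJ/dt + gamma*J = eps0*wp^2*E. Illustrative parameters only.

c0, eps0, mu0 = 3e8, 8.854e-12, 4e-7 * np.pi
nz, dz = 400, 20e-9                 # 20 nm cells (sub-wavelength scale)
dt = dz / (2 * c0)
wp, gamma = 2e15, 1e14              # plasma frequency and collision rate (assumed)

Ex, Hy, Jx = np.zeros(nz), np.zeros(nz - 1), np.zeros(nz)
plasma = np.zeros(nz, dtype=bool)
plasma[200:260] = True              # a slab of "generated plasma"

for n in range(2000):
    Hy += dt / (mu0 * dz) * (Ex[1:] - Ex[:-1])
    # Drude ADE: semi-implicit update of the plasma current
    Jx[plasma] = ((1 - gamma * dt / 2) * Jx[plasma]
                  + eps0 * wp**2 * dt * Ex[plasma]) / (1 + gamma * dt / 2)
    curlH = np.zeros(nz)
    curlH[1:-1] = (Hy[1:] - Hy[:-1]) / dz
    Ex += dt / eps0 * (curlH - Jx)                 # E update includes -J term
    Ex[50] += np.exp(-((n - 80) / 25.0) ** 2)      # ultrashort source pulse

print(f"max |Ex| inside the plasma slab: {np.abs(Ex[200:260]).max():.3e}")
```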
Abstract:
In this second article, statistical ideas are extended to the problem of testing whether there is a true difference between two samples of measurements. First, it will be shown that the difference between the means of two samples comes from a population of such differences which is normally distributed. Second, the 't' distribution, one of the most important in statistics, will be applied to a test of the difference between two means using a simple data set drawn from a clinical experiment in optometry. Third, in making a t-test, a statistical judgement is made as to whether there is a significant difference between the means of two samples. Before the widespread use of statistical software, this judgement was made with reference to a statistical table. Even if such tables are not used, it is useful to understand their logical structure and how to use them. Finally, the analysis of data known to depart significantly from the normal distribution will be described.
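A toy two-sample t-test in the spirit of the article, with simulated rather than clinical data, including the "by table" judgement against the critical value of t:

```python
import numpy as np
from scipy import stats

# Two-sample t-test on simulated measurements; the data below are made up
# purely to illustrate the procedure described in the article.

rng = np.random.default_rng(4)
sample_a = rng.normal(10.0, 1.5, size=12)
sample_b = rng.normal(11.2, 1.5, size=12)

t_stat, p_value = stats.ttest_ind(sample_a, sample_b)   # assumes equal variances
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# the same judgement made "by table": compare |t| with the critical value
df = len(sample_a) + len(sample_b) - 2
t_crit = stats.t.ppf(0.975, df)                          # two-tailed, alpha = 0.05
print(f"critical t (df={df}) = {t_crit:.2f}; significant: {abs(t_stat) > t_crit}")
```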
Abstract:
A formalism recently introduced by Prugel-Bennett and Shapiro uses the methods of statistical mechanics to model the dynamics of genetic algorithms. To be of more general interest than the test cases they consider, the technique is applied in this paper to the subset sum problem, a combinatorial optimization problem with a strongly non-linear energy (fitness) function and many local minima under single spin flip dynamics. It is a problem which exhibits an interesting dynamics, reminiscent of stabilizing selection in population biology. The dynamics are solved under certain simplifying assumptions and are reduced to a set of difference equations for a small number of relevant quantities. The quantities used are the population's cumulants, which describe its shape, and the mean correlation within the population, which measures the microscopic similarity of population members. Including the mean correlation allows a better description of the population than the cumulants alone would provide and represents a new and important extension of the technique. The formalism includes finite population effects and describes problems of realistic size. The theory is shown to agree closely with simulations of a real genetic algorithm, and the mean best energy is accurately predicted.
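The quantities the formalism tracks can be computed directly for a toy population: the sketch below evaluates the first four cumulants (k-statistics) of a subset-sum-style energy over the population, and the mean pairwise correlation (overlap) between members. The population size, string length and target are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kstat

# Population cumulants and mean correlation for a toy subset-sum-style GA
# population in a spin representation. All sizes and targets are illustrative.

rng = np.random.default_rng(0)
P, N = 60, 40                      # population size, string length (assumed)
a = rng.integers(1, 100, size=N)   # item values
target = a.sum() // 2

pop = rng.choice([-1, 1], size=(P, N))           # spin representation s_j = ±1
subset = (pop + 1) // 2                          # s = +1 means "item included"
energy = np.abs(subset @ a - target)             # subset-sum cost per member

# first four cumulants of the population's energy distribution (k-statistics)
cumulants = [kstat(energy, n) for n in (1, 2, 3, 4)]

# mean correlation: average overlap of distinct pairs of population members
overlap = pop @ pop.T / N
mean_corr = (overlap.sum() - np.trace(overlap)) / (P * (P - 1))

print("cumulants k1..k4:", np.round(cumulants, 3))
print("mean pairwise correlation:", round(mean_corr, 4))
```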
Abstract:
The problem of regression under Gaussian assumptions is treated generally. The relationship between Bayesian prediction, regularization and smoothing is elucidated. The ideal regression is the posterior mean and its computation scales as O(n³), where n is the sample size. We show that the optimal m-dimensional linear model under a given prior is spanned by the first m eigenfunctions of a covariance operator, which is a trace-class operator. This is an infinite-dimensional analogue of principal component analysis. The importance of Hilbert space methods to practical statistics is also discussed.
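A small numerical sketch of the two points made above, under an assumed RBF prior covariance: the posterior mean requires solving an n × n system (the O(n³) step), and projecting onto the leading eigenvectors of the sample covariance matrix gives a finite-sample analogue of the optimal m-dimensional model spanned by the first m eigenfunctions. All data and kernel settings are illustrative assumptions.

```python
import numpy as np

# Posterior-mean regression under a Gaussian (GP) prior, plus a low-rank
# approximation built from the leading eigenvectors of the covariance matrix.
# Kernel choice, noise level and data are illustrative.

def rbf(X1, X2, ell=0.3):
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / ell**2)

rng = np.random.default_rng(1)
n = 50
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)
noise = 0.1**2

K = rbf(x, x) + noise * np.eye(n)       # prior covariance plus noise
alpha = np.linalg.solve(K, y)           # the O(n^3) step the abstract refers to

x_star = np.linspace(0, 1, 200)
mean_star = rbf(x_star, x) @ alpha      # posterior mean at test points

# low-rank alternative: restrict the weights to the span of the top-m
# eigenvectors of the covariance matrix (finite-sample analogue of the
# leading eigenfunctions of the covariance operator)
m = 10
eigval, eigvec = np.linalg.eigh(rbf(x, x))
U = eigvec[:, -m:]                      # eigenvectors of the m largest eigenvalues
w = np.linalg.solve(U.T @ K @ U, U.T @ y)
mean_lowrank = rbf(x_star, x) @ (U @ w)

print("max |full - low-rank| posterior mean:", np.abs(mean_star - mean_lowrank).max())
```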
Abstract:
We investigate the performance of parity check codes using the mapping onto spin glasses proposed by Sourlas. We study codes where each parity check comprises products of K bits selected from the original digital message, with exactly C parity checks per message bit. We show, using the replica method, that these codes saturate Shannon's coding bound for K → ∞ when the code rate K/C is finite. We then examine the finite temperature case to assess the use of simulated annealing methods for decoding, study the performance of the finite K case and extend the analysis to accommodate different types of noisy channels. The analogy between statistical physics methods and decoding by belief propagation is also discussed.
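A toy version of the construction, with single-spin-flip annealing standing in for the simulated annealing decoder the abstract mentions assessing: each transmitted value is the product of K message spins, the channel flips its sign with probability p, and decoding seeks the ground state of the corresponding spin-glass energy. N, K, C, the noise level and the annealing schedule are all illustrative choices; from a random start the decoder may not always reach the message, which is exactly the regime the paper's analysis addresses.

```python
import numpy as np

# Toy Sourlas-type code: transmit products of K message spins, corrupt them on
# a binary symmetric channel, then decode by single-spin-flip simulated
# annealing on E(s) = -sum_mu J_mu * prod_{j in mu} s_j. All values illustrative.

rng = np.random.default_rng(2)
N, K, C = 60, 3, 6                          # message bits, check order, checks per bit
M = N * C // K                              # number of parity checks
msg = rng.choice([-1, 1], size=N)           # message as Ising spins

checks = np.array([rng.choice(N, size=K, replace=False) for _ in range(M)])
J = np.prod(msg[checks], axis=1).astype(float)        # transmitted products
p_noise = 0.08
J *= np.where(rng.random(M) < p_noise, -1.0, 1.0)     # channel sign flips

def energy(s):
    return -np.sum(J * np.prod(s[checks], axis=1))

s = rng.choice([-1, 1], size=N)             # random initial guess
E = energy(s)
n_sweeps = 400
for sweep in range(n_sweeps):
    beta = 0.5 + 2.5 * sweep / n_sweeps     # slowly increasing inverse temperature
    for i in rng.permutation(N):
        s[i] *= -1                          # propose a single spin flip
        E_new = energy(s)
        if E_new > E and rng.random() > np.exp(-beta * (E_new - E)):
            s[i] *= -1                      # reject (Metropolis rule)
        else:
            E = E_new

print(f"overlap of decoded spins with the message: {np.mean(s * msg):.2f}")
```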
Abstract:
We determine the critical noise level for decoding low density parity check error correcting codes based on the magnetization enumerator, rather than on the weight enumerator employed in the information theory literature. The interpretation of our method is appealingly simple, and the relation between the different decoding schemes, such as typical pairs decoding, MAP, and finite temperature decoding (MPM), becomes clear. In addition, our analysis provides an explanation for the difference in performance between MN and Gallager codes. Our results are more optimistic than those derived via the methods of information theory and are in excellent agreement with recent results from another statistical physics approach.