33 results for Continuous damage model

in Aston University Research Archive


Relevance: 40.00%

Abstract:

This paper presents a new interpretation of the Superpave IDT strength test based on a viscoelastic-damage framework. The framework is based on continuum damage mechanics and the thermodynamics of irreversible processes, with an anisotropic damage representation. The new approach introduces considerations of the viscoelastic effects, and of the damage accumulation that accompanies the fracture process, into the interpretation of the Superpave IDT strength test for the identification of the Dissipated Creep Strain Energy (DCSE) limit from the test result. The viscoelastic model is implemented in a Finite Element Method (FEM) program for the simulation of the Superpave IDT strength test. The DCSE values obtained using the new approach are compared with the values obtained using the conventional approach, to evaluate the validity of the assumptions made in the conventional interpretation of the test results. The results show that the conventional approach overestimates the DCSE value, with the estimation error increasing at higher deformation rates.
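For context, the conventional identification of the DCSE limit is usually presented as an energy subtraction; a minimal sketch, assuming the standard Superpave IDT quantities (tensile strength S_t, resilient modulus M_R and failure strain ε_f, none of which are defined in the abstract):

    % Conventional DCSE bookkeeping (sketch; symbols assumed, not from the abstract)
    \[
      FE = \int_0^{\varepsilon_f} \sigma \, d\varepsilon, \qquad
      EE = \frac{S_t^{\,2}}{2\,M_R}, \qquad
      DCSE = FE - EE
    \]

If the material response is viscoelastic, part of FE is rate-dependent dissipation rather than stored elastic energy, which would be consistent with the overestimation reported above.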

Relevance: 30.00%

Abstract:

The effects of an experimental model of hydrogen-peroxide-induced foot pad oedema on indices of oxidative damage to biomolecules have been investigated. We have demonstrated increased levels of fluorescent protein and lipid peroxides occurring in plasma at 24 and 48 h post-injection. In addition, a decrease in the degree of galactosylation of IgG was observed, which related kinetically to the degree of inflammation and to the increase in protein autofluorescence (a specific index of oxidative damage). The effects of ebselen, a novel organoselenium compound which protects against oxidative tissue injury in a glutathione-peroxidase-like manner, have also been examined in this model. Pretreatment of animals with a dose of 50 mg/kg ebselen afforded significant and selective protection against lipid peroxidation only. This effect may contribute to the anti-inflammatory action of this agent in hydroperoxide-linked tissue damage.

Relevance: 30.00%

Abstract:

With the advent of globalisation, companies all around the world must improve their performance in order to survive. The threats come from everywhere and in different forms, such as low-cost products, high-quality products, new technologies and new products. Companies in different countries use various techniques and quality criteria items to strive for excellence, and continuous improvement techniques are used to enable them to improve their operations. Companies are therefore applying techniques such as TQM, Kaizen, Six Sigma and Lean Manufacturing, and quality award criteria items such as Customer Focus, Human Resources, Information & Analysis, and Process Management. The purpose of this paper is to compare the use of these techniques and criteria items in two countries, Mexico and the United Kingdom, which differ in culture and industrial structure. In terms of the use of continuous improvement tools and techniques, Mexico formally started to deal with continuous improvement by creating its National Quality Award soon after the Americans began the Malcolm Baldrige National Quality Award. The United Kingdom formally started by using the European Quality Award (EQA), later modified and renamed as the EFQM Excellence Model. The methodology used in this study was to undertake a literature review of the subject matter and to study some general applications around the world. A questionnaire survey was then designed and undertaken using the same scale, about the same sample size, and about the same industrial sector in the two countries. The survey presents a brief definition of each of the constructs to facilitate understanding of the questions. The analysis of the data was conducted with the assistance of a statistical software package. The survey results indicate both similarities and differences in the strengths and weaknesses of the companies in the two countries. One outcome of the analysis is that it enables the companies to use the results to benchmark themselves, and thus to act to reinforce their strengths and to reduce their weaknesses.

Relevance: 30.00%

Abstract:

The work describes the programme of activities relating to a mechanical study of the Conform extrusion process. The main objective was to provide a basic understanding of the mechanics of the Conform process, with particular emphasis placed on modelling using experimental and theoretical considerations. The experimental equipment used includes a state-of-the-art computer-aided data-logging system and high-temperature load cells (up to 260°C) manufactured from tungsten carbide. Full details of the experimental equipment are presented in Sections 3 and 4. A theoretical model is given in Section 5. The model presented is based on the upper bound theorem, using a variation of the existing extrusion theories combined with temperature changes in the feed metal across the deformation zone. In addition, the constitutive equations used in the model have been generated from existing experimental data. Theoretical and experimental data are presented in tabular form in Section 6. The discussion of results includes a comprehensive graphical presentation of the experimental and theoretical data. The main findings are: (i) the establishment of stress/strain relationships and an energy balance in order to study the factors affecting redundant work, and hence a model suitable for design purposes; (ii) optimisation of the process, by determination of the extrusion pressure for the range of reductions and changes in the extrusion chamber geometry at lower wheel speeds; and (iii) an understanding of the control of the peak temperature reached during extrusion.
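As a rough illustration of the upper-bound framing (a sketch only; the thesis's actual velocity fields and constitutive fits are not reproduced in the abstract), the extrusion pressure p is bounded by equating the external power supplied to the feed to the sum of the internal dissipation terms:

    % Upper-bound energy balance (illustrative; symbols assumed)
    \[
      p\,A\,v \;\le\;
      \int_V \bar{\sigma}\,\dot{\bar{\varepsilon}}\;dV
      \;+\; \sum_i \int_{S_i} \frac{\bar{\sigma}}{\sqrt{3}}\,|\Delta v_i|\;dS
      \;+\; \int_{S_f} m\,\frac{\bar{\sigma}}{\sqrt{3}}\,|\Delta v_f|\;dS
    \]

Here A and v are the feed cross-section and speed; the three terms are deformation power, shear power across internal velocity discontinuities and friction power; and the flow stress σ̄ would be supplied by the temperature-dependent constitutive equations mentioned above.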

Relevance: 30.00%

Abstract:

Lipid-mobilising factor (LMF) is produced by cachexia-inducing tumours and is involved in the degradation of adipose tissue, with increased oxidation of the released fatty acids through an induction of uncoupling protein (UCP) expression. Since UCP-2 is thought to be involved in the detoxification of free radicals, if LMF induced UCP-2 expression in tumour cells it might attenuate free radical toxicity. As a model system we have used MAC13 tumour cells, which do not produce LMF. Addition of LMF caused a concentration-dependent increase in UCP-2 expression, as determined by immunoblotting. This effect was attenuated by the β3 antagonist SR59230A, suggesting that it was mediated through a β3 adrenoreceptor. Co-incubation of LMF with MAC13 cells reduced the growth-inhibitory effects of bleomycin, paraquat and hydrogen peroxide, which are known free radical generators, but not of chlorambucil, an alkylating agent. LMF alone had no effect on cellular proliferation. These results indicate that LMF antagonises the antiproliferative effect of agents working through a free radical mechanism, and may partly explain the unresponsiveness of cachexia-inducing tumours to chemotherapy. © 2004 Cancer Research UK.

Relevance: 30.00%

Abstract:

PURPOSE: To evaluate the relationship between ocular perfusion pressure and color Doppler measurements in patients with glaucoma. MATERIALS AND METHODS: Twenty patients with primary open-angle glaucoma with visual field deterioration in spite of an intraocular pressure lowered below 21 mm Hg, 20 age-matched patients with glaucoma with stable visual fields, and 20 age-matched healthy controls were recruited. After a 20-minute rest in a supine position, intraocular pressure and color Doppler measurement parameters of the ophthalmic artery and the central retinal artery were obtained. Correlations between mean ocular perfusion pressure and color Doppler measurement parameters were determined. RESULTS: Patients with glaucoma showed a higher intraocular pressure (P < .0008) and a lower mean ocular perfusion pressure (P < .0045) compared with healthy subjects. Patients with deteriorating glaucoma showed a lower mean blood pressure (P = .033) and a lower end diastolic velocity in the central retinal artery (P = .0093) compared with normals. Mean ocular perfusion pressure correlated positively with end diastolic velocity in the ophthalmic artery (R = 0.66, P = .002) and central retinal artery (R = 0.74, P < .0001) and negatively with resistivity index in the ophthalmic artery (R = -0.70, P = .001) and central retinal artery (R = -0.62, P = .003) in patients with deteriorating glaucoma. Such correlations did not occur in patients with glaucoma with stable visual fields or in normal subjects. The correlations were statistically significantly different between the study groups (parallelism of regression lines in an analysis of covariance model) for end diastolic velocity (P = .001) and resistivity index (P = .0001) in the ophthalmic artery, as well as for end diastolic velocity (P = .0009) and resistivity index (P = .001) in the central retinal artery. CONCLUSIONS: The present findings suggest that alterations in ocular blood flow regulation may contribute to the progression of glaucomatous damage.
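For reference, the two derived quantities on which the correlations are built have standard definitions (not restated in the abstract): mean ocular perfusion pressure from mean arterial pressure (MAP) and intraocular pressure (IOP), and the Pourcelot resistivity index from the peak systolic (PSV) and end diastolic (EDV) velocities:

    \[
      \mathrm{mean\ OPP} = \tfrac{2}{3}\,\mathrm{MAP} - \mathrm{IOP}, \qquad
      \mathrm{RI} = \frac{\mathrm{PSV} - \mathrm{EDV}}{\mathrm{PSV}}
    \]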

Relevance: 30.00%

Abstract:

When constructing and using environmental models, it is typical that many of the inputs to the models will not be known perfectly. In some cases it will be possible to make observations, or occasionally to use physics-based uncertainty propagation, to ascertain the uncertainty on these inputs. However, such observations are often either unavailable or impossible, and another approach to characterising the uncertainty on the inputs must be sought. Even when observations are available, if the analysis is being carried out within a Bayesian framework then prior distributions will have to be specified. One option for gathering, or at least estimating, this information is to employ expert elicitation. Expert elicitation is well studied within statistics and psychology, and involves the assessment of the beliefs of a group of experts about an uncertain quantity (for example, an input or parameter within a model), typically in terms of obtaining a probability distribution. One of the challenges in expert elicitation is to minimise the biases that might enter into the judgements made by the individual experts, and then to come to a consensus decision within the group of experts. Effort is made in the elicitation exercise to prevent biases clouding the judgements through well-devised questioning schemes. It is also important that, when reaching a consensus, the experts are exposed to the knowledge of the others in the group. Within the FP7 UncertWeb project (http://www.uncertweb.org/), there is a requirement to build a Web-based tool for expert elicitation. In this paper, we discuss some of the issues of building a Web-based elicitation system - both the technological aspects and the statistical and scientific issues. In particular, we demonstrate two tools: a Web-based system for the elicitation of continuous random variables, and a system designed to elicit uncertainty about categorical random variables in the setting of landcover classification uncertainty. The first of these examples is a generic tool developed to elicit uncertainty about univariate continuous random variables. It is designed to be used within an application context and extends the existing SHELF method, adding a web interface and access to metadata. The tool is developed so that it can be readily integrated with environmental models exposed as web services. The second example was developed for the TREES-3 initiative, which monitors tropical landcover change through ground-truthing at confluence points. It allows experts to validate the accuracy of automated landcover classifications using site-specific imagery and local knowledge. Experts may provide uncertainty information at various levels: from a general rating of their confidence in a site validation to a numerical ranking of the possible landcover types within a segment. A key challenge in the web-based setting is the design of the user interface and the method of interaction between the problem owner and the problem experts. We show the workflow of the elicitation tool, and show how we can represent the final elicited distributions and confusion matrices using UncertML, ready for integration into uncertainty-enabled workflows. We also show how the metadata associated with the elicitation exercise is captured and can be referenced from the elicited result, providing crucial lineage information and thus traceability in the decision making process.
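To make the elicitation step concrete, here is a minimal sketch of the kind of quantile-fitting that SHELF-style tools perform for a univariate continuous quantity. The elicited values and the choice of a beta family are hypothetical, and this is not the UncertWeb implementation:

    # Fit a beta distribution to an expert's elicited quartiles for a probability
    # (sketch; values and distribution family are assumptions, not from the paper).
    import numpy as np
    from scipy import stats, optimize

    elicited_probs = np.array([0.25, 0.50, 0.75])    # cumulative probabilities asked for
    elicited_values = np.array([0.20, 0.30, 0.45])   # expert's quartile judgements (hypothetical)

    def quantile_error(params):
        a, b = np.exp(params)                        # keep shape parameters positive
        return np.sum((stats.beta.ppf(elicited_probs, a, b) - elicited_values) ** 2)

    res = optimize.minimize(quantile_error, x0=np.log([2.0, 2.0]), method="Nelder-Mead")
    a, b = np.exp(res.x)
    print(f"fitted Beta({a:.2f}, {b:.2f}), median = {stats.beta.ppf(0.5, a, b):.3f}")

The fitted distribution (or, for the categorical tool, a confusion matrix) is what would then be serialised with UncertML for downstream workflows.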

Relevance: 30.00%

Abstract:

Combined bioreaction-separation studies have been carried out for the first time on a moving-port semi-continuous counter-current chromatographic reactor-separator (SCCR-S1) consisting of twelve 5.4 cm i.d. x 75 cm long columns packed with calcium-charged cross-linked polystyrene resin (KORELA V07C). The inversion of sucrose to glucose and fructose in the presence of the enzyme invertase, and the biochemical synthesis of dextran and fructose from sucrose in the presence of the enzyme dextransucrase, were investigated. A dilute stream of the appropriate enzyme in deionised water was used as the eluent stream. The effects of switch time, feed concentration, enzyme activity, eluent rate and enzyme-to-feed concentration ratio on the combined bioreaction-separation were investigated. For the invertase reaction, complete conversions were achieved at 20.77% w/v sucrose feed concentration. The enzyme usage was 34% of the theoretical enzyme amount needed to convert an equivalent amount of sucrose over the same time period when using a conventional fermenter. The fructose-rich (FRP) and glucose-rich (GRP) product purities obtained were over 90%. By operating at 35% w/v sucrose feed concentration and employing product splitting and recycling techniques, the total concentration and purity of the GRP increased from 3.2% w/v to 4.6% w/v and from 92.3% to 95%, respectively. The FRP concentration also increased from 1.82% w/v to 2.88% w/v. A mathematical model was developed for the combined reaction-separation and used to simulate the continuous inversion of sucrose and product separation in the SCCR-S1. In the biosynthesis of dextran studies, 52% conversion of a 2% w/v sucrose feed was achieved. An average dextran molecular weight of 4 million was obtained in the dextran-rich (DRP) product stream. The enzyme dextransucrase was purified successfully using centrifugation and ultrafiltration techniques.
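One plausible form for the column model underlying such a reaction-separation simulation (a sketch only; the thesis's actual model equations are not given in the abstract) couples axially dispersed plug flow with adsorption and Michaelis-Menten kinetics for the invertase reaction:

    % Illustrative component balance for species i in one column segment
    \[
      \frac{\partial C_i}{\partial t} + u\,\frac{\partial C_i}{\partial z}
      = D_i\,\frac{\partial^2 C_i}{\partial z^2}
      - \frac{1-\epsilon}{\epsilon}\,\frac{\partial q_i}{\partial t}
      + \nu_i\,\frac{V_{max}\,C_S}{K_M + C_S}
    \]

Here C_i and q_i are the liquid- and solid-phase concentrations, ε the bed voidage, C_S the sucrose concentration, and ν_i the stoichiometric coefficient (-1 for sucrose, +1 for glucose and fructose); the periodic port switching of the SCCR-S1 would be imposed as a boundary-condition schedule.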

Relevance: 30.00%

Abstract:

This research initiates a study of the mechanics of four-roll plate bending and provides a methodology for investigating the process experimentally. To carry out the research a suitable model bender was designed and constructed. The model bender was comprehensively instrumented with ten load cells, three torquemeters and a tachometer. A rudimentary analysis of the four-roll pre-bending mode considered the three critical bending operations. The analysis also gave an assessment of the model bender capacity for the design stage. The analysis indicated that an increase in the coefficient of friction in the contact region of the pinch rolls and the plate would reduce the resultant pinch force required to bend a plate to a particular bend radius. The mechanisms involved in the four-roll plate bending process were investigated, and a mathematical model evolved to determine the mechanics of four-roll thin plate bending. A theoretical and experimental investigation was conducted for the bending of HP30 aluminium plates in both single-pass and multipass bending modes. The study indicated that the multipass plate bending mechanics of the process varied according to the number of bending passes executed and the step decrement of the anticipated finished bend radius in any two successive passes (i.e. the bending route). Experimental results for single-pass bending indicated that the rollers normally exert a higher bending load for steady-continuous bending with the pre-inactive side roll operative. For the pre-bending mode and the steady-continuous bending mode with the pre-active side roll operative, the former exerted the higher loads. The single-pass results also indicated that the force on the side roll, the torque and the power steadily increased as the anticipated bend radius decreased. Theoretical predictions of the plate internal resistance for finished bend radii of between 2500 mm and 500 mm in multipass bending of HP30 aluminium plates suggested that there is a certain bending route which would effectively optimise the bender capacity.
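As a pointer to the mechanics involved (an illustrative simplification under elastic-perfectly-plastic assumptions, not the thesis's model), the loaded and final bend radii in thin-plate bending are linked through elastic springback, and the limiting moment per unit width follows from the yield stress:

    % Springback and fully plastic moment, per unit width (illustrative)
    \[
      \frac{1}{R_f} = \frac{1}{R_l} - \frac{M_l}{E'\,I}, \qquad
      M_p = \frac{\sigma_y\, t^2}{4}
    \]

with R_l and R_f the loaded and unloaded radii, M_l the applied moment per unit width, E' the plane-strain modulus E/(1-ν²), I = t³/12 and t the plate thickness. A multipass bending route can then be read as a sequence of curvature decrements, each subject to its own springback.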

Relevance: 30.00%

Abstract:

This thesis considers two basic aspects of impact damage in composite materials, namely damage severity discrimination and impact damage location, using Acoustic Emission (AE) and Artificial Neural Networks (ANNs). The experimental work embodies a study of such factors as the application of AE as a non-destructive damage testing (NDT) technique and the evaluation of ANN modelling, with ANNs playing the central role in the modelling implementation. In the first aspect of the study, different impact energies were used to produce different levels of damage in two composite materials (T300/914 and T800/5245). The impacts were detected by their acoustic emissions (AE). The AE waveform signals were analysed and modelled using a Back Propagation (BP) neural network model, and the Mean Square Error (MSE) of the output was then used as a damage indicator in the damage severity discrimination study. To evaluate the ANN model, a comparison was made of the correlation coefficients of different parameters, such as MSE, AE energy and AE counts; MSE produced an outstanding result, giving the best correlation. In the second aspect, a new artificial neural network model was developed to provide impact damage location on a quasi-isotropic composite panel. It was successfully trained to locate impact sites by correlating the arrival-time differences of AE signals at transducers located on the panel with the impact site coordinates. The performance of the ANN model, which was evaluated by calculating the distance deviation between the model output and the real location coordinates, supports the application of ANNs as impact damage location identifiers. In the study, the accuracy of location prediction decreased when approaching the central area of the panel. Further investigation indicated that this is due to the small arrival-time differences there, which degrade the performance of the ANN prediction; this research suggested increasing the number of processing neurons in the ANNs as a practical solution.
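To illustrate the geometric relationship the location network is trained to invert (a sketch with hypothetical sensor positions and wave speed, not the thesis code), the forward map from impact site to arrival-time differences can be written down directly; even a brute-force inversion shows why small time differences near the panel centre are hard to resolve:

    # Arrival-time differences of an AE wave at panel transducers (assumed geometry).
    import numpy as np

    SENSORS = np.array([[0.0, 0.0], [0.5, 0.0], [0.0, 0.5], [0.5, 0.5]])  # m (hypothetical)
    WAVE_SPEED = 1500.0  # m/s, assumed group velocity in the laminate

    def arrival_time_diffs(impact_xy):
        """Time differences relative to the first-hit sensor for a given impact site."""
        t = np.linalg.norm(SENSORS - impact_xy, axis=1) / WAVE_SPEED
        return t - t.min()

    def locate(dt_measured, grid_n=200):
        """Brute-force inverse: grid-search the site whose predicted diffs fit best."""
        xs = np.linspace(0.0, 0.5, grid_n)
        best, best_err = None, np.inf
        for x in xs:
            for y in xs:
                err = np.sum((arrival_time_diffs(np.array([x, y])) - dt_measured) ** 2)
                if err < best_err:
                    best, best_err = (x, y), err
        return best

    print(locate(arrival_time_diffs(np.array([0.12, 0.34]))))  # recovers ~(0.12, 0.34)

At the panel centre all four distances are equal, so the time differences vanish and the inverse problem becomes ill-conditioned, consistent with the loss of accuracy reported above.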

Relevance: 30.00%

Abstract:

Much research is currently centred on the detection of damage in structures using vibrational data. The work presented here examined several areas of interest in support of a practical technique for identifying and locating damage within bridge structures, using apparent changes in their vibrational response to known excitation. The proposed goals of such a technique included the need for the measurement system to be operated on site by a minimum number of staff, and that the procedure should be as non-invasive to the bridge traffic flow as possible. Initially the research investigated changes in the vibrational bending characteristics of two series of large-scale model bridge beams in the laboratory, including ordinary-reinforced and post-tensioned, prestressed designs. Each beam was progressively damaged at predetermined positions and its vibrational response to impact excitation was analysed. For the load regime utilised, the results suggested that the induced damage manifested itself as a function of the span of a beam rather than as a localised area. A power law relating apparent damage to the applied loading and prestress levels was then proposed, together with a qualitative vibrational measure of structural damage. In parallel with the laboratory experiments, a series of tests was undertaken at the sites of a number of highway bridges. The bridges selected had differing types of construction and geometric design, including composite-concrete, concrete slab-and-beam, and concrete-slab-with-supporting-steel-troughing constructions, together with regular-rectangular, skewed and heavily-skewed geometries. Initial investigations were made of the feasibility and reliability of various methods of structure excitation, including traffic and impulse methods. It was found that localised impact using a sledgehammer was ideal for the purposes of this work, and that a cartridge 'bolt-gun' could be used in some specific cases.

Relevance: 30.00%

Abstract:

The aim of this work was to investigate the feasibility of detecting and locating damage in large frame structures where visual inspection would be difficult or impossible. The method is based on a vibration technique for non-destructively assessing the integrity of structures using measurements of changes in their natural frequencies. Such measurements can be made at a single point in the structure. The method requires that a comprehensive theoretical vibration analysis of the structure is first undertaken, and from it predictions are made of the changes in dynamic characteristics that will occur if each member of the structure is damaged in turn. The natural frequencies of the undamaged structure are measured, and then routinely remeasured at intervals. If a change in the natural frequencies is detected, a statistical method is used to make the best match between the measured changes in frequency and the family of theoretical predictions; this match predicts the most likely damage site. The theoretical analysis was based on the finite element method. Many structures were extensively studied, and a computer model was used to simulate the effect of the extent and location of the damage on the natural frequencies. Only one such analysis is required for each structure to be investigated. The experimental study was conducted on small structures in the laboratory. Frequency changes were found from inertance measurements on various plane and space frames. The computational requirements of the location analysis are small, and a desk-top microcomputer was used. The results of this work showed that the method was successful in detecting and locating damage in the test structures.
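The matching step lends itself to a compact illustration. A minimal sketch of the pattern-matching idea (hypothetical numbers; the thesis's exact statistical method is not specified in the abstract) exploits the fact that the pattern of frequency changes across modes depends on damage location but, to first order, not on its severity:

    # Match measured natural-frequency changes against FE-predicted change
    # patterns for damage in each member; the best match indicates the site.
    import numpy as np

    # Rows: candidate damaged member; columns: predicted fractional frequency
    # change of each measured mode (hypothetical FE predictions).
    predicted = np.array([
        [0.010, 0.002, 0.006],   # member 1 damaged
        [0.001, 0.012, 0.003],   # member 2 damaged
        [0.004, 0.004, 0.011],   # member 3 damaged
    ])
    measured = np.array([0.0012, 0.0140, 0.0033])  # measured fractional changes

    def match_error(pred, meas):
        # Severity is unknown, so compare patterns: scale each prediction to the
        # measurement by least squares, then take the residual norm.
        scale = pred @ meas / (pred @ pred)
        return np.linalg.norm(meas - scale * pred)

    errors = [match_error(p, measured) for p in predicted]
    print("most likely damage site: member", int(np.argmin(errors)) + 1)

Because only one theoretical analysis per structure is needed and the search is a small loop over members, the modest computing budget of a desk-top machine is plausible.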

Relevance: 30.00%

Abstract:

Product reliability and environmental performance have become critical elements of a product's specification and design. To obtain a high level of confidence in the reliability of a design, it is customary to test the design under realistic conditions in a laboratory. The objective of the work is to examine the feasibility of designing mechanical test rigs which exhibit prescribed dynamical characteristics; the design is then attached to the rig and excitation is applied to the rig, which transmits representative vibration levels into the product. The philosophical considerations made at the outset of the project are discussed, as they form the basis for the resulting design methodologies. An attempt is first made to identify the parameters of a test rig directly from the spatial model derived during the system identification process; it is shown to be impossible to identify a feasible test rig design using this technique. A finite-dimensional optimal design methodology is therefore developed which identifies the parameters of a discrete spring/mass system that is dynamically similar to a point coordinate on a continuous structure. This design methodology is incorporated within another procedure which derives a structure comprising a continuous element and a discrete system. This methodology is used to obtain point-coordinate similarity for two planes of motion, and is validated by experimental tests. A limitation of this approach is that it is impossible to achieve multi-coordinate similarity, due to an interaction of the discrete system and the continuous element at points away from the coordinate of interest. During the work the importance of the continuous element is highlighted, and a design methodology is developed for continuous structures. This methodology is based upon distributed-parameter optimal design techniques and allows an initial poor design estimate to be moved in a feasible direction towards an acceptable design solution. Cumulative damage theory is used to provide a quantitative method of assessing the quality of dynamic similarity. It is shown that the combination of modal analysis techniques and cumulative damage theory provides a feasible design synthesis methodology for representative test rigs.
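The cumulative damage measure referred to is presumably of the Palmgren-Miner type; a sketch of how such a measure quantifies dynamic similarity (the thesis's exact formulation is not given in the abstract):

    % Palmgren-Miner cumulative damage (standard form)
    \[
      D = \sum_i \frac{n_i}{N_i}
    \]

where n_i is the number of stress cycles the vibration response places in amplitude band i and N_i the cycles to failure at that amplitude from the material's S-N curve. Rig and service environments can then be compared quantitatively through the ratio of their predicted D values at the coordinate of interest.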

Relevance: 30.00%

Abstract:

This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable, or independent, variables in the cost expressions to be minimised. The four systems considered are referred to as (Q, R), (nQ, R, T), (M, T) and (M, R, T). With (Q, R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus stock on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ, R, T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M, T), each order increases the order cover to M. Finally, in (M, R, T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q, R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q, R) system, only exact models were derived for the other three systems. Several methods of optimisation were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. The shortage cost is a function of three items, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma-distributed, and the actual supply quantity is also allowed to be randomly distributed. All the sets of equations were programmed for a KDF 9 computer, and the computed performances of the four inventory control procedures are compared under each assumption.
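As an illustration of the first of these policies (a toy discrete-time sketch with hypothetical parameters; the thesis works with continuous demand and explicit cost expressions), the (Q, R) rule reduces to a few lines:

    # Toy simulation of a continuous-review (Q, R) policy with backorders
    # (parameters are assumptions, not values from the thesis).
    import random

    Q, R = 60, 25                  # order quantity and re-order level
    LEAD_TIME = 3                  # periods (the thesis also treats gamma-distributed leads)
    inventory, on_order = 50, []   # on_order holds (arrival_period, quantity) pairs

    random.seed(1)
    for t in range(50):
        # Receive any orders due this period.
        inventory += sum(q for (due, q) in on_order if due == t)
        on_order = [(due, q) for (due, q) in on_order if due != t]

        # Truncated-normal demand, never negative, as assumed in the thesis.
        demand = max(0, int(random.gauss(10, 4)))
        inventory -= demand        # negative inventory = backorders, not lost sales

        # (Q, R) rule: order Q whenever order cover falls to R or below.
        cover = inventory + sum(q for (_, q) in on_order)
        if cover <= R:
            on_order.append((t + LEAD_TIME, Q))

    print("end inventory:", inventory)

The other three policies differ only in the trigger (review every T periods) and in the order size (a multiple nQ, or a top-up to the level M), so the same skeleton covers all four systems.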