893 results for "Two-stage MCMC method"
Abstract:
Corrosion of reinforcing steel in concrete due to chloride ingress is one of the main causes of deterioration of reinforced concrete structures. The structures most affected by such corrosion are buildings in marine zones and structures exposed to de-icing salts, such as highways and bridges. The process is accompanied by an increase in the volume of the corrosion products at the rebar-concrete interface. Depending on the level of oxidation, iron can expand to as much as six times its original volume. This increase in volume exerts tensile stresses on the surrounding concrete, which result in cracking and spalling of the concrete cover once the concrete tensile strength is exceeded. The mechanism by which steel embedded in concrete corrodes in the presence of chloride is the local breakdown of the passive layer formed under the highly alkaline conditions of the concrete. Corrosion is assumed to initiate when a critical chloride content reaches the rebar surface. The mathematical formulation idealizes the corrosion sequence as a two-stage process: an initiation stage, during which chloride ions penetrate to the reinforcing steel surface and depassivate it, and a propagation stage, in which active corrosion takes place until cracking of the concrete cover occurs. The aim of this research is to develop computer tools to evaluate the duration of the service life of reinforced concrete structures, considering both the initiation and propagation periods. Such tools must offer a friendly interface to facilitate their use by researchers whose background is not in numerical simulation. For the evaluation of the initiation period, different tools have been developed. Program TavProbabilidade provides the means to carry out a probability analysis of a chloride ingress model. Such a tool is necessary because of the lack of data and the general uncertainties associated with the phenomenon of chloride diffusion.
It differs from the deterministic approach because it computes not just a chloride profile at a certain age, but a range of chloride profiles, each with a probability of occurrence. Program TavProbabilidade_Fiabilidade carries out reliability analyses of the initiation period. It takes into account the critical value of the chloride concentration at the steel that causes breakdown of the passive layer and the beginning of the propagation stage. It differs from the deterministic analysis in that it does not predict whether corrosion is going to begin, but quantifies the probability of corrosion initiation. Program TavDif_1D was created to perform a one-dimensional deterministic analysis of the chloride diffusion process by the finite element method (FEM), which numerically solves Fick's second law. Although various one-dimensional FEM solvers already exist, the decision to create a new code (TavDif_1D) was taken because of the need for a solver with a friendly pre- and post-processing interface tailored to the needs of IETCC. An innovative tool was also developed, with a systematic method devised to compare the ability of different 1D models to predict the actual evolution of chloride ingress based on experimental measurements, and also to quantify the degree of agreement of the models with each other. For the evaluation of the entire service life of the structure, a computer program has been developed using the finite element method to couple both service life periods, initiation and propagation. The 2D program (TavDif_2D) allows the complementary use of two external programs in a single friendly interface: • GMSH, a finite element mesh generator and post-processing viewer; • OOFEM, a finite element solver.
This program (TavDif_2D) is responsible for deciding, at each time step, when and where to start applying the boundary conditions of the fracture mechanics module as a function of the chloride concentration and corrosion parameters (Icorr, etc.). It is also responsible for verifying the presence and degree of fracture in each element, in order to pass on the variation of the diffusion coefficient with the crack width. The advantages of the FEM with the interface provided by the tool are: • the flexibility to input data, such as material properties and boundary conditions, as time-dependent functions; • the flexibility to predict the chloride concentration profile for different geometries; • the possibility to couple chloride diffusion (initiation stage) with chemical and mechanical behaviour (propagation stage). The OOFEM code had to be modified to accept temperature, humidity and time-dependent values for the material properties, which is necessary to adequately describe the environmental variations. A 3-D simulation has been performed to simulate the behaviour of a beam under both the action of the external load and the internal load caused by the corrosion products, using embedded-fracture elements, in order to plot the curve of the deflection of the central region of the beam versus the external load and compare it with the experimental data.
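The initiation-stage analysis above reduces to solving Fick's second law, dC/dt = D * d²C/dx². The sketch below is not the TavDif_1D code (which uses FEM); it is a minimal explicit finite-difference illustration of the same equation, with assumed values for the diffusion coefficient and surface chloride concentration.

```python
import numpy as np

def chloride_profile(D, C_s, depth, t_end, nx=101, safety=0.4):
    """Explicit finite-difference solution of Fick's second law,
    dC/dt = D * d2C/dx2, for a slab of thickness `depth` that is
    initially chloride-free and has a constant surface concentration C_s."""
    dx = depth / (nx - 1)
    dt = safety * dx * dx / D            # stability requires D*dt/dx^2 <= 1/2
    nt = int(t_end / dt)
    C = np.zeros(nx)
    C[0] = C_s                           # exposed surface
    for _ in range(nt):
        C[1:-1] += D * dt / dx**2 * (C[2:] - 2 * C[1:-1] + C[:-2])
        C[0] = C_s                       # Dirichlet boundary at the surface
        C[-1] = C[-2]                    # zero-flux boundary at the inner face
    return C

# Assumed example values: D = 1e-12 m^2/s, 10-year exposure, 50 mm cover
C = chloride_profile(D=1e-12, C_s=0.6, depth=0.05, t_end=10 * 365.25 * 86400)
```

Comparing a profile like this against the critical chloride content at the rebar depth is what turns the diffusion solution into an estimate of the initiation period.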
Abstract:
Hybrid stepper motors are widely used in open-loop positioning applications. They are the actuators of choice for the collimators in the Large Hadron Collider, the largest particle accelerator, at CERN. In this case the positioning requirements and the highly radioactive operating environment are unique. The latter both forces the use of long cables, which act as transmission lines, to connect the motors to the drives, and prevents the use of standard position sensors. However, reliable and precise operation of the collimators is critical for the machine, requiring the prevention of step loss in the motors and the ability to foresee maintenance in case of mechanical degradation. To make the above possible, an approach is proposed for the application of an Extended Kalman Filter to a sensorless stepper motor drive in which the motor is separated from its drive by long cables. When long cables and high-frequency pulse-width-modulated control voltage signals are used together, the electrical signals differ greatly between the motor and drive sides of the cable. Since in the considered case only drive-side data are available, it is necessary to estimate the motor-side signals. Modelling the entire cable and motor system in an Extended Kalman Filter is too computationally intensive for standard embedded real-time platforms. It is therefore proposed to divide the problem into an Extended Kalman Filter based only on the motor model and separate motor-side signal estimators, the combination of which is less computationally demanding. The effectiveness of this approach is shown in simulation, and its validity is then demonstrated experimentally via implementation in a DSP-based drive. A testbench for evaluating its performance when driving an axis of a Large Hadron Collider collimator is presented along with the results achieved.
It is shown that the proposed method is capable of achieving position and load torque estimates which allow step loss to be detected and mechanical degradation to be evaluated without the need for physical sensors. These estimation algorithms require a precise model of the motor, but the standard electrical model used for hybrid stepper motors is limited when currents high enough to saturate the magnetic circuit are present. New model extensions are proposed in order to obtain a more precise model of the motor independently of the current level, whilst maintaining a low computational cost. It is shown that a significant improvement in the model is achieved with these extensions, and their computational performance is compared to study model improvement versus computational cost. The applicability of the proposed model extensions is demonstrated via their use in an Extended Kalman Filter running in real time for closed-loop current control and mechanical state estimation. An additional problem arises from the use of stepper motors. The mechanics of the collimators can wear due to the abrupt motion and torque profiles applied when the motors are used in the standard way, i.e. stepping in open loop. Closed-loop position control, more specifically field-oriented control, would allow smoother profiles, gentler on the mechanics, to be applied, but requires position feedback. As mentioned already, the use of sensors in radioactive environments is very limited for reliability reasons. Sensorless control is a known option, but when the speed is very low or zero, as is the case most of the time for the motors used in the LHC collimators, the loss of observability prevents its use.
In order to allow the use of position sensors without reducing the long-term reliability of the whole system, the possibility of switching between closed and open loop is proposed and validated, allowing closed-loop control when the position sensors function correctly and open-loop control when there is a sensor failure. A different approach to dealing with a switched drive working over long cables is also presented. Switched-mode stepper motor drives tend to perform poorly, or even fail completely, when the motor is fed through a long cable, due to large oscillations in the drive-side current. The design of a stepper motor output filter which solves this problem is thus proposed. A two-stage filter, one stage devoted to the differential mode and the other to the common mode, is designed and validated experimentally. With this filter the drive performance is greatly improved, achieving a positioning repeatability even better than that of the drive working without a long cable; the radiated emissions are reduced and the overvoltages at the motor terminals are eliminated.
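The estimation core described above is the standard Extended Kalman Filter predict/update cycle. The sketch below is a generic illustration of that cycle applied to a toy position/velocity tracking problem; it is not the thesis's motor-model EKF, and all numerical values (noise covariances, time step, speed) are assumptions.

```python
import numpy as np

def ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R):
    """One predict/update cycle of a discrete Extended Kalman Filter.
    x: state estimate, P: covariance, z: measurement,
    f/h: process and measurement models, F_jac/H_jac: their Jacobians."""
    # Predict
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q
    # Update
    H = H_jac(x_pred)
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy demo: estimate position and speed from noisy position measurements.
dt = 0.01
f = lambda x: np.array([x[0] + dt * x[1], x[1]])
F_jac = lambda x: np.array([[1.0, dt], [0.0, 1.0]])
h = lambda x: x[:1]
H_jac = lambda x: np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[1e-2]])
x, P = np.array([0.0, 0.0]), np.eye(2)
rng = np.random.default_rng(0)
for k in range(500):
    truth = np.array([k * dt * 2.0, 2.0])    # constant speed of 2 units/s
    z = truth[:1] + rng.normal(0.0, 0.1, 1)  # noisy position measurement
    x, P = ekf_step(x, P, z, f, F_jac, h, H_jac, Q, R)
```

In the thesis's setting the state would include rotor position and load torque, and `z` would be the estimated motor-side currents rather than a direct position measurement.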
Abstract:
A fast, simple and environmentally friendly ultrasound-assisted dispersive liquid-liquid microextraction (USA-DLLME) procedure has been developed to preconcentrate eight cyclic and linear siloxanes from wastewater samples prior to quantification by gas chromatography-mass spectrometry (GC-MS). A two-stage multivariate optimization approach has been developed, employing a Plackett-Burman design for screening and selecting the significant factors involved in the USA-DLLME procedure, which was later optimized by means of a circumscribed central composite design. The optimum conditions were: extractant solvent volume, 13 µL; solvent type, chlorobenzene; sample volume, 13 mL; centrifugation speed, 2300 rpm; centrifugation time, 5 min; and sonication time, 2 min. Under the optimized experimental conditions the method gave levels of repeatability with coefficients of variation between 10 and 24% (n=7). Limits of detection were between 0.002 and 1.4 µg L−1. Calculated calibration curves gave high levels of linearity, with correlation coefficient values between 0.991 and 0.9997. Finally, the proposed method was applied to the analysis of wastewater samples. Relative recovery values ranged between 71 and 116%, showing that the matrix had a negligible effect upon extraction. To our knowledge, this is the first time that DLLME and GC-MS have been combined for the analysis of methylsiloxanes in wastewater samples.
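The screening stage of this kind of two-stage optimization can be illustrated with the classic 12-run Plackett-Burman design for up to 11 two-level factors, built by cyclically shifting the standard generator row. This is a generic textbook construction, not code from the paper.

```python
import numpy as np

def plackett_burman_12():
    """Classic 12-run Plackett-Burman screening design for up to 11
    two-level factors (+1/-1): cyclic shifts of the standard generator
    row, plus a final row of all -1. Columns are mutually orthogonal."""
    gen = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
    rows = [np.roll(gen, k) for k in range(11)]
    rows.append(-np.ones(11, dtype=int))
    return np.array(rows)

design = plackett_burman_12()
```

Each row is one screening run; main effects are estimated by contrasting the responses at the +1 and -1 levels of each column, after which only the significant factors move on to the central composite design.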
Abstract:
A novel and environmentally friendly analytical method is reported for total chromium determination and chromium speciation in water samples, whereby tungsten coil atomic emission spectrometry (WCAES) is combined with in situ ionic liquid formation dispersive liquid-liquid microextraction (in situ IL-DLLME). A two-stage multivariate optimization approach has been developed, employing a Plackett-Burman design for screening and selection of the significant factors involved in the in situ IL-DLLME procedure, which was later optimized by means of a circumscribed central composite design. The optimum conditions were: complexant concentration, 0.5% (or 0.1%); complexant type, DDTC; IL anion, PF6−; [Hmim][Cl] IL amount, 60 mg; ionic strength, 0% NaCl; pH, 5 (or 2); centrifugation time, 10 min; and centrifugation speed, 1000 rpm. Under the optimized experimental conditions the method was evaluated and proper linearity was obtained, with a correlation coefficient of 0.991 (5 calibration standards). Limits of detection and quantification for both chromium species were 3 and 10 µg L−1, respectively. This is a 233-fold improvement when compared with chromium determination by WCAES without preconcentration. The repeatability of the proposed method was evaluated at two different spiking levels (10 and 50 µg L−1), obtaining coefficients of variation of 11.4% and 3.6% (n=3), respectively. A certified reference material (SRM-1643e NIST) was analyzed in order to determine the accuracy of the method for total chromium determination; 112.3% and 2.5 µg L−1 were the recovery (trueness) and standard deviation values, respectively. Tap, bottled mineral and natural mineral water samples were analyzed at a 60 µg L−1 spiking level of total Cr content at two Cr(VI)/Cr(III) ratios, and relative recovery values ranged between 88% and 112%, showing that the matrix has a negligible effect. To our knowledge, this is the first time that in situ IL-DLLME and WCAES have been combined.
Abstract:
In this paper we construct implicit stochastic Runge-Kutta (SRK) methods for solving stochastic differential equations of Stratonovich type. Instead of using the increment of a Wiener process, modified random variables are used. We give convergence conditions of the SRK methods with these modified random variables. In particular, the truncated random variable is used. We present a two-stage stiffly accurate diagonal implicit SRK (SADISRK2) method with strong order 1.0 which has better numerical behaviour than extant methods. We also construct a five-stage diagonal implicit SRK method and a six-stage stiffly accurate diagonal implicit SRK method with strong order 1.5. The mean-square and asymptotic stability properties of the trapezoidal method and the SADISRK2 method are analysed and compared with an explicit method and a semi-implicit method. Numerical results are reported for confirming convergence properties and for comparing the numerical behaviour of these methods.
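As an illustration of combining an implicit scheme with truncated random increments, the sketch below applies a drift- and diffusion-implicit midpoint rule (not the SADISRK2 method itself) to a linear Stratonovich test equation, for which the exact solution along the same Brownian path is known. The truncation bound is an assumed choice of the kind used to keep the implicit step well defined.

```python
import numpy as np

def implicit_midpoint_stratonovich(x0, a, b, T, n, rng):
    """Implicit midpoint method for the linear Stratonovich SDE
    dX = a*X dt + b*X o dW, using truncated Gaussian increments so the
    implicit update cannot become singular. Returns the numerical
    endpoint and the exact endpoint along the same Brownian path."""
    h = T / n
    A = np.sqrt(4.0 * abs(np.log(h)))    # assumed truncation bound
    x, w = x0, 0.0
    for _ in range(n):
        xi = np.clip(rng.standard_normal(), -A, A)  # truncated N(0,1)
        dw = np.sqrt(h) * xi
        w += dw
        # Midpoint rule: X_{n+1} = X_n + (a*h + b*dW) * (X_n + X_{n+1})/2
        g = 0.5 * (a * h + b * dw)
        x = x * (1.0 + g) / (1.0 - g)
    # Exact Stratonovich solution: X(T) = X0 * exp(a*T + b*W(T))
    return x, x0 * np.exp(a * T + b * w)

rng = np.random.default_rng(1)
x_num, x_exact = implicit_midpoint_stratonovich(1.0, -1.0, 0.5, 1.0, 1000, rng)
```

The midpoint rule converges to the Stratonovich solution, which is why no Itô-Stratonovich drift correction appears in the update.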
Abstract:
Objective: Existing evidence suggests that vocational rehabilitation services, in particular individual placement and support (IPS), are effective in assisting people with schizophrenia and related conditions to gain open employment. Despite this, such services are not available to all unemployed people with schizophrenia who wish to work. Existing evidence suggests that while IPS confers no clinical advantages over routine care, it does improve the proportion of people returning to employment. The objective of the current study is to investigate the net benefit of introducing IPS services into current mental health services in Australia. Method: The net benefit of IPS is assessed from a health sector perspective using cost-benefit analysis. A two-stage approach is taken to the assessment of benefit. The first stage involves a quantitative analysis of the net benefit, defined as the benefits of IPS (comprising transfer payments averted, income tax accrued and individual income earned) minus the costs. The second stage involves application of 'second-filter' criteria (including equity, strength of evidence, feasibility and acceptability to stakeholders) to the results. The robustness of the results is tested using multivariate probabilistic sensitivity analysis. Results: The costs of IPS are $A10.3M (95% uncertainty interval $A7.4M-$A13.6M) and the benefits are $A4.7M ($A3.1M-$A6.5M), resulting in a negative net benefit of $A5.6M ($A3.4M-$A8.4M). Conclusions: The current analysis suggests that IPS costs are greater than the monetary benefits. However, the evidence base of the current analysis is weak. Structural conditions surrounding welfare payments in Australia create disincentives to full-time employment for people with disabilities.
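The probabilistic sensitivity analysis can be sketched as a Monte Carlo simulation of the net benefit. The gamma distributions below are hypothetical, chosen only so that their means match the reported point estimates; the study's actual input distributions are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical input distributions (values in $A millions) whose means
# match the reported point estimates: costs A$10.3M, benefits A$4.7M.
costs = rng.gamma(shape=40.0, scale=10.3 / 40.0, size=n)
benefits = rng.gamma(shape=30.0, scale=4.7 / 30.0, size=n)

net_benefit = benefits - costs                   # one draw per simulation
lo, hi = np.percentile(net_benefit, [2.5, 97.5]) # 95% uncertainty interval
p_positive = (net_benefit > 0).mean()            # P(intervention pays off)
```

Reporting the full distribution of `net_benefit`, rather than a single point estimate, is what distinguishes the probabilistic analysis from the deterministic one.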
Abstract:
The crystallization behavior and crystallization kinetics of (Cu60Zr30Ti10)99Sn1 bulk metallic glass were studied by X-ray diffractometry and differential scanning calorimetry. It was found that a two-stage crystallization took place during continuous heating of the bulk metallic glass. Both the glass transition temperature Tg and the crystallization peak temperatures Tp displayed a strong dependence on the heating rate. The activation energy was determined by the Kissinger analysis method. In the first stage of crystallization, the transformation of the bulk metallic glass to phase one occurred with an activation energy of 386 kJ/mol; in the second stage, the formation of phase two took place with an activation energy of 381 kJ/mol.
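The Kissinger method extracts the activation energy from the shift of the DSC peak temperature Tp with heating rate β, via ln(β/Tp²) = −Ea/(R·Tp) + const. A minimal sketch, checked on synthetic data generated from the reported 386 kJ/mol value (the intercept and peak temperatures below are invented for the check):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def kissinger_activation_energy(heating_rates, peak_temps):
    """Estimate the activation energy (J/mol) from the Kissinger relation
    ln(beta / Tp^2) = -Ea/(R*Tp) + const, by linear regression of
    ln(beta/Tp^2) against 1/Tp."""
    beta = np.asarray(heating_rates, dtype=float)
    Tp = np.asarray(peak_temps, dtype=float)
    slope, _ = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
    return -slope * R

# Synthetic check: generate heating rates exactly consistent with
# Ea = 386 kJ/mol (intercept C and temperatures are hypothetical).
Ea_true = 386e3
C = 50.0
Tp = np.array([740.0, 745.0, 750.0, 755.0])      # K, illustrative
beta = Tp**2 * np.exp(C - Ea_true / (R * Tp))    # K/s
Ea_est = kissinger_activation_energy(beta, Tp)
```

With real DSC data one would pass the measured heating rates and the corresponding peak temperatures for each crystallization stage separately.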
Abstract:
Background: Lean bodyweight (LBW) has been recommended for scaling drug doses. However, the current methods for predicting LBW are inconsistent at extremes of size and could be misleading with respect to interpreting weight-based regimens. Objective: The objective of the present study was to develop a semi-mechanistic model to predict fat-free mass (FFM) from subject characteristics in a population that includes extremes of size. FFM is considered to closely approximate LBW. There are several reference methods for assessing FFM, whereas there are no reference standards for LBW. Patients and methods: A total of 373 patients (168 male, 205 female) were included in the study. These data arose from two populations. Population A (index dataset) contained anthropometric characteristics, FFM estimated by dual-energy x-ray absorptiometry (DXA, a reference method) and bioelectrical impedance analysis (BIA) data. Population B (test dataset) contained the same anthropometric measures and FFM data as population A, but excluded BIA data. The patients in population A had a wide range of age (18-82 years), bodyweight (40.7-216.5 kg) and BMI values (17.1-69.9 kg/m²). Patients in population B had BMI values of 18.7-38.4 kg/m². A two-stage semi-mechanistic model to predict FFM was developed from the demographics of population A. In stage 1 a model was developed to predict impedance, and in stage 2 a model that incorporated the predicted impedance was used to predict FFM. These two models were combined to provide an overall model to predict FFM from patient characteristics. The developed model for FFM was externally evaluated by predicting into population B. Results: The semi-mechanistic model to predict impedance incorporated sex, height and bodyweight. The developed model provides a good predictor of impedance for both males and females (r² = 0.78, mean error [ME] = 2.30 × 10⁻³, root mean square error [RMSE] = 51.56 [approximately 10% of mean]).
The final model for FFM incorporated sex, height and bodyweight. The developed model for FFM provided good predictive performance for both males and females (r² = 0.93, ME = -0.77, RMSE = 3.33 [approximately 6% of mean]). In addition, the model accurately predicted the FFM of subjects in population B (r² = 0.85, ME = -0.04, RMSE = 4.39 [approximately 7% of mean]). Conclusions: A semi-mechanistic model has been developed to predict FFM (and therefore LBW) from easily accessible patient characteristics. This model has been prospectively evaluated and shown to have good predictive performance.
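The two-stage structure (predict impedance from sex, height and bodyweight, then feed the prediction into an FFM model) can be sketched as follows. All coefficients are hypothetical placeholders, not the fitted values from the study; only the height²/impedance mechanism follows the standard bioelectrical-impedance rationale.

```python
def predict_impedance(sex, height_cm, weight_kg):
    """Stage 1: semi-mechanistic impedance prediction. Treating the body
    as a conductor of length ~height and volume ~weight gives
    Z proportional to height^2 / weight; the per-sex constants here are
    hypothetical, not the study's fitted values."""
    c = 0.65 if sex == "male" else 0.78
    return c * height_cm**2 / weight_kg          # ohms (illustrative)

def predict_ffm(sex, height_cm, weight_kg):
    """Stage 2: FFM regression on the classic BIA covariate height^2/Z
    plus bodyweight. Coefficients are hypothetical placeholders."""
    z = predict_impedance(sex, height_cm, weight_kg)
    a, b, c0 = (0.35, 0.10, 2.0) if sex == "male" else (0.32, 0.10, 1.0)
    return a * height_cm**2 / z + b * weight_kg + c0   # kg

# Illustrative predictions
ffm_m = predict_ffm("male", 180.0, 80.0)
ffm_f = predict_ffm("female", 165.0, 60.0)
```

The practical point of the two-stage construction is that population B, which lacks measured BIA data, can still be scored: stage 1 supplies the impedance that stage 2 needs.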
Abstract:
Objective: To assess from a health sector perspective the incremental cost-effectiveness of eight drug treatment scenarios for established schizophrenia. Method: Using a standardized methodology, costs and outcomes are modelled over the lifetime of prevalent cases of schizophrenia in Australia in 2000. A two-stage approach to the assessment of health benefit is used. The first stage involves a quantitative analysis based on disability-adjusted life years (DALYs) averted, using the best available evidence. The robustness of the results is tested using probabilistic uncertainty analysis. The second stage involves application of 'second filter' criteria (equity, strength of evidence, feasibility and acceptability) to allow broader concepts of benefit to be considered. Results: Replacing oral typicals with risperidone or olanzapine has an incremental cost-effectiveness ratio (ICER) of A$48 000 and A$92 000/DALY, respectively. Switching from low-dose typicals to risperidone has an ICER of A$80 000. Giving risperidone to people experiencing side-effects on typicals is more cost-effective, at A$20 000. Giving clozapine to people taking typicals, with the worst course of the disorder and either little or clear deterioration, is cost-effective at A$42 000 or A$23 000/DALY, respectively. The least cost-effective intervention is to replace risperidone with olanzapine, at A$160 000/DALY. Conclusions: Based on an A$50 000/DALY threshold, low-dose typical neuroleptics are indicated as the treatment of choice for established schizophrenia, with risperidone being reserved for those experiencing moderate to severe side-effects on typicals. The more expensive olanzapine should only be prescribed when risperidone is not clinically indicated. The high cost of risperidone and olanzapine relative to modest health gains underlies this conclusion. Earlier introduction of clozapine, however, would be cost-effective.
This work is limited by weaknesses in the trials (lack of long-term efficacy data, quality-of-life and consumer-satisfaction evidence) and in the translation of effect size into a DALY change. Some stakeholders, including SANE Australia, argue that the modest health gains reported in the literature do not adequately reflect the perceptions of patients, clinicians and carers of improved quality of life with these atypicals.
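The decision rule in the conclusions reduces to comparing each scenario's ICER with the A$50 000/DALY threshold. A trivial sketch with invented incremental costs and DALYs, illustrating the arithmetic rather than the study's model:

```python
def icer(cost_new, cost_old, dalys_averted):
    """Incremental cost-effectiveness ratio in A$ per DALY averted:
    the extra spending divided by the extra health gain."""
    return (cost_new - cost_old) / dalys_averted

# Hypothetical illustration: a drug switch costing an extra A$24M
# across the modelled cohort that averts 500 DALYs.
ratio = icer(124e6, 100e6, 500)
threshold = 50_000                      # A$/DALY threshold used in the study
cost_effective = ratio <= threshold
```

Under these invented numbers the switch lands at A$48 000/DALY, just inside the threshold, which mirrors how the reported A$48 000 figure for risperidone is judged acceptable while A$92 000 for olanzapine is not.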
Abstract:
Our understanding of early spatial vision owes much to contrast masking and summation paradigms. In particular, the deep region of facilitation at low mask contrasts is thought to indicate a rapidly accelerating contrast transducer (e.g. a square-law or greater). In experiment 1, we tapped an early stage of this process by measuring monocular and binocular thresholds for patches of 1 cycle deg⁻¹ sine-wave grating. Threshold ratios were around 1.7, implying a nearly linear transducer with an exponent around 1.3. With this form of transducer, two previous models (Legge, 1984 Vision Research 24 385-394; Meese et al, 2004 Perception 33 Supplement, 41) failed to fit the monocular, binocular, and dichoptic masking functions measured in experiment 2. However, a new model with two stages of divisive gain control fits the data very well. Stage 1 incorporates nearly linear monocular transducers (to account for the high level of binocular summation and slight dichoptic facilitation), and monocular and interocular suppression (to fit the profound dichoptic masking). Stage 2 incorporates steeply accelerating transduction (to fit the deep regions of monocular and binocular facilitation), and binocular summation and suppression (to fit the monocular and binocular masking). With all model parameters fixed from the discrimination thresholds, we examined the slopes of the psychometric functions. The monocular and binocular slopes were steep (Weibull β ≈ 3-4) at very low mask contrasts and shallow (β ≈ 1.2) at all higher contrasts, as predicted by all three models. The dichoptic slopes were steep (β ≈ 3-4) at very low contrasts, and very steep (β > 5.5) at high contrasts (confirming Meese et al, loc. cit.). A crucial new result was that intermediate dichoptic mask contrasts produced shallow slopes (β ≈ 2).
Only the two-stage model predicted the observed pattern of slope variation, so providing good empirical support for a two-stage process of binocular contrast transduction. [Supported by EPSRC GR/S74515/01]
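The two-stage architecture can be sketched as below. The functional form follows the general scheme described in the abstract (nearly linear monocular transducers with interocular suppression, then binocular summation and steeply accelerating transduction), but both the exact equations and all parameter values are illustrative assumptions, not the fitted model.

```python
def two_stage_response(c_left, c_right, m=1.3, s=1.0, w=1.0,
                       p=8.0, q=6.5, z=0.01):
    """Illustrative two-stage binocular contrast gain control.
    Stage 1: nearly linear monocular transducers (exponent m ~ 1.3)
    with divisive monocular and interocular (weight w) suppression.
    Stage 2: binocular summation followed by steeply accelerating
    transduction with divisive suppression. All parameters are
    example values, not the paper's fitted ones."""
    stage1_l = c_left**m / (s + c_left + w * c_right)
    stage1_r = c_right**m / (s + c_right + w * c_left)
    b = stage1_l + stage1_r              # binocular summation
    return b**p / (z + b**q)

# Binocular presentation excites both stage-1 channels, so at low
# contrast the binocular response exceeds the monocular one, which is
# the mechanism behind the ~1.7 threshold ratio.
resp_bin = two_stage_response(0.01, 0.01)
resp_mon = two_stage_response(0.01, 0.0)
```

Fixing all parameters from the discrimination thresholds and then predicting psychometric slopes, as the abstract describes, is what makes the slope pattern a genuine test rather than a fit.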
Abstract:
The article deals with the CFD modelling of fast pyrolysis of biomass in an Entrained Flow Reactor (EFR). The Lagrangian approach is adopted for the particle tracking, while the flow of the inert gas is treated with the standard Eulerian method for gases. The model includes the thermal degradation of biomass to char with simultaneous evolution of gases and tars from a discrete biomass particle. The chemical reactions are represented using a two-stage, semi-global model. The radial distribution of the pyrolysis products is predicted as well as their effect on the particle properties. The convective heat transfer to the surface of the particle is computed using the Ranz-Marshall correlation.
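The particle-side convective heating uses the Ranz-Marshall correlation, Nu = 2 + 0.6·Re^(1/2)·Pr^(1/3). A minimal sketch with assumed gas properties (roughly those of hot nitrogen around 800 K) and an assumed particle size and slip velocity:

```python
def ranz_marshall_h(d_p, rel_velocity, rho_g, mu_g, cp_g, k_g):
    """Convective heat-transfer coefficient, W/(m^2 K), for a spherical
    particle from the Ranz-Marshall correlation
    Nu = 2 + 0.6 * Re^0.5 * Pr^(1/3)."""
    Re = rho_g * rel_velocity * d_p / mu_g   # particle Reynolds number
    Pr = cp_g * mu_g / k_g                   # gas Prandtl number
    Nu = 2.0 + 0.6 * Re**0.5 * Pr**(1.0 / 3.0)
    return Nu * k_g / d_p

# Assumed example: 0.5 mm biomass particle, 0.5 m/s slip velocity,
# nitrogen properties at roughly 800 K (illustrative values).
h = ranz_marshall_h(d_p=5e-4, rel_velocity=0.5, rho_g=0.42,
                    mu_g=3.5e-5, cp_g=1.1e3, k_g=0.055)
```

Note that the correlation reduces to Nu = 2, the conduction limit for a sphere in a quiescent gas, when the slip velocity vanishes, which is the relevant regime for small, well-entrained particles.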
Abstract:
Safety enforcement practitioners within Europe, and marketers, designers or manufacturers of consumer products, need to determine compliance with the legal test of "reasonable safety" for consumer goods, to reduce the "risks" of injury to the minimum. To enable freedom of movement of products, a method for safety appraisal is required for use as an "expert" system of hazard analysis by non-experts, so that safety testing of consumer goods can be implemented consistently throughout Europe. Safety testing approaches and the concepts of risk assessment and hazard analysis are reviewed in developing a model for appraising consumer product safety which seeks to integrate the human factors contributions of risk assessment, hazard perception, and information processing. The model develops a system of hazard identification, hazard analysis and risk assessment which can be applied to a wide range of consumer products through a series of systematic checklists and matrices, and applies alternative numerical and graphical methods for calculating a final product-safety risk assessment score. It is then applied in its pilot form by selected "volunteer" Trading Standards Departments to a sample of consumer products. A series of questionnaires is used to select participating Trading Standards Departments, to explore the contribution of potential subjective influences, and to establish views regarding the usability and reliability of the model and any preferences for the risk assessment scoring system used. The outcome of the two-stage hazard analysis and risk assessment process is considered, to determine consistency in the results of hazard analysis and in final decisions regarding the safety of the sample products, and to determine any correlation between the decisions made using the model and the alternative scoring methods of risk assessment. The research also identifies a number of opportunities for future work, and indicates a number of areas where further work has already begun.
Abstract:
The research is concerned with the application of the computer simulation technique to study the performance of reinforced concrete columns in a fire environment. The effect of three different concrete constitutive models, incorporated in the computer simulation, on the structural response of reinforced concrete columns exposed to fire is investigated. The material models differed mainly in respect of the formulation of the mechanical properties of concrete. The results from the simulation have clearly illustrated that a more realistic response of a reinforced concrete column exposed to fire is given by a constitutive model with transient creep or an appropriate strain effect. The relative effect of the three concrete material models is assessed by adopting the approach of a parametric study, carried out using the results from a series of analyses on columns heated on three sides, which produces substantial thermal gradients. Three different loading conditions were used on the column: axial loading, and eccentric loading inducing moments in either the same sense or the opposite sense to those induced by the thermal gradient. An axially loaded column heated on four sides was also considered. The computer modelling technique adopted separated the thermal and structural responses into two distinct computer programs. A finite element heat transfer analysis was used to determine the thermal response of the reinforced concrete columns when exposed to the ISO 834 furnace environment. The temperature distribution histories obtained were then used in conjunction with a structural response program. The effect of the occurrence of spalling on the structural behaviour of reinforced concrete columns is also investigated. There is general recognition of the potential problems of spalling, but no real investigation into what effect spalling has on the fire resistance of reinforced concrete members.
In an attempt to address this situation, a method has been developed to model concrete columns exposed to fire which incorporates the effect of spalling. A total of 224 computer simulations were undertaken, varying the amount of concrete lost during a specified period of exposure to fire. An array of six percentages of spalling was chosen for one range of simulations, while a two-stage progressive spalling regime was used for a second range. The quantification of the reduction in fire resistance of the columns against the amount of spalling, the heating and loading patterns, and the time at which the concrete spalls indicates that the amount of spalling is the most significant variable in the reduction of fire resistance.
Abstract:
In this paper we propose a two-phase control method for DSRC vehicle networks at road intersections, where multiple road safety applications may coexist. We consider two classes of safety application: an emergency safety application with high priority and routine safety applications with low priority. The control method is designed to provide high availability and low latency for the emergency safety application while leaving as much bandwidth as possible for routine applications. It is expected to be capable of adapting to changing network conditions. In the first phase of the method we use a simulation-based offline approach to find the best configurations of message rate and MAC layer parameters for given numbers of vehicles. In the second phase we use the configurations identified by simulation at the roadside access point (AP) for system operation. A utilization function is proposed to balance the QoS performance provided to the multiple safety applications. It is demonstrated that the proposed method can significantly improve system performance compared to a fixed control method.
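A utilization function of this kind can be sketched as a weighted score over the two application classes, with the offline phase selecting the configuration that maximizes it. The form, the weights and the toy simulation table below are assumptions for illustration, not the function or data from the paper.

```python
def utilization(p_emergency_delivery, routine_throughput, max_routine,
                w_emergency=0.8):
    """Hypothetical utilization function: weight the delivery probability
    of high-priority emergency messages against normalized routine
    throughput. Weighting and form are illustrative assumptions."""
    routine_score = min(routine_throughput / max_routine, 1.0)
    return w_emergency * p_emergency_delivery + (1 - w_emergency) * routine_score

# Offline phase (sketch): for one vehicle count, pick the
# (message rate, MAC contention window) configuration that maximizes
# the score over simulated results. The table entries are invented.
sim_results = {            # (msg_rate_hz, cw_min) -> (p_emerg, routine_tput)
    (10, 15): (0.99, 40.0),
    (20, 15): (0.95, 70.0),
    (20, 31): (0.97, 60.0),
}
best = max(sim_results,
           key=lambda cfg: utilization(*sim_results[cfg], max_routine=80.0))
```

At run time, the roadside AP would simply look up `best` for the currently observed number of vehicles, which is what keeps the online phase cheap.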
Abstract:
Background: This study examines perceived stress and its potential causal factors in nurses. Stress has been seen as a routine and accepted part of the healthcare worker's role. The lack of research on stress in nurses in Ireland motivated this study. Aims: The aims of this study are to examine the level of stress experienced by nurses working in an Irish teaching hospital, and to investigate differences in perceived stress levels by ward area and associations with work characteristics. Method: A cross-sectional study design was employed, with a two-stage cluster sampling process. A self-administered questionnaire was used to collect the data, and nurses were surveyed across ten different wards using the Nursing Stress Scale and the Demand-Control-Support Scales. Results: The response rate was 62%. Using outpatients as a reference ward, perceived stress levels were found to be significantly higher in the medical ward, accident and emergency, the intensive care unit and the paediatric ward (p<0.05). There was no significant difference between the wards with regard to job strain; however, differences did occur in levels of support, with the day unit and paediatric ward reporting the lowest levels of supervisor support (p<0.01). A significant association was seen between ward and perceived stress even after adjustment (p<0.05). Conclusion: The findings suggest that perceived stress does vary between different work areas in the same hospital. Work factors such as demand and support are important with regard to perceived stress. Job control was not found to play an important role.