967 results for "Average method"
Abstract:
This paper describes a prognostic method that combines physics-of-failure (PoF) models with a probabilistic reasoning algorithm. Measured real-time data (temperature vs. time) were used as the loading profile for the PoF simulations. A response surface equation for the accumulated plastic strain in the solder interconnect was constructed in terms of two variables (average temperature and temperature amplitude). This response surface equation was incorporated into the lifetime model of the solder interconnect, so that the remaining lifetime of the solder component under the current loading condition could be predicted. The predictions from the PoF models were also used to calculate the conditional probability table for a Bayesian network, which takes into account the impact of the health observations of each product on its lifetime prediction. The final prognostic prediction was expressed as the probability that the product survives the expected future usage. As a demonstration, the method was applied to an IGBT power module used in aircraft applications.
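A minimal sketch of how a response-surface lifetime model of this kind might be driven by a measured temperature cycle is given below. The quadratic response-surface coefficients and the Coffin-Manson-style lifetime constants are placeholders for illustration, not values from the paper.

```python
# Minimal sketch: response-surface + lifetime model for solder-joint prognostics.
# The response-surface coefficients and the Coffin-Manson constants below are
# placeholders for illustration, not values from the paper.
import numpy as np

def cycle_features(temperature_trace):
    """Reduce one thermal cycle (temperature vs. time) to the two response-surface
    variables used here: average temperature and temperature amplitude."""
    t = np.asarray(temperature_trace, dtype=float)
    return t.mean(), (t.max() - t.min()) / 2.0

def plastic_strain_per_cycle(t_avg, t_amp, coeff):
    """Quadratic response surface for accumulated plastic strain per cycle."""
    c0, c1, c2, c3, c4, c5 = coeff
    return c0 + c1 * t_avg + c2 * t_amp + c3 * t_avg**2 + c4 * t_amp**2 + c5 * t_avg * t_amp

def cycles_to_failure(delta_eps_p, C=0.3, m=-1.8):
    """Coffin-Manson-style lifetime law: Nf = C * (delta_eps_p)^m (placeholder constants)."""
    return C * delta_eps_p ** m

# Example: one synthetic thermal cycle (degrees C) and hypothetical coefficients.
trace = 60 + 40 * np.sin(np.linspace(0, 2 * np.pi, 200))
coeff = (1e-4, 2e-6, 5e-5, 0.0, 1e-7, 0.0)

t_avg, t_amp = cycle_features(trace)
d_eps = plastic_strain_per_cycle(t_avg, t_amp, coeff)
print(f"avg T = {t_avg:.1f} C, amplitude = {t_amp:.1f} C, Nf ~ {cycles_to_failure(d_eps):.0f} cycles")
```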
Abstract:
Raman spectroscopy has been used for the first time to predict the fatty acid (FA) composition of unextracted adipose tissue of pork, beef, lamb, and chicken. It was found that the bulk unsaturation parameters could be predicted successfully [R² = 0.97, root mean square error of prediction (RMSEP) = 4.6% of 4σ], with cis unsaturation, which accounted for the majority of the unsaturation, giving similar correlations. The combined abundance of all measured PUFA (≥2 double bonds per chain) was also well predicted, with R² = 0.97 and RMSEP = 4.0% of 4σ. Trans unsaturation was not as well modeled (R² = 0.52, RMSEP = 18% of 4σ); this reduced prediction ability can be attributed to the low levels of trans FA found in adipose tissue (0.035 times the cis unsaturation level). For the individual FA, the average partial least squares (PLS) regression coefficient of the 18 most abundant FA (relative abundances ranging from 0.1 to 38.6% of the total FA content) was R² = 0.73; the average RMSEP = 11.9% of 4σ. Regression coefficients and prediction errors for the five most abundant FA were all better than the average value (in some cases as low as RMSEP = 4.7% of 4σ). Cross-correlation between the abundances of the minor FA and more abundant acids could be determined by principal component analysis methods, and the resulting groups of correlated compounds were also well predicted using PLS. The accuracy of the prediction of individual FA was at least as good as other spectroscopic methods, and the extremely straightforward sampling method meant that very rapid analysis of samples at ambient temperature was easily achieved. This work shows that Raman profiling of hundreds of samples per day is easily achievable with an automated sampling system.
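The calibration described is a PLS regression from spectra to composition. The sketch below shows the general shape of such a calibration on synthetic data; the spectra, component count, and targets are invented stand-ins, not the study's Raman data.

```python
# Minimal sketch of a PLS calibration from spectra to a composition target,
# using synthetic data in place of real Raman spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_wavenumbers = 200, 600

# Synthetic "spectra": a few latent composition factors plus noise.
latent = rng.normal(size=(n_samples, 3))
loadings = rng.normal(size=(3, n_wavenumbers))
X = latent @ loadings + 0.1 * rng.normal(size=(n_samples, n_wavenumbers))
y = latent @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n_samples)  # e.g. % unsaturation

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
pls = PLSRegression(n_components=3).fit(X_train, y_train)
y_pred = pls.predict(X_test).ravel()

rmsep = np.sqrt(np.mean((y_test - y_pred) ** 2))
r2 = 1 - np.sum((y_test - y_pred) ** 2) / np.sum((y_test - y_test.mean()) ** 2)
print(f"R^2 = {r2:.3f}, RMSEP = {rmsep:.3f} (same units as y)")
```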
Abstract:
A flexible, mass-conservative numerical technique for solving the advection-dispersion equation for miscible contaminant transport is presented. The method combines features of puff transport models from air pollution studies with features of the random walk particle method used in water resources studies, providing a deterministic time-marching algorithm that is independent of the grid Peclet number and scales simply from one to higher dimensions. The concentration field is discretised into a number of particles, each of which is treated as a point release that advects and disperses over the time interval. The dispersed puff is itself discretised into a spatial distribution of particles whose masses can be pre-calculated. Concentration within the simulation domain is then calculated from the mass distribution as an average over some small volume. Comparisons with analytical solutions for a one-dimensional fixed-duration concentration pulse and for two-dimensional transport in an axisymmetric flow field indicate that the algorithm performs well. For a given level of accuracy, the new method has lower computation times than the random walk particle method.
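The sketch below illustrates the general idea in one dimension: each particle advects with the flow and its mass is re-discretised into a Gaussian "puff" of sub-particles each step, with concentration recovered by averaging mass over small volumes. It is a simplified illustration of the concept, not the paper's exact algorithm; velocity, dispersion, and time step are arbitrary example values.

```python
# Minimal 1D sketch of puff-style particle transport: advect each particle, spread
# its mass as a discretised Gaussian puff, then average mass over small volumes.
import numpy as np

def advect_disperse(positions, masses, velocity, dispersion, dt, n_subparticles=7):
    """Advect each particle, then re-discretise its dispersed puff into sub-particles."""
    sigma = np.sqrt(2.0 * dispersion * dt)
    # Pre-calculated relative offsets and weights of the discretised Gaussian puff.
    offsets = np.linspace(-2.5, 2.5, n_subparticles) * sigma
    weights = np.exp(-0.5 * (offsets / sigma) ** 2)
    weights /= weights.sum()
    new_pos = (positions[:, None] + velocity * dt + offsets[None, :]).ravel()
    new_mass = (masses[:, None] * weights[None, :]).ravel()
    return new_pos, new_mass

def concentration(positions, masses, edges):
    """Average mass over small volumes (here: histogram bins)."""
    hist, _ = np.histogram(positions, bins=edges, weights=masses)
    return hist / np.diff(edges)

# Example: instantaneous unit-mass release at x = 0, u = 1 m/s, D = 0.1 m^2/s.
pos, mass = np.array([0.0]), np.array([1.0])
for _ in range(50):
    pos, mass = advect_disperse(pos, mass, velocity=1.0, dispersion=0.1, dt=0.1)
    # Re-coarsen to keep the particle count bounded (simple binning).
    edges = np.arange(pos.min() - 0.05, pos.max() + 0.1, 0.05)
    binned, _ = np.histogram(pos, bins=edges, weights=mass)
    pos, mass = 0.5 * (edges[:-1] + edges[1:]), binned

print("total mass conserved:", mass.sum())
print("peak concentration:", concentration(pos, mass, np.linspace(0, 10, 101)).max())
```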
Abstract:
This study describes an optimized protocol for the generation of Amplified Fragment Length Polymorphism (AFLP) markers in a stingless bee. Essential modifications to standard protocols are a restriction enzyme digestion (EcoRI and Tru1I) in a two-step procedure, combined with a touchdown program in the selective PCR amplification step and product labelling by incorporation of [α-33P]dATP. In an analysis of 75 workers collected from three colonies of Melipona quadrifasciata we obtained 719 markers. Analysis of genetic variability revealed that, on average, 32% of the markers were polymorphic within a colony. Compared to the overall percentage of polymorphism (44% of the markers detected in our bee samples), the observed rates of within-colony polymorphism are remarkably high, considering that the workers of each colony were all offspring of a singly mated queen.
Abstract:
Orthogonal frequency division multiplexing (OFDM) requires an expensive linear amplifier at the transmitter due to its high peak-to-average power ratio (PAPR). Single carrier with cyclic prefix (SC-CP) is a closely related transmission scheme that possesses most of the benefits of OFDM but does not have the PAPR problem. Although SC-CP is very robust to frequency-selective fading in a multipath environment, it is sensitive to the time-selective fading characteristics of the wireless channel, which disturb the orthogonality of the channel matrix (CM) and increase the computational complexity of the receiver. In this paper, we propose a low-complexity iterative time-domain algorithm that compensates for the effects of time selectivity of the channel by exploiting the sparsity present in the channel convolution matrix. Simulation results show the superior performance of the proposed algorithm over the standard linear minimum mean-square error (L-MMSE) equalizer for SC-CP.
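The PAPR motivation can be made concrete with a short Monte Carlo comparison of OFDM and SC-CP block signals. The sketch below illustrates only that contrast (it is not the paper's proposed equalizer); block length, constellation, and trial count are arbitrary example choices.

```python
# Minimal sketch comparing the peak-to-average power ratio (PAPR) of OFDM and
# single-carrier (SC-CP) blocks built from the same QPSK data.
import numpy as np

rng = np.random.default_rng(1)
N, n_blocks = 256, 2000
qpsk = (rng.choice([-1, 1], (n_blocks, N)) + 1j * rng.choice([-1, 1], (n_blocks, N))) / np.sqrt(2)

# OFDM: data mapped onto subcarriers, transmitted after an IFFT.
ofdm_time = np.fft.ifft(qpsk, axis=1) * np.sqrt(N)
# SC-CP: data transmitted directly in the time domain (cyclic prefix omitted here).
sc_time = qpsk

def papr_db(x):
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max(axis=1) / p.mean(axis=1))

print(f"OFDM  PAPR (99th percentile): {np.percentile(papr_db(ofdm_time), 99):.1f} dB")
print(f"SC-CP PAPR (99th percentile): {np.percentile(papr_db(sc_time), 99):.1f} dB")
```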
Abstract:
Organic gels have been synthesized by sol–gel polycondensation of phenol (P) and formaldehyde (F) catalyzed by sodium carbonate (C). The effects of synthesis parameters such as the phenol/catalyst ratio (P/C), the solvent-exchange liquid and the drying method on the porous structure of the gels have been investigated. The total and mesopore volumes of the PF gels increased with increasing P/C ratio up to P/C = 8 and decreased with P/C ratio thereafter; the gel with P/C = 8 showed the highest total and mesopore volumes of 1.281 and 1.279 cm³ g⁻¹, respectively. The gels prepared by freeze drying possessed significantly higher porosities than the vacuum-dried gels, and their pore volumes and average pore diameters were also significantly higher. t-Butanol emerged as the preferred solvent for the removal of water from the PF hydrogel prior to drying, as significantly higher pore volumes and specific surface areas were obtained in the corresponding dried gels. The results showed that freeze drying with t-butanol and lower P/C ratios were favourable conditions for the synthesis of highly mesoporous phenol–formaldehyde gels.
Abstract:
This paper describes the computation of stress intensity factors (SIFs) for cracks in functionally graded materials (FGMs) using an extended element-free Galerkin (XEFG) method. The SIFs are extracted through the crack closure integral (CCI) with a local smoothing technique, through non-equilibrium and incompatibility formulations of the interaction integral, and through the displacement method. Results for mode I and mixed-mode case studies are presented and compared with those available in the literature, and are found to be in good agreement; in terms of average absolute error, the CCI with local smoothing, despite its simplicity, yielded a high level of accuracy.
Abstract:
The synthesis of cobalt-doped ZnO nanowires is achieved using a simple, metal salt decomposition growth technique. A sequence of drop casting on a quartz substrate held at 100 degrees C and annealing results in the growth of nanowires of average (modal) length ~200 nm and diameter of 15 +/- 4 nm, and consequently an aspect ratio of ~13. A variation in the synthesis process, where the solution of mixed salts is deposited on the substrate at 25 degrees C, yields a grainy film structure which constitutes a useful comparator case. X-ray diffraction shows a preferred [0001] growth direction for the nanowires, while a small unit cell volume contraction for Co-doped samples and data from Raman spectroscopy indicate incorporation of the Co dopant into the lattice; neither technique shows explicit evidence of cobalt oxides. The nanowire samples also display excellent optical transmission across the entire visible range, as well as strong photoluminescence (exciton emission) in the near UV, centered at 3.25 eV.
Abstract:
Hard turning (HT) is a material removal process employing a combination of a single-point cutting tool and high speeds to machine hard ferrous alloys which exhibit hardness values over 45 HRC. In this paper, a surface defect machining (SDM) method for HT is proposed which harnesses the combined advantages of porosity machining and pulsed laser pre-treatment processing. Previous experimental work showed that this provides better controllability of the process and improved quality of the machined surface. While the experiments showed promising results, a comprehensive understanding of this new technique could only be achieved through a rigorous, in-depth theoretical analysis. Therefore, an assessment of the SDM technique was carried out using both finite element method (FEM) and molecular dynamics (MD) simulations.
FEM modelling was used to compare the conventional HT of AISI 4340 steel (52 HRC) using an Al2O3 insert with the proposed SDM method. The simulations showed very good agreement with the previously published experimental results. Compared to conventional HT, SDM provided favourable machining outcomes, such as a reduced shear plane angle, reduced average cutting forces, improved surface roughness, lower residual stresses on the machined surface, a reduced tool–chip interface contact length and an increased chip flow velocity. Furthermore, a scientific explanation of the improved surface finish was provided by a state-of-the-art MD simulation model, which suggested that during SDM a combination of cutting action and rough polishing action helps improve the machined surface finish.
Abstract:
This paper discusses the application of the Taguchi experimental design approach to optimizing the key process parameters for micro-welding of thin AISI 316L foil using a 100 W CW fibre laser. An L16 Taguchi experiment was conducted to systematically understand how the power, scanning velocity, focus position, gas flow rate and type of shielding gas affect the bead dimensions. The welds produced in the L16 Taguchi experiment were mainly of austenitic cellular-dendritic structure with an average grain size of 5 µm. An exact penetration weld with the largest penetration-to-fusion-width ratio was obtained. Among the process parameters, the interaction between power and scanning velocity had the strongest effect on the penetration-to-fusion-width ratio, and the power was found to be the predominant factor driving the interactions with other factors that appreciably affect the bead dimensions.
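The core of a Taguchi analysis of this kind is a main-effects comparison: group the measured response by each factor level and compare level means. The sketch below shows that step on an invented run sheet with placeholder factors, levels and responses, not the paper's actual L16 data.

```python
# Minimal sketch of a Taguchi-style main-effects analysis: group the response by
# each factor level and compare level means. Factor names, levels and responses
# are hypothetical placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
# Hypothetical 16-run sheet with 4 two-level factors (a real L16 array can carry
# up to 15 two-level factors).
runs = pd.DataFrame(
    [(p, v, f, g) for p in (70, 100) for v in (100, 300) for f in (-0.2, 0.0) for g in (5, 10)],
    columns=["power_W", "speed_mm_s", "focus_mm", "gas_l_min"],
)
runs["pen_to_width"] = (0.5 + 0.003 * runs.power_W - 0.0005 * runs.speed_mm_s
                        + rng.normal(0, 0.02, len(runs)))

# Main effect of each factor = difference between mean responses at its two levels.
for factor in ["power_W", "speed_mm_s", "focus_mm", "gas_l_min"]:
    means = runs.groupby(factor)["pen_to_width"].mean()
    print(f"{factor:12s} level means: {means.round(3).to_dict()}  effect: {means.diff().iloc[-1]:+.3f}")
```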
Stochastic Analysis of Saltwater Intrusion in Heterogeneous Aquifers using Local Average Subdivision
Abstract:
This study investigates the effects of ground heterogeneity, considering permeability as a random variable, on an intruding saltwater (SW) wedge using Monte Carlo simulations. Random permeability fields were generated using the method of Local Average Subdivision (LAS), based on a lognormal probability density function. The LAS method allows the creation of spatially correlated random fields, generated using coefficients of variation (COV) and horizontal and vertical scales of fluctuation (SOF). The numerical modelling code SUTRA was employed to solve the coupled flow and transport problem. The well-defined 2D dispersive Henry problem was used as the test case for the method. The intruding SW wedge is characterised by two key parameters, the toe penetration length (TL) and the width of the mixing zone (WMZ). These parameters were compared to the results of a homogeneous case simulated using effective permeability values. The simulation results revealed that: (1) an increase in COV resulted in a seaward movement of the TL; (2) the WMZ extended with increasing COV; (3) a general increase in horizontal and vertical SOF produced a seaward movement of the TL, with the WMZ increasing slightly; (4) as the anisotropy ratio increased, the TL intruded further inland and the WMZ reduced in size. The results show that for large values of COV, effective permeability parameters are inadequate at reproducing the effects of heterogeneity on SW intrusion.
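For orientation, the sketch below generates a spatially correlated lognormal permeability field from a COV and horizontal/vertical scales of fluctuation. A covariance-matrix/Cholesky construction is used here as a simple stand-in for LAS itself, and all parameter values are illustrative rather than taken from the study.

```python
# Minimal sketch: spatially correlated lognormal permeability field on a grid,
# using a Cholesky factorisation of an exponential correlation matrix as a
# stand-in for the Local Average Subdivision (LAS) method.
import numpy as np

def lognormal_field(nx, ny, dx, mean_k, cov, sof_x, sof_y, seed=0):
    """Correlated lognormal field on an nx-by-ny grid (exponential correlation)."""
    rng = np.random.default_rng(seed)
    x, y = np.meshgrid(np.arange(nx) * dx, np.arange(ny) * dx, indexing="ij")
    pts = np.column_stack([x.ravel(), y.ravel()])
    # Exponential correlation with separate horizontal/vertical scales of fluctuation.
    dxm = np.abs(pts[:, 0, None] - pts[None, :, 0])
    dym = np.abs(pts[:, 1, None] - pts[None, :, 1])
    corr = np.exp(-2.0 * dxm / sof_x - 2.0 * dym / sof_y)
    # Underlying normal parameters from the lognormal mean and COV.
    sigma_ln = np.sqrt(np.log(1.0 + cov**2))
    mu_ln = np.log(mean_k) - 0.5 * sigma_ln**2
    L = np.linalg.cholesky(corr + 1e-10 * np.eye(len(corr)))
    g = L @ rng.standard_normal(len(corr))
    return np.exp(mu_ln + sigma_ln * g).reshape(nx, ny)

k = lognormal_field(nx=40, ny=20, dx=0.05, mean_k=1e-9, cov=0.5, sof_x=0.5, sof_y=0.1)
print("mean permeability:", k.mean(), "m^2;  sample COV:", k.std() / k.mean())
```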
Abstract:
Background
In modern radiotherapy, it is crucial to monitor the performance of all linac components including gantry, collimation system and electronic portal imaging device (EPID) during arc deliveries. In this study, a simple EPID-based measurement method has been introduced in conjunction with an algorithm to investigate the stability of these systems during arc treatments with the aim of ensuring the accuracy of linac mechanical performance.
The Varian EPID sag, gantry sag, changes in source-to-detector distance (SDD), EPID and collimator skewness, EPID tilt, and the sag in MLC carriages as a result of linac rotation were separately investigated by acquisition of EPID images of a simple phantom comprised of 5 ball-bearings during arc delivery. A fast and robust software package was developed for automated analysis of image data. Twelve Varian linacs of different models were investigated.
The average EPID sag was within 1 mm for all tested linacs. All machines showed less than 1 mm gantry sag. Changes in SDD values were within 1.7 mm except for three linacs of one centre which were within 9 mm. Values of EPID skewness and tilt were negligible in all tested linacs. The maximum sag in MLC leaf bank assemblies was around 1 mm. The EPID sag showed a considerable improvement in TrueBeam linacs.
The methodology and software developed in this study provide a simple tool for effective investigation of the behaviour of linac components with gantry rotation. It is reproducible and accurate and can be easily performed as a routine test in clinics.
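One building block of such an image analysis is locating the ball-bearing centroids in each EPID frame so their apparent shift can be tracked against gantry angle. The sketch below is a generic illustration of that step on a synthetic image, not the authors' software package; the threshold rule is an assumption.

```python
# Minimal sketch: locate ball-bearing shadows in an EPID-like image by thresholding
# and centre-of-mass, so their positions can be compared across gantry angles.
import numpy as np
from scipy import ndimage

def bb_centroids(image, threshold=None):
    """Return (row, col) centroids of high-contrast ball-bearing shadows."""
    img = np.asarray(image, dtype=float)
    if threshold is None:
        threshold = img.mean() + 3 * img.std()   # simple global threshold (assumption)
    labels, n = ndimage.label(img > threshold)
    return np.array(ndimage.center_of_mass(img, labels, list(range(1, n + 1))))

# Synthetic example: a blank image with two bright "ball bearings".
img = np.zeros((200, 200))
img[48:53, 98:103] = 1.0
img[148:153, 98:103] = 1.0
print(bb_centroids(img))   # approximately [[50, 100], [150, 100]]
```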
Abstract:
Objectives: To determine whether adjusting the denominator of the common hospital antibiotic use measurement unit (defined daily doses/100 bed-days) by including age-adjusted comorbidity score (100 bed-days/age-adjusted comorbidity score) would result in more accurate and meaningful assessment of hospital antibiotic use.
Methods: The association between the monthly sum of age-adjusted comorbidity and monthly antibiotic use was measured using time-series analysis (January 2008 to June 2012). For the purposes of conducting internal benchmarking, two antibiotic usage datasets were constructed, i.e. 2004-07 (first study period) and 2008-11 (second study period). Monthly antibiotic use was normalized per 100 bed-days and per 100 bed-days/age-adjusted comorbidity score.
Results: Antibiotic use had a significant positive relationship with the sum of age-adjusted comorbidity scores (P = 0.0004). The results also showed negative relationships between antibiotic use and (i) alcohol-based hand rub use (P = 0.0370) and (ii) clinical pharmacist activity (P = 0.0031). Normalizing antibiotic use per 100 bed-days yielded a comparative usage rate of 1.31, i.e. average antibiotic use during the second period was 31% higher than during the first period. However, normalizing antibiotic use per 100 bed-days per age-adjusted comorbidity score resulted in a comparative usage rate of 0.98, i.e. average antibiotic use was 2% lower in the second study period. Importantly, the latter comparative usage rate is independent of differences in patient density and case mix characteristics between the two studied populations.
Conclusions: The proposed modified antibiotic measure provides an innovative approach to compare variations in antibiotic prescribing while taking account of patient case mix effects.
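A small worked illustration of the two normalizations, reading the adjusted unit left to right as (DDD per 100 bed-days) per age-adjusted comorbidity score, is given below. The monthly figures are invented for illustration only.

```python
# Worked illustration (hypothetical numbers) of the two normalizations compared
# above: DDD per 100 bed-days vs. DDD per 100 bed-days per age-adjusted
# comorbidity score.
ddd, bed_days, comorbidity_score = 4200, 9000, 1.6   # one month, invented values

per_100_bed_days = ddd / (bed_days / 100)
per_100_bed_days_per_comorbidity = per_100_bed_days / comorbidity_score

print(f"DDD/100 bed-days                      : {per_100_bed_days:.1f}")
print(f"DDD/100 bed-days/age-adj. comorbidity : {per_100_bed_days_per_comorbidity:.1f}")
# A rise in the first measure can disappear in the second when the later period
# also carries a higher comorbidity burden, as with the 1.31 vs 0.98 comparative
# usage rates reported above.
```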
Abstract:
Multicarrier Index Keying (MCIK) is a recently developed technique that modulates not only the subcarriers but also the indices of the subcarriers. In this paper, a novel low-complexity scheme for the detection of subcarrier indices is proposed for an MCIK system, offering a substantial reduction in complexity over optimal maximum likelihood (ML) detection. For the performance evaluation, a closed-form expression for the pairwise error probability (PEP) of an active subcarrier index and a tight closed-form approximation of the average PEP of multiple subcarrier indices are derived. The theoretical results are validated against simulations, with a difference of less than 0.1 dB. Compared to the optimal ML detector, the proposed detection achieves a substantial reduction in complexity with only a small loss in error performance (<= 0.6 dB).
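As a rough illustration of low-complexity index detection in index modulation, the Monte Carlo sketch below estimates the active indices as the subcarriers with the largest received energy and measures the index-set error rate over a Rayleigh channel. This greedy energy detector is a generic stand-in, not the paper's exact scheme, and all parameters are illustrative.

```python
# Minimal Monte Carlo sketch of greedy, energy-based subcarrier-index detection
# for an MCIK-style block (generic illustration; parameters are arbitrary).
import numpy as np

rng = np.random.default_rng(3)
N, K = 4, 2                 # subcarriers per block, active subcarriers
snr_db, n_trials = 15, 20000
noise_var = 10 ** (-snr_db / 10)

errors = 0
for _ in range(n_trials):
    active = rng.choice(N, size=K, replace=False)
    x = np.zeros(N, dtype=complex)
    x[active] = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)  # QPSK
    h = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)          # Rayleigh
    noise = np.sqrt(noise_var / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
    y = h * x + noise
    detected = np.argsort(np.abs(y) ** 2)[-K:]      # K largest received energies
    if set(detected) != set(active):
        errors += 1

print(f"index-set error rate at {snr_db} dB: {errors / n_trials:.4f}")
```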
Abstract:
Pollen data have been recorded at Novi Sad in Serbia since 2000. The adopted method of producing pollen counts has been the use of five longitudinal transects that examine 19.64% of the total sample surface. However, counting five transects is time consuming, and so the main objective of this study is to investigate whether reducing the number to three or even two transects would have a significant effect on daily average and bi-hourly pollen concentrations, as well as on the main characteristics of the pollen season and long-term trends. This study has shown that there is a loss of accuracy in daily average and bi-hourly pollen concentrations (an increase in % ERROR) as the sub-sampling area is reduced from five to three or two longitudinal transects. However, this loss of accuracy does not affect the main characteristics of the season or long-term trends. As a result, this study can be used to justify changing the sub-sampling method used at Novi Sad from five to three longitudinal transects. The use of two longitudinal transects has been ruled out because, although quicker, the counts produced: (a) had the greatest amount of % ERROR; (b) altered the amount of influence of the independent variable on the dependent variable (the slope in regression analysis); and (c) were based on a total sampled surface (7.86%) less than the minimum requirement recommended by the European Aerobiology Society working group on Quality Control (at least 10% of the total slide area).
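The area percentages quoted above follow directly from the five-transect figure: each transect covers about 19.64%/5 of the slide, so three transects examine roughly 11.8% (above the 10% recommendation) and two examine 7.86% (below it), with raw counts scaled by the inverse of the fraction examined. A short check of that arithmetic:

```python
# Worked check of the transect sub-sampling percentages quoted in the abstract.
five_transect_area = 19.64                 # % of total slide surface for 5 transects
per_transect = five_transect_area / 5      # ~3.93 % per transect

for n in (5, 3, 2):
    area = n * per_transect
    scale = 100.0 / area                   # factor applied to raw counts
    print(f"{n} transects: {area:.2f}% of slide examined, count scaling x{scale:.2f}")
```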