897 results for Transfusion threshold
Abstract:
Background and Objectives In Australia, the risk of transfusion-transmitted malaria is managed through the identification of ‘at-risk’ donors, antibody screening enzyme-linked immunoassay (EIA) and, if reactive, exclusion from fresh blood component manufacture. Donor management depends on the duration of exposure in malarious regions (>6 months: ‘Resident’, <6 months: ‘Visitor’) or a history of malaria diagnosis. We analysed antibody testing and demographic data to investigate antibody persistence dynamics. To assess the yield from retesting 3 years after an initial EIA reactive result, we estimated the proportion of donors who would become non-reactive over this period. Materials and Methods Test results and demographic data from donors who were malaria EIA reactive were analysed. Time since possible exposure was estimated and antibody survival modelled. Results Among seroreverters, the time since last possible exposure was significantly shorter in ‘Visitors’ than in ‘Residents’. The antibody survival modelling predicted 20% of previously EIA reactive ‘Visitors’, but only 2% of ‘Residents’ would become non-reactive within 3 years of their first reactive EIA. Conclusion Antibody persistence in donors correlates with exposure category, with semi-immune ‘Residents’ maintaining detectable antibodies significantly longer than non-immune ‘Visitors’.
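The abstract does not specify the form of the antibody survival model, so the sketch below assumes a simple exponential persistence-time model, with category-specific mean persistence times chosen purely for illustration, to show how a seroreversion fraction at a 3-year retest interval could be computed.

```python
# Minimal sketch (not the paper's model): exponential antibody-decay survival,
# used to estimate the fraction of EIA-reactive donors expected to serorevert
# (become non-reactive) within a retest interval. Rates are hypothetical.
import numpy as np

def fraction_seroreverted(years: float, mean_persistence_years: float) -> float:
    """P(antibody has fallen below detection by `years`) under an
    exponential persistence-time assumption."""
    return 1.0 - np.exp(-years / mean_persistence_years)

# Hypothetical mean persistence times chosen only to reproduce the order of
# magnitude reported in the abstract (~20% of Visitors vs ~2% of Residents
# non-reactive at 3 years).
for category, tau in [("Visitor", 13.5), ("Resident", 148.0)]:
    print(f"{category}: {fraction_seroreverted(3.0, tau):.1%} predicted non-reactive at 3 years")
```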
Abstract:
Objective: Menopause is the consequence of exhaustion of the ovarian follicular pool. AMH, an indirect hormonal marker of ovarian reserve, has recently been proposed as a predictor of age at menopause. Since BMI and smoking status are relevant independent factors associated with age at menopause, we evaluated whether a model including all three of these variables could improve AMH-based prediction of age at menopause. Methods: In the present cohort study, participants were 375 eumenorrheic women aged 19–44 years and a sample of 2,635 Italian menopausal women. AMH values were obtained from the eumenorrheic women. Results: Regression analysis of the AMH data showed that a quadratic function of age provided a good description of these data plotted on a logarithmic scale, with a distribution of residual deviates that was not normal but showed significant left-skewness. Under the hypothesis that menopause can be predicted by AMH dropping below a critical threshold, a model predicting menopausal age was constructed from the AMH regression model and applied to the data on menopause. With the AMH threshold dependent on the covariates BMI and smoking status, the effects of these covariates were shown to be highly significant. Conclusions: In the present study we confirmed the good level of conformity between the distributions of observed and AMH-predicted ages at menopause, and showed that using BMI and smoking status as additional variables improves AMH-based prediction of age at menopause.
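As a rough illustration of the modelling strategy described above (not the authors' actual fit), the sketch below regresses log(AMH) on a quadratic in age and reads off the age at which the fitted curve drops below a critical AMH threshold. The synthetic data and the 0.05 ng/mL threshold are assumptions.

```python
# Minimal sketch: fit a quadratic in age to log(AMH) and find the age at which
# the fitted curve falls below a critical threshold, as a crude stand-in for
# predicted age at menopause. Data and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(19, 44, 375)
log_amh = 1.5 - 0.004 * (age - 19) ** 2 + rng.normal(0, 0.4, age.size)  # synthetic

# Quadratic regression of log(AMH) on age.
c2, c1, c0 = np.polyfit(age, log_amh, deg=2)

def predicted_age_at_threshold(threshold_ng_ml: float) -> float:
    """Smallest age above 40 where the fitted quadratic crosses log(threshold)."""
    roots = np.roots([c2, c1, c0 - np.log(threshold_ng_ml)])
    real = roots[np.isreal(roots)].real
    later = real[real > 40]
    return float(later.min()) if later.size else float("nan")

print(predicted_age_at_threshold(0.05))   # predicted age at menopause, years
```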
Abstract:
Context: Anti-Müllerian hormone (AMH) concentration reflects ovarian aging and is argued to be a useful predictor of age at menopause (AMP). It is hypothesized that AMH falling below a critical threshold corresponds to follicle depletion, which results in menopause. With this threshold, theoretical predictions of AMP can be made. Comparisons of such predictions with observed AMP from population studies support the role for AMH as a forecaster of menopause. Objective: The objective of the study was to investigate whether previous relationships between AMH and AMP are valid using a much larger data set. Setting: AMH was measured in 27 563 women attending fertility clinics. Study Design: From these data a model of age-related AMH change was constructed using a robust regression analysis. Data on AMP from subfertile women were obtained from the population-based Prospect-European Prospective Investigation into Cancer and Nutrition (Prospect-EPIC) cohort (n = 2249). By constructing a probability distribution of age at which AMH falls below a critical threshold and fitting this to Prospect-EPIC menopausal age data using maximum likelihood, such a threshold was estimated. Main Outcome: The main outcome was conformity between observed and predicted AMP. Results: To get a distribution of AMH-predicted AMP that fit the Prospect-EPIC data, we found the critical AMH threshold should vary among women in such a way that women with low age-specific AMH would have lower thresholds, whereas women with high age-specific AMH would have higher thresholds (mean 0.075 ng/mL; interquartile range 0.038–0.15 ng/mL). Such a varying AMH threshold for menopause is a novel and biologically plausible finding. AMH became undetectable (<0.2 ng/mL) approximately 5 years before the occurrence of menopause, in line with a previous report. Conclusions: The conformity of the observed and predicted distributions of AMP supports the hypothesis that declining population averages of AMH are associated with menopause, making AMH an excellent candidate biomarker for AMP prediction. Further research will help establish the accuracy of AMH levels to predict AMP within individuals.
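The following sketch illustrates the threshold-estimation idea only in outline: it grid-searches the AMH threshold whose implied distribution of threshold-crossing ages best fits a set of observed menopause ages by maximum likelihood. The population AMH curve, the individual spread, and the "observed" ages are all synthetic stand-ins, not the Prospect-EPIC data or the authors' estimator.

```python
# Sketch: choose the AMH threshold whose implied distribution of crossing ages
# best matches observed ages at menopause (maximum likelihood over a grid).
import numpy as np

rng = np.random.default_rng(1)

# Assumed population trajectory: log(AMH) = b0 + b1*age + b2*age^2 (illustrative).
b0, b1, b2 = 0.056, 0.152, -0.004
sigma_individual = 0.6            # assumed spread of individual log-AMH offsets

def crossing_age(log_threshold: float, offset: float) -> float:
    """Age at which b0 + offset + b1*a + b2*a^2 declines to log_threshold."""
    roots = np.roots([b2, b1, b0 + offset - log_threshold])
    real = roots[np.isreal(roots)].real
    return float(real.max()) if real.size else np.nan

observed_amp = rng.normal(51.0, 3.9, 2249)    # stand-in for observed menopause ages
offsets = rng.normal(0.0, sigma_individual, 2000)

best = None
for tau in np.geomspace(0.01, 0.5, 40):       # candidate AMH thresholds, ng/mL
    ages = np.array([crossing_age(np.log(tau), o) for o in offsets])
    ages = ages[~np.isnan(ages)]
    mu, sd = ages.mean(), ages.std()
    loglik = np.sum(-0.5 * ((observed_amp - mu) / sd) ** 2 - np.log(sd))
    if best is None or loglik > best[0]:
        best = (loglik, tau)

print(f"maximum-likelihood threshold estimate: {best[1]:.3f} ng/mL")
```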
Abstract:
It has not yet been established whether the spatial variation of particle number concentration (PNC) within a microscale environment can have an effect on exposure estimation results. In general, the degree of spatial variation within microscale environments remains unclear, since previous studies have only focused on spatial variation within macroscale environments. The aims of this study were to determine the spatial variation of PNC within microscale school environments, in order to assess the importance of the number of monitoring sites on exposure estimation. Furthermore, this paper aims to identify which parameters have the largest influence on spatial variation, as well as the relationship between those parameters and spatial variation. Air quality measurements were conducted for two consecutive weeks at each of the 25 schools across Brisbane, Australia. PNC was measured at three sites within the grounds of each school, along with the measurement of meteorological and several other air quality parameters. Traffic density was recorded for the busiest road adjacent to the school. Spatial variation at each school was quantified using the coefficient of variation (CV). The portion of CV associated with instrument uncertainty was found to be 0.3 and therefore, CV was corrected so that only non-instrument uncertainty was analysed in the data. The median corrected CV (CVc) ranged from 0 to 0.35 across the schools, with 12 schools found to exhibit spatial variation. The study determined the number of required monitoring sites at schools with spatial variability and tested the deviation in exposure estimation arising from using only a single site. Nine schools required two measurement sites and three schools required three sites. Overall, the deviation in exposure estimation from using only one monitoring site was as much as one order of magnitude. The study also tested the association of spatial variation with wind speed/direction and traffic density, using partial correlation coefficients to identify sources of variation and non-parametric function estimation to quantify the level of variability. Traffic density and road-to-school wind direction were found to have a positive effect on CVc, and therefore, also on spatial variation. Wind speed was found to have a decreasing effect on spatial variation when it exceeded a threshold of 1.5 m/s, while it had no effect below this threshold. Traffic density had a positive effect on spatial variation and its effect increased until it reached a density of 70 vehicles per five minutes, at which point its effect plateaued and did not increase further as a result of increasing traffic density.
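The abstract states that the portion of CV attributable to instrument uncertainty (0.3) was removed before analysis but does not give the formula; one plausible correction, shown below, subtracts the instrument component in quadrature.

```python
# Sketch of one plausible correction (assumed, not spelled out in the abstract):
# remove an instrument-related CV component by subtracting variances in
# quadrature, so only non-instrument variability remains.
import numpy as np

def corrected_cv(pnc_by_site: np.ndarray, cv_instrument: float = 0.3) -> float:
    """Coefficient of variation across monitoring sites, with the instrument
    contribution removed in quadrature (assumed form of the correction)."""
    cv = pnc_by_site.std(ddof=1) / pnc_by_site.mean()
    return float(np.sqrt(max(cv**2 - cv_instrument**2, 0.0)))

# Illustrative particle number concentrations (particles/cm^3) at three sites.
print(corrected_cv(np.array([9.5e3, 1.2e4, 1.8e4])))
```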
Abstract:
We consider how data from scientific research should be used for decision making in health services. Whether a hand hygiene intervention to reduce risk of nosocomial infection should be widely adopted is the case study. Improving hand hygiene has been described as the most important measure to prevent nosocomial infection.[1] Transmission of microorganisms is reduced, and fewer infections arise, which leads to a reduction in mortality[2] and cost savings.[3] Implementing a hand hygiene program is itself costly, so the extra investment should be tested for cost-effectiveness.[4,5] The first part of our commentary is about cost-effectiveness models and how they inform decision making for health services. The second part is about how data on the effectiveness of hand hygiene programs arising from scientific studies are used, and two points are made: the threshold for statistical inference of 0.05 used to judge effectiveness studies is not important for decision making,[6,7] and potentially valuable evidence about effectiveness might be excluded by decision makers because it is deemed low quality.[8] The ideas put forward will help researchers and health services decision makers to appraise scientific evidence in a more powerful way.
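A toy sketch of the decision-analytic point being made (not the authors' model): adoption turns on expected incremental net monetary benefit rather than on whether an effectiveness estimate clears the 0.05 threshold. All inputs below are hypothetical.

```python
# Toy decision rule: adopt the programme when its expected incremental net
# monetary benefit is positive, regardless of the p-value of the effect estimate.

def incremental_net_benefit(infections_averted_per_year: float,
                            value_per_infection_averted: float,
                            programme_cost_per_year: float) -> float:
    """Expected yearly benefit of adopting the hand hygiene programme minus
    its cost; adopt when this is positive."""
    return infections_averted_per_year * value_per_infection_averted - programme_cost_per_year

inb = incremental_net_benefit(infections_averted_per_year=12.0,    # uncertain estimate
                              value_per_infection_averted=25_000.0,
                              programme_cost_per_year=180_000.0)
print("adopt" if inb > 0 else "do not adopt", inb)
```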
Abstract:
Speaker diarization is the process of annotating an input audio with information that attributes temporal regions of the audio signal to their respective sources, which may include both speech and non-speech events. For speech regions, the diarization system also specifies the locations of speaker boundaries and assigns relative speaker labels to each homogeneous segment of speech. In short, speaker diarization systems effectively answer the question of 'who spoke when'. There are several important applications for speaker diarization technology, such as facilitating speaker indexing systems to allow users to directly access the relevant segments of interest within a given audio, and assisting with other downstream processes such as summarizing and parsing. When combined with automatic speech recognition (ASR) systems, the metadata extracted from a speaker diarization system can provide complementary information for ASR transcripts including the location of speaker turns and relative speaker segment labels, making the transcripts more readable. Speaker diarization output can also be used to localize the instances of specific speakers to pool data for model adaptation, which in turn boosts transcription accuracies. Speaker diarization therefore plays an important role as a preliminary step in automatic transcription of audio data. The aim of this work is to improve the usefulness and practicality of speaker diarization technology, through the reduction of diarization error rates. In particular, this research is focused on the segmentation and clustering stages within a diarization system. Although particular emphasis is placed on the broadcast news audio domain and systems developed throughout this work are also trained and tested on broadcast news data, the techniques proposed in this dissertation are also applicable to other domains including telephone conversations and meetings audio. Three main research themes were pursued: heuristic rules for speaker segmentation, modelling uncertainty in speaker model estimates, and modelling uncertainty in eigenvoice speaker modelling. The use of heuristic approaches for the speaker segmentation task was first investigated, with emphasis placed on minimizing missed boundary detections. A set of heuristic rules was proposed, to govern the detection and heuristic selection of candidate speaker segment boundaries. A second pass, using the same heuristic algorithm with a smaller window, was also proposed with the aim of improving detection of boundaries around short speaker segments. Compared to single threshold based methods, the proposed heuristic approach was shown to provide improved segmentation performance, leading to a reduction in the overall diarization error rate. Methods to model the uncertainty in speaker model estimates were developed, to address the difficulties associated with making segmentation and clustering decisions with limited data in the speaker segments. The Bayes factor, derived specifically for multivariate Gaussian speaker modelling, was introduced to account for the uncertainty of the speaker model estimates. The use of the Bayes factor also enabled the incorporation of prior information regarding the audio to aid segmentation and clustering decisions. The idea of modelling uncertainty in speaker model estimates was also extended to the eigenvoice speaker modelling framework for the speaker clustering task. Building on the application of Bayesian approaches to the speaker diarization problem, the proposed approach takes into account the uncertainty associated with the explicit estimation of the speaker factors. The proposed decision criteria, based on Bayesian theory, were shown to generally outperform their non-Bayesian counterparts.
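The thesis derives a Bayes factor for multivariate Gaussian speaker models; the sketch below shows only the simpler, closely related ΔBIC model-comparison step commonly used for speaker change detection, to illustrate the kind of decision being made at a candidate boundary. It is not the thesis's exact criterion.

```python
# Simplified ΔBIC-style speaker change detection: score "one speaker" vs
# "two speakers" for a window of feature vectors; positive values favour a split.
import numpy as np

def gaussian_loglik(x: np.ndarray) -> float:
    """Log-likelihood of the rows of x under a full-covariance Gaussian fitted to x."""
    n, d = x.shape
    cov = np.cov(x, rowvar=False, bias=True) + 1e-6 * np.eye(d)   # ML covariance
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * n * (logdet + d * np.log(2 * np.pi) + d)

def delta_bic(window: np.ndarray, split: int, penalty: float = 1.0) -> float:
    """Positive values favour placing a speaker boundary at `split`."""
    n, d = window.shape
    k = d + 0.5 * d * (d + 1)                     # free parameters of one Gaussian
    gain = (gaussian_loglik(window[:split]) + gaussian_loglik(window[split:])
            - gaussian_loglik(window))
    return gain - 0.5 * penalty * k * np.log(n)

rng = np.random.default_rng(0)
window = np.vstack([rng.normal(0.0, 1.0, (200, 12)),    # "speaker A" features
                    rng.normal(1.5, 1.0, (200, 12))])   # "speaker B" features
print(delta_bic(window, split=200))                     # clearly positive here
```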
Abstract:
A key issue in the field of inclusive design is the ability to provide designers with an understanding of people's range of capabilities. Since it is not feasible to assess product interactions with a large sample, this paper assesses a range of proxy measures of design-relevant capabilities. It describes a study that was conducted to identify which measures provide the best prediction of people's abilities to use a range of products. A detailed investigation with 100 respondents aged 50-80 years was undertaken to examine how they manage typical household products. Predictor variables included self-report and performance measures across a variety of capabilities (vision, hearing, dexterity and cognitive function), component activities used in product interactions (e.g. using a remote control, touch screen) and psychological characteristics (e.g. self-efficacy, confidence with using electronic devices). Results showed, as expected, a higher prevalence of visual, hearing, dexterity, cognitive and product interaction difficulties in the 65-80 age group. Regression analyses showed that, in addition to age, performance measures of vision (acuity, contrast sensitivity) and hearing (hearing threshold) and self-report and performance measures of component activities are strong predictors of successful product interactions. These findings will guide the choice of measures to be used in a subsequent national survey of design-relevant capabilities, which will lead to the creation of a capability database. This will be converted into a tool for designers to understand the implications of their design decisions, so that they can design products in a more inclusive way.
Abstract:
Anisotropic damage distribution and evolution have a profound effect on borehole stress concentrations. Damage evolution is an irreversible process that is not adequately described within classical equilibrium thermodynamics. Therefore, we propose a constitutive model, based on non-equilibrium thermodynamics, that accounts for anisotropic damage distribution, anisotropic damage threshold and anisotropic damage evolution. We implemented this constitutive model numerically, using the finite element method, to calculate stress–strain curves and borehole stresses. The resulting stress–strain curves are distinctively different from linear elastic-brittle and linear elastic-ideal plastic constitutive models and realistically model experimental responses of brittle rocks. We show that the onset of damage evolution leads to an inhomogeneous redistribution of material properties and stresses along the borehole wall. The classical linear elastic-brittle approach to borehole stability analysis systematically overestimates the stress concentrations on the borehole wall, because dissipative strain-softening is underestimated. The proposed damage mechanics approach explicitly models dissipative behaviour and leads to non-conservative mud window estimations. Furthermore, anisotropic rocks with preferential planes of failure, like shales, can be addressed with our model.
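The constitutive model above is anisotropic and grounded in non-equilibrium thermodynamics; the sketch below is only a minimal one-dimensional scalar-damage analogue, with made-up parameters, to illustrate the damage threshold and the dissipative strain-softening behaviour being contrasted with linear elastic-brittle models.

```python
# Minimal 1D scalar-damage sketch (not the paper's anisotropic model):
# stress = (1 - D) * E * strain, with damage D growing linearly once an
# equivalent-strain threshold is exceeded. Parameters are illustrative.
import numpy as np

E = 20e9            # Young's modulus, Pa (assumed)
eps_0 = 1.0e-3      # damage threshold strain (assumed)
eps_f = 4.0e-3      # strain at full damage (assumed)

def stress(strain: np.ndarray) -> np.ndarray:
    """Linear elasticity up to eps_0, then linear damage growth to eps_f."""
    D = np.clip((strain - eps_0) / (eps_f - eps_0), 0.0, 1.0)
    return (1.0 - D) * E * strain

eps = np.linspace(0, 5e-3, 11)
for e, s in zip(eps, stress(eps)):
    print(f"strain {e:.4f}  stress {s/1e6:7.2f} MPa")   # peak then softening
```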
Abstract:
Quantitative imaging methods to analyze cell migration assays are not standardized. Here we present a suite of two-dimensional barrier assays describing the collective spreading of an initially-confined population of 3T3 fibroblast cells. To quantify the motility rate we apply two different automatic image detection methods to locate the position of the leading edge of the spreading population after 24, 48 and 72 hours. These results are compared with a manual edge detection method where we systematically vary the detection threshold. Our results indicate that the observed spreading rates are very sensitive to the choice of image analysis tools and we show that a standard measure of cell migration can vary by as much as 25% for the same experimental images depending on the details of the image analysis tools. Our results imply that it is very difficult, if not impossible, to meaningfully compare previously published measures of cell migration since previous results have been obtained using different image analysis techniques and the details of these techniques are not always reported. Using a mathematical model, we provide a physical interpretation of our edge detection results. The physical interpretation is important since edge detection algorithms alone do not specify any physical measure, or physical definition, of the leading edge of the spreading population. Our modeling indicates that variations in the image threshold parameter correspond to a consistent variation in the local cell density. This means that varying the threshold parameter is equivalent to varying the location of the leading edge in the range of approximately 1–5% of the maximum cell density.
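To make the threshold sensitivity concrete, the sketch below applies the same idea to a synthetic one-dimensional density profile (not the study's images): the detected leading edge is the furthest position at which density still exceeds a chosen fraction of the maximum, and it shifts appreciably as that fraction moves through the 1-5% range.

```python
# Sketch of threshold sensitivity on synthetic data: locate the "leading edge"
# as the furthest position where cell density exceeds a detection threshold,
# and vary the threshold between 1% and 5% of the maximum density.
import numpy as np

x = np.linspace(0, 2000, 401)                        # distance from barrier, µm
density = 1.0 / (1.0 + np.exp((x - 1200) / 150))     # synthetic spreading front

def leading_edge(threshold_fraction: float) -> float:
    """Furthest x where density >= threshold_fraction * max density."""
    above = density >= threshold_fraction * density.max()
    return float(x[above][-1])

for frac in (0.01, 0.02, 0.05):
    print(f"threshold {frac:.0%}: edge at {leading_edge(frac):.0f} µm")
```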
Abstract:
Ambiguity resolution plays a crucial role in real-time kinematic GNSS positioning, which gives centimetre-precision positioning results if all the ambiguities in each epoch are correctly fixed to integers. However, incorrectly fixed ambiguities can result in large positioning offsets of up to several metres without notice. Hence, ambiguity validation is essential to control the ambiguity resolution quality. Currently, the most popular ambiguity validation method is the ratio test. The criterion of the ratio test is often empirically determined. An empirically determined criterion can be dangerous, because a fixed criterion cannot fit all scenarios and does not directly control the ambiguity resolution risk. In practice, depending on the underlying model strength, the ratio test criterion can be too conservative for some models and too risky for others. A more rational approach is to determine the criterion according to the underlying model and the user requirement. Miss-detected incorrect integers lead to a hazardous result, which should be strictly controlled. In ambiguity resolution, the miss-detection rate is often known as the failure rate. In this paper, a fixed failure rate ratio test method is presented and applied in the analysis of GPS and Compass positioning scenarios. The fixed failure rate approach is derived from integer aperture estimation theory, which is theoretically rigorous. In this approach, the criteria table for the ratio test is computed based on extensive data simulations. Real-time users can determine the ratio test criterion by looking up this table. This method has been applied to medium-distance GPS ambiguity resolution, but multi-constellation and high-dimensional scenarios have not been discussed so far. In this paper, a general ambiguity validation model is derived based on hypothesis testing theory, the fixed failure rate approach is introduced, and in particular the relationship between the ratio test threshold and the failure rate is examined. Finally, factors that influence the fixed failure rate ratio test threshold are discussed based on extensive data simulation. The results show that the fixed failure rate approach is a more reasonable ambiguity validation method, given a proper stochastic model.
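A minimal sketch of the ratio-test acceptance step described above: in the fixed failure rate approach the critical value would be looked up from the precomputed criteria table for the given model strength and tolerated failure rate, but here it is simply an assumed number, as are the float ambiguities and their covariance.

```python
# Ratio-test acceptance sketch: compare the quadratic forms of the best and
# second-best integer candidates and accept fixing only if their ratio clears
# a critical value (here assumed; in practice taken from a lookup table).
import numpy as np

def ratio_test(float_amb: np.ndarray, Q: np.ndarray,
               best: np.ndarray, second: np.ndarray, critical: float) -> bool:
    """Accept the best integer candidate if R2/R1 >= critical, where
    R_i = (a - z_i)^T Q^{-1} (a - z_i)."""
    Qinv = np.linalg.inv(Q)
    r1 = (float_amb - best) @ Qinv @ (float_amb - best)
    r2 = (float_amb - second) @ Qinv @ (float_amb - second)
    return (r2 / r1) >= critical

a = np.array([3.1, -1.8, 7.05])                    # float ambiguities (cycles)
Q = np.diag([0.01, 0.02, 0.015])                   # their covariance (assumed)
print(ratio_test(a, Q, best=np.round(a), second=np.array([3.0, -2.0, 8.0]),
                 critical=2.5))                    # critical value assumed
```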
Abstract:
Introduction. Calculating segmental (vertebral level-by-level) torso masses in Adolescent Idiopathic Scoliosis (AIS) patients allows the gravitational loading on the scoliotic spine during relaxed standing to be determined. This study used CT scans of AIS patients to measure segmental torso masses and explores how joint moments in the coronal plane are affected by changes in the position of the intervertebral joint's axis of rotation, particularly at the apex of a scoliotic major curve. Methods. Existing low-dose CT data from the Paediatric Spine Research Group was used to calculate vertebral level-by-level torso masses and joint torques occurring in the spine for a group of 20 female AIS patients (mean age 15.0 ± 2.7 years, mean Cobb angle 53 ± 7.1°). Image processing software ImageJ (v1.45, NIH, USA) was used to threshold the T1 to L5 CT images and calculate the segmental torso volume and mass corresponding to each vertebral level. Body segment masses for the head, neck and arms were taken from published anthropometric data. Intervertebral (IV) joint torques at each vertebral level were found using principles of static equilibrium together with the segmental body mass data. Summing the torque contributions for each level above the required joint allowed the cumulative joint torque at a particular level to be found. Since there is some uncertainty in the position of the coronal plane Instantaneous Axis of Rotation (IAR) for scoliosis patients, it was assumed the IAR was located in the centre of the IV disc. A sensitivity analysis was performed to see what effect the IAR had on the joint torques by moving it laterally 10 mm in both directions. Results. The magnitude of the torso masses from T1-L5 increased inferiorly, with a 150% increase in mean segmental torso mass from 0.6 kg at T1 to 1.5 kg at L5. The magnitudes of the calculated coronal plane joint torques during relaxed standing were typically 5-7 Nm at the apex of the curve, with the highest apex joint torque of 7 Nm being found in patient 13. Shifting the assumed IAR by 10 mm towards the convexity of the spine increased the joint torque at that level by a mean 9.0%, showing that calculated joint torques were moderately sensitive to the assumed IAR location. When the IAR midline position was moved 10 mm away from the convexity of the spine, the joint torque reduced by a mean 8.9%. Conclusion. Coronal plane joint torques as high as 7 Nm can occur during relaxed standing in scoliosis patients, which may help to explain the mechanics of AIS progression. This study provides new anthropometric reference data on vertebral level-by-level torso mass in AIS patients, which will be useful for biomechanical models of scoliosis progression and treatment. However, the CT scans were performed supine (no gravitational load on the spine) and curve magnitudes are known to be smaller than those measured in standing.
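The static-equilibrium calculation can be sketched as follows; the segment masses and lateral centroid offsets are made-up values, not the patients' CT-derived data, and the IAR shift is applied in the same spirit as the sensitivity analysis described above.

```python
# Sketch of the cumulative coronal-plane joint torque: sum, over every segment
# above the joint, of segment weight times the lateral lever arm from the
# assumed IAR to that segment's centroid. All numbers are hypothetical.
import numpy as np

g = 9.81  # m/s^2

def joint_torque(masses_kg, lateral_offsets_m, iar_shift_m=0.0):
    """Coronal joint torque (N·m) about an IAR shifted laterally by iar_shift_m."""
    lever_arms = np.asarray(lateral_offsets_m) - iar_shift_m
    return float(np.sum(np.asarray(masses_kg) * g * lever_arms))

# Hypothetical segments above an apical joint: head+neck+arms lumped together,
# then individual torso segments, with lateral centroid offsets from the IAR.
masses = [4.2, 0.6, 0.7, 0.8, 0.9]            # kg
offsets = [0.10, 0.09, 0.11, 0.12, 0.10]      # m

for shift in (-0.010, 0.0, 0.010):            # IAR moved 10 mm either way
    print(f"IAR shift {shift*1000:+.0f} mm: {joint_torque(masses, offsets, shift):.2f} N·m")
```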
Abstract:
Experimentally, hydrogen-free diamond-like carbon (DLC) films were assembled by means of pulsed laser deposition (PLD), where energetic small carbon clusters were deposited on the substrate. In this paper, the chemisorption of energetic C2 and C10 clusters on the diamond (001)-(2×1) surface was investigated by molecular dynamics simulation. The influence of cluster size and impact energy on the structural character of the deposited clusters is mainly addressed. The impact energy was varied from a few tens of eV to 100 eV. The chemisorption of C10 was found to occur only when its incident energy is above a threshold value (Eth). In contrast, the C2 cluster adsorbed easily on the surface even at much lower incident energies. With increasing impact energy, the structures of the deposited C2 and C10 clusters differ from those of the free clusters. Finally, the growth of films synthesized from energetic C2 and C10 clusters was simulated. The statistics indicate that the C2 cluster has a high probability of adsorption and that films assembled from C2 present a slightly higher sp3 fraction than C10-assembled films, especially at higher impact energy and lower substrate temperature. Our results support the experimental findings. Moreover, the simulations elucidate the deposition mechanism at the atomic scale.
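The sp3 fractions quoted in such studies are typically obtained by classifying atoms by coordination number; the sketch below shows one common counting criterion (four neighbours within an assumed C-C cutoff), applied to a tiny illustrative cluster rather than a deposited film, and is not necessarily the authors' exact criterion.

```python
# Sketch: classify carbon atoms by counting neighbours within a C-C cutoff;
# four-fold coordinated atoms are counted as sp3, three-fold as sp2.
import numpy as np

CUTOFF = 1.85   # Angstrom: assumed C-C bond cutoff for counting neighbours

def sp3_fraction(positions: np.ndarray, cutoff: float = CUTOFF) -> float:
    """Fraction of atoms with exactly four neighbours within `cutoff`."""
    diff = positions[:, None, :] - positions[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    np.fill_diagonal(dist, np.inf)                 # ignore self-distances
    coordination = (dist < cutoff).sum(axis=1)
    return float(np.mean(coordination == 4))

# Tiny illustrative cluster: one carbon with four tetrahedral neighbours at 1.54 A.
bond = 1.54
directions = np.array([[1, 1, 1], [1, -1, -1], [-1, 1, -1], [-1, -1, 1]]) / np.sqrt(3)
cluster = np.vstack([[0.0, 0.0, 0.0], bond * directions])
print(sp3_fraction(cluster))    # 0.2: only the central atom is four-fold coordinated
```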
Abstract:
The impact-induced chemisorption of hydrocarbon molecules (CH3 and CH2) on the H-terminated diamond (001)-(2×1) surface was investigated by molecular dynamics simulation using the many-body Brenner potential. The deposition dynamics of the CH3 radical at impact energies of 0.1-50 eV per molecule was studied and the energy threshold for chemisorption was calculated. The impact-induced dissociation of hydrogen atoms and the dimer-opening mechanism on the surface were investigated. Furthermore, the probability of a dimer-opening event induced by chemisorption of CH, was simulated by randomly varying the impact position as well as the orientation of the molecule relative to the surface. Finally, the deposition of energetic hydrocarbons, one after another, was modeled to simulate the initial fabrication of diamond-like carbon (DLC) films. The structural characteristics of the synthesized films with different hydrogen fluxes were studied. Our results indicate that CH3, CH2 and H are highly reactive and important species in diamond growth. In particular, the fraction of C atoms in the film having sp3 hybridization is enhanced in the presence of H atoms, which is in good agreement with experimental observations.
Abstract:
The adsorption of low-energy C20 isomers on the diamond (001)-(2×1) surface was investigated by molecular dynamics simulation using the Brenner potential. The energy dependence of the chemisorption characteristics was studied. We found that an energy threshold exists for chemisorption of C20 to occur. Between 10 and 20 eV, the C20 fullerene has a high probability of chemisorption and the adsorbed cage retains its original structure, which supports the experimental observations of memory effects. However, the structures of the adsorbed bowl and ring C20 were different from their original ones. In this case, the local order in cluster-assembled films would differ from that of the free clusters.
Abstract:
The deposition of hyperthermal CH3 on the diamond (001)-(2×1) surface at room temperature has been studied by means of molecular dynamics simulation using the many-body hydrocarbon potential. An energy threshold effect was observed: with a fixed collision geometry, chemisorption can occur only when the incident energy of CH3 is above a critical value (Eth). With increasing incident energy, dissociation of hydrogen atoms from the incident molecule was observed. The chemisorption probability of CH3 as a function of its incident energy was calculated and compared with that of C2H2. We found that below 10 eV, the chemisorption probability of C2H2 is much lower than that of CH3 on the same surface. Interestingly, it is even lower than that of CH3 on a hydrogen-covered surface at the same impact energy. This indicates that the reactive CH3 radical is a more important species than C2H2 in diamond synthesis at low energy, which is in good agreement with experimental observations.
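The chemisorption probabilities in these studies are estimated by repeating impacts with randomised positions and orientations and counting sticking events; the sketch below reproduces only that counting and uncertainty step, with a toy sticking model standing in for the actual Brenner-potential trajectories. It is not an MD simulation.

```python
# Sketch: estimate a sticking probability (and its standard error) from
# repeated randomised impact trials. The per-trial outcome below is a toy
# "more likely to stick above a threshold energy" stand-in, not real MD.
import numpy as np

rng = np.random.default_rng(2)

def simulated_impact(energy_ev: float, e_threshold: float = 4.0) -> bool:
    """Toy outcome model: sticking becomes more likely above e_threshold."""
    p_stick = 1.0 / (1.0 + np.exp(-(energy_ev - e_threshold)))
    return rng.random() < p_stick

def sticking_probability(energy_ev: float, trials: int = 200) -> tuple[float, float]:
    """Estimate P(chemisorption) and its standard error from repeated impacts."""
    hits = sum(simulated_impact(energy_ev) for _ in range(trials))
    p = hits / trials
    return p, np.sqrt(p * (1 - p) / trials)

for e in (1, 2, 5, 10, 20):
    p, se = sticking_probability(float(e))
    print(f"{e:>2} eV: P = {p:.2f} ± {se:.2f}")
```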