918 results for Measure of riskiness
Abstract:
We propose a new characterization of protein structure based on the natural tetrahedral geometry of the β carbon and a new geometric measure of structural similarity, called visible volume. In our model, the side-chains are replaced by an ideal tetrahedron, the orientation of which is fixed with respect to the backbone and corresponds to the preferred rotamer directions. Visible volume is a measure of the non-occluded empty space surrounding each residue position after the side-chains have been removed. It is a robust, parameter-free, locally-computed quantity that accounts for many of the spatial constraints that are of relevance to the corresponding position in the native structure. When computing visible volume, we ignore the nature of both the residue observed at each site and the ones surrounding it. We focus instead on the space that, together, these residues could occupy. By doing so, we are able to quantify a new kind of invariance beyond the apparent variations in protein families, namely, the conservation of the physical space available at structurally equivalent positions for side-chain packing. Corresponding positions in native structures are likely to be of interest in protein structure prediction, protein design, and homology modeling. Visible volume is related to the degree of exposure of a residue position and to the actual rotamers in native proteins. In this article, we discuss the properties of this new measure, namely, its robustness with respect to both crystallographic uncertainties and naturally occurring variations in atomic coordinates, and the remarkable fact that it is essentially independent of the choice of the parameters used in calculating it. We also show how visible volume can be used to align protein structures, to identify structurally equivalent positions that are conserved in a family of proteins, and to single out positions in a protein that are likely to be of biological interest. These properties qualify visible volume as a powerful tool in a variety of applications, from the detailed analysis of protein structure to homology modeling, protein structural alignment, and the definition of better scoring functions for threading purposes.
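The abstract does not spell out the computation, but the notion of non-occluded empty space lends itself to a Monte Carlo ray-casting illustration: cast uniformly distributed rays from a residue's side-chain position and integrate the empty volume along each ray until a neighbouring pseudo-atom blocks the line of sight. The sketch below is not the authors' algorithm; the occlusion radius, cutoff distance, and sampling densities are assumed, illustrative parameters.

```python
import numpy as np

def visible_volume(center, neighbors, r_occ=3.0, r_max=8.0,
                   n_rays=512, n_steps=32, seed=0):
    """Estimate the non-occluded volume around `center` (a 3-vector) given
    `neighbors`, an (m, 3) array of occluding pseudo-atom centers.
    Purely illustrative; r_occ/r_max/n_rays/n_steps are assumed values."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_rays, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    dr = r_max / n_steps
    ts = np.linspace(dr, r_max, n_steps)                 # sample radii
    visible = 0.0
    for d in dirs:
        pts = center + ts[:, None] * d                   # points along the ray
        dist = np.linalg.norm(pts[:, None, :] - neighbors[None, :, :], axis=2)
        blocked = (dist < r_occ).any(axis=1)
        first = np.argmax(blocked) if blocked.any() else n_steps
        visible += np.sum(ts[:first] ** 2) * dr          # shell integral r^2 dr
    return visible * 4 * np.pi / n_rays                  # solid angle per ray

# toy usage: one position with two nearby occluders (coordinates in angstroms)
center = np.zeros(3)
neighbors = np.array([[4.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
print(f"visible volume ~ {visible_volume(center, neighbors):.0f} cubic angstroms")
```

Because every quantity is local to the position and its neighbours, a measure of this kind can be recomputed cheaply site by site, consistent with the abstract's description of a robust, locally-computed quantity.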
Abstract:
Oxysterols are products of cholesterol oxidation, which may be produced endogenously or may be absorbed from the diet, where they are commonly found in foods of animal origin. Oxysterols are known to be cytotoxic to cells in culture, and the mode of toxicity has been identified as apoptosis in certain cell lines. The cytotoxicity of the oxysterols 25-hydroxycholesterol (25-OH) and 7β-hydroxycholesterol (7β-OH) was examined in two human cell lines: HepG2, a hepatoma cell line, and U937, a monocytic cell line. Both 25-OH and 7β-OH were cytotoxic to the HepG2 cell line, but apoptotic cells were not detected and it was concluded that the cells underwent necrosis. 25-OH was not cytotoxic to the U937 cell line, but it was found to have a cytostatic effect. 7β-OH was shown to induce apoptosis in the U937 line. The mechanism of oxysterol-induced apoptosis has not yet been fully elucidated; however, the generation of oxidative stress and the depletion of glutathione have been associated with the initial stages of the apoptotic process. The concentration of the cellular antioxidant enzyme superoxide dismutase (SOD) was increased in association with 7β-OH-induced apoptosis in the U937 cell line. There was no change in the glutathione concentration or the SOD activity of HepG2 cells, which underwent necrosis in the presence of 7β-OH. Many apoptotic pathways center on the activation of caspase-3, the key executioner protease of apoptosis. Caspase-3 activity was also shown to increase in association with 7β-OH-induced apoptosis in U937 cells, but there was no significant increase in caspase-3 activity in HepG2 cells. DNA fragmentation is regarded as the biochemical hallmark of apoptosis; therefore, the comet assay, as a measure of DNA fragmentation, was assessed as a measure of apoptosis. The level of DNA fragmentation induced by 7β-OH, as measured using the comet assay, was similar for both cell lines. Therefore, it was concluded that the comet assay could not be used to distinguish between 7β-OH-induced apoptosis in U937 cells and 7β-OH-induced necrosis in HepG2 cells. The cytotoxicity and apoptotic potency of the oxysterols 25-OH, 7β-OH, cholesterol-5α,6α-epoxide (α-epoxide), cholesterol-5β,6β-epoxide (β-epoxide), 19-hydroxycholesterol (19-OH), and 7-ketocholesterol (7-keto) were compared in the U937 cell line. 7β-OH, β-epoxide and 7-keto were found to induce apoptosis in U937 cells. 7β-OH-induced apoptosis was associated with a decrease in the cellular glutathione concentration and an increase in SOD activity; 7-keto and β-epoxide did not affect the glutathione concentration or the SOD activity of the cells. α-Epoxide, 19-OH and 25-OH were not cytotoxic to the U937 cell line.
Abstract:
The measurement of users’ attitudes towards and confidence with using the Internet is an important yet poorly researched topic. Previous research has encountered issues that serve to obfuscate rather than clarify. Such issues include a lack of distinction between the terms ‘attitude’ and ‘self-efficacy’, the absence of a theoretical framework to measure each concept, and failure to follow well-established techniques for the measurement of both attitude and self-efficacy. Thus, the primary aim of this research was to develop two statistically reliable scales which independently measure attitudes towards the Internet and Internet self-efficacy. This research addressed the outlined issues by applying appropriate theoretical frameworks to each of the constructs under investigation. First, the well-known three-component (affect, behaviour, cognition) model of attitudes was applied to previous Internet attitude statements. The scale was distributed to four large samples of participants. Exploratory factor analyses revealed four underlying factors in the scale: Internet Affect, Internet Exhilaration, Social Benefit of the Internet and Internet Detriment. The final scale contains 21 items, demonstrates excellent reliability and achieved excellent model fit in the confirmatory factor analysis. Second, Bandura’s (1997) model of self-efficacy was followed to develop a reliable measure of Internet self-efficacy. Data collected as part of this research suggest that there are ten main activities which individuals can carry out on the Internet. Preliminary analyses suggested that self-efficacy is confounded with previous experience; thus, individuals were invited to indicate how frequently they performed the listed Internet tasks in addition to rating their feelings of self-efficacy for each task. The scale was distributed to a sample of 841 participants. Results from the analyses suggest that the more frequently an individual performs an activity on the Internet, the higher their self-efficacy score for that activity. This suggests that frequency of use ought to be taken into account in individuals’ self-efficacy scores to obtain a ‘true’ self-efficacy score for the individual. Thus, a formula was devised to incorporate participants’ previous experience of Internet tasks in their Internet self-efficacy scores. This formula was then used to obtain an overall Internet self-efficacy score for participants. Following the development of both scales, gender and age differences were explored in Internet attitudes and Internet self-efficacy scores. The analyses indicated that there were no gender differences between groups for Internet attitude or Internet self-efficacy scores. However, age group differences were identified for both attitudes and self-efficacy. Individuals aged 25-34 years achieved the highest scores on both the Internet attitude and Internet self-efficacy measures. Internet attitude and self-efficacy scores tended to decrease with age, with older participants achieving lower scores on both measures than younger participants. It was also found that the more exposure individuals had to the Internet, the higher their Internet attitude and Internet self-efficacy scores. Examination of the relationship between attitude and self-efficacy found a significant positive relationship between the two measures, suggesting that the two constructs are related. Implications of these findings and directions for future research are outlined in detail in the Discussion section of this thesis.
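The thesis's actual adjustment formula is not reproduced in the abstract, so the following is a purely hypothetical sketch of the idea it describes: weight each task's self-efficacy rating by how frequently the respondent performs that task, so the aggregate score reflects experience-backed confidence. The function name, rating scales, and weighting scheme are all invented for illustration.

```python
def adjusted_self_efficacy(ratings, frequencies, max_freq=5):
    """Hypothetical frequency-weighted Internet self-efficacy score.

    ratings:     per-task self-efficacy ratings (e.g. 1-10)
    frequencies: per-task frequency of use (e.g. 0 = never ... 5 = daily)
    """
    if len(ratings) != len(frequencies):
        raise ValueError("one frequency is required per rating")
    weights = [f / max_freq for f in frequencies]
    total_weight = sum(weights) or 1.0          # avoid division by zero
    return sum(r * w for r, w in zip(ratings, weights)) / total_weight

# ten Internet tasks, as in the thesis; the numbers are made up
ratings = [9, 8, 7, 9, 5, 6, 4, 8, 7, 3]
freqs   = [5, 5, 4, 5, 2, 3, 1, 4, 3, 0]
print(f"adjusted self-efficacy: {adjusted_self_efficacy(ratings, freqs):.2f}")
```

One consequence of this particular weighting is that never-performed tasks drop out of the average entirely, which may or may not match the behaviour of the formula actually devised in the thesis.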
Abstract:
Background: Elective repeat caesarean delivery (ERCD) rates have been increasing worldwide, thus prompting obstetric discourse on the risks and benefits for the mother and infant. Yet, these increasing rates also have major economic implications for the health care system. Given the dearth of information on the cost-effectiveness related to mode of delivery, the aim of this paper was to perform an economic evaluation on the costs and short-term maternal health consequences associated with a trial of labour after one previous caesarean delivery compared with ERCD for low risk women in Ireland. Methods: Using a decision analytic model, a cost-effectiveness analysis (CEA) was performed where the measure of health gain was quality-adjusted life years (QALYs) over a six-week time horizon. A review of international literature was conducted to derive representative estimates of adverse maternal health outcomes following a trial of labour after caesarean (TOLAC) and ERCD. Delivery/procedure costs were derived from primary data collection and combined both "bottom-up" and "top-down" costing estimations. Results: Maternal morbidities emerged in twice as many cases in the TOLAC group as in the ERCD group. However, TOLAC was found to be the more cost-effective method of delivery because it was substantially less expensive than ERCD (€1,835.06 versus €4,039.87 per woman, respectively), and QALYs were modestly higher (0.84 versus 0.70). Our findings were supported by probabilistic sensitivity analysis. Conclusions: Clinicians need to be well informed of the benefits and risks of TOLAC among low risk women. Ideally, clinician-patient discourse would address differences in length of hospital stay and postpartum recovery time. While it is premature to advocate a policy of TOLAC across maternity units, the results of the study prompt further analysis and repeat iterations, encouraging future studies to synthesize previous research and new and relevant evidence under a single comprehensive decision model.
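The dominance logic behind this conclusion can be made concrete with the abstract's own figures: TOLAC is both cheaper (€1,835.06 versus €4,039.87) and yields more QALYs (0.84 versus 0.70), so no incremental cost-effectiveness ratio (ICER) is needed. A minimal sketch of that comparison, with the ICER branch included for the non-dominant case:

```python
def compare_strategies(cost_a, qaly_a, cost_b, qaly_b):
    """Compare strategy A against comparator B in a cost-effectiveness analysis.
    The ICER is only meaningful when one strategy is costlier but more effective."""
    d_cost, d_qaly = cost_a - cost_b, qaly_a - qaly_b
    if d_cost <= 0 and d_qaly >= 0:
        return "A dominates B (no more costly, at least as effective)"
    if d_cost >= 0 and d_qaly <= 0:
        return "B dominates A"
    return f"ICER = {d_cost / d_qaly:,.0f} per QALY gained"

# figures from the abstract: TOLAC (A) versus ERCD (B), six-week horizon
print(compare_strategies(cost_a=1835.06, qaly_a=0.84,
                         cost_b=4039.87, qaly_b=0.70))
```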
Abstract:
Exogenous gene delivery to alter the function of the heart is a potential novel therapeutic strategy for treatment of cardiovascular diseases such as heart failure (HF). Before gene therapy approaches to alter cardiac function can be realized, efficient and reproducible in vivo gene-delivery techniques must be established to transfer transgenes globally to the myocardium. We have been testing the hypothesis that genetic manipulation of the myocardial beta-adrenergic receptor (beta-AR) system, which is impaired in HF, can enhance cardiac function. We have delivered adenoviral transgenes, including the human beta2-AR (Adeno-beta2AR), to the myocardium of rabbits using an intracoronary approach. Catheter-mediated Adeno-beta2AR delivery produced diffuse multichamber myocardial expression, peaking 1 week after gene transfer. A total of 5 × 10^11 viral particles of Adeno-beta2AR reproducibly produced 5- to 10-fold beta-AR overexpression in the heart, which, at 7 and 21 days after delivery, resulted in increased in vivo hemodynamic function compared with control rabbits that received an empty adenovirus. Several physiological parameters, including dP/dt_max as a measure of contractility, were significantly enhanced basally and showed increased responsiveness to the beta-agonist isoproterenol. Our results demonstrate that global myocardial in vivo gene delivery is possible and that genetic manipulation of beta-AR density can result in enhanced cardiac performance. Thus, replacement of lost receptors seen in HF may represent novel inotropic therapy.
Abstract:
Functional neuroimaging studies of episodic memory retrieval generally measure brain activity while participants remember items encountered in the laboratory ("controlled laboratory condition") or events from their own life ("open autobiographical condition"). Differences in activation between these conditions may reflect differences in retrieval processes, memory remoteness, emotional content, retrieval success, self-referential processing, visual/spatial memory, and recollection. To clarify the nature of these differences, a functional MRI study was conducted using a novel "photo paradigm," which allows greater control over the autobiographical condition, including a measure of retrieval accuracy. Undergraduate students took photos in specified campus locations ("controlled autobiographical condition"), viewed in the laboratory similar photos taken by other participants (controlled laboratory condition), and were then scanned while recognizing the two kinds of photos. Both conditions activated a common episodic memory network that included medial temporal and prefrontal regions. Compared with the controlled laboratory condition, the controlled autobiographical condition elicited greater activity in regions associated with self-referential processing (medial prefrontal cortex), visual/spatial memory (visual and parahippocampal regions), and recollection (hippocampus). The photo paradigm provides a way of investigating the functional neuroanatomy of real-life episodic memory under rigorous experimental control.
Abstract:
BACKGROUND: Automated reporting of estimated glomerular filtration rate (eGFR) is a recent advance in laboratory information technology (IT) that generates a measure of kidney function with chemistry laboratory results to aid early detection of chronic kidney disease (CKD). Because accurate diagnosis of CKD is critical to optimal medical decision-making, several clinical practice guidelines have recommended the use of automated eGFR reporting. Since its introduction, automated eGFR reporting has not been uniformly implemented by U.S. laboratories despite the growing prevalence of CKD. CKD is highly prevalent within the Veterans Health Administration (VHA), and implementation of automated eGFR reporting within this integrated healthcare system has the potential to improve care. In July 2004, the VHA adopted automated eGFR reporting through a system-wide mandate for software implementation by individual VHA laboratories. This study examines the timing of software implementation by individual VHA laboratories and factors associated with implementation. METHODS: We performed a retrospective observational study of laboratories in VHA facilities from July 2004 to September 2009. Using laboratory data, we identified the status of implementation of automated eGFR reporting for each facility and the time to actual implementation from the date the VHA adopted its policy for automated eGFR reporting. Using survey and administrative data, we assessed facility organizational characteristics associated with implementation of automated eGFR reporting via bivariate analyses. RESULTS: Of 104 VHA laboratories, 88% implemented automated eGFR reporting in existing laboratory IT systems by the end of the study period. Time to initial implementation ranged from 0.2 to 4.0 years, with a median of 1.8 years. All VHA facilities with on-site dialysis units implemented the eGFR software (52%, p<0.001). Other organizational characteristics were not statistically significant. CONCLUSIONS: The VHA did not have uniform implementation of automated eGFR reporting across its facilities. Facility-level organizational characteristics were not associated with implementation, and this suggests that decisions for implementation of this software are not related to facility-level quality improvement measures. Additional studies on implementation of laboratory IT, such as automated eGFR reporting, could identify factors that are related to more timely implementation and lead to better healthcare delivery.
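The abstract does not say which equation the VHA software computes; one widely used candidate for automated reporting of that era is the 4-variable MDRD Study equation, which needs only the serum creatinine result plus the age, sex, and race fields a laboratory IT system already holds. A sketch, assuming the IDMS-traceable coefficient of 175:

```python
def egfr_mdrd(scr_mg_dl, age_years, female, black):
    """4-variable MDRD Study equation (IDMS-traceable form).
    Returns eGFR in mL/min/1.73 m^2; an automated report would attach
    this value to the chemistry panel when creatinine is resulted."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

gfr = egfr_mdrd(scr_mg_dl=1.4, age_years=67, female=False, black=False)
print(f"eGFR = {gfr:.0f} mL/min/1.73 m^2")  # sustained values < 60 suggest CKD stage 3+
```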
Abstract:
BACKGROUND: While smoking is the major cause of chronic obstructive pulmonary disease (COPD), occupational exposures to vapors, gases, dusts, and fumes (VGDF) increase COPD risk. This case-control study estimated the risk of COPD attributable to occupational exposures among construction workers. METHODS: The study population included 834 cases and 1243 controls participating in a national medical screening program for older construction workers between 1997 and 2013. Qualitative exposure indices were developed based on lifetime work and exposure histories. RESULTS: Approximately 18% (95% CI = 2-24%) of COPD risk can be attributed to construction-related exposures, which are additive to the risk contributed by smoking. A measure of all VGDF exposures combined was a strong predictor of COPD risk. CONCLUSIONS: Construction workers are at increased risk of COPD as a result of broad and complex effects of many exposures acting independently or interactively. Control methods should be implemented to prevent worker exposures, and smoking cessation should be promoted.
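The attributable-risk estimate can be illustrated with Levin's population attributable fraction, with the odds ratio standing in for the relative risk, as is common in case-control work. The prevalence and odds ratio below are hypothetical inputs chosen only to land near the reported 18%; they are not values from the study.

```python
def attributable_fraction(prevalence, relative_risk):
    """Levin's formula: PAF = p(RR - 1) / (1 + p(RR - 1)).
    In case-control designs the odds ratio commonly substitutes for RR."""
    excess = prevalence * (relative_risk - 1.0)
    return excess / (1.0 + excess)

# hypothetical: 60% of workers ever exposed to VGDF on the job, OR ~ 1.4
paf = attributable_fraction(prevalence=0.60, relative_risk=1.4)
print(f"PAF ~ {paf:.0%}")   # ~19%, in the neighborhood of the reported 18%
```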
Abstract:
Intraoperative assessment of surgical margins is critical to ensuring residual tumor does not remain in a patient. Previously, we developed a fluorescence structured illumination microscope (SIM) system with a single-shot field of view (FOV) of 2.1 × 1.6 mm (3.4 mm²) and sub-cellular resolution (4.4 μm). The goal of this study was to test the utility of this technology for the detection of residual disease in a genetically engineered mouse model of sarcoma. Primary soft tissue sarcomas were generated in the hindlimb, and after the tumor was surgically removed, the relevant margin was stained with acridine orange (AO), a vital stain that brightly stains cell nuclei and fibrous tissues. The tissues were imaged with the SIM system with the primary goal of visualizing fluorescent features from tumor nuclei. Given the heterogeneity of the background tissue (presence of adipose tissue and muscle), an algorithm known as maximally stable extremal regions (MSER) was optimized and applied to the images to specifically segment nuclear features. A logistic regression model was used to classify a tissue site as positive or negative from the area fraction and shape of the segmented features, and the resulting receiver operating characteristic (ROC) curve was generated by varying the probability threshold. Based on the ROC curves, the model was able to classify tumor and normal tissue with 77% sensitivity and 81% specificity (Youden's index). For an unbiased measure of the model performance, it was applied to a separate validation dataset, which resulted in 73% sensitivity and 80% specificity. When this approach was applied to representative whole margins, for a tumor probability threshold of 50%, only 1.2% of all regions from the negative margin exceeded this threshold, while over 14.8% of all regions from the positive margin exceeded this threshold.
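A minimal sketch of an MSER-plus-logistic-regression pipeline of this kind, using OpenCV's MSER detector and scikit-learn on synthetic image tiles; the detector thresholds, the two features, and the tile generator are assumptions for illustration, not the study's tuned parameters.

```python
import cv2
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

def nuclear_features(gray):
    """Segment bright blob-like regions with MSER and summarize them as
    (area fraction, mean compactness): how much is stained and how
    nucleus-like the stained regions are."""
    mser = cv2.MSER_create(5, 20, 2000)          # delta, min_area, max_area
    regions, _ = mser.detectRegions(gray)
    if not regions:
        return np.array([0.0, 0.0])
    area_fraction = sum(len(r) for r in regions) / gray.size
    shapes = []
    for r in regions:
        hull = cv2.convexHull(r.reshape(-1, 1, 2))
        a, p = cv2.contourArea(hull), cv2.arcLength(hull, True)
        if a > 0:
            shapes.append(p * p / a)             # ~4*pi for a circle
    return np.array([area_fraction, np.mean(shapes) if shapes else 0.0])

rng = np.random.default_rng(1)
def fake_tile(n_nuclei):
    """Synthetic stand-in for an AO-stained tile: bright disks on background."""
    img = np.zeros((128, 128), np.uint8)
    for _ in range(n_nuclei):
        cx, cy = rng.integers(10, 118, size=2)
        cv2.circle(img, (int(cx), int(cy)), int(rng.integers(3, 6)), 255, -1)
    return img

# dense-nuclei tiles play "tumor" (label 1), sparse tiles play "normal" (0)
X = np.array([nuclear_features(fake_tile(n)) for n in [40] * 30 + [5] * 30])
y = np.array([1] * 30 + [0] * 30)
clf = LogisticRegression().fit(X, y)
fpr, tpr, _ = roc_curve(y, clf.predict_proba(X)[:, 1])
j = np.argmax(tpr - fpr)                         # Youden's index
print(f"best operating point: sensitivity={tpr[j]:.2f}, specificity={1 - fpr[j]:.2f}")
```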
Abstract:
Computer based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, and in cabin crew training and post-mortem accident investigation. As the risk of personal injury and the costs involved in performing large-scale evacuation experiments for the next generation 'Ultra High Capacity Aircraft' (UHCA) are expected to be high, the development and use of these evacuation modelling tools may become essential if these aircraft are to prove a viable reality. This paper describes the capabilities and limitations of the airEXODUS evacuation model and some attempts at validation, including its successful application to the prediction of a recent certification trial, prior to the actual trial taking place. Also described is a newly defined parameter known as OPS, which can be used as a measure of evacuation trial optimality. In addition, sample evacuation simulations in the presence of fire atmospheres are described.
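The abstract does not define OPS. In the evacuation-modelling literature, measures of this kind are computed from per-exit finishing times relative to the total evacuation time, reaching 0 when every exit finishes simultaneously (a balanced, near-optimal trial) and approaching 1 as the evacuation comes to rely on a single exit. A sketch of one such formulation, which may differ in detail from the published definition:

```python
def ops(exit_finish_times):
    """Optimality performance statistic in the spirit of airEXODUS's OPS:
    OPS = sum_i (TET - ET_i) / ((n - 1) * TET), where ET_i is the time the
    last person uses exit i and TET is the total evacuation time.  The exact
    published form may differ; this is one common formulation."""
    n = len(exit_finish_times)
    tet = max(exit_finish_times)          # total evacuation time
    if n < 2 or tet == 0:
        return 0.0
    return sum(tet - t for t in exit_finish_times) / ((n - 1) * tet)

# balanced four-exit trial versus one that leans on two exits
print(f"{ops([70, 72, 69, 71]):.2f}")     # ~0.03 -> near-optimal exit usage
print(f"{ops([90, 30, 25, 88]):.2f}")     # ~0.47 -> strongly sub-optimal
```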
Abstract:
Computer based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria, and in cabin crew training and post-mortem accident investigation. As the risk of personal injury and the costs involved in performing large-scale evacuation experiments for the next generation ultra high capacity aircraft (UHCA) are expected to be high, the development and use of these evacuation modelling tools may become essential if these aircraft are to prove a viable reality. This paper describes the capabilities and limitations of the airEXODUS evacuation model and some attempts at validation, including its successful application to the prediction of a recent certification trial, prior to the actual trial taking place. Also described is a newly defined performance parameter known as OPS that can be used as a measure of evacuation trial optimality. In addition, sample evacuation simulations in the presence of fire atmospheres are described.
Abstract:
Computer based mathematical models describing the aircraft evacuation process have a vital role to play in the design and development of safer aircraft, in the implementation of safer and more rigorous certification criteria and in post-mortem accident investigation. As the risk of personal injury and costs involved in performing large-scale evacuation experiments for the next generation 'Ultra High Capacity Aircraft' (UHCA) are expected to be high, the development and use of these evacuation modelling tools may become essential if these aircraft are to prove a viable reality. In this paper the capabilities and limitations of the airEXODUS evacuation model are described. Its successful application to the prediction of a recent certification trial, prior to the actual trial taking place, is described. Also described is a newly defined parameter known as OPS which can be used as a measure of evacuation trial optimality. Finally, the data requirements of aircraft evacuation models are discussed, along with several projects currently underway at the University of Greenwich designed to obtain these data. Included in this discussion is a description of the AASK (Aircraft Accident Statistics and Knowledge) database, which contains detailed information from aircraft accident survivors.
Abstract:
The rotating-frame nuclear magnetic relaxation rate of spins diffusing on a disordered lattice has been calculated by Monte Carlo methods. The disorder includes not only variation in the distances between neighbouring spin sites but also variation in the hopping rate associated with each site. The presence of the disorder, particularly the hopping rate disorder, causes changes in the time-dependent spin correlation functions which translate into asymmetry in the characteristic peak in the temperature dependence of the dipolar relaxation rate. The results may be used to deduce the average hopping rate from the relaxation, but the effect is not sufficiently marked to enable the distribution of the hopping rates to be evaluated. The distribution, which is a measure of the degree of disorder, is the more interesting feature, and it has been possible to show from the calculation that measurements of the relaxation rate as a function of the strength of the radiofrequency spin-locking magnetic field can lead to an evaluation of its width. Some experimental data on an amorphous metal-hydrogen alloy are reported which demonstrate the feasibility of this novel approach to rotating-frame relaxation in disordered materials.
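As a toy version of this calculation, the sketch below runs a kinetic Monte Carlo walker on a one-dimensional ring whose site hopping rates are drawn from a log-normal distribution, uses the site return probability as a crude stand-in for the dipolar correlation function, and evaluates the spectral density at twice the spin-locking frequency (the rotating-frame rate scales as R_1rho ~ J(2*omega_1)). The lattice size, disorder strength, and time grid are assumed values; the real calculation works with the full dipolar couplings.

```python
import numpy as np

rng = np.random.default_rng(2)

n_sites, sigma = 200, 1.0                       # ring size, disorder strength
rates = np.exp(rng.normal(0.0, sigma, n_sites)) # log-normal hopping-rate disorder
t_grid = np.linspace(0.0, 20.0, 200)

def return_correlation(n_walkers=1000):
    """C(t): probability the walker still (or again) occupies its starting
    site at time t, averaged over walkers -- a crude stand-in for the spin
    correlation function."""
    c = np.zeros_like(t_grid)
    for _ in range(n_walkers):
        site0 = site = rng.integers(n_sites)
        t, k = 0.0, 0
        while k < len(t_grid):
            dwell = rng.exponential(1.0 / rates[site])      # site-dependent wait
            while k < len(t_grid) and t_grid[k] < t + dwell:
                c[k] += (site == site0)
                k += 1
            t += dwell
            site = (site + rng.choice([-1, 1])) % n_sites   # hop to a neighbour
    return c / n_walkers

c = return_correlation()
dt = t_grid[1] - t_grid[0]

def spectral_density(omega):
    # J(omega) = 2 * integral of C(t) cos(omega t) dt  (rectangle rule)
    return 2.0 * np.sum(c * np.cos(omega * t_grid)) * dt

for omega1 in (0.1, 0.3, 1.0):                  # spin-locking field strengths
    print(f"omega1 = {omega1:.1f}:  J(2*omega1) = {spectral_density(2 * omega1):.3f}")
```

Sweeping J(2*omega_1) across locking-field strengths is the numerical analogue of the experiment the paper proposes: the shape of that curve carries the width of the hopping-rate distribution.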
Abstract:
This paper extends the standard network centrality measures of degree, closeness and betweenness to apply to groups and classes as well as individuals. The group centrality measures will enable researchers to answer such questions as ‘how central is the engineering department in the informal influence network of this company?’ or ‘among middle managers in a given organization, which are more central, the men or the women?’ With these measures we can also solve the inverse problem: given the network of ties among organization members, how can we form a team that is maximally central? The measures are illustrated using two classic network data sets. We also formalize a measure of group centrality efficiency, which indicates the extent to which a group's centrality is principally due to a small subset of its members.
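This is the framework now packaged in networkx, so the group measures can be demonstrated directly. The sketch below computes group degree centrality both from its definition (the fraction of non-members tied to at least one group member) and via the library, on a stock graph standing in for an informal influence network; the 'engineering' membership is hypothetical.

```python
import networkx as nx

def group_degree(G, group):
    """Group degree centrality: |N(S) - S| / (n - |S|), the fraction of
    non-members with at least one tie into the group S."""
    group = set(group)
    reached = set().union(*(set(G[v]) for v in group)) - group
    return len(reached) / (G.number_of_nodes() - len(group))

G = nx.karate_club_graph()          # stand-in for an informal influence network
engineering = [0, 1, 2]             # hypothetical department membership

print("by definition:", round(group_degree(G, engineering), 3))
print("networkx:     ", round(nx.group_degree_centrality(G, engineering), 3))
print("betweenness:  ", round(nx.group_betweenness_centrality(G, engineering), 3))
```

For the inverse problem of forming a maximally central team, a simple greedy heuristic adds, at each step, the node that most increases the group's centrality; that is a practical approximation, not necessarily the procedure used in the paper.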
Abstract:
High-integrity castings require sophisticated design and manufacturing procedures to ensure they are essentially macrodefect free. Unfortunately, the defects in an important class—macroporosity, misruns, and pipe shrinkage—are all functions of the interactions of free-surface flow, heat transfer, and solidification in complex geometries. Because these defects arise as an interaction of the preceding continuum phenomena, genuinely predictive models of them must represent these interactions explicitly. This work describes an attempt to model the formation of macrodefects explicitly as a function of the interacting continuum phenomena in arbitrarily complex three-dimensional geometries. The computational approach exploits a compatible set of finite volume procedures extended to unstructured meshes. The implementation of the model is described together with its testing and a measure of validation. The model demonstrates the potential to reliably predict shrinkage macroporosity, misruns, and pipe shrinkage directly as a result of interactions among free-surface fluid flow, heat transfer, and solidification.
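Nothing compact can reproduce the coupled free-surface flow, heat-transfer, and solidification model described here, but the finite-volume ingredient can be illustrated in one dimension with an explicit enthalpy-method solidification sketch; the mesh, time step, boundary conditions, and metal-like material properties are all assumed.

```python
import numpy as np

# 1-D enthalpy-method finite-volume solidification: a drastic simplification
# (no flow, structured mesh, constant properties) of the class of model used.
n, L = 50, 0.1                                   # cells, domain length [m]
dx = L / n
rho, cp, k, Lf = 7000.0, 600.0, 30.0, 2.7e5      # assumed metal-like properties
T_melt, T0, T_wall = 1500.0, 1550.0, 300.0       # melting pt, superheated melt, chill

H = np.full(n, rho * (cp * T0 + Lf))             # volumetric enthalpy [J/m^3]

def temperature(H):
    """Invert the enthalpy-temperature relation for an isothermal phase change."""
    solidus, liquidus = rho * cp * T_melt, rho * (cp * T_melt + Lf)
    T = H / (rho * cp)                            # fully solid branch
    T = np.where(H > solidus, T_melt, T)          # mushy: arrested at T_melt
    T = np.where(H > liquidus, (H - rho * Lf) / (rho * cp), T)  # fully liquid
    return T

dt = 0.4 * rho * cp * dx * dx / (2.0 * k)         # safely inside explicit limit
for _ in range(4000):
    T = temperature(H)
    Tg = np.concatenate([[T_wall], T, [T[-1]]])   # chilled left, adiabatic right
    flux = -k * np.diff(Tg) / dx                  # heat flux at each face
    H -= dt * np.diff(flux) / dx                  # finite-volume update per cell

frac_solid = np.clip((rho * (cp * T_melt + Lf) - H) / (rho * Lf), 0.0, 1.0)
print(f"after {4000 * dt:.0f} s: solid fraction ranges "
      f"{frac_solid.min():.2f} to {frac_solid.max():.2f} across the bar")
```

In the paper's setting the same conservative update runs on unstructured three-dimensional meshes coupled to free-surface flow, which is what lets the defects emerge from the interaction of the phenomena rather than from a post-hoc criterion.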