Abstract:
BACKGROUND: Understanding cancer-related modifications to transcriptional programs requires detailed knowledge about the activation of signal-transduction pathways and gene expression programs. To investigate the mechanisms of target gene regulation by human estrogen receptor alpha (hERalpha), we combine extensive location and expression datasets with genomic sequence analysis. In particular, we study the influence of patterns of DNA occupancy by hERalpha on expression phenotypes. RESULTS: We find that strong ChIP-chip sites co-localize with strong hERalpha consensus sites and detect nucleotide bias near hERalpha sites. The localization of ChIP-chip sites relative to annotated genes shows that weak sites are enriched near transcription start sites, while stronger sites show no positional bias. Assessing the relationship between binding configurations and expression phenotypes, we find binding sites downstream of the transcription start site (TSS) to be predictors of hERalpha-mediated expression that are as good as, or better than, upstream sites. The study of FOX and SP1 cofactor sites near hERalpha ChIP sites shows that induced genes frequently have FOX or SP1 sites. Finally, we integrate these multiple datasets to define a high-confidence set of primary hERalpha target genes. CONCLUSION: Our results support a model of long-range interactions between hERalpha and the promoter-bound cofactor SP1 residing at the promoters of hERalpha target genes. FOX motifs co-occur with hERalpha motifs along responsive genes. Importantly, we show that the spatial arrangement of sites near the start sites and within the full transcript is important in determining the response to estrogen signaling.
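To make the sequence-analysis step concrete, here is a minimal, hypothetical sketch (not the authors' pipeline): it scans a toy sequence for the canonical full estrogen response element consensus, GGTCAnnnTGACC, and reports whether each hit lies upstream or downstream of an assumed transcription start site. The consensus string, coordinates and sequence are illustrative assumptions only.

```python
import re

# Canonical full ERE consensus (illustrative assumption, not taken from the paper).
ERE_PATTERN = re.compile(r"GGTCA[ACGT]{3}TGACC")

def ere_sites_relative_to_tss(sequence: str, tss_index: int) -> list[tuple[int, str]]:
    """Return (offset from TSS, 'upstream' or 'downstream') for each ERE match."""
    hits = []
    for match in ERE_PATTERN.finditer(sequence.upper()):
        offset = match.start() - tss_index
        hits.append((offset, "downstream" if offset >= 0 else "upstream"))
    return hits

if __name__ == "__main__":
    # Toy sequence with one ERE placed upstream of a hypothetical TSS at index 40.
    seq = "ATGGTCAACGTGACCTT" + "A" * 23 + "CGCGCGTATAAAGGCTTACCGGTAG"
    print(ere_sites_relative_to_tss(seq, tss_index=40))   # [(-38, 'upstream')]
```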
Abstract:
Creatine deficiency syndromes, due to deficiencies in AGAT, GAMT (creatine synthesis pathway) or SLC6A8 (creatine transporter), lead to a complete absence or a very marked decrease of creatine in the CNS, as measured by magnetic resonance spectroscopy. The brain is the main organ affected in creatine-deficient patients, who show severe neurodevelopmental delay and present neurological symptoms in early infancy. AGAT- and GAMT-deficient patients can be treated by oral creatine supplementation, which improves their neurological status, while this treatment is ineffective in SLC6A8-deficient patients. Although it has long been thought that most, if not all, brain creatine was of peripheral origin, recent years have brought evidence that creatine can cross the blood-brain barrier, albeit with poor efficiency, and that the CNS must meet part of its creatine needs through its own endogenous synthesis. Moreover, we showed very recently that in many brain structures, including cortex and basal ganglia, AGAT and GAMT, while found in every brain cell type, are not co-expressed but are instead expressed in a dissociated manner. This suggests that, to allow creatine synthesis in these structures, guanidinoacetate must be transported from AGAT- to GAMT-expressing cells, most probably through SLC6A8. This new understanding of creatine metabolism and transport in the CNS will not only allow a better comprehension of the brain consequences of creatine deficiency syndromes, but will also help decipher the roles of creatine in the CNS, not only in energy metabolism (ATP regeneration and buffering), but also in its recently suggested functions as a neurotransmitter or an osmolyte.
Abstract:
Self-potential (SP) data are of interest to vadose zone hydrology because of their direct sensitivity to water flow and ionic transport. There is unfortunately little consensus in the literature about how best to model SP data under partially saturated conditions, and different approaches (often supported by a single laboratory data set) have been proposed. We argue that this lack of agreement can largely be traced to electrode effects that have not been properly taken into account. We considered a series of drainage and imbibition experiments in which previously proposed approaches to remove electrode effects were found unlikely to provide adequate corrections. Instead, we explicitly modeled the electrode effects together with the classical SP contributions using a flow and transport model. The simulated data agreed overall with the observed SP signals and allowed the different signal contributions to be decomposed and analyzed separately. After reviewing other published experimental data, we suggest that most of them include electrode effects that have not been properly accounted for. Our results suggest that previously presented SP theory works well when considering the modeling uncertainties presently associated with electrode effects. Additional work is warranted not only to develop suitable electrodes for laboratory experiments, but also to ensure that the electrode effects that appear inevitable in longer-term experiments are predictable, so that they can be incorporated into the modeling framework.
Abstract:
Rationale: Although associated with adverse outcomes in other cardiopulmonary conditions, the prognostic value of hyponatremia, a marker of neurohormonal activation, in patients with acute pulmonary embolism (PE) is unknown. Objectives: To examine the associations between hyponatremia and mortality and hospital readmission rates for patients hospitalized with PE. Methods: We evaluated 13,728 patient discharges with a primary diagnosis of PE from 185 hospitals in Pennsylvania (January 2000 to November 2002). We used random-intercept logistic regression to assess the independent association between serum sodium levels at the time of presentation and mortality and hospital readmission within 30 days, adjusting for patient factors (race, insurance, severity of illness, use of thrombolytic therapy) and hospital factors (region, size, teaching status). Measurements and Main Results: Hyponatremia (sodium ≤135 mmol/L) was present in 2,907 patients (21.1%). Patients with a sodium level greater than 135, 130-135, and less than 130 mmol/L had a cumulative 30-day mortality of 8.0, 13.6, and 28.5% (P < 0.001), and a readmission rate of 11.8, 15.6, and 19.3% (P < 0.001), respectively. Compared with patients with a sodium greater than 135 mmol/L, the adjusted odds of dying were significantly greater for patients with a sodium of 130-135 mmol/L (odds ratio [OR], 1.53; 95% confidence interval [CI], 1.33-1.76) and a sodium less than 130 mmol/L (OR, 3.26; 95% CI, 2.48-4.29). The adjusted odds of readmission were also increased for patients with a sodium of 130-135 mmol/L (OR, 1.28; 95% CI, 1.12-1.46) and a sodium less than 130 mmol/L (OR, 1.44; 95% CI, 1.02-2.02). Conclusions: Hyponatremia is common in patients presenting with PE and is an independent predictor of short-term mortality and hospital readmission.
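As a worked illustration of how adjusted odds ratios and confidence intervals such as those above are derived from a fitted logistic model (OR = exp(beta), CI = exp(beta ± 1.96·SE)), here is a minimal sketch. The coefficient and standard error are hypothetical values chosen only to reproduce the reported OR of 3.26 (95% CI, 2.48-4.29); the published estimates come from the full random-intercept model fitted to the Pennsylvania discharge data.

```python
import math

def odds_ratio_with_ci(beta: float, se: float, z: float = 1.96) -> tuple[float, float, float]:
    """Convert a logistic-regression coefficient and its standard error into
    an odds ratio with an approximate 95% confidence interval."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

if __name__ == "__main__":
    # Hypothetical values, for illustration only (sodium < 130 mmol/L vs > 135 mmol/L).
    beta, se = 1.182, 0.140
    or_, lo, hi = odds_ratio_with_ci(beta, se)
    print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")   # OR = 3.26 (95% CI 2.48-4.29)
```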
Abstract:
Screening people without symptoms of disease is an attractive idea. Screening allows early detection of disease or elevated risk of disease, and has the potential for improved treatment and reduction of mortality. The list of future screening opportunities is set to grow because of the refinement of screening techniques, the increasing frequency of degenerative and chronic diseases, and the steadily growing body of evidence on genetic predispositions for various diseases. But how should we decide on the diseases for which screening should be done and on recommendations for how it should be implemented? We use the examples of prostate cancer and genetic screening to show the importance of considering screening as an ongoing population-based intervention with beneficial and harmful effects, and not simply the use of a test. Assessing whether screening should be recommended and implemented for any named disease is therefore a multi-dimensional task in health technology assessment. There are several countries that already use established processes and criteria to assess the appropriateness of screening. We argue that the Swiss healthcare system needs a nationwide screening commission mandated to conduct appropriate evidence-based evaluation of the impact of proposed screening interventions, to issue evidence-based recommendations, and to monitor the performance of screening programmes introduced. Without explicit processes there is a danger that beneficial screening programmes could be neglected and that ineffective, and potentially harmful, screening procedures could be introduced.
Abstract:
The aim of our work was to show how a chosen normalisation strategy can affect the outcome of quantitative gene expression studies. As an example, we analysed the expression of three genes known to be upregulated under hypoxic conditions: HIF1A, VEGF and SLC2A1 (GLUT1). Raw RT-qPCR data were normalised using two different strategies: a straightforward normalisation against a single reference gene, GAPDH, using the 2^(-ΔΔCt) algorithm, and a more complex normalisation against a normalisation factor calculated from the quantitative raw data of four previously validated reference genes. We found that the two normalisation strategies gave contradictory results: normalising against a validated set of reference genes revealed an upregulation of the three genes of interest in three post-mortem tissue samples (cardiac muscle, skeletal muscle and brain) under hypoxic conditions. Interestingly, we found a statistically significant difference in the relative transcript abundance of VEGF in cardiac muscle between donors who died of asphyxia and donors who died of cardiac death. Normalisation against GAPDH alone revealed no upregulation but, in some instances, a downregulation of the genes of interest. To further analyse this discrepancy, the stability of all reference genes used was reassessed, and the very low expression stability of GAPDH was found to originate from the co-regulation of this gene under hypoxic conditions. We concluded that GAPDH is not a suitable reference gene for the quantitative analysis of gene expression under hypoxia and that validation of reference genes is a crucial step in generating biologically meaningful data.
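To make the two normalisation strategies concrete, here is a minimal sketch with entirely hypothetical Ct values (not data from the study) and an assumed amplification efficiency of 100% (a factor of 2 per cycle): a single-reference 2^(-ΔΔCt) calculation against one gene such as GAPDH, and a normalisation factor computed as the geometric mean of the relative quantities of several validated reference genes.

```python
from math import prod

def ddct_fold_change(ct_target_treated, ct_ref_treated,
                     ct_target_control, ct_ref_control):
    """Single-reference strategy: 2^(-ΔΔCt) fold change of a target gene
    relative to one reference gene and a control condition."""
    ddct = (ct_target_treated - ct_ref_treated) - (ct_target_control - ct_ref_control)
    return 2 ** -ddct

def normalisation_factor(ct_references):
    """Multi-reference strategy: geometric mean of the relative quantities
    (2^-Ct, up to a constant) of several validated reference genes."""
    quantities = [2 ** -ct for ct in ct_references]
    return prod(quantities) ** (1 / len(quantities))

if __name__ == "__main__":
    # Hypothetical Ct values for a target gene and four reference genes.
    print(ddct_fold_change(24.0, 19.5, 26.5, 20.0))        # single-reference: 4.0-fold

    nf_hypoxic = normalisation_factor([20.1, 22.4, 18.9, 21.7])
    nf_control = normalisation_factor([20.0, 22.5, 19.0, 21.8])
    # Target quantity divided by the normalisation factor in each condition,
    # then expressed as a hypoxic/control ratio.
    print((2 ** -24.0 / nf_hypoxic) / (2 ** -26.5 / nf_control))
```

If the single reference gene is itself regulated by the condition of interest, as the abstract reports for GAPDH under hypoxia, the ΔΔCt estimate absorbs that regulation and the apparent fold change of the target gene is distorted; the multi-gene normalisation factor is less sensitive to any one unstable reference.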
Abstract:
Background: The dose-response relationship between ultraviolet (UV) exposure patterns and skin cancer occurrence is not fully understood. Sun-protection messages often focus on acute exposure, implicitly assuming that direct UV radiation is the key contributor to overall UV exposure. However, little is known about the relative contributions of the direct, diffuse and reflected radiation components. Objective: To investigate solar UV exposure patterns at different body sites with respect to the relative contributions of direct, diffuse and reflected radiation. Methods: A three-dimensional numerical model was used to assess exposure doses for various body parts and exposure scenarios of a standing individual (static and dynamic postures). The model was fed with erythemally weighted ground irradiance data for the year 2009 in Payerne, Switzerland. A year-round daily exposure (08:00-17:00 h) without protection was assumed. Results: For most anatomical sites, mean daily doses were high (typically 6.2-14.6 standard erythemal doses) and exceeded the recommended exposure values. Direct exposure was important during specific periods (e.g. midday during summer), but contributed moderately to the annual dose, ranging from 15% for vertical to 24% for horizontal body parts. Diffuse irradiation accounted for about 80% of the cumulative annual exposure dose. Acute diffuse exposures were also observed during cloudy summer days. Conclusions: The importance of diffuse UV radiation should not be underestimated when advocating preventive measures. Messages focused on avoiding acute direct exposures may be of limited efficacy in preventing skin cancers associated with chronic exposure.
Abstract:
The value of forensic results crucially depends on the propositions and the information under which they are evaluated. For example, if a full single DNA profile obtained with a contemporary marker system and matching the profile of Mr A is assessed, given the proposition that the DNA came from Mr A and the proposition that it came from an unknown person, the strength of evidence can be overwhelming (e.g., on the order of a billion). In contrast, if we assess the same result given that the DNA came from Mr A and given that it came from his twin brother (i.e., a person with the same DNA profile), the strength of evidence will be 1, and therefore neutral, unhelpful and irrelevant to the case at hand. While this understanding is probably uncontroversial and obvious to most, if not all, practitioners dealing with DNA evidence, the practical precept of not specifying an alternative source with the same characteristics as the one considered under the first proposition may be much less clear in other circumstances. During discussions with colleagues and trainees, cases have come to our attention where forensic scientists have difficulty with the formulation of propositions. It is particularly common to observe that results (e.g., observations) are included in the propositions, whereas, as argued throughout this note, they should not be. A typical example could be a case where a shoe-mark with a logo and the general pattern characteristics of a Nike Air Jordan shoe is found at the scene of a crime. A Nike Air Jordan shoe is then seized at Mr A's house and control prints of this shoe are compared to the mark. The results (e.g., a trace with this general pattern and acquired characteristics corresponding to the sole of Mr A's shoe) are then evaluated given the propositions 'The mark was left by Mr A's Nike Air Jordan shoe-sole' and 'The mark was left by an unknown Nike Air Jordan shoe'. As a consequence, the footwear examiner will not evaluate part of the observations (i.e., that the mark presents the general pattern of a Nike Air Jordan), even though these observations can be highly informative. Such examples can be found in all forensic disciplines. In this article, we present a few such examples and discuss aspects that will help forensic scientists with the formulation of propositions. In particular, we emphasise the usefulness of notation for distinguishing results that forensic scientists should evaluate from case information that the Court will evaluate.
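The effect of the chosen propositions can be expressed numerically as a likelihood ratio, i.e. the probability of the results given the first proposition divided by their probability given the alternative. The sketch below uses a hypothetical random match probability of one in a billion; the numbers are illustrative, not taken from any case.

```python
def likelihood_ratio(p_results_given_h1: float, p_results_given_h2: float) -> float:
    """Strength of evidence: probability of the results under the first
    proposition divided by their probability under the alternative."""
    return p_results_given_h1 / p_results_given_h2

if __name__ == "__main__":
    rmp = 1e-9  # hypothetical random match probability for a full profile

    # 'The DNA came from Mr A' vs 'it came from an unknown, unrelated person'
    print(likelihood_ratio(1.0, rmp))   # ~1e9: overwhelming support

    # 'The DNA came from Mr A' vs 'it came from his identical twin'
    print(likelihood_ratio(1.0, 1.0))   # 1.0: neutral, the result cannot discriminate
```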
Abstract:
The role of humans in facilitating the rapid spread of plants at a scale that is considered invasive is one manifestation of the Anthropocene, now framed as a geological period in which humans are the dominant force in landscape transformation. Invasive plant management faces intensified challenges and can no longer be viewed in terms of 'eradication' or 'restoration of original landscapes'. In this perspectives piece, we focus on the practice and experience of people engaged in invasive plant management, using examples from Australia and Canada. We show how managers 1) face several pragmatic trade-offs; 2) must reconcile diverse views, even within stakeholder groups; 3) must balance competing temporal scales; 4) encounter tensions with policy; and 5) face critical and under-acknowledged labour challenges. These themes show the variety of considerations on which invasive plant managers base complex decisions about when, where, and how to intervene. Their widespread pragmatic acceptance of small, situated gains (as well as losses) is combined with impressive long-term commitments to the task of invasives management. We suggest that the actual practice of weed management challenges those academic perspectives that still aspire to attain pristine nature.