93 results for High-dimensional data visualization
Abstract:
Somatic copy number aberrations (CNA) represent a mutation type encountered in the majority of cancer genomes. Here, we present the 2014 edition of arrayMap (http://www.arraymap.org), a publicly accessible collection of pre-processed oncogenomic array data sets and CNA profiles, representing a vast range of human malignancies. Since the initial release, we have enhanced this resource both in content and especially with regard to data mining support. The 2014 release of arrayMap contains more than 64,000 genomic array data sets, representing about 250 tumor diagnoses. Data sets included in arrayMap have been assembled from public repositories as well as additional resources, and integrated by applying custom processing pipelines. Online tools have been upgraded for more flexible array data visualization, including options for processing user-provided, non-public data sets. Data integration has been improved by mapping to multiple editions of the human reference genome, with the majority of the data now being available for the UCSC hg18 as well as GRCh37 versions. The large amount of tumor CNA data in arrayMap can be freely downloaded by users to promote data mining projects, and to explore special events such as chromothripsis-like genome patterns.
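For readers who download arrayMap segment data, the following is a minimal sketch (not part of arrayMap itself) of one way to screen CNA profiles for chromothripsis-like patterns, flagged here simply as chromosomes with many copy-number state switches; the column names, toy values and the switch threshold are illustrative assumptions.

    import pandas as pd

    # Toy stand-in for a downloaded CNA segment table (one row per segment).
    segments = pd.DataFrame({
        "sample": ["S1"] * 6,
        "chro":   ["7"] * 6,
        "start":  [1_000_000, 2_000_000, 3_000_000, 4_000_000, 5_000_000, 6_000_000],
        "end":    [1_999_999, 2_999_999, 3_999_999, 4_999_999, 5_999_999, 6_999_999],
        "status": [1, -1, 1, -1, 1, 0],  # simplified gain (1) / loss (-1) / neutral (0) calls
    })

    def switch_counts(df, threshold=10):
        # Count copy-number state changes along each chromosome of each sample.
        rows = []
        for (sample, chro), grp in df.sort_values("start").groupby(["sample", "chro"]):
            switches = int((grp["status"].diff().fillna(0) != 0).sum())
            rows.append({"sample": sample, "chro": chro, "switches": switches,
                         "ctlp_candidate": switches >= threshold})
        return pd.DataFrame(rows)

    print(switch_counts(segments))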
Abstract:
Neurocritical care depends, in part, on careful patient monitoring, but as yet there are few data on which processes are the most important to monitor, how these should be monitored, and whether monitoring these processes is cost-effective and impacts outcome. At the same time, bioinformatics is a rapidly emerging field in critical care, but as yet there is little agreement or standardization on what information is important and how it should be displayed and analyzed. The Neurocritical Care Society, in collaboration with the European Society of Intensive Care Medicine, the Society for Critical Care Medicine, and the Latin America Brain Injury Consortium, organized an international, multidisciplinary consensus conference to begin to address these needs. International experts from neurosurgery, neurocritical care, neurology, critical care, neuroanesthesiology, nursing, pharmacy, and informatics were recruited on the basis of their research, publication record, and expertise. They undertook a systematic literature review to develop recommendations about specific topics on physiologic processes important to the care of patients with disorders that require neurocritical care. This review does not make recommendations about treatment, imaging, and intraoperative monitoring. A multidisciplinary jury, selected for their expertise in clinical investigation and development of practice guidelines, guided this process. The GRADE system was used to develop recommendations based on literature review, discussion, integration of the literature with the participants' collective experience, and critical review by an impartial jury. Emphasis was placed on the principle that recommendations should be based both on data quality and on trade-offs and translation into clinical practice. Strong consideration was given to providing pragmatic guidance and recommendations for bedside neuromonitoring, even in the absence of high-quality data.
Abstract:
To compare the effect of hyperthermia on maximal oxygen uptake (VO2max) in men and women, VO2max was measured in 11 male and 11 female runners under seven conditions involving various ambient temperatures (Ta at 50% RH) and preheating designed to manipulate the esophageal (Tes) and mean skin (Tsk) temperatures at VO2max. The conditions were: 25 degrees C, no preheating (control); 25, 35, 40, and 45 degrees C, with exercise-induced preheating by a 20-min walk at approximately 33% of control VO2max; 45 degrees C, no preheating; and 45 degrees C, with passive preheating during which Tes and Tsk were increased to the same degree as at the end of the 20-min walk at 45 degrees C. Compared to VO2max (l x min(-1)) in the control condition (4.52+/-0.46 in men, 3.01+/-0.45 in women), VO2max in men and women was reduced with exercise-induced or passive preheating and increased Ta, approximately 4% at 35 degrees C, approximately 9% at 40 degrees C and approximately 18% at 45 degrees C. Percentage reductions (7-36%) in physical performance (treadmill test time to exhaustion) were strongly related to reductions in VO2max (r=0.82-0.84). The effects of hyperthermia on VO2max and physical performance in men and women were almost identical. We conclude that men and women do not differ in their thermal responses to maximal exercise, or in the relationship of hyperthermia to reductions in VO2max and physical performance at high temperature. Data are reported as mean (SD) unless otherwise stated.
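As a quick numerical illustration (not additional data from the study), applying the approximate percentage reductions quoted above to the reported control VO2max values gives the expected absolute values at each ambient temperature:

    # Approximate reductions applied to the reported control VO2max values (L/min).
    control = {"men": 4.52, "women": 3.01}
    reduction = {"35 degrees C": 0.04, "40 degrees C": 0.09, "45 degrees C": 0.18}

    for temp, frac in reduction.items():
        for group, vo2 in control.items():
            print(f"{group}, {temp}: ~{vo2 * (1 - frac):.2f} L/min ({frac:.0%} lower)")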
Abstract:
The paper proposes an approach aimed at detecting optimal model parameter combinations to achieve the most representative description of uncertainty in the model performance. A classification problem is posed to find the regions of good-fitting models according to the values of a cost function. Support Vector Machine (SVM) classification in the parameter space is applied to decide if a forward model simulation is to be computed for a particular generated model. SVM is particularly designed to tackle classification problems in high-dimensional space in a non-parametric and non-linear way. SVM decision boundaries determine the regions that are subject to the largest uncertainty in the cost function classification, and, therefore, provide guidelines for further iterative exploration of the model space. The proposed approach is illustrated by a synthetic example of fluid flow through porous media, which features a highly variable response due to the combination of parameter values.
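A minimal sketch of the general idea, assuming a scikit-learn environment: classify previously evaluated parameter combinations as good or poor fits, then use the distance to the SVM decision boundary to pick the most ambiguous candidates for the next forward-model runs. The toy cost function, threshold and dimensionality are illustrative, not taken from the paper.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    def cost(theta):
        # Hypothetical stand-in for an expensive forward-model misfit.
        return np.sum((theta - 0.3) ** 2, axis=1)

    # Initial design: parameter combinations with already-evaluated costs.
    X_train = rng.uniform(0.0, 1.0, size=(200, 5))        # 5-D parameter space
    y_train = (cost(X_train) < 0.5).astype(int)           # 1 = "good fitting" model

    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X_train, y_train)

    # Large pool of candidate models: run the forward model only where the
    # classifier is least certain, i.e. closest to the decision boundary.
    X_pool = rng.uniform(0.0, 1.0, size=(10_000, 5))
    margin = np.abs(clf.decision_function(X_pool))
    next_runs = X_pool[np.argsort(margin)[:20]]           # 20 most ambiguous candidates
    print(next_runs.shape)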
Abstract:
The mission of the Encyclopedia of DNA Elements (ENCODE) Project is to enable the scientific and medical communities to interpret the human genome sequence and apply it to understand human biology and improve health. The ENCODE Consortium is integrating multiple technologies and approaches in a collective effort to discover and define the functional elements encoded in the human genome, including genes, transcripts, and transcriptional regulatory regions, together with their attendant chromatin states and DNA methylation patterns. In the process, standards to ensure high-quality data have been implemented, and novel algorithms have been developed to facilitate analysis. Data and derived results are made available through a freely accessible database. Here we provide an overview of the project and the resources it is generating and illustrate the application of ENCODE data to interpret the human genome.
Abstract:
Since the end of the last millennium, focused ion beam scanning electron microscopy (FIB-SEM) has progressively found use in biological research. This instrument is a scanning electron microscope (SEM) with an attached gallium ion column; the two beams, electrons and ions (FIB), are focused on one coincident point. The main application is the acquisition of three-dimensional data, FIB-SEM tomography. With the ion beam, some nanometres of the surface are removed, and the remaining block-face is imaged with the electron beam in a repetitive manner. The instrument can also be used to cut open biological structures to get access to internal structures or to prepare thin lamellae for imaging by (cryo-)transmission electron microscopy. Here, we will present an overview of the development of FIB-SEM and discuss a few points about sample preparation and imaging.
Abstract:
New precise zircon U-Pb ages are proposed for the Triassic-Jurassic (Rhaetian-Hettangian) and the Hettangian-Sinemurian boundaries. The ages were obtained by ID-TIMS dating of single chemically abraded zircons from volcanic ash layers within the Pucara Group, Aramachay Formation in the Utcubamba valley, northern Peru. Ash layers situated between the last and first occurrences of boundary-defining ammonites yielded Pb-206/U-238 ages of 201.58 +/- 0.17/0.28 Ma (95% c.l., uncertainties without/with decay constant errors, respectively) for the Triassic-Jurassic and of 199.53 +/- 0.19/0.29 Ma for the Hettangian-Sinemurian boundaries. The former is established on a tuff located 1 m above the last local occurrence of the topmost Triassic genus Choristoceras, and 5 m below the Hettangian genus Psiloceras. The latter sample was obtained from a tuff collected within the Badouxia canadensis beds. Our new ages document a total duration of the Hettangian of no more than c. 2 m.y., which has fundamental implications for the interpretation and significance of the ammonite recovery after the topmost Triassic extinction. The U-Pb age is about 0.8 +/- 0.5% older than Ar-40-Ar-39 dates determined on flood basalts of the Central Atlantic Magmatic Province (CAMP). Given the widely accepted hypothesis that inaccuracies in the K-40 decay constants or physical constants create a similar bias between the two dating methods, our new U-Pb zircon age determination for the T/J boundary corroborates the hypothesis that the CAMP was emplaced at the same time and may be responsible for a major climatic turnover and mass extinction. The zircon Pb-206/U-238 age for the T/J boundary is marginally older than that of the North Mountain Basalt (Newark Supergroup, Nova Scotia, Canada), which has been dated at 201.27 +/- 0.06 Ma [Schoene et al., 2006. Geochim. Cosmochim. Acta 70, 426-445]. It will be important to look for older eruptions of the CAMP and date them precisely by U-Pb techniques while addressing all sources of systematic uncertainty to further test the hypothesis of volcanically induced climate change leading to extinction. Such high-precision, high-accuracy data will be instrumental for constraining the contemporaneity of geological events at a 100 kyr level.
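As a worked check (an illustration, not a new result), the two boundary ages quoted above imply a Hettangian duration consistent with the stated c. 2 m.y.; analytical uncertainties are combined in quadrature, since the systematic decay-constant contribution largely cancels in the difference.

    import math

    tj_age, tj_err = 201.58, 0.17  # Triassic-Jurassic boundary, Ma (analytical, 95% c.l.)
    hs_age, hs_err = 199.53, 0.19  # Hettangian-Sinemurian boundary, Ma (analytical, 95% c.l.)

    duration = tj_age - hs_age                       # 2.05 Myr
    duration_err = math.sqrt(tj_err**2 + hs_err**2)  # ~0.25 Myr
    print(f"Hettangian duration: {duration:.2f} +/- {duration_err:.2f} Myr")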
Abstract:
The present paper studies the probability of ruin of an insurer if excess of loss reinsurance with reinstatements is applied. In the setting of the classical Cramer-Lundberg risk model, piecewise deterministic Markov processes are used to describe the free surplus process in this more general situation. It is shown that the finite-time ruin probability is both the solution of a partial integro-differential equation and the fixed point of a contractive integral operator. We exploit the latter representation to develop and implement a recursive algorithm for numerical approximation of the ruin probability that involves high-dimensional integration. Furthermore, we study the behavior of the finite-time ruin probability under various levels of initial surplus and security loadings and compare the efficiency of the numerical algorithm with the computational alternative of stochastic simulation of the risk process.
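For comparison with the recursive algorithm, the stochastic-simulation alternative mentioned above can be sketched for the plain classical model as follows; this minimal illustration omits the excess of loss reinsurance with reinstatements, and all parameter values are made up.

    import numpy as np

    def finite_time_ruin_probability(u, c, lam, claim_mean, T, n_sim=20_000, seed=0):
        # Monte Carlo estimate of P(ruin before T) in the classical Cramer-Lundberg model.
        rng = np.random.default_rng(seed)
        ruined = 0
        for _ in range(n_sim):
            t, surplus = 0.0, u
            while True:
                dt = rng.exponential(1.0 / lam)          # waiting time to next claim
                if t + dt > T:                           # no further claims before horizon T
                    break
                t += dt
                surplus += c * dt                        # premiums earned since last claim
                surplus -= rng.exponential(claim_mean)   # claim payment
                if surplus < 0:                          # ruin can only occur at claim times
                    ruined += 1
                    break
        return ruined / n_sim

    # Example: initial surplus 10, Poisson(1) claims with mean 1, 20% security loading.
    print(finite_time_ruin_probability(u=10.0, c=1.2, lam=1.0, claim_mean=1.0, T=50.0))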
Abstract:
In July 2006, approximately 2 million m3 of massive limestone began to move on the east flank of the Eiger in central Switzerland. For more than two years after the initial failure, the rock mass moved at rates of up to 70 cm per day. A detailed analysis of the structures and velocities of the different moving blocks was conducted with the aid of terrestrial laser scanning. The moving rock mass included a rear block that subsided, pushing a frontal block forward. Movement directions were controlled by discontinuity sets that formed wedges bounded on one side by sub-vertical bedding planes. The instability was, until recently, buttressed by a glacier. Slope observations and results of continuum and discontinuum modeling indicate that the structure of the rock mass and topography were the main causes of the instability. Progressive weathering and mechanical fatigue of the rock mass appear to have led to the failure. A dynamic analytical model further indicates that the rockslide was primarily controlled by a reduction in the strength of discontinuities, the effects of ice deformation, and, to a limited extent, groundwater flow. This study shows that realistic and simple instability models can be constructed for rock-slope failures if high-resolution data are available.
Abstract:
The safe and responsible development of engineered nanomaterials (ENM), nanotechnology-based materials and products, together with the definition of regulatory measures and implementation of "nano"-legislation in Europe require a widely supported scientific basis and sufficient high-quality data upon which to base decisions. At the very core of such a scientific basis is a general agreement on key issues related to risk assessment of ENMs, which encompass the key parameters to characterise ENMs, appropriate methods of analysis, and the best approach to express the effect of ENMs in widely accepted dose-response toxicity tests. The following major conclusions were drawn: Due to the high batch variability of the characteristics of commercially available and, to a lesser degree, laboratory-made ENMs, it is not possible to make general statements regarding the toxicity resulting from exposure to ENMs. 1) Concomitant with using the OECD priority list of ENMs, other criteria for the selection of ENMs, such as relevance for mechanistic (scientific) studies or risk assessment-based studies, widespread availability (and thus high expected volumes of use) or consumer concern (route of consumer exposure depending on application), could be helpful. The OECD priority list focuses on the validity of OECD tests; therefore, source material will be first in scope for testing. However, for risk assessment it is much more relevant to have toxicity data from material as present in the products/matrices to which humans and the environment are exposed. 2) For most, if not all, characteristics of ENMs, standardized analytical methods, though not necessarily validated, are available. Generally these methods are only able to determine one single characteristic and some of them can be rather expensive. Practically, it is currently not feasible to fully characterise ENMs. Many techniques that are available to measure the same nanomaterial characteristic produce contrasting results (e.g. reported sizes of ENMs). It was recommended that at least two complementary techniques should be employed to determine a metric of ENMs. The first great challenge is to prioritise metrics which are relevant in the assessment of biological dose-response relations and to develop analytical methods for characterising ENMs in biological matrices. It was generally agreed that one metric is not sufficient to fully describe ENMs. 3) Characterisation of ENMs in biological matrices starts with sample preparation. It was concluded that there currently is no standard approach/protocol for sample preparation to control agglomeration/aggregation and (re)dispersion. It was recommended that harmonization should be initiated and that exchange of protocols should take place. The precise methods used to disperse ENMs should be specifically, yet succinctly, described within the experimental section of a publication. 4) ENMs need to be characterised in the matrix as it is presented to the test system (in vitro/in vivo). 5) Alternative approaches (e.g. biological or in silico systems) for the characterisation of ENMs are simply not possible with current knowledge. Contributors: Iseult Lynch, Hans Marvin, Kenneth Dawson, Markus Berges, Diane Braguer, Hugh J. Byrne, Alan Casey, Gordon Chambers, Martin Clift, Giuliano Elia, Teresa F. Fernandes, Lise Fjellsbø, Peter Hatto, Lucienne Juillerat, Christoph Klein, Wolfgang Kreyling, Carmen Nickel, and Vicki Stone.
Abstract:
This paper presents the outcomes from a workshop of the European Network on the Health and Environmental Impact of Nanomaterials (NanoImpactNet). During the workshop, 45 experts in the field of safety assessment of engineered nanomaterials addressed the need to systematically study sets of engineered nanomaterials with specific metrics to generate a data set which would allow the establishment of dose-response relations. The group concluded that international cooperation and worldwide standardization of terminology, reference materials and protocols are needed to make progress in establishing lists of essential metrics. High quality data necessitates the development of harmonized study approaches and adequate reporting of data. Priority metrics can only be based on well-characterized dose-response relations derived from the systematic study of the bio-kinetics and bio-interactions of nanomaterials at both organism and (sub)-cellular levels. In addition, increased effort is needed to develop and validate analytical methods to determine these metrics in a complex matrix.
Abstract:
The focus of my PhD research was the concept of modularity. In the last 15 years, modularity has become a classic term in different fields of biology. On the conceptual level, a module is a set of interacting elements that remain mostly independent from the elements outside of the module. I used modular analysis techniques to study gene expression evolution in vertebrates. In particular, I identified "natural" modules of gene expression in mouse and human, and I showed that expression of organ-specific and system-specific genes tends to be conserved between such distant vertebrates as mammals and fishes. Also with a modular approach, I studied patterns of developmental constraints on transcriptome evolution. I showed that neither of the two commonly accepted models of the evolution of embryonic development ("evo-devo") is exclusively valid. In particular, I found that the conservation of the sequences of regulatory regions is highest during mid-development of zebrafish, and thus supports the "hourglass model". In contrast, events of gene duplication and new gene introduction are rarest in early development, which supports the "early conservation model". In addition to the biological insights on transcriptome evolution, I have also discussed in detail the advantages of modular approaches in large-scale data analysis. Moreover, I re-analyzed several studies (published in high-ranking journals) and showed that their conclusions do not hold up under a detailed analysis. This demonstrates that complex analysis of high-throughput data requires cooperation between biologists, bioinformaticians, and statisticians.
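As a generic illustration of the modular idea (not the specific methods used in this thesis), co-expression modules can be obtained by clustering genes on a correlation-based distance; the toy expression matrix and the cutoff below are placeholders.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster
    from scipy.spatial.distance import squareform

    rng = np.random.default_rng(0)
    expr = rng.normal(size=(100, 20))       # toy matrix: 100 genes x 20 conditions

    corr = np.corrcoef(expr)                # gene-gene correlation
    dist = 1.0 - corr                       # correlation distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    modules = fcluster(Z, t=0.8, criterion="distance")
    print(np.bincount(modules)[1:])         # module sizes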
Abstract:
The present research studies the spatial patterns of the distribution of the Swiss population (DSP). This description is carried out using a wide variety of global spatial structural analysis tools, such as topological, statistical and fractal measures, which enable the estimation of the degree of spatial clustering of a point pattern. Particular attention is given to the analysis of multifractality to characterize the spatial structure of the DSP at different scales. This is achieved by measuring the generalized q-dimensions and the singularity spectrum. This research is based on high-quality data from the Swiss Population Census of the Year 2000 at a hectometric resolution (grid 100 x 100 m) issued by the Swiss Federal Statistical Office (FSO).
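A minimal sketch, not the authors' implementation, of estimating the generalized q-dimensions D_q by box counting on a gridded population count (such as the 100 x 100 m census grid mentioned above); the synthetic Poisson array stands in for the real FSO data.

    import numpy as np

    def generalized_dimensions(counts, qs, box_sizes):
        # counts: 2-D array of population counts on a regular grid.
        results = {}
        for q in qs:
            log_eps, log_mq = [], []
            for b in box_sizes:
                # Aggregate the grid into boxes of b x b cells.
                h, w = (counts.shape[0] // b) * b, (counts.shape[1] // b) * b
                boxed = counts[:h, :w].reshape(h // b, b, w // b, b).sum(axis=(1, 3))
                p = boxed[boxed > 0] / boxed.sum()     # box probabilities
                if abs(q - 1.0) < 1e-9:                # information dimension (q = 1)
                    m = np.sum(p * np.log(p))
                else:
                    m = np.log(np.sum(p ** q)) / (q - 1.0)
                log_eps.append(np.log(b))
                log_mq.append(m)
            # D_q is the slope of the partition-function scaling in log-log space.
            results[q] = np.polyfit(log_eps, log_mq, 1)[0]
        return results

    rng = np.random.default_rng(1)
    demo = rng.poisson(2.0, size=(512, 512))           # stand-in for gridded counts
    print(generalized_dimensions(demo, qs=[0, 1, 2], box_sizes=[2, 4, 8, 16, 32]))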
Abstract:
In Switzerland, organ procurement is well organized at the national level, but transplant outcomes have not been systematically monitored so far. Therefore, a novel project, the Swiss Transplant Cohort Study (STCS), was established. The STCS is a prospective multicentre study, designed as a dynamic cohort, which enrolls all solid organ recipients at the national level. The features of the STCS are a flexible patient-case system that allows capturing all transplant scenarios, and the collection of patient-specific and allograft-specific data. Beyond comprehensive clinical data, specific focus is directed at psychosocial and behavioral factors, infectious disease development, and bio-banking. Between May 2008 and the end of 2011, the six Swiss transplant centers recruited 1,677 patients involving 1,721 transplantations, and a total of 1,800 organs were implanted in 15 different transplantation scenarios. 10% of all patients underwent re-transplantation and 3% had a second transplantation, either in the past or during follow-up. 34% of all kidney allografts originated from living donation. Until the end of 2011 we observed 4,385 infection episodes in our patient population. The STCS has demonstrated the operational capability to collect high-quality data and to adequately reflect the complexity of the post-transplantation process. The STCS represents a promising novel project for comparative effectiveness research in transplantation medicine.
Abstract:
Plants have the ability to use the composition of incident light as a cue to adapt development and growth to their environment. Arabidopsis thaliana as well as many crops are best adapted to sunny habitats. When subjected to shade, these plants exhibit a variety of physiological responses collectively called the shade avoidance syndrome (SAS). It includes increased growth of the hypocotyl and petioles, decreased growth rate of cotyledons, and reduced branching and crop yield. These responses are mainly mediated by phytochrome photoreceptors, which exist either in an active, far-red light (FR)-absorbing isoform or an inactive, red light (R)-absorbing isoform. In direct sunlight, the R to FR light (R/FR) ratio is high and converts the phytochromes into their physiologically active state. The phytochromes interact with downstream transcription factors such as PHYTOCHROME INTERACTING FACTOR (PIF), which are subsequently degraded. Light filtered through a canopy is strongly depleted in R, which results in a low R/FR ratio and renders the phytochromes inactive. Protein levels of downstream transcription factors are stabilized, which initiates the expression of shade-induced genes such as HFR1, PIL1 or ATHB-2. In my thesis, I investigated transcriptional responses mediated by the SAS in whole Arabidopsis seedlings. Using microarray and chromatin immunoprecipitation data, we identified genome-wide PIF4- and PIF5-dependent shade-regulated genes as well as putative direct target genes of PIF5. This revealed evidence for a direct regulatory link between phytochrome signaling and the growth-promoting phytohormone auxin (IAA) at the level of biosynthesis, transport and signaling. Subsequently, it was shown that free IAA levels are upregulated in response to shade. It is assumed that shade-induced auxin production takes place predominantly in the cotyledons of seedlings. This implies that IAA is subsequently transported basipetally to the hypocotyl, where it enhances elongation growth. The importance of auxin transport for growth responses has been established by chemical and genetic approaches. To gain a better understanding of the spatio-temporal transcriptional regulation by shade-induced auxin, in a second project I generated an organ-specific high-throughput data set focusing on the cotyledons and hypocotyls of young Arabidopsis seedlings. Interestingly, the two organs show opposite growth regulation by shade. I first investigated the spatial transcriptional regulation of auxin-responsive genes, in order to determine how broadly gene expression patterns can be explained by the hypothesized movement of auxin from cotyledons to hypocotyls in shade. The analysis suggests that several genes are indeed regulated according to our prediction, while others are regulated in a more complex manner. In addition, analysis of the gene families of auxin biosynthetic and transport components led to the identification of family members essential for shade-induced growth responses, which were subsequently confirmed experimentally. Finally, the analysis of expression patterns identified several candidate genes that may explain aspects of the opposite growth responses of the two organs.
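A minimal sketch of the kind of intersection used to nominate putative direct targets, crossing shade-regulated genes from an expression comparison with genes bound in a ChIP experiment; gene identifiers, thresholds and values below are placeholders, not the actual data sets.

    import pandas as pd

    deg = pd.DataFrame({                           # differential expression results (toy values)
        "gene":   ["HFR1", "PIL1", "ATHB-2", "GeneX"],
        "log2fc": [2.1, 1.8, 1.5, 0.1],
        "padj":   [1e-5, 1e-4, 1e-3, 0.6],
    })
    chip_bound = {"HFR1", "PIL1", "GeneY"}         # genes with a PIF5 ChIP peak nearby

    shade_regulated = deg[(deg["padj"] < 0.05) & (deg["log2fc"].abs() > 1)]
    putative_direct = shade_regulated[shade_regulated["gene"].isin(chip_bound)]
    print(putative_direct["gene"].tolist())        # e.g. ['HFR1', 'PIL1']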