938 results for source analysis


Relevance: 30.00%

Abstract:

The present study is an analysis of IR sources in the Alpha Persei open cluster region from the IRAS Point Source Catalog and from ground-based photometric observations. Cross-identification between stars in the region and the IRAS Point Source Catalog was performed and nine new associations were found. BVRI Johnson photometry for 24 of the matched objects has been carried out. The physical identity of the visual and IRAS sources and their relationship to the Alpha Persei open cluster are discussed.
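The cross-identification step described above can be sketched as a simple positional match within an angular tolerance. A minimal illustration (all names, coordinates and the tolerance below are made up; a real match against the IRAS Point Source Catalog would also weigh positional uncertainty ellipses):

```python
import math

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation in degrees (spherical law of cosines)."""
    r1, d1, r2, d2 = map(math.radians, (ra1, dec1, ra2, dec2))
    c = (math.sin(d1) * math.sin(d2)
         + math.cos(d1) * math.cos(d2) * math.cos(r1 - r2))
    return math.degrees(math.acos(max(-1.0, min(1.0, c))))

def cross_identify(stars, iras, tol_deg):
    """Pair every star with each IRAS source closer than tol_deg."""
    return [(name, src)
            for name, ra_s, dec_s in stars
            for src, ra_i, dec_i in iras
            if ang_sep_deg(ra_s, dec_s, ra_i, dec_i) < tol_deg]

# Toy catalogues; positions (degrees) and the tolerance are made up.
stars = [("star1", 51.08, 49.86), ("star2", 52.50, 48.00)]
iras = [("IRAS_A", 51.09, 49.87), ("IRAS_B", 60.00, 40.00)]
print(cross_identify(stars, iras, tol_deg=0.05))  # [('star1', 'IRAS_A')]
```

In practice one would also check for multiple optical counterparts inside one IRAS error ellipse before accepting an association.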

Relevance: 30.00%

Abstract:

The COMPTEL unidentified source GRO J1411-64 was observed by INTEGRAL, and its central part also by XMM-Newton. The data analysis shows no hint of new detections at hard X-rays. The upper limits in flux presented here constrain the energy spectrum of whatever was producing GRO J1411-64, imposing, in the framework of earlier COMPTEL observations, the existence of a peak in power output located somewhere between 300 and 700 keV for the so-called low state. The Circinus Galaxy is the only source detected within the 4$\sigma$ location error of GRO J1411-64, but it can be safely excluded as a possible counterpart: the extrapolation of its energy spectrum lies well below that of GRO J1411-64 at MeV energies. Twenty-two significant sources (likelihood $> 10$) were extracted and analyzed from the XMM-Newton data. Only one of these, XMMU J141255.6-635932, is spectrally compatible with GRO J1411-64, although the fact that the soft X-ray observations do not cover the full extent of the COMPTEL source position uncertainty makes an association hard to quantify and thus risky. The unique peak of the power output at high energies (hard X-rays and gamma-rays) resembles the SEDs seen in blazars or microquasars. However, an analysis using a microquasar model, consisting of a magnetized conical jet filled with relativistic electrons that radiate through synchrotron emission and inverse Compton scattering off stellar, disk, corona and synchrotron photons, shows that it is hard to comply with all observational constraints. This, together with the non-detection at hard X-rays, places an a posteriori question mark over the physical reality of this source, which is discussed in some detail.

Relevance: 30.00%

Abstract:

Leakage detection is an important issue in many chemical sensing applications. Leakage detection by thresholds suffers from important drawbacks when sensors have serious drifts or are affected by cross-sensitivities. Here we present an adaptive method based on Dynamic Principal Component Analysis that models the relationships between the sensors in the array. In normal conditions a certain variance distribution characterizes the sensor signals; in the presence of a new source of variance, however, the PCA decomposition changes drastically. To prevent the influence of sensor drifts, the model is adaptive and is calculated recursively with minimal computational effort. The behavior of this technique is studied with synthetic signals and with real signals arising from oil vapor leakage in an air compressor. The results clearly demonstrate the efficiency of the proposed method.
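A minimal sketch of the idea, assuming a recursively updated correlation model with a forgetting factor and a Q-statistic (residual) alarm; this does not reproduce the paper's exact model, and all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def leak_monitor(X, n_pc=1, lam=0.99, thresh=5.0):
    """Flag samples whose residual w.r.t. a recursively updated PCA
    model (forgetting factor lam) exceeds thresh times the running
    average residual."""
    d = X.shape[1]
    R = np.eye(d)                         # running second-moment estimate
    resid_avg = None
    flags = []
    for x in X:
        R = lam * R + (1.0 - lam) * np.outer(x, x)   # recursive update
        w, V = np.linalg.eigh(R)
        P = V[:, np.argsort(w)[::-1][:n_pc]]         # leading subspace
        r = x - P @ (P.T @ x)                        # residual part
        q = float(r @ r)                             # Q-statistic
        resid_avg = q if resid_avg is None else 0.95 * resid_avg + 0.05 * q
        flags.append(q > thresh * resid_avg)
    return np.array(flags)

# Two correlated sensors in normal operation...
base = rng.normal(size=(300, 1))
X = np.hstack([base, base]) + 0.05 * rng.normal(size=(300, 2))
# ...then a leak adds a new, independent source of variance to sensor 2.
X[250:, 1] += 3.0
alarms = leak_monitor(X)
print(alarms[250:].mean())
```

Because the model keeps adapting, a persistent leak is eventually absorbed into the correlation estimate, which is exactly why the alarm must fire on the change rather than on an absolute threshold.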

Relevance: 30.00%

Abstract:

This article analyses and discusses issues that pertain to the choice of relevant databases for assigning values to the components of evaluative likelihood ratio procedures at source level. Although several formal likelihood ratio developments currently exist, both case practitioners and recipients of expert information (such as the judiciary) may be reluctant to consider them as a framework for evaluating scientific evidence in context. The recent ruling R v T and the ensuing discussions in many forums provide illustrative examples of this. In particular, it is often felt that likelihood ratio-based reasoning amounts to an application that requires extensive quantitative information, along with means for dealing with technicalities related to the algebraic formulation of these approaches. With regard to this objection, this article proposes two distinct discussions. In the first part, it is argued that, from a methodological point of view, there are additional levels of qualitative evaluation that are worth considering before focusing on particular numerical probability assignments. Analyses are proposed that intend to show that, under certain assumptions, relative numerical values, as opposed to absolute values, may be sufficient to characterize a likelihood ratio for practical and pragmatic purposes. The feasibility of such qualitative considerations shows that the availability of hard numerical data is not a necessary requirement for implementing a likelihood ratio approach in practice. It is further argued that, even if numerical evaluations can be made, qualitative considerations may be valuable because they can further the understanding of the logical underpinnings of an assessment. In the second part, the article draws a parallel to R v T by concentrating on a practical footwear mark case received at the authors' institute. This case serves to exemplify the possible use of data from various sources in casework and helps to discuss the difficulty associated with reconciling the depth of theoretical likelihood ratio developments with the limitations in the degree to which these developments can actually be applied in practice.

Relevance: 30.00%

Abstract:

Alzheimer's disease (AD) disrupts functional connectivity in distributed cortical networks. We analyzed changes in the S-estimator, a measure of multivariate intraregional synchronization, in electroencephalogram (EEG) source space in 15 mild AD patients versus 15 age-matched controls to evaluate its potential as a marker of AD progression. All participants underwent two clinical evaluations and two EEG recording sessions, at diagnosis and after one year. The main effect of AD was hyposynchronization in the medial temporal and frontal regions and relative hypersynchronization in the posterior cingulate, precuneus, cuneus, and parietotemporal cortices. However, the S-estimator did not change over time in either group. This result motivated an analysis of rapidly versus slowly progressing AD patients. Rapidly progressing patients showed a significant reduction in synchronization with time, manifested in the left frontotemporal cortex. Thus, the evolution of source EEG synchronization over time is correlated with the rate of disease progression and should be considered a cost-effective AD biomarker.
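The S-estimator itself can be computed from the eigenvalue spectrum of the channel correlation matrix. A minimal sketch under one common definition (1 minus the normalized spectral entropy), demonstrated on synthetic signals rather than EEG source waveforms:

```python
import numpy as np

def s_estimator(signals):
    """S-estimator: 1 minus the normalized entropy of the eigenvalue
    spectrum of the channel correlation matrix (near 0 for independent
    channels, near 1 for fully synchronized ones).
    signals: array of shape (channels, samples)."""
    C = np.corrcoef(signals)
    lam = np.linalg.eigvalsh(C) / C.shape[0]   # eigenvalues, normalized
    lam = lam[lam > 1e-12]                     # convention: 0 * log 0 = 0
    return 1.0 + float(np.sum(lam * np.log(lam))) / np.log(C.shape[0])

rng = np.random.default_rng(1)
t = np.linspace(0.0, 10.0, 1000)
sync = np.vstack([np.sin(2.0 * np.pi * t)] * 4)      # identical channels
s_sync = s_estimator(sync + 0.01 * rng.normal(size=sync.shape))
s_noise = s_estimator(rng.normal(size=(4, 1000)))    # independent channels
print(s_sync, s_noise)
```

The normalization by log of the channel count makes values comparable across regions with different numbers of sources, which matters when contrasting intraregional synchronization between cortical areas.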

Relevance: 30.00%

Abstract:

Soil infiltration is a key link in the natural water cycle. Studies of soil permeability are conducive to water resources assessment and estimation, runoff regulation and management, soil erosion modeling, and nonpoint and point source pollution of farmland, among other aspects. The unequal influence of rainfall duration, rainfall intensity, antecedent soil moisture, vegetation cover, vegetation type, and slope gradient on cumulative soil infiltration was studied under simulated rainfall and different underlying surfaces. We established a six-factor model of cumulative soil infiltration using an improved back-propagation (BP) artificial neural network algorithm with a momentum term and a self-adjusting learning rate. Compared to the multiple nonlinear regression method, the stability and accuracy of the improved BP algorithm were better. Based on the improved BP model, the sensitivity index of these six factors with respect to cumulative soil infiltration was investigated. Secondly, the grey relational analysis method was used to study the grey correlations between each of these six factors and cumulative soil infiltration. The results of the two methods were very similar: rainfall duration was the most influential factor, followed by vegetation cover, vegetation type, rainfall intensity and antecedent soil moisture, while the effect of slope gradient on cumulative soil infiltration was not significant.
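The improved BP scheme named above (a momentum term plus a self-adjusting learning rate) can be sketched on synthetic stand-in data. The adjustment rule below (grow the rate while the error falls, shrink it when it rises) is one common variant and not necessarily the authors' exact formulation; the data and network size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: six input factors -> cumulative infiltration.
X = rng.uniform(size=(200, 6))
y = (X @ np.array([0.4, 0.2, 0.15, 0.1, 0.1, 0.05]))[:, None]

# One hidden layer with sigmoid units, linear output.
W1 = rng.normal(scale=0.5, size=(6, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros((1, 1))
params = [W1, b1, W2, b2]
vel = [np.zeros_like(p) for p in params]

def forward(X):
    H = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoid hidden layer
    return H, H @ W2 + b2

lr, mom, prev_err = 0.3, 0.8, np.inf
for epoch in range(2000):
    H, out = forward(X)
    err = float(np.mean((out - y) ** 2))
    # Self-adjusting learning rate: grow while the error falls,
    # shrink when it rises.
    lr = min(lr * 1.05, 1.0) if err < prev_err else lr * 0.7
    prev_err = err
    # Standard backpropagation of the MSE loss.
    d_out = 2.0 * (out - y) / len(X)
    d_H = (d_out @ W2.T) * H * (1.0 - H)
    grads = [X.T @ d_H, d_H.sum(0, keepdims=True),
             H.T @ d_out, d_out.sum(0, keepdims=True)]
    for p, g, v in zip(params, grads, vel):
        v *= mom            # momentum term keeps a fraction of the last step
        v -= lr * g
        p += v              # in-place update, so forward() sees new weights

final_err = float(np.mean((forward(X)[1] - y) ** 2))
print(final_err)
```

The momentum term damps oscillation across steep error-surface ravines, while the rate adjustment removes the need to hand-tune a single fixed learning rate.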

Relevance: 30.00%

Abstract:

During my PhD, my aim was to provide new tools to increase our capacity to analyse gene expression patterns, and to study the evolution of gene expression in animals on a large-scale basis. Gene expression patterns (when and where a gene is expressed) are a key feature in understanding gene function, notably in development. It is now clear that the evolution of developmental processes and of phenotypes is shaped both by evolution at the coding sequence level and at the gene expression level. Studying gene expression evolution in animals, with complex expression patterns over tissues and developmental time, is still challenging: no tools are available to routinely compare expression patterns between different species, with precision, and on a large-scale basis. Studies of gene expression evolution are therefore performed only on small gene datasets, or using imprecise descriptions of expression patterns. The aim of my PhD was thus to develop and use novel bioinformatics resources to study the evolution of gene expression. To this end, I developed the database Bgee (Base for Gene Expression Evolution). The approach of Bgee is to transform heterogeneous expression data (ESTs, microarrays, and in-situ hybridizations) into present/absent calls, and to annotate them to standard representations of the anatomy and development of different species (anatomical ontologies). An extensive mapping between the anatomies of species is then developed based on hypotheses of homology. These precise annotations to anatomies, and this extensive mapping between species, are the major assets of Bgee, and have required the involvement of many co-workers over the years. My main personal contribution is the development and management of both the Bgee database and the web application. Bgee is now in its ninth release, and includes an important gene expression dataset for 5 species (human, mouse, drosophila, zebrafish, Xenopus), with the most data from mouse, human and zebrafish.
Using these three species, I have conducted an analysis of gene expression evolution after duplication in vertebrates. Gene duplication is thought to be a major source of novelty in evolution, and to contribute to speciation. It has been suggested that the evolution of gene expression patterns might participate in the retention of duplicate genes. I performed a large-scale comparison of the expression patterns of hundreds of duplicated genes to those of their singleton ortholog in an outgroup, including both small- and large-scale duplicates, in three vertebrate species (human, mouse and zebrafish), using highly accurate descriptions of expression patterns. My results showed unexpectedly high rates of de novo acquisition of expression domains after duplication (neofunctionalization), at least as high as, or higher than, the rates of partitioning of expression domains (subfunctionalization). I found differences in the evolution of expression of small- and large-scale duplicates, with small-scale duplicates more prone to neofunctionalization. Duplicates with neofunctionalization seemed to evolve under more relaxed selective pressure on the coding sequence. Finally, even with abundant and precise expression data, the majority fate I recovered was neither neo- nor subfunctionalization of expression domains, suggesting a major role for other mechanisms in duplicate gene retention.
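The fate classification discussed above can be caricatured with simple set operations on expression domains; the criteria below are a deliberate simplification of how neo- and subfunctionalization are actually scored, and all gene and domain names are illustrative:

```python
def classify_duplicates(dup1, dup2, outgroup):
    """Toy classification of duplicate-gene fates by comparing
    expression-domain sets against the singleton ortholog's domains."""
    gained = (dup1 | dup2) - outgroup        # domains absent in the ortholog
    union = dup1 | dup2
    partitioned = (union >= outgroup          # ancestral domains all kept...
                   and dup1 < union and dup2 < union)  # ...but split up
    if gained and partitioned:
        return "neo+subfunctionalization"
    if gained:
        return "neofunctionalization"
    if partitioned:
        return "subfunctionalization"
    return "other"

anc = {"brain", "liver", "heart"}
# The duplicates split the ancestral domains between them:
print(classify_duplicates({"brain", "liver"}, {"heart"}, anc))
# One duplicate gains a domain ("eye") absent in the ortholog:
print(classify_duplicates({"brain", "liver", "heart", "eye"},
                          {"brain", "liver", "heart"}, anc))
```

Real analyses additionally have to handle incomplete expression data, which is one reason the "other" category dominates in practice.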

Relevance: 30.00%

Abstract:

This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) the latter individual is found as a result of a database search and (ii) the remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), the paper intends to clarify that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), or to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic, which has repeatedly demonstrated that the two requirements (i) and (ii) should not be a cause of concern.
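The argument can be illustrated numerically with a toy island model: a uniform prior over N candidate sources, a random-match probability gamma, and n excluded database members. Under these simplifying assumptions (which compress the paper's Bayesian-network treatment into one line of algebra), the exclusions slightly strengthen, rather than weaken, the case against the matching suspect:

```python
def posterior_source(gamma, N, n_excluded):
    """P(the suspect is the source | the suspect matches and n_excluded
    other database members do not match), with a uniform prior over N
    candidate sources and random-match probability gamma. Excluded
    members drop out of the alternatives; each remaining alternative
    matches with probability gamma."""
    alternatives = (N - 1 - n_excluded) * gamma
    return 1.0 / (1.0 + alternatives)

gamma, N, n_db = 1e-6, 1_000_000, 10_000
p_probable_cause = posterior_source(gamma, N, 0)      # suspect found otherwise
p_db_search = posterior_source(gamma, N, n_db - 1)    # found via the search
print(p_probable_cause, p_db_search)
```

Note that no factor of the database size appears anywhere: the likelihood ratio for the match itself is unchanged, and the search only removes alternative sources.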

Relevance: 30.00%

Abstract:

This paper describes methods to analyze the brain's electric fields recorded with multichannel electroencephalography (EEG) and demonstrates their implementation in the software CARTOOL. It focuses on the analysis of the spatial properties of these fields and on the quantitative assessment of changes of field topographies across time, experimental conditions, or populations. Topographic analyses are advantageous because they are reference independent and thus yield statistically unambiguous results. Neurophysiologically, differences in topography directly indicate changes in the configuration of the active neuronal sources in the brain. We describe global measures of field strength and field similarity, temporal segmentation based on topographic variations, topographic analysis in the frequency domain, topographic statistical analysis, and source imaging based on distributed inverse solutions. All analysis methods are implemented in a freely available academic software package called CARTOOL. Besides providing these analysis tools, CARTOOL is particularly designed to visualize the data and the analysis results using 3-dimensional display routines that allow rapid manipulation and animation of 3D images. CARTOOL is therefore a helpful tool for researchers as well as clinicians to interpret multichannel EEG and evoked potentials in a global, comprehensive, and unambiguous way.
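Two of the global topographic measures mentioned (field strength and map dissimilarity) have compact standard definitions; a minimal sketch with a four-electrode toy map:

```python
import numpy as np

def gfp(v):
    """Global Field Power: the spatial standard deviation of the
    average-referenced potentials across all electrodes."""
    v = v - v.mean()
    return float(np.sqrt(np.mean(v ** 2)))

def dissimilarity(v1, v2):
    """Global map dissimilarity: the GFP of the difference between two
    maps after each is average-referenced and scaled to unit GFP.
    0 means identical topographies, 2 means inverted topographies."""
    n1 = (v1 - v1.mean()) / gfp(v1)
    n2 = (v2 - v2.mean()) / gfp(v2)
    return gfp(n1 - n2)

map_a = np.array([1.0, -1.0, 0.5, -0.5])
# Same topography, three times the field strength: dissimilarity ~ 0.
print(dissimilarity(map_a, 3.0 * map_a))
# Polarity-inverted topography: dissimilarity ~ 2.
print(dissimilarity(map_a, -map_a))
```

Because both maps are strength-normalized, the dissimilarity isolates configuration changes of the underlying sources from mere changes in their strength, which is exactly the property the topographic analyses exploit.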

Relevance: 30.00%

Abstract:

In October 1998, Hurricane Mitch triggered numerous landslides (mainly debris flows) in Honduras and Nicaragua, resulting in a high death toll and considerable damage to property. The potential application of relatively simple and affordable spatial prediction models for landslide hazard mapping in developing countries was studied. Our attention was focused on a region in NW Nicaragua, one of the places most severely hit during the Mitch event. A landslide map was obtained at 1:10 000 scale in a Geographic Information System (GIS) environment from the interpretation of aerial photographs and detailed field work. In this map the terrain failure zones were distinguished from the areas within the reach of the mobilized materials. A Digital Elevation Model (DEM) with a pixel size of 20 m × 20 m was also employed for the study area. A comparative analysis was carried out between the terrain failures caused by Hurricane Mitch and a selection of four terrain factors, extracted from the DEM, which contributed to the terrain instability. Land propensity to failure was determined with the aid of a bivariate analysis and GIS tools and expressed in a terrain failure susceptibility map. To estimate the areas that could be affected by the path or deposition of the mobilized materials, we considered the fact that under intense rainfall events debris flows tend to travel long distances following the maximum slope and merging with the drainage network. Using the TauDEM extension for ArcGIS, we automatically generated flow lines following the maximum slope in the DEM, starting from the areas prone to failure in the terrain failure susceptibility map. The areas crossed by the flow lines from each terrain failure susceptibility class correspond to the runout susceptibility classes represented in a runout susceptibility map. The study of terrain failure and runout susceptibility enabled us to obtain a spatial prediction for landslides, which could contribute to landslide risk mitigation.
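The bivariate susceptibility analysis can be illustrated with a frequency-ratio weighting, one common bivariate choice (the abstract does not specify the exact weighting used, so this is an assumption; the raster values below are toy data):

```python
def frequency_ratio(classes, failed):
    """Bivariate susceptibility weight per factor class: the share of
    failure cells falling in a class divided by the share of all cells
    in that class. FR > 1 marks classes over-represented in failures."""
    total_cells = len(classes)
    total_failed = sum(failed)
    fr = {}
    for c in set(classes):
        in_class = [f for cl, f in zip(classes, failed) if cl == c]
        fr[c] = (sum(in_class) / total_failed) / (len(in_class) / total_cells)
    return fr

# Toy raster flattened to lists: one slope class per cell, 1 = failure cell.
slope_class = ["gentle"] * 60 + ["steep"] * 40
failures = [0] * 58 + [1, 1] + [1] * 8 + [0] * 32
fr = frequency_ratio(slope_class, failures)
print(fr)
```

Summing such per-factor weights cell by cell yields the susceptibility score that is then reclassified into the map's susceptibility classes.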

Relevance: 30.00%

Abstract:

PURPOSE: Although the central role of the immune system in tumor prognosis is generally accepted, a single robust marker is not yet available. EXPERIMENTAL DESIGN: On the basis of receiver operating characteristic analyses, robust markers were identified from a 60-gene B cell-derived metagene and analyzed in gene expression profiles of 1,810 breast cancer, 1,056 non-small cell lung carcinoma (NSCLC), 513 colorectal cancer, and 426 ovarian cancer patients. Protein and RNA levels were examined in paraffin-embedded tissue of 330 breast cancer patients. The cell types were identified with immunohistochemical costaining and confocal fluorescence microscopy. RESULTS: We identified immunoglobulin κ C (IGKC), which as a single marker is as predictive and prognostic as the entire B-cell metagene. IGKC was consistently associated with metastasis-free survival across different molecular subtypes in node-negative breast cancer (n = 965) and predicted response to anthracycline-based neoadjuvant chemotherapy (n = 845; P < 0.001). In addition, IGKC gene expression was prognostic in NSCLC and colorectal cancer; no association was observed in ovarian cancer. IGKC protein expression was significantly associated with survival in the paraffin-embedded tissues of the 330 breast cancer patients. Tumor-infiltrating plasma cells were identified as the source of IGKC expression. CONCLUSION: Our findings establish IGKC as a novel diagnostic marker for risk stratification in human cancer and support concepts to exploit the humoral immune response for anticancer therapy. IGKC was validated in several independent cohorts and performed similarly well in RNA from fresh-frozen and paraffin-embedded tissue, and at the protein level by immunostaining.
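The ROC-based marker screening mentioned in the design rests on the area under the ROC curve, which can be computed directly from its rank (Mann-Whitney) formulation; a minimal sketch with hypothetical expression values:

```python
def auc(scores_pos, scores_neg):
    """Area under the ROC curve via its rank (Mann-Whitney) form: the
    probability that a random positive outranks a random negative,
    counting ties as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical marker expression in patients without / with an event.
good_outcome = [8.2, 7.9, 6.5, 9.1]
poor_outcome = [5.1, 6.0, 6.5, 4.8]
print(auc(good_outcome, poor_outcome))  # 0.96875
```

An AUC of 0.5 means a marker carries no discriminative information, so ranking candidate genes by AUC is a natural way to pick a robust single representative out of a metagene.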

Relevance: 30.00%

Abstract:

Advanced neuroinformatics tools are required for methods of connectome mapping, analysis, and visualization. The inherent multi-modality of connectome datasets poses new challenges for data organization, integration, and sharing. We have designed and implemented the Connectome Viewer Toolkit - a set of free and extensible open source neuroimaging tools written in Python. The key components of the toolkit are as follows: (1) The Connectome File Format is an XML-based container format to standardize multi-modal data integration and structured metadata annotation. (2) The Connectome File Format Library enables management and sharing of connectome files. (3) The Connectome Viewer is an integrated research and development environment for visualization and analysis of multi-modal connectome data. The Connectome Viewer's plugin architecture supports extensions with network analysis packages and an interactive scripting shell, to enable easy development and community contributions. Integration with tools from the scientific Python community allows the leveraging of numerous existing libraries for powerful connectome data mining, exploration, and comparison. We demonstrate the applicability of the Connectome Viewer Toolkit using Diffusion MRI datasets processed by the Connectome Mapper. The Connectome Viewer Toolkit is available from http://www.cmtk.org/
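The idea of an XML-based container manifest can be sketched with the standard library; the element and attribute names below are illustrative stand-ins, not the actual Connectome File Format schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical, minimal stand-in for an XML container manifest; the real
# Connectome File Format defines its own schema, so every tag and
# attribute name here is illustrative only.
manifest = """
<connectome>
  <metadata creator="lab" species="Homo sapiens"/>
  <network name="dwi_network" src="network_dwi.graphml"/>
  <volume name="t1" src="T1.nii.gz"/>
  <surface name="lh_pial" src="lh.pial.gii"/>
</connectome>
"""

root = ET.fromstring(manifest)
# Collect every dataset entry that points at a payload file.
datasets = {child.tag: child.get("src") for child in root if child.get("src")}
print(root.find("metadata").get("species"), datasets)
```

The point of such a manifest is that networks, volumes, and surfaces from different modalities travel together with structured metadata, so downstream tools can discover what a file contains without guessing from filenames.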

Relevance: 30.00%

Abstract:

Introduction: The field of connectomic research is growing rapidly, driven by methodological advances in structural neuroimaging on many spatial scales. In particular, progress in diffusion MRI data acquisition and processing has made macroscopic structural connectivity maps available in vivo through connectome mapping pipelines (Hagmann et al, 2008), yielding so-called connectomes (Hagmann 2005, Sporns et al, 2005). These exhibit both spatial and topological information that constrain functional imaging studies and are relevant to their interpretation. The need has grown for a special-purpose software tool that supports investigations of such connectome data by both clinical researchers and neuroscientists. Methods: We developed the ConnectomeViewer, a powerful, extensible software tool for visualization and analysis in connectomic research. It uses the newly defined, container-like Connectome File Format, specifying networks (GraphML), surfaces (Gifti), volumes (Nifti), track data (TrackVis) and metadata. Using Python as the programming language allows it to be cross-platform and to access a multitude of scientific libraries. Results: Thanks to a flexible plugin architecture, functionality can easily be enhanced for specific purposes. The following features are already implemented:

* Ready use of libraries, e.g. for complex network analysis (NetworkX) and data plotting (Matplotlib); more brain connectivity measures will be implemented in a future release (Rubinov et al, 2009).
* 3D view of networks with node positioning based on the corresponding ROI surface patch; other layouts are possible.
* Picking functionality to select nodes and edges, retrieve additional node information (ConnectomeWiki), and toggle surface representations.
* Interactive thresholding and modality selection of edge properties using filters.
* Storage of arbitrary metadata for networks, allowing e.g. group-based analysis or meta-analysis.
* A Python shell for scripting; application data is exposed and can be modified or used for further post-processing.
* Visualization pipelines composed of filters and modules using Mayavi (Ramachandran et al, 2008).
* An interface to TrackVis to visualize track data; selected nodes are converted to ROIs for fiber filtering.

The Connectome Mapping Pipeline (Hagmann et al, 2008) processed 20 healthy subjects into an average connectome dataset. The figures show the ConnectomeViewer user interface using this dataset; connections are shown that occur in all 20 subjects. The dataset is freely available from the homepage (connectomeviewer.org). Conclusions: The ConnectomeViewer is a cross-platform, open-source software tool that provides extensive visualization and analysis capabilities for connectomic research. It has a modular architecture, integrates the relevant data types and is completely scriptable. Visit www.connectomics.org to get involved as a user or developer.

Relevance: 30.00%

Abstract:

The aim of this work is to present a new concept, called on-line desorption of dried blood spots (on-line DBS), allowing the direct analysis of a dried blood spot coupled to a liquid chromatography-mass spectrometry (LC/MS) device. The system is based on a stainless-steel cell that receives a blood sample (10 microL) previously spotted on filter paper. The cell is then integrated into the LC/MS system, where the analytes are desorbed out of the paper towards a column-switching system ensuring the purification and separation of the compounds before their detection on a single quadrupole MS coupled to an atmospheric pressure chemical ionisation (APCI) source. The procedure requires no sample pretreatment, even though the analysis is performed on whole blood. To demonstrate the applicability of the concept, saquinavir, imipramine, and verapamil were chosen. Despite the use of a small sampling volume and a single quadrupole detector, on-line DBS allowed the analysis of these three compounds over their therapeutic concentration ranges, from 50 to 500 ng/mL for imipramine and verapamil and from 100 to 1000 ng/mL for saquinavir. Moreover, the method showed good repeatability, with a relative standard deviation (RSD) lower than 15% at two concentration levels (low and high). Response functions were found to be linear over the therapeutic concentration range for each compound and were used to determine the concentrations in real patient samples for saquinavir. Comparison of the values found with those of a validated method used routinely in a reference laboratory showed a good correlation between the two methods. Moreover, good selectivity was observed, ensuring that no endogenous or chemical components interfered with the quantitation of the analytes. This work demonstrates the feasibility and applicability of the on-line DBS procedure for bioanalysis.
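The calibration and repeatability figures reported above rest on a linear response function and the relative standard deviation; a minimal sketch with hypothetical peak areas (the numbers are invented for illustration, not the paper's data):

```python
import numpy as np

def calibrate(conc, resp):
    """Least-squares calibration line: response = a * conc + b."""
    a, b = np.polyfit(conc, resp, 1)
    return a, b

def rsd_percent(values):
    """Relative standard deviation in percent (repeatability)."""
    return 100.0 * np.std(values, ddof=1) / np.mean(values)

# Hypothetical peak areas for an imipramine-like series, 50-500 ng/mL.
conc = np.array([50.0, 100.0, 200.0, 300.0, 400.0, 500.0])
resp = np.array([11000.0, 20500.0, 41000.0, 60500.0, 82000.0, 100000.0])
a, b = calibrate(conc, resp)
conc_est = (55000.0 - b) / a            # back-calculate an unknown sample
rsd = rsd_percent([10800.0, 11200.0, 11050.0])  # replicates at the low level
print(conc_est, rsd)
```

The back-calculation step is the same one used to read a patient concentration off the calibration line, and the replicate RSD is what the 15% acceptance criterion is checked against.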

Relevance: 30.00%

Abstract:

BACKGROUND: Scientists have long been trying to understand the molecular mechanisms of diseases in order to design preventive and therapeutic strategies. For some diseases, it has become evident that it is not enough to obtain a catalogue of the disease-related genes; one must also uncover how disruptions of molecular networks in the cell give rise to disease phenotypes. Moreover, with the unprecedented wealth of information available, even obtaining such a catalogue is extremely difficult. PRINCIPAL FINDINGS: We developed a comprehensive gene-disease association database by integrating associations from several sources that cover different biomedical aspects of diseases. In particular, we focus on the current knowledge of human genetic diseases, including mendelian, complex and environmental diseases. To assess the concept of modularity of human diseases, we performed a systematic study of the emergent properties of human gene-disease networks by means of network topology and functional annotation analysis. The results indicate a highly shared genetic origin of human diseases and show that for most diseases, including mendelian, complex and environmental diseases, functional modules exist. Moreover, a core set of biological pathways is found to be associated with most human diseases. We obtained similar results when studying clusters of diseases, suggesting that related diseases might arise from the dysfunction of common biological processes in the cell. CONCLUSIONS: For the first time, we include mendelian, complex and environmental diseases in an integrated gene-disease association database and show that the concept of modularity applies to all of them. We furthermore provide a functional analysis of disease-related modules, yielding important new biological insights that might not be discovered when considering each of the gene-disease association repositories independently. Hence, we present a suitable framework for studying how genetic and environmental factors, such as drugs, contribute to diseases. AVAILABILITY: The gene-disease networks used in this study and part of the analysis are available at http://ibi.imim.es/DisGeNET/DisGeNETweb.html#Download
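The network-topology viewpoint can be illustrated by projecting a bipartite gene-disease network onto diseases, linking diseases that share associated genes (the associations below are toy values, not DisGeNET data):

```python
from itertools import combinations

# Toy gene-disease associations (purely illustrative, not DisGeNET data).
assoc = {
    "disease_A": {"TP53", "BRCA1", "ATM"},
    "disease_B": {"BRCA1", "ATM", "CHEK2"},
    "disease_C": {"INS", "GCK"},
    "disease_D": {"GCK", "HNF1A"},
}

def disease_projection(assoc):
    """Project the bipartite gene-disease network onto diseases,
    weighting each link by the number of shared genes."""
    edges = {}
    for d1, d2 in combinations(sorted(assoc), 2):
        shared = assoc[d1] & assoc[d2]
        if shared:
            edges[(d1, d2)] = len(shared)
    return edges

edges = disease_projection(assoc)
print(edges)  # two disconnected modules: {A, B} and {C, D}
```

Clusters in this projection are one simple way that a shared genetic origin of related diseases shows up as modules in the network.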