947 results for Data aggregation
Abstract:
Despite the central role of quantitative PCR (qPCR) in the quantification of mRNA transcripts, most analyses of qPCR data are still delegated to the software that comes with the qPCR apparatus. This is especially true for the handling of the fluorescence baseline. This article shows that baseline estimation errors are directly reflected in the observed PCR efficiency values and are thus propagated exponentially in the estimated starting concentrations as well as 'fold-difference' results. Because of the unknown origin and kinetics of the baseline fluorescence, the fluorescence values monitored in the initial cycles of the PCR reaction cannot be used to estimate a useful baseline value. An algorithm that estimates the baseline by reconstructing the log-linear phase downward from the early plateau phase of the PCR reaction was developed and shown to lead to very reproducible PCR efficiency values. PCR efficiency values were determined per sample by fitting a regression line to a subset of data points in the log-linear phase. The variability, as well as the bias, in qPCR results was significantly reduced when the mean of these PCR efficiencies per amplicon was used in the calculation of an estimate of the starting concentration per sample.
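The per-sample efficiency estimate described above lends itself to a brief numerical illustration. The Python sketch below fits a regression line to log-transformed fluorescence values inside an assumed log-linear window of a synthetic amplification curve and converts the slope into a PCR efficiency; the synthetic data, the fixed baseline value and the window bounds are illustrative assumptions only and do not reproduce the article's algorithm for reconstructing the baseline downward from the early plateau phase.

```python
# Minimal sketch: per-sample PCR efficiency from the log-linear phase of a
# qPCR curve. Synthetic data, baseline value and window bounds are assumed
# for illustration; this is not the article's baseline-reconstruction method.
import numpy as np

cycles = np.arange(1, 41)
true_efficiency = 1.90                       # ~90% gain per cycle (simulated)
baseline = 50.0                              # constant background fluorescence
start_conc = 1e-6                            # arbitrary fluorescence units

signal = start_conc * true_efficiency ** cycles
signal = signal / (1.0 + signal / 50000.0)   # crude plateau effect
fluorescence = baseline + signal

# 1) Subtract an estimated baseline (assumed known here).
corrected = fluorescence - baseline

# 2) Fit a regression line to log-transformed points in an assumed window.
window = (corrected > 10.0) & (corrected < 2000.0)
slope, intercept = np.polyfit(cycles[window], np.log10(corrected[window]), 1)

# 3) The slope of log10(fluorescence) vs cycle number gives the efficiency.
efficiency = 10.0 ** slope
print(f"estimated PCR efficiency: {efficiency:.2f} (simulated: {true_efficiency})")

# 4) Extrapolating back to cycle 0 yields a starting-quantity estimate, which
#    is where baseline and efficiency errors propagate exponentially.
print(f"estimated starting quantity: {10.0 ** intercept:.2e} "
      f"(simulated: {start_conc:.2e})")
```

Because the starting quantity is recovered as ten to the power of the intercept, any bias in the fitted slope, and hence in the baseline estimate, is amplified exponentially, which is the article's central point.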
Abstract:
The response to intra- and interspecific faecal assembling signals was tested in Rhodnius prolixus. Papers impregnated with excrement of R. prolixus induced the aggregation of larvae of this species, but also of those of Triatoma infestans. However, faeces belonging to T. infestans were not able to assemble larvae of R. prolixus. On the other hand, there was no response of R. prolixus to putative chemical factors from their cuticle (footprints), in contrast to T. infestans. Results are discussed as related to the ecology of both species.
Abstract:
DNA microarray technology has arguably caught the attention of the worldwide life science community and is now systematically supporting major discoveries in many fields of study. The majority of the initial technical challenges of conducting experiments are being resolved, only to be replaced with new informatics hurdles, including statistical analysis, data visualization, interpretation, and storage. Two systems of databases, one containing expression data and one containing annotation data, are quickly becoming essential knowledge repositories of the research community. The present paper surveys several databases that are considered "pillars" of research and important nodes in the network. It focuses on a generalized workflow scheme typical of microarray experiments, using two examples related to cancer research. The workflow is used to reference appropriate databases and tools for each step in the process of array experimentation. Additionally, benefits and drawbacks of current array databases are addressed, and suggestions are made for their improvement.
Abstract:
The behavioural response of Triatoma pseudomaculata to chemical substances present in their faeces or cuticle (footprints) was analyzed. Groups of larvae were simultaneously exposed to a clean filter paper and to another paper impregnated with a chemical stimulus in a circular arena. In these choice experiments, the insects aggregated significantly around papers impregnated with dry faeces. In addition, the bugs also showed a significant aggregation response to papers impregnated with compounds derived from their cuticle that were deposited by contact on the substrate. These results indicate that chemical compounds that affect the behaviour of T. pseudomaculata are present in the faeces and in the cuticle of this species. Results are discussed in relation to chemical communication in the Triatominae, as well as to the potential use of these substances in traps or sensors for the detection of this species.
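The abstract does not state which statistics were applied; purely as a generic illustration of how a two-choice arena assay of this kind (stimulus-impregnated paper versus clean control paper) can be analysed, the sketch below applies a binomial test to hypothetical larval counts.

```python
# Minimal sketch: testing for aggregation in a two-choice assay
# (stimulus-impregnated paper vs. clean control paper). The counts are
# hypothetical; the analysis actually used in the study is not given here.
from scipy.stats import binomtest

# Hypothetical outcome of one assay: larvae found on each paper at scoring time
on_treated_paper = 17
on_control_paper = 5
n_scored = on_treated_paper + on_control_paper

# Under the null hypothesis of no preference, each scored larva is equally
# likely to settle on either paper (p = 0.5).
result = binomtest(on_treated_paper, n=n_scored, p=0.5, alternative="greater")
print(f"aggregation index: {on_treated_paper / n_scored:.2f}")
print(f"one-sided binomial p-value: {result.pvalue:.4f}")
```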
Abstract:
OBJECTIVE: The optimal coronary MR angiography sequence has yet to be determined. We sought to quantitatively and qualitatively compare four coronary MR angiography sequences. SUBJECTS AND METHODS: Free-breathing coronary MR angiography was performed in 12 patients using four imaging sequences (turbo field-echo, fast spin-echo, balanced fast field-echo, and spiral turbo field-echo). Quantitative comparisons, including signal-to-noise ratio, contrast-to-noise ratio, vessel diameter, and vessel sharpness, were performed using a semiautomated analysis tool. Accuracy for detection of hemodynamically significant disease (> 50%) was assessed in comparison with radiographic coronary angiography. RESULTS: Signal-to-noise and contrast-to-noise ratios were markedly increased using the spiral (25.7 +/- 5.7 and 15.2 +/- 3.9) and balanced fast field-echo (23.5 +/- 11.7 and 14.4 +/- 8.1) sequences compared with the turbo field-echo (12.5 +/- 2.7 and 8.3 +/- 2.6) sequence (p < 0.05). Vessel diameter was smaller with the spiral sequence (2.6 +/- 0.5 mm) than with the other techniques (turbo field-echo, 3.0 +/- 0.5 mm, p = 0.6; balanced fast field-echo, 3.1 +/- 0.5 mm, p < 0.01; fast spin-echo, 3.1 +/- 0.5 mm, p < 0.01). Vessel sharpness was highest with the balanced fast field-echo sequence (61.6% +/- 8.5% compared with turbo field-echo, 44.0% +/- 6.6%; spiral, 44.7% +/- 6.5%; fast spin-echo, 18.4% +/- 6.7%; p < 0.001). The overall accuracies of the sequences were similar (range, 74% for turbo field-echo, 79% for spiral). Scanning time for the fast spin-echo sequences was longest (10.5 +/- 0.6 min), and for the spiral acquisitions was shortest (5.2 +/- 0.3 min). CONCLUSION: Advantages in signal-to-noise and contrast-to-noise ratios, vessel sharpness, and the qualitative results appear to favor spiral and balanced fast field-echo coronary MR angiography sequences, although subjective accuracy for the detection of coronary artery disease was similar to that of other sequences.
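For readers unfamiliar with the quantitative endpoints above, the sketch below shows one common way signal-to-noise and contrast-to-noise ratios are computed from region-of-interest statistics, using the standard deviation of a background (air) region as the noise estimate; the ROI values are hypothetical, and the study's semiautomated analysis tool is not reproduced here.

```python
# Minimal sketch: SNR and CNR from region-of-interest (ROI) statistics on an
# MR image, using the standard-deviation-of-background noise convention.
# ROI values are hypothetical; the study used a semiautomated analysis tool.
import numpy as np

rng = np.random.default_rng(0)
blood_roi = rng.normal(loc=520.0, scale=20.0, size=200)       # coronary lumen
myocardium_roi = rng.normal(loc=300.0, scale=20.0, size=200)  # adjacent tissue
air_roi = rng.normal(loc=0.0, scale=21.0, size=200)           # background air

noise_sd = air_roi.std(ddof=1)
snr = blood_roi.mean() / noise_sd
cnr = (blood_roi.mean() - myocardium_roi.mean()) / noise_sd

print(f"SNR (blood): {snr:.1f}")
print(f"CNR (blood vs myocardium): {cnr:.1f}")
```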
Abstract:
This paper presents general problems and approaches for spatial data analysis using machine learning algorithms. Machine learning is a very powerful approach to adaptive data analysis, modelling and visualisation. The key feature of machine learning algorithms is that they learn from empirical data and can be used in cases when the modelled environmental phenomena are hidden, nonlinear, noisy and highly variable in space and in time. Most machine learning algorithms are universal and adaptive modelling tools developed to solve the basic problems of learning from data: classification/pattern recognition, regression/mapping and probability density modelling. In the present report some of the widely used machine learning algorithms, namely artificial neural networks (ANN) of different architectures and Support Vector Machines (SVM), are adapted to the problems of the analysis and modelling of geo-spatial data. Machine learning algorithms have an important advantage over traditional models of spatial statistics when problems are considered in high-dimensional geo-feature spaces, i.e. when the dimension of the space exceeds 5. Such features are usually generated, for example, from digital elevation models, remote sensing images, etc. An important extension of the models concerns the consideration of real-space constraints such as geomorphology, networks, and other natural structures. Recent developments in semi-supervised learning can improve the modelling of environmental phenomena by taking geo-manifolds into account. An important part of the study deals with the analysis of relevant variables and model inputs. This problem is approached using different nonlinear feature selection/feature extraction tools. To demonstrate the application of machine learning algorithms, several case studies are considered: digital soil mapping using SVM; automatic mapping of soil and water system pollution using ANN; natural hazard risk analysis (avalanches, landslides); and assessment of renewable resources (wind fields) with SVM and ANN models. The dimensionality of the spaces considered varies from 2 to more than 30. Figures 1, 2 and 3 demonstrate some results of the studies and their outputs. Finally, the results of environmental mapping are discussed and compared with traditional models of geostatistics.
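As a concrete, simplified illustration of the regression/mapping task described above, the sketch below fits an RBF-kernel Support Vector Regression model to synthetic point measurements in a geo-feature space (coordinates plus an elevation-like covariate) and predicts onto a regular grid; the data, features and hyperparameters are assumptions for illustration, not those of the report's case studies.

```python
# Minimal sketch: SVM regression ("mapping") of a spatial variable from point
# samples in a geo-feature space (x, y plus an elevation-like covariate).
# Synthetic data and hyperparameters are illustrative assumptions only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)

# Monitoring points: coordinates, a covariate and a noisy target variable
n_points = 300
xy = rng.uniform(0.0, 10.0, size=(n_points, 2))
elevation = 100.0 + 20.0 * np.sin(xy[:, 0]) + 10.0 * np.cos(xy[:, 1])
features = np.column_stack([xy, elevation])
target = (np.sin(xy[:, 0]) * np.cos(xy[:, 1])
          + 0.01 * elevation
          + rng.normal(scale=0.1, size=n_points))

# Nonlinear, adaptive model: RBF-kernel Support Vector Regression
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(features, target)

# Predict onto a regular grid to produce a map
gx, gy = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
grid_xy = np.column_stack([gx.ravel(), gy.ravel()])
grid_elev = 100.0 + 20.0 * np.sin(grid_xy[:, 0]) + 10.0 * np.cos(grid_xy[:, 1])
grid_pred = model.predict(np.column_stack([grid_xy, grid_elev]))
print("predicted map shape:", grid_pred.reshape(gx.shape).shape)
```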
Abstract:
Background: Formation of a platelet plug initiates hemostasis after vascular injury and triggers thrombosis in ischemic disease. However, the mechanisms leading to the formation of a stable thrombus are poorly understood. Connexins comprise a family of proteins that form gap junctions enabling intercellular coordination of tissue activity, a process termed gap junctional intercellular communication. Methods and Results: In the present study, we show that megakaryocytes and platelets express connexin 37 (Cx37). Deletion of the Cx37 gene in mice shortens bleeding time and increases thrombus propensity. Aggregation is increased in murine Cx37(-/-) platelets or in murine Cx37(+/+) and human platelets treated with gap junction blockers. Intracellular microinjection of neurobiotin, a Cx37-permeant tracer, revealed gap junctional intercellular communication in platelet aggregates, which was impaired in Cx37(-/-) platelets and in human platelets exposed to gap junction blockers. Finally, healthy subjects homozygous for Cx37-1019C, a prognostic marker for atherosclerosis, display increased platelet responses compared with subjects carrying the Cx37-1019T allele. Expression of these polymorphic channels in communication-deficient cells revealed a decreased permeability of Cx37-1019C channels for neurobiotin. Conclusions: We propose that the establishment of gap junctional communication between Cx37-expressing platelets provides a mechanism to limit thrombus propensity. To our knowledge, these data provide the first evidence incriminating gap junctions in the pathogenesis of thrombosis.
Abstract:
Executive Summary: The first essay of this dissertation investigates whether greater exchange rate uncertainty (i.e., variation over time in the exchange rate) fosters or depresses the foreign investment of multinational firms. In addition to the direct capital financing it supplies, foreign investment can be a source of valuable technology and know-how, which can have substantial positive effects on a host country's economic growth. Thus, it is critically important for policy makers and central bankers, among others, to understand how multinationals base their investment decisions on the characteristics of foreign exchange markets. In this essay, I first develop a theoretical framework to improve our knowledge regarding how the aggregate level of foreign investment responds to exchange rate uncertainty when an economy consists of many firms, each of which is making decisions. The analysis predicts a U-shaped effect of exchange rate uncertainty on the total level of foreign investment of the economy. That is, the effect is negative for low levels of uncertainty and positive for higher levels of uncertainty. This pattern emerges because the relationship between exchange rate volatility and the probability of investment is negative for firms with low productivity at home (i.e., firms that find it profitable to invest abroad) and the relationship is positive for firms with high productivity at home (i.e., firms that prefer exporting their product). This finding stands in sharp contrast to predictions in the existing literature that consider a single firm's decision to invest in a unique project. The main contribution of this research is to show that the aggregation over many firms produces a U-shaped pattern between exchange rate uncertainty and the probability of investment. Using data from industrialized countries for the period of 1982-2002, this essay offers a comprehensive empirical analysis that provides evidence in support of the theoretical prediction. In the second essay, I aim to explain the time variation in sovereign credit risk, which captures the risk that a government may be unable to repay its debt. The importance of correctly evaluating such a risk is illustrated by the central role of sovereign debt in previous international lending crises. In addition, sovereign debt is the largest asset class in emerging markets. In this essay, I provide a pricing formula for the evaluation of sovereign credit risk in which the decision to default on sovereign debt is made by the government. The pricing formula explains the variation across time in daily credit spreads - a widely used measure of credit risk - to a degree not offered by existing theoretical and empirical models. I use information on a country's stock market to compute the prevailing sovereign credit spread in that country. The pricing formula explains a substantial fraction of the time variation in daily credit spread changes for Brazil, Mexico, Peru, and Russia for the 1998-2008 period, particularly during the recent subprime crisis. I also show that when a government's incentive to default is allowed to depend on current economic conditions, one can best explain the level of credit spreads, especially during the recent period of financial distress. In the third essay, I show that the risk of sovereign default abroad can produce adverse consequences for the U.S. equity market through a decrease in returns and an increase in volatility.
The risk of sovereign default, which is no longer limited to emerging economies, has recently become a major concern for financial markets. While sovereign debt plays an increasing role in today's financial environment, the effects of sovereign credit risk on the U.S. financial markets have been largely ignored in the literature. In this essay, I develop a theoretical framework that explores how the risk of sovereign default abroad helps explain the level and the volatility of U.S. equity returns. The intuition for this effect is that negative economic shocks deteriorate the fiscal situation of foreign governments, thereby increasing the risk of a sovereign default that would trigger a local contraction in economic growth. The increased risk of an economic slowdown abroad amplifies the direct effect of these shocks on the level and the volatility of equity returns in the U.S. through two channels. The first channel involves a decrease in the future earnings of U.S. exporters resulting from unfavorable adjustments to the exchange rate. The second channel involves investors' incentives to rebalance their portfolios toward safer assets, which depresses U.S. equity prices. An empirical estimation of the model with monthly data for the 1994-2008 period provides evidence that the risk of sovereign default abroad generates a strong leverage effect during economic downturns, which helps to substantially explain the level and the volatility of U.S. equity returns.
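To make the first essay's aggregation argument concrete, the toy calculation below uses arbitrary functional forms in which the investment probability of low-productivity firms falls with exchange-rate volatility while that of high-productivity firms rises; averaging over a population of firms then produces the predicted U-shaped aggregate. The shares, decay rates and functional forms are illustrative assumptions, not the dissertation's model.

```python
# Toy illustration of the aggregation argument: firm-level responses of
# opposite sign can average to a U-shaped aggregate. All functional forms,
# shares and parameters are arbitrary assumptions, not the essay's model.
import numpy as np

sigma = np.linspace(0.0, 3.0, 301)     # exchange-rate volatility grid
share_low = 0.7                        # assumed share of low-productivity firms

prob_low = np.exp(-3.0 * sigma)        # low productivity: falls with volatility
prob_high = 1.0 - np.exp(-sigma)       # high productivity: rises with volatility
aggregate = share_low * prob_low + (1.0 - share_low) * prob_high

trough = sigma[np.argmin(aggregate)]
print(f"aggregate investment probability: {aggregate[0]:.2f} at sigma=0, "
      f"{aggregate.min():.2f} at its trough (sigma ~ {trough:.2f}), "
      f"{aggregate[-1]:.2f} at sigma=3")
```

In this toy setup the negative firm-level effect dominates at low volatility and the positive effect at higher volatility, which is the qualitative pattern the essay derives.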
Abstract:
Background: Gene expression analysis has emerged as a major biological research area, with real-time quantitative reverse transcription PCR (RT-QPCR) being one of the most accurate and widely used techniques for expression profiling of selected genes. In order to obtain results that are comparable across assays, a stable normalization strategy is required. In general, the normalization of PCR measurements between different samples uses one to several control genes (e.g. housekeeping genes), from which a baseline reference level is constructed. Thus, the choice of the control genes is of utmost importance, yet there is not a generally accepted standard technique for screening a large number of candidates and identifying the best ones. Results: We propose a novel approach for scoring and ranking candidate genes for their suitability as control genes. Our approach relies on publicly available microarray data and allows the combination of multiple data sets originating from different platforms and/or representing different pathologies. The use of microarray data allows the screening of tens of thousands of genes, producing very comprehensive lists of candidates. We also provide two lists of candidate control genes: one which is breast cancer-specific and one with more general applicability. Two genes from the breast cancer list which had not been previously used as control genes are identified and validated by RT-QPCR. Open source R functions are available at http://www.isrec.isb-sib.ch/~vpopovic/research/. Conclusion: We proposed a new method for identifying candidate control genes for RT-QPCR which was able to rank thousands of genes according to some predefined suitability criteria and we applied it to the case of breast cancer. We also empirically showed that translating the results from microarray to PCR platform was achievable.
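The abstract does not spell out the scoring criteria, and the authors' implementation is in the R functions linked above; purely as a generic stand-in for the idea of ranking candidate control genes by expression stability across pooled microarray data sets, the hypothetical Python sketch below scores simulated genes by their pooled standard deviation after a crude per-sample normalization.

```python
# Minimal sketch: ranking candidate control (reference) genes by expression
# stability across several microarray data sets. Data are simulated and the
# scoring rule is a generic stand-in, not the authors' published criteria.
import numpy as np

rng = np.random.default_rng(1)
n_genes = 5000
sample_sizes = [40, 60, 30]   # three simulated data sets (platforms/pathologies)

# Simulated log2 expression: per-gene baseline plus per-gene biological noise
base_level = rng.uniform(4.0, 14.0, size=n_genes)
gene_noise = rng.uniform(0.05, 2.0, size=n_genes)
datasets = [base_level[:, None]
            + rng.normal(0.0, gene_noise[:, None], size=(n_genes, n))
            for n in sample_sizes]

# Crude cross-data-set normalization: median-centre each sample (column) and
# restore a common offset so values stay on a log2-like scale.
normalized = [d - np.median(d, axis=0, keepdims=True) + np.median(d)
              for d in datasets]
pooled = np.hstack(normalized)

# Score genes: low pooled standard deviation means stable expression; also
# require a reasonably high mean expression level.
stability = pooled.std(axis=1)
mean_level = pooled.mean(axis=1)
ranked = np.argsort(stability)
top = [int(g) for g in ranked if mean_level[g] > 8.0][:10]
print("top candidate control genes (row indices):", top)
print("their pooled standard deviations:", np.round(stability[top], 3))
```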
Abstract:
This annual analysis of data provides an overview of HIV and STI epidemiology in Northern Ireland for the calendar year 2009. Information from a variety of sources is collated and analysed in detail, while any evident trends over time are highlighted with graphs and tables. As well as a general summary of STI diagnoses and a number of overall conclusions, the report looks specifically at each of the following STIs: chlamydia, gonorrhoea, genital herpes, genital warts, syphilis, lymphogranuloma venereum (LGV) and HIV.
Abstract:
Assessing the impact of cultural change on parasitism has been a central goal in archaeoparasitology. The influence of civilization and the development of empires on parasitism has not been evaluated. Presented here is a preliminary analysis of the change in human parasitism associated with the Inca conquest of the Lluta Valley in Northern Chile. Changes in parasite prevalence are described. It can be seen that the change in way of life imposed on the inhabitants of the Lluta Valley by the Incas caused an increase in parasitism.