755 results for pipeline programs
Abstract:
We used ground surveys to identify breeding habitat for Whimbrel (Numenius phaeopus) in the outer Mackenzie Delta, Northwest Territories, and to test the value of high-resolution IKONOS imagery for mapping additional breeding habitat in the Delta. During ground surveys, we found Whimbrel nests (n = 28) in extensive areas of wet-sedge low-centered polygon (LCP) habitat on two islands in the Delta (Taglu and Fish islands) in 2006 and 2007. Supervised classification using spectral analysis of IKONOS imagery successfully identified additional areas of wet-sedge habitat in the region. However, ground surveys to test this classification found that many areas of wet-sedge habitat had dense shrubs, no standing water, and/or lacked polygon structure, and did not support breeding Whimbrel. Visual examination of the IKONOS imagery was necessary to determine which areas exhibited LCP structure. Much lower densities of nesting Whimbrel were also found in upland habitats near wetlands. We used habitat maps developed from a combination of methods to perform scenario analyses estimating the potential effects of the Mackenzie Gas Project on Whimbrel habitat. Assuming effectively complete habitat loss within 20 m, 50 m, or 250 m of any infrastructure or pipeline, the currently proposed pipeline development would result in loss of 8%, 12%, or 30% of existing Whimbrel habitat, respectively. If subsidence were to occur, most Whimbrel habitat could become unsuitable. If the facility is developed, follow-up surveys will be required to test these models.
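The buffer-based scenario analysis above can be sketched numerically. The grid, cell size, and pipeline route below are hypothetical toy values, not data from the study:

```python
# Toy sketch of the scenario analysis: assume complete habitat loss within a
# given distance of a pipeline route, modelled on a coarse grid of habitat
# cells. Cell size (10 m) and all coordinates are illustrative assumptions.

def fraction_lost(habitat_cells, route_cells, buffer_m, cell_m=10):
    """Fraction of habitat cells within buffer_m metres of any route cell."""
    lost = 0
    for hx, hy in habitat_cells:
        if any(((hx - rx) ** 2 + (hy - ry) ** 2) ** 0.5 * cell_m <= buffer_m
               for rx, ry in route_cells):
            lost += 1
    return lost / len(habitat_cells)

# Hypothetical example: a habitat strip alongside a straight pipeline.
habitat = [(x, y) for x in range(20) for y in range(1, 6)]
route = [(x, 0) for x in range(20)]

for d in (20, 50):
    print(f"buffer {d} m: {fraction_lost(habitat, route, d):.0%} of habitat lost")
```

Repeating this with the mapped habitat polygons and each buffer distance yields per-scenario loss percentages of the kind reported above.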
Abstract:
Stable isotope labeling combined with MS is a powerful method for measuring relative protein abundances, for instance, by differential metabolic labeling of some or all amino acids with 14N and 15N in cell culture or hydroponic media. These and most other types of quantitative proteomics experiments using high-throughput technologies, such as LC-MS/MS, generate large amounts of raw MS data. This data needs to be processed efficiently and automatically, from the mass spectrometer to statistically evaluated protein identifications and abundance ratios. This paper describes in detail an approach to the automated analysis of uniformly 14N/15N-labeled proteins using MASCOT peptide identification in conjunction with the Trans-Proteomic Pipeline (TPP) and a few scripts to integrate the analysis workflow. Two large proteomic datasets from uniformly labeled Arabidopsis thaliana were used to illustrate the analysis pipeline. The pipeline can be fully automated and uses only common or freely available software.
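As a loose illustration of the kind of quantitation such a workflow automates (this is not the TPP or MASCOT implementation; the intensity values and the median-aggregation choice are assumptions), a protein-level abundance ratio can be taken as the median of per-peptide log2 light/heavy intensity ratios:

```python
# Illustrative sketch only: a protein's 14N/15N relative abundance estimated
# as the median of per-peptide log2(light/heavy) peak-intensity ratios.
# All intensity values below are made-up example numbers.
import math
from statistics import median

def protein_log2_ratio(peptide_pairs):
    """peptide_pairs: list of (light_intensity, heavy_intensity) tuples."""
    return median(math.log2(light / heavy) for light, heavy in peptide_pairs)

pairs = [(2000, 1000), (4100, 2000), (950, 500)]
print(round(protein_log2_ratio(pairs), 3))  # ~1.0, i.e. roughly 2:1 light:heavy
```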
Abstract:
This paper is concerned with the uniformization of a system of affine recurrence equations. This transformation is used in the design (or compilation) of highly parallel embedded systems (VLSI systolic arrays, signal processing filters, etc.). In this paper, we present and implement an automatic system to achieve uniformization of systems of affine recurrence equations. We unify the results from many earlier papers, develop some theoretical extensions, and then propose effective uniformization algorithms. Our results can be used in any high-level synthesis tool based on polyhedral representation of nested loop computations.
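As a toy illustration of what uniformization means (not the paper's algorithm), the affine "broadcast" dependence of a matrix-vector recurrence on x[j] can be replaced by a uniform, distance-one dependence through an added pipeline variable:

```python
# Toy illustration of uniformization (not the paper's method): in the
# recurrence Y[i] = sum_j A[i][j] * x[j], the affine dependence on x[j]
# (the same x[j] is read at every i) is replaced by a uniform dependence
# X[i][j] = X[i-1][j] that pipelines x step by step through the array.

def matvec_uniform(A, x):
    n, m = len(A), len(x)
    X = [[0] * m for _ in range(n)]   # pipeline variable carrying x values
    Y = [0] * n
    for i in range(n):
        for j in range(m):
            # Only uniform dependences remain: X[i-1][j] (distance 1) and Y[i].
            X[i][j] = x[j] if i == 0 else X[i - 1][j]
            Y[i] += A[i][j] * X[i][j]
    return Y

print(matvec_uniform([[1, 2], [3, 4]], [5, 6]))  # matches the direct product: [17, 39]
```

The uniformized system computes the same values, but every dependence now has a constant (uniform) distance vector, which is what systolic-array synthesis requires.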
Abstract:
The orthodox approach for incentivising Demand Side Participation (DSP) programs is that utility losses from capital, installation and planning costs should be recovered under financial incentive mechanisms which aim to ensure that utilities have the right incentives to implement DSP activities. The recent national smart metering roll-out in the UK implies that this approach needs to be reassessed, since utilities will recover the capital costs associated with DSP technology through bills. This paper introduces a reward and penalty mechanism focusing on residential users. DSP planning costs are recovered through payments from those consumers who do not react to peak signals. Those consumers who do react are rewarded by paying lower bills. Because real-time incentives to residential consumers tend to fail due to the negligible amounts associated with net gains (and losses) of individual users, in the proposed mechanism the regulator determines benchmarks which are matched against responses to signals and caps the level of rewards/penalties to avoid market distortions. The paper presents an overview of existing financial incentive mechanisms for DSP; introduces the reward/penalty mechanism aimed at fostering DSP under the hypothesis of smart metering roll-out; considers the costs faced by utilities for DSP programs; assesses linear rate effects and value changes; introduces compensatory weights for those consumers who have physical or financial impediments; and shows findings based on simulation runs on three discrete levels of elasticity.
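A minimal sketch of such a benchmark-and-cap reward/penalty rule follows; the function name, rate, and cap values are hypothetical, not taken from the paper:

```python
# Hedged sketch of the reward/penalty idea: a consumer's peak-time demand
# reduction is compared with a regulator-set benchmark, and the bill
# adjustment is proportional to the shortfall or surplus but capped to
# limit market distortion. Rate and cap values are illustrative assumptions.

def bill_adjustment(reduction_kwh, benchmark_kwh, rate=0.10, cap=5.0):
    """Negative result = reward (lower bill); positive = penalty. Capped at +/- cap."""
    adjustment = (benchmark_kwh - reduction_kwh) * rate
    return max(-cap, min(cap, adjustment))

print(bill_adjustment(8, 5))    # exceeded the benchmark -> reward (negative)
print(bill_adjustment(0, 5))    # no response -> penalty
print(bill_adjustment(0, 100))  # large shortfall -> penalty capped at the limit
```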
Lost in flatlands: will the next generation of page layout programs give us back our sense of space?
Abstract:
Background: Expression microarrays are increasingly used to obtain large-scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and to analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log(2) units (6% of mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identifies an underlying structure which reflects some of the key biological variables which define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
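One of the steps described, deriving a lower expression threshold from non-biological variability, might be sketched as follows; this is not the paper's R function, and the negative-control-probe approach and the 2-SD cutoff are assumptions for illustration:

```python
# Illustrative sketch: call a gene "expressed" when its log2 signal exceeds
# a noise floor estimated as the mean of negative-control probe signals plus
# k standard deviations. All signal values below are made-up examples.
from statistics import mean, stdev

def expressed(log2_signals, control_signals, k=2.0):
    """Flag each gene signal that exceeds the control-derived threshold."""
    threshold = mean(control_signals) + k * stdev(control_signals)
    return [s > threshold for s in log2_signals]

controls = [1.0, 1.2, 0.8, 1.1, 0.9]   # hypothetical negative-control probes
genes = [0.9, 1.5, 6.3, 2.1]           # hypothetical gene signals
print(expressed(genes, controls))
```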
Abstract:
We examine whether and under what circumstances World Bank and International Monetary Fund (IMF) programs affect the likelihood of major government crises. We find that crises are, on average, more likely as a consequence of World Bank programs. We also find that governments face an increasing risk of entering a crisis when they remain under an IMF or World Bank arrangement once the economy's performance improves. The international financial institution's (IFI) scapegoat function thus seems to lose its value when the need for financial support is less urgent. While the probability of a crisis increases when a government turns to the IFIs, programs inherited by preceding governments do not affect the probability of a crisis. This is in line with two interpretations. First, the conclusion of IFI programs can signal the government's incompetence, and second, governments that inherit programs might be less likely to implement program conditions agreed to by their predecessors.
Abstract:
This article reports the results of a mixed-methods approach to investigating the association between globalisation and MATESOL programmes in UK universities. Qualitative and quantitative data collected from academic staff through eight emails, four interviews and 41 questionnaires indicate that the globalised context of higher education has affected these programmes in a number of ways, including an increasing interest in recruiting more international students and a growing awareness of the need for curriculum and content modifications. The analysis of the data suggests that although change has been an inherent characteristic of these MAs over the past decade, it has been implemented gradually and conservatively, often relying on a dialectic relationship between academic staff and universities’ policies. The results imply that factors other than globalisation have also been at work. Many of the participants contend that globalisation has not lowered the quality of these MAs or standards of good practice.