36 results for submarine pipeline
in CentAUR: Central Archive University of Reading - UK
Abstract:
Stable isotope labeling combined with MS is a powerful method for measuring relative protein abundances, for instance, by differential metabolic labeling of some or all amino acids with 14N and 15N in cell culture or hydroponic media. These and most other types of quantitative proteomics experiments using high-throughput technologies, such as LC-MS/MS, generate large amounts of raw MS data. This data needs to be processed efficiently and automatically, from the mass spectrometer to statistically evaluated protein identifications and abundance ratios. This paper describes in detail an approach to the automated analysis of uniformly 14N/15N-labeled proteins using MASCOT peptide identification in conjunction with the trans-proteomic pipeline (TPP) and a few scripts to integrate the analysis workflow. Two large proteomic datasets from uniformly labeled Arabidopsis thaliana were used to illustrate the analysis pipeline. The pipeline can be fully automated and uses only common or freely available software.
Abstract:
Submarine cliffs are typically crowded with sessile organisms, most of which are ultimately exported downwards. Here we report a 24 month study of benthic fauna dropping from such cliffs at sites of differing cliff angle and flow rates at Lough Hyne Marine Nature Reserve, Co. Cork, Ireland. The magnitude of 'fall out' material collected in capture nets was highly seasonal and composed of sessile and mobile elements. Sponges, ascidians, cnidarians, polychaetes, bryozoans and barnacles dominated the sessile forms. The remainder (mobile fauna) were scavengers and predators such as asteroid echinoderms, gastropod molluscs and malacostracan crustaceans. These were probably migrants targeting fallen sessile organisms. 'Fall out' material (including mobile forms) increased between May and August in both years. This increase in 'fall out' material was correlated with wrasse abundance at the cliffs (with a one month lag period). The activities of the wrasse on the cliffs (feeding, nest building and territory defence) were considered responsible for the majority of 'fall out' material, with natural mortality and the activity of other large mobile organisms (e.g. crustaceans) also being implicated. Current flow rate and cliff profile were important in determining the amount of 'fall out' material collected. In low current situations export of fallen material was vertical, while both horizontal and vertical export were associated with moderate to high current environments. Higher 'fall out' was associated with overhanging rather than vertical cliff surfaces. The 'fall out' of marine organisms in low current situations is likely to provide an important source of nutrition in close proximity to the cliff, in an otherwise impoverished soft sediment habitat. However, in high current areas material will be exported some distance from the source, with final settlement again occurring in soft sediment habitats (as current speed decreases).
Abstract:
The mobile component of a community inhabiting a submarine boulder scree/cliff was investigated at Lough Hyne, Ireland, at dawn, midday, dusk and night over a 1-week period. Line transects (50 m) were placed in the infralittoral (6 m) and circumlittoral (18 m) zones and also at the interface between these two zones (12 m). The dominant mobile fauna of this cliff consisted of echinoderms (6 species), crustaceans (10 species) and fish (23 species). A different component community was identified at each time/depth interval using Multi-Dimensional Scaling (MDS), even though both species diversity (Shannon-Wiener indices) and richness (number of species) remained constant. These changes in community composition provided indirect evidence for migration by these mobile organisms. However, little evidence was found for migration between different zones, with the exception of the several wrasse species. These species were observed to spend the daytime foraging in the deeper zone, but returned to the upper zone at night, presumably for protection from predators. For the majority of species, migration was considered to occur to cryptic habitats such as holes and crevices. The number of organisms declined during the night, although crustacean numbers peaked, while fish and echinoderms were most abundant during the day, possibly due to predator-prey interactions. This submarine community is in a state of flux, whereby community characteristics, including trophic and energetic relationships, varied over small temporal (daily) and spatial (m) scales.
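The diversity measures named above can be computed directly from species counts. The sketch below is purely illustrative (the species counts are hypothetical, not data from the study) and shows the standard Shannon-Wiener index alongside species richness:

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i),
    where p_i is the proportion of individuals in species i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def richness(counts):
    """Species richness: the number of species present."""
    return sum(1 for c in counts if c > 0)

# Hypothetical counts per species at one time/depth interval
dawn_6m = [12, 5, 5, 3, 1]
print(shannon_wiener(dawn_6m), richness(dawn_6m))
```

Comparing these two statistics across time/depth intervals, as the study does, can show constant diversity even while MDS reveals a changing community composition.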
Abstract:
Background: Expression microarrays are increasingly used to obtain large scale transcriptomic information on a wide range of biological samples. Nevertheless, there is still much debate on the best ways to process data, to design experiments and analyse the output. Furthermore, many of the more sophisticated mathematical approaches to data analysis in the literature remain inaccessible to much of the biological research community. In this study we examine ways of extracting and analysing a large data set obtained using the Agilent long oligonucleotide transcriptomics platform, applied to a set of human macrophage and dendritic cell samples. Results: We describe and validate a series of data extraction, transformation and normalisation steps which are implemented via a new R function. Analysis of replicate normalised reference data demonstrates that intra-array variability is small (only around 2% of the mean log signal), while inter-array variability from replicate array measurements has a standard deviation (SD) of around 0.5 log(2) units (6% of mean). The common practice of working with ratios of Cy5/Cy3 signal offers little further improvement in terms of reducing error. Comparison to expression data obtained using Arabidopsis samples demonstrates that the large number of genes in each sample showing a low level of transcription reflects the real complexity of the cellular transcriptome. Multidimensional scaling is used to show that the processed data identifies an underlying structure which reflects some of the key biological variables which define the data set. This structure is robust, allowing reliable comparison of samples collected over a number of years and collected by a variety of operators. Conclusions: This study outlines a robust and easily implemented pipeline for extracting, transforming, normalising and visualising transcriptomic array data from the Agilent expression platform. The analysis is used to obtain quantitative estimates of the SD arising from experimental (non-biological) intra- and inter-array variability, and for a lower threshold for determining whether an individual gene is expressed. The study provides a reliable basis for further, more extensive studies of the systems biology of eukaryotic cells.
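The abstract's R function is not reproduced here, but the core statistics it reports (log2-scale signals, replicate SD, and an expression threshold) can be sketched as follows. All values and the threshold rule are hypothetical illustrations, not the paper's actual parameters:

```python
import math

def log2_signals(raw):
    """Transform raw array signals onto the log2 scale."""
    return [math.log2(v) for v in raw]

def sd(values):
    """Sample standard deviation (n-1 denominator)."""
    m = sum(values) / len(values)
    return math.sqrt(sum((v - m) ** 2 for v in values) / (len(values) - 1))

# Hypothetical replicate raw signals for one gene across arrays
replicates = [410.0, 505.0, 380.0, 470.0]
log_vals = log2_signals(replicates)
interarray_sd = sd(log_vals)  # inter-array variability on the log2 scale

# Illustrative call rule: "expressed" if the mean log2 signal clears
# a background-derived threshold (assumed values, not the paper's)
background_mean, background_sd = 6.0, 0.5
expressed = sum(log_vals) / len(log_vals) > background_mean + 2 * background_sd
print(interarray_sd, expressed)
```

Working on the log2 scale makes the SD directly comparable to the ~0.5 log2 units reported for inter-array variability.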
Abstract:
We describe infinitely scalable pipeline machines with perfect parallelism, in the sense that every instruction of an inline program is executed, on successive data, on every clock tick. Programs with shared data effectively execute in less than a clock tick. We show that pipeline machines are faster than single- or multi-core von Neumann machines for sufficiently many program runs of a sufficiently time-consuming program. Our pipeline machines exploit the totality of transreal arithmetic and the known waiting time of statically compiled programs to deliver the interesting property that they need no hardware or software exception handling.
Abstract:
The influence of a large meridional submarine ridge on the decay of Agulhas rings is investigated with 1- and 2-layer setups of the isopycnic primitive-equation ocean model MICOM. In the single-layer case we show that the SSH decay of the ring is primarily governed by bottom friction and secondly by the radiation of Rossby waves. When a topographic ridge is present, the effect of the ridge on SSH decay and loss of tracer from the ring is negligible. However, the barotropic ring cannot pass the ridge due to energy and vorticity constraints. In the case of a two-layer ring the initial SSH decay is governed by a mixed barotropic–baroclinic instability of the ring. Again, radiation of barotropic Rossby waves is present. When the ring passes the topographic ridge, it shows a small but significant stagnation of SSH decay, agreeing with satellite altimetry observations. This is found to be due to a reduction of the growth rate of the m = 2 instability, to conversions of kinetic energy to the upper layer, and to a decrease in Rossby-wave radiation. The energy transfer is related to the fact that coherent structures in the lower layer cannot pass the steep ridge due to energy constraints. Furthermore, the loss of tracer from the ring through filamentation is less than for a ring moving over a flat bottom, related to a decrease in propagation speed of the ring. We conclude that ridges like the Walvis Ridge tend to stabilize a multi-layer ring and reduce its decay.
Abstract:
Deposits of coral-bearing, marine shell conglomerate exposed at elevations higher than 20 m above present-day mean sea level (MSL) in Bermuda and the Bahamas have previously been interpreted as relict intertidal deposits formed during marine isotope stage (MIS) 11, ca. 360-420 ka before present. On the strength of this evidence, a sea level highstand more than 20 m higher than present-day MSL was inferred for the MIS 11 interglacial, despite a lack of clear supporting evidence in the oxygen-isotope records of deep-sea sediment cores. We have critically re-examined the elevated marine deposits in Bermuda, and find their geological setting, sedimentary relations, and microfaunal assemblages to be inconsistent with intertidal deposition over an extended period. Rather, these deposits, which comprise a poorly sorted mixture of reef, lagoon and shoreline sediments, appear to have been carried tens of meters inside karst caves, presumably by large waves, at some time earlier than ca. 310-360 ka before present (MIS 9-11). We hypothesize that these deposits are the result of a large tsunami during the mid-Pleistocene, in which Bermuda was impacted by a wave set that carried sediments from the surrounding reef platform and nearshore waters over the eolianite atoll. Likely causes for such a megatsunami are the flank collapse of an Atlantic island volcano, such as the roughly synchronous Julan or Orotava submarine landslides in the Canary Islands, or a giant submarine landslide on the Atlantic continental margin.
Abstract:
The authors propose a bit-serial pipeline used to perform the genetic operators in a hardware genetic algorithm. The bit-serial nature of the dataflow allows the operators to be pipelined, resulting in an architecture which is area efficient, easily scaled and independent of the lengths of the chromosomes. An FPGA implementation of the device achieves a throughput of >25 million genes per second.
Abstract:
The Konstanz Information Miner is a modular environment which enables easy visual assembly and interactive execution of a data pipeline. It is designed as a teaching, research and collaboration platform, which enables easy integration of new algorithms, data manipulation or visualization methods as new modules or nodes. In this paper we describe some of the design aspects of the underlying architecture and briefly sketch how new nodes can be incorporated.
Abstract:
The paper presents a design for a hardware genetic algorithm which uses a pipeline of systolic arrays. These arrays have been designed using systolic synthesis techniques which involve expressing the algorithm as a set of uniform recurrence relations. The final design divorces the fitness function evaluation from the hardware and can process chromosomes of different lengths, giving the design a generic quality. The paper demonstrates the design methodology by progressively re-writing a simple genetic algorithm, expressed in C code, into a form from which systolic structures can be deduced. This paper extends previous work by introducing a simplification to a previous systolic design for the genetic algorithm. The simplification results in the removal of 2N^2 + 4N cells and reduces the time complexity by 3N + 1 cycles.
Abstract:
We have designed a highly parallel design for a simple genetic algorithm using a pipeline of systolic arrays. The systolic design provides high throughput and unidirectional pipelining by exploiting the implicit parallelism in the genetic operators. The design is significant because, unlike other hardware genetic algorithms, it is independent of both the fitness function and the particular chromosome length used in a problem. We have designed and simulated a version of the mutation array using Xilinx FPGA tools to investigate the feasibility of hardware implementation. A simple 5-chromosome mutation array occupies 195 CLBs and is capable of performing more than one million mutations per second.
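The mutation operator these hardware designs pipeline is, at its core, an independent bit-flip over each chromosome. The software sketch below illustrates that operator only (it is not a model of the systolic or FPGA architecture; the population, chromosome length, and mutation rate are hypothetical):

```python
import random

def mutate_bitserial(chromosome, rate, rng=random.Random(0)):
    """Flip each bit independently with probability `rate`,
    visiting the chromosome one bit at a time (bit-serial order)."""
    return [bit ^ 1 if rng.random() < rate else bit for bit in chromosome]

# Hypothetical population: 5 chromosomes of 5 bits each
population = [[0, 1, 1, 0, 1] for _ in range(5)]
mutated = [mutate_bitserial(c, rate=0.1) for c in population]
```

Because each bit is processed independently, the loop body maps naturally onto a pipelined hardware cell that handles one bit per clock tick.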
Abstract:
Hydroponic isotope labelling of entire plants (HILEP) is a cost-effective method enabling metabolic labelling of whole and mature plants with a stable isotope such as N-15. By utilising hydroponic media that contain N-15 inorganic salts as the sole nitrogen source, near to 100% N-15-labelling of proteins can be achieved. In this study, it is shown that HILEP, in combination with mass spectrometry, is suitable for relative protein quantitation of seven-week-old Arabidopsis plants submitted to oxidative stress. Protein extracts from pooled N-14- and N-15-hydroponically grown plants were fractionated by SDS-PAGE, digested and analysed by liquid chromatography electrospray ionisation tandem mass spectrometry (LC-ESI-MS/MS). Proteins were identified and the spectra of N-14/N-15 peptide pairs were extracted using their m/z, chromatographic retention time, isotopic distributions, and the m/z difference between the N-14 and N-15 peptides. Relative amounts were calculated as the ratio of the sum of the peak areas of the two distinct N-14 and N-15 peptide isotope envelopes. Using Mascot and the open source trans-proteomic pipeline (TPP), the data processing was automated for global proteome quantitation down to the isoform level by extracting isoform-specific peptides. With this combination of metabolic labelling and mass spectrometry it was possible to show differential protein expression in the apoplast of plants submitted to oxidative stress. Moreover, it was possible to discriminate between differentially expressed isoforms belonging to the same protein family, such as isoforms of xylanases and pathogen-related glucanases (PR 2).
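The ratio calculation described above (sum of peak areas of the light envelope over sum of peak areas of the heavy envelope) is simple to state in code. The peak areas below are hypothetical values, not data from the study:

```python
def relative_abundance(light_envelope, heavy_envelope):
    """Relative amount of a peptide as the ratio of the summed peak
    areas of the N-14 (light) and N-15 (heavy) isotope envelopes."""
    return sum(light_envelope) / sum(heavy_envelope)

# Hypothetical extracted peak areas for one N-14/N-15 peptide pair
n14_areas = [1200.0, 2400.0, 1500.0, 600.0]
n15_areas = [900.0, 1800.0, 1100.0, 450.0]
ratio = relative_abundance(n14_areas, n15_areas)
print(ratio)
```

Summing over the whole isotope envelope, rather than using a single peak, makes the ratio robust to the broader isotopic distribution of the N-15-labelled peptide.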