901 results for Submarine Pipeline


Relevance: 20.00%

Publisher:

Abstract:

The most promising way to maintain reliable data transfer across the rapidly fluctuating channels used by next-generation multiple-input multiple-output (MIMO) communications schemes is to exploit run-time-variable modulation and antenna configurations. This demands that the baseband signal-processing architectures employed in the communications terminals provide low cost and high performance with run-time reconfigurability. We present a softcore-processor-based solution to this issue and show, for the first time, that such programmable architectures can enable real-time operation for cutting-edge standards such as 802.11n; furthermore, by exploiting deep processing pipelines and interleaved task execution, the cost and performance of these architectures are shown to be on a par with traditional dedicated-circuit solutions. We believe this to be the first programmable architecture to achieve this, and the combination of implementation efficiency and programmability makes this implementation style the most promising approach for hosting such dynamic architectures.

Abstract:

Cores from slopes east of the Great Barrier Reef (GBR) challenge traditional models of sedimentation on tropical mixed siliciclastic-carbonate margins. However, satisfactory explanations of sediment accumulation on this archetypal margin that include both hemipelagic and turbidite sedimentation remain elusive, because submarine canyons, and their role in delivering coarse-grained turbidite deposits, are poorly understood. Towards addressing this problem, we investigated the shelf and canyon system bordering the northern Ribbon Reefs and reconstructed the history of turbidite deposition since the Late Pleistocene. High-resolution bathymetric and seismic data show a large paleo-channel system that crosses the shelf before connecting with the canyons via the inter-reef passages between the Ribbon Reefs. High-resolution bathymetry of the canyon axis reveals a complex and active system of channels, sand waves and local submarine landslides. Multi-proxy examination of three cores from down the axis of the canyon system reveals 18 turbidites and debrites, interlayered with hemipelagic muds, that are derived from a mix of shallow and deep sources. Twenty radiocarbon ages indicate that siliciclastic-dominated and mixed turbidites occur only prior to 31 ka, during Marine Isotope Stage (MIS) 3, while carbonate-dominated turbidites are well established by 11 ka, in MIS 1, and persist until as recently as 1.2 ka. The apparent lack of siliciclastic-dominated turbidites, and the presence of only a few carbonate-dominated turbidites, during the MIS 2 lowstand are not consistent with generic models of margin sedimentation but might also reflect a gap in the turbidite record. These data suggest that turbidite sedimentation in the Ribbon Reef canyons probably reflects the complex relationship between the prolonged period (>25 ka) of MIS 3 millennial sea-level changes and local factors such as shelf and inter-reef passage depth, canyon morphology and differing sediment sources.
On this basis we predict that the spatial and temporal patterns of turbidite sedimentation could vary considerably along the length of the GBR margin.

Abstract:

Traditional static analysis fails to auto-parallelize programs with complex control and data flow. Furthermore, thread-level parallelism in such programs is often restricted to pipeline parallelism, which can be hard for a programmer to discover. In this paper we propose a tool that, based on profiling information, helps the programmer discover parallelism. The programmer hand-picks code transformations from among the proposed candidates, which are then applied by automatic code-transformation techniques.

This paper contributes to the literature by presenting a profiling tool for discovering thread-level parallelism. We track dependencies at the whole-data-structure level, rather than at the element or byte level, in order to limit profiling overhead. We perform a thorough analysis of the needs and costs of this technique. Furthermore, we present and validate the hypothesis that programs with complex control and data flow contain significant amounts of exploitable coarse-grain pipeline parallelism in their outer loops. This observation validates our approach to whole-data-structure dependencies. As state-of-the-art compilers focus on loops iterating over data-structure members, it also explains why our approach finds coarse-grain pipeline parallelism in cases that have remained out of reach for state-of-the-art compilers. In cases where traditional compilation techniques do find parallelism, our approach enables the discovery of higher degrees of parallelism, allowing a 40% speedup over traditional compilation techniques. Moreover, we demonstrate real speedups on multiple hardware platforms.
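The coarse-grain pipeline parallelism described above can be illustrated with a minimal sketch (not the paper's tool; the stage bodies and data below are hypothetical): one outer-loop stage produces a whole data structure per iteration, and a second stage consumes it concurrently through a queue, so dependencies are carried at the whole-structure level rather than per element.

```python
# Illustrative sketch of coarse-grain pipeline parallelism in an outer
# loop. Stage functions and inputs are invented for illustration.
import threading
import queue

def stage1(item):
    # Produce a whole data structure per outer-loop iteration.
    return [item * 2, item * 3]

def stage2(data):
    # Consume the structure produced by stage1.
    return sum(data)

def run_pipeline(items):
    q = queue.Queue()
    results = []

    def producer():
        for item in items:
            q.put(stage1(item))   # pipeline stage 1
        q.put(None)               # sentinel: no more work

    def consumer():
        while True:
            data = q.get()
            if data is None:
                break
            results.append(stage2(data))  # stage 2 runs concurrently

    t1 = threading.Thread(target=producer)
    t2 = threading.Thread(target=consumer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return results

print(run_pipeline([1, 2, 3]))  # [5, 10, 15]
```

Because the single FIFO queue preserves iteration order, the result is deterministic even though the two stages overlap in time.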

Abstract:

In this paper we describe the design of a parallel solution of the inhomogeneous Schrödinger equation, which arises in the construction of continuum orbitals in the R-matrix theory of atomic continuum processes. A prototype system is described which has been programmed in occam2 and implemented on a bidirectional pipeline of transputers. Some timing results for the prototype system are presented, and the development of a full production system is discussed.

Abstract:

Ubiquitous parallel computing aims to make parallel programming accessible to a wide variety of programming areas using deterministic and scale-free programming models built on a task abstraction. However, it remains hard to reconcile these attributes with pipeline parallelism, where the number of pipeline stages is typically hard-coded in the program and defines the degree of parallelism.

This paper introduces hyperqueues, a programming abstraction that enables the construction of deterministic and scale-free pipeline parallel programs. Hyperqueues extend the concept of Cilk++ hyperobjects to provide thread-local views on a shared data structure. While hyperobjects are organized around private local views, hyperqueues require shared concurrent views on the underlying data structure. We define the semantics of hyperqueues and describe their implementation in a work-stealing scheduler. We demonstrate scalable performance on pipeline-parallel PARSEC benchmarks and find that hyperqueues provide comparable or up to 30% better performance than POSIX threads and Intel's Threading Building Blocks. The latter are highly tuned to the number of available processing cores, while programs using hyperqueues are scale-free.
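Hyperqueues themselves are a Cilk++-style language abstraction; as a rough, hypothetical sketch of the scale-free property they target, the pipeline below keeps the worker count a runtime parameter instead of hard-coding the number of stages (function names and data are invented, and determinism is restored here by sorting, not by the hyperqueue mechanism):

```python
# Hypothetical sketch of a scale-free pipeline stage: the program text
# does not fix the degree of parallelism; n_workers is a runtime knob.
import threading
import queue

def process(x):
    return x * x  # invented pipeline-stage body

def scale_free_pipeline(items, n_workers):
    q = queue.Queue()
    out = []
    lock = threading.Lock()

    def worker():
        while True:
            x = q.get()
            if x is None:       # sentinel: shut this worker down
                break
            y = process(x)
            with lock:
                out.append(y)

    workers = [threading.Thread(target=worker) for _ in range(n_workers)]
    for w in workers:
        w.start()
    for item in items:
        q.put(item)
    for _ in workers:
        q.put(None)             # one sentinel per worker
    for w in workers:
        w.join()
    return sorted(out)          # restore a deterministic order

# Same program, different degrees of parallelism:
print(scale_free_pipeline(range(5), 1))  # [0, 1, 4, 9, 16]
print(scale_free_pipeline(range(5), 4))  # [0, 1, 4, 9, 16]
```

The point of the contrast: with POSIX threads or TBB the stage count is typically baked into the code, whereas here (and, per the abstract, with hyperqueues) the same program runs unchanged at any core count.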

Abstract:

Sedimentologic and AMS 14C age data are reported for calcareous hemipelagic mud samples taken from gravity cores collected at sites within, or adjacent to, five submarine landslides (Bribie Bowl, Coolangatta-2, Coolangatta-1, Cudgen and Byron) identified with multibeam bathymetry data on the Nerrang Plateau segment and surrounding canyons of eastern Australia's continental slope. Sediments are composed of mixtures of calcareous and terrigenous clay (10-20%), silt (50-65%) and sand (15-40%) and are generally uniform in appearance. Their carbonate contents vary between 17% and 22% by weight, while organic carbon contents are less than 10% by weight. Dating of conformably deposited material identified in ten of the twelve cores indicates a range of sediment accumulation rates between 0.017 m ka-1 and 0.2 m ka-1, consistent with previous estimates reported for this area. One slide-adjacent core and four within-landslide cores present depositional hiatus surfaces located at depths of 0.8 to 2.2 meters below the present-day seafloor, identified by a sharp colour-change boundary; discernible but small increases in sediment stiffness; and a slight increase in sediment bulk density of 0.1 g cm-3. Distinct gaps in AMS 14C age of at least 20 ka are recorded across these boundary surfaces. Examination of sub-bottom profiler records of transects through three of the within-slide core sites and their nearby landslide scarps, available for the Coolangatta-1 and Cudgen slides, indicates that: 1) the youngest identifiable sediment-layer reflectors upslope of these slides terminate on, and are truncated by, slide rupture surfaces; and 2) there is no obvious evidence in the sub-bottom profiles for a post-slide sediment layer draped over or otherwise burying slide ruptures or exposed slide detachment surfaces.
This suggests that both these submarine landslides are geologically recent, and that the hiatus surfaces identified in Coolangatta-1's and Cudgen's within-slide cores are either: a) erosional features that developed after the landslide occurred, in which case the hiatus-surface age provides a minimum age for landslide occurrence; or b) detachment surfaces from which slabs of near-surface sediment were removed during landsliding, in which case the post-hiatus sediment dates indicate approximately when landsliding occurred. In either case, it is reasonable to suggest that these two spatially adjacent slides occurred penecontemporaneously, approximately 20,000 years ago.
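Accumulation rates of the kind quoted above follow from simple interval arithmetic between dated horizons in a core; a minimal sketch, with invented depths and ages (not the cores' actual data):

```python
# Hypothetical arithmetic behind a sediment accumulation rate:
# rate = depth interval / age interval between two dated horizons.
# The depths and ages below are invented for illustration.
def accumulation_rate_m_per_ka(depth_top_m, depth_bottom_m,
                               age_top_ka, age_bottom_ka):
    """Mean accumulation rate (m/ka) between two dated horizons."""
    return (depth_bottom_m - depth_top_m) / (age_bottom_ka - age_top_ka)

# Two AMS 14C dates 1.0 m apart in the core and 20 ka apart in age:
print(accumulation_rate_m_per_ka(0.5, 1.5, 5.0, 25.0))  # 0.05
```

A value of 0.05 m ka-1 falls within the 0.017-0.2 m ka-1 range reported for these cores.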

Abstract:

The two families of fluorescent PET (photoinduced electron transfer) sensors (1-9) show that the effective proton density near the surface of several micelle membranes changes over 2-3 orders of magnitude as the microlocation of the sensor (with respect to the membrane) is altered via hydrophobic tuning.

Abstract:

Senior thesis written for Oceanography 445

Abstract:

We have developed an in-house pipeline for the processing and analysis of sequence data generated during Illumina technology-based metagenomic studies of the human gut microbiota. Each component of the pipeline has been selected following comparative analysis of available tools; however, the modular nature of the software facilitates replacement of any individual component with an alternative, should a better tool become available in due course. The pipeline consists of quality analysis and trimming followed by taxonomic filtering of sequence data, allowing reads associated with samples to be binned according to whether they represent human, prokaryotic (bacterial/archaeal), viral, parasite, fungal or plant DNA. Viral, parasite, fungal and plant DNA can be assigned to species level on a presence/absence basis, allowing, for example, identification of dietary intake of plant-based foodstuffs and their derivatives. Prokaryotic DNA is subject to taxonomic and functional analyses, with assignment to taxonomic hierarchies (kingdom, class, order, family, genus, species, strain/subspecies) and abundance determination. After de novo assembly of sequence reads, genes within samples are predicted and used to build a non-redundant catalogue of genes. From this catalogue, per-sample gene abundance can be determined after normalization of data based on gene length. Functional annotation of genes is achieved through mapping of gene clusters against KEGG proteins and InterProScan. The pipeline is undergoing validation using the human faecal metagenomic data of Qin et al. (2014, Nature 513, 59-64). Outputs from the pipeline allow development of tools for the integration of metagenomic and metabolomic data, moving metagenomic studies beyond determination of gene richness and representation towards microbial-metabolite mapping. There is scope to improve the outputs from viral, parasite, fungal and plant DNA analyses, depending on the depth of sequencing associated with samples.
The pipeline can easily be adapted for the analyses of environmental and non-human animal samples, and for use with data generated via non-Illumina sequencing platforms.
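The gene-length normalization step described above might be sketched as follows (a reads-per-kilobase-style calculation; the gene names, counts and function name are illustrative assumptions, not the pipeline's actual code):

```python
# Hypothetical sketch of per-sample gene abundance normalized by gene
# length: reads mapped per kilobase of gene. Data are invented.
def normalize_abundance(read_counts, gene_lengths_bp):
    """Return reads-per-kilobase abundance for each gene in a sample."""
    return {
        gene: read_counts[gene] / (gene_lengths_bp[gene] / 1000.0)
        for gene in read_counts
    }

counts = {"geneA": 200, "geneB": 50}        # reads mapped per gene
lengths = {"geneA": 2000, "geneB": 500}     # gene lengths in bp
print(normalize_abundance(counts, lengths))
# {'geneA': 100.0, 'geneB': 100.0}
```

Dividing by length makes the two genes comparable: geneA attracts four times the reads of geneB only because it is four times longer.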

Abstract:

The Keystone XL pipeline has a major role in transporting Canadian oil to the USA. The pipeline is intended to decrease the dependency of the American oil industry on other countries and to help limit external debt. The proposed pipeline requires the most suitable route, one that avoids damaging agricultural land and natural water resources such as the Ogallala Aquifer. Using Geographic Information System (GIS) techniques, the route suggested in this study achieved highly accurate results, supporting the use of least-cost analysis in similar future studies. The route analysis comprises several weighted overlay surfaces, each influenced by different criteria (slope, geology, population and land use). The resulting least-cost paths for each weighted overlay surface were compared with the originally proposed pipeline, and each was more effective than the proposed Keystone XL route.
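The least-cost-path step of the workflow above can be sketched under simple assumptions: a weighted overlay surface is treated as a cost grid, and Dijkstra's algorithm extracts the cheapest route (the grid values, the 4-connected neighbourhood and the pay-on-entry cost convention are illustrative choices, not the study's parameters):

```python
# Minimal least-cost-path sketch over a weighted overlay surface.
# Cell costs are invented; cost is paid on entering a cell.
import heapq

def least_cost_path(cost, start, goal):
    """Cheapest 4-connected route over a cost grid (Dijkstra)."""
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist[(r, c)]:
            continue  # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return list(reversed(path)), dist[goal]

# Weighted overlay surface: low values = favourable terrain.
grid = [[1, 9, 1],
        [1, 9, 1],
        [1, 1, 1]]
path, total = least_cost_path(grid, (0, 0), (0, 2))
print(total)  # 7: the route detours around the high-cost column
```

In a GIS workflow each criterion (slope, geology, population, land use) would be rasterized, weighted and summed into the `grid` before this search runs.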

Abstract:

Immunotherapy is defined as the treatment of disease by inducing, enhancing or suppressing an immune response, whereas preventive vaccination is intended to prevent the development of disease in healthy subjects. Most successful prophylactic vaccines rely on the induction of high titers of neutralizing antibodies, while therapeutic vaccination is generally thought to require the induction of robust T-cell-mediated immunity. The diverse array of immunotherapeutic and preventive agents, whether potential or already in use, all share one commonality: they stimulate the immune system. Hence, measuring vaccination-induced immune responses gives the earliest indication of vaccine take and of its immune-modulating effects.