Abstract:
We present the current status of the WASP project, a pair of wide angle photometric telescopes, individually called SuperWASP. SuperWASP-I is located in La Palma, and SuperWASP-II at Sutherland in South Africa. SW-I began operations in April 2004. SW-II is expected to be operational in early 2006. Each SuperWASP instrument consists of up to 8 individual cameras using ultra-wide field lenses backed by high-quality passively cooled CCDs. Each camera covers 7.8 x 7.8 sq degrees of sky, for nearly 500 sq degrees of total sky coverage. One of the current aims of the WASP project is the search for extra-solar planet transits with a focus on brighter stars in the magnitude range ~8 to 13. Additionally, WASP will search for optical transients, track Near-Earth Objects, and study many types of variable stars and extragalactic objects. The collaboration has developed a custom-built reduction pipeline that achieves better than 1 percent photometric precision. We discuss future goals, which include: nightly on-mountain reductions that could be used to automatically drive alerts via a small robotic telescope network, and possible roles of the WASP telescopes as providers in such a network. Additional technical details of the telescopes, data reduction, and consortium members and institutions can be found on the web site at: http://www.superwasp.org/. (c) 2006 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Abstract:
High-speed field-programmable gate array (FPGA) implementations of an adaptive least mean square (LMS) filter with application in an electronic support measures (ESM) digital receiver are presented. They employ "fine-grained" pipelining, i.e., pipelining within the processor, which results in increased output latency when used in the LMS recursive system. Therefore, the major challenge is to maintain a low-latency output whilst increasing the number of pipeline stages in the filter for higher speeds. Using the delayed LMS (DLMS) algorithm, fine-grained pipelined FPGA implementations using both the direct form (DF) and the transposed form (TF) are considered and compared. It is shown that the direct form LMS filter utilizes the FPGA resources more efficiently, thereby allowing a 120 MHz sampling rate.
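The delayed-LMS recursion described above can be sketched in software. This is a minimal behavioural model, not the FPGA datapath; the tap count, step size `mu` and update `delay` are illustrative assumptions:

```python
import random

def dlms_filter(x, d, num_taps=4, mu=0.01, delay=2):
    """Delayed LMS (DLMS): the coefficient update uses an error term that is
    `delay` samples old, which is what makes fine-grained pipelining of the
    adaptation loop possible in hardware."""
    w = [0.0] * num_taps
    e_hist, u_hist, out = [], [], []
    for n in range(len(x)):
        # current input regressor (most recent sample first)
        u = [x[n - k] if n - k >= 0 else 0.0 for k in range(num_taps)]
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y
        out.append(y)
        e_hist.append(e)
        u_hist.append(u)
        # delayed update: use the error and regressor from `delay` steps ago
        if n >= delay:
            e_d = e_hist[n - delay]
            u_d = u_hist[n - delay]
            w = [wi + mu * e_d * ui for wi, ui in zip(w, u_d)]
    return w, out
```

Driven with the output of a known FIR system, the taps converge to that system's coefficients despite the delayed update, illustrating why DLMS tolerates pipeline latency.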
Abstract:
The WASP project and the infrastructure supporting the SuperWASP Facility are described. As the instrument, reduction pipeline and archive system are now fully operational, we expect the system to have a major impact on the discovery of bright exo-planet candidates as well as in more general variable star projects.
Abstract:
In this paper a novel scalable public-key processor architecture is presented that supports modular exponentiation and Elliptic Curve Cryptography over both prime GF(p) and binary extension GF(2^m) fields. This is achieved by a high performance instruction set that provides a comprehensive range of integer and polynomial basis field arithmetic. The instruction set and associated hardware are generic in nature and do not specifically support any cryptographic algorithms or protocols. Firmware within the device is used to efficiently implement complex and data intensive arithmetic. A firmware library has been developed in order to demonstrate support for numerous exponentiation and ECC approaches, such as different coordinate systems and integer recoding methods. The processor has been developed as a high-performance asymmetric cryptography platform in the form of a scalable Verilog RTL core. Various features of the processor may be scaled, such as the pipeline width and local memory subsystem, in order to suit area, speed and power requirements. The processor is evaluated and compares favourably with previous work in terms of performance while offering an unparalleled degree of flexibility. © 2006 IEEE.
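The kind of firmware routine such a processor's instruction set would support can be illustrated with plain left-to-right square-and-multiply modular exponentiation. This is a generic textbook sketch, not the processor's actual firmware:

```python
def mod_exp(base, exp, mod):
    """Left-to-right binary (square-and-multiply) modular exponentiation,
    the core primitive that public-key firmware libraries build on."""
    result = 1
    base %= mod
    for bit in bin(exp)[2:]:               # scan exponent bits, MSB first
        result = (result * result) % mod   # always square
        if bit == '1':
            result = (result * base) % mod # conditionally multiply
    return result
```

In a real device the modular multiplications inside the loop are exactly the operations a scalable datapath accelerates; the recoding methods mentioned in the abstract replace this plain binary scan with windowed or signed-digit variants.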
Abstract:
Background
Interaction of a drug or chemical with a biological system can result in a gene-expression profile or signature characteristic of the event. Using a suitably robust algorithm these signatures can potentially be used to connect molecules with similar pharmacological or toxicological properties by gene expression profile. Lamb et al first proposed the Connectivity Map [Lamb et al (2006), Science 313, 1929–1935] to make successful connections among small molecules, genes, and diseases using genomic signatures.
Results
Here we have built on the principles of the Connectivity Map to present a simpler and more robust method for the construction of reference gene-expression profiles and for the connection scoring scheme, which importantly allows the evaluation of the statistical significance of all the connections observed. We tested the new method with two randomly generated gene signatures and three experimentally derived gene signatures (for HDAC inhibitors, estrogens, and immunosuppressive drugs, respectively). Our testing with this method indicates that it achieves a higher level of specificity and sensitivity and thus improves on the original method.
Conclusion
The method presented here not only offers more principled statistical procedures for testing connections, but more importantly it provides effective safeguard against false connections at the same time achieving increased sensitivity. With its robust performance, the method has potential use in the drug development pipeline for the early recognition of pharmacological and toxicological properties in chemicals and new drug candidates, and also more broadly in other 'omics sciences.
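The flavour of rank-based connection scoring with a permutation-derived significance estimate can be sketched as follows. This is a simplified stand-in for the method above, not a reimplementation of it; the scoring formula and signature sizes are illustrative assumptions:

```python
import random

def connection_score(ranked_genes, up_genes, down_genes):
    """Simple rank-based connection score: up-regulated query genes near the
    top of the reference ranking and down-regulated genes near the bottom
    give a high positive score (range roughly -1 to 1)."""
    n = len(ranked_genes)
    pos = {g: i for i, g in enumerate(ranked_genes)}
    def rank_score(g):
        # map rank to a symmetric score: top of list -> +1, bottom -> -1
        return 1.0 - 2.0 * pos[g] / (n - 1)
    s_up = sum(rank_score(g) for g in up_genes) / len(up_genes)
    s_down = sum(rank_score(g) for g in down_genes) / len(down_genes)
    return (s_up - s_down) / 2.0

def permutation_p_value(ranked_genes, up_genes, down_genes, trials=1000, seed=1):
    """Estimate significance by scoring random signatures of the same size,
    the general idea behind testing connections against chance."""
    rng = random.Random(seed)
    observed = connection_score(ranked_genes, up_genes, down_genes)
    k_up, k_down = len(up_genes), len(down_genes)
    hits = 0
    for _ in range(trials):
        perm = rng.sample(ranked_genes, k_up + k_down)
        if connection_score(ranked_genes, perm[:k_up], perm[k_up:]) >= observed:
            hits += 1
    return observed, (hits + 1) / (trials + 1)
```

A strongly concordant signature scores near 1 and random signatures rarely match it, which is what allows false connections to be screened out.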
Abstract:
Quantum-dot Cellular Automata (QCA) technology is a promising potential alternative to CMOS technology. To explore the characteristics of QCA and suitable design methodologies, digital circuit design approaches have been investigated. Due to the inherent wire delay in QCA, pipelined architectures appear to be a particularly suitable design technique. Also, because of the pipeline nature of QCA technology, it is not suitable for complicated control system design. Systolic arrays take advantage of pipelining, parallelism and simple local control. Therefore, an investigation into these architectures in QCA technology is provided in this paper. Two case studies (a matrix multiplier and a Galois field multiplier) are designed and analyzed based on both multilayer and coplanar crossings. The performance of these two types of interconnection is compared, and it is found that even though coplanar crossings are currently more practical, they tend to occupy a larger design area and incur slightly more delay. A general semiconductor QCA systolic array design methodology is also proposed. It is found that by applying a systolic array structure in QCA design, significant benefits can be achieved, particularly with large systolic arrays, even more so than when applied in CMOS-based technology.
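The output-stationary systolic timing that makes such arrays attractive can be modelled cycle by cycle. The sketch below uses the standard skew schedule for a square matrix multiplier; it is a behavioural model, not the paper's QCA layouts:

```python
def systolic_matmul(A, B):
    """Cycle-accurate sketch of an n x n output-stationary systolic array.
    A values stream in from the left and B values from the top, each row and
    column skewed by one cycle; every cell multiplies the values passing
    through it and accumulates locally, so only nearest-neighbour
    communication is needed."""
    n = len(A)
    acc = [[0] * n for _ in range(n)]
    total_cycles = 3 * n - 2          # time for the last term to reach cell (n-1, n-1)
    for t in range(total_cycles):
        for i in range(n):
            for j in range(n):
                k = t - i - j         # which product term reaches cell (i, j) at cycle t
                if 0 <= k < n:
                    acc[i][j] += A[i][k] * B[k][j]
    return acc
```

Cell (i, j) sees term k at cycle i + j + k, so the whole multiply completes in 3n - 2 cycles with purely local data movement, matching the deep-pipelining character of QCA wires.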
Abstract:
The POINT-AGAPE (Pixel-lensing Observations with the Isaac Newton Telescope-Andromeda Galaxy Amplified Pixels Experiment) survey is an optical search for gravitational microlensing events towards the Andromeda galaxy (M31). As well as microlensing, the survey is sensitive to many different classes of variable stars and transients. Here we describe the automated detection and selection pipeline used to identify M31 classical novae (CNe) and we present the resulting catalogue of 20 CN candidates observed over three seasons. CNe are observed both in the bulge region as well as over a wide area of the M31 disc. Nine of the CNe are caught during the final rise phase and all are well sampled in at least two colours. The excellent light-curve coverage has allowed us to detect and classify CNe over a wide range of speed class, from very fast to very slow. Among the light curves is a moderately fast CN exhibiting entry into a deep transition minimum, followed by its final decline. We have also observed in detail a very slow CN which faded by only 0.01 mag d^-1 over a 150-d period. We detect other interesting variable objects, including one of the longest period and most luminous Mira variables. The CN catalogue constitutes a uniquely well-sampled and objectively-selected data set with which to study the statistical properties of CNe in M31, such as the global nova rate, the reliability of novae as standard-candle distance indicators and the dependence of the nova population on stellar environment. The findings of this statistical study will be reported in a follow-up paper.
Abstract:
Speeding up sequential programs on multicores is a challenging problem that is in urgent need of a solution. Automatic parallelization of irregular pointer-intensive codes, exemplified by the SPECint codes, is a very hard problem. This paper shows that, with a helping hand, such auto-parallelization is possible and fruitful. This paper makes the following contributions: (i) A compiler framework for extracting pipeline-like parallelism from outer program loops is presented. (ii) Using a light-weight programming model based on annotations, the programmer helps the compiler to find thread-level parallelism. Each of the annotations specifies only a small piece of semantic information that compiler analysis misses, e.g. stating that a variable is dead at a certain program point. The annotations are designed such that correctness is easily verified. Furthermore, we present a tool for suggesting annotations to the programmer. (iii) The methodology is applied to autoparallelize several SPECint benchmarks. For the benchmark with most parallelism (hmmer), we obtain a scalable 7-fold speedup on an AMD quad-core dual processor. The annotations constitute a parallel programming model that relies extensively on a sequential program representation. Hereby, the complexity of debugging is not increased and it does not obscure the source code. These properties could prove valuable to increase the efficiency of parallel programming.
Abstract:
Massively parallel networks of highly efficient, high-performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach to map algorithm tasks to processors and a new low-complexity list scheduling technique to generate the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable a better performance/complexity balance than current approaches and produce results comparable to handcrafted implementations.
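A greedy list scheduler of the general kind described above can be sketched as follows. This is a drastically simplified illustration; the priority function and the absence of a hazard model are assumptions, not the paper's algorithm:

```python
def list_schedule(tasks, deps, durations, num_procs):
    """Minimal list scheduler: repeatedly take a ready task and place it on
    the processor that can start it earliest, interleaving independent tasks
    so that every datapath stays busy."""
    finish = {}                    # task -> finish time
    proc_free = [0] * num_procs    # earliest free time per processor
    scheduled = []
    remaining = list(tasks)
    while remaining:
        # ready = all predecessors already scheduled
        ready = [t for t in remaining if all(p in finish for p in deps.get(t, []))]
        ready.sort(key=lambda t: -durations[t])   # simple priority: longest first
        t = ready[0]
        earliest = max([finish[p] for p in deps.get(t, [])] + [0])
        p = min(range(num_procs), key=lambda i: max(proc_free[i], earliest))
        start = max(proc_free[p], earliest)
        finish[t] = start + durations[t]
        proc_free[p] = finish[t]
        scheduled.append((t, p, start))
        remaining.remove(t)
    return scheduled, max(finish.values())
```

Independent tasks fill the gaps that dependent tasks would otherwise leave, which is the software analogue of interleaving tasks to hide pipeline bubbles.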
Abstract:
The prevalence of multicore processors is bound to drive most kinds of software development towards parallel programming. To limit the difficulty and overhead of parallel software design and maintenance, it is crucial that parallel programming models allow an easy-to-understand, concise and dense representation of parallelism. Parallel programming models such as Cilk++ and Intel TBBs attempt to offer a better, higher-level abstraction for parallel programming than threads and locking synchronization. It is not straightforward, however, to express all patterns of parallelism in these models. Pipelines are an important parallel construct, although difficult to express in Cilk and TBBs in a straightforward way without a verbose restructuring of the code. In this paper we demonstrate that pipeline parallelism can be easily and concisely expressed in a Cilk-like language, which we extend with input, output and input/output dependency types on procedure arguments, enforced at runtime by the scheduler. We evaluate our implementation on real applications and show that our Cilk-like scheduler, extended to track and enforce these dependencies, has performance comparable to Cilk++.
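Pipeline parallelism itself is easy to demonstrate with plain threads and queues; the sketch below shows the construct the paper expresses more concisely with dependency types. This is ordinary Python, not the Cilk-like language:

```python
import threading, queue

def run_pipeline(data, stages):
    """Sketch of pipeline parallelism: each stage runs in its own thread and
    passes results downstream through a queue, so different items occupy
    different stages concurrently."""
    qs = [queue.Queue() for _ in range(len(stages) + 1)]
    DONE = object()    # sentinel that flushes the pipeline

    def worker(fn, q_in, q_out):
        while True:
            item = q_in.get()
            if item is DONE:
                q_out.put(DONE)
                return
            q_out.put(fn(item))

    threads = [threading.Thread(target=worker, args=(fn, qs[i], qs[i + 1]))
               for i, fn in enumerate(stages)]
    for t in threads:
        t.start()
    for item in data:
        qs[0].put(item)
    qs[0].put(DONE)
    results = []
    while True:
        item = qs[-1].get()
        if item is DONE:
            break
        results.append(item)
    for t in threads:
        t.join()
    return results
```

The verbosity of this queue-and-sentinel plumbing is exactly what motivates expressing the same structure declaratively with argument dependency types.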
Abstract:
Artifact removal from physiological signals is an essential component of the biosignal processing pipeline. The need for powerful and robust methods for this process has become particularly acute as healthcare technology deployment undergoes transition from the current hospital-centric setting toward a wearable and ubiquitous monitoring environment. Currently, determining the relative efficacy and performance of the multiple artifact removal techniques available on real world data can be problematic, due to incomplete information on the uncorrupted desired signal. The majority of techniques are presently evaluated using simulated data, and therefore, the quality of the conclusions is contingent on the fidelity of the model used. Consequently, in the biomedical signal processing community, there is considerable focus on the generation and validation of appropriate signal models for use in artifact suppression. Most approaches rely on mathematical models which capture suitable approximations to the signal dynamics or underlying physiology and, therefore, introduce some uncertainty to subsequent predictions of algorithm performance. This paper describes a more empirical approach to modeling the desired signal, demonstrated here for functional brain monitoring tasks, which allows for the procurement of a ground-truth signal that is highly correlated with the true desired signal contaminated by artifacts. The availability of this ground truth, together with the corrupted signal, can then aid in determining the efficacy of selected artifact removal techniques. A number of commonly implemented artifact removal techniques were evaluated using the described methodology to validate the proposed novel test platform. © 2012 IEEE.
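The evaluation style such a ground-truth platform enables can be sketched directly: corrupt a known clean signal, apply a candidate removal technique, and score both against the ground truth. The sinusoidal signal, spike artifact and median-filter remover below are toy assumptions, not the paper's data or methods:

```python
import math

def pearson(a, b):
    """Pearson correlation between two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def median_filter(x, width=5):
    """Toy artifact-removal stage: a running median suppresses short spikes."""
    half = width // 2
    out = []
    for i in range(len(x)):
        window = sorted(x[max(0, i - half): i + half + 1])
        out.append(window[len(window) // 2])
    return out

def evaluate_removal(clean, corrupted, remove):
    """Score an artifact-removal technique against a known ground truth:
    correlation with the clean signal before and after removal."""
    cleaned = remove(corrupted)
    return pearson(clean, corrupted), pearson(clean, cleaned)
```

With a ground truth in hand, the before/after correlations give a direct, model-free measure of how much of the desired signal the technique recovers.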
Abstract:
To infect their mammalian hosts, Fasciola hepatica larvae must penetrate and traverse the intestinal wall of the duodenum, move through the peritoneum, and penetrate the liver. After migrating through and feeding on the liver, causing extensive tissue damage, the parasites move to their final niche in the bile ducts where they mature and produce eggs. Here we integrated a transcriptomics and proteomics approach to profile Fasciola secretory proteins that are involved in host-pathogen interactions and to correlate changes in their expression with the migration of the parasite. Prediction of F. hepatica secretory proteins from 14,031 expressed sequence tags (ESTs) available from the Wellcome Trust Sanger Centre using the semiautomated EST2Secretome pipeline showed that the major components of adult parasite secretions are proteolytic enzymes including cathepsin L, cathepsin B, and asparaginyl endopeptidase cysteine proteases as well as novel trypsin-like serine proteases and carboxypeptidases. Proteomics analysis of proteins secreted by infective larvae, immature flukes, and adult F. hepatica showed that these proteases are developmentally regulated and correlate with the passage of the parasite through host tissues and its encounters with different host macromolecules. Proteases such as FhCL3 and cathepsin B have specific functions in larval activation and intestinal wall penetration, whereas FhCL1, FhCL2, and FhCL5 are required for liver penetration and tissue and blood feeding. Besides proteases, the parasites secrete an array of antioxidants that are also highly regulated according to their migration through host tissues. However, whereas the proteases of F. hepatica are secreted into the parasite gut via a classical endoplasmic reticulum/Golgi pathway, we speculate that the antioxidants, which all lack a signal sequence, are released via a non-classical trans-tegumental pathway.
Abstract:
A novel bit-level systolic array architecture for implementing bit-parallel IIR filter sections is presented. The authors have shown previously how the fundamental obstacle of pipeline latency in recursive structures can be overcome by the use of redundant arithmetic in combination with bit-level feedback. These ideas are extended by optimizing the degree of redundancy used in different parts of the circuit and combining redundant circuit techniques with those of conventional arithmetic. The resultant architecture offers significant improvements in hardware complexity and throughput rate.
Abstract:
Optimized circuits for implementing high-performance bit-parallel IIR filters are presented. Circuits constructed mainly from simple carry save adders and based on most-significant-bit (MSB) first arithmetic are described. Two methods are presented that result in systems which are 100% efficient, in that they are capable of sampling data every cycle. In the first approach the basic circuit is modified so that the level of pipelining used is compatible with the small, but fixed, latency associated with the computation in question. This is achieved through insertion of pipeline delays (half latches) on every second row of cells. This produces an area-efficient solution in which the throughput rate is determined by a critical path of 76 gate delays. A second approach combines the MSB first arithmetic methods with the scattered look-ahead methods. Important design issues are addressed, including wordlength truncation, overflow detection, and saturation.
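The carry-save principle behind these circuits is easy to show in software: three operands reduce to a (sum, carry) pair with purely bitwise logic, deferring carry propagation to a single final add. The word-level sketch below illustrates the arithmetic only; the real circuits operate at the bit and gate level:

```python
def carry_save_add(s, c, x):
    """One carry-save adder stage: reduce three operands (partial sum s,
    partial carry c, new operand x) to a new (sum, carry) pair using only
    bitwise operations -- no carry propagation, hence constant delay per
    stage regardless of word length."""
    sum_out = s ^ c ^ x
    carry_out = ((s & c) | (s & x) | (c & x)) << 1   # majority, shifted up one bit
    return sum_out, carry_out

def csa_sum(values):
    """Accumulate a list of integers through a chain of CSA stages, then
    resolve with a single conventional carry-propagating add at the end."""
    s, c = 0, 0
    for v in values:
        s, c = carry_save_add(s, c, v)
    return s + c
```

Because each stage's delay is independent of wordlength, chains of such adders pipeline naturally, which is why carry-save cells dominate high-throughput filter datapaths.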
Abstract:
This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine learning based algorithm - known as bag of words - gives a first estimate of action classification from video sequences, by performing an image feature analysis. These results are then passed to a common-sense reasoning system, which analyses, selects and corrects the initial estimation yielded by the machine learning algorithm. This second stage resorts to the knowledge implicit in the rationality that motivates human behaviour. Experiments are performed in realistic conditions, where poor recognition rates by the machine learning techniques are significantly improved by the second stage, in which common-sense knowledge and reasoning capabilities have been leveraged. This demonstrates the value of integrating common-sense capabilities into a computer vision pipeline. © 2012 Elsevier B.V. All rights reserved.
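The two-stage structure (a statistical first guess followed by rule-based correction) can be sketched generically. The feature names, class models and rules below are invented for illustration and are not taken from the paper:

```python
from collections import Counter

def bow_classify(features, class_models):
    """First stage: bag-of-words style scoring -- count feature occurrences
    and pick the class whose model best matches the weighted counts."""
    counts = Counter(features)
    def score(model):
        return sum(counts[f] * w for f, w in model.items())
    return max(class_models, key=lambda c: score(class_models[c]))

def commonsense_correct(label, context, rules):
    """Second stage: rule-based correction -- if a rule's predicate fires on
    the label and scene context, it overrides the first-stage label (a toy
    stand-in for a common-sense reasoning system)."""
    for predicate, corrected in rules:
        if predicate(label, context):
            return corrected
    return label
```

The second stage never touches the image features; it only re-interprets the first stage's output in the light of contextual knowledge, which is what lets it recover from poor initial recognition rates.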