129 results for Zero sequence components


Relevance: 20.00%

Abstract:

High-throughput DNA sequencing (HTS) instruments today are capable of generating millions of sequencing reads in a short period of time, and this represents a serious challenge to current bioinformatics pipelines, which must process such an enormous amount of data in a fast and economical fashion. Modern graphics cards are powerful processing units that consist of hundreds of scalar processors running in parallel to handle the rendering of high-definition graphics in real time. It is this computational capability that we propose to harness in order to accelerate some of the time-consuming steps in analyzing data generated by HTS instruments. We have developed BarraCUDA, a novel sequence mapping software package that utilizes the parallelism of NVIDIA CUDA graphics cards to map sequencing reads to particular locations on a reference genome. While delivering a mapping fidelity similar to that of other mainstream programs, BarraCUDA is an order of magnitude faster in mapping throughput than its CPU counterparts. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the mapping throughput. BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the mapping of millions of sequencing reads generated by HTS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is currently available at http://seqbarracuda.sf.net
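As an illustrative aside, the data-parallel idea behind GPU read mapping can be sketched in plain Python: every read is mapped independently against a shared index, so thousands of reads can be processed concurrently. The reference string, reads, and seed length below are invented for the example; BarraCUDA itself is CUDA C working over a BWT index, not a k-mer hash.

```python
from multiprocessing.dummy import Pool  # thread pool stands in for GPU threads

def build_index(reference, k=4):
    """Index every k-mer of the reference by its start position."""
    index = {}
    for i in range(len(reference) - k + 1):
        index.setdefault(reference[i:i + k], []).append(i)
    return index

def map_read(args):
    """Report reference positions where the whole read matches exactly."""
    read, reference, index, k = args
    hits = []
    for pos in index.get(read[:k], []):  # seed with the read's first k-mer
        if reference[pos:pos + len(read)] == read:
            hits.append(pos)
    return hits

reference = "ACGTACGTTTACGGACGTAC"
reads = ["ACGTAC", "TTACGG", "GGGGGG"]
index = build_index(reference)
with Pool() as pool:  # each read is an independent task, as on a GPU
    results = pool.map(map_read, [(r, reference, index, 4) for r in reads])
print(results)
```

Because no read depends on any other, the same loop body maps directly onto one GPU thread per read, which is where the throughput gain comes from.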

Relevance: 20.00%

Abstract:

The increasing pressure from material availability, energy prices, and emerging environmental legislation is leading manufacturers to adopt solutions to reduce their material and energy consumption as well as their carbon footprint, thereby becoming more sustainable. Ultimately, manufacturers could potentially become zero carbon by having zero net energy demand and zero waste across the supply chain. The literature on zero carbon manufacturing and the technologies that underpin it is growing, but there is little available on how a manufacturer undertakes the transition. Additionally, work in this area is fragmented and clustered around technologies rather than around the processes that link the technologies together. There is a need to better understand material, energy, and waste process flows in a manufacturing facility from a holistic viewpoint. With knowledge of the potential flows, design methodologies can be developed to enable the creation of zero carbon manufacturing facilities. This paper explores the challenges faced when attempting to design a zero carbon manufacturing facility. A broad scope is adopted, from legislation to technology and from low waste to consuming waste. A generic material, energy, and waste flow model is developed and presented to show the material, energy, and waste inputs and outputs for the manufacturing system and the supporting facility and, importantly, how they can potentially interact. Finally, the application of the flow model in industrial settings is demonstrated in order to select appropriate technologies and configure them in an integrated way. © 2009 IMechE.
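A minimal numerical sketch of the kind of balance such a flow model formalizes: internally recovered waste offsets both the external input demand and the net waste stream. All stream names and figures here are invented for illustration; the paper's generic model covers many more interacting flows.

```python
def net_demand(process_input, waste_generated, waste_recovered):
    """Net external demand and net waste after internal waste recovery.

    A facility approaches 'zero waste' as waste_recovered -> waste_generated,
    and 'zero net demand' as recovered streams displace virgin input.
    """
    return process_input - waste_recovered, waste_generated - waste_recovered

# Invented example: 100 kWh of process energy, 30 kWh rejected as
# recoverable heat, of which 20 kWh is captured and fed back to the facility.
energy_in, waste_out = net_demand(100.0, 30.0, 20.0)
print(energy_in, waste_out)  # 80.0 10.0
```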

Relevance: 20.00%

Abstract:

The use of mixture-model techniques for motion estimation and image sequence segmentation was discussed. Issues such as the modeling of occlusion and uncovering, determining the relative depth of objects in a scene, and estimating the number of objects in a scene were also investigated. The segmentation algorithm was found to be computationally demanding, but the computational requirements were reduced when the motion parameters and the segmentation of the frame were initialized. The method provided a stable description, in which the addition and removal of objects from the description corresponded to the entry and exit of objects from the scene.
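A toy analogue of mixture-model segmentation: pixels carry a scalar motion value, and EM fits a two-component Gaussian mixture so that each pixel is soft-assigned to one moving object. The data, initialization, and fixed variance are invented for the example; the method described above operates on 2-D motion fields and additionally models occlusion.

```python
import math

def em_two_gaussians(xs, mu, sigma=1.0, iters=25):
    """Fit the means of a 2-component Gaussian mixture by EM (fixed sigma)."""
    mu = list(mu)
    for _ in range(iters):
        # E-step: responsibility of component 0 for each motion value
        r0 = []
        for x in xs:
            p0 = math.exp(-(x - mu[0]) ** 2 / (2 * sigma ** 2))
            p1 = math.exp(-(x - mu[1]) ** 2 / (2 * sigma ** 2))
            r0.append(p0 / (p0 + p1))
        # M-step: responsibility-weighted means
        w0 = sum(r0)
        mu[0] = sum(r * x for r, x in zip(r0, xs)) / w0
        mu[1] = sum((1 - r) * x for r, x in zip(r0, xs)) / (len(xs) - w0)
    return mu

# Two "objects": one nearly static, one moving ~5 px/frame (invented data)
motion = [0.0, 0.1, -0.1, 5.0, 5.2, 4.8]
mu = em_two_gaussians(motion, mu=(0.0, 4.0))
print(mu)  # means converge near 0.0 and 5.0
```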

Relevance: 20.00%

Abstract:

BACKGROUND: With the maturation of next-generation DNA sequencing (NGS) technologies, the throughput of DNA sequencing reads has soared to over 600 gigabases from a single instrument run. General-purpose computing on graphics processing units (GPGPU) extracts the computing power from hundreds of parallel stream processors within graphics processing cores and provides a cost-effective and energy-efficient alternative to traditional high-performance computing (HPC) clusters. In this article, we describe the implementation of BarraCUDA, a GPGPU sequence alignment software package based on BWA, to accelerate the alignment of sequencing reads generated by these instruments to a reference DNA sequence. FINDINGS: Using the NVIDIA Compute Unified Device Architecture (CUDA) software development environment, we ported the most computationally intensive alignment component of BWA to the GPU to take advantage of its massive parallelism. As a result, BarraCUDA offers an order-of-magnitude performance boost in alignment throughput compared to a CPU core while delivering the same level of alignment fidelity. The software is also capable of supporting multiple CUDA devices in parallel to further accelerate the alignment throughput. CONCLUSIONS: BarraCUDA is designed to take advantage of the parallelism of GPUs to accelerate the alignment of millions of sequencing reads generated by NGS instruments. By doing this, we could, at least in part, streamline the current bioinformatics pipeline so that the wider scientific community can benefit from the sequencing technology. BarraCUDA is currently available from http://seqbarracuda.sf.net.
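As a stand-in for the ported alignment component, this sketch shows the kind of per-read scoring kernel that is embarrassingly parallel across reads: a classic dynamic-programming edit distance between a read and a reference window. The sequences are invented examples; BWA and BarraCUDA actually perform BWT-based inexact search rather than full DP against each window.

```python
def edit_distance(read, ref_window):
    """Dynamic-programming edit distance between a read and a reference
    window -- the kind of inner work each GPU thread repeats per read."""
    m, n = len(read), len(ref_window)
    prev = list(range(n + 1))          # DP row for the empty-read prefix
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if read[i - 1] == ref_window[j - 1] else 1
            cur[j] = min(prev[j] + 1,        # deletion in read
                         cur[j - 1] + 1,     # insertion in read
                         prev[j - 1] + cost) # match / mismatch
        prev = cur
    return prev[n]

print(edit_distance("ACGT", "ACGT"))   # 0: exact match
print(edit_distance("ACGT", "AGGT"))   # 1: one mismatch
```

Since each read's score depends only on that read and its window, launching one such computation per GPU thread gives the data parallelism the abstract describes.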

Relevance: 20.00%

Abstract:

Reusing steel and aluminum components would reduce the need for new production, possibly creating significant savings in carbon emissions. Currently, there is no clearly defined set of strategies or barriers to enable assessment of appropriate component reuse; neither is it possible to predict future levels of reuse. This work presents a global assessment of the potential for reusing steel and aluminum components. A combination of top-down and bottom-up analyses is used to allocate the final destinations of current global steel and aluminum production to product types. A substantial catalogue has been compiled for these products characterizing key features of steel and aluminum components including design specifications, requirements in use, and current reuse patterns. To estimate the fraction of end-of-life metal components that could be reused for each product, the catalogue formed the basis of a set of semistructured interviews with industrial experts. The results suggest that approximately 30% of steel and aluminum used in current products could be reused. Barriers against reuse are examined, prompting recommendations for redesign that would facilitate future reuse.
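The catalogue-weighted estimate behind a figure like the ~30% can be sketched as a mass-weighted average over product categories. The categories, tonnages, and reusable fractions below are invented placeholders; the study derives its values from top-down/bottom-up allocation and expert interviews.

```python
# (product category, metal mass in Mt, fraction judged reusable) -- invented
catalogue = [
    ("structural steel sections", 60.0, 0.45),
    ("vehicle body panels",       40.0, 0.10),
    ("aluminium window frames",   10.0, 0.35),
]

total = sum(mass for _, mass, _ in catalogue)
reusable = sum(mass * frac for _, mass, frac in catalogue)
print(round(reusable / total, 3))  # mass-weighted reusable fraction
```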

Relevance: 20.00%

Abstract:

In this article we call for a new approach to patient safety improvement, one based on the emerging field of evidence-based healthcare risk management (EBHRM). We explore EBHRM in the broader context of the evidence-based healthcare movement, assess the benefits and challenges that might arise in adopting an evidence-based approach, and make recommendations for meeting those challenges and realizing the benefits of a more scientific approach.

Relevance: 20.00%

Abstract:

We study unsupervised learning in a probabilistic generative model for occlusion. The model uses two types of latent variables: one indicates which objects are present in the image, and the other how they are ordered in depth. This depth order then determines how the positions and appearances of the objects present, specified in the model parameters, combine to form the image. We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another. Exact maximum-likelihood learning is intractable. However, we show that tractable approximations to Expectation Maximization (EM) can be found if the training images each contain only a small number of objects on average. In numerical experiments it is shown that these approximations recover the correct set of object parameters. Experiments on a novel version of the bars test using colored bars, and experiments on more realistic data, show that the algorithm performs well in extracting the generating causes. Experiments based on the standard bars benchmark test for object learning show that the algorithm performs well in comparison to other recent component extraction approaches. The model and the learning algorithm thus connect research on occlusion with the research field of multiple-causes component extraction methods.
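The generative (rendering) step of such an occlusion model can be sketched on a 1-D "image": given which objects are present and their depth order, nearer objects overwrite farther ones pixel by pixel (a painter's algorithm). Object shapes and values here are invented; the model above additionally learns these parameters from unlabelled images with approximate EM.

```python
def render(canvas_len, objects, depth_order):
    """objects: {name: (start, length, value)}; depth_order lists names
    back-to-front, so later entries occlude earlier ones."""
    image = [0] * canvas_len          # 0 = background
    for name in depth_order:          # painter's algorithm
        start, length, value = objects[name]
        for i in range(start, start + length):
            image[i] = value
    return image

objs = {"a": (1, 4, 1), "b": (3, 4, 2)}
print(render(9, objs, ["a", "b"]))  # b in front: [0, 1, 1, 2, 2, 2, 2, 0, 0]
print(render(9, objs, ["b", "a"]))  # a in front: [0, 1, 1, 1, 1, 2, 2, 0, 0]
```

The two depth orders produce different images from the same object parameters, which is exactly why the latent depth-order variable is needed during learning.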

Relevance: 20.00%

Abstract:

Our ability to have an experience of another's pain is characteristic of empathy. Using functional imaging, we assessed brain activity while volunteers experienced a painful stimulus and compared it to that elicited when they observed a signal indicating that their loved one--present in the same room--was receiving a similar pain stimulus. Bilateral anterior insula (AI), rostral anterior cingulate cortex (ACC), brainstem, and cerebellum were activated when subjects received pain and also by a signal that a loved one experienced pain. AI and ACC activation correlated with individual empathy scores. Activity in the posterior insula/secondary somatosensory cortex, the sensorimotor cortex (SI/MI), and the caudal ACC was specific to receiving pain. Thus, a neural response in AI and rostral ACC, activated in common for "self" and "other" conditions, suggests that the neural substrate for empathic experience does not involve the entire "pain matrix." We conclude that only that part of the pain network associated with its affective qualities, but not its sensory qualities, mediates empathy.

Relevance: 20.00%

Abstract:

The DYN3D reactor dynamics nodal diffusion code was originally developed for the analysis of Light Water Reactors. In this paper, we demonstrate the feasibility of using DYN3D for modeling fast spectrum reactors. A homogenized cross-section data library was generated using the continuous-energy Monte Carlo code Serpent, which provides significant modeling flexibility compared with traditional deterministic lattice transport codes along with tolerable execution time. A representative sodium-cooled fast reactor core was modeled with the Serpent-DYN3D code sequence, and the results were compared with those produced by the ERANOS code and with a 3D full-core Monte Carlo solution. Very good agreement between the codes was observed for the core integral parameters and power distribution, suggesting that the DYN3D code, with a cross-section library generated using Serpent, can be reliably used for the analysis of fast reactors. © 2012 Elsevier Ltd. All rights reserved.
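The kind of code-to-code comparison reported here typically boils down to two metrics: a reactivity difference expressed in pcm and an RMS deviation of the normalized power distribution. The k-eff values and power maps below are invented numbers for illustration, not results from the paper.

```python
import math

def reactivity_diff_pcm(k_ref, k_test):
    """Difference in reactivity rho = 1 - 1/k, expressed in pcm (1e-5)."""
    return ((1 - 1 / k_test) - (1 - 1 / k_ref)) * 1e5

def rms_power_deviation(p_ref, p_test):
    """RMS deviation between two normalized assembly power distributions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_ref, p_test))
                     / len(p_ref))

# Invented comparison: reference Monte Carlo k-eff vs. nodal-diffusion k-eff
pcm = reactivity_diff_pcm(1.00500, 1.00620)
rms = rms_power_deviation([1.02, 0.98, 1.00], [1.01, 0.99, 1.00])
print(round(pcm, 1), round(rms, 4))
```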

Relevance: 20.00%

Abstract:

This paper presents results of a feasibility study aimed at developing a zero-transuranic-discharge fuel cycle based on the U-Th-TRU ternary cycle. The design objective is to find a fuel composition (mixture of thorium, enriched uranium, and recycled transuranic components) and fuel management strategy resulting in an equilibrium charge-discharge mass flow. In such a fuel cycle scheme, the quantity and isotopic vector of the transuranium (TRU) component is identical at the charge and discharge time points, thus allowing the whole amount of TRU at the end of the fuel irradiation period to be separated and reloaded into the following cycle. The TRU reprocessing activity losses are the only waste stream that will require permanent geological storage, virtually eliminating the long-term radiological waste of the commercial nuclear fuel cycle. A detailed three-dimensional full pressurized water reactor (PWR) core model was used to analyze the proposed fuel composition and management strategy. The results demonstrate the neutronic feasibility of the fuel cycle with zero-TRU discharge. The amounts of TRU and enriched uranium loaded reach equilibrium after about four TRU recycles. The reactivity coefficients were found to be within a range typical for a reference PWR core. The soluble boron worth is reduced by a factor of ∼2 from a typical PWR value. Nevertheless, the results indicate the feasibility of an 18-month fuel cycle design with an acceptable beginning-of-cycle soluble boron concentration even without application of burnable poisons.
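The charge/discharge equilibrium idea can be sketched as a scalar fixed-point iteration: each cycle's TRU loading equals the previous cycle's discharge, with some TRU destroyed in-core and some produced from the U/Th component. The survival and production factors below are invented and chosen only so that convergence occurs within a few cycles, echoing the "about four recycles" behavior; the paper tracks full isotopic vectors in a 3-D PWR core calculation.

```python
def cycles_to_equilibrium(survive=0.2, produced=0.9, tol=0.01, start=0.0):
    """Iterate TRU mass until charge and discharge agree within `tol`
    (relative). `survive` = fraction of loaded TRU left at discharge,
    `produced` = TRU bred from the U/Th component per cycle (invented)."""
    tru, n = start, 0
    while True:
        nxt = survive * tru + produced   # discharge of cycle n+1
        n += 1
        if abs(nxt - tru) < tol * nxt:   # relative convergence test
            return n, nxt
        tru = nxt

n, equilibrium = cycles_to_equilibrium()
# Fixed point of x = survive*x + produced is produced/(1 - survive) = 1.125
print(n, round(equilibrium, 3))
```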