893 results for Scalable Nanofabrication
Abstract:
BACKGROUND: Many analyses of microarray association studies involve permutation, bootstrap resampling and cross-validation, which are ideally formulated as embarrassingly parallel computing problems. Given that these analyses are computationally intensive, scalable approaches that can take advantage of multi-core processor systems need to be developed. RESULTS: We have developed a CUDA-based implementation, permGPU, that employs graphics processing units in microarray association studies. We illustrate the performance and applicability of permGPU within the context of permutation resampling for a number of test statistics. An extensive simulation study demonstrates a dramatic increase in performance when using permGPU on an NVIDIA GTX 280 card compared to an optimized C/C++ solution running on a conventional Linux server. CONCLUSIONS: permGPU is available as an open-source stand-alone application and as an extension package for the R statistical environment. It provides a dramatic increase in performance for permutation resampling analysis in the context of microarray association studies. The current version offers six test statistics for carrying out permutation resampling analyses for binary, quantitative and censored time-to-event traits.
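As a rough illustration of the embarrassingly parallel pattern described above, here is a minimal NumPy sketch of permutation resampling for a two-group statistic across many genes. The statistic, data sizes, and names are illustrative assumptions, not permGPU's code; in a GPU implementation each independent permutation (or gene) can be mapped to its own thread.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genes, n_samples, n_perms = 1_000, 40, 1_000     # illustrative sizes
expr = rng.normal(size=(n_genes, n_samples))       # expression matrix
labels = np.array([0] * 20 + [1] * 20)             # binary trait

def stat(e, y):
    # per-gene difference in group means; permGPU offers six statistics
    return e[:, y == 1].mean(axis=1) - e[:, y == 0].mean(axis=1)

observed = stat(expr, labels)
exceed = np.zeros(n_genes)
for _ in range(n_perms):                           # each iteration independent
    perm = rng.permutation(labels)                 # permute the trait labels
    exceed += np.abs(stat(expr, perm)) >= np.abs(observed)

p_values = (exceed + 1) / (n_perms + 1)            # permutation p-values
print(p_values.min())
```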
Abstract:
The computational detection of regulatory elements in DNA is a difficult but important problem impacting our progress in understanding the complex nature of eukaryotic gene regulation. Attempts to utilize cross-species conservation for this task have been hampered both by evolutionary changes of functional sites and poor performance of general-purpose alignment programs when applied to non-coding sequence. We describe a new and flexible framework for modeling binding site evolution in multiple related genomes, based on phylogenetic pair hidden Markov models which explicitly model the gain and loss of binding sites along a phylogeny. We demonstrate the value of this framework for both the alignment of regulatory regions and the inference of precise binding-site locations within those regions. As the underlying formalism is a stochastic, generative model, it can also be used to simulate the evolution of regulatory elements. Our implementation is scalable in terms of numbers of species and sequence lengths and can produce alignments and binding-site predictions with accuracy rivaling or exceeding current systems that specialize in only alignment or only binding-site prediction. We demonstrate the validity and power of various model components on extensive simulations of realistic sequence data and apply a specific model to study Drosophila enhancers in as many as ten related genomes and in the presence of gain and loss of binding sites. Different models and modeling assumptions can be easily specified, thus providing an invaluable tool for the exploration of biological hypotheses that can drive improvements in our understanding of the mechanisms and evolution of gene regulation.
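The gain-and-loss component of such models lends itself to simulation. Below is a toy sketch, not the paper's model, in which site presence evolves as a two-state continuous-time Markov chain along a small invented three-leaf tree; the rates, tree topology, and branch lengths are assumptions for illustration only.

```python
import random

GAIN, LOSS = 0.5, 1.0   # hypothetical gain/loss rates per unit branch length

def evolve(present, t):
    """Evolve site presence along a branch of length t (two-state CTMC)."""
    while True:
        wait = random.expovariate(LOSS if present else GAIN)
        if wait > t:
            return present
        present, t = not present, t - wait

# A toy three-leaf tree: root -> A, and root -> ancestor -> (B, C)
random.seed(1)
root = True                                   # site present at the root
a = evolve(root, 1.0)                         # branch lengths are invented
anc = evolve(root, 0.4)
b, c = evolve(anc, 0.6), evolve(anc, 0.6)
print({"A": a, "B": b, "C": c})
```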
Abstract:
An enterprise information system (EIS) is an integrated data-applications platform characterized by diverse, heterogeneous, and distributed data sources. For many enterprises, a number of business processes still depend heavily on static rule-based methods and extensive human expertise. Enterprises are faced with the need for optimizing operation scheduling, improving resource utilization, discovering useful knowledge, and making data-driven decisions.
This thesis research is focused on real-time optimization and knowledge discovery that addresses workflow optimization, resource allocation, as well as data-driven predictions of process-execution times, order fulfillment, and enterprise service-level performance. In contrast to prior work on data analytics techniques for enterprise performance optimization, the emphasis here is on realizing scalable and real-time enterprise intelligence based on a combination of heterogeneous system simulation, combinatorial optimization, machine-learning algorithms, and statistical methods.
On-demand digital-print service is representative of an enterprise requiring a powerful EIS. We use real-life data from Reischling Press, Inc. (RPI), a digital print service provider (PSP), to evaluate our optimization algorithms.
In order to handle the increase in volume and diversity of demands, we first present a high-performance, scalable, and real-time production scheduling algorithm for production automation based on an incremental genetic algorithm (IGA). The objective of this algorithm is to optimize the order dispatching sequence and balance resource utilization. Compared to prior work, this solution is scalable for a high volume of orders and it provides fast scheduling solutions for orders that require complex fulfillment procedures. Experimental results highlight its potential benefit in reducing production inefficiencies and enhancing the productivity of an enterprise.
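A minimal sketch of the incremental idea, under an invented single-machine fitness and toy order data: evolve a dispatch sequence with a genetic algorithm, then, when new orders arrive, seed the next population from the incumbent best solution rather than restarting from scratch.

```python
import random

def total_completion(seq, durations):
    # toy fitness: sum of order completion times on a single machine
    t = total = 0
    for o in seq:
        t += durations[o]
        total += t
    return total

def evolve(pop, durations, gens=200):
    for _ in range(gens):
        pop.sort(key=lambda s: total_completion(s, durations))
        survivors = pop[: len(pop) // 2]
        children = []
        for _ in survivors:
            child = random.choice(survivors)[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]      # swap mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda s: total_completion(s, durations))

random.seed(0)
durations = {o: random.randint(1, 9) for o in range(8)}
pop = [random.sample(list(durations), len(durations)) for _ in range(20)]
best = evolve(pop, durations)

# Incremental step: two new orders arrive; seed from the incumbent best
durations.update({8: 4, 9: 2})
seeded = [best + [8, 9]] + [random.sample(list(durations), 10) for _ in range(19)]
print(evolve(seeded, durations))
```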
We next discuss analysis and prediction of different attributes involved in the hierarchical components of an enterprise. We start from a study of the fundamental processes related to real-time prediction. Our process-execution time and process-status prediction models integrate statistical methods with machine-learning algorithms. In addition to improved prediction accuracy compared to stand-alone machine-learning algorithms, they also provide a probabilistic estimation of the predicted status. An order generally consists of multiple serial and parallel processes. We next introduce an order-fulfillment prediction model that combines the advantages of multiple classification models by incorporating flexible decision-integration mechanisms. Experimental results show that adopting due dates recommended by the model can significantly reduce an enterprise's late-delivery ratio. Finally, we investigate service-level attributes that reflect the overall performance of an enterprise. We analyze and decompose time-series data into different components according to their hierarchical periodic nature, perform correlation analysis,
and develop univariate prediction models for each component as well as multivariate models for correlated components. Predictions for the original time series are aggregated from the predictions of its components. In addition to a significant increase in mid-term prediction accuracy, this distributed modeling strategy also improves short-term time-series prediction accuracy.
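A minimal sketch of this decompose-predict-aggregate strategy follows; the moving-average decomposition, linear trend extrapolation, and all sizes are simple stand-ins for the univariate and multivariate models developed in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, period = 200, 7                                  # e.g. daily data, weekly cycle
t = np.arange(n)
series = 0.05 * t + np.sin(2 * np.pi * t / period) + 0.1 * rng.normal(size=n)

# Decompose: moving-average trend plus an average periodic profile
trend = np.convolve(series, np.ones(period) / period, mode="same")
seasonal = np.array([(series - trend)[i::period].mean() for i in range(period)])

# Predict each component separately, then aggregate the forecasts
h = 14                                              # forecast horizon
slope = np.polyfit(t[-50:], trend[-50:], 1)[0]      # linear trend extrapolation
trend_fc = trend[-1] + slope * np.arange(1, h + 1)
seasonal_fc = seasonal[(t[-1] + 1 + np.arange(h)) % period]
forecast = trend_fc + seasonal_fc                   # aggregated prediction
print(forecast[:5])
```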
In summary, this thesis research has led to a set of characterization, optimization, and prediction tools for an EIS to derive insightful knowledge from data and use it as guidance for production management. It is expected to provide solutions for enterprises to increase reconfigurability, automate more procedures, and obtain data-driven recommendations for effective decisions.
Abstract:
BACKGROUND/AIMS: The obesity epidemic has spread to young adults, and obesity is a significant risk factor for cardiovascular disease. The prominence and increasing functionality of mobile phones may provide an opportunity to deliver longitudinal and scalable weight management interventions in young adults. The aim of this article is to describe the design and development of the intervention tested in the Cell Phone Intervention for You study and to highlight the importance of the adaptive intervention design that made it possible. The Cell Phone Intervention for You study was a National Heart, Lung, and Blood Institute-sponsored, controlled, 24-month randomized clinical trial comparing two active interventions to a usual-care control group. Participants were 365 overweight or obese (body mass index ≥ 25 kg/m²) young adults. METHODS: Both active interventions were designed based on social cognitive theory and incorporated techniques for behavioral self-management and motivational enhancement. Initial intervention development occurred during a 1-year formative phase utilizing focus groups and iterative, participatory design. During the intervention testing, adaptive intervention design, where an intervention is updated or extended throughout a trial while assuring the delivery of exactly the same intervention to each cohort, was employed. The adaptive intervention design strategy distributed technical work and allowed introduction of novel components in phases intended to help promote and sustain participant engagement. Adaptive intervention design was made possible by exploiting the mobile phone's remote data capabilities, so that adoption of particular application components could be continuously monitored and components subsequently added or updated remotely. RESULTS: The cell phone intervention was delivered almost entirely via cell phone and was always present, proactive, and interactive, providing passive and active reminders, frequent opportunities for knowledge dissemination, and multiple tools for self-tracking and receiving tailored feedback. The intervention changed over 2 years to promote and sustain engagement. The personal coaching intervention, alternatively, was delivered primarily by trained coaches following a proven intervention, enhanced with a mobile application, but where all interactions with the technology were participant-initiated. CONCLUSION: The complexity and length of the technology-based randomized clinical trial created challenges in engagement and technology adaptation, which were generally discovered using novel remote monitoring technology and addressed using the adaptive intervention design. Investigators should plan to develop tools and procedures that explicitly support continuous remote monitoring of interventions in long-term, technology-based studies, alongside developing the interventions themselves, so as to support adaptive intervention design.
Abstract:
A solar cell relies on its ability to turn photons into current. Short-wavelength photons are typically absorbed near the top surface of a cell, where the charge carriers they generate tend to recombine before being collected. But when a layer of quantum dots (nanoscale semiconductor particles) is placed on top of the cell, it absorbs short-wavelength photons and re-emits them into the cell at longer wavelengths, enabling more efficient carrier collection. However, the resulting power conversion efficiency of the system depends critically on the quantum dot luminescence efficiency, and the nature of this relationship was previously unknown. Our calculations suggest that a quantum dot layer must have high luminescence efficiency (at least 80%) to improve the current output of existing photovoltaic (PV) cells; otherwise, it may worsen the cell's efficiency. Our quantum dot layer (using quantum dots with over 85% quantum yield) slightly reduced the efficiency of our PV cells: the short-circuit current of a commercial-grade cell fell from 0.1977 A to 0.1826 A, a 7.6% drop, suggesting that improved optical coupling from the quantum dot emission into the solar cell is needed. With better optical coupling, we predict current enhancements between ~6% and ~8% for a solar cell that already has an antireflection coating. Such improvements could have important commercial impacts if the coating could be deployed in a scalable fashion.
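For intuition only, here is a toy break-even calculation (our construction, not the paper's model): assume each short-wavelength photon is either collected directly with efficiency eqe_short or, with the layer present, re-emitted with quantum yield qy, coupled into the cell with efficiency coupling, and collected with eqe_long. All numeric values are invented.

```python
# All values below are assumptions for illustration, not measurements.
eqe_short, eqe_long, coupling = 0.55, 0.95, 0.75   # hypothetical efficiencies

def net_gain(qy):
    # change in collected current per short-wavelength photon with the layer
    return qy * coupling * eqe_long - eqe_short

print(f"break-even quantum yield ~ {eqe_short / (coupling * eqe_long):.2f}")
print(f"net gain at 85% QY: {net_gain(0.85):+.3f} per photon")

# The measured change in short-circuit current quoted in the abstract:
print(f"relative change: {(0.1826 - 0.1977) / 0.1977:.1%}")   # about -7.6%
```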
Abstract:
The manual effort required to convert sequential computational mechanics programs into a useful, scalable parallel form is considerable. Tools that can assist in the conversion process are clearly required. Computer aided parallelisation tools (CAPTools) have been developed to generate efficient parallel code for real-world structured grid application codes such as Computational Fluid Dynamics. Automatable single-program multiple-data (SPMD) overlapping domain decomposition (DD) techniques established for structured grid codes have been adapted by the authors to manually parallelise unstructured mesh applications. Inspector loops have been used to provide generic techniques for the run-time support necessary to extend the capabilities of CAPTools to the automatic implementation of SPMD DD techniques in the parallelisation of unstructured mesh codes. Copyright © 1999 John Wiley & Sons, Ltd.
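A minimal sketch of the inspector-loop idea for SPMD domain decomposition on an unstructured mesh: before the compute loop executes, an inspector pass over the indirection array records which referenced nodes live on other partitions, yielding the halo-exchange schedule. The mesh, partition map, and names below are invented for illustration.

```python
# Toy unstructured mesh: element -> node indirection and a partition map.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]   # element -> node indices
owner = {0: 0, 1: 0, 2: 1, 3: 1}                   # node -> owning partition
my_rank = 0

# Inspector: scan the indirect accesses the compute loop will make and
# record remote nodes touched by elements that involve local nodes.
halo = sorted({n for e in edges for n in e
               if owner[n] != my_rank and any(owner[m] == my_rank for m in e)})
print("nodes to receive from remote partitions:", halo)

# Executor: the original compute loop then runs unchanged, reading halo
# copies refreshed by the exchange schedule computed above.
```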
Abstract:
A simulation program has been developed to calculate the power-spectral density of thin avalanche photodiodes (APDs), which are used in optical networks. The program extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory space and is very time consuming. We describe our experiences in parallelizing the code using both MPI and OpenMP. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the OpenMP code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
Abstract:
An important factor for high-speed optical communication is the availability of ultrafast and low-noise photodetectors. Among the semiconductor photodetectors that are commonly used in today's long-haul and metro-area fiber-optic systems, avalanche photodiodes (APDs) are often preferred over p-i-n photodiodes due to their internal gain, which significantly improves the receiver sensitivity and alleviates the need for optical pre-amplification. Unfortunately, the very process of carrier impact ionization that generates the gain is inherently random and noisy, resulting in fluctuations not only in the gain but also in the time response. Recently, we have developed a theory characterizing the autocorrelation function of APDs that incorporates the dead-space effect, an effect that is very significant in thin, high-performance APDs. The research extends the time-domain analysis of the dead-space multiplication model to compute the autocorrelation function of the APD impulse response. However, the computation requires a large amount of memory space and is very time consuming. In this research, we describe our experiences in parallelizing the code in MPI and OpenMP using CAPTools. Several array partitioning schemes and scheduling policies are implemented and tested. Our results show that the code is scalable up to 64 processors on an SGI Origin 2000 machine and has small average errors.
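As a toy stand-in for the array-partitioning pattern described here, the following sketch block-partitions the lag range of an autocorrelation computation across worker processes; the impulse response is synthetic, and block partitioning is only one of the schemes such studies compare.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

t = np.linspace(0, 10, 4096)
impulse = np.exp(-t) * (1 + 0.1 * np.sin(5 * t))    # synthetic impulse response

def autocorr_chunk(lags):
    # each worker computes R(tau) for its contiguous block of lags
    return [float(np.dot(impulse[: len(impulse) - k], impulse[k:]))
            for k in lags]

if __name__ == "__main__":
    n_workers = 4
    blocks = np.array_split(np.arange(256), n_workers)  # block partitioning
    with ProcessPoolExecutor(n_workers) as pool:
        R = [r for chunk in pool.map(autocorr_chunk, blocks) for r in chunk]
    print(R[:3])
```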
Abstract:
Code parallelization using OpenMP for shared memory systems is relatively easier than using message passing for distributed memory systems. Despite this, it is still a challenge to use OpenMP to parallelize application codes in a way that yields effective scalable performance when executed on a shared memory parallel system. We describe an environment that assists the programmer in the various tasks of code parallelization, greatly reducing both the time frame and the level of skill required. The parallelization environment includes a number of tools that address the main tasks of parallelism detection, OpenMP source code generation, debugging and optimization. These tools include a high-quality, fully interprocedural dependence analysis with user interaction capabilities to facilitate the generation of efficient parallel code, an automatic relative debugging tool to identify erroneous user decisions in that interaction, and performance profiling to identify bottlenecks. Finally, experiences of parallelizing some NASA application codes are presented to illustrate some of the benefits of using the evolving environment.
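To make the central task concrete, here are two toy loops of the kind a dependence analysis must distinguish; they are illustrative and not taken from the NASA codes.

```python
# Illustrative loops, not from the NASA codes.
n = 8
a = list(range(n + 1))

# Parallelizable: iteration i reads and writes only a[i], so there is
# no loop-carried dependence and the iterations may run concurrently.
for i in range(n):
    a[i] = a[i] * 2

# Not parallelizable as written: iteration i reads a[i + 1], which
# iteration i + 1 later overwrites (a loop-carried anti-dependence),
# so reordering the iterations would change the result.
for i in range(n):
    a[i] = a[i + 1] + 1

print(a)
```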
Abstract:
The intrinsic independent features of the optimal codebook-cube search process in fractal video compression systems are examined and exploited, and the design of a suitable parallel algorithm reflecting this concept is presented. The Message Passing Interface (MPI) is chosen as the communication tool for implementing the parallel algorithm on distributed-memory parallel computers. Experimental results show that the parallel algorithm is able to reduce the compression time and achieve a high speed-up without changing the compression ratio or the quality of the decompressed image. A scalability test was also performed, and the results show that this parallel algorithm is scalable.
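A minimal mpi4py sketch of the distribution pattern described, with an invented codebook and a mean-squared-error distortion measure: each rank searches a cyclic slice of the codebook cubes, and the global best is selected from the gathered local bests.

```python
# Run with e.g.: mpiexec -n 4 python codebook_search.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

rng = np.random.default_rng(42)                    # same seed on every rank
codebook = rng.normal(size=(1024, 8, 8))           # candidate domain cubes
target = rng.normal(size=(8, 8))                   # range block to encode

mine = range(rank, len(codebook), size)            # cyclic partitioning
local_best = min((float(((codebook[i] - target) ** 2).sum()), i) for i in mine)

# Every rank learns the global best from the gathered (error, index) pairs
best_err, best_idx = min(comm.allgather(local_best))
if rank == 0:
    print(f"best codebook cube: {best_idx} (MSE {best_err:.4f})")
```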
Abstract:
The marine diatom Phaeodactylum tricornutum can accumulate up to 30% of the omega-3 long chain polyunsaturated fatty acid (LC-PUFA) eicosapentaenoic acid (EPA) and, as such, is considered a good source for the industrial production of EPA. However, P. tricornutum does not naturally accumulate significant levels of the more valuable omega-3 LC-PUFA docosahexaenoic acid (DHA). Previously, we have engineered P. tricornutum to accumulate elevated levels of DHA and docosapentaenoic acid (DPA) by overexpressing heterologous genes encoding enzyme activities of the LC-PUFA biosynthetic pathway. Here, the transgenic strain Pt_Elo5 has been investigated for the scalable production of EPA and DHA. Studies have been performed at the laboratory scale on cultures growing in up to 1 L flasks, a 3.5 L bubble column, a 550 L closed photobioreactor and a 1250 L raceway pond with artificial illumination. Detailed studies were carried out on the effect of different media, carbon sources and illumination on omega-3 LC-PUFA production by the transgenic strain Pt_Elo5 and wild-type P. tricornutum grown in 3.5 L bubble columns. The highest DHA content in the transgenic strain (7.5% of total fatty acids, TFA) was achieved in cultures grown in seawater salts, Instant Ocean (IO), supplemented with F/2 nutrients (F2N) under continuous light. After identifying the optimal conditions for omega-3 LC-PUFA accumulation in the small-scale experiments, we compared EPA and DHA levels of the transgenic strain grown in a larger fence-style tubular photobioreactor (PBR) and a raceway pond (RP). We observed a significant production of DHA over EPA, generating an EPA/DPA/DHA profile of 8.7%/4.5%/12.3% of TFA in cells grown in the photobioreactor, equivalent to 6.4 μg/mg dry weight DHA in a mid-exponentially growing algal culture. Omega-3 LC-PUFA production in a raceway pond at ambient temperature, supplemented with artificial illumination (110 μmol photons m⁻² s⁻¹) on a 16:8 h light:dark cycle, in natural seawater with F/2 nutrients, was 24.8% EPA and 10.3% DHA. The transgenic strain grown in the RP produced the highest levels of EPA (12.8%) incorporated in neutral lipids, whereas the highest partitioning of DHA into neutral lipids was observed in cultures grown in the PBR (7.1%). Our results clearly demonstrate the potential for developing the transgenic strain Pt_Elo5 as a platform for the commercial production of EPA and DHA.
Abstract:
The main lines of work on the Semantic Web in the field of television archives are analysed and described. To that end, the Semantic Web is first analysed and contextualized from a general perspective, and the main initiatives working with audiovisual material are then examined: the MuNCH project, the S5T project, Semantic Television and VideoActive.
Abstract:
Architectures and methods for the rapid design of silicon cores implementing discrete wavelet transforms over a wide range of specifications are described. These architectures are efficient, modular, scalable, and cover orthonormal and biorthogonal wavelet transform families. They offer efficient hardware utilization by exploiting a number of core wavelet filter properties and allow the creation of silicon designs that are highly parameterized, including in terms of wavelet type and wordlengths. Control circuitry is embedded within these systems, allowing them to be cascaded for any desired level of decomposition without any interface glue logic. The time to produce chip designs for a specific wavelet application is typically less than a day, and the resulting designs are comparable in area and performance to handcrafted designs. They are also portable across a wide range of silicon foundries and suitable for field-programmable gate array (FPGA) and programmable logic device (PLD) implementation. The approach described has also been extended to wavelet packet transforms.
Abstract:
A methodology for rapid silicon design of biorthogonal wavelet transform systems has been developed. It is based on generic, scalable architectures for the forward and inverse wavelet filters. These architectures offer efficient hardware utilisation by combining the linear-phase property of biorthogonal filters with decimation and interpolation. The resulting designs are parameterised in terms of wavelet type and the wordlengths of data and coefficients. Control circuitry is embedded within these cores, allowing them to be cascaded for any desired level of decomposition without any interface logic. The time to produce silicon designs for a biorthogonal wavelet system is essentially the time required to run the synthesis and layout tools; no further design effort is needed. The resulting silicon cores are comparable in area and performance to hand-crafted designs. They are also portable across a range of foundries and are suitable for FPGA and PLD implementations.
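A small NumPy sketch of the filter-bank step such cores implement, using the LeGall 5/3 biorthogonal analysis filters as an example: the outputs are decimated by two (hardware would compute only the retained samples), and the linear-phase symmetry lets each pair of taps share one multiplier, as the folded evaluation at the end shows. The filters and signal are illustrative.

```python
import numpy as np

h = np.array([-1, 2, 6, 2, -1]) / 8.0     # symmetric (linear-phase) low-pass
g = np.array([-1, 2, -1]) / 2.0           # symmetric high-pass
x = np.random.default_rng(0).normal(size=64)

def analyze(x, f):
    # filter then decimate by two; hardware computes only retained samples
    return np.convolve(x, f, mode="same")[::2]

approx, detail = analyze(x, h), analyze(x, g)

# Linear-phase folding: the 5-tap filter needs 3 multipliers, not 5
n = 10
folded = h[2] * x[n] + h[1] * (x[n - 1] + x[n + 1]) + h[0] * (x[n - 2] + x[n + 2])
assert np.isclose(folded, np.convolve(x, h, mode="same")[n])
```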
Abstract:
The future convergence of voice, video and data applications on the Internet requires that next-generation technology provide bandwidth and delay guarantees. Current technology trends are moving towards scalable aggregate-based systems where applications are grouped together and guarantees are provided at the aggregate level only. This solution alone is not enough for interactive video applications with sub-second delay bounds. This paper introduces a novel packet marking scheme that controls the end-to-end delay of an individual flow as it traverses a network enabled to supply aggregate-granularity Quality of Service (QoS). IPv6 Hop-by-Hop extension header fields are used to track the packet delay encountered at each network node, and autonomous decisions are made on the best queuing strategy to employ. The results of network simulations are presented, showing that when the proposed mechanism is employed the requested delay bound is met with a 20% reduction in resource reservation and no packet loss in the network.
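A toy simulation of the marking idea, with invented delays, budget, and a two-queue policy: each node adds its queuing delay to a hop-by-hop header field, and each subsequent node autonomously chooses a queue from the remaining per-flow slack.

```python
import random

BUDGET_MS = 100.0                       # requested end-to-end delay bound

def forward(packet, path):
    for i, queues in enumerate(path):
        hops_left = len(path) - i
        slack = BUDGET_MS - packet["delay_ms"]      # read from the header
        # autonomous per-node decision: expedite only when slack is tight
        q = "priority" if slack / hops_left < 15.0 else "best_effort"
        packet["delay_ms"] += queues[q]             # accumulate hop delay
    return packet

random.seed(3)
path = [{"priority": random.uniform(1, 5), "best_effort": random.uniform(5, 25)}
        for _ in range(6)]
pkt = forward({"delay_ms": 0.0}, path)
print(f"end-to-end delay: {pkt['delay_ms']:.1f} ms (bound {BUDGET_MS} ms)")
```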