975 results for Massively parallel sequencing
Abstract:
Using neuromorphic analog VLSI techniques for modeling large neural systems has several advantages over software techniques. Because they are built from the kind of massively parallel analog circuit arrays that are ubiquitous in neural systems, analog VLSI models are extremely fast, particularly when local interactions are important in the computation. While analog VLSI circuits are not as flexible as software methods, the constraints posed by this approach are often very similar to the constraints faced by biological systems. As a result, these constraints can offer many insights into the solutions found by evolution. This dissertation describes a hardware modeling effort to mimic the primate oculomotor system, which requires both fast sensory processing and fast motor control. A one-dimensional hardware model of the primate eye has been built which simulates the physical dynamics of the biological system. It is driven by analog VLSI circuits mimicking the brainstem and cortical circuits that control eye movements. In this framework, a visually triggered saccadic system is demonstrated which generates averaging saccades. In addition, an auditory localization system, based on the neural circuits of the barn owl, is used to trigger saccades to acoustic targets in parallel with visual targets. Two different types of learning are also demonstrated on the saccadic system using floating-gate technology, which allows the non-volatile storage of analog parameters directly on the chip. Finally, a model of visual attention is used to select and track moving targets against textured backgrounds, driving both saccadic and smooth-pursuit eye movements to keep the image of the target in the center of the field of view. This system represents one of the few efforts in this field to integrate both neuromorphic sensory processing and motor control in a closed-loop fashion.
Abstract:
A neural network is a highly interconnected set of simple processors. The many connections allow information to travel rapidly through the network, and because the processors are simple, many of them are feasible in one network. Together these properties imply that we can build efficient massively parallel machines using neural networks. The primary problem is how to specify the interconnections in a neural network. The various approaches developed so far, such as the outer product, learning algorithms, or energy functions, suffer from the following deficiencies: long training/specification times; no guarantee of working on all inputs; and a requirement for full connectivity.
Alternatively, we discuss methods of using the topology and constraints of the problems themselves to design the topology and connections of the neural solution. We define several useful circuits, generalizations of the Winner-Take-All circuit, that allow us to incorporate constraints using feedback in a controlled manner. These circuits are proven to be stable and to converge only on valid states. We use the Hopfield electronic model since this is close to an actual implementation. We also discuss methods for incorporating these circuits into larger systems, neural and non-neural. By exploiting regularities in our definition, we can construct efficient networks. To demonstrate the methods, we look at three problems from communications. We first discuss two applications to problems from circuit switching: finding routes in large multistage switches, and the call rearrangement problem. These show both how we can use many neurons to build massively parallel machines and how the Winner-Take-All circuits can simplify our designs.
Next we develop a solution to the contention arbitration problem of high-speed packet switches. We define a useful class of switching networks and then design a neural network to solve the contention arbitration problem for this class. Various aspects of the neural network/switch system are analyzed to measure the queueing performance of this method. Using the basic design, a feasible architecture for a large (1024-input) ATM packet switch is presented. Using the massive parallelism of neural networks, we can consider algorithms that were previously computationally unattainable. These now viable algorithms lead us to new perspectives on switch design.
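As a concrete illustration of the Winner-Take-All behaviour these circuits generalize, below is a minimal Python sketch of MAXNET-style iterative lateral inhibition; this is not the thesis's Hopfield-model circuitry, and the inhibition gain and convergence test are illustrative assumptions.

```python
import numpy as np

def maxnet_winner_take_all(inputs, eps=0.05, max_iters=1000):
    """Toy Winner-Take-All via mutual (lateral) inhibition: every unit
    suppresses every other unit until a single winner stays active.
    eps is a hypothetical inhibition gain, not a value from the thesis."""
    x = np.array(inputs, dtype=float)
    for _ in range(max_iters):
        inhibition = eps * (x.sum() - x)   # inhibition from all rival units
        x = np.maximum(0.0, x - inhibition)
        if np.count_nonzero(x) <= 1:       # converged: at most one survivor
            break
    return x

# only the unit with the largest initial input remains non-zero
print(maxnet_winner_take_all([0.3, 0.9, 0.5, 0.8]))
```

In a constraint-satisfaction setting, a circuit of this kind enforces "exactly one unit active" within a group, which is one way feedback can restrict a network to valid states.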
Abstract:
The Arabidopsis genome contains a highly complex and abundant population of small RNAs, and many of the endogenous siRNAs are dependent on RNA-Dependent RNA Polymerase 2 (RDR2) for their biogenesis. By analyzing an rdr2 loss-of-function mutant using two different parallel sequencing technologies, MPSS and 454, we characterized the complement of miRNAs expressed in Arabidopsis inflorescence to considerable depth. Nearly all known miRNAs were enriched in this mutant and we identified 13 new miRNAs, all of which were relatively low abundance and constitute new families. Trans-acting siRNAs (ta-siRNAs) were even more highly enriched. Computational and gel blot analyses suggested that the minimal number of miRNAs in Arabidopsis is approximately 155. The size profile of small RNAs in rdr2 reflected enrichment of 21-nt miRNAs and other classes of siRNAs like ta-siRNAs, and a significant reduction in 24-nt heterochromatic siRNAs. Other classes of small RNAs were found to be RDR2-independent, particularly those derived from long inverted repeats and a subset of tandem repeats. The small RNA populations in other Arabidopsis small RNA biogenesis mutants were also examined; a dcl2/3/4 triple mutant showed a similar pattern to rdr2, whereas dcl1-7 and rdr6 showed reductions in miRNAs and ta-siRNAs consistent with their activities in the biogenesis of these types of small RNAs. Deep sequencing of mutants provides a genetic approach for the dissection and characterization of diverse small RNA populations and the identification of low abundance miRNAs.
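The size-profile comparison described here boils down to tallying read lengths per library. A minimal sketch follows, assuming adapter-trimmed reads in a plain FASTA file (the file name and format are hypothetical, not from the study):

```python
from collections import Counter

def size_profile(fasta_path):
    """Tally small-RNA read lengths from a FASTA of adapter-trimmed
    reads, e.g. to compare the 21-nt (miRNA/ta-siRNA) and 24-nt
    (heterochromatic siRNA) fractions between genotypes."""
    lengths = Counter()
    with open(fasta_path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith(">"):
                lengths[len(line)] += 1
    return lengths

# hypothetical usage: an rdr2 library should show 21 nt enriched
# relative to 24 nt when compared with wild type
# for n, c in sorted(size_profile("rdr2_smallRNA.fa").items()):
#     print(n, c)
```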
Abstract:
In this thesis we study the general problem of reconstructing a function defined on a finite lattice from a set of incomplete, noisy and/or ambiguous observations. The goal of this work is to demonstrate the generality and practical value of a probabilistic (in particular, Bayesian) approach to this problem, particularly in the context of Computer Vision. In this approach, the prior knowledge about the solution is expressed in the form of a Gibbsian probability distribution on the space of all possible functions, so that the reconstruction task is formulated as an estimation problem. Our main contributions are the following: (1) We introduce the use of specific error criteria for the design of the optimal Bayesian estimators for several classes of problems, and propose a general (Monte Carlo) procedure for approximating them. This new approach leads to a substantial improvement over the existing schemes, both regarding the quality of the results (particularly for low signal-to-noise ratios) and the computational efficiency. (2) We apply the Bayesian approach to the solution of several problems, some of which are formulated and solved in these terms for the first time. Specifically, these applications are: the reconstruction of piecewise constant surfaces from sparse and noisy observations; the reconstruction of depth from stereoscopic pairs of images; and the formation of perceptual clusters. (3) For each of these applications, we develop fast, deterministic algorithms that approximate the optimal estimators, and illustrate their performance on both synthetic and real data. (4) We propose a new method, based on the analysis of the residual process, for estimating the parameters of the probabilistic models directly from the noisy observations. This scheme leads to an algorithm, with no free parameters, for the restoration of piecewise uniform images. (5) We analyze the implementation of the algorithms that we develop in non-conventional hardware, such as massively parallel digital machines, and analog and hybrid networks.
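To make the Monte Carlo estimation idea concrete, here is a minimal sketch for the simplest case: a binary (+1/-1) image with an Ising-type Gibbsian prior and additive Gaussian noise. Gibbs sampling approximates the posterior marginals, and thresholding them yields the MPM (maximizer of posterior marginals) estimate, which minimizes the expected number of misclassified sites. All parameter values are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def mpm_restore(noisy, beta=1.0, sigma=0.5, sweeps=50):
    """Gibbs sampler for a +/-1 image with an Ising prior (coupling
    beta) under Gaussian noise (std sigma). Averages the site
    marginals over sweeps and thresholds them: the MPM estimate."""
    h, w = noisy.shape
    x = np.where(noisy > 0, 1, -1)          # initial labeling
    marginal = np.zeros((h, w))
    for _ in range(sweeps):
        for i in range(h):
            for j in range(w):
                nb = sum(x[a, b] for a, b in
                         ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                         if 0 <= a < h and 0 <= b < w)
                # log-odds of x_ij = +1 given neighbours and observation
                logit = 2 * beta * nb + 2 * noisy[i, j] / sigma**2
                p = 1.0 / (1.0 + np.exp(-logit))
                x[i, j] = 1 if rng.random() < p else -1
        marginal += (x == 1)
    return np.where(marginal / sweeps > 0.5, 1, -1)

# toy usage: restore a two-region image corrupted by noise
truth = np.ones((16, 16), dtype=int)
truth[:, 8:] = -1
restored = mpm_restore(truth + rng.normal(0, 0.5, truth.shape))
```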
Abstract:
The objective of this thesis is the exploration and characterisation of the nanoscale electronic properties of conjugated polymers and nanocrystals. In Chapter 2, the first application of conducting-probe atomic force microscopy (CP-AFM)-based displacement-voltage (z-V) spectroscopy to local measurement of the electronic properties of conjugated polymer thin films is reported. Charge injection thresholds, along with the corresponding single-particle gap and exciton binding energies, are determined for a poly[2-methoxy-5-(2-ethylhexyloxy)-1,4-phenylenevinylene] thin film. By performing measurements across a grid of locations on the film, a series of exciton binding energy distributions is identified. The variation in measured exciton binding energies contrasts with the smoothness of the film, suggesting that the variation may be attributable to differences in the nano-environment of the polymer molecules within the film at each measurement location. In Chapter 3, the CP-AFM-based z-V spectroscopy method is extended for the first time to local, room-temperature measurements of the Coulomb blockade voltage thresholds arising from sequential single-electron charging of 28 kDa Au nanocrystal arrays. The fluid-like properties of the nanocrystal arrays enable reproducible formation of nanoscale probe-array-substrate junctions, allowing the influence of background charge on the electronic properties of the array to be identified. CP-AFM also allows complementary topography and phase data to be acquired before and after spectroscopy measurements, enabling comparison of local array morphology with local measurements of the Coulomb blockade thresholds. In Chapter 4, melt-assisted template wetting is applied for the first time to the massively parallel fabrication of poly(3-hexylthiophene) nanowires. The structural characteristics of the wires are first presented. Two-terminal electrical measurements of individual nanowires, utilising a CP-AFM tip as the source electrode, are then used to obtain the intrinsic nanowire resistivity and the total nanowire-electrode contact resistance, subsequently allowing single-nanowire hole mobility and mean nanowire-electrode barrier height values to be estimated. In Chapter 5, solution-assisted template wetting is used for the fabrication of fluorene-dithiophene co-polymer nanowires. The structural characteristics of these wires are also presented. Two-terminal electrical measurements of individual nanowires indicate barrier formation at the nanowire-electrode interfaces, and measured resistivity values suggest doping of the nanowires, possibly due to air exposure. The first report of single conjugated polymer nanowires as ultra-miniature photodetectors is presented, with single-wire devices yielding external quantum efficiencies ~ 0.1 % and responsivities ~ 0.4 mA/W under monochromatic illumination.
Abstract:
Intense-field ionization of the hydrogen molecular ion by linearly polarized light is modelled by direct solution of the fixed-nuclei time-dependent Schrödinger equation and compared with recent experiments. Parallel transitions are calculated using algorithms which exploit massively parallel computers. We identify and calculate dynamic tunnelling ionization resonances that depend on laser wavelength and intensity, and on molecular bond length. Results for λ ≈ 1064 nm are consistent with static tunnelling ionization. At shorter wavelengths, λ ≈ 790 nm, large dynamic corrections are observed. The results agree very well with recent experimental measurements of the ion spectra. Our results reproduce the single-peak resonance and provide accurate ionization rate estimates at high intensities. At lower intensities our results confirm a double peak in the ionization rate as the bond length varies.
Abstract:
A non-adiabatic quantum molecular dynamics approach for treating the interaction of matter with intense, short-duration laser pulses is developed. This approach, which is parallelized to run on massively-parallel supercomputers, is shown to be both accurate and efficient. Illustrative results are presented for harmonic generation occurring in diatomic molecules using linearly polarized laser pulses.
Abstract:
We describe a new ab initio method for solving the time-dependent Schrödinger equation for multi-electron atomic systems exposed to intense short-pulse laser light. We call the method the R-matrix with time-dependence (RMT) method. Our starting point is a finite-difference numerical integrator (HELIUM), which has proved successful at describing few-electron atoms and atomic ions in strong laser fields with high accuracy. By exploiting the R-matrix division-of-space concept, we bring together a numerical method most appropriate to the multi-electron finite inner region (R-matrix basis set) and a different numerical method most appropriate to the one-electron outer region (finite difference). In order to exploit massively parallel supercomputers efficiently, we time-propagate the wavefunction in both regions by employing Arnoldi methods, originally developed for HELIUM.
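As a rough illustration of the Arnoldi propagation step mentioned above, here is a dense, single-process Python sketch of one Krylov-subspace step psi -> exp(-iH dt) psi. The production codes apply H matrix-free across the distributed inner and outer regions; the Krylov order m here is an arbitrary choice.

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_step(H, psi, dt, m=12):
    """One Arnoldi (Krylov-subspace) time step: build an orthonormal
    basis of span{psi, H psi, ..., H^(m-1) psi}, then exponentiate the
    small projected Hamiltonian instead of the full H."""
    n = len(psi)
    Q = np.zeros((n, m + 1), dtype=complex)
    h = np.zeros((m + 1, m), dtype=complex)
    beta = np.linalg.norm(psi)
    Q[:, 0] = psi / beta
    for j in range(m):
        w = H @ Q[:, j]
        for i in range(j + 1):               # modified Gram-Schmidt
            h[i, j] = np.vdot(Q[:, i], w)
            w = w - h[i, j] * Q[:, i]
        h[j + 1, j] = np.linalg.norm(w)
        if abs(h[j + 1, j]) < 1e-12:         # happy breakdown: exact subspace
            m = j + 1
            break
        Q[:, j + 1] = w / h[j + 1, j]
    small = expm(-1j * dt * h[:m, :m])       # exponential of an m x m matrix
    return beta * (Q[:, :m] @ small[:, 0])

# toy usage on a random Hermitian "Hamiltonian"
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
H = (A + A.conj().T) / 2
psi0 = np.ones(64, dtype=complex) / 8.0
psi1 = arnoldi_step(H, psi0, dt=0.01)
```

The payoff is that only matrix-vector products with H are needed, which is what makes the scheme attractive on distributed-memory machines.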
Abstract:
Massively parallel networks of highly efficient, high-performance Single Instruction Multiple Data (SIMD) processors have been shown to enable FPGA-based implementation of real-time signal processing applications with performance and cost comparable to dedicated hardware architectures. This is achieved by exploiting simple datapath units with deep processing pipelines. However, these architectures are highly susceptible to pipeline bubbles resulting from data and control hazards; the only way to mitigate these is manual interleaving of application tasks on each datapath, since no suitable automated interleaving approach exists. In this paper we describe a new automated integrated mapping/scheduling approach to map algorithm tasks to processors and a new low-complexity list scheduling technique to generate the interleaved schedules. When applied to a spatial Fixed-Complexity Sphere Decoding (FSD) detector for next-generation Multiple-Input Multiple-Output (MIMO) systems, the resulting schedules achieve real-time performance for IEEE 802.11n systems on a network of 16-way SIMD processors on FPGA, enable a better performance/complexity balance than current approaches, and produce results comparable to handcrafted implementations.
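A minimal sketch of a greedy list scheduler in the general spirit described above; the paper's actual integrated mapping/interleaving heuristic is more involved, and the task set, latency model and longest-task-first tie-breaking here are assumptions. Each ready task goes to the earliest-free processor, so independent tasks naturally interleave and fill pipeline bubbles.

```python
import heapq

def list_schedule(tasks, deps, latency, n_procs):
    """Greedy list scheduling. tasks: {name: cycles}; deps: {name:
    [predecessors]}; latency: extra pipeline cycles a result needs
    before it can be consumed. Returns (task, processor, start) tuples."""
    finish = {}                               # task -> finish cycle
    procs = [(0, p) for p in range(n_procs)]  # (free-at cycle, proc id)
    heapq.heapify(procs)
    pending = dict(deps)
    schedule = []
    while pending:
        # ready tasks: all predecessors already scheduled
        ready = [t for t, ps in pending.items()
                 if all(p in finish for p in ps)]
        for t in sorted(ready, key=lambda name: -tasks[name]):
            free_at, pid = heapq.heappop(procs)
            start = max(free_at,
                        max((finish[p] + latency for p in pending[t]),
                            default=0))
            finish[t] = start + tasks[t]
            schedule.append((t, pid, start))
            heapq.heappush(procs, (finish[t], pid))
            del pending[t]
    return schedule

# toy example: diamond dependency graph on 2 processors
print(list_schedule({"a": 2, "b": 3, "c": 3, "d": 1},
                    {"a": [], "b": ["a"], "c": ["a"], "d": ["b", "c"]},
                    latency=2, n_procs=2))
```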
Abstract:
Cystic fibrosis (CF) is characterized by defective mucociliary clearance and chronic airway infection by a complex microbiota. Infection, persistent inflammation and periodic episodes of acute pulmonary exacerbation contribute to an irreversible decline in CF lung function. While the factors leading to acute exacerbations are poorly understood, antibiotic treatment can temporarily resolve pulmonary symptoms and partially restore lung function. Previous studies indicated that exacerbations may be associated with changes in microbial densities and the acquisition of new microbial species. Given the complexity of the CF microbiota, we applied massively parallel pyrosequencing to identify changes in airway microbial community structure in 23 adult CF patients during acute pulmonary exacerbation, after antibiotic treatment and during periods of stable disease. Over 350,000 sequences were generated, representing nearly 170 distinct microbial taxa. Approximately 60% of sequences obtained were from the recognized CF pathogens Pseudomonas and Burkholderia, which were detected in largely non-overlapping patient subsets. In contrast, other taxa including Prevotella, Streptococcus, Rothia and Veillonella were abundant in nearly all patient samples. Although antibiotic treatment was associated with a small decrease in species richness, there was minimal change in overall microbial community structure. Furthermore, microbial community composition was highly similar in patients during an exacerbation and when clinically stable, suggesting that exacerbations may represent intrapulmonary spread of infection rather than a change in microbial community composition. Mouthwash samples, obtained from a subset of patients, showed a nearly identical distribution of taxa as expectorated sputum, indicating that aspiration may contribute to colonization of the lower airways. Finally, we observed a strong correlation between low species richness and poor lung function. Taken together, these results indicate that the adult CF lung microbiome is largely stable through periods of exacerbation and antibiotic treatment and that short-term compositional changes in the airway microbiota do not account for CF pulmonary exacerbations.
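Two of the community measures discussed above can be sketched directly from a per-sample taxon count table. Rarefaction and Bray-Curtis dissimilarity are common choices for species richness and community-structure comparison respectively; the study's exact metrics are not specified here, so treat this as an illustrative assumption.

```python
import numpy as np

def rarefied_richness(counts, depth=1000, trials=100, seed=1):
    """Species richness after subsampling reads to a common depth,
    so samples of different sequencing depths are comparable.
    counts: integer reads per taxon; requires sum(counts) >= depth."""
    rng = np.random.default_rng(seed)
    reads = np.repeat(np.arange(len(counts)), counts)  # one entry per read
    return np.mean([len(np.unique(rng.choice(reads, depth, replace=False)))
                    for _ in range(trials)])

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two taxon count vectors:
    0 = identical communities, 1 = no shared taxa."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return 1.0 - 2.0 * np.minimum(a, b).sum() / (a.sum() + b.sum())

# hypothetical usage: compare an exacerbation sample with the same
# patient's stable-disease sample
# print(bray_curtis(exacerbation_counts, stable_counts))
```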
Abstract:
This article documents the public availability of (i) microbiomes in the diet and gut of larvae from the dipteran Dilophus febrilis using massively parallel sequencing, (ii) SNP and SSR discovery and characterization in the transcriptome of the Atlantic mackerel (Scomber scombrus, L.) and (iii) an assembled transcriptome for an endangered, endemic Iberian cyprinid fish (Squalius pyrenaicus).
Abstract:
For a number of years, there has been a major effort to calculate electron-impact excitation data for every ion stage of iron, embodied by the ongoing efforts of the IRON Project by Hummer et al (1993, Astron. Astrophys. 279, 298). Due to the complexity of the targets, calculations for the lower stages of ionization have been limited to either intermediate-coupling calculations within the ground configurations or LS-coupling calculations of the ground and excited configurations. However, accurate excitation data between individual levels within both the ground and excited configurations of the low charge-state ions are urgently required for applications to both astrophysical and laboratory plasmas. Here we report on the results of the first intermediate-coupling R-matrix calculation of electron-impact excitation for Fe⁴⁺, for which the close-coupling (CC) expansion includes not only the levels of the 3d⁴ ground configuration, but also the levels of the 3d³4s, 3d³4p, 3d³4d and 3d²4s² excited configurations. With 359 levels in the CC expansion and over 2400 scattering channels for many of the JΠ partial waves, this represents the largest electron–ion scattering calculation to date, and it was performed on massively parallel computers using a recently developed set of relativistic parallel R-matrix programs.
Abstract:
Over the last decade an Auburn-Rollins-Strathclyde consortium has developed several suites of parallel R-matrix codes [1, 2, 3] that can meet the fundamental data needs required for the interpretation of astrophysical observations and/or plasma experiments. Traditionally our collisional work on light fusion-related atoms has focused on spectroscopy and impurity transport for magnetically confined fusion devices. Our approach has been to provide a comprehensive data set for excitation/ionization for every ion stage of a particular element. As we progress towards a burning fusion plasma, there is a demand for the collisional processes involving tungsten, which has required a revitalization of the relativistic R-matrix approach. The implementation of these codes on massively parallel supercomputers has facilitated the progression to models involving thousands of levels in the close-coupling expansion required by the open d and f sub-shell systems of mid-Z tungsten. This work also complements the electron-impact excitation of Fe-peak elements required by astrophysics, in particular the near-neutral species, which offer similar atomic-structure challenges. Although electron-impact excitation is our primary focus in terms of fusion applications, the single-photon photoionisation codes are also being developed in tandem, and benefit greatly from this ongoing work.
Abstract:
Neural networks have emerged as the topic of the day. The spectrum of their applications is as wide as from ECG noise filtering to seismic data analysis, and from elementary particle detection to electronic music composition. The focal point of the proposed work is an application of a massively parallel connectionist network model to the detection of a sonar target. This task is segmented into: (i) generation of training patterns from sea noise that contains the radiated noise of a target, for teaching the network; (ii) selection of a suitable network topology and learning algorithm; and (iii) training of the network and its subsequent testing, where the network detects, in unknown patterns applied to it, the presence of the features it has already learned. A three-layer perceptron using backpropagation learning is initially subjected to recursive training with example patterns (derived from sea ambient noise with and without the radiated noise of a target). On every presentation, the error in the output of the network is propagated back, and the weights and the bias associated with each neuron in the network are modified in proportion to this error measure. During this iterative process, the network converges and extracts the target features, which become encoded in its generalized weights and biases. In every unknown pattern that the converged network subsequently encounters, it searches for the features already learned and outputs an indication of their presence or absence. This capability for target detection is exhibited by the response of the network to various test patterns presented to it. Three network topologies are tried with two variants of backpropagation learning, and the performance of each combination is subsequently graded.
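For concreteness, a minimal sketch of a three-layer perceptron trained with backpropagation, as described above; the layer sizes, learning rate and toy training signal are hypothetical, and the real input patterns would be features derived from sea noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class ThreeLayerPerceptron:
    """Input -> hidden -> single sigmoid output, trained by
    backpropagation of the squared output error."""
    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, n_hidden)
        self.b2 = 0.0
        self.lr = lr

    def forward(self, x):
        self.h = sigmoid(self.W1 @ x + self.b1)        # hidden activations
        self.y = sigmoid(self.W2 @ self.h + self.b2)   # detection output
        return self.y

    def train_step(self, x, target):
        y = self.forward(x)
        # output error, propagated back through the sigmoid derivatives
        d_out = (y - target) * y * (1 - y)
        d_hid = d_out * self.W2 * self.h * (1 - self.h)
        # weight and bias updates proportional to the error measure
        self.W2 -= self.lr * d_out * self.h
        self.b2 -= self.lr * d_out
        self.W1 -= self.lr * np.outer(d_hid, x)
        self.b1 -= self.lr * d_hid
        return 0.5 * (y - target) ** 2

# toy usage: learn to flag patterns whose mean exceeds a threshold
net = ThreeLayerPerceptron(n_in=8, n_hidden=6)
for _ in range(2000):
    x = rng.random(8)
    net.train_step(x, float(x.mean() > 0.5))
```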
Abstract:
Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes are less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and representation of the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to cover such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, which takes advantage of the computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks whether a new generation of models is needed to tackle these problems.