3 results for Theoretical Computer Science

at Duke University


Relevance:

90.00%

Publisher:

Abstract:

Nucleic acid hairpins have been a subject of study for the last four decades. They are composed of a single strand that is hybridized to itself, with the central section forming an unhybridized loop. In nature, they stabilize single-stranded RNA, serve as nucleation sites for RNA folding, act as protein recognition signals, and participate in mRNA localization and the regulation of mRNA degradation. DNA hairpins in biological contexts, on the other hand, have been studied with respect to forming cruciform structures that can regulate gene expression.

The use of DNA hairpins as fuel for synthetic molecular devices, including locomotion, was proposed and experimentally demonstrated in 2003. They are attractive because they provide an on-demand energy/information supply mechanism: the energy/information is hidden (from hybridization) in the hairpin's loop until required, and is harnessed by opening the stem region and exposing the single-stranded loop section. The loop region is then free for hybridization and can help move the system into a thermodynamically favourable state. The hidden energy and information, coupled with programmability, provide another functionality: selectively choosing which reactions to hide and which reactions to allow to proceed, which helps develop a topological sequence of events.
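As a rough illustration of this fuel mechanism, the following toy Python sketch models a hairpin at the domain level: the loop stays sequestered until an opener strand invades through the exposed toehold. All sequences and domain names here are hypothetical, and real strand displacement is a thermodynamic/kinetic process rather than an exact string match.

# Toy domain-level model of a DNA hairpin as an on-demand fuel source.
# Sequences and domain names are hypothetical illustrations.

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def revcomp(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return seq.translate(COMPLEMENT)[::-1]

class Hairpin:
    """5'-toehold-stem-loop-revcomp(stem)-3': the stem hybridizes to
    itself, sequestering the loop; only the toehold stays single-stranded."""

    def __init__(self, toehold: str, stem: str, loop: str):
        self.toehold, self.stem, self.loop = toehold, stem, loop
        self.is_open = False

    def exposed(self) -> str:
        """Domains currently available for hybridization."""
        if self.is_open:
            return self.toehold + self.stem + self.loop
        return self.toehold

    def try_open(self, opener: str) -> bool:
        # Toehold-mediated strand displacement: an opener complementary
        # to toehold + stem invades via the toehold and unzips the stem.
        if opener == revcomp(self.toehold + self.stem):
            self.is_open = True
        return self.is_open

hp = Hairpin(toehold="ACCTA", stem="GGATGC", loop="TTTTTT")
print(hp.exposed())                        # 'ACCTA' -- loop still hidden
hp.try_open(revcomp("ACCTA" + "GGATGC"))   # opener invades via the toehold
print(hp.exposed())                        # loop now free to hybridize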

Hairpins have been utilized as a source of fuel for many different DNA devices. In this thesis, we program four different molecular devices using DNA hairpins and experimentally validate them in the laboratory:

1) A novel enzyme-free, autocatalytic, self-replicating system composed entirely of DNA that operates isothermally.

2) Time-responsive circuits using DNA, with two properties: a) asynchrony, meaning the final output is always correct regardless of differences in the arrival times of the different inputs; and b) renewability, meaning the circuits can be used multiple times without major degradation of the gate motifs, so if the inputs change over time, the DNA-based circuit can re-compute the output correctly based on the new inputs.

3) Activatable tiles, a theoretical extension to the tile assembly model that enhances its robustness by protecting the sticky sides of tiles until a tile is partially incorporated into a growing assembly.

4) Controlled amplification of a DNA catalytic system, a device in which the amplification does not run uncontrollably until the system exhausts its fuel, but instead achieves a finite amount of gain (a toy kinetic sketch of this behaviour follows this list).
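To make the distinction in device 4 concrete, here is a minimal kinetic sketch in Python contrasting an uncontrolled catalytic amplifier, which runs until the fuel is exhausted, with a controlled variant in which a sink reaction deactivates the catalyst and caps the gain. The rate constants and the sink mechanism are illustrative assumptions, not the reaction network used in the thesis.

# Minimal mass-action sketch: a catalyst converts fuel to output; an
# optional sink deactivates the catalyst, capping the achievable gain.
# All rate constants are illustrative, not the thesis's actual values.

def gain(k_cat=1.0, k_sink=0.0, fuel0=100.0, cat0=1.0, dt=1e-3, t_end=20.0):
    fuel, out, cat = fuel0, 0.0, cat0
    for _ in range(int(t_end / dt)):
        converted = k_cat * cat * fuel * dt   # fuel -> output, catalyzed
        lost = k_sink * cat * dt              # catalyst deactivation
        fuel -= converted
        out += converted
        cat = max(cat - lost, 0.0)
    return out / cat0                         # output per initial catalyst

print(f"uncontrolled gain: {gain(k_sink=0.0):6.1f}")  # ~fuel0: runs to exhaustion
print(f"controlled gain:   {gain(k_sink=2.0):6.1f}")  # finite, tunable gain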

Nucleic acid circuits with the ability to perform complex logic operations have many potential practical applications, for example point-of-care diagnostics. We discuss the designs of our DNA hairpin molecular devices, the results we have obtained, and the challenges we have overcome to make these devices truly functional.

Relevance:

90.00%

Publisher:

Abstract:

This work explores the use of statistical methods in describing and estimating camera poses, as well as the information feedback loop between camera pose and object detection. Rapid development in robotics and computer vision has heightened the need for algorithms that infer, understand, and utilize information about the position and orientation of sensor platforms when observing and/or interacting with their environment.

The first contribution of this thesis is the development of a set of statistical tools for representing and estimating the uncertainty in object poses. A distribution for representing the joint uncertainty over multiple object positions and orientations is described, called the mirrored normal-Bingham distribution. This distribution generalizes both the normal distribution in Euclidean space, and the Bingham distribution on the unit hypersphere. It is shown to inherit many of the convenient properties of these special cases: it is the maximum-entropy distribution with fixed second moment, and there is a generalized Laplace approximation whose result is the mirrored normal-Bingham distribution. This distribution and approximation method are demonstrated by deriving the analytical approximation to the wrapped-normal distribution. Further, it is shown how these tools can be used to represent the uncertainty in the result of a bundle adjustment problem.
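For reference, these are the two special cases the mirrored normal-Bingham distribution is stated to generalize; the abstract does not give the joint density itself, so only the standard forms are shown, and the notation below is assumed rather than taken from the thesis.

% Normal distribution on R^n (positions); Sigma is the covariance.
\[
  p_{\mathcal{N}}(\mathbf{t}) \propto
    \exp\!\Big(-\tfrac{1}{2}\,(\mathbf{t}-\boldsymbol{\mu})^{\top}
    \Sigma^{-1}(\mathbf{t}-\boldsymbol{\mu})\Big),
  \qquad \mathbf{t} \in \mathbb{R}^{n}.
\]
% Bingham distribution on the unit hypersphere (orientations); M is
% orthogonal, Z is a diagonal matrix of concentration parameters.
\[
  p_{\mathcal{B}}(\mathbf{q}) \propto
    \exp\!\big(\mathbf{q}^{\top} M Z M^{\top} \mathbf{q}\big),
  \qquad \mathbf{q} \in S^{d-1},
  \qquad p_{\mathcal{B}}(\mathbf{q}) = p_{\mathcal{B}}(-\mathbf{q}).
\]

The antipodal symmetry of the Bingham density is what makes it natural for unit-quaternion orientation representations, where q and -q encode the same rotation.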

Another application of these methods is illustrated as part of a novel camera pose estimation algorithm based on object detections. The autocalibration task is formulated as a bundle adjustment problem using prior distributions over the 3D points to enforce the objects' structure and their relationship with the scene geometry. This framework is very flexible and enables the use of off-the-shelf computational tools to solve specialized autocalibration problems. Its performance is evaluated using a pedestrian detector to provide head and foot location observations, and it proves much faster and potentially more accurate than existing methods.
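The following is a minimal sketch of this formulation using an off-the-shelf solver (scipy's least_squares) to jointly refine camera parameters and 3D points, with prior residuals tying each pedestrian's head point to its foot point. The toy camera model, the fixed 1.7 m height prior, and the weights are illustrative assumptions, not the thesis's actual formulation.

# Autocalibration as bundle adjustment with structure priors (a sketch).
import numpy as np
from scipy.optimize import least_squares

def project(cam, X):
    """Toy pinhole camera: cam = (focal, height); X is an (N, 3) array
    of world points, with the camera h metres above the ground origin."""
    f, h = cam
    Xc = X - np.array([0.0, h, 0.0])   # translate into the camera frame
    return f * Xc[:, :2] / Xc[:, 2:3]  # perspective division onto pixels

def residuals(theta, obs, n):
    cam, pts = theta[:2], theta[2:].reshape(2 * n, 3)
    heads, feet = pts[:n], pts[n:]
    r_proj = (project(cam, pts) - obs).ravel()
    # Structure prior: each head sits ~1.7 m above its foot point.
    r_prior = (heads - feet - np.array([0.0, 1.7, 0.0])).ravel()
    return np.concatenate([r_proj, 3.0 * r_prior])   # weighted prior

# Synthetic scene: 4 pedestrians; true focal 800 px, camera 1.5 m high.
rng = np.random.default_rng(0)
feet = np.column_stack([rng.uniform(-2, 2, 4), np.zeros(4), rng.uniform(4, 8, 4)])
heads = feet + np.array([0.0, 1.7, 0.0])
obs = project((800.0, 1.5), np.vstack([heads, feet])) + rng.normal(0, 0.5, (8, 2))

theta0 = np.concatenate([[500.0, 1.0], (np.vstack([heads, feet]) + 0.1).ravel()])
fit = least_squares(residuals, theta0, args=(obs, 4))
print("estimated (focal, camera height):", fit.x[:2])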

Finally, the information feedback loop between object detection and camera pose estimation is closed by utilizing camera pose information to improve object detection in scenarios with significant perspective warping. Methods are presented that allow the inverse perspective mapping traditionally applied to images to be applied instead to features computed from those images. For the special case of HOG-like features, which are used by many modern object detection systems, these methods are shown to provide substantial performance benefits over unadapted detectors while achieving real-time frame rates, orders of magnitude faster than comparable image warping methods.
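As a sketch of this feature-warping idea, assuming OpenCV is available: the ground-plane homography is applied to each channel of an already-computed feature map rather than to the image. The homography values and the toy 9-channel map below are illustrative, and a complete method would also adapt gradient orientations under the warp, which this per-channel resampling ignores.

import cv2
import numpy as np

def warp_feature_map(feat, H, out_size):
    """Apply homography H to every channel of an (h, w, c) feature map,
    instead of warping the image and recomputing the features."""
    channels = [cv2.warpPerspective(feat[..., c], H, out_size,
                                    flags=cv2.INTER_LINEAR)
                for c in range(feat.shape[-1])]
    return np.stack(channels, axis=-1)

# Hypothetical ground-plane homography (in practice derived from the
# estimated camera pose) and a toy 9-channel HOG-like feature map.
H = np.array([[1.0, -0.30, 40.0],
              [0.0,  0.60, 20.0],
              [0.0, -2e-3,  1.0]])
feat = np.random.rand(60, 80, 9).astype(np.float32)

flat = warp_feature_map(feat, H, (80, 60))   # out_size is (width, height)
print(flat.shape)                            # (60, 80, 9), perspective-corrected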

The statistical tools and algorithms presented here are especially promising for mobile cameras, providing the ability to autocalibrate and adapt to the camera pose in real time. In addition, these methods have wide-ranging potential applications in diverse areas of computer vision, robotics, and imaging.

Relevance:

90.00%

Publisher:

Abstract:

Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem size is to first partition the dataset into subsets and then fit using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge lies in defining an algorithm with low communication, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm for solving these issues. The algorithm applies feature selection in parallel to each subset using regularized regression or Bayesian variable selection methods, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, involves minimal communication, scales efficiently in sample size, and has theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to standard competitors.
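Here is a compact Python sketch of the message procedure as described, with a lasso standing in for the regularized-regression selection step; the subset count, penalty, and least-squares refit are illustrative choices rather than the thesis's tuned settings.

import numpy as np
from sklearn.linear_model import Lasso

def message(X, y, n_subsets=5, alpha=0.1):
    n, p = X.shape
    parts = np.array_split(np.random.permutation(n), n_subsets)
    # 1) feature selection on each row subset (parallel in practice)
    include = np.array([Lasso(alpha=alpha).fit(X[idx], y[idx]).coef_ != 0
                        for idx in parts])
    # 2) 'median' feature inclusion index: keep features selected by
    #    a majority of the subsets
    selected = np.median(include, axis=0) >= 0.5
    # 3) estimate coefficients on the selected features per subset,
    #    then average the estimates
    beta = np.zeros(p)
    beta[selected] = np.mean([np.linalg.lstsq(X[idx][:, selected], y[idx],
                                              rcond=None)[0]
                              for idx in parts], axis=0)
    return beta

# Toy data: 10 000 rows, 50 features, 3 of them active.
rng = np.random.default_rng(1)
X = rng.normal(size=(10_000, 50))
y = X[:, 0] * 2.0 - X[:, 3] * 1.5 + X[:, 7] + rng.normal(size=10_000)
print(np.nonzero(message(X, y))[0])   # should recover features {0, 3, 7}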

While sample space partitioning is useful in handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO can achieve consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments are provided to illustrate the performance of the new framework.
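One way to realize this pipeline is sketched below: decorrelate by premultiplying both X and y with (XX^T/p + rI)^{-1/2}, partition the columns across workers, and fit each block independently. The exact scaling and ridge term of the decorrelation, and the use of a lasso as the per-worker high-dimensional solver, are assumptions for illustration.

import numpy as np
from sklearn.linear_model import Lasso

def deco(X, y, m=4, alpha=0.05, ridge=1e-6):
    n, p = X.shape
    # Decorrelation step: F = (X X^T / p + ridge * I)^{-1/2}.
    G = X @ X.T / p + ridge * np.eye(n)
    w, V = np.linalg.eigh(G)
    F = V @ np.diag(w ** -0.5) @ V.T
    Xd, yd = F @ X, F @ y
    # Partition features across m workers; fit each block independently.
    beta = np.zeros(p)
    for idx in np.array_split(np.arange(p), m):   # embarrassingly parallel
        beta[idx] = Lasso(alpha=alpha).fit(Xd[:, idx], yd).coef_
    return beta

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 400))              # p > n: high-dimensional
y = X[:, 10] * 3.0 - X[:, 250] * 2.0 + rng.normal(size=200)
print(np.nonzero(deco(X, y))[0])             # ideally {10, 250}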

For datasets with both large sample sizes and high dimensionality, I propose a new divide-and-conquer framework, DEME (DECO-message), that leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of each cube using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each of a feasible size that can be stored and fitted on a single computer in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
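Structurally, DEME can then be sketched as a thin layer over the two previous sketches. This assumes the deco function defined above is in scope, and the message-style aggregation across row cubes (a median inclusion vote followed by averaging) is an illustrative reading of the reverse-order synthesis, not the thesis's exact procedure.

import numpy as np

def deme(X, y, n_cubes=4, m=4):
    """Row cubes via a message-style split; DECO within each cube;
    message-style aggregation (median vote, then averaging) across cubes."""
    cubes = np.array_split(np.random.permutation(len(y)), n_cubes)
    betas = np.array([deco(X[idx], y[idx], m=m) for idx in cubes])
    selected = np.median(betas != 0, axis=0) >= 0.5   # majority inclusion
    return np.where(selected, betas.mean(axis=0), 0.0)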