631 results for algorithmic skeletons


Relevance: 10.00%

Abstract:

Singular Value Decomposition (SVD) is a key linear algebraic operation in many scientific and engineering applications. In particular, many computational intelligence systems rely on machine learning methods involving high-dimensionality datasets that must be processed quickly for real-time adaptability. In this paper we describe a practical FPGA (Field Programmable Gate Array) implementation of an SVD processor for accelerating the solution of large LSE problems. The design approach has been comprehensive, from algorithmic refinement through numerical analysis to customization for an efficient hardware realization. The processing scheme rests on an adaptive vector rotation evaluator for error regularization that enhances convergence speed with no penalty on solution accuracy. The proposed architecture, which follows a data transfer scheme, is scalable and based on the interconnection of simple rotation units, which allows a trade-off between occupied area and processing acceleration in the final implementation. This permits the SVD processor to be implemented on both low-cost and high-end FPGAs, according to the final application requirements.
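The rotation-based processing the abstract describes can be illustrated in software with a one-sided Jacobi SVD, the classic SVD algorithm built entirely from plane rotations. This is a generic sketch of the algorithm family, not the paper's hardware design, and the test matrix is arbitrary:

```python
import math

def jacobi_svd_singular_values(A, sweeps=30):
    """One-sided Jacobi SVD: repeatedly orthogonalise pairs of columns
    with plane rotations; at convergence the singular values are the
    column norms."""
    A = [row[:] for row in A]          # work on a copy
    m, n = len(A), len(A[0])
    for _ in range(sweeps):
        off = 0.0
        for i in range(n - 1):
            for j in range(i + 1, n):
                a = sum(A[k][i] * A[k][i] for k in range(m))
                b = sum(A[k][j] * A[k][j] for k in range(m))
                c = sum(A[k][i] * A[k][j] for k in range(m))
                off = max(off, abs(c))
                if abs(c) < 1e-15:
                    continue
                # rotation angle that zeroes the column inner product
                zeta = (b - a) / (2.0 * c)
                t = math.copysign(1.0, zeta) / (abs(zeta) + math.hypot(1.0, zeta))
                cs = 1.0 / math.hypot(1.0, t)
                sn = cs * t
                for k in range(m):     # apply the rotation to both columns
                    Ai, Aj = A[k][i], A[k][j]
                    A[k][i] = cs * Ai - sn * Aj
                    A[k][j] = sn * Ai + cs * Aj
        if off < 1e-12:
            break
    return sorted((math.sqrt(sum(A[k][j] ** 2 for k in range(m)))
                   for j in range(n)), reverse=True)

svals = jacobi_svd_singular_values([[3.0, 0.0], [4.0, 5.0]])
```

Each inner update touches only two columns, which is why rotation-based SVD maps naturally onto arrays of simple interconnected rotation units.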

Relevance: 10.00%

Abstract:

The asymmetric construction of quaternary stereocenters is a topic of great interest in the organic chemistry community given their prevalence in natural products and biologically active molecules. Over the last decade, the Stoltz group has pursued the synthesis of this challenging motif via a palladium-catalyzed allylic alkylation using chiral phosphinooxazoline (PHOX) ligands. Recent results indicate that the alkylation of lactams and imides consistently proceeds with enantioselectivities substantially higher than any other substrate class previously examined in this system. This observation prompted exploration of the characteristics that distinguish these molecules as superior alkylation substrates, resulting in newfound insights and marked improvements in the allylic alkylation of carbocyclic compounds.

General routes to cyclopentanoid and cycloheptanoid core structures have been developed that incorporate the palladium-catalyzed allylic alkylation as a key transformation. The unique reactivity of α-quaternary vinylogous esters upon addition of hydride or organometallic reagents enables divergent access to γ-quaternary acylcyclopentenes or cycloheptenones through respective ring contraction or carbonyl transposition pathways. Derivatization of the resulting molecules provides a series of mono-, bi-, and tricyclic systems that can serve as valuable intermediates for the total synthesis of complex natural products.

The allylic alkylation and ring contraction methodology has been employed to prepare variably functionalized bicyclo[5.3.0]decane molecules and enables the enantioselective total syntheses of daucene, daucenal, epoxydaucenal B, and 14-p-anisoyloxydauc-4,8-diene. This route overcomes the challenge of accessing β-substituted acylcyclopentenes by employing a siloxyenone to effect the Grignard addition and ring opening in a single step. Subsequent ring-closing metathesis and aldol reactions form the hydroazulene core of these targets. Derivatization of a key enone intermediate allows access to either the daucane sesquiterpene or sphenobolane diterpene carbon skeletons, as well as other oxygenated scaffolds.

Relevance: 10.00%

Abstract:

Epidural electrostimulation holds great potential for improving therapy for patients with spinal cord injury (SCI) (Harkema et al., 2011), and promising results from combined therapies using electrostimulation have also been obtained recently (e.g., van den Brand et al., 2012). The devices being developed to deliver the stimulation are highly flexible, capable of delivering any individual stimulus among a combinatorially large set of stimuli (Gad et al., 2013). While this extreme flexibility is very useful for ensuring that the device can deliver an appropriate stimulus, choosing good stimuli is quite challenging, even for expert human experimenters. To develop a fully implantable, autonomous device that can provide useful therapy, it is necessary to design an algorithmic method for choosing the stimulus parameters. Such a method can be used in a clinical setting by caregivers who are not experts in the neurostimulator's use, and allows the system to adapt autonomously between visits to the clinic. To create such an algorithm, this dissertation pursues the general class of active learning algorithms that includes Gaussian Process Upper Confidence Bound (GP-UCB, Srinivas et al., 2010), developing the Gaussian Process Batch Upper Confidence Bound (GP-BUCB, Desautels et al., 2012) and Gaussian Process Adaptive Upper Confidence Bound (GP-AUCB) algorithms. This dissertation develops new theoretical bounds for the performance of these and similar algorithms, empirically assesses these algorithms against a number of competitors in simulation, and applies a variant of the GP-BUCB algorithm in closed loop to control SCI therapy via epidural electrostimulation in four live rats. The algorithm was tasked with maximizing the amplitude of evoked potentials in the rats' left tibialis anterior muscle.
These experiments show that the algorithm is capable of directing these experiments sensibly, finding effective stimuli in all four animals. Further, in direct competition with an expert human experimenter, the algorithm produced superior performance in terms of average reward and comparable or superior performance in terms of maximum reward. These results indicate that variants of GP-BUCB may be suitable for autonomously directing SCI therapy.
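The core of GP-UCB-style active learning can be sketched as follows: fit a Gaussian process to the stimuli tried so far, then always try the candidate maximizing posterior mean plus a scaled posterior standard deviation. This is a minimal single-query illustration (not the dissertation's batch GP-BUCB implementation), with an assumed RBF kernel, a made-up 1-D stimulus space, and a toy reward function:

```python
import math

def rbf(x, y, ell=0.5):
    """Squared-exponential kernel (length scale chosen arbitrarily)."""
    return math.exp(-((x - y) ** 2) / (2 * ell * ell))

def solve(M, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(M)
    A = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[p] = A[p], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][c] * x[c] for c in range(r + 1, n))) / A[r][r]
    return x

def gp_posterior(X, y, xstar, noise=1e-4):
    """GP regression posterior mean and variance at one test point."""
    K = [[rbf(p, q) + (noise if i == j else 0.0)
          for j, q in enumerate(X)] for i, p in enumerate(X)]
    alpha = solve(K, y)
    ks = [rbf(x, xstar) for x in X]
    mu = sum(k * a for k, a in zip(ks, alpha))
    v = solve(K, ks)
    var = rbf(xstar, xstar) - sum(k * w for k, w in zip(ks, v))
    return mu, max(var, 0.0)

def gp_ucb(f, candidates, rounds=8, beta=4.0):
    """Sequential GP-UCB: pick argmax of mean + sqrt(beta) * std."""
    X, y = [candidates[0]], [f(candidates[0])]   # seed with one observation
    for _ in range(rounds):
        scores = []
        for x in candidates:
            mu, var = gp_posterior(X, y, x)
            scores.append(mu + math.sqrt(beta * var))
        xt = candidates[max(range(len(candidates)), key=scores.__getitem__)]
        X.append(xt)
        y.append(f(xt))
    return max(zip(y, X))                        # best observed (value, stimulus)

# Toy "evoked potential" reward peaking at stimulus 0.62
best_val, best_x = gp_ucb(lambda x: -(x - 0.62) ** 2,
                          [i / 20 for i in range(21)])
```

The variance bonus drives early exploration of untried stimuli; once uncertainty collapses, the mean term concentrates sampling near the optimum.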

Relevance: 10.00%

Abstract:

The proliferation of smartphones and other internet-enabled, sensor-equipped consumer devices enables us to sense and act upon the physical environment in unprecedented ways. This thesis considers Community Sense-and-Response (CSR) systems, a new class of web application for acting on sensory data gathered from participants' personal smart devices. The thesis describes how rare events can be reliably detected using a decentralized anomaly detection architecture that performs client-side anomaly detection and server-side event detection. After analyzing this decentralized anomaly detection approach, the thesis describes how weak but spatially structured events can be detected, despite significant noise, when the events have a sparse representation in an alternative basis. Finally, the thesis describes how the statistical models needed for client-side anomaly detection may be learned efficiently, using limited space, via coresets.

The Caltech Community Seismic Network (CSN) is a prototypical example of a CSR system that harnesses accelerometers in volunteers' smartphones and consumer electronics. Using CSN, this thesis presents the systems and algorithmic techniques to design, build and evaluate a scalable network for real-time awareness of spatial phenomena such as dangerous earthquakes.
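The decentralized split the thesis describes, cheap anomaly tests on each client and quorum-based event detection on the server, can be sketched as below. The z-score client model, the thresholds, and the simulated readings are illustrative assumptions, not CSN's actual statistical models:

```python
import random
import statistics

random.seed(7)

def client_is_anomalous(history, reading, z_thresh=4.0):
    """Client-side test: flag a reading far outside this device's own
    baseline (hypothetical z-score model; real clients would learn
    richer per-device statistics)."""
    mu = statistics.fmean(history)
    sd = statistics.pstdev(history) or 1.0
    return abs(reading - mu) / sd > z_thresh

def server_detects_event(picks, quorum=10):
    """Server-side test: declare an event only when many clients
    report anomalies in the same time window."""
    return sum(picks) >= quorum

# 100 simulated clients, each with its own 200-sample noise baseline
histories = [[random.gauss(0, 1) for _ in range(200)] for _ in range(100)]

quiet = [client_is_anomalous(h, random.gauss(0, 1)) for h in histories]    # no event
shaking = [client_is_anomalous(h, random.gauss(8, 1)) for h in histories]  # strong event

quiet_event = server_detects_event(quiet)
shake_event = server_detects_event(shaking)
```

Because each client sends only a binary pick, the server's bandwidth and the false-alarm rate are both controlled by the quorum, while per-device baselines absorb sensor heterogeneity.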

Relevance: 10.00%

Abstract:

Algorithmic DNA tiles systems are fascinating. From a theoretical perspective, they can result in simple systems that assemble themselves into beautiful, complex structures through fundamental interactions and logical rules. As an experimental technique, they provide a promising method for programmably assembling complex, precise crystals that can grow to considerable size while retaining nanoscale resolution. In the journey from theoretical abstractions to experimental demonstrations, however, lie numerous challenges and complications.

In this thesis, to examine these challenges, we consider the physical principles behind DNA tile self-assembly. We survey recent progress in experimental algorithmic self-assembly, and explain the simple physical models behind this progress. Using direct observation of individual tile attachments and detachments with an atomic force microscope, we test some of the fundamental assumptions of the widely-used kinetic Tile Assembly Model, obtaining results that fit the model to within error. We then depart from the simplest form of that model, examining the effects of DNA sticky end sequence energetics on tile system behavior. We develop theoretical models, sequence assignment algorithms, and a software package, StickyDesign, for sticky end sequence design.
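The kinetic Tile Assembly Model mentioned above reduces to two competing rates: tiles attach at r_f = k * exp(-G_mc), and a tile held by b sticky-end bonds detaches at r_b = k * exp(-b * G_se). A small sketch follows; the specific free-energy values and rate constant are illustrative, not fitted experimental numbers:

```python
import math

def ktam_rates(G_mc, G_se, bonds, k_hat=1e6):
    """kTAM rates: attachment r_f = k * exp(-G_mc); detachment of a
    tile held by `bonds` sticky ends r_b = k * exp(-bonds * G_se)."""
    r_f = k_hat * math.exp(-G_mc)
    r_b = k_hat * math.exp(-bonds * G_se)
    return r_f, r_b

# Near the usual operating point G_mc slightly below 2 * G_se, tiles
# attached by two correct bonds tend to stay, while single-bond
# (mismatched) attachments tend to fall off before being locked in.
G_se = 8.5
G_mc = 2 * G_se - 0.5        # slightly supersaturated (illustrative)
rf, rb2 = ktam_rates(G_mc, G_se, bonds=2)
_, rb1 = ktam_rates(G_mc, G_se, bonds=1)
```

This rate separation (rf above rb2 but below rb1) is what lets algorithmic crystals proofread: errors held by one bond detach faster than they are frozen in by further growth.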

As a demonstration of a specific tile system, we design a binary counting ribbon that can accurately count from a programmable starting value and stop growing after overflowing, resulting in a single system that can construct ribbons of precise and programmable length. In the process of designing the system, we explain numerous considerations that provide insight into more general tile system design, particularly with regards to tile concentrations, facet nucleation, the construction of finite assemblies, and design beyond the abstract Tile Assembly Model.

Finally, we present our crystals that count: experimental results with our binary counting system that represent a significant improvement in the accuracy of experimental algorithmic self-assembly, including crystals that count perfectly with 5 bits from 0 to 31. We show some preliminary experimental results on the construction of our capping system to stop growth after counters overflow, and offer some speculation on potential future directions of the field.

Relevance: 10.00%

Abstract:

This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on different portions of the domain boundary. The theoretical basis of the methods for the Zaremba problems on smooth domains is detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on the use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules that give rise to high-order convergence even around singular points of the Zaremba problem. The resulting algorithms enjoy high-order convergence and can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems, and, in particular, eigenfunction expansions for time-domain problems in non-separable physical domains with mixed boundary conditions.

Relevance: 10.00%

Abstract:

This report is an introduction to the concept of treewidth, a property of graphs that has important implications in algorithms. Some basic concepts of graph theory are presented in the first chapter for readers who are not familiar with the notation. Chapter 2 gives the definition of treewidth and several different ways of characterizing it. The last two chapters focus on the algorithmic implications of treewidth, which are very relevant in computer science. An algorithm to compute the treewidth of a graph is presented, and its result can later be applied to many other problems in graph theory, such as those introduced in the last chapter.
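For intuition about what such an algorithm computes, treewidth can be found exactly on tiny graphs by brute force over elimination orderings, using the standard fact that tw(G) is the minimum, over all orderings, of the largest neighbour set met while eliminating vertices in turn. This exponential sketch is purely illustrative and far from the practical algorithms a report like this would present:

```python
from itertools import permutations

def treewidth(graph):
    """Exact treewidth via elimination orderings (exponential time,
    tiny graphs only): eliminating v connects its remaining
    neighbours into a clique; the width of an ordering is the largest
    neighbour set encountered."""
    best = len(graph) - 1
    for order in permutations(graph):
        adj = {v: set(ns) for v, ns in graph.items()}
        width = 0
        for v in order:
            nbrs = adj.pop(v)
            width = max(width, len(nbrs))
            if width >= best:          # prune: cannot beat current best
                break
            for u in nbrs:             # fill in: neighbours become a clique
                adj[u].discard(v)
                adj[u] |= nbrs - {u}
        best = min(best, width)
    return best

cycle4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # treewidth 2
path4 = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}         # a tree: treewidth 1
```

Trees have treewidth 1 and cycles treewidth 2, which the brute force confirms on these four-vertex examples.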

Relevance: 10.00%

Abstract:

We consider the quantified constraint satisfaction problem (QCSP), which is to decide, given a structure and a first-order sentence (not assumed here to be in prenex form) built from conjunction and quantification, whether or not the sentence is true on the structure. We present a proof system for certifying the falsity of QCSP instances and develop its basic theory; for instance, we provide an algorithmic interpretation of its behavior. Our proof system places the established Q-resolution proof system in a broader context, and also allows us to derive QCSP tractability results.
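The problem statement can be made concrete with a direct recursive evaluator for conjunction/exists/forall sentences on a finite structure; this brute-force evaluation is exponential in the quantifier depth, which is exactly why proof systems and tractability results matter. The tuple encoding of formulas here is an arbitrary choice for the sketch:

```python
def holds(structure, formula, env=None):
    """Evaluate a sentence built from conjunction and quantification
    on a finite structure (the QCSP, by brute-force recursion)."""
    env = env or {}
    dom, rels = structure
    op = formula[0]
    if op == "rel":                       # atomic: tuple membership
        _, name, args = formula
        return tuple(env[v] for v in args) in rels[name]
    if op == "and":
        return all(holds(structure, f, env) for f in formula[1:])
    _, var, body = formula                # "exists" or "forall"
    results = (holds(structure, body, {**env, var: d}) for d in dom)
    return any(results) if op == "exists" else all(results)

# Structure: domain {0, 1} with the order relation leq
S = ({0, 1}, {"leq": {(0, 0), (0, 1), (1, 1)}})

# forall x exists y. x <= y   -- true (take y = x)
f1 = ("forall", "x", ("exists", "y", ("rel", "leq", ("x", "y"))))
# exists y forall x. x <= y   -- true here too (y = 1 is a top element)
f2 = ("exists", "y", ("forall", "x", ("rel", "leq", ("x", "y"))))
# forall x forall y. x <= y   -- false, since (1, 0) is not in leq
f3 = ("forall", "x", ("forall", "y", ("rel", "leq", ("x", "y"))))
```

Note that the formulas need not be prenex: quantifiers and conjunctions can nest arbitrarily, matching the setting of the abstract.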

Relevance: 10.00%

Abstract:

Sex and age estimation are essential for the individual identification of human remains and for understanding and interpreting the biological variability of past populations. The aim of this work is to develop a methodological protocol of morphological and morphometric analysis for estimating sex in adults and age in subadults and adults. The main methodologies included in this work can be applied to skeletal remains of European origin. A bibliographic review of forensic and anthropological journals was carried out in order to assemble a set of methodologies applicable to both complete and partial skeletons. Finally, two criteria were taken into account in choosing the methods included in this protocol: goodness of fit and the degree of complexity of the method.

Relevance: 10.00%

Abstract:

An approach to reconfiguring control systems in the event of major failures is advocated. The approach relies on the convergence of several currently emerging technologies: constrained predictive control, high-fidelity modelling of complex systems, fault detection and identification, and model approximation and simplification. Much work, both theoretical and algorithmic, is needed to make this approach practical, but we believe that there is enough evidence, especially from existing industrial practice, for the scheme to be considered realistic. After outlining the problem and the proposed solution, the paper briefly reviews constrained predictive control and object-oriented modelling, which are the essential ingredients for practical implementation. The prospects for automatic model simplification are also reviewed briefly. The paper emphasizes some emerging trends in industrial practice, especially as regards the modelling and control of complex systems. Examples from process control and flight control are used to illustrate some of the ideas.
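Constrained predictive control, the first ingredient above, can be sketched with a toy scalar plant: enumerate admissible input sequences over a short horizon, pick the cheapest, apply only the first input, and repeat (receding horizon). The plant x+ = a*x + b*u, the cost weights, and the quantized input grid are all invented for illustration:

```python
from itertools import product

def mpc_step(x, a=1.2, b=1.0, horizon=3, r=0.1, u_max=1.0, levels=9):
    """One step of constrained predictive control for the hypothetical
    scalar plant x+ = a*x + b*u with |u| <= u_max: enumerate quantized
    input sequences over the horizon, minimize sum of x^2 + r*u^2,
    and return only the first move of the best sequence."""
    grid = [u_max * (2 * i / (levels - 1) - 1) for i in range(levels)]
    best_u, best_cost = 0.0, float("inf")
    for seq in product(grid, repeat=horizon):
        xk, cost = x, 0.0
        for u in seq:
            xk = a * xk + b * u
            cost += xk * xk + r * u * u
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Closed loop: the open-loop-unstable plant (a = 1.2) is driven toward
# the origin while every applied input respects the constraint.
x = 2.0
for _ in range(12):
    x = 1.2 * x + 1.0 * mpc_step(x)
```

Real predictive controllers solve this optimization with structured (often quadratic-programming) solvers rather than enumeration, but the receding-horizon logic is the same.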

Relevance: 10.00%

Abstract:

Current research into the process of engineering design is extending the use of computers towards the acquisition, representation and application of design process knowledge, in addition to the existing storage and manipulation of product-based models of design objects. This is a difficult task because the design of mechanical systems is a complex, often unpredictable process involving ill-structured problem-solving skills and large amounts of knowledge, some of which may be incomplete and subjective. Design problems require the integration of a variety of modes of working, such as numerical, graphical, algorithmic or heuristic, and produce designs through synthesis, analysis and evaluation activities.

This report presents the results of a feasibility study into the blackboard approach and discusses the development of an initial prototype system that will enable an alphanumeric design dialogue between a designer and an expert to be analysed in a formal way, thus providing real-life protocol data on which to base the blackboard message structures.
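A blackboard system of the kind studied here can be sketched minimally: a shared message store plus knowledge sources that scan one abstraction level and post to another. The message fields and the single knowledge source below are hypothetical, not the report's actual message structures:

```python
class Blackboard:
    """Minimal blackboard: a shared store of messages that knowledge
    sources read from and post to."""

    def __init__(self):
        self.messages = []

    def post(self, source, level, content):
        self.messages.append({"source": source, "level": level,
                              "content": content})

    def on_level(self, level):
        return [m for m in self.messages if m["level"] == level]

def requirement_analyser(bb):
    """Knowledge source: promote raw dialogue lines that state an
    obligation to the 'requirement' level (toy heuristic)."""
    for m in bb.on_level("dialogue"):
        if "must" in m["content"]:
            bb.post("requirement_analyser", "requirement", m["content"])

# A fragment of designer/expert dialogue posted at the lowest level
bb = Blackboard()
bb.post("designer", "dialogue", "the shaft must transmit 5 kW")
bb.post("expert", "dialogue", "consider a keyed coupling")
requirement_analyser(bb)
reqs = bb.on_level("requirement")
```

In a full system, many such knowledge sources would opportunistically fire as the blackboard contents change; protocol data from recorded design dialogues would inform both the levels and the message formats.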

Relevance: 10.00%

Abstract:

Forecasting the returns of assets at high frequency is the key challenge for high-frequency algorithmic trading strategies. In this paper, we propose a jump-diffusion model for asset price movements that models price and its trend and allows a momentum strategy to be developed. Conditional on jump times, we derive closed-form transition densities for this model. We show how this allows us to extract a trend from high-frequency finance data by using a Rao-Blackwellized variable rate particle filter to filter incoming price data. Our results show that even in the presence of transaction costs our algorithm can achieve a Sharpe ratio above 1 when applied across a portfolio of 75 futures contracts at high frequency. © 2011 IEEE.
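As a simplified stand-in for the paper's Rao-Blackwellized variable rate particle filter, a local-linear-trend Kalman filter illustrates the underlying idea: jointly filter a price level and its trend from incoming observations, and read the momentum signal off the trend estimate. All noise variances and the synthetic price path below are illustrative assumptions:

```python
def kalman_trend_filter(prices, q_level=1e-4, q_trend=1e-3, r=1e-2, dt=1.0):
    """Local-linear-trend Kalman filter: state = (level, trend) with
    level += trend * dt each step; observe a noisy level; return the
    filtered trend path."""
    x = [prices[0], 0.0]                       # level, trend
    P = [[1.0, 0.0], [0.0, 1.0]]
    trends = []
    for z in prices:
        # predict: F = [[1, dt], [0, 1]], P <- F P F^T + Q
        x = [x[0] + dt * x[1], x[1]]
        P = [[P[0][0] + dt * (P[1][0] + P[0][1]) + dt * dt * P[1][1] + q_level,
              P[0][1] + dt * P[1][1]],
             [P[1][0] + dt * P[1][1],
              P[1][1] + q_trend]]
        # update with observation z of the level (H = [1, 0])
        S = P[0][0] + r
        K = [P[0][0] / S, P[1][0] / S]
        innov = z - x[0]
        x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
        P = [[(1 - K[0]) * P[0][0], (1 - K[0]) * P[0][1]],
             [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
        trends.append(x[1])
    return trends

# Rising then falling synthetic prices: the trend estimate changes sign
up_down = ([100 + 0.5 * t for t in range(40)] +
           [120 - 0.5 * t for t in range(40)])
trends = kalman_trend_filter(up_down)
```

A momentum strategy of the kind described would go long while the filtered trend is positive and short while it is negative; the paper's particle filter additionally handles jumps and irregular observation times, which this linear-Gaussian sketch does not.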

Relevance: 10.00%

Abstract:

While a large amount of research over the past two decades has focused on discrete abstractions of infinite-state dynamical systems, many structural and algorithmic details of these abstractions remain unknown. To clarify the computational resources needed to perform discrete abstraction, this paper examines the algorithmic properties of an existing method for deriving finite-state systems that are bisimilar to linear discrete-time control systems. We explicitly find the structure of the finite-state system, show that it can be enormous compared to the original linear system, and give conditions that guarantee the finite-state system is reasonably sized and efficiently computable. Though constructing the finite-state system is generally impractical, we see that special cases could be amenable to satisfiability-based verification techniques. ©2009 IEEE.