902 results for "Computational time"


Relevance: 30.00%

Abstract:

While a large amount of research over the past two decades has focused on discrete abstractions of infinite-state dynamical systems, many structural and algorithmic details of these abstractions remain unknown. To clarify the computational resources needed to perform discrete abstractions, this paper examines the algorithmic properties of an existing method for deriving finite-state systems that are bisimilar to linear discrete-time control systems. We explicitly find the structure of the finite-state system, show that it can be enormous compared to the original linear system, and give conditions that guarantee the finite-state system is reasonably sized and efficiently computable. Though constructing the finite-state system is generally impractical, we see that special cases could be amenable to satisfiability-based verification techniques. ©2009 IEEE.
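The blow-up the abstract refers to can be illustrated with a back-of-envelope sketch (this is not the paper's construction): a uniform grid partition of a box-shaped state set has a number of cells exponential in the state dimension, so a grid-based finite abstraction inherits that growth.

```python
# Hypothetical illustration: a uniform grid over [lo, hi]^n_dims with cell
# width eta has (range/eta)^n_dims cells, i.e. the candidate finite-state
# system grows exponentially with the dimension of the linear system.

def grid_state_count(n_dims, eta, lo=-1.0, hi=1.0):
    """Number of cells in a uniform grid partition of [lo, hi]^n_dims."""
    cells_per_axis = round((hi - lo) / eta)
    return cells_per_axis ** n_dims

for n in (1, 2, 5, 10):
    print(n, grid_state_count(n, eta=0.1))  # grows exponentially with n
```

Even a modest resolution (20 cells per axis here) yields roughly 10^13 discrete states in dimension 10, which is why conditions guaranteeing a reasonably sized abstraction matter.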

Relevance: 30.00%

Abstract:

This paper considers a group of agents that aim to reach agreement on individually received time-varying signals by local communication. In contrast to the static network averaging problem, the consensus considered in this paper is reached in a dynamic sense. A discrete-time dynamic average consensus protocol can be designed to allow all agents to track the average of their reference inputs asymptotically. We propose a minimal-time dynamic consensus algorithm, which utilises only a minimal number of local observations of a randomly picked node in a network to compute the final consensus signal. Our results illustrate that, with memory and computational ability, the running time of distributed averaging algorithms can indeed be improved dramatically, as suggested by Olshevsky and Tsitsiklis. © 2012 AACC (American Automatic Control Council).
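A minimal sketch of a standard discrete-time dynamic average consensus protocol (not the paper's minimal-time algorithm) helps make the setting concrete: each agent mixes its state with its neighbours' through doubly stochastic weights and adds the increment of its own reference input. The ring graph and weight values below are illustrative assumptions.

```python
import numpy as np

def dynamic_consensus(R, W, steps):
    """R: (steps+1, n) reference inputs; W: doubly stochastic weight matrix."""
    x = R[0].copy()                      # initialise at the inputs
    for k in range(steps):
        x = W @ x + (R[k + 1] - R[k])    # consensus term + input increment
    return x

n = 4
# Doubly stochastic weights for a 4-node ring: self-weight 0.5, neighbours 0.25.
W = np.array([[0.5 if i == j else 0.25 if abs(i - j) in (1, n - 1) else 0.0
               for j in range(n)] for i in range(n)])
steps = 200
# Constant reference inputs (a special case): the protocol reduces to static
# averaging and every agent converges to the input average, here 3.0.
R = np.tile(np.array([1.0, 2.0, 3.0, 6.0]), (steps + 1, 1))
x = dynamic_consensus(R, W, steps)
print(x)  # each entry close to the average 3.0
```

With time-varying inputs the same update tracks the running average asymptotically; the paper's contribution is reaching the exact answer in minimal time from one node's local observations.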

Relevance: 30.00%

Abstract:

Human listeners can identify vowels regardless of speaker size, although the sound waves for an adult and a child speaking the 'same' vowel differ enormously. The differences are mainly due to differences in vocal tract length (VTL) and glottal pulse rate (GPR), which are both related to body size. Automatic speech recognition machines are notoriously bad at understanding children if they have been trained on the speech of an adult. In this paper, we propose that the auditory system adapts its analysis of speech sounds dynamically and automatically to the GPR and VTL of the speaker on a syllable-to-syllable basis. We illustrate how this rapid adaptation might be performed with the aid of a computational version of the auditory image model, and we propose that an auditory preprocessor of this form would improve the robustness of speech recognisers.

Relevance: 30.00%

Abstract:

This paper presents an efficient algorithm for robust network reconstruction of Linear Time-Invariant (LTI) systems in the presence of noise, estimation errors and unmodelled nonlinearities. The method builds on previous work [1] on robust reconstruction to provide a practical implementation with polynomial computational complexity. Following the same experimental protocol, the algorithm obtains a set of structurally related candidate solutions spanning every level of sparsity. We prove the existence of a magnitude bound on the noise which, if satisfied, guarantees that one of these structures is the correct solution. A problem-specific model-selection procedure then selects a single solution from this set and provides a measure of confidence in that solution. Extensive simulations quantify the expected performance for different levels of noise and show that significantly more noise can be tolerated in comparison to the original method. © 2012 IEEE.
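The idea of "one candidate structure per sparsity level" can be sketched with a toy stand-in for the reconstruction step (this is not the authors' algorithm): fit coefficients by least squares, then for each k keep the k largest-magnitude entries as the candidate support. The data-generating model below is an assumption for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5
true_w = np.array([1.5, 0.0, -2.0, 0.0, 0.0])     # true sparse structure
X = rng.standard_normal((n, p))
y = X @ true_w + 0.01 * rng.standard_normal(n)     # low-noise observations

# Unregularised estimate, then one candidate support per sparsity level k.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
order = np.argsort(-np.abs(w_hat))                 # indices by |coefficient|
candidates = []
for k in range(1, p + 1):
    support = np.zeros(p, dtype=bool)
    support[order[:k]] = True
    candidates.append(support)

print(np.flatnonzero(candidates[1]))  # indices {0, 2}: the true support at k = 2
```

A model-selection step would then pick one element of `candidates`; the paper's noise bound is what guarantees the correct structure is in the set at all.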

Relevance: 30.00%

Abstract:

The accurate prediction of time-changing covariances is an important problem in the modeling of multivariate financial data. However, some of the most popular models suffer from (a) overfitting problems and multiple local optima, (b) failure to capture shifts in market conditions and (c) large computational costs. To address these problems we introduce a novel dynamic model for time-changing covariances. Overfitting and local optima are avoided by following a Bayesian approach instead of computing point estimates. Changes in market conditions are captured by assuming a diffusion process in parameter values, and, finally, computationally efficient and scalable inference is performed using particle filters. Experiments with financial data show excellent performance of the proposed method with respect to current standard models.
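A minimal bootstrap particle filter on a toy one-dimensional stochastic-volatility model (a random-walk log-variance driving Gaussian returns) gives the flavour of the inference machinery; the model, step sizes and particle count below are assumptions for illustration, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
T, N = 200, 2000                      # time steps, particles
true_h = np.cumsum(0.2 * rng.standard_normal(T))   # latent log-variance path
y = np.exp(true_h / 2) * rng.standard_normal(T)    # observed returns

particles = np.zeros(N)
est = np.empty(T)
for t in range(T):
    particles = particles + 0.2 * rng.standard_normal(N)   # diffuse the state
    # Gaussian log-likelihood of y[t] given each particle's variance exp(h),
    # up to a constant: -0.5 * (h + y^2 * exp(-h)).
    logw = -0.5 * (particles + y[t] ** 2 * np.exp(-particles))
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = w @ particles                                  # posterior mean
    particles = particles[rng.choice(N, size=N, p=w)]       # resample

print(np.corrcoef(est, true_h)[0, 1])  # filtered estimate tracks the true path
```

Each time step costs O(N), which is the "computationally efficient and scalable" property the abstract contrasts with batch point estimation.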

Relevance: 30.00%

Abstract:

In this study, the effect that cooling time prior to reprocessing spent LWR fuel has on the reactor physics characteristics of a PWR fully loaded with homogeneously mixed U-Pu or U-TRU oxide (MOX) fuel is examined. A reactor physics analysis was completed using the CASMO-4E code. A void reactivity feedback coefficient analysis was also completed for an infinite lattice of fresh fuel assemblies. Some useful conclusions can be made regarding the effect that cooling time prior to reprocessing spent LWR fuel has on a closed homogeneous MOX fuel cycle. The computational analysis shows that it is more neutronically efficient to reprocess cooled spent fuel into homogeneous MOX fuel rods earlier rather than later, as the fissile fuel content decreases with time. Also, the number of spent fuel rods needed to fabricate one MOX fuel rod increases as cooling time increases. In the case of TRU MOX fuel, with time there is an economic tradeoff between fuel handling difficulty and higher throughput of fuel to be reprocessed. The void coefficient analysis shows that the void coefficient becomes progressively more restrictive on fuel Pu content with increasing spent fuel cooling time before reprocessing.

Relevance: 30.00%

Abstract:

At present, optimisation is an enabling technology in innovation. Multi-objective and multi-disciplinary design tools are essential in the engineering design process, and have been applied successfully and extensively in aerospace and turbomachinery applications. These approaches give insight into the design space and identify the trade-offs between competing performance measures while satisfying a number of constraints at the same time. It is anticipated here that the same benefits can be obtained for the design of micro-scale combustors. In this paper, a multi-disciplinary automated design optimisation system was developed for this purpose, comprising a commercial computational fluid dynamics package and a multi-objective variant of the Tabu Search optimisation algorithm. The main objectives of this study are to optimise the main micro-scale combustor design characteristics and to satisfy manufacturability considerations from the very beginning of the whole design operation. Hydrogen-air combustion as well as 14 geometrical and 2 operational parameters are used to describe and model the design problem. Two illustrative test cases are presented, in which the most important device operational requirements are optimised and the efficiency of the developed optimisation system is demonstrated. The identification, assessment and suitability of the optimum design configurations are discussed in detail. Copyright © 2012 by ASME.
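The trade-off identification the abstract mentions reduces, at its core, to extracting the set of non-dominated (Pareto-optimal) designs. The sketch below shows that generic filtering step only (not the paper's Tabu Search implementation); the objective pairs are hypothetical and both objectives are assumed to be minimised.

```python
def pareto_front(points):
    """Return the points not dominated by any other point (minimisation).

    A point q dominates p here if q is no worse in both objectives and is a
    distinct point; duplicates are not handled, which is fine for this sketch.
    """
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical (pressure loss, fuel consumption) pairs for candidate designs.
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
print(pareto_front(designs))  # [(1.0, 5.0), (2.0, 3.0), (4.0, 1.0)]
```

Design (3.0, 4.0) is dropped because (2.0, 3.0) beats it on both objectives; the surviving set is the trade-off curve a designer inspects.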

Relevance: 30.00%

Abstract:

A multi-objective design optimisation study has been carried out with the objectives of improving the overall efficiency of the device and reducing the fuel consumption of the proposed micro-scale combustor design configuration. In a previous study we identified the topology of the combustion chamber that produced improved behaviour of the device in terms of the above design criteria. We now extend our design approach and propose a new configuration with the addition of a micro-cooling channel that improves the thermal behaviour of the design, as previously suggested in the literature. Our initial numerical results revealed an improvement of 2.6% in the combustion efficiency when we applied the micro-cooling channel to an optimum design configuration identified in our earlier multi-objective optimisation study, under the same operating conditions. The computational modelling of the combustion process is implemented in the commercial computational fluid dynamics package ANSYS-CFX using Finite Rate Chemistry and a single-step hydrogen-air reaction. With this model we try to balance good accuracy of the combustion solution against practicality within the context of an optimisation process. The whole design system also comprises the ANSYS-ICEM CFD package for automatic geometry and mesh generation and the Multi-Objective Tabu Search algorithm for design space exploration. We model the design problem with 5 geometrical parameters and 3 operational parameters subject to 5 design constraints that secure the practicality and feasibility of the new optimum design configurations. The final results demonstrate the reliability and efficiency of the developed computational design system and, most importantly, we assess the practicality and manufacturability of the revealed optimum design configurations of micro-combustor devices. Copyright © 2013 by ASME.

Relevance: 30.00%

Abstract:

Starting from the nonhydrostatic Boussinesq approximation equations, a general method is introduced to deduce the dispersion relationships. A comparative investigation is performed on inertia-gravity waves with horizontal lengths of 100, 10 and 1 km. These are examined using the second-order central difference scheme and the fourth-order compact difference scheme on the vertical grids currently available, from the perspectives of frequency and the horizontal and vertical components of group velocity. These findings are compared to analytical solutions. The obtained results suggest that, whether for the second-order central difference scheme or the fourth-order compact difference scheme, the Charney-Phillips (CP) and Lorenz (L) grids are suitable for studying waves at the above-mentioned horizontal scales; the Lorenz time-staggered and Charney-Phillips time-staggered (CPTS) grids are applicable only to horizontal scales of less than 10 km, and the N grid (unstaggered grid) is unsuitable for simulating waves at any horizontal scale. Furthermore, using the fourth-order compact difference scheme with its higher difference precision does not necessarily decrease the errors of frequency and group velocity in the horizontal and vertical directions produced on the vertical grids in describing waves with horizontal lengths of 1, 10 and 100 km. So, in developing a numerical model, a higher-order finite difference scheme such as the fourth-order compact difference scheme should be avoided as much as possible, particularly on the L and CPTS grids, since it not only takes considerable effort to program but can also make the calculated group velocity in the horizontal and vertical directions even less accurate.
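The kind of single-wavenumber dispersion comparison behind such results can be sketched via modified wavenumbers (a standard textbook analysis, not this study's full vertical-grid calculation): the second-order central difference represents a wave of wavenumber k as sin(kh)/h, while the classical fourth-order compact (Padé) scheme gives 3 sin(kh)/(h(2 + cos(kh))).

```python
import math

def k_central2(k, h):
    """Modified wavenumber of the second-order central difference."""
    return math.sin(k * h) / h

def k_compact4(k, h):
    """Modified wavenumber of the standard fourth-order compact scheme."""
    return 3.0 * math.sin(k * h) / (h * (2.0 + math.cos(k * h)))

h, k = 1.0, 0.5
err2 = abs(k_central2(k, h) - k)
err4 = abs(k_compact4(k, h) - k)
print(err2, err4)  # the compact scheme is far more accurate at this k*h
```

This single-variable picture always favours the compact scheme; the study's point is that on staggered vertical grids, where variables sit at different levels, the higher formal order does not automatically translate into better frequency and group-velocity accuracy.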

Relevance: 30.00%

Abstract:

This paper explores the relationships between a computational theory of temporal representation (as developed by James Allen) and a formal linguistic theory of tense (as developed by Norbert Hornstein) and aspect. It aims to provide explicit answers to four fundamental questions: (1) what is the computational justification for the primitives of a linguistic theory; (2) what is the computational explanation of the formal grammatical constraints; (3) what are the processing constraints imposed on the learnability and markedness of these theoretical constructs; and (4) what are the constraints that a linguistic theory imposes on representations. We show that one can effectively exploit the interface between the language faculty and the cognitive faculties by using linguistic constraints to determine restrictions on the cognitive representation and vice versa. Three main results are obtained: (1) we derive an explanation of an observed grammatical constraint on tense (the Linear Order Constraint) from the information monotonicity property of the constraint propagation algorithm of Allen's temporal system; (2) we formulate a principle of markedness for the basic tense structures based on the computational efficiency of the temporal representations; and (3) we show that Allen's interval-based temporal system is not arbitrary, but can be used to explain independently motivated linguistic constraints on tense and aspect interpretations. We also claim that the methodology of research developed in this study, a "cross-level" investigation of independently motivated formal grammatical theory and computational models, is a powerful paradigm with which to attack representational problems in basic cognitive domains, e.g., space, time, causality, etc.
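For readers unfamiliar with Allen's system: it classifies the relation between two time intervals into one of 13 primitive relations (before, meets, overlaps, during, equal, and their inverses, among others). The toy sketch below classifies a few of these from concrete endpoints; it is an illustration of the primitives only, not Allen's constraint propagation algorithm.

```python
# A toy classifier for a subset of Allen's 13 interval relations.
def allen_relation(a, b):
    """a, b: (start, end) tuples with start < end."""
    if a[1] < b[0]:
        return "before"
    if a[1] == b[0]:
        return "meets"
    if a[0] == b[0] and a[1] == b[1]:
        return "equal"
    if a[0] > b[0] and a[1] < b[1]:
        return "during"
    if a[0] < b[0] < a[1] < b[1]:
        return "overlaps"
    return "other"   # remaining relations omitted in this sketch

print(allen_relation((1, 3), (3, 5)))  # meets
print(allen_relation((2, 4), (1, 6)))  # during
```

Allen's algorithm works in the other direction: given partial relation constraints between events rather than endpoints, it propagates them to narrow the possible relations, and it is the monotonicity of that propagation that the paper connects to the Linear Order Constraint on tense.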

Relevance: 30.00%

Abstract:

The primary goal of this report is to demonstrate how considerations from computational complexity theory can inform grammatical theorizing. To this end, generalized phrase structure grammar (GPSG) linguistic theory is revised so that its power more closely matches the limited ability of an ideal speaker-hearer: GPSG Recognition is EXP-POLY time hard, while Revised GPSG Recognition is NP-complete. A second goal is to provide a theoretical framework within which to better understand the wide range of existing GPSG models, embodied in formal definitions as well as in implemented computer programs. A grammar for English and an informal explanation of the GPSG/RGPSG syntactic features are included in appendices.

Relevance: 30.00%

Abstract:

Load balancing is often used to ensure that nodes in a distributed system are equally loaded. In this paper, we show that for real-time systems, load balancing is not desirable. In particular, we propose a new load-profiling strategy that allows the nodes of a distributed system to be unequally loaded. Using load profiling, the system attempts to distribute the load amongst its nodes so as to maximize the chances of finding a node that would satisfy the computational needs of incoming real-time tasks. To that end, we describe and evaluate a distributed load-profiling protocol for dynamically scheduling time-constrained tasks in a loosely-coupled distributed environment. When a task is submitted to a node, the scheduling software tries to schedule the task locally so as to meet its deadline. If that is not feasible, it tries to locate another node where this could be done with a high probability of success, while attempting to maintain an overall load profile for the system. Nodes in the system inform each other about their state using a combination of multicasting and gossiping. The performance of the proposed protocol is evaluated via simulation, and is contrasted to other dynamic scheduling protocols for real-time distributed systems. Based on our findings, we argue that keeping a diverse availability profile and using passive bidding (through gossiping) are both advantageous to distributed scheduling for real-time systems.
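The local admission decision at the heart of such a protocol can be sketched very simply (the actual protocol is richer, and FIFO service is an assumption made here for illustration): a node accepts a time-constrained task only if its queued work plus the new task's work fits before the deadline.

```python
def can_schedule_locally(queued_work, task_work, deadline, now=0.0):
    """All quantities in the same time units; FIFO service is assumed."""
    finish_time = now + queued_work + task_work
    return finish_time <= deadline

# A lightly loaded node meets the deadline; an equally "balanced" but busier
# node does not -- which is why load profiling deliberately keeps some nodes
# under-loaded instead of equalising load everywhere.
print(can_schedule_locally(queued_work=2.0, task_work=3.0, deadline=6.0))  # True
print(can_schedule_locally(queued_work=4.0, task_work=3.0, deadline=6.0))  # False
```

Under pure load balancing every node ends up near the second case, so no node can admit a tight-deadline task; a diverse availability profile keeps the first case available somewhere in the system.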

Relevance: 30.00%

Abstract:

Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TF) and nucleosomes, and the genome. Different high-throughput techniques have been invented to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often only provides partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve a holistic inference on the protein-DNA interaction landscape.

We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.
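A schematic stand-in for the optimisation step described above (the dissertation's thermodynamic model is far richer): drive a single parameter by random-walk proposals toward a higher correlation-based objective between a simple predicted curve and observed data. The one-parameter sine model, step size and greedy acceptance rule are all simplifying assumptions, the last being a simplification of full Markov chain Monte Carlo sampling.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 50)
observed = np.sin(4.0 * x) + 0.05 * rng.standard_normal(50)  # synthetic "data"

def objective(theta):
    """Correlation between the predicted curve sin(theta * x) and the data."""
    return np.corrcoef(np.sin(theta * x), observed)[0, 1]

theta = 1.0                       # deliberately poor initial guess
init = objective(theta)
best = init
for _ in range(2000):
    proposal = theta + 0.1 * rng.standard_normal()   # random-walk proposal
    score = objective(proposal)
    if score > best:              # greedy acceptance (MCMC would sometimes
        theta, best = proposal, score   # accept downhill moves as well)

print(theta, best)
```

In the dissertation the "parameter" is a whole protein-DNA binding configuration and the objective is correlation with MNase-seq coverage, but the propose-score-accept loop has the same shape.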

We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.

Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). Through incrementally increasing the types of genomic data in the SSM, we show that different data types can contribute complementary information for the inference of protein binding landscape and that the most accurate inference comes from modeling all available datasets.

This dissertation provides a foundation for future research by taking a step toward the genome-wide inference of protein-DNA interaction landscape through data integration.

Relevance: 30.00%

Abstract:

The manufacture of materials products involves the control of a range of interacting physical phenomena. The material to be used is synthesised and then manipulated into some component form. The structure and properties of the final component are influenced by both interactions of continuum-scale phenomena and those at an atomistic-scale level. Moreover, during the processing phase there are some properties that cannot be measured (typically the liquid-solid phase change). However, it seems there is a potential to derive properties and other features from atomistic-scale simulations that are of key importance at the continuum scale. Some of the issues that need to be resolved in this context focus upon computational techniques and software tools facilitating: (i) the multiphysics modeling at continuum scale; (ii) the interaction and appropriate degrees of coupling between the atomistic through microstructure to continuum scale; and (iii) the exploitation of high-performance parallel computing power delivering simulation results in a practical time period. This paper discusses some of the attempts to address each of the above issues, particularly in the context of materials processing for manufacture.