906 results for computational cost


Abstract:

Recognizing standard computational structures (cliches) in a program can help an experienced programmer understand the program. We develop a graph parsing approach to automating program recognition in which programs and cliches are represented in an attributed graph grammar formalism and recognition is achieved by graph parsing. In studying this approach, we evaluate our representation's ability to suppress many common forms of variation which hinder recognition. We investigate the expressiveness of our graph grammar formalism for capturing programming cliches. We empirically and analytically study the computational cost of our recognition approach with respect to two medium-sized, real-world simulator programs.
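
As a rough illustration of the recognition idea, and not the authors' graph-grammar parser (which handles far more variation), the sketch below encodes a program fragment and a running-total cliche as attributed directed graphs and recognizes the cliche by plain subgraph matching; networkx, the node names and the 'op' attribute scheme are assumptions made for the example.

```python
# Illustrative sketch: recognizing a "running total" cliche as an attributed
# subgraph match. The paper uses graph-grammar *parsing*, which is more
# general; plain subgraph isomorphism conveys the flavor of the idea.
import networkx as nx
from networkx.algorithms import isomorphism

# Dataflow graph of a candidate program fragment: nodes carry an 'op' attribute.
program = nx.DiGraph()
program.add_node("init", op="const-0")
program.add_node("read", op="input")
program.add_node("acc", op="add")
program.add_node("out", op="output")
program.add_edge("init", "acc")
program.add_edge("read", "acc")
program.add_edge("acc", "out")

# Cliche pattern: an accumulator fed by a zero constant and a stream of inputs.
cliche = nx.DiGraph()
cliche.add_node("z", op="const-0")
cliche.add_node("x", op="input")
cliche.add_node("sum", op="add")
cliche.add_edge("z", "sum")
cliche.add_edge("x", "sum")

matcher = isomorphism.DiGraphMatcher(
    program, cliche, node_match=isomorphism.categorical_node_match("op", None)
)
print("running-total cliche recognized:", matcher.subgraph_is_isomorphic())
```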

Abstract:

Essery, R. L. H., & Pomeroy, J. W. (2004). Vegetation and topographic control of wind-blown snow distributions in distributed and aggregated simulations. Journal of Hydrometeorology, 5, 735-744.

Abstract:

In this paper, we study the efficacy of genetic algorithms in the context of combinatorial optimization. In particular, we isolate the effects of cross-over, treated as the central component of genetic search. We show that for problems of nontrivial size and difficulty, the contribution of cross-over search is marginal, both when run synergistically in conjunction with mutation and selection and when run with selection alone, the reference point being the search procedure consisting of just mutation and selection. The latter can be viewed as another manifestation of the Metropolis process. Considering the high computational cost of maintaining a population to facilitate cross-over search, its marginal benefit renders genetic search inferior to its singleton-population counterpart, the Metropolis process, and by extension, simulated annealing. This is further compounded by the fact that many problems arising in practice may inherently require a large number of state transitions for a near-optimal solution to be found, making genetic search infeasible given the high cost of computing a single iteration in the enlarged state-space.
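
To make the comparison concrete, here is a toy Python experiment (our own construction, not the paper's benchmark suite) pitting a crossover GA against the mutation-plus-selection baseline on a small subset-sum style objective; the population size, mutation rate and objective are illustrative assumptions.

```python
# Toy comparison: crossover GA vs. a mutation+selection "Metropolis-like"
# search. Numbers and objective are illustrative assumptions only.
import random

random.seed(1)
N = 60
weights = [random.uniform(-1.0, 1.0) for _ in range(N)]
target = 0.0

def fitness(bits):
    # Closer to the target sum is better (negated absolute error).
    return -abs(sum(w for w, b in zip(weights, bits) if b) - target)

def mutate(bits, rate=1.0 / N):
    return [b ^ (random.random() < rate) for b in bits]

def crossover(a, b):
    cut = random.randrange(1, N)
    return a[:cut] + b[cut:]

def ga(pop_size=40, gens=500, use_crossover=True):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            child = crossover(a, b) if use_crossover else list(a)
            children.append(mutate(child))
        pop = children
    return max(fitness(ind) for ind in pop)

print("with crossover    :", ga(use_crossover=True))
print("mutation+selection:", ga(use_crossover=False))
```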

Abstract:

Accurate head tilt detection has great potential to aid people with disabilities in the use of human-computer interfaces and provide universal access to communication software. We show how it can be utilized to tab through links on a web page or control a video game with head motions. It may also be useful as a correction method for currently available video-based assistive technology that requires upright facial poses. Few of the existing computer vision methods that detect head rotations in and out of the image plane with reasonable accuracy can operate within the context of a real-time communication interface, because the computational expense they incur is too great. Our method uses a variety of metrics to obtain a robust head tilt estimate without incurring the computational cost of previous methods. Our system runs in real time on a computer with a 2.53 GHz processor, 256 MB of RAM and an inexpensive webcam, using only 55% of the processor cycles.
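
As a minimal sketch of how an in-plane tilt estimate might be computed, assuming eye centres are provided by some real-time feature tracker (the paper's actual metrics are not reproduced here):

```python
# Minimal sketch (an assumption, not the authors' pipeline): in-plane head
# tilt estimated from the angle of the line joining the two eye centres.
# Eye coordinates would come from any real-time facial feature tracker.
import math

def head_tilt_degrees(left_eye, right_eye):
    """Signed roll angle in degrees; 0 means the eyes are level."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

# Example: right eye 12 px lower (image y grows downward) than the left.
print(head_tilt_degrees((100, 120), (160, 132)))  # ~11.3 degrees
```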

Abstract:

This thesis is focused on the application of numerical atomic basis sets in studies of the structural, electronic and transport properties of silicon nanowire structures from first principles within the framework of Density Functional Theory. First, we critically examine the applied methodology and then offer predictions regarding the transport properties and realisation of silicon nanowire devices. The performance of numerical atomic orbitals is benchmarked against calculations performed with plane-wave basis sets. After establishing the convergence of total energy and electronic structure calculations with increasing basis size, we show that their quality greatly improves with the optimisation of the contraction for a fixed basis size. The double zeta polarised basis offers a reasonable approximation to study structural and electronic properties, and transferability exists between various nanowire structures. This is particularly important for reducing the computational cost. The impact of basis sets on transport properties in silicon nanowires with oxygen and dopant impurities has also been studied. It is found that whilst transmission features quantitatively converge with increasing contraction, there is a weaker dependence on basis set for the mean free path; the double zeta polarised basis offers a good compromise, whereas the single zeta basis set yields qualitatively reasonable results. Studying the transport properties of nanowire-based transistor setups with p+-n-p+ and p+-i-p+ doping profiles, it is shown that charge self-consistency affects the I-V characteristics more significantly than the basis set choice. It is predicted that such ultrascaled (3 nm length) transistors would show degraded performance due to relatively high source-drain tunnelling currents. Finally, it is shown that the hole mobility of Si nanowires nominally doped with boron decreases monotonically with decreasing width at fixed doping density and with increasing dopant concentration. Significant mobility variations are identified which can explain experimental observations.
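
The basis-convergence procedure described above can be sketched as a simple loop; `total_energy` below is a hypothetical stand-in for one DFT run per basis, mocked with made-up numbers so the script executes.

```python
# Hedged sketch of a basis-set convergence check, as described qualitatively
# above. `total_energy` is a hypothetical stand-in for a DFT run; here it is
# mocked with illustrative energies so the script runs.
BASES = ["SZ", "SZP", "DZ", "DZP", "TZP"]
MOCK_ENERGIES = {"SZ": -107.2, "SZP": -107.9, "DZ": -108.1,
                 "DZP": -108.22, "TZP": -108.23}   # eV, invented numbers

def total_energy(basis):
    return MOCK_ENERGIES[basis]

TOL = 0.05  # eV: convergence threshold (an assumption)
previous = None
for basis in BASES:
    e = total_energy(basis)
    if previous is not None and abs(e - previous) < TOL:
        print(f"converged at {basis}: dE = {abs(e - previous):.3f} eV")
        break
    previous = e
```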

Abstract:

Surrogate-based optimization methods provide a means to achieve high-fidelity design optimization at reduced computational cost by using a high-fidelity model in combination with lower-fidelity models that are less expensive to evaluate. This paper presents a provably convergent trust-region model-management methodology for variable-parameterization design models: that is, models for which the design parameters are defined over different spaces. Corrected space mapping is introduced as a method to map between the variable-parameterization design spaces. It is then used with a sequential-quadratic-programming-like trust-region method for two aerospace-related design optimization problems. Results for a wing design problem and a flapping-flight problem show that the method outperforms direct optimization in the high-fidelity space. On the wing design problem, the new method achieves 76% savings in high-fidelity function calls. On a bat-flight design problem, it achieves approximately 45% time savings, although it converges to a different local minimum than did the benchmark.
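
A minimal sketch of trust-region model management, assuming a single shared design space and a zeroth-order additive correction (the paper's corrected space mapping additionally maps between differently parameterized spaces); the two quadratic objectives stand in for the expensive and cheap models.

```python
# Minimal trust-region model-management sketch (an illustration, not the
# paper's method): an additively corrected low-fidelity model is trusted
# only within a radius that grows or shrinks with how well it predicts
# the high-fidelity objective.
import numpy as np
from scipy.optimize import minimize

f_hi = lambda x: (x[0] - 1.0) ** 2 + 4.0 * (x[1] + 0.5) ** 2   # "expensive"
f_lo = lambda x: (x[0] - 0.8) ** 2 + 3.0 * (x[1] + 0.3) ** 2   # "cheap", biased

x, radius = np.zeros(2), 1.0
fx = f_hi(x)
for _ in range(20):
    shift = fx - f_lo(x)                  # zeroth-order additive correction
    model = lambda s: f_lo(x + s) + shift
    # Step: minimize the corrected surrogate inside the trust region.
    res = minimize(model, np.zeros(2), bounds=[(-radius, radius)] * 2)
    s = res.x
    f_new = f_hi(x + s)
    rho = (fx - f_new) / max(fx - model(s), 1e-12)  # actual/predicted decrease
    if rho > 0.1:                          # accept the step
        x, fx = x + s, f_new
    radius = radius * 2.0 if rho > 0.75 else radius * 0.5
print("x* ~", x, "f(x*) ~", fx)
```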

Abstract:

The motivation for this paper is to present procedures for automatically creating idealised finite element models from the 3D CAD solid geometry of a component. The procedures produce an accurate and efficient analysis model with little effort on the part of the user. The technique is applicable to thin-walled components with local complex features, and automatically creates analysis models where 3D elements representing the complex regions in the component are embedded in an efficient shell mesh representing the mid-faces of the thin-sheet regions. As the resulting models contain elements of more than one dimension, they are referred to as mixed-dimensional models. Although these models are computationally more expensive than some of the idealisation techniques currently employed in industry, they do allow the structural behaviour of the model to be analysed more accurately, which is essential if appropriate design decisions are to be made. Also, using these procedures, analysis models can be created automatically, whereas the current idealisation techniques are mostly manual, have long preparation times, and are based on engineering judgement. In the paper, the idealisation approach is first applied to 2D models that are used to approximate axisymmetric components for analysis. For these models, 2D elements representing the complex regions are embedded in a 1D mesh representing the midline of the cross section of the thin-sheet regions. Also discussed is the coupling which is necessary to link the elements of different dimensionality together. Analysis results from a 3D mixed-dimensional model created using the techniques in this paper are compared to those from a stiffened shell model and a 3D solid model to demonstrate the improved accuracy of the new approach. At the end of the paper, a quantitative analysis of the reduction in computational cost due to shell meshing of thin-sheet regions demonstrates that the reduction in degrees of freedom is proportional to the square of the aspect ratio of the region; for long slender solids, the reduction can be proportional to the aspect ratio of the region if appropriate meshing algorithms are used. A worked example of this scaling follows.
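
The following back-of-envelope script illustrates that claim for a thin square sheet; all dimensions and mesh sizes are illustrative assumptions.

```python
# Back-of-envelope check of the closing claim: for a thin sheet of lateral
# size L and thickness t, well-shaped solid elements are limited to size ~t,
# while shell elements can use a mesh size h chosen for accuracy alone.
L, t = 200.0, 2.0        # mm: sheet side and thickness (invented numbers)
h_shell = 20.0           # mm: shell mesh size (accuracy-driven)

solid_nodes = (L / t) ** 2            # one layer of ~cubic elements of size t
shell_nodes = (L / h_shell) ** 2
dof_solid = 3 * solid_nodes           # 3 translational DOF per node
dof_shell = 6 * shell_nodes           # 3 translations + 3 rotations per node

print(f"solid: {dof_solid:.0f} DOF, shell: {dof_shell:.0f} DOF, "
      f"reduction x{dof_solid / dof_shell:.0f}")
# With h_shell tied to a fixed multiple of L, the ratio scales as (L/t)^2,
# i.e. with the square of the sheet's aspect ratio, as stated above.
```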

Abstract:

Flutter prediction as currently practiced is almost always deterministic in nature, based on a single structural model that is assumed to represent a fleet of aircraft. However, it is also recognized that there can be significant structural variability, even for different flights of the same aircraft. The safety factor used for flutter clearance is in part meant to account for this variability. Simulation tools can, however, represent the consequences of structural variability in the flutter predictions, providing extra information that could be useful in planning physical tests and assessing risk. The main problem arising for this type of calculation when using high-fidelity tools based on computational fluid dynamics is the computational cost. The current paper uses an eigenvalue-based stability method together with Euler-level aerodynamics and different methods for propagating structural variability to stability predictions. The propagation methods are Monte Carlo, perturbation, and interval analysis. The feasibility of this type of analysis is demonstrated. Results are presented for the Goland wing and a generic fighter configuration.
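
A toy Monte Carlo propagation in the spirit described above (the paper uses CFD-based aeroelastic models, not this two-state stand-in); the stiffness and damping scatter, and the destabilizing offset q, are invented numbers.

```python
# Toy Monte Carlo propagation of structural variability to a stability
# metric (illustrative; the paper does this with Euler-level aerodynamics
# on the Goland wing and a generic fighter configuration).
import numpy as np

rng = np.random.default_rng(0)
n_samples = 10_000
q = 0.9                                   # fixed "aerodynamic" damping offset

# Structural parameters vary sample-to-sample (e.g. between airframes):
k = rng.normal(4.0, 0.2, n_samples)       # stiffness, 5% scatter
c = rng.normal(1.0, 0.1, n_samples)       # damping, 10% scatter

unstable = 0
for ki, ci in zip(k, c):
    A = np.array([[0.0, 1.0], [-ki, -(ci - q)]])  # state matrix of one mode
    if np.linalg.eigvals(A).real.max() > 0.0:     # eigenvalue in the RHP?
        unstable += 1

print(f"P(unstable) ~ {unstable / n_samples:.3f}")
```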

Abstract:

In this paper, the use of eigenvalue stability analysis of very large dimension aeroelastic numerical models arising from the exploitation of computational fluid dynamics is reviewed. A formulation based on a block reduction of the system Jacobian proves powerful, allowing various numerical algorithms to be exploited, including frequency-domain solvers, reconstruction of the term describing the fluid-structure interaction from sparse data (the part that incurs the main computational cost), and sampling to place the expensive samples where they are most needed. The stability formulation also allows non-deterministic analysis to be carried out very efficiently through the use of an approximate Newton solver. Finally, the system eigenvectors are exploited to produce nonlinear and parameterised reduced-order models for computing limit-cycle responses. The performance of the methods is illustrated with results from a number of academic and large-dimension aircraft test cases.
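
The block-reduction idea can be sketched with random stand-in matrices: eliminating the large fluid block leaves a small structural eigenvalue problem that is nonlinear in the eigenvalue, solved here by a naive fixed-point iteration rather than the approximate Newton solver mentioned above.

```python
# Sketch of the block reduction idea with random stand-ins for a real
# fluid-structure Jacobian. The interaction term Asf (Aff - lam I)^-1 Afs
# is the quantity whose reconstruction dominates the computational cost.
import numpy as np

rng = np.random.default_rng(1)
nf, ns = 300, 4                       # fluid DOF >> structural DOF
Aff = -5.0 * np.eye(nf) + 0.1 * rng.standard_normal((nf, nf))  # stable fluid
Afs = rng.standard_normal((nf, ns))
Asf = 0.05 * rng.standard_normal((ns, nf))
Ass = np.diag([-0.02 + 3j, -0.02 - 3j, -0.05 + 7j, -0.05 - 7j])  # modes

lam = Ass[0, 0]                       # start from an uncoupled structural mode
for _ in range(10):                   # naive fixed-point on the reduced problem
    S = Ass - Asf @ np.linalg.solve(Aff - lam * np.eye(nf), Afs)
    eig = np.linalg.eigvals(S)
    lam = eig[np.argmin(abs(eig - lam))]   # track the mode nearest our guess
print("coupled eigenvalue:", lam, "unstable:", lam.real > 0)
```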

Abstract:

In this paper, we introduce an efficient method for particle selection when tracking objects in complex scenes. First, we improve the proposal distribution function of the tracking algorithm by including the current observation, reducing the cost of evaluating particles with a very low likelihood. In addition, we use a partitioned sampling approach to decompose the dynamic state into several stages, which makes it possible to deal with high-dimensional states without excessive computational cost. To represent the color distribution, the appearance of the tracked object is modelled by sampled pixels. Based on this representation, the probability of any observation is estimated using non-parametric techniques in color space. As a result, we obtain a color Probability Density Image (PDI) in which each pixel value indicates its membership of the target color model. In this way, the evaluation of all particles is accelerated by computing the likelihood p(z|x) using the integral image of the PDI.
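
A sketch of the final acceleration step under assumed details: given a PDI, an integral image lets the color likelihood of each particle's box be evaluated in constant time.

```python
# Sketch: once a probability density image (PDI) is built, the likelihood of
# every particle's box can be read off an integral image in O(1) per particle.
import numpy as np

rng = np.random.default_rng(0)
pdi = rng.random((240, 320))          # stand-in PDI: per-pixel probability

# An integral image with a zero top row/left column simplifies box sums.
ii = np.zeros((pdi.shape[0] + 1, pdi.shape[1] + 1))
ii[1:, 1:] = pdi.cumsum(axis=0).cumsum(axis=1)

def box_likelihood(x, y, w, h):
    """Mean target probability inside the particle's box, in O(1)."""
    s = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    return s / (w * h)

# Evaluating thousands of particles costs a few array lookups each:
particles = rng.integers(0, [320 - 40, 240 - 60], size=(1000, 2))
weights = [box_likelihood(x, y, 40, 60) for x, y in particles]
print("best particle:", particles[int(np.argmax(weights))])
```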

Abstract:

In human motion analysis, the joint estimation of appearance, body pose and location parameters is not always tractable due to its huge computational cost. In this paper, we propose a Rao-Blackwellized Particle Filter for addressing the problem of human pose estimation and tracking. The advantage of the proposed approach is that Rao-Blackwellization allows the state variables to be split into two sets, one of which is calculated analytically from the posterior probability of the remaining ones. This procedure reduces the dimensionality of the Particle Filter, thus requiring fewer particles to achieve a similar tracking performance. In this manner, location and size over the image are obtained stochastically using colour and motion cues, whereas body pose is solved analytically by applying learned human Point Distribution Models.
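
A minimal Rao-Blackwellized particle filter showing the split described above, with a scalar "location" sampled by particles and a conditionally linear-Gaussian "pose" parameter updated analytically; the scalar observation model z = location * pose + noise is an invented stand-in for the image likelihood.

```python
# Minimal RBPF sketch (an illustration of the split, not the paper's body
# model): the location is sampled with particles, while a conditionally
# linear-Gaussian parameter a gets one analytic Kalman step per particle.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                   # particles
loc = rng.normal(0.0, 1.0, N)             # sampled (nonlinear) part
a_mean = np.zeros(N)                      # analytic part: Gaussian per particle
a_var = np.ones(N)
w = np.full(N, 1.0 / N)

Q_LOC, Q_A, R = 0.1, 0.01, 0.2            # noise variances (assumed)

def step(z):
    """One RBPF step given scalar observation z = loc * a + noise."""
    global loc, a_mean, a_var, w
    loc = loc + rng.normal(0.0, np.sqrt(Q_LOC), N)     # propagate particles
    a_var_pred = a_var + Q_A
    # Per-particle Kalman update of a, with observation matrix H = loc:
    S = loc**2 * a_var_pred + R                        # innovation variance
    K = loc * a_var_pred / S                           # Kalman gain
    innov = z - loc * a_mean
    a_mean = a_mean + K * innov
    a_var = (1.0 - K * loc) * a_var_pred
    # Weight particles by the marginal likelihood N(innov; 0, S):
    w *= np.exp(-0.5 * innov**2 / S) / np.sqrt(S)
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                        # resample
    loc, a_mean, a_var = loc[idx], a_mean[idx], a_var[idx]
    w = np.full(N, 1.0 / N)

for z in [0.9, 1.1, 1.0, 0.8]:            # made-up observations
    step(z)
print("location estimate:", loc.mean(), "pose estimate:", a_mean.mean())
```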

Abstract:

A novel non-linear dimensionality reduction method, called Temporal Laplacian Eigenmaps, is introduced to process time-series data efficiently. In this embedding-based approach, temporal information is intrinsic to the objective function, which produces descriptions of low-dimensional spaces with time coherence between data points. Since the proposed scheme also includes bidirectional mapping between the data and embedded spaces and automatic tuning of key parameters, it offers the same benefits as mapping-based approaches. Experiments on a couple of computer vision applications demonstrate the superiority of the new approach over other dimensionality reduction methods in terms of accuracy. Moreover, its lower computational cost and generalisation abilities suggest it is scalable to larger datasets. © 2010 IEEE.
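
A hedged sketch of the core construction, assuming the standard Laplacian eigenmaps machinery with extra edges between temporally adjacent frames (the authors' exact objective and parameter tuning are not reproduced):

```python
# Sketch of the core idea (assumed details): Laplacian eigenmaps on a k-NN
# graph, with extra edges between temporally adjacent frames so the
# embedding is time-coherent. Not the authors' exact objective.
import numpy as np
from scipy.sparse.csgraph import laplacian
from scipy.linalg import eigh

rng = np.random.default_rng(0)
T, D = 200, 10
X = np.cumsum(rng.standard_normal((T, D)), axis=0)   # toy smooth time series

# Spatial affinities: k nearest neighbours with a Gaussian kernel.
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.zeros((T, T))
k, sigma2 = 8, np.median(d2)
for i in range(T):
    nn = np.argsort(d2[i])[1 : k + 1]
    W[i, nn] = W[nn, i] = np.exp(-d2[i, nn] / sigma2)

# Temporal term: connect each frame to its immediate neighbours in time.
for t in range(T - 1):
    W[t, t + 1] = W[t + 1, t] = max(W[t, t + 1], 1.0)

L = laplacian(W, normed=True)
vals, vecs = eigh(L)
embedding = vecs[:, 1:3]           # skip the trivial constant eigenvector
print("2-D embedding shape:", embedding.shape)
```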

Abstract:

This paper presents an Invariant Information Local Sub-map Filter (IILSF) as a technique for consistent Simultaneous Localisation and Mapping (SLAM) in large environments. It harnesses the benefits of the sub-map technique to improve the consistency and efficiency of Extended Kalman Filter (EKF) based SLAM. The IILSF makes use of invariant information obtained from the estimated locations of features in independent sub-maps, instead of incorporating every observation directly into the global map; the global map is then updated at regular intervals. Applying this technique to the EKF-based SLAM algorithm (a) reduces the computational complexity of maintaining the global map estimates and (b) simplifies the transformation complexities and data association ambiguities usually experienced in fusing sub-maps together. Simulation results show that the method is able to fuse local map observations accurately to generate an efficient and consistent global map, in addition to significantly reducing computational cost and data association ambiguities.
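
The fusion of feature estimates from independent sub-maps can be illustrated by information-weighted averaging of two Gaussian estimates of the same feature; the numbers are invented, and the real filter also handles correlations and frame transformations.

```python
# Illustrative sketch (assumed details): two independent sub-maps hold
# Gaussian estimates of the same feature's global position; fusing them by
# information (inverse-covariance) weighting yields the global-map update.
import numpy as np

def fuse(x1, P1, x2, P2):
    """Information-weighted fusion of two independent Gaussian estimates."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(I1 + I2)
    x = P @ (I1 @ x1 + I2 @ x2)
    return x, P

x1, P1 = np.array([10.2, 5.1]), np.diag([0.20, 0.20])   # sub-map A's estimate
x2, P2 = np.array([10.0, 5.4]), np.diag([0.05, 0.50])   # sub-map B's estimate
x, P = fuse(x1, P1, x2, P2)
print("fused feature position:", x)
print("fused covariance diagonal:", np.diag(P))
```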

Abstract:

The solution of the time-dependent Schrödinger equation for systems of interacting electrons is generally a prohibitive task, for which approximate methods are necessary. Popular approaches, such as the time-dependent Hartree-Fock (TDHF) approximation and time-dependent density functional theory (TDDFT), are essentially single-configurational schemes. TDHF is by construction incapable of fully accounting for the excited character of the electronic states involved in many physical processes of interest; TDDFT, although exact in principle, is limited by the currently available exchange-correlation functionals. On the other hand, multiconfigurational methods, such as the multiconfigurational time-dependent Hartree-Fock (MCTDHF) approach, provide an accurate description of the excited states and can be systematically improved. However, the computational cost becomes prohibitive as the number of degrees of freedom increases, and thus, at present, the MCTDHF method is only practical for few-electron systems. In this work, we propose an alternative approach which effectively establishes a compromise between efficiency and accuracy, by retaining the smallest possible number of configurations that captures the essential features of the electronic wavefunction. Based on a time-dependent variational principle, we derive the MCTDHF working equation for a multiconfigurational expansion with fixed coefficients and specialise to the case of general open-shell states, which are relevant for many physical processes of interest. (C) 2011 American Institute of Physics. [doi: 10.1063/1.3600397]
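
For reference, the variational starting point mentioned above is commonly written as the Dirac-Frenkel principle; the display below gives its standard form together with a generic multiconfigurational ansatz, not the paper's specialised fixed-coefficient working equations.

```latex
% Dirac-Frenkel time-dependent variational principle and a generic
% multiconfigurational ansatz (standard forms, not the paper's
% specialised working equations):
\begin{equation}
  \langle \delta\Psi \,|\, i\hbar\,\partial_t - \hat{H} \,|\, \Psi \rangle = 0,
  \qquad
  \Psi(t) = \sum_I C_I\, \Phi_I\big(\{\phi_k(t)\}\big),
\end{equation}
% where the Phi_I are determinants built from time-dependent orbitals
% phi_k(t); in the fixed-coefficient variant described above, the C_I are
% held constant while only the orbitals evolve.
```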

Abstract:

Recently, two fast selective encryption methods for context-adaptive variable-length coding and context-adaptive binary arithmetic coding in H.264/AVC were proposed by Shahid et al. In this paper, it is demonstrated that these two methods are not as efficient as encrypting only the sign bits of nonzero coefficients. Experimental results showed that without encrypting the sign bits of nonzero coefficients, these two methods cannot provide a perceptual scrambling effect. If a much stronger scrambling effect is required, intra prediction modes and the sign bits of motion vectors can be encrypted together with the sign bits of nonzero coefficients. For practical applications, the required encryption scheme should be customized according to a user's specified requirement on the perceptual scrambling effect and the computational cost. Thus, a tunable encryption scheme combining these three methods is proposed for H.264/AVC. To simplify its implementation and reduce the computational cost, a simple control mechanism is proposed to adjust the control factors. Experimental results show that this scheme can provide different scrambling levels by adjusting three control factors with no or very little impact on the compression performance. The proposed scheme can run in real time and its computational cost is minimal. The security of the proposed scheme is also discussed; it is secure against the replacement attack when all three control factors are set to one.
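
The baseline that the paper measures against, encrypting only the sign bits of nonzero coefficients, can be sketched as follows; the SHA-256 counter keystream is an illustrative pseudorandom function, not the cipher any codec actually mandates.

```python
# Hedged sketch of the baseline idea (not the proposed tunable scheme):
# selectively encrypting only the sign bits of nonzero transform
# coefficients with a keyed keystream, which leaves magnitudes (and hence
# the entropy coder's statistics) untouched.
import hashlib

def keystream_bits(key: bytes, n: int):
    """Deterministic bit stream derived from the key (illustrative PRF)."""
    bits, counter = [], 0
    while len(bits) < n:
        block = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        bits.extend((byte >> i) & 1 for byte in block for i in range(8))
        counter += 1
    return bits[:n]

def encrypt_signs(coeffs, key):
    """Flip the sign of each nonzero coefficient when its key bit is 1."""
    nonzero = [i for i, c in enumerate(coeffs) if c != 0]
    ks = keystream_bits(key, len(nonzero))
    out = list(coeffs)
    for bit, i in zip(ks, nonzero):
        if bit:
            out[i] = -out[i]
    return out

coeffs = [12, 0, -3, 0, 0, 5, -1, 0]        # toy quantised coefficients
enc = encrypt_signs(coeffs, b"secret key")
print(enc)
print(encrypt_signs(enc, b"secret key"))    # involution: decrypts back
```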