889 results for Computational effort
Abstract:
The finite volume method is used as a numerical method for solving the fluid flow equations. The method can be employed on both structured and unstructured meshes. Mixed grids, which combine both types, are investigated, with the coupling of the different grids achieved by an overlapping strategy. The computational effort for the mixed grid is evaluated by the CPU time, for different percentages of the area covered by the unstructured mesh. The present scheme is tested on the driven cavity problem, where the incompressible flow equations are integrated by calculating the velocity field and computing the pressure field at each time step. Several schemes for unstructured grids are examined, and the compatibility condition is applied to check their consistency. A scheme to verify the compatibility condition for unstructured grids is presented. (c) 2006 IMACS. Published by Elsevier B.V. All rights reserved.
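A hedged illustration of the per-time-step procedure described above (velocity update followed by a pressure solve): the sketch below implements one generic projection step on a single uniform grid. The discretization, boundary treatment, and Jacobi pressure solver are illustrative assumptions; the paper's mixed structured/unstructured coupling is not reproduced.

```python
import numpy as np

def projection_step(u, v, p, dx, dt, nu, n_jacobi=200):
    """Advance (u, v, p) one time step on a uniform collocated grid."""
    lap = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f) / dx**2
    ddx = lambda f: (np.roll(f, -1, 1) - np.roll(f, 1, 1)) / (2.0 * dx)
    ddy = lambda f: (np.roll(f, -1, 0) - np.roll(f, 1, 0)) / (2.0 * dx)
    # 1) intermediate velocity: explicit advection + diffusion, no pressure
    u_star = u + dt * (-u * ddx(u) - v * ddy(u) + nu * lap(u))
    v_star = v + dt * (-u * ddx(v) - v * ddy(v) + nu * lap(v))
    # 2) pressure Poisson equation: div(grad p) = div(u*) / dt
    rhs = (ddx(u_star) + ddy(v_star)) / dt
    for _ in range(n_jacobi):   # Jacobi sweeps stand in for a real solver
        p = ((np.roll(p, 1, 0) + np.roll(p, -1, 0) +
              np.roll(p, 1, 1) + np.roll(p, -1, 1)) - dx**2 * rhs) / 4.0
    # 3) projection: subtract the pressure gradient to restore div u = 0
    u_new, v_new = u_star - dt * ddx(p), v_star - dt * ddy(p)
    u_new[-1, :] = 1.0                                # moving lid (top wall)
    u_new[0, :] = v_new[0, :] = v_new[-1, :] = 0.0    # no-slip walls
    return u_new, v_new, p
```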
Abstract:
Implementation of stable aeroelastic models able to capture the complex features of multi-concept smart blades is a prime step in reducing the uncertainties associated with blade dynamics. Numerical simulations of fluid-structure interaction can thus be used to test realistic scenarios comprising full-scale blades at a reasonably low computational cost. A code combining two advanced numerical models was designed and run on a parallel HPC supercomputer platform. The first model is based on a variation of the dimensional reduction technique proposed by Hodges and Yu, and captures the structural response of heterogeneous composite blades. This technique reduces the geometrical complexity of the heterogeneous blade section to a stiffness matrix for an equivalent beam. The derived 1-D strain energy matrix is equivalent to the actual 3-D strain energy matrix in an asymptotic sense. Because this 1-D matrix allows the blade structure to be modeled accurately as a 1-D finite element problem, it substantially reduces the computational effort, and hence the computational cost, required to model the structural dynamics at each step. The second model is an implementation of Blade Element Momentum theory. In this approach, all velocities and forces are mapped by orthogonal matrices that capture the large deformations and the effects of rotation in the calculation of the aerodynamic forces (see the sketch below), ultimately accounting for the complex flexo-torsional deformations. In this thesis, these computational tools, developed by MTU's research team, were successfully tested for the aeroelastic analysis of wind-turbine blades. The validation is based largely on several experiments performed on the NREL 5MW blade, which is widely accepted as a benchmark blade in the wind industry. In addition to the use of this innovative model, the internal blade structure was also changed to add to the existing benefits of the already advanced numerical models.
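The velocity/force mapping through orthogonal matrices mentioned above can be illustrated with a short sketch. The function below is a hedged reconstruction, not the thesis code: the rotation matrix R, the airfoil coefficient functions cl_fun and cd_fun, and the frame conventions are all assumptions.

```python
import numpy as np

def section_aero_force(R, v_wind, v_struct, rho, chord, cl_fun, cd_fun):
    """R: 3x3 orthogonal matrix, global frame -> deformed section frame."""
    v_rel = R @ (np.asarray(v_wind) - np.asarray(v_struct))  # relative wind
    u_t, u_n = v_rel[0], v_rel[1]        # in-plane (chordwise/normal) parts
    alpha = np.arctan2(u_n, u_t)         # local angle of attack
    q = 0.5 * rho * (u_t**2 + u_n**2) * chord   # dynamic pressure x chord
    lift, drag = q * cl_fun(alpha), q * cd_fun(alpha)
    # resolve lift (perpendicular to v_rel) and drag (along v_rel) in the
    # section axes, then rotate back: R.T is valid since R is orthogonal
    ca, sa = np.cos(alpha), np.sin(alpha)
    f_section = np.array([drag * ca - lift * sa, drag * sa + lift * ca, 0.0])
    return R.T @ f_section               # force per unit span, global frame
```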
Abstract:
The local-density approximation (LDA) together with half occupation (the transition state) is notoriously successful in the calculation of atomic ionization potentials. When it comes to extended systems, such as an infinite semiconductor, it has been very difficult to find a way to half ionize, because the hole tends to be infinitely extended (a Bloch wave). The answer to this problem lies in the LDA formalism itself. One proves that half occupation is equivalent to introducing the hole self-energy (electrostatic and exchange-correlation) into the Schrödinger equation. The argument then becomes simple: the eigenvalue minus the self-energy has to be minimized, because the atom has a minimal energy. One then simply proves that the hole is localized, not infinitely extended, because it must have maximal self-energy. One also arrives at an equation similar to the self-interaction correction equation, but corrected for the removal of just 1/2 electron. Applied to the calculation of band gaps and effective masses, using the self-energy calculated in atoms, we attain a precision similar to that of GW, but with the great advantage of requiring no more computational effort than standard LDA.
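For reference, a hedged reconstruction (in our own notation, not the paper's) of the transition-state relation underlying the half-occupation scheme; it follows from Janak's theorem, which identifies the eigenvalue with the derivative of the total energy with respect to occupation:

```latex
% Janak's theorem gives dE/dn_i = eps_i(n_i), hence for removing one electron
\begin{align}
  I \;=\; E(N\!-\!1) - E(N)
    \;=\; -\int_{0}^{1} \varepsilon_i(n_i)\, \mathrm{d}n_i
    \;\approx\; -\,\varepsilon_i\!\left(n_i = \tfrac{1}{2}\right)
\end{align}
% For the extended system, the abstract's argument is that the corrected
% quantity (eigenvalue minus hole self-energy) is what must be minimized,
% which forces the self-energy to be maximal and the hole to be localized.
```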
Abstract:
This paper deals with the traditional permutation flow shop scheduling problem, with the objective of minimizing mean flowtime and therefore reducing in-process inventory. A new heuristic method is proposed for solving the scheduling problem. The proposed heuristic is compared with the best heuristic reported in the literature. Experimental results show that the new heuristic performs better in terms of both solution quality and computational effort.
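For concreteness, the objective being minimized can be stated in a few lines of code. The sketch below evaluates the mean flowtime of a job permutation with the standard completion-time recurrence; it is not the proposed heuristic, only the evaluation function any such heuristic relies on.

```python
def mean_flowtime(perm, p):
    """perm: job order; p[j][m]: processing time of job j on machine m."""
    n_machines = len(p[0])
    c = [0.0] * n_machines          # completion time of the last job on each machine
    total = 0.0
    for j in perm:
        for m in range(n_machines):
            # job j starts on machine m once it leaves m-1 and m is free
            start = max(c[m], c[m - 1] if m > 0 else 0.0)
            c[m] = start + p[j][m]
        total += c[-1]              # flowtime of job j (zero release times)
    return total / len(perm)

# Example: 3 jobs, 2 machines.
times = [[3, 2], [1, 4], [2, 2]]
print(mean_flowtime([1, 2, 0], times))   # -> 7.0
```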
Abstract:
This work deals with geometrically nonlinear plates in the context of von Kármán's theory. The formulation is written such that only the boundary in-plane displacement and deflection integral equations are required for boundary collocations. At internal points, only out-of-plane rotation, curvature, and in-plane internal force representations are used, so only integral representations of these values are derived. The nonlinear system of equations is obtained by approximating all densities in the domain integrals as single values, which reduces the computational effort needed to evaluate the domain value influences. Hyper-singular equations are avoided by approximating the domain values using only internal nodes. The solution is obtained using a Newton scheme, for which a consistent tangent operator was derived. (C) 2009 Elsevier Ltd. All rights reserved.
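The Newton scheme mentioned at the end admits a compact generic statement. The sketch below is an illustrative skeleton only: residual and tangent stand in for the boundary-element residual and consistent tangent operator of the paper, which are not reproduced here.

```python
import numpy as np

def newton(residual, tangent, u0, tol=1e-10, max_iter=25):
    """Solve R(u) = 0 with Newton's method and a consistent tangent K = dR/du."""
    u = u0.copy()
    for it in range(max_iter):
        r = residual(u)
        if np.linalg.norm(r) < tol:            # converged
            return u, it
        du = np.linalg.solve(tangent(u), -r)   # linearized (tangent) system
        u += du
    raise RuntimeError("Newton did not converge")

# Toy usage: R(u) = u^3 - 1 with exact tangent 3 u^2.
u, its = newton(lambda u: u**3 - 1,
                lambda u: np.diag(3.0 * u.ravel()**2),
                np.array([2.0]))
```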
Abstract:
This work is related to the so-called non-conventional finite element formulations. Essentially, a methodology for enriching the initial approximation, typical of meshless methods and based on the clouds concept, is introduced into the hybrid-Trefftz formulation for plane elasticity. The formulation presented allows for the approximation and direct enrichment of two independent fields: stresses in the domains and displacements on the boundaries of the elements. Defined by a set of elements and interior boundaries sharing a common node, the cloud notion is employed to select the enrichment support for the approximation fields. The numerical analysis performed reveals an excellent performance of the resulting formulation, characterized by good approximation ability and a reduced computational effort. Copyright (C) 2009 John Wiley & Sons, Ltd.
Abstract:
A matrix method is presented for simulating acoustic levitators. A typical acoustic levitator consists of an ultrasonic transducer and a reflector. The matrix method is used to determine the potential for acoustic radiation force that acts on a small sphere in the standing wave field produced by the levitator. The method is based on the Rayleigh integral and it takes into account the multiple reflections that occur between the transducer and the reflector. The potential for acoustic radiation force obtained by the matrix method is validated by comparing the matrix method results with those obtained by the finite element method when using an axisymmetric model of a single-axis acoustic levitator. After validation, the method is applied in the simulation of a noncontact manipulation system consisting of two 37.9-kHz Langevin-type transducers and a plane reflector. The manipulation system allows control of the horizontal position of a small levitated sphere from -6 mm to 6 mm, which is done by changing the phase difference between the two transducers. The horizontal position of the sphere predicted by the matrix method agrees with the horizontal positions measured experimentally with a charge-coupled device camera. The main advantage of the matrix method is that it allows simulation of non-symmetric acoustic levitators without requiring much computational effort.
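A hedged sketch of the matrix idea: discretizing the Rayleigh integral turns the source-to-field-point mapping into a complex matrix, and the multiple transducer-reflector reflections become repeated matrix products. The element area, medium constants, and the simple reflection model below are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np

def rayleigh_matrix(src, fld, k, rho=1.21, c=343.0, dS=1.0e-6):
    """Complex M x N matrix mapping normal velocities at the N source
    points `src` to acoustic pressures at the M field points `fld`
    (Rayleigh integral for a baffled source, wavenumber k)."""
    r = np.linalg.norm(fld[:, None, :] - src[None, :, :], axis=2)
    return 1j * rho * c * k / (2.0 * np.pi) * np.exp(-1j * k * r) / r * dS

def field_with_reflections(u_t, T_tf, T_tr, T_rt, n_refl=5):
    """Direct wave plus n_refl transducer<->reflector round trips; the
    reflector simply re-radiates what arrives at it (an illustrative
    reflection model, not the paper's operator)."""
    p, u = T_tf @ u_t, u_t
    for _ in range(n_refl):
        u = T_rt @ (T_tr @ u)   # one round trip: transducer -> reflector -> back
        p = p + T_tf @ u        # each bounce adds a contribution at the field points
    return p
```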
Abstract:
The practicability of estimating directional wave spectra based on a vessel's first-order response has recently been addressed by several researchers. Different alternatives regarding statistical inference methods and possible drawbacks that could arise from their application have been extensively discussed, with an apparent preference for estimations based on Bayesian inference algorithms. Most of the results on this matter, however, rely exclusively on numerical simulations or, at best, on few and sparse full-scale measurements, comprising a questionable basis for validation purposes. This paper discusses several issues that have recently been debated regarding the advantages of Bayesian inference and different alternatives for its implementation. Among those are the definition of the best set of input motions, the number of parameters required for guaranteeing smoothness of the spectrum in frequency and direction, and how to determine their optimum values. These subjects are addressed in the light of an extensive experimental campaign performed with a small-scale model of an FPSO platform (VLCC hull), which was conducted in an ocean basin in Brazil. Tests involved long- and short-crested seas with variable levels of directional spreading and also bimodal conditions. The calibration spectra measured in the tank by means of an array of wave probes served as the paradigm for the estimations. Results showed that a wide range of sea conditions could be estimated with good precision, even those with somewhat low peak periods. Some possible drawbacks that have been pointed out in previous works concerning the viability of employing large vessels for such a task are then refuted. Also, it is shown that a second parameter for smoothing the spectrum in frequency may indeed increase the accuracy in some situations, although the criterion usually proposed for estimating the optimum values (ABIC) demands a large computational effort and does not seem adequate for practical on-board systems, which require expeditious estimations. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
The main objective of this work was to identify and characterize the daily evolution of the Atmospheric Boundary Layer (ABL) in the Greater Vitória Region (RGV), Espírito Santo State, Brazil, and in the Dunkirk Region (RD), Nord-Pas-de-Calais Department, France, evaluating the accuracy of parameterizations used in the Weather Research and Forecasting (WRF) meteorological model in detecting the formation and attributes of the Internal Boundary Layer (IBL) formed by sea breezes. The RGV has complex relief, being a coastal region of rugged topography with a mountain chain parallel to the coast. The RD has simple relief, being a coastal region with small undulations that do not exceed 150 meters across the study domain. To evaluate the model forecasts, the results of two measurement campaigns were used: one carried out in the city of Dunkirk, in northern France, in July 2009, using a light detection and ranging (LIDAR) system, a sonic detection and ranging (SODAR) system, and data from a surface meteorological station (SMS); the other carried out in the city of Vitória, Espírito Santo, in July 2012, also using a LIDAR, a SODAR, and data from an SMS. Simulations were performed with three ABL parameterization schemes, two with non-local closure, Yonsei University (YSU) and Asymmetric Convective Model 2 (ACM2), and one with local closure, Mellor-Yamada-Janjic (MYJ), and two land surface schemes, Rapid Update Cycle (RUC) and Noah. For both the RGV and the RD, simulations were run with the six possible combinations of the three ABL parameterizations and the two land surface schemes, for the campaign periods, using four nested domains: the three larger ones square, with side lengths of 1863 km, 891 km and 297 km and grid spacings of 27 km, 9 km and 3 km, respectively, and the study domain, measuring 81 km in the North-South direction and 63 km East-West, with a 1-km grid and 55 vertical levels up to a maximum of approximately 13,400 m, more concentrated near the ground. The results of this work showed that: a) depending on the configuration adopted, the computational effort can increase excessively without a correspondingly large gain in the accuracy of the results; b) for the RD, the simulation using the MYJ parameterization for the ABL together with the Noah land surface scheme produced the best estimates, capturing the IBL phenomena, whereas the simulations using the ACM2 and YSU parameterizations inferred the sea-breeze onset with a delay of up to three hours; c) for the RGV, the simulation using the YSU parameterization for the ABL together with the Noah land surface scheme made the best inferences about the IBL. These results suggest the need for prior assessment of the computational effort required by a given configuration, and of the accuracy of specific parameterization sets for each region studied. The differences are associated with the ability of the different parameterizations to capture the surface information derived from the global data, which is essential for determining the intensity of vertical turbulent mixing and the soil surface temperature, suggesting that a better representation of land use is fundamental to improving estimates of the IBL and of the other parameters used by atmospheric pollutant dispersion models.
Abstract:
The basic motivation of this work was the integration of biophysical models within the interval constraints framework for decision support. Comparing the major features of biophysical models with the expressive power of the existing interval constraints framework, it was clear that the most important inadequacy was related to the representation of differential equations. System dynamics is often modelled through differential equations, but there was no way of expressing a differential equation as a constraint and integrating it within the constraints framework. Consequently, the goal of this work is focused on the integration of ordinary differential equations within the interval constraints framework, which for this purpose is extended with the new formalism of Constraint Satisfaction Differential Problems. Such a framework allows the specification of ordinary differential equations, together with related information, by means of constraints, and provides efficient propagation techniques for pruning the domains of their variables. This enables the integration of all such information in a single constraint whose variables may subsequently be used in other constraints of the model. The specific method used for pruning its variable domains can then be combined with the pruning methods associated with the other constraints, in an overall propagation algorithm for reducing the bounds of all model variables. The application of the constraint propagation algorithm for pruning the variable domains, that is, the enforcement of local consistency, turned out to be insufficient to support decisions in practical problems that include differential equations. The domain pruning achieved is not, in general, sufficient to allow safe decisions, mainly because of the non-linearity of the differential equations. Consequently, as a complementary goal, this work proposes a new strong consistency criterion, Global Hull-consistency, particularly suited to decision support with differential models, which offers an adequate trade-off between domain pruning and computational effort. Several alternative algorithms are proposed for enforcing Global Hull-consistency and, due to their complexity, an effort was made to provide implementations able to supply anytime pruning results. Since the consistency criterion depends on the existence of canonical solutions, a local search approach is proposed that can be integrated with constraint propagation in continuous domains and, in particular, with the enforcing algorithms, to anticipate the finding of canonical solutions. The last goal of this work is the validation of the approach as an important contribution to the integration of biophysical models within decision support. Consequently, a prototype application integrating all the proposed extensions to the interval constraints framework was developed and used for solving problems in different biophysical domains.
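The kind of interval-domain pruning performed by constraint propagation can be illustrated on a toy constraint. The sketch below enforces hull consistency for a single constraint x + y = z over interval domains; the thesis's extension to differential equations and to Global Hull-consistency goes far beyond this fragment.

```python
def prune_sum(x, y, z):
    """Narrow interval boxes (lo, hi) so that x + y = z stays satisfiable."""
    zx = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))    # z within x + y
    xx = (max(x[0], zx[0] - y[1]), min(x[1], zx[1] - y[0]))  # x within z - y
    yx = (max(y[0], zx[0] - xx[1]), min(y[1], zx[1] - xx[0]))  # y within z - x
    if any(lo > hi for lo, hi in (xx, yx, zx)):
        raise ValueError("inconsistent constraint")
    return xx, yx, zx

# Propagate to a fixpoint over the single constraint:
x, y, z = (0.0, 10.0), (2.0, 5.0), (0.0, 4.0)
for _ in range(10):
    x, y, z = prune_sum(x, y, z)
print(x, y, z)   # domains shrink to the hull of the solutions
```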
Abstract:
In visual sensor networks, local feature descriptors can be computed at the sensing nodes, which collaborate on the acquired data to perform an efficient visual analysis. In fact, with a minimal amount of computational effort, the detection and extraction of local features, such as binary descriptors, can provide a reliable and compact image representation. This paper proposes extracting and coding binary descriptors so as to meet the energy and bandwidth constraints at each sensing node. The major contribution is a binary descriptor coding technique that exploits correlation using two different coding modes: Intra, which exploits the correlation between the elements that compose a descriptor; and Inter, which exploits the correlation between descriptors of the same image. The experimental results show bitrate savings of up to 35% without any impact on the performance of the image retrieval task. © 2014 EURASIP.
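A minimal sketch of the two coding modes described above, under the assumption that descriptors are fixed-length bit strings: Intra predicts each element from the previous element of the same descriptor, and Inter predicts a descriptor from a reference descriptor of the same image, both via XOR residuals. The paper's entropy coder and mode selection are not reproduced.

```python
import numpy as np

def intra_residual(d):
    """XOR each bit with the previous bit of the same descriptor."""
    r = d.copy()
    r[1:] ^= d[:-1]
    return r

def inter_residual(d, ref):
    """XOR with a reference descriptor of the same image; correlated
    descriptors yield sparse (mostly zero) residuals, cheap to code."""
    return d ^ ref

rng = np.random.default_rng(0)
ref = rng.integers(0, 2, 256, dtype=np.uint8)
d = ref.copy()
d[rng.choice(256, 20, replace=False)] ^= 1    # a similar descriptor
print(inter_residual(d, ref).sum(), "of 256 residual bits set")   # -> 20
```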
Abstract:
Doctoral thesis - Doctoral Program in Industrial and Systems Engineering (PDEIS)
Abstract:
Master's dissertation in Industrial Engineering
Credit risk contributions under the Vasicek one-factor model: a fast wavelet expansion approximation
Abstract:
Measuring the contribution of individual transactions to the total risk of a credit portfolio is a major issue in financial institutions. VaR Contributions (VaRC) and Expected Shortfall Contributions (ESC) have become two popular ways of quantifying these risks. However, the usual Monte Carlo (MC) approach is known to be a very time-consuming method for computing these risk contributions. In this paper we consider the Wavelet Approximation (WA) method for Value at Risk (VaR) computation presented in [Mas10] in order to calculate the Expected Shortfall (ES) and the risk contributions under the Vasicek one-factor model framework. We decompose the VaR and the ES as a sum of sensitivities representing the marginal impact on the total portfolio risk. Moreover, we present technical improvements to the Wavelet Approximation that considerably reduce the computational effort of the approximation while, at the same time, increasing its accuracy.
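As a point of reference for what the wavelet approximation replaces, the sketch below computes the same quantities by plain Monte Carlo under the Vasicek one-factor model: portfolio VaR, ES, and per-obligor ES contributions E[L_i | L >= VaR]. This is the slow baseline, not the paper's WA method; all parameter names are assumptions.

```python
import numpy as np
from scipy.stats import norm

def es_contributions(pd, ead, lgd, rho, alpha=0.999, n_sims=200_000, seed=0):
    """Loss L = sum_i EAD_i * LGD_i * 1{X_i < N^{-1}(PD_i)}, with asset
    returns X_i = sqrt(rho) * Y + sqrt(1 - rho) * eps_i (common factor Y)."""
    rng = np.random.default_rng(seed)
    pd, ead, lgd = map(np.asarray, (pd, ead, lgd))
    y = rng.standard_normal((n_sims, 1))             # systematic factor
    eps = rng.standard_normal((n_sims, len(pd)))     # idiosyncratic factors
    x = np.sqrt(rho) * y + np.sqrt(1.0 - rho) * eps
    losses = (x < norm.ppf(pd)) * (ead * lgd)        # per-obligor losses
    total = losses.sum(axis=1)
    var = np.quantile(total, alpha)                  # portfolio VaR
    tail = total >= var                              # scenarios beyond VaR
    es = total[tail].mean()                          # Expected Shortfall
    esc = losses[tail].mean(axis=0)  # ESC_i = E[L_i | L >= VaR]; sums to ES
    return var, es, esc
```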
Abstract:
The Generalized Assignment Problem consists in assigning a set of tasks to a set of agents with minimum cost. Each agent has a limited amount of a single resource, and each task must be assigned to one and only one agent, requiring a certain amount of the agent's resource. We present new metaheuristics for the generalized assignment problem based on hybrid approaches. One metaheuristic is a MAX-MIN Ant System (MMAS), an improved version of the Ant System recently proposed by Stutzle and Hoos for combinatorial optimization problems, which can be seen as an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. Moreover, this heuristic is combined with local search and tabu search heuristics to improve the search. A greedy randomized adaptive search procedure (GRASP) is also proposed. Several neighborhoods are studied, including one based on ejection chains that produces good moves without increasing the computational effort. We present computational results on the comparative performance, followed by concluding remarks and ideas for future research on generalized assignment related problems.
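Of the components listed above, the GRASP construction phase is the simplest to sketch. The fragment below builds a feasible assignment from a restricted candidate list of cheap feasible agents per task; the local search, tabu search, ejection chains, and MMAS pheromone machinery are not shown, and all names are illustrative.

```python
import random

def grasp_construct(cost, req, cap, alpha=0.3, seed=None):
    """cost[i][j], req[i][j]: cost/resource of task j on agent i; cap[i]:
    agent capacity. Returns a task -> agent assignment, or None if stuck."""
    rng = random.Random(seed)
    n_agents, n_tasks = len(cost), len(cost[0])
    left = list(cap)                                  # remaining capacities
    assign = [None] * n_tasks
    for j in rng.sample(range(n_tasks), n_tasks):     # random task order
        feas = [i for i in range(n_agents) if req[i][j] <= left[i]]
        if not feas:
            return None                               # construction failed
        feas.sort(key=lambda i: cost[i][j])
        c_min, c_max = cost[feas[0]][j], cost[feas[-1]][j]
        # restricted candidate list: agents within alpha of the best cost
        rcl = [i for i in feas
               if cost[i][j] <= c_min + alpha * (c_max - c_min)]
        i = rng.choice(rcl)
        assign[j], left[i] = i, left[i] - req[i][j]
    return assign
```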