984 results for NUMERICAL-INTEGRATION
Abstract:
In this paper, a method for integrating several numerical analysis techniques used in microsystems design and failure analysis is presented. The techniques are grouped into four categories: high-fidelity analytical tools, i.e. the finite element (FE) method; fast analytical tools, referring to reduced order modeling (ROM); optimization tools; and probability-based analytical tools. The characteristics of these four classes of tools are investigated, their interactions are discussed, and a methodology for coupling them is proposed. This methodology consists of three stages: reduced order modeling, deterministic optimization and probabilistic optimization. Using this methodology, a case study on the optimization of a solder joint is conducted. It is shown that these analysis techniques interact with and complement one another, and that applying them in combination exploits the advantages of each while satisfying a wide range of design requirements. The case study shows that the coupling method provided in this paper is effective and efficient, and that it is highly relevant to the design and reliability analysis of microsystems.
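The three-stage coupling outlined above (reduced order modelling, then deterministic optimization, then probabilistic optimization) can be illustrated with a minimal Python sketch; the placeholder fe_model response, the quadratic surrogate and the stress limit are illustrative assumptions and not the paper's solder-joint model.

```python
import numpy as np
from scipy.optimize import minimize

def fe_model(x):
    # Placeholder for an expensive finite element run (assumption): returns a
    # scalar response, e.g. a peak solder-joint stress, for design variables x.
    return 50.0 + 8.0 * (x[0] - 1.2) ** 2 + 5.0 * (x[1] - 0.8) ** 2

# Stage 1: reduced order modelling - fit a cheap quadratic surrogate to a few
# FE samples.
rng = np.random.default_rng(0)
samples = rng.uniform(0.5, 2.0, size=(30, 2))
responses = np.array([fe_model(x) for x in samples])
basis = np.column_stack([np.ones(len(samples)), samples, samples ** 2])
coeffs, *_ = np.linalg.lstsq(basis, responses, rcond=None)

def rom(x):
    x = np.asarray(x, dtype=float)
    return coeffs @ np.concatenate(([1.0], x, x ** 2))

# Stage 2: deterministic optimization on the cheap surrogate.
det = minimize(rom, x0=[1.0, 1.0], bounds=[(0.5, 2.0), (0.5, 2.0)])

# Stage 3: probabilistic optimization - penalize designs whose Monte Carlo
# failure probability (surrogate response above an assumed limit) is high.
def prob_objective(x, limit=50.5, n=2000):
    scatter = np.random.default_rng(1).normal(0.0, 0.25, size=(n, 2))
    p_fail = np.mean([rom(x + d) > limit for d in scatter])
    return rom(x) + 100.0 * p_fail

prob = minimize(prob_objective, x0=det.x, method="Powell",
                bounds=[(0.5, 2.0), (0.5, 2.0)])
print("deterministic optimum:", det.x, "probabilistic optimum:", prob.x)
```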
Abstract:
Electrodeposition is a widely used technique for the fabrication of high-aspect-ratio microstructure components. In recent years much research has focused on this area, with the aim of understanding the physics behind the filling of high-aspect-ratio vias and trenches on PCBs and, in particular, how they can be made without the formation of voids in the deposited material. This paper describes some of the fundamental work towards the advancement of numerical models that can predict the electrodeposition process and addresses: i) a novel technique for interface motion based on a variation of a donor-acceptor technique; ii) a methodology for the investigation of stress profiles in deposits; iii) the implementation of acoustic forces to generate replenishing electrolytic flow circulation in recessed features.
Abstract:
Here we consider the numerical optimization of active surface plasmon polariton (SPP) trench waveguides suited for integration with luminescent polymers, for use as highly localized SPP source devices in short-scale communication integrated circuits. The numerical analysis of the SPP modes within trench waveguide systems provides detailed information on the mode field components, effective indices, propagation lengths and mode areas. Such trench waveguide systems offer extremely high confinement with propagation on length scales appropriate to local interconnects, along with high-efficiency coupling of dipolar emitters to waveguided plasmonic modes, which can be close to 80%. The large Purcell factor exhibited in these structures will further lead to faster modulation capabilities along with an increased quantum yield beneficial for the proposed plasmon-emitting diode, a plasmonic analog of the light-emitting diode. The confinement of the studied guided modes is on the order of 50 nm, and the delay over the shorter 5 μm length scales will be on the order of 0.1 ps for the slowest propagating modes of the system, and significantly less for the faster modes.
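For orientation, the quoted interconnect delay is consistent with a simple delay = length / group velocity estimate; the snippet below back-calculates the implied group velocity from the figures in the abstract (this relation is the only assumption).

```python
# Back-of-the-envelope check of the quoted delay (assumption: delay = L / v_g).
c = 2.998e8       # speed of light, m/s
L = 5e-6          # interconnect length, 5 um
tau = 0.1e-12     # quoted delay for the slowest mode, 0.1 ps
v_g = L / tau     # implied group velocity
print(f"group velocity = {v_g:.2e} m/s, group index ~ {c / v_g:.1f}")
# -> roughly 5e7 m/s, i.e. a group index of about 6 for the slowest mode
```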
Abstract:
A comprehensive continuum damage mechanics model [1] has been developed to capture the detailed behaviour of a composite structure under a crushing load. This paper explores some of the difficulties encountered in the implementation of this model and their mitigation. The use of reduced-integration elements and a strain-softening model both negatively affect the accuracy and stability of the simulation. Damage localisation effects demanded an accurate measure of the characteristic length, and a robust algorithm for determining the characteristic length was implemented. Testing showed that this algorithm produced marked improvements over the use of the default characteristic length provided by Abaqus. Zero-energy or hourglass modes in reduced-integration elements led to reduced resistance to bending. This was compounded by the strain-softening model, which led to the formation of elements with little resistance to deformation that could invert if left unchecked. It was shown, through benchmark testing, that by deleting elements with excessive distortion and controlling the mesh using the inbuilt distortion/hourglass controls, these issues can be alleviated. These techniques contributed significantly to the viability and usability of the damage model.
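The abstract does not give the characteristic-length algorithm itself; a common generic choice, shown below purely as an assumption, is to take the cube root of the element volume so that the softening response can be regularized per unit length.

```python
import numpy as np

def characteristic_length(nodes):
    """Generic characteristic-length estimate (assumption: cube root of the
    volume of an 8-node hexahedron, computed via a 5-tetrahedron split);
    the actual algorithm implemented in the paper may differ."""
    # nodes: (8, 3) array of corner coordinates in C3D8R-style ordering
    # (nodes 0-3 on the bottom face, 4-7 on the top face).
    tets = [(0, 1, 3, 4), (1, 2, 3, 6), (1, 3, 4, 6),
            (1, 4, 5, 6), (3, 4, 6, 7)]
    volume = 0.0
    for a, b, c, d in tets:
        volume += abs(np.linalg.det(np.array([nodes[b] - nodes[a],
                                              nodes[c] - nodes[a],
                                              nodes[d] - nodes[a]]))) / 6.0
    return volume ** (1.0 / 3.0)

# Unit cube example: the characteristic length should be 1.0.
cube = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                 [0, 0, 1], [1, 0, 1], [1, 1, 1], [0, 1, 1]], float)
print(characteristic_length(cube))
```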
Abstract:
The stamping industry has shown a growing interest in numerical simulations of sheet metal forming processes, including reverse engineering methods. This is mainly because the trial-and-error techniques widely used in the past are no longer economically competitive. The use of simulation codes is now common practice in industrial environments, since the results typically obtained with codes based on the Finite Element Method (FEM) are well accepted by the industrial and scientific communities. In order to obtain accurate stress and strain fields, an efficient FEM analysis requires correct input data, such as geometries, meshes, non-linear constitutive laws, loads, friction laws, etc. Inverse problems can be considered in order to overcome these difficulties. In the present work, the following inverse problems in computational mechanics are presented and analysed: (i) parameter identification problems, which concern the determination of input parameters to be used subsequently in constitutive models for numerical simulations, and (ii) initial geometry definition problems for blanks and tools, in which the aim is to determine the initial shape of a blank or a tool so that a given geometry is obtained after the forming process. New optimization strategies are introduced and implemented, leading to more accurate constitutive model parameters. The purpose of these strategies is to take advantage of the strengths of each algorithm and to improve the overall efficiency of classical optimization methods, which are based on single-stage processes. Deterministic algorithms, algorithms inspired by evolutionary processes, or a combination of the two are used in the proposed strategies. Cascade, parallel and hybrid strategies are presented in detail, the hybrid strategies consisting of combinations of cascade and parallel strategies. Two distinct methods for evaluating the objective function in parameter identification processes are presented and analysed: a single-point analysis and a finite element analysis. The single-point evaluation characterizes an infinitesimal amount of material subjected to a given deformation history; in the finite element analysis, on the other hand, the constitutive model is implemented and evaluated at every integration point. Inverse problems of initial geometry definition for blanks and tools are then presented and described. For blank shape optimization, the definition of the initial shape of a blank for forming an oil-pan component is taken as the case study. In this context, a study on the influence of the initial geometric definition of the blank on the optimization process is also carried out, using a NURBS formulation to define the upper surface of the metal blank, the surface whose geometry changes during the plastic forming process. For tool optimization, a two-stage forging process is presented. In order to obtain a perfect cylinder after forging, two distinct methods are considered: in the first, the initial shape of the cylinder is optimized, and in the other, the shape of the tool of the first forming stage is optimized. Different methods are used to parameterize the free surface of the cylinder, and different parameterizations are also used to define the tool. The optimization strategies proposed in this work efficiently solve optimization problems for the metal forming industry.
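The single-point evaluation of the objective function described above can be illustrated with a minimal Python sketch; the Swift hardening law, the synthetic "experimental" curve and the parameter bounds are illustrative assumptions rather than the constitutive models and data actually identified in this work.

```python
import numpy as np
from scipy.optimize import least_squares

def swift_stress(params, eps_p):
    # Single-point evaluation of a Swift hardening law (illustrative assumption):
    # sigma = K * (eps0 + eps_p) ** n for a prescribed plastic strain history.
    K, eps0, n = params
    return K * (eps0 + eps_p) ** n

# Synthetic "experimental" curve standing in for measured tensile-test data.
eps_exp = np.linspace(0.0, 0.2, 50)
sigma_exp = swift_stress([530.0, 0.005, 0.26], eps_exp)

# Objective: residuals between the model and the experiment. A full finite
# element evaluation would replace swift_stress with a complete simulation
# of the test and compare global quantities such as force-displacement curves.
def residuals(params):
    return swift_stress(params, eps_exp) - sigma_exp

fit = least_squares(residuals, x0=[400.0, 0.01, 0.2],
                    bounds=([100.0, 1e-4, 0.05], [1000.0, 0.1, 0.6]))
print(fit.x)  # identified K, eps0, n
```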
Abstract:
Motion instability is an important issue during the operation of towed underwater vehicles (TUV), and it considerably affects the accuracy of the high-precision acoustic instrumentation housed inside them. Of the various parameters responsible for this, disturbances from the tow-ship are the most significant. The present study focuses on the motion dynamics of an underwater towing system with ship-induced disturbances as the input, and in particular on an innovative system called two-part towing. The methodology involves numerical modelling of the tow system, comprising the formulation of the tow-cables and the vehicles. A previous study in this direction used a segmental approach for modelling the cable; although that model was successful in predicting the heave response of the tow-body, instabilities were observed in the numerical solution. The present study devises a simple approach called the lumped mass spring model (LMSM) for the cable formulation. In this work, the traditional LMSM has been modified in two ways: first, by implementing advanced time integration procedures, and secondly, by using a modified beam model that employs only translational degrees of freedom for solving the beam equation. A number of time integration procedures, such as Euler, Houbolt, Newmark and HHT-α, were implemented in the traditional LMSM and the strengths and weaknesses of each scheme were numerically assessed. In most previous studies, the hydrodynamic forces acting on the tow-system, such as drag and lift, are approximated as analytical expressions of the velocities. This approach restricts such models to simple cylindrically shaped towed bodies and may not be applicable to modern tow systems, which are diverse in shape and complexity. Hence, in this study, hydrodynamic parameters such as the drag and lift of the tow-system are estimated using CFD techniques. To achieve this, a RANS-based CFD code has been developed. Further, a new convection interpolation scheme for CFD simulation, called BNCUS, which is a blend of cell-based and node-based formulations, was proposed and numerically tested. To account for the considerable time taken to solve the fluid dynamic equations, a dedicated parallel computing setup has been developed. Two types of computational parallelism are explored in the current study, namely models for shared-memory and distributed-memory processors: the shared-memory model was used for the structural dynamic analysis of the towing system, while the distributed-memory model was used to solve the fluid dynamic equations.
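Among the time integration procedures compared in the study, Newmark is the simplest to illustrate; the sketch below applies the Newmark-beta scheme to a single mass-spring-damper degree of freedom. The single-DOF system, its parameter values and the average-acceleration choice (beta = 0.25, gamma = 0.5) are illustrative assumptions, not the paper's multi-DOF cable model or its HHT-α variant.

```python
import numpy as np

def newmark_sdof(m, c, k, f, u0, v0, dt, n_steps, beta=0.25, gamma=0.5):
    """Newmark-beta time integration of m*u'' + c*u' + k*u = f(t) for a single
    degree of freedom - a minimal stand-in for one node of a lumped mass
    spring cable model."""
    u, v = u0, v0
    a = (f(0.0) - c * v - k * u) / m
    history = [u]
    k_eff = m / (beta * dt ** 2) + gamma * c / (beta * dt) + k
    for i in range(1, n_steps + 1):
        t = i * dt
        rhs = (f(t)
               + m * (u / (beta * dt ** 2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
               + c * (gamma * u / (beta * dt) + (gamma / beta - 1.0) * v
                      + dt * (gamma / (2.0 * beta) - 1.0) * a))
        u_new = rhs / k_eff
        a_new = (u_new - u) / (beta * dt ** 2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
        u, v, a = u_new, v_new, a_new
        history.append(u)
    return np.array(history)

# Free vibration of a lightly damped unit mass on a unit stiffness spring.
resp = newmark_sdof(m=1.0, c=0.05, k=1.0, f=lambda t: 0.0,
                    u0=1.0, v0=0.0, dt=0.05, n_steps=200)
print(resp[:5])
```

The average-acceleration variant used here is unconditionally stable for linear problems, which is why implicit Newmark-type schemes are attractive for stiff cable dynamics compared with explicit Euler stepping.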
Abstract:
We consider numerical methods for the compressible time-dependent Navier-Stokes equations, discussing the spatial discretization by Finite Volume and Discontinuous Galerkin methods, the time integration by time-adaptive implicit Runge-Kutta and Rosenbrock methods, and the solution of the resulting nonlinear and linear equation systems by preconditioned Jacobian-Free Newton-Krylov as well as multigrid methods. As applications, thermal fluid-structure interaction and other unsteady flow problems are considered. The text is aimed at both mathematicians and engineers.
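The Jacobian-free Newton-Krylov idea mentioned above rests on approximating Jacobian-vector products with a finite difference, so no Jacobian matrix is ever assembled. A minimal unpreconditioned sketch is shown below; the toy two-equation residual F, the epsilon value and the absence of a preconditioner are all simplifying assumptions.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_solve(F, u0, tol=1e-8, max_newton=20, eps=1e-7):
    """Unpreconditioned Jacobian-free Newton-Krylov sketch: each Newton step
    solves J du = -F(u) with GMRES, where the product J*v is approximated by
    the finite difference (F(u + eps*v) - F(u)) / eps."""
    u = np.array(u0, dtype=float)
    for _ in range(max_newton):
        r = F(u)
        if np.linalg.norm(r) < tol:
            break
        J = LinearOperator((u.size, u.size),
                           matvec=lambda v, u=u, r=r: (F(u + eps * v) - r) / eps)
        du, info = gmres(J, -r)
        u = u + du
    return u

# Toy two-equation nonlinear system standing in for a discretized flow residual.
def F(u):
    return np.array([u[0] ** 2 + u[1] - 3.0,
                     u[0] + u[1] ** 2 - 5.0])

print(jfnk_solve(F, [1.0, 1.0]))  # converges to (1, 2)
```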
Abstract:
In this paper, we consider ATM networks in which the virtual path (VP) concept is implemented. The question of how to multiplex two or more diverse traffic classes while providing different quality of service (QOS) requirements is a complicated open problem. Two distinct options are available: integration and segregation. In the integration approach, all the traffic from different connections is multiplexed onto one VP, which implies that the most restrictive QOS requirements must be applied to all services; link utilization is therefore decreased because an unnecessarily stringent QOS is provided to all connections. With the segregation approach the problem can be much simplified if the different types of traffic are separated by assigning each a VP with dedicated resources (buffers and links); resources may then not be efficiently utilized because no sharing of bandwidth can take place across VPs. The probability that the bandwidth required by the accepted connections exceeds the capacity of the link is evaluated as the probability of congestion (PC). Since the PC can be expressed as the cell loss probability (CLP), we simply carry out bandwidth allocation using the PC. We first focus on the influence of some parameters (CLP, bit rate and burstiness) on the capacity required by a VP supporting a single traffic class, using the new convolution approach. Numerical results are presented both to compare the required capacity and to observe under which conditions each approach is preferred.
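The convolution approach referred to above can be sketched as follows, under the common simplifying assumption that each connection behaves as an on-off source characterized by its peak rate and burstiness; the traffic figures in the example are illustrative.

```python
import numpy as np

def congestion_probability(connections, capacity, unit=0.5):
    """Convolution-approach sketch: each connection is an on-off source with a
    peak bit rate and an activity probability (mean-to-peak ratio, i.e.
    burstiness). The aggregate bandwidth distribution is built by successive
    convolutions and the probability of congestion (PC) is the probability
    mass above the link capacity. Rates are discretized in steps of `unit`."""
    dist = np.array([1.0])
    for peak, p_active in connections:
        src = np.zeros(int(round(peak / unit)) + 1)
        src[0] = 1.0 - p_active    # source silent
        src[-1] = p_active         # source transmitting at its peak rate
        dist = np.convolve(dist, src)
    rates = np.arange(dist.size) * unit
    return dist[rates > capacity].sum()

# Example: 20 bursty 2 Mbit/s sources (burstiness 0.2) on a 34 Mbit/s link.
print(congestion_probability([(2.0, 0.2)] * 20, capacity=34.0))
```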
Abstract:
The proposal presented in this thesis is to provide designers of knowledge-based supervisory systems for dynamic systems with a framework that facilitates their tasks, avoiding interface problems among tools, data flow and management. The approach is intended to be useful to both control and process engineers by assisting them in their tasks. The use of AI technologies for the diagnosis and performance of control loops and, of course, for assisting process supervisory tasks such as fault detection and diagnosis, is within the scope of this work. Special effort has been put into the integration of tools for assisting the design of expert supervisory systems. With this aim, the experience of Computer Aided Control Systems Design (CACSD) frameworks has been analysed and used to design a Computer Aided Supervisory Systems (CASSD) framework. In this sense, some basic facilities are required to be available in the proposed framework.
Abstract:
Estimating the magnitude of Agulhas leakage, the volume flux of water from the Indian to the Atlantic Ocean, is difficult because of the presence of other circulation systems in the Agulhas region. Indian Ocean water in the Atlantic Ocean is vigorously mixed and diluted in the Cape Basin. Eulerian integration methods, where the velocity field perpendicular to a section is integrated to yield a flux, have to be calibrated so that only the flux due to Agulhas leakage is sampled. Two Eulerian methods for estimating the magnitude of Agulhas leakage are tested within a high-resolution two-way nested model, with the goal of devising a mooring-based measurement strategy. At the GoodHope line, a section halfway through the Cape Basin, the integrated velocity perpendicular to that line is compared to the magnitude of Agulhas leakage as determined from the transport carried by numerical Lagrangian floats. In the first method, integration is limited to the flux of water warmer and more saline than specific threshold values. These threshold values are determined by maximizing the correlation with the float-determined time series. By using the threshold values, approximately half of the leakage can be measured directly; the total amount of Agulhas leakage can then be estimated using a linear regression, within a 90% confidence band of 12 Sv. In the second method, a subregion of the GoodHope line is sought so that integration over that subregion yields an Eulerian flux as close to the float-determined leakage as possible. It appears that, when the integration is limited within the model to the upper 300 m of the water column and to within 900 km of the African coast, the time series have the smallest root-mean-square difference. This method yields a root-mean-square error of only 5.2 Sv, but the 90% confidence band of the estimate is 20 Sv. It is concluded that the optimum thermohaline threshold method leads to more accurate estimates, even though the directly measured transport is a factor of two lower than the actual magnitude of Agulhas leakage in this model.
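A minimal sketch of the first, thermohaline-threshold method and its regression calibration is given below; the threshold function, the synthetic time series and all numerical values are illustrative stand-ins for the model fields and float-derived transports used in the study.

```python
import numpy as np

def threshold_transport(v, temp, salt, cell_area, t_thr, s_thr):
    """Eulerian threshold estimate (sketch): integrate the velocity normal to
    the section only where the water is warmer and more saline than the
    thresholds; v, temp, salt and cell_area are 2-D section arrays
    (depth x distance) and the result is a transport in Sv (1 Sv = 1e6 m^3/s)."""
    mask = (temp > t_thr) & (salt > s_thr)
    return np.sum(v * cell_area * mask) / 1e6

# Applying threshold_transport to each model snapshot would give a partial
# Eulerian time series; here a synthetic one (capturing roughly half of a
# synthetic float-derived leakage series, as in the abstract) is calibrated
# by linear regression.
rng = np.random.default_rng(0)
leakage_floats = 15.0 + 3.0 * rng.standard_normal(120)              # Sv
eulerian_partial = 0.5 * leakage_floats + rng.standard_normal(120)  # Sv
slope, intercept = np.polyfit(eulerian_partial, leakage_floats, 1)
residual = slope * eulerian_partial + intercept - leakage_floats
print(f"regression slope {slope:.2f}, rmse {np.sqrt(np.mean(residual**2)):.1f} Sv")
```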
Abstract:
Exact error estimates for the evaluation of multi-dimensional integrals are considered. An estimate is called exact if the rates of convergence of the lower- and upper-bound estimates coincide. An algorithm with such an exact rate is called optimal and has an unimprovable rate of convergence. The existence of exact estimates and optimal algorithms is discussed for some functional spaces that define the regularity of the integrand. Data classes important for practical computations are considered: classes of functions with bounded derivatives and Hölder-type conditions. The aim of the paper is to analyse the performance of two optimal classes of algorithms, deterministic and randomized, for computing multidimensional integrals. It is also shown how the smoothness of the integrand can be exploited to construct better randomized algorithms.
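As one illustration of how extra structure can be exploited by a randomized method, the sketch below compares crude Monte Carlo with a stratified estimator on a smooth test integrand over the unit cube; the integrand, dimension and sample sizes are illustrative and are not the function classes analysed in the paper.

```python
import numpy as np

def plain_mc(f, dim, n, rng):
    # Crude Monte Carlo over the unit cube: O(n**-0.5) convergence.
    x = rng.random((n, dim))
    return f(x).mean()

def stratified_mc(f, dim, n_per_axis, rng):
    # Stratified Monte Carlo: one sample per cell of a regular grid of
    # n_per_axis**dim sub-cubes; for smooth integrands this reduces the
    # variance relative to crude Monte Carlo with the same sample count.
    grid = np.stack(np.meshgrid(*[np.arange(n_per_axis)] * dim,
                                indexing="ij"), axis=-1).reshape(-1, dim)
    x = (grid + rng.random(grid.shape)) / n_per_axis
    return f(x).mean()

# Smooth test integrand on [0,1]^3 with known integral (1 - cos(1))**3.
f = lambda x: np.prod(np.sin(x), axis=1)
exact = (1.0 - np.cos(1.0)) ** 3
rng = np.random.default_rng(0)
print(abs(plain_mc(f, 3, 8 ** 3, rng) - exact),
      abs(stratified_mc(f, 3, 8, rng) - exact))
```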
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the illumination model used determines the kernel of the equation under consideration. Nowadays, Monte Carlo methods are widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for solving the rendering equation in parallel. The integration domain of the rendering equation is a hemisphere, which we divide into a number of equal sub-domains of orthogonal spherical triangles; this domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum with a desired truncation (systematic) error, obtaining a fixed number of iterations, and the rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained using the stratified Monte Carlo method. This domain partitioning allows an easy parallel realization and leads to convergence improvement of the Monte Carlo method. The high-performance and Grid computing aspects of the corresponding Monte Carlo scheme are discussed.
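A minimal sketch of stratified Monte Carlo integration over the hemisphere is given below; for simplicity it stratifies a cosine-weighted (theta, phi) parameterization rather than the paper's partition into orthogonal spherical triangles, and the constant test radiance is an assumption.

```python
import numpy as np

def hemisphere_stratified_estimate(radiance, n_theta, n_phi, rng):
    """Stratified Monte Carlo estimate of the irradiance-like integral
    int_{hemisphere} L(omega) * cos(theta) d omega.
    The hemisphere is stratified in (cos^2 theta, phi), giving strata of equal
    probability under cosine-weighted sampling; one sample is drawn per stratum."""
    total = 0.0
    for i in range(n_theta):
        for j in range(n_phi):
            u = (i + rng.random()) / n_theta
            v = (j + rng.random()) / n_phi
            theta = np.arccos(np.sqrt(1.0 - u))   # cosine-weighted in theta
            phi = 2.0 * np.pi * v
            w = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
            total += radiance(w)
    # For cosine-weighted sampling the estimator reduces to pi * mean(L).
    return np.pi * total / (n_theta * n_phi)

# Constant radiance L = 1: the integral of cos(theta) over the hemisphere is pi.
rng = np.random.default_rng(0)
print(hemisphere_stratified_estimate(lambda w: 1.0, 8, 16, rng))
```

Because each stratum is sampled independently, the double loop parallelizes trivially across sub-domains, which is the property the hemisphere partitioning in the paper exploits.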
Abstract:
A series of numerical models have been used to investigate the predictability of atmospheric blocking for an episode selected from FGGE Special Observing Period I. Level II-b FGGE data have been used in the experiment. The blocking took place over the North Atlantic region and is a very characteristic example of high winter blocking. It is found that the very high resolution models developed at ECMWF manage, in a remarkable way, to predict the blocking event in great detail, even beyond one week. Although models with much lower resolution manage to predict the blocking phenomenon as such, the actual evolution differs greatly from the observed one, and consequently the practical value is substantially reduced. Wind observations from the geostationary satellites are shown to have a substantial impact on the forecast beyond 5 days, as does an extension of the integration domain to the whole globe. Quasi-geostrophic baroclinic models and, even more so, barotropic models are totally inadequate for predicting blocking except in its initial phase. The prediction experiment illustrates clearly that the efforts that have gone into the improvement of numerical prediction models over the last decades have been worthwhile.
Abstract:
This paper introduces the Baltex research programme and summarizes the associated numerical modelling work undertaken during the last five years. The research has broadly managed to clarify the main mechanisms determining the water and energy cycle in the Baltic region, such as the strong dependence upon the large-scale atmospheric circulation. It has further been shown that the Baltic Sea has a positive water balance, albeit with large interannual variations. The focus of the modelling studies has been the use of limited-area models at ultra-high resolution, driven by boundary conditions from global models or from reanalysis data sets. The programme has further initiated a comprehensive integration of atmospheric, land surface and hydrological modelling, incorporating snow, sea ice and special lake models. Other aspects of the programme include process studies, such as the role of deep convection, air-sea interaction and the handling of land surface moisture. Studies have also been undertaken to investigate synoptic and sub-synoptic events over the Baltic region, thus exploring the role of transient weather systems in the hydrological cycle. A special aspect has been the strong interest and commitment of the meteorological and hydrological services, because of the potentially large societal interest in operational applications of the research. As a result of this interest, special attention has been paid to data-assimilation aspects and to the use of new types of data such as SSM/I, GPS measurements and digital radar. A series of high-resolution data sets are being produced; one of these, a 1/6-degree daily precipitation climatology for the years 1996–1999, is a unique contribution. The specific research achievements presented in this volume of Meteorology and Atmospheric Physics are the result of a cooperative venture between 11 European research groups supported under the EU Framework programmes.