920 results for Numerical example
Abstract:
Fluid structure interaction, as applied to flexible structures, has wide application in diverse areas such as flutter in aircraft, flow in elastic pipes and blood vessels, and extrusion of metals through dies. However, a comprehensive computational model of these multi-physics phenomena is a considerable challenge. Until recently, work in this area focused on one phenomenon and represented the behaviour of the other more simply, even to the extent, in metal forming for example, that the deformation of the die is totally ignored. More recently, strategies for solving the full coupling between the fluid and solid mechanics behaviour have been developed. Conventionally, the computational modelling of fluid structure interaction is problematic, since computational fluid dynamics (CFD) is solved using finite volume (FV) methods and computational structural mechanics (CSM) is based entirely on finite element (FE) methods. In the past, the concurrent but rather disparate development paths for the finite element and finite volume methods have resulted in numerical software tools for CFD and CSM that are different in almost every respect. Hence, progress in modelling the emerging multi-physics problem of fluid structure interaction in a consistent manner is frustrated. Unless the fluid-structure coupling is one way, very weak, or both, transferring and filtering data from one mesh and solution procedure to another may lead to significant problems in computational convergence. Using a novel three-phase technique, the full interaction between the fluid and the dynamic structural response is represented. The procedure is demonstrated on some challenging applications in complex three-dimensional geometries involving aircraft flutter, metal forming and blood flow in arteries.
Abstract:
Numerical models are important tools used in engineering fields to predict the behaviour and the impact of physical elements. There may be advantages to be gained by combining Case-Based Reasoning (CBR) techniques with numerical models. This paper considers how CBR can be used as a flexible query engine to improve the usability of numerical models. In particular, it can help to solve inverse and mixed problems, and to solve constraint problems. We discuss this idea with reference to the illustrative example of a pneumatic conveyor problem. The paper describes example problems faced by design engineers in this context and the issues that need to be considered in this approach. Solving these problems requires methods to handle constraints in both the retrieval phase and the adaptation phase of a typical CBR cycle. We show approaches to the solution of these problems via a CBR tool.
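As a rough illustration of the retrieval-with-constraints idea described in the abstract above, the sketch below filters a small case base by constraint predicates before nearest-neighbour ranking. The pneumatic-conveyor attributes and values are invented for illustration and do not come from the paper.

```python
# Minimal sketch of constraint-filtered case retrieval for a CBR query
# engine (illustrative only; the attribute names are hypothetical).
import math

def retrieve(cases, query, constraints, k=1):
    """Return the k cases nearest to `query` that satisfy every constraint.

    cases:       list of dicts mapping attribute -> numeric value
    query:       dict of known attribute values (the inverse/mixed problem)
    constraints: list of predicates a candidate case must satisfy
    """
    feasible = [c for c in cases if all(p(c) for p in constraints)]
    def distance(case):
        return math.sqrt(sum((case[a] - v) ** 2 for a, v in query.items()))
    return sorted(feasible, key=distance)[:k]

# Hypothetical pneumatic-conveyor cases: pipe diameter (m), air flow (m3/s)
# and the observed pressure drop (kPa).
cases = [
    {"diameter": 0.10, "airflow": 0.5, "pressure_drop": 12.0},
    {"diameter": 0.15, "airflow": 0.7, "pressure_drop": 8.0},
    {"diameter": 0.20, "airflow": 0.9, "pressure_drop": 5.0},
]
# Inverse problem: which design achieves a pressure drop near 8 kPa,
# subject to the constraint diameter <= 0.18 m?
best = retrieve(cases, {"pressure_drop": 8.0},
                [lambda c: c["diameter"] <= 0.18])
```

Here the constraint is applied at retrieval time; the paper also handles constraints in the adaptation phase, which this sketch does not attempt.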
Abstract:
When studying heterogeneous aquifer systems, especially at the regional scale, a degree of generalization is anticipated. This can be due to sparse sampling regimes, complex depositional environments or lack of accessibility to measure the subsurface, and it can lead to an inaccurate conceptualization which is detrimental when applied to groundwater flow models. It is important that numerical models are based on observed and accurate geological information and do not rely on the distribution of artificial aquifer properties. This can still be problematic, as data will be modelled at a different scale to that at which it was collected. It is proposed here that integrating geophysics and upscaling techniques can assist in a more realistic and deterministic groundwater flow model. In this study, the sedimentary aquifer of the Lagan Valley in Northern Ireland is chosen due to intruding sub-vertical dolerite dykes. These dykes are of a lower permeability than the sandstone aquifer. The use of airborne magnetics allows the delineation of heterogeneities, confirmed by field analysis. Permeability measured at the field scale is then upscaled to different levels using a correlation with the geophysical data, creating equivalent parameters that can be directly imported into numerical groundwater flow models. These parameters include directional equivalent permeabilities and anisotropy. Several stages of upscaling are modelled using finite elements. Initial modelling is providing promising results, especially at the intermediate scale, suggesting an accurate distribution of aquifer properties. This deterministic methodology is being expanded to include stochastic methods of obtaining heterogeneity locations based on airborne geophysical data, through the Direct Sampling method of Multiple-Point Statistics (MPS). This method uses the magnetics as a training image to computationally determine a probabilistic occurrence of heterogeneity. There is also a need to apply the method to alternative geological contexts where the heterogeneity is of a higher permeability than the host rock.
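The directional equivalent permeabilities mentioned above can be illustrated with the classical layered-medium bounds: the arithmetic mean governs flow parallel to layering and the harmonic mean governs flow across it. The permeability values below are invented; they merely mimic a sandstone sequence cut by a low-permeability dyke.

```python
# Sketch of directional equivalent permeability for a layered block:
# flow parallel to layers -> arithmetic mean, flow across layers ->
# harmonic mean (classical bounds; values here are illustrative).
def arithmetic_mean(ks):
    return sum(ks) / len(ks)

def harmonic_mean(ks):
    return len(ks) / sum(1.0 / k for k in ks)

# Sandstone layers cut by a low-permeability dyke (permeabilities in mD).
ks = [100.0, 120.0, 0.1, 110.0]
k_parallel = arithmetic_mean(ks)   # equivalent k along the layers
k_normal = harmonic_mean(ks)       # equivalent k across the dyke
anisotropy = k_parallel / k_normal
```

The strong anisotropy produced by a single low-permeability dyke is exactly why delineating such features (here, via airborne magnetics) matters before upscaling.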
Abstract:
BACKGROUND: Assessing methodological quality of primary studies is an essential component of systematic reviews. Following a systematic review which used a domain based system [United States Preventative Services Task Force (USPSTF)] to assess methodological quality, a commonly used numerical rating scale (Downs and Black) was also used to evaluate the included studies and comparisons were made between quality ratings assigned using the two different methods. Both tools were used to assess the 20 randomized and quasi-randomized controlled trials examining an exercise intervention for chronic musculoskeletal pain which were included in the review. Inter-rater reliability and levels of agreement were determined using intraclass correlation coefficients (ICC). Influence of quality on pooled effect size was examined by calculating the between group standardized mean difference (SMD).
RESULTS: Inter-rater reliability indicated at least substantial levels of agreement for the USPSTF system (ICC 0.85; 95% CI 0.66, 0.94) and Downs and Black scale (ICC 0.94; 95% CI 0.84, 0.97). Overall level of agreement between tools (ICC 0.80; 95% CI 0.57, 0.92) was also good. However, the USPSTF system identified a number of studies (n = 3/20) as "poor" due to potential risks of bias. Analysis revealed substantially greater pooled effect sizes in these studies (SMD -2.51; 95% CI -4.21, -0.82) compared to those rated as "fair" (SMD -0.45; 95% CI -0.65, -0.25) or "good" (SMD -0.38; 95% CI -0.69, -0.08).
CONCLUSIONS: In this example, use of a numerical rating scale failed to identify studies at increased risk of bias, and could have potentially led to imprecise estimates of treatment effect. Although based on a small number of included studies within an existing systematic review, we found the domain based system provided a more structured framework by which qualitative decisions concerning overall quality could be made, and was useful for detecting potential sources of bias in the available evidence.
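For readers unfamiliar with the effect measure quoted above, the following minimal sketch computes a between-group standardized mean difference (Cohen's d with a pooled standard deviation). The group means, SDs and sample sizes are invented for illustration and are not taken from the review.

```python
# Worked sketch of the between-group standardized mean difference
# (Cohen's d with pooled SD), the effect measure quoted in the abstract.
# The sample numbers are invented for illustration.
import math

def pooled_sd(s1, n1, s2, n2):
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def smd(m1, s1, n1, m2, s2, n2):
    return (m1 - m2) / pooled_sd(s1, n1, s2, n2)

# Exercise group: mean pain score 3.0 (SD 1.2, n 25);
# control group: mean 4.0 (SD 1.3, n 25).
d = smd(3.0, 1.2, 25, 4.0, 1.3, 25)   # negative favours the intervention
```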
Abstract:
We present a comprehensive model for predicting the full performance of a second harmonic generation-optical parametric amplification system that aims at enhancing the temporal contrast of laser pulses. The model simultaneously takes into account all the main parameters at play in the system, such as the group velocity mismatch, the beam divergence, the spectral content, the pump depletion, and the length of the nonlinear crystals. We monitor the influence of the initial parameters of the input pulse and the interdependence of the two related non-linear processes on the performance of the system, and show its optimum configuration. The influence of the initial beam divergence on the spectral and the temporal characteristics of the generated pulse is discussed. In addition, we show that using a crystal slightly longer than the optimum length and introducing a small delay between the seed and the pump ensures maximum efficiency and compensates for the spectral shift in the optical parametric amplification stage in the case of a chirped input pulse. As an example, calculations for transform-limited and chirped pulses of sub-picosecond duration in a beta barium borate crystal are presented.
Abstract:
The stamping industry has shown a growing interest in numerical simulations of sheet-metal forming processes, including inverse engineering methods. This is mainly because the trial-and-error techniques widely used in the past are no longer economically competitive. The use of simulation codes is now common practice in industry, since the results typically obtained with codes based on the Finite Element Method (FEM) are well accepted by the industrial and scientific communities. To obtain accurate stress and strain fields, an efficient FEM analysis requires correct input data, such as geometries, meshes, non-linear constitutive laws, loads, friction laws, etc. Inverse problems can be considered in order to overcome these difficulties. In the present work, the following inverse problems in computational mechanics are presented and analysed: (i) parameter identification problems, which concern the determination of input parameters to be used subsequently in constitutive models in numerical simulations, and (ii) problems of initial geometric definition of blanks and tools, in which the goal is to determine the initial shape of a blank or a tool so as to obtain a given geometry after a forming process. New optimization strategies are introduced and implemented, leading to more accurate constitutive-model parameters. The aim of these strategies is to exploit the strengths of each algorithm and to improve the overall efficiency of classical optimization methods, which are based on single-stage processes. Deterministic algorithms, algorithms inspired by evolutionary processes, or combinations of the two are used in the proposed strategies.
Cascade, parallel and hybrid strategies are presented in detail, the hybrid strategies consisting of combinations of cascade and parallel ones. Two distinct methods for evaluating the objective function in parameter-identification processes are presented and analysed: a single-point analysis and a finite element analysis. The single-point evaluation characterizes an infinitesimal amount of material subjected to a given deformation history. In the finite element analysis, by contrast, the constitutive model is implemented and considered at every integration point. Inverse problems such as the geometric definition of blanks and tools are presented and described. Regarding the optimization of the initial shape of a metal blank, the definition of the initial blank shape for forming a crankcase (cárter) component is taken as the case study. In this context, a study of the influence of the initial geometric definition of the blank on the optimization process is also carried out, using a NURBS formulation to define the upper face of the metal blank, whose geometry changes during the plastic-forming process. For tool optimization, a two-stage forging process is presented. With the aim of obtaining a perfect cylinder after forging, two distinct methods are considered: in the first, the initial shape of the cylinder is optimized, and in the second, the shape of the first-stage forming tool is optimized. Different methods are used to parametrize the free surface of the cylinder, and different parametrizations are also used to define the tool. The optimization strategies proposed in this work efficiently solve optimization problems for the metal-forming industry.
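A minimal sketch of the cascade idea described above, assuming a stand-in objective function rather than a real constitutive-model fit: a coarse random (evolution-inspired) first stage feeds its best candidate to a simple deterministic local refinement.

```python
# Hedged sketch of a two-stage "cascade" identification strategy.
# The objective here is an invented stand-in for the gap between a
# simulated and a measured material response; the true optimum is
# placed at (1.3, -0.7) for illustration.
import random

def objective(p):
    return (p[0] - 1.3) ** 2 + (p[1] + 0.7) ** 2

def cascade_identify(seed=0):
    rng = random.Random(seed)
    # stage 1: coarse global (random) sampling
    best = min(([rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(200)),
               key=objective)
    # stage 2: deterministic coordinate-descent refinement
    step = 0.5
    while step > 1e-6:
        improved = False
        for i in (0, 1):
            for s in (+step, -step):
                trial = best[:]
                trial[i] += s
                if objective(trial) < objective(best):
                    best, improved = trial, True
        if not improved:
            step /= 2
    return best

p = cascade_identify()
```

The point of the cascade is visible even in this toy: the cheap global stage avoids poor local basins, while the local stage delivers the accuracy a single-stage method of either kind would struggle to combine.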
Abstract:
This thesis studies properties and applications of different generalized Appell polynomials in the framework of Clifford analysis. As an example of 3D-quasi-conformal mappings realized by generalized Appell polynomials, an analogue of the complex Joukowski transformation of order two is introduced. The consideration of a Pascal n-simplex with hypercomplex entries allows stressing the combinatorial relevance of hypercomplex Appell polynomials. The concept of totally regular variables and its relation to generalized Appell polynomials leads to the construction of new bases for the space of homogeneous holomorphic polynomials whose elements are all isomorphic to the integer powers of the complex variable. For this reason, such polynomials are called pseudo-complex powers (PCP). Different variants of them are subject of a detailed investigation. Special attention is paid to the numerical aspects of PCP. An efficient algorithm based on complex arithmetic is proposed for their implementation. In this context a brief survey on numerical methods for inverting Vandermonde matrices is presented and a modified algorithm is proposed which illustrates advantages of a special type of PCP. Finally, combinatorial applications of generalized Appell polynomials are emphasized. The explicit expression of the coefficients of a particular type of Appell polynomials and their relation to a Pascal simplex with hypercomplex entries are derived. The comparison of two types of 3D Appell polynomials leads to the detection of new trigonometric summation formulas and combinatorial identities of Riordan-Sofo type characterized by their expression in terms of central binomial coefficients.
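As one example of the Vandermonde-inversion methods such a survey typically covers, the sketch below solves V a = f (with V[i][j] = xs[i]**j) via a Björck–Pereyra-style pair of passes through Newton divided differences. It is an illustrative stand-in, not the modified algorithm proposed in the thesis.

```python
# Sketch of a Bjorck-Pereyra-type solve of the Vandermonde system
# V a = f, where V[i][j] = xs[i]**j (polynomial interpolation).
def solve_vandermonde(xs, fs):
    n = len(xs)
    a = list(fs)
    # forward pass: Newton divided-difference coefficients
    for k in range(1, n):
        for i in range(n - 1, k - 1, -1):
            a[i] = (a[i] - a[i - 1]) / (xs[i] - xs[i - k])
    # backward pass: Newton form -> monomial coefficients
    for k in range(n - 2, -1, -1):
        for i in range(k, n - 1):
            a[i] -= xs[k] * a[i + 1]
    return a

# Interpolating f(x) = x**2 at x = 0, 1, 2 recovers coefficients [0, 0, 1].
coeffs = solve_vandermonde([0.0, 1.0, 2.0], [0.0, 1.0, 4.0])
```

This O(n^2) structured solve is the baseline against which specialised bases, such as the pseudo-complex powers studied in the thesis, are typically compared.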
Abstract:
This article is concerned with the numerical simulation of flows at low Mach numbers which are subject to the gravitational force and strong heat sources. As a specific example for such flows, a fire event in a car tunnel will be considered in detail. The low Mach flow is treated with a preconditioning technique allowing the computation of unsteady flows, while the source terms for gravitation and heat are incorporated via operator splitting. It is shown that a first order discretization in space is not able to compute the buoyancy forces properly on reasonable grids. The feasibility of the method is demonstrated on several test cases.
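The operator-splitting treatment of the source terms mentioned above can be illustrated on a toy ODE, du/dt = -a*u + q, advanced by Lie splitting: one substep for the decay operator, one for the source. This is a sketch of the splitting idea only, not the paper's preconditioned low Mach solver.

```python
# Toy illustration of operator (Lie) splitting as used for incorporating
# source terms: advance the "flow" operator and the "source" operator
# alternately within each time step. The ODE du/dt = -a*u + q is split
# into a decay part (-a*u) and a source part (q).
import math

def lie_split_step(u, a, q, dt):
    u = u * math.exp(-a * dt)   # exact substep for the decay operator
    u = u + q * dt              # explicit substep for the source term
    return u

u, a, q, dt = 1.0, 1.0, 0.5, 1e-3
for _ in range(1000):           # integrate to t = 1
    u = lie_split_step(u, a, q, dt)
# Exact solution at t = 1 is q/a + (u0 - q/a)*exp(-a*t) ~ 0.684; the
# first-order splitting error is small for this dt.
```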
Abstract:
This work demonstrates how partial evaluation can be put to practical use in the domain of high-performance numerical computation. I have developed a technique for performing partial evaluation by using placeholders to propagate intermediate results. For an important class of numerical programs, a compiler based on this technique improves performance by an order of magnitude over conventional compilation techniques. I show that by eliminating inherently sequential data-structure references, partial evaluation exposes the low-level parallelism inherent in a computation. I have implemented several parallel scheduling and analysis programs that study the tradeoffs involved in the design of an architecture that can effectively utilize this parallelism. I present these results using the 9-body gravitational attraction problem as an example.
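A toy sketch of the partial-evaluation idea in a much simpler setting than the thesis: specializing a power function for a known exponent so the repeated multiplication is folded away before run time.

```python
# Tiny sketch of partial evaluation: specialize x**n for a known
# exponent by unrolling the loop into a fixed expression at
# "compile time". Illustrative only; the thesis's placeholder-based
# compiler is far more general.
def specialize_power(n):
    """Build a function computing x**n with the loop unrolled away."""
    expr = "1.0"
    for _ in range(n):
        expr = f"({expr} * x)"
    return eval(f"lambda x: {expr}")

square = specialize_power(2)   # behaves like lambda x: ((1.0 * x) * x)
cube = specialize_power(3)
```

The specialized functions contain no loop or exponent test at run time, which is the essence of how partial evaluation removes sequential bookkeeping and exposes the remaining arithmetic to parallel scheduling.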
Abstract:
These are the slides used in the joint lectures for MATH3018/MATH6111. They focus on the examples that do not appear in the course notes (see related material). Each lecture comes with example Matlab files that generate the figures used in the lectures.
Abstract:
A series of numerical models have been used to investigate the predictability of atmospheric blocking for an episode selected from FGGE Special Observing Period I. Level II-b FGGE data have been used in the experiment. The blocking took place over the North Atlantic region and is a very characteristic example of high winter blocking. It is found that the very high resolution models developed at ECMWF manage, in a remarkable way, to predict the blocking event in great detail, even beyond 1 week. Although models with much less resolution manage to predict the blocking phenomenon as such, the actual evolution differs very much from the observed one, and consequently the practical value is substantially reduced. Wind observations from the geostationary satellites are shown to have a substantial impact on the forecast beyond 5 days, as does an extension of the integration domain to the whole globe. Quasi-geostrophic baroclinic models and, even more so, barotropic models are totally inadequate for predicting blocking except in its initial phase. The prediction experiment illustrates clearly that the efforts which have gone into the improvement of numerical prediction models in the last decades have been worthwhile.
Abstract:
This paper presents a numerical model for predicting the evolution of the pattern of ionospheric convection in response to general time-dependent magnetic reconnection at the dayside magnetopause and in the cross-tail current sheet of the geomagnetic tail. The model quantifies the concepts of ionospheric flow excitation by Cowley and Lockwood (1992), assuming a uniform spatial distribution of ionospheric conductivity. The model is demonstrated using an example in which travelling reconnection pulses commence near noon and then move across the dayside magnetopause towards both dawn and dusk. Two such pulses, 8 min apart, are used and each causes the reconnection to be active for 1 min at every MLT that they pass over. This example demonstrates how the convection response to a given change in the interplanetary magnetic field (via the reconnection rate) depends on the previous reconnection history. The causes of this effect are explained. The inherent assumptions and the potential applications of the model are discussed.
Abstract:
We use the deformed sine-Gordon models recently presented by Bazeia et al [1] to take the first steps towards defining the concept of quasi-integrability. We consider one such definition and use it to calculate an infinite number of quasi-conserved quantities through a modification of the usual techniques of integrable field theories. Performing an expansion around the sine-Gordon theory we are able to evaluate the charges and the anomalies of their conservation laws in a perturbative power series in a small parameter which describes the "closeness" to the integrable sine-Gordon model. We show that in the case of the two-soliton scattering the charges, up to first order of perturbation, are conserved asymptotically, i.e. their values are the same in the distant past and future, when the solitons are well separated. We indicate that this property may hold or not to higher orders depending on the behavior of the two-soliton solution under a special parity transformation. For closely bound systems, such as breather-like field configurations, the situation however is more complex and perhaps the anomalies have a different structure implying that the concept of quasi-integrability does not apply in the same way as in the scattering of solitons. We back up our results with the data of many numerical simulations which also demonstrate the existence of long lived breather-like and wobble-like states in these models.
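A minimal sketch of the kind of numerical simulation mentioned above, assuming the undeformed sine-Gordon equation phi_tt = phi_xx - sin(phi) with a static kink initial condition, advanced by a simple leapfrog scheme. The grid parameters are illustrative, and the deformation studied in the paper is not included.

```python
# Minimal sine-Gordon simulation (phi_tt = phi_xx - sin phi) with the
# exact static kink phi(x) = 4*atan(exp(x)) as initial condition,
# advanced by leapfrog with fixed (Dirichlet) boundaries.
import math

N, L, dt = 200, 40.0, 0.02
dx = L / N                              # dt < dx keeps the scheme stable
x = [-L / 2 + i * dx for i in range(N)]

phi = [4.0 * math.atan(math.exp(xi)) for xi in x]
phi_prev = phi[:]                       # zero initial velocity

for _ in range(500):                    # advance to t = 10
    phi_next = phi[:]
    for i in range(1, N - 1):
        lap = (phi[i + 1] - 2 * phi[i] + phi[i - 1]) / dx**2
        phi_next[i] = (2 * phi[i] - phi_prev[i]
                       + dt**2 * (lap - math.sin(phi[i])))
    phi_prev, phi = phi, phi_next

# A static kink should stay put: phi still interpolates 0 -> 2*pi,
# with the centre near pi, up to small discretization radiation.
```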
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)