950 results for 3D numerical modeling
Abstract:
Ultrasonic non-destructive testing of components can encounter considerable difficulty in the interpretation of some inspection results, mainly in anisotropic crystalline structures. A numerical method for the simulation of elastic wave propagation in homogeneous, elastically anisotropic media, based on the general finite element approach, is used to support this interpretation. Successful modeling of the elastic field associated with NDE relies on the generation of a realistic pulsed ultrasonic wave, launched from a piezoelectric transducer into the material under inspection. The elastic constants are information of great interest: they enable the application of analytical models to problems of small and medium complexity, as well as numerical analysis programs based on finite elements and/or boundary elements. The aim of this work is to compare the numerical solution for an ultrasonic wave, obtained from a transient excitation pulse that can be specified either by force or by displacement variation across the aperture of the transducer, with the results of an experiment carried out on an aluminum block in the IEN Ultrasonic Laboratory. The wave propagation can be simulated using all the characteristics of the material used in the experimental evaluation, together with the boundary conditions, and from these results the comparison can be made.
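As a highly simplified, purely illustrative analog of the transient excitation described above (not the 3D anisotropic finite element model used in the work), the sketch below launches a raised-cosine displacement pulse from one end of a 1D finite-difference grid; the wave speed and grid parameters are assumed, aluminum-like values.

```python
import math

def simulate_pulse(nx=200, nt=300, c=6300.0, dx=1e-3, cfl=0.9):
    """Explicit finite-difference solution of the 1D wave equation
    u_tt = c^2 u_xx, with a transient displacement pulse prescribed at
    the left boundary standing in for the transducer excitation."""
    dt = cfl * dx / c                  # time step from the CFL condition
    r2 = (c * dt / dx) ** 2
    u_prev = [0.0] * nx
    u_curr = [0.0] * nx
    period = 40 * dt                   # hypothetical one-cycle pulse length
    for n in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            u_next[i] = (2.0 * u_curr[i] - u_prev[i]
                         + r2 * (u_curr[i + 1] - 2.0 * u_curr[i] + u_curr[i - 1]))
        t = (n + 1) * dt
        if t < period:                 # raised-cosine displacement pulse
            u_next[0] = 0.5 * (1.0 - math.cos(2.0 * math.pi * t / period))
        u_next[-1] = 0.0               # fixed far end of the block
        u_prev, u_curr = u_curr, u_next
    return u_curr

field = simulate_pulse()
```

A comparable experiment-versus-simulation study would replace the boundary pulse with the measured transducer signal and the fixed far end with the actual block geometry.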
Abstract:
The use of the Design by Analysis (DBA) route is a modern trend in international pressure vessel and piping codes in mechanical engineering. However, to apply DBA to structures under variable mechanical and thermal loads, it is necessary to ensure that the plastic collapse modes, alternating plasticity and incremental collapse (with instantaneous plastic collapse as a particular case), are precluded. The tool available to achieve this is shakedown theory. Unfortunately, practical numerical applications of shakedown theory result in very large nonlinear optimization problems with nonlinear constraints. Precise, robust and efficient algorithms and finite elements to solve this problem in finite dimension are a more recent achievement. However, to solve real problems at an industrial level, it is also necessary to consider more realistic material properties and to perform 3D analyses. Limited kinematic hardening is a typical property of common steels and should be considered in realistic applications. In this paper, a new finite element with internal thermodynamic variables to model kinematic hardening materials is developed and tested. This element is a mixed ten-node tetrahedron and, through an appropriate change of variables, it is possible to embed it in the shakedown analysis software developed by Zouain and co-workers for elastic ideally plastic materials, and then use it to perform 3D shakedown analysis in cases with limited kinematic hardening materials.
Abstract:
This thesis concerns the modeling of fluid-structure interactions and the associated numerical methods, and is accordingly divided into two parts. The first part studies fluid-structure interactions using the fictitious domain method. In this contribution, the fluid is incompressible and laminar and the structure is considered rigid, whether stationary or in motion. The tools we developed include the implementation of a reliable solution algorithm that integrates the two domains (fluid and solid) in a mixed formulation. The algorithm is based on adaptive local mesh-refinement techniques that better separate the elements of the fluid medium from those of the solid, in both 2D and 3D. The second part studies the mechanical interactions between a flexible structure and an incompressible fluid. In this contribution, we propose and analyze partitioned numerical methods for the simulation of fluid-structure interaction (FSI) phenomena, adopting the arbitrary Lagrangian-Eulerian (ALE) method. The fluid is solved iteratively using a projection-type scheme, and the structure is modeled by hyperelastic models in large deformations. We developed new mesh-motion methods to accommodate large structural deformations. Finally, a strategy for increasing the complexity of the FSI problem was defined: turbulence modeling and free-surface flows were introduced and coupled to the solution of the Navier-Stokes equations. Various numerical simulations are presented to illustrate the efficiency and robustness of the algorithm. The numerical results presented attest to the validity and efficiency of the numerical methods developed.
Abstract:
The ever-growing demand for digital data transfer drives the development of new technologies to increase network capacity, particularly in optical fiber networks. Among these new technologies, spatial multiplexing makes it possible to multiply the capacity of current optical links. We are particularly interested in a form of spatial multiplexing that uses the orbital angular momentum of light as an orthogonal basis to separate a number of channels. We first present the notions of electromagnetism and physics needed to understand the subsequent developments. Maxwell's equations are derived in order to explain the scalar and vector modes of the optical fiber. We also present other modal properties, namely mode cutoff and the group and dispersion indices. The notion of orbital angular momentum is then introduced, with particular attention to its applications in telecommunications. In the second part, we propose the modal map as a tool to aid the design of few-mode optical fibers. We develop the vector solution of the mode-cutoff equations for ring-core fibers, then generalize these equations to all three-layer fiber profiles. Finally, we give some examples of applications of the modal map. In the third part, we present fiber designs for the transmission of modes carrying orbital angular momentum, using the tools developed in the second part. A first fiber design, characterized by a hollow center, is studied and demonstrated. A second design, a family of fibers with a ring profile, is then studied. Effective-index and group-index measurements are performed on these fibers.
The tools and fibers developed have enabled a better understanding of the transmission in optical fiber of modes carrying orbital angular momentum. We hope these advances will soon help in the development of high-performance communication systems using spatial multiplexing.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
Structural Health Monitoring (SHM) is an emerging area of research concerned with improving the maintainability and safety of aerospace, civil and mechanical infrastructures by means of monitoring and damage detection. Guided-wave structural testing is an approach to health monitoring of plate-like structures using smart-material piezoelectric transducers. Among the many kinds of transducers, those with a beam-steering capability can perform more accurate surface interrogation. A frequency-steerable acoustic transducer (FSAT) is capable of beam steering by varying the input frequency and can consequently detect and localize damage in structures. Guided-wave inspection is typically performed with phased arrays, which involve a large number of piezoelectric transducers and the associated complexity and limitations. To overcome the weight penalty, complex circuitry and maintenance concerns associated with wiring a large number of transducers, new FSATs are proposed that present inherent directional capabilities when generating and sensing elastic waves. The first generation of spiral FSAT has two main limitations: waves are excited or sensed both in one direction and in the opposite one (180° ambiguity), and only a relatively crude approximation of the desired directivity has been attained. A second generation of spiral FSAT is proposed to overcome these limitations. Simulation tools become all the more important when a new idea is proposed and begins to be developed. The shaped-transducer concept, especially the second-generation spiral FSAT, is a novel idea in guided-wave-based Structural Health Monitoring systems, hence a simulation tool is needed to develop the various design aspects of this innovative transducer. In this work, numerical simulation of the first and second generations of spiral FSAT has been conducted to demonstrate the directional capability of the guided waves excited in a plate-like structure.
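For context on the beam steering that conventional phased arrays perform (which FSATs replace with frequency-dependent directivity), here is a minimal sketch of the standard linear-array delay law τ_i = i·p·sin(θ)/c. The element count, pitch and wave speed below are illustrative assumptions, not values from the work.

```python
import math

def steering_delays(n_elements, pitch, angle_deg, wave_speed):
    """Per-element firing delays (seconds) that steer a linear phased
    array's beam to angle_deg measured from the array normal."""
    theta = math.radians(angle_deg)
    return [i * pitch * math.sin(theta) / wave_speed for i in range(n_elements)]

# Hypothetical 8-element array, 0.6 mm pitch, steering 30 degrees
delays = steering_delays(n_elements=8, pitch=0.6e-3, angle_deg=30.0,
                         wave_speed=3100.0)
```

An FSAT achieves an equivalent directional effect through its spatial shape alone, by mapping excitation frequency to radiation direction, which is why no such delay electronics are needed.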
Abstract:
The present work consists of a detailed numerical analysis of a 4-way joint made of a precast column and two partially precast beams. The structure has been previously built and experimentally analyzed through a series of cyclic loads at the Laboratory of Tests on Structures (Laboratorio di Prove su Strutture, La. P. S.) of the University of Bologna. The aim of this work is to design a 3D model of the joint and then apply the techniques of nonlinear finite element analysis (FEA) to computationally reproduce the behavior of the structure under cyclic loads. Once the model has been calibrated to correctly emulate the joint, it is possible to obtain new insights useful to understand and explain the physical phenomena observed in the laboratory and to describe the properties of the structure, such as the cracking patterns, the force-displacement and the moment-curvature relations, as well as the deformations and displacements of the various elements composing the joint.
Abstract:
One of the biggest challenges contaminant hydrogeology faces is how to adequately address the uncertainty associated with model predictions. Uncertainty arises from multiple sources, such as interpretive error, calibration accuracy, parameter sensitivity and variability. This critical issue needs to be properly addressed in order to support environmental decision-making processes. In this study, we perform Global Sensitivity Analysis (GSA) on a contaminant transport model for the assessment of hydrocarbon concentration in groundwater. We provide a quantification of the environmental impact and, given the incomplete knowledge of the hydrogeological parameters, we evaluate which are the most influential and thus require greater accuracy in the calibration process. Parameters are treated as random variables and a variance-based GSA is performed in an optimized numerical Monte Carlo framework. The Sobol indices are adopted as sensitivity measures and are computed by employing meta-models to characterize the migration process while reducing the computational cost of the analysis. The proposed methodology allows us to extend the number of Monte Carlo iterations and to identify the influence of the uncertain parameters, with considerable savings in computational time at acceptable accuracy.
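A minimal sketch of a variance-based sensitivity computation in the spirit described above, using a pick-and-freeze Monte Carlo estimator on a hypothetical toy model (the actual transport model and its meta-models are not reproduced here):

```python
import random

def first_order_sobol(model, n_vars, which, n=20000, seed=1):
    """Pick-and-freeze Monte Carlo estimate of the first-order Sobol
    index of input `which`, for inputs uniform on [0, 1]."""
    rng = random.Random(seed)
    acc = 0.0
    ya_samples = []
    for _ in range(n):
        a = [rng.random() for _ in range(n_vars)]
        b = [rng.random() for _ in range(n_vars)]
        b_mix = list(b)
        b_mix[which] = a[which]        # freeze the coordinate under study
        ya = model(a)
        acc += ya * (model(b_mix) - model(b))
        ya_samples.append(ya)
    mean = sum(ya_samples) / n
    var = sum((y - mean) ** 2 for y in ya_samples) / n
    return (acc / n) / var            # V_i / V

# Hypothetical stand-in for the transport model: x0 dominates the variance
toy_model = lambda x: 4.0 * x[0] + x[1]
s0 = first_order_sobol(toy_model, n_vars=2, which=0)  # analytic value: 16/17
```

In the study's setting, `model` would be the (meta-model surrogate of the) concentration response, which is exactly what makes the extended number of Monte Carlo iterations affordable.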
Abstract:
Images acquired from unmanned aerial vehicles (UAVs) can provide data with unprecedented spatial and temporal resolution for three-dimensional (3D) modeling. Solutions developed for this purpose mainly operate on photogrammetry concepts, namely UAV-photogrammetry systems (UAV-PSs). Such systems are used in applications where both geospatial and visual information about the environment is required. These applications include, but are not limited to, natural resource management such as precision agriculture, military and police services such as traffic-law enforcement, precision engineering such as infrastructure inspection, and health services such as epidemic emergency management. UAV-photogrammetry systems can be differentiated based on their spatial characteristics in terms of accuracy and resolution. That is, some applications, such as precision engineering, require high-resolution, high-accuracy information about the environment (e.g. 3D modeling with less than one centimeter accuracy and resolution), while in other applications lower levels of accuracy might be sufficient (e.g. wildlife management needing a few decimeters of resolution). Even in the latter applications, however, the specific characteristics of UAV-PSs should be considered carefully in both system development and application in order to yield satisfying results. In this regard, this thesis presents a comprehensive review of the applications of unmanned aerial imagery, with the objective of determining the challenges that remote-sensing applications of UAV systems currently face. This review also made it possible to recognize the specific characteristics and requirements of UAV-PSs, which are mostly ignored or not thoroughly assessed in recent studies. Accordingly, the focus of the first part of this thesis is on exploring the methodological and experimental aspects of implementing a UAV-PS.
The developed system was extensively evaluated for precise modeling of an open-pit gravel mine and for performing volumetric-change measurements. This application was selected for two main reasons. Firstly, this case study provided a challenging environment for 3D modeling in terms of scale changes, terrain-relief variations, and structure and texture diversity. Secondly, open-pit-mine monitoring demands high levels of accuracy, which justified our efforts to improve the developed UAV-PS to its maximum capacity. The hardware of the system consisted of an electric-powered helicopter, a high-resolution digital camera and an inertial navigation system. The software included in-house programs specifically designed for camera calibration, platform calibration, system integration, onboard data acquisition, flight planning and ground control point (GCP) detection. The detailed features of the system are discussed in the thesis, and solutions are proposed to enhance the system and its photogrammetric outputs. The accuracy of the results was evaluated under various mapping conditions, including direct georeferencing and indirect georeferencing with different numbers, distributions and types of ground control points. Additionally, the effects of imaging configuration and network stability on modeling accuracy were assessed. The second part of this thesis concentrates on improving the techniques of sparse and dense reconstruction. The proposed solutions are alternatives to traditional aerial photogrammetry techniques, properly adapted to the specific characteristics of unmanned low-altitude imagery. Firstly, a method was developed for robust sparse matching and epipolar-geometry estimation. The main achievement of this method was its capacity to handle a very high percentage of outliers (errors among corresponding points) with remarkable computational efficiency compared to state-of-the-art techniques.
Secondly, a block bundle adjustment (BBA) strategy was proposed based on the integration of intrinsic camera calibration parameters as pseudo-observations into the Gauss-Helmert model. The principal advantage of this strategy was controlling the adverse effect of unstable imaging networks and noisy image observations on the accuracy of self-calibration. A sparse implementation of this strategy was also developed, allowing its application to data sets containing large numbers of tie points. Finally, the concept of intrinsic curves was revisited for dense stereo matching. The proposed technique achieves a high level of accuracy and efficiency by searching only a small fraction of the whole disparity search space and by handling occlusions and matching ambiguities internally. These photogrammetric solutions were extensively tested using synthetic data, close-range images and the images acquired from the gravel-pit mine. Achieving an absolute 3D mapping accuracy of 11±7 mm illustrates the success of this system for high-precision modeling of the environment.
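The thesis's robust matching method itself is not detailed in this abstract; as a generic illustration of how a consensus scheme tolerates a high percentage of outliers, here is a classic RANSAC sketch on a synthetic 2D line-fitting problem (a stand-in for epipolar-geometry estimation; all data are fabricated for the demo):

```python
import random

def ransac_line(points, iters=500, tol=0.1, seed=0):
    """Classic RANSAC: repeatedly fit y = m*x + c to a random minimal
    sample and keep the model with the largest inlier count."""
    rng = random.Random(seed)
    best, best_count = None, -1
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                   # degenerate minimal sample
        m = (y2 - y1) / (x2 - x1)
        c = y1 - m * x1
        count = sum(1 for x, y in points if abs(y - (m * x + c)) < tol)
        if count > best_count:
            best, best_count = (m, c), count
    return best, best_count

# Synthetic correspondences: 30 points on y = 2x + 1 plus 70 gross outliers
data_rng = random.Random(42)
pts = [(i / 10.0, 2.0 * (i / 10.0) + 1.0) for i in range(30)]
pts += [(data_rng.uniform(0, 3), data_rng.uniform(-5, 15)) for _ in range(70)]
(m, c), n_inliers = ransac_line(pts)
```

Even with 70% outliers, random minimal samples eventually hit an all-inlier pair, and the inlier count singles out the true model; the thesis's contribution is doing this far more efficiently than such brute-force sampling.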
Abstract:
As products and creative processes become increasingly digitally mediated, there has been recent reflection on the relationship between images and the tools used to produce them. The natural, close relationship between the conceptual and the physical dimension opens the discussion at the level of the semantics and the processes of designing and manipulating images, which naturally include CAD tools. Since drawing plays an unequivocal and fundamental role in design and in 3D modeling, it is pertinent to understand the relationship and articulation between these two tools. Recognizing drawing as a tool of the physical domain capable of expressing the thought that transforms abstract conceptions into concrete ones, recognizing it reflected in the virtual dimension through 3D CAD software is not trivial, since the latter is generally processed through thinking whose context is distant from materiality. Methodologically, we approach this question by seeking to verify the hypothesis through a practical exercise designed to evaluate the effect that analog images may have on the recognition and operability of the Blender tool in an academic setting. The aim is thus to understand how analog drawing can be integrated into the 3D modeling process and what relationship it maintains with those who operate with it. The articulation of drawing with design production tools, specifically 3D CAD, will allow a deeper understanding of the articulation between tools of different natures, both in the design process and in the creation of visual artifacts. It may also open the discussion on pedagogical strategies for teaching drawing and 3D in a design course.
Abstract:
Modeling cryolite, which is used in aluminum production, involves several challenges, notably the presence of discontinuities in the solution and the inclusion of the density difference between the solid and liquid phases. To overcome these challenges, several novel elements were developed in this thesis. First, the phase-change problem, commonly called the Stefan problem, was solved in two dimensions using the extended finite element method. A formulation using a specially developed stable Lagrange multiplier and an enriched interpolation was used to impose the melting temperature at the interface. The interface velocity is determined by the jump in the heat flux across the interface and was computed using the Lagrange multiplier solution. Second, convective effects were included by solving the Stokes equations in the liquid phase, also with the extended finite element method. Third, the density change between the solid and liquid phases, generally neglected in the literature, was taken into account by adding a non-zero velocity boundary condition at the solid-liquid interface so as to respect conservation of mass in the system. Analytical and numerical problems were solved to validate the various components of the model and the coupled system of equations. The solutions to the numerical problems were compared with solutions obtained with Comsol's moving-mesh algorithm. These comparisons show that the extended finite element model correctly reproduces the phase-change problem with variable densities.
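The XFEM formulation above is far beyond a short sketch, but the underlying Stefan problem can be illustrated with a simple 1D explicit enthalpy method (dimensionless, with unit material properties assumed; no convection and no density change, unlike the thesis):

```python
def stefan_enthalpy(nx=60, nt=2000, length=1.0, alpha=1.0, latent=1.0,
                    t_hot=1.0, t_melt=0.0):
    """Explicit 1D enthalpy method: each cell stores total enthalpy H;
    a cell stays at the melting temperature until it has absorbed the
    latent heat, then warms up (unit heat capacity assumed)."""
    dx = length / nx
    dt = 0.4 * dx * dx / alpha                  # below explicit stability limit
    H = [0.0] * nx                              # solid, at the melting point

    def temp(h):
        return t_melt + (h - latent) if h > latent else t_melt

    for _ in range(nt):
        T = [temp(h) for h in H]
        Hn = list(H)
        for i in range(nx):
            tl = t_hot if i == 0 else T[i - 1]      # hot wall on the left
            tr = T[i + 1] if i < nx - 1 else T[i]   # insulated right end
            Hn[i] += dt * alpha * (tl - 2.0 * T[i] + tr) / (dx * dx)
        H = Hn
    # Melt front: first cell that has not yet absorbed the latent heat
    front = next((i for i, h in enumerate(H) if h < latent), nx) * dx
    return front, H

front, H = stefan_enthalpy()
```

The enthalpy method captures the moving front implicitly on a fixed grid, whereas the thesis's XFEM approach represents the interface explicitly and imposes the melting temperature there with a Lagrange multiplier.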
Abstract:
This dissertation contains four essays that share a common purpose: developing new methodologies to exploit the potential of high-frequency data for the measurement, modeling and forecasting of the volatility and correlations of financial assets. The first two chapters provide useful tools for univariate applications, while the last two chapters develop multivariate methodologies. In chapter 1, we introduce a new class of univariate volatility models named FloGARCH models. FloGARCH models provide a parsimonious joint model for low-frequency returns and realized measures, and are sufficiently flexible to capture long memory as well as asymmetries related to leverage effects. We analyze the performance of the models in a realistic numerical study and on the basis of a data set composed of 65 equities. Using more than 10 years of high-frequency transactions, we document significant statistical gains related to the FloGARCH models in terms of in-sample fit, out-of-sample fit and forecasting accuracy compared to classical and Realized GARCH models. In chapter 2, using 12 years of high-frequency transactions for 55 U.S. stocks, we argue that combining low-frequency exogenous economic indicators with high-frequency financial data improves the ability of conditionally heteroskedastic models to forecast the volatility of returns, their full multi-step-ahead conditional distribution and the multi-period Value-at-Risk. Using a refined version of the Realized LGARCH model allowing for a time-varying intercept and implemented with realized kernels, we document that nominal corporate profits and term spreads have strong long-run predictive ability and generate accurate risk-measure forecasts over long horizons. The results are based on several loss functions and tests, including the Model Confidence Set. Chapter 3 is joint work with David Veredas.
We study the class of disentangled realized estimators for the integrated covariance matrix of Brownian semimartingales with finite-activity jumps. These estimators separate correlations and volatilities. We analyze different combinations of quantile- and median-based realized volatilities, and four estimators of realized correlations with three synchronization schemes. Their finite-sample properties are studied under four data-generating processes, in the presence or absence of microstructure noise, and under synchronous and asynchronous trading. The main finding is that the pre-averaged version of the disentangled estimators based on Gaussian ranks (for the correlations) and median deviations (for the volatilities) provides a precise, computationally efficient, and easy alternative for measuring integrated covariances from noisy and asynchronous prices. Along these lines, a minimum-variance portfolio application shows the superiority of this disentangled realized estimator in terms of numerous performance metrics. Chapter 4 is co-authored with Niels S. Hansen, Asger Lunde and Kasper V. Olesen, all affiliated with CREATES at Aarhus University. We propose to use the Realized Beta GARCH model to exploit the potential of high-frequency data in commodity markets. The model produces high-quality forecasts of pairwise correlations between commodities, which can be used to construct a composite covariance matrix. We evaluate the quality of this matrix in a portfolio context and compare it to models used in the industry. We demonstrate significant economic gains in a realistic setting including short-selling constraints and transaction costs.
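For the minimum-variance portfolio applications mentioned above, the unconstrained weights follow from w = Σ⁻¹1 / (1ᵀΣ⁻¹1); here is a self-contained sketch with a hypothetical 3-asset covariance matrix (illustrative numbers, not data from the study):

```python
def min_variance_weights(cov):
    """Global minimum-variance portfolio: solve cov @ x = 1 by
    Gauss-Jordan elimination, then normalize so weights sum to one
    (no short-sale constraint)."""
    n = len(cov)
    a = [row[:] + [1.0] for row in cov]      # augmented system [cov | 1]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))   # partial pivot
        a[i], a[p] = a[p], a[i]
        for r in range(n):
            if r != i:
                f = a[r][i] / a[i][i]
                a[r] = [x - f * y for x, y in zip(a[r], a[i])]
    x = [a[i][n] / a[i][i] for i in range(n)]
    s = sum(x)
    return [xi / s for xi in x]

# Hypothetical covariance matrix, e.g. built from realized estimators
cov = [[0.04, 0.01, 0.00],
       [0.01, 0.09, 0.02],
       [0.16, 0.02, 0.16][0:1] + [0.02, 0.16]]
cov[2] = [0.00, 0.02, 0.16]
w = min_variance_weights(cov)
```

A better covariance estimate (the point of the disentangled estimators) translates directly into better-conditioned weights; in practice the realistic setting also imposes short-selling constraints, which this closed-form sketch omits.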
Abstract:
This research focuses on finding a fashion design methodology to reliably translate innovative two-dimensional ideas on paper, via a structural design sculpture, into an intermediate model. The author, both a fashion designer and a researcher, has witnessed the issues that arise regarding the loss of some of the initial ideas, and their distortion, during the transfer from two-dimensional creative sketch to three-dimensional garment. This research therefore concerns fashion designers engaged in transferring a two-dimensional sketch through the method of 'sculptural form giving'. The research method applies the ideal model of conceptual sculpture to the fashion design process, akin to those used in architecture; these parallel design disciplines share similar processes for realizing design ideas. Moreover, this research investigates and formalizes processes that utilize the measurable space between the garment and the body to help transfer garment variation and scale. In summary, this research focuses on producing a creative method that helps fashion designers transfer their imaginative concepts through intermediate modeling.
Abstract:
The blast furnace is the world's main ironmaking production unit; it converts iron ore, with coke and hot blast, into liquid iron (hot metal), which is used for steelmaking. The furnace acts as a counter-current reactor charged with layers of raw material of very different gas permeability. The arrangement of these layers, or burden distribution, is the most important factor influencing the gas flow conditions inside the furnace, which dictate the efficiency of the heat transfer and reduction processes. For proper control, the furnace operators should know the overall conditions in the furnace and be able to predict how control actions affect its state. However, due to the high temperatures and pressure, hostile atmosphere and mechanical wear, it is very difficult to measure internal variables. Instead, the operators have to rely extensively on measurements obtained at the boundaries of the furnace and make their decisions on the basis of heuristic rules and results from mathematical models. It is particularly difficult to understand the distribution of the burden materials because of the complex behavior of the particulate materials during charging. The aim of this doctoral thesis is to clarify some aspects of burden distribution and to develop tools that can aid the decision-making process in the control of the burden and gas distribution in the blast furnace. A relatively simple mathematical model was created to simulate the distribution of the burden material with a bell-less top charging system. The model developed is fast, and it can therefore be used by the operators to gain understanding of the formation of layers for different charging programs. The results were verified against findings from charging experiments using a small-scale charging rig in the laboratory. A basic gas flow model was developed which uses the results of the burden distribution model to estimate the gas permeability of the upper part of the blast furnace.
This combined formulation for gas and burden distribution made it possible to implement a search for the best combination of charging parameters to achieve a target gas temperature distribution. As this mathematical task is discontinuous and non-differentiable, a genetic algorithm was applied to solve the optimization problem. It was demonstrated that the method was able to evolve optimal charging programs that fulfilled the target conditions. Even though the burden distribution model provides information about the layer structure, it neglects some effects which influence the results, such as mixed layer formation and coke collapse. A more accurate numerical method for studying particle mechanics, the Discrete Element Method (DEM), was used to study some aspects of the charging process more closely. Model charging programs were simulated using DEM and compared with the results from small-scale experiments. The mixed layer was defined and the voidage of mixed layers was estimated. The mixed layer was found to have about 12% less voidage than layers of the individual burden components. Finally, a model for predicting the extent of coke collapse when heavier pellets are charged over a layer of lighter coke particles was formulated based on slope stability theory, and was used to update the coke layer distribution after charging in the mathematical model. In designing this revision, results from DEM simulations and charging experiments for some charging programs were used. The findings from the coke collapse analysis can be used to design charging programs with more stable coke layers.
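A toy sketch of the genetic-algorithm idea described above: real-valued genes (standing in for charging parameters) are evolved to minimize the mismatch with a hypothetical target profile. The operators, rates and the five-point "temperature profile" are illustrative choices, not those of the thesis.

```python
import random

def genetic_search(fitness, n_genes, pop_size=40, gens=120, seed=3):
    """Minimal real-valued genetic algorithm: tournament selection,
    uniform crossover, Gaussian mutation, one-elite survival.
    Genes are kept in [0, 1]; lower fitness is better."""
    rng = random.Random(seed)
    pop = [[rng.random() for _ in range(n_genes)] for _ in range(pop_size)]

    def pick():                                   # binary tournament
        a, b = rng.sample(pop, 2)
        return a if fitness(a) < fitness(b) else b

    for _ in range(gens):
        nxt = [min(pop, key=fitness)]             # elitism
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            child = [p1[i] if rng.random() < 0.5 else p2[i]
                     for i in range(n_genes)]     # uniform crossover
            if rng.random() < 0.3:                # mutate one gene
                j = rng.randrange(n_genes)
                child[j] = min(1.0, max(0.0, child[j] + rng.gauss(0.0, 0.1)))
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

# Hypothetical objective: match a target radial gas-temperature profile
target = [0.2, 0.5, 0.9, 0.6, 0.3]
mismatch = lambda g: sum((gi - ti) ** 2 for gi, ti in zip(g, target))
best = genetic_search(mismatch, n_genes=5)
```

The GA suits this problem class because, as noted above, the burden-to-gas mapping is discontinuous and non-differentiable, so gradient-based optimizers are not applicable.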