937 results for Mesh smoothing
Abstract:
Reinforced concrete highway bridges are subjected to variable dynamic actions due to vehicle traffic on the deck. These dynamic actions can cause cracks to appear in the structure or existing ones to propagate. The proper consideration of these aspects motivated the development of a study to evaluate the effects of heavy-vehicle traffic on the deck. Cycle-counting techniques for the stress histories and the application of cumulative damage rules were analyzed using the S-N curves of the several design codes studied. The highway bridge investigated consists of four longitudinal girders, three cross girders, and a reinforced concrete deck. The computational model developed for the dynamic analysis of the bridge was based on the usual discretization techniques of the finite element method. The structural model of the bridge studied was simulated with three-dimensional solid finite elements. The vehicles are represented by mass-spring-damper systems, and their traffic is considered by simulating semi-infinite convoys moving at constant speed over the bridge deck. The conclusions of this work address the service life of the structural elements of reinforced concrete highway bridges subjected to the dynamic actions produced by heavy-vehicle traffic on the deck.
Abstract:
The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.
Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks that are multiphase and radial, most of the power flow studies focus on single-phase networks.
This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.
Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules the deferrable loads to shape the net electricity demand. Deferrable loads refer to loads whose total energy consumption is fixed, but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and empirically converges to optimal deferrable load schedules within 15 iterations.
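The abstract does not spell out Algorithm 1, so the following is only a toy sketch of the general idea of distributed gradient descent for deferrable-load scheduling, under simplified assumptions (a quadratic "flatness" objective, per-slot power caps, and a fixed energy requirement per load); the function names and the projection routine are illustrative, not the thesis's algorithm.

```python
import numpy as np

def flatten_demand(base, energy, p_max, iters=15, step=0.5):
    """Toy projected-gradient scheduling of deferrable loads.

    base   : (T,) fixed net demand (base load minus renewables)
    energy : (N,) total energy each deferrable load must receive
    p_max  : (N,) per-slot power cap of each load
    Returns an (N, T) schedule that tries to flatten base + sum of schedules.
    """
    T, N = len(base), len(energy)
    p = np.tile((energy / T)[:, None], (1, T))      # start from a flat schedule
    for _ in range(iters):
        agg = base + p.sum(axis=0)                  # aggregate demand profile
        grad = agg - agg.mean()                     # gradient of 0.5*||agg - mean(agg)||^2
        p -= step * grad[None, :] / N               # each load descends on the shared gradient
        for i in range(N):                          # project back onto each load's feasible set
            p[i] = _project(p[i], energy[i], p_max[i])
    return p

def _project(x, total, cap):
    """Project x onto {0 <= x <= cap, x.sum() == total} by bisection on a scalar shift."""
    lo, hi = x.min() - cap, x.max()
    for _ in range(50):
        mu = 0.5 * (lo + hi)
        if np.clip(x - mu, 0.0, cap).sum() > total:
            lo = mu                                 # shift too small, schedule over-delivers
        else:
            hi = mu
    return np.clip(x - 0.5 * (lo + hi), 0.0, cap)
```

Because the gradient of the flatness objective depends only on the aggregate profile, each load can update its own schedule after a single broadcast of the aggregate, which is what makes a distributed implementation natural.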
We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time, and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm Algorithm 2 is based on model-predictive control: Algorithm 2 uses updated predictions on renewable generation as the true values, and computes a pseudo load to simulate future deferrable load. The pseudo load consumes 0 power at the current time step, and its total energy consumption equals the expectation of future deferrable load total energy request.
Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and the other that seeks a locally optimal load schedule.
To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but numerically is much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
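For reference, a common single-phase form of the branch flow (DistFlow) equations and their second-order cone relaxation from the literature is sketched below; the thesis's BFM-SDP is a multiphase semidefinite generalization, so this should be read as background rather than the exact model used. For each line (i, j) with impedance r_ij + i x_ij, sending-end flow P_ij + i Q_ij, squared current magnitude ℓ_ij, squared voltage magnitudes v_i, and net loads p_j, q_j:

\[
\begin{aligned}
P_{ij} - r_{ij}\,\ell_{ij} - p_j &= \sum_{k:(j,k)} P_{jk}, \qquad
Q_{ij} - x_{ij}\,\ell_{ij} - q_j = \sum_{k:(j,k)} Q_{jk},\\
v_j &= v_i - 2\,(r_{ij}P_{ij} + x_{ij}Q_{ij}) + (r_{ij}^2 + x_{ij}^2)\,\ell_{ij},\\
\ell_{ij} &= \frac{P_{ij}^2 + Q_{ij}^2}{v_i}.
\end{aligned}
\]

The relaxation replaces the last equality by the convex inequality \(\ell_{ij} \ge (P_{ij}^2 + Q_{ij}^2)/v_i\), a rotated second-order cone constraint; the relaxation is said to be exact when this inequality is tight at the optimum.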
Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternative-current (AC) networks, and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.
To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to 0 for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70-fold speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.
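Dropping the loss terms ℓ_ij from the branch flow equations sketched earlier yields the linear (LinDistFlow-type) approximation on which such gradient estimates can be based; whether this is precisely the linearization used in Algorithm 9, which additionally exploits near-balanced voltages in the multiphase setting, is not stated in the abstract, so only the standard single-phase form is shown:

\[
P_{ij} \approx \sum_{k:(j,k)} P_{jk} + p_j,\qquad
Q_{ij} \approx \sum_{k:(j,k)} Q_{jk} + q_j,\qquad
v_j \approx v_i - 2\,(r_{ij}P_{ij} + x_{ij}Q_{ij}),
\]

which makes every squared voltage an affine function of the load schedule, so gradients of voltage-constraint penalties can be evaluated in closed form.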
Abstract:
Kohn-Sham density functional theory (KSDFT) is currently the main workhorse of quantum mechanical calculations in physics, chemistry, and materials science. From a mechanical engineering perspective, we are interested in studying the role of defects in the mechanical properties of materials. In real materials, defects are typically found at very small concentrations: e.g., vacancies occur at parts per million, dislocation density in metals ranges from $10^{10} m^{-2}$ to $10^{15} m^{-2}$, and grain sizes vary from nanometers to micrometers in polycrystalline materials. In order to model materials at realistic defect concentrations using DFT, we would need to work with system sizes beyond millions of atoms. Due to the cubic-scaling computational cost with respect to the number of atoms in conventional DFT implementations, such system sizes are unreachable. Since the early 1990s, there has been great interest in developing DFT implementations that have linear-scaling computational cost. A promising approach to achieving linear-scaling cost is to approximate the density matrix in KSDFT. The focus of this thesis is to provide a firm mathematical framework for studying the convergence of these approximations. We reformulate Kohn-Sham density functional theory as a nested variational problem in the density matrix, the electrostatic potential, and a field dual to the electron density. The corresponding functional is linear in the density matrix and thus amenable to spectral representation. Based on this reformulation, we introduce a new approximation scheme, called spectral binning, which does not require smoothing of the occupancy function and thus applies at arbitrarily low temperatures. We prove convergence of the approximate solutions with respect to spectral binning and with respect to an additional spatial discretization of the domain. For a standard one-dimensional benchmark problem, we present numerical experiments for which spectral binning exhibits excellent convergence characteristics and outperforms other linear-scaling methods.
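As background (not the thesis's spectral-binning construction itself), the finite-temperature density matrix in KSDFT is a function of the Kohn-Sham Hamiltonian through the Fermi-Dirac occupancy,

\[
\rho \;=\; f\!\left(\frac{H - \mu I}{k_B T}\right), \qquad f(x) \;=\; \frac{1}{1 + e^{x}},
\]

and many linear-scaling schemes approximate f by a smooth polynomial or rational expansion, which becomes increasingly difficult as T → 0 where f approaches a step function; avoiding that smoothing requirement is precisely what the abstract claims for spectral binning.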
Abstract:
This thesis presents a new class of solvers for the subsonic compressible Navier-Stokes equations in general two- and three-dimensional spatial domains. The proposed methodology incorporates: 1) A novel linear-cost implicit solver based on use of higher-order backward differentiation formulae (BDF) and the alternating direction implicit approach (ADI); 2) A fast explicit solver; 3) Dispersionless spectral spatial discretizations; and 4) A domain decomposition strategy that negotiates the interactions between the implicit and explicit domains. In particular, the implicit methodology is quasi-unconditionally stable (it does not suffer from CFL constraints for adequately resolved flows), and it can deliver orders of time accuracy between two and six in the presence of general boundary conditions. In fact, this thesis presents, for the first time in the literature, high-order time-convergence curves for Navier-Stokes solvers based on the ADI strategy---previous ADI solvers for the Navier-Stokes equations have not demonstrated orders of temporal accuracy higher than one. An extended discussion is presented in this thesis which places the observed quasi-unconditional stability of the methods of orders two through six on a solid theoretical basis. The performance of the proposed solvers is favorable. For example, a two-dimensional rough-surface configuration including boundary layer effects at a Reynolds number of one million and a Mach number of 0.85 (with a well-resolved boundary layer, run up to a sufficiently long time that single vortices travel the entire spatial extent of the domain, and with spatial mesh sizes near the wall on the order of one hundred-thousandth of the length of the domain) was successfully tackled in a relatively short (approximately thirty-hour) single-core run; for such discretizations an explicit solver would require truly prohibitive computing times. Further, as demonstrated via a variety of numerical experiments in two and three dimensions, the proposed multi-domain parallel implicit-explicit implementations exhibit high-order convergence in space and time, useful stability properties, limited dispersion, and high parallel efficiency.
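As a schematic of the time discretization only (the thesis's full scheme couples this with ADI factorizations, spectral spatial discretization, and implicit-explicit domain decomposition), the second-order BDF step for a semi-discrete system u_t = F(u) reads

\[
\frac{3\,u^{n+1} - 4\,u^{n} + u^{n-1}}{2\,\Delta t} \;=\; F\!\left(u^{n+1}\right),
\]

with analogous formulae up to order six; the ADI strategy then factors the resulting implicit operator into a sequence of one-dimensional, banded (hence linear-cost) solves along each coordinate direction.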
Abstract:
This work presents a theoretical and numerical study of the errors that arise in gradient computations on unstructured meshes built from the Voronoi diagram, meshes that are also associated with the Delaunay triangulation. The meshes adopted in this work were Cartesian meshes and triangular meshes, the latter generated by dividing a square into two or four equal triangles. For this analysis, three distinct methodologies were chosen for computing the gradients: the Green-Gauss method, the least-squares (minimum squared residual) method, and the corrected averaged projected gradient method. The text rests on two main points: showing that the error expressions of the gradients can be similar, but with opposite signs, at evaluation points in neighboring volumes; and showing that the error order of the analytical expressions can be improved on uniform meshes compared with non-uniform ones in the one-dimensional cases, and when analyzed at the face of such neighboring volumes in the two-dimensional cases.
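Of the three gradient reconstructions compared in the abstract above, the Green-Gauss method is the simplest to sketch; the snippet below is a minimal cell-centered version for a polygonal (e.g., Voronoi) mesh, with face values taken as plain two-cell averages, which is one common choice but not necessarily the exact variant analyzed in the work. The mesh data layout ('owner', 'neigh', 'area_normal', 'phi_b') is illustrative.

```python
import numpy as np

def green_gauss_gradient(phi, cells, faces):
    """Cell-centered Green-Gauss gradient on a polygonal (e.g. Voronoi) mesh.

    phi   : (n_cells,) scalar field at cell centers
    cells : list of dicts with 'volume' (cell area/volume)
    faces : list of dicts with 'owner', 'neigh' (cell indices, neigh=None on a boundary),
            'area_normal' (face-area vector, outward w.r.t. the owner),
            and 'phi_b' (prescribed boundary value, used on boundary faces only)
    Returns (n_cells, dim) gradient estimates.
    """
    dim = len(faces[0]["area_normal"])
    grad = np.zeros((len(phi), dim))
    for f in faces:
        o, n = f["owner"], f["neigh"]
        if n is None:                       # boundary face: use the prescribed value
            phi_f = f["phi_b"]
        else:                               # interior face: simple two-cell average
            phi_f = 0.5 * (phi[o] + phi[n])
        contrib = phi_f * np.asarray(f["area_normal"])
        grad[o] += contrib                  # outward contribution for the owner
        if n is not None:
            grad[n] -= contrib              # same face seen from the neighbour
    for c, cell in enumerate(cells):
        grad[c] /= cell["volume"]           # divide by cell volume (Gauss theorem)
    return grad
```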
Abstract:
Error analyses are carried out before any design is developed. The need to understand the behavior of numerical error on structured and unstructured meshes grows with the increasing use of these meshes in discretization methods. The objective of this work was therefore to create a methodology for analyzing the discretization errors generated by truncation of the Taylor series, applied to the steady one- and two-dimensional Poisson and advection-diffusion equations, using the Finite Volume Method on Voronoi-type meshes. These equations were chosen because of their wide use in testing new mathematical models and interpolation functions. The Central Difference Scheme (CDS) and the Upwind Difference Scheme (UDS) were used for the advective terms. The influence of the type of boundary condition and of the position of the volume's generating point on the numerical solution was verified. The analytical results were compared with experimental results for two types of Voronoi mesh, one Cartesian and one triangular, confirming the influence of the finite-volume shape on the numerical solution obtained. The study showed that the discretization using the CDS scheme has smaller errors than the discretization using the UDS scheme, in agreement with the literature. A difference in the errors of neighboring volumes in the triangular meshes was also observed, which prevents uniformity in the error plots studied. It was found that Cartesian meshes with the node at the centroid of the volume have a smaller discretization error than triangular meshes, but the use of this type of mesh depends on the geometry of the problem under study.
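For the uniform one-dimensional case, the two advective face interpolations compared in the abstract, and their leading Taylor-series truncation errors, take the standard textbook form (with h the distance between the neighboring nodes P and E and the face midway between them):

\[
\phi_f^{\mathrm{CDS}} = \tfrac{1}{2}(\phi_P + \phi_E) = \phi_f + \tfrac{h^2}{8}\,\phi''_f + O(h^4),
\qquad
\phi_f^{\mathrm{UDS}} = \phi_P = \phi_f - \tfrac{h}{2}\,\phi'_f + O(h^2),
\]

so CDS is second-order accurate while UDS is first-order and introduces a diffusion-like leading error, consistent with the smaller CDS errors reported above.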
Abstract:
Multimesh, multidepth gillnet fleets are useful in assessing fish stock abundance, size distribution and depth distribution. Using data collected on net mesh selectivity for Nile perch, Lates niloticus (L.), in the Kenyan waters of Lake Victoria, suitable mesh sizes for gillnet fleets for use in the Lake Victoria Fisheries Research Project were determined. The modal selection length for Nile perch in the mesh sizes used in the earlier experiment was determined, as was the size range vulnerable to capture. Initial trials suggest 60% of the Nile perch swim within 5 m of the bottom. Setting and hauling of the nets is simple and quick, allowing the nets to be used at the same time as other sampling programmes.
Abstract:
The number and size composition of gillnets, fishing grounds, and the quantity and composition of fish catches were related to the size of fishing boat. The overall number of gillnets per boat increased from 20.9 ± 2.3 nets in 5-6 m long boats to 88.6 ± 11.8 nets in 11-12 m long boats. The proportion of large mesh sizes (127 mm and above) also increased from 40% in 5-6 m long boats to 100% in boats longer than 10 m. Fish catches are related to the size of boat, and this should be considered when formulating management guidelines for the lake's fishery. Promotion of large fishing boats 8 m or longer and restriction of the number and/or mesh size of gillnets of smaller boats could increase ecological and socio-economic benefits.
Abstract:
Four fleets of hanging coefficients 0.2, 0.4, 0.6 and 0.8 were used to determine size selectivity and selection factors of Nile perch populations. There was a linear relationship between mesh size and modal length of capture. Positively skewed length frequency distributions were found for smaller mesh sizes, with entanglement becoming more prominent in mesh sizes above 101 mm. Nets of 114 to 141 mm stretched mesh yielded higher economic returns than small meshes, the catch consisting of a few large fish.
Abstract:
The zooplankton community of the littoral zone of Nyanza Gulf, Lake Victoria, was studied between June 1998 and June 1999 to identify and quantify the various zooplankton groups, and to investigate the interactions that occur between them and the littoral fish through the food chain. Zooplankton samples were collected from five stations using an 83 µm mesh plankton net hauled vertically through the water column. Fish samples were obtained by beach seine, except at Gingra (May 1999), where trawl samples were used. Gut/stomach analysis was carried out on the three major commercial species, Lates niloticus (L.), Oreochromis niloticus (L.) and Rastrineobola argentea (Pellegrin).
Abstract:
A study is made of the accuracy of electronic digital computer calculations of ground displacement and response spectra from strong-motion earthquake accelerograms. This involves an investigation of methods of the preparatory reduction of accelerograms into a form useful for the digital computation and of the accuracy of subsequent digital calculations. Various checks are made for both the ground displacement and response spectra results, and it is concluded that the main errors are those involved in digitizing the original record. Differences resulting from various investigators digitizing the same experimental record may become as large as 100% of the maximum computed ground displacements. The spread of the results of ground displacement calculations is greater than that of the response spectra calculations. Standardized methods of adjustment and calculation are recommended, to minimize such errors.
Studies are made of the spread of response spectral values about their mean. The distribution is investigated experimentally by Monte Carlo techniques using an electric analog system with white noise excitation, and histograms are presented indicating the dependence of the distribution on the damping and period of the structure. Approximate distributions are obtained analytically by confirming and extending existing results with accurate digital computer calculations. A comparison of the experimental and analytical approaches indicates good agreement for low damping values where the approximations are valid. A family of distribution curves to be used in conjunction with existing average spectra is presented. The combination of analog and digital computations used with Monte Carlo techniques is a promising approach to the statistical problems of earthquake engineering.
Methods of analysis of very small earthquake ground motion records obtained simultaneously at different sites are discussed. The advantages of Fourier spectrum analysis for certain types of studies and methods of calculation of Fourier spectra are presented. The digitizing and analysis of several earthquake records is described and checks are made of the dependence of results on digitizing procedure, earthquake duration and integration step length. Possible dangers of a direct ratio comparison of Fourier spectra curves are pointed out and the necessity for some type of smoothing procedure before comparison is established. A standard method of analysis for the study of comparative ground motion at different sites is recommended.
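As an illustration of the kind of processing discussed above, the sketch below computes a Fourier amplitude spectrum of a digitized accelerogram and applies a simple moving-average smoothing before any ratio comparison between sites; the window length, the crude mean-removal baseline correction, and the function name are illustrative choices, not the standardized procedure recommended in the work.

```python
import numpy as np

def fourier_amplitude_spectrum(accel, dt, smooth_bins=11):
    """Fourier amplitude spectrum of a digitized accelerogram, with simple smoothing.

    accel       : (n,) acceleration samples
    dt          : sampling interval in seconds
    smooth_bins : width of the moving-average window applied to the spectrum
    Returns (freqs, raw_amplitude, smoothed_amplitude).
    """
    accel = np.asarray(accel, dtype=float)
    accel = accel - accel.mean()                    # crude baseline correction
    amp = np.abs(np.fft.rfft(accel)) * dt           # one-sided Fourier amplitude
    freqs = np.fft.rfftfreq(len(accel), d=dt)
    kernel = np.ones(smooth_bins) / smooth_bins     # moving-average smoothing window
    smoothed = np.convolve(amp, kernel, mode="same")
    return freqs, amp, smoothed
```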
Abstract:
The main objective of this project is to correctly model the boundary layer over an airfoil where the transition from the laminar regime to the transitional regime occurs. A secondary objective is to consolidate the theoretical foundations of fluid mechanics acquired at school and to gain further knowledge of aerodynamics, specifically of the boundary layer. The first part covers general concepts of airfoils and gives a brief introduction to the different existing types of meshing. The concept of the boundary layer and everything related to it is also explained. Next, the criteria for selecting the most suitable turbulence model are established, and the results obtained with the different turbulence models mentioned above are presented. Once a turbulence model has been selected, its study is deepened by applying it to several NACA airfoils. The results and errors obtained are analyzed and possible solutions are sought. Finally, conclusions about the chosen model are drawn, and it is compared with a series of experimental tests in order to validate it.
Abstract:
Part I
Solutions of Schrödinger’s equation for a system of two particles bound in various stationary one-dimensional potential wells and repelling each other with a Coulomb force are obtained by the method of finite differences. The general properties of such systems are worked out in detail for the case of two electrons in an infinite square well. For small well widths (1-10 a.u.) the energy levels lie above those of the noninteracting particle model by as much as a factor of 4, although excitation energies are only half again as great. The analytical form of the solutions is obtained and it is shown that every eigenstate is doubly degenerate due to the “pathological” nature of the one-dimensional Coulomb potential. This degeneracy is verified numerically by the finite-difference method. The properties of the square-well system are compared with those of the free-electron and hard-sphere models; perturbation and variational treatments are also carried out using the hard-sphere Hamiltonian as a zeroth-order approximation. The lowest several finite-difference eigenvalues converge from below with decreasing mesh size to energies below those of the “best” linear variational function consisting of hard-sphere eigenfunctions. The finite-difference solutions in general yield expectation values and matrix elements as accurate as those obtained using the “best” variational function.
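A minimal single-particle illustration of the finite-difference approach is sketched below (the thesis treats the far larger two-electron problem with Coulomb repulsion, but the ingredients are the same: a three-point second-difference kinetic term, a diagonal potential, and a matrix eigenvalue problem whose eigenvalues converge with decreasing mesh size). The grid size and the example potential are arbitrary choices.

```python
import numpy as np

def fd_eigenstates(V, L, n_points=400, n_states=4):
    """Lowest eigenstates of -1/2 d^2/dx^2 + V(x) on [0, L] with psi = 0 at the walls.

    V : callable returning the potential (in atomic units) on the grid;
        use lambda x: np.zeros_like(x) for the infinite square well.
    Returns (energies, wavefunctions) from a standard 3-point finite-difference Hamiltonian.
    """
    h = L / (n_points + 1)
    x = np.linspace(h, L - h, n_points)            # interior grid points only
    main = 1.0 / h**2 + V(x)                       # diagonal: kinetic + potential
    off = -0.5 / h**2 * np.ones(n_points - 1)      # off-diagonal kinetic coupling
    H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    E, psi = np.linalg.eigh(H)
    return E[:n_states], psi[:, :n_states]

# Infinite square well of width 10 a.u.: exact levels are (n*pi/L)^2 / 2
E, _ = fd_eigenstates(lambda x: np.zeros_like(x), L=10.0)
```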
The system of two electrons in a parabolic well is also treated by finite differences. In this system it is possible to separate the center-of-mass motion and hence to effect a considerable numerical simplification. It is shown that the pathological one-dimensional Coulomb potential gives rise to doubly degenerate eigenstates for the parabolic well in exactly the same manner as for the infinite square well.
Part II
A general method of treating inelastic collisions quantum mechanically is developed and applied to several one-dimensional models. The formalism is first developed for nonreactive “vibrational” excitations of a bound system by an incident free particle. It is then extended to treat simple exchange reactions of the form A + BC →AB + C. The method consists essentially of finding a set of linearly independent solutions of the Schrödinger equation such that each solution of the set satisfies a distinct, yet arbitrary boundary condition specified in the asymptotic region. These linearly independent solutions are then combined to form a total scattering wavefunction having the correct asymptotic form. The method of finite differences is used to determine the linearly independent functions.
The theory is applied to the impulsive collision of a free particle with a particle bound in (1) an infinite square well and (2) a parabolic well. Calculated transition probabilities agree well with previously obtained values.
Several models for the exchange reaction involving three identical particles are also treated: (1) infinite-square-well potential surface, in which all three particles interact as hard spheres and each two-particle subsystem (i.e. BC and AB) is bound by an attractive infinite-square-well potential; (2) truncated parabolic potential surface, in which the two-particle subsystems are bound by a harmonic oscillator potential which becomes infinite for interparticle separations greater than a certain value; (3) parabolic (untruncated) surface. Although there are no published values with which to compare our reaction probabilities, several independent checks on internal consistency indicate that the results are reliable.
Abstract:
This thesis presents a topology optimization methodology for the systematic design of optimal multifunctional silicon anode structures in lithium-ion batteries. In order to develop next generation high performance lithium-ion batteries, key design challenges relating to the silicon anode structure must be addressed, namely the lithiation-induced mechanical degradation and the low intrinsic electrical conductivity of silicon. As such, this work considers two design objectives of minimum compliance under design dependent volume expansion, and maximum electrical conduction through the structure, both of which are subject to a constraint on material volume. Density-based topology optimization methods are employed in conjunction with regularization techniques, a continuation scheme, and mathematical programming methods. The objectives are first considered individually, during which the iteration history, mesh independence, and influence of prescribed volume fraction and minimum length scale are investigated. The methodology is subsequently extended to a bi-objective formulation to simultaneously address both the compliance and conduction design criteria. A weighting method is used to derive the Pareto fronts, which demonstrate a clear trade-off between the competing design objectives. Furthermore, a systematic parameter study is undertaken to determine the influence of the prescribed volume fraction and minimum length scale on the optimal combined topologies. The developments presented in this work provide a foundation for the informed design and development of silicon anode structures for high performance lithium-ion batteries.
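A standard density-based (SIMP) statement of this kind of problem, given here only as a hedged sketch since the abstract does not spell out the exact formulation or notation, interpolates the element stiffness as

\[
E_e(\rho_e) \;=\; E_{\min} + \rho_e^{\,p}\,(E_0 - E_{\min}), \qquad 0 \le \rho_e \le 1,
\]

and scalarizes the two criteria with a weighted sum subject to the volume constraint,

\[
\min_{\boldsymbol{\rho}}\; w\,\frac{c(\boldsymbol{\rho})}{c^{*}} + (1-w)\,\frac{d(\boldsymbol{\rho})}{d^{*}}
\quad\text{s.t.}\quad \sum_e v_e\,\rho_e \le V_f \sum_e v_e,
\]

where c is the structural compliance under the expansion-induced loads, d a dissipation-type measure of electrical conduction, c* and d* the single-objective optima used for normalization, and sweeping the weight w over [0, 1] traces out the Pareto front described above.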
Abstract:
Optical Coherence Tomography (OCT) is a popular, rapidly growing imaging technique with an increasing number of bio-medical applications due to its noninvasive nature. However, there are three major challenges in understanding and improving an OCT system: (1) Obtaining an OCT image is not easy. It either takes a real medical experiment or requires days of computer simulation. Without much data, it is difficult to study the physical processes underlying OCT imaging of different objects simply because there aren't many imaged objects. (2) Interpretation of an OCT image is also hard. This challenge is more profound than it appears. For instance, it would require a trained expert to tell from an OCT image of human skin whether there is a lesion or not. This is expensive in its own right, but even the expert cannot be sure about the exact size of the lesion or the width of the various skin layers. The takeaway message is that analyzing an OCT image even at a high level usually requires a trained expert, and pixel-level interpretation is simply unrealistic. The reason is simple: we have OCT images but not their underlying ground-truth structure, so there is nothing to learn from. (3) The imaging depth of OCT is very limited (millimeter or sub-millimeter on human tissues). While OCT utilizes infrared light for illumination to stay noninvasive, the downside is that photons at such long wavelengths can only penetrate a limited depth into the tissue before getting back-scattered. To image a particular region of a tissue, photons first need to reach that region. As a result, OCT signals from deeper regions of the tissue are both weak (since few photons reach there) and distorted (due to multiple scatterings of the contributing photons). This fact alone makes OCT images very hard to interpret.
This thesis addresses the above challenges by successfully developing an advanced Monte Carlo simulation platform which is 10,000 times faster than the state-of-the-art simulator in the literature, bringing the simulation time down from 360 hours to a single minute. This powerful simulation tool not only enables us to efficiently generate as many OCT images of objects with arbitrary structure and shape as we want on a common desktop computer, but it also provides us with the underlying ground truth of the simulated images at the same time, because we dictate it at the beginning of the simulation. This is one of the key contributions of this thesis. What allows us to build such a powerful simulation tool includes a thorough understanding of the signal formation process, a clever implementation of the importance sampling/photon splitting procedure, efficient use of a voxel-based mesh system in determining photon-mesh intersection, and a parallel computation of the different A-scans that constitute a full OCT image, among other programming and mathematical tricks, which will be explained in detail later in the thesis.
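The importance-sampling ingredient mentioned above rests on a generic re-weighting idea; the toy estimator below shows it for a rare Gaussian tail event, with the proposal shift and event threshold chosen arbitrarily, and is unrelated to the simulator's actual photon-transport implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def importance_estimate(n_samples=100_000, shift=4.0):
    """Estimate P[X > 4] for X ~ N(0,1), a rare event, by sampling from a shifted
    proposal N(shift, 1) and re-weighting each draw by the likelihood ratio p/q.
    The same re-weighting idea lets a photon-transport simulator bias photons toward
    the detector while keeping the estimated signal unbiased."""
    y = rng.normal(loc=shift, scale=1.0, size=n_samples)    # draws from the proposal q
    log_w = -0.5 * y**2 + 0.5 * (y - shift) ** 2            # log p(y) - log q(y)
    return np.mean((y > 4.0) * np.exp(log_w))               # weighted indicator average
```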
Next we aim at the inverse problem: given an OCT image, predict/reconstruct its ground-truth structure at the pixel level. By solving this problem we would be able to interpret an OCT image completely and precisely without the help of a trained expert. It turns out that we can do much better. For simple structures we are able to reconstruct the ground truth of an OCT image more than 98% correctly, and for more complicated structures (e.g., a multi-layered brain structure) we are looking at 93%. We achieved this through extensive use of Machine Learning. The success of the Monte Carlo simulation already puts us in a great position by providing us with a great deal of data (effectively unlimited), in the form of (image, truth) pairs. Through a transformation of the high-dimensional response variable, we convert the learning task into a multi-output multi-class classification problem and a multi-output regression problem. We then build a hierarchical architecture of machine learning models (a committee of experts) and train different parts of the architecture with specifically designed data sets. In prediction, an unseen OCT image first goes through a classification model to determine its structure (e.g., the number and the types of layers present in the image); then the image is handed to a regression model that is trained specifically for that particular structure to predict the length of the different layers and, by doing so, reconstruct the ground truth of the image. We also demonstrate that ideas from Deep Learning can be useful to further improve the performance.
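The classify-then-regress pipeline described above can be sketched as follows; the random-forest models, the flattened-image features, and the assumption of a fixed-length target vector per structure class are placeholders for illustration, not the committee-of-experts architecture actually trained in the thesis.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

class ClassifyThenRegress:
    """Route each OCT image (flattened to a feature vector here) to a structure class,
    then use a regressor trained only on that class to predict its layer lengths."""

    def __init__(self):
        self.router = RandomForestClassifier(n_estimators=200)
        self.experts = {}                                   # one regressor per structure class

    def fit(self, X, structure_labels, layer_targets):
        self.router.fit(X, structure_labels)                # learn to recognize the structure type
        for c in np.unique(structure_labels):
            mask = structure_labels == c
            expert = RandomForestRegressor(n_estimators=200)
            expert.fit(X[mask], layer_targets[mask])        # train the expert on its class only
            self.experts[c] = expert

    def predict(self, X):
        classes = self.router.predict(X)                    # step 1: which structure is it?
        layers = np.vstack([self.experts[c].predict(x[None, :])
                            for c, x in zip(classes, X)])   # step 2: ask that class's expert
        return classes, layers
```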
It is worth pointing out that solving the inverse problem automatically improves the imaging depth, since the lower half of an OCT image (i.e., greater depth), which previously could hardly be seen, now becomes fully resolved. Interestingly, although the OCT signals constituting the lower half of the image are weak, messy, and uninterpretable to human eyes, they still carry enough information that, when fed into a well-trained machine learning model, yields precisely the true structure of the object being imaged. This is just another case where Artificial Intelligence (AI) outperforms humans. To the best of the author's knowledge, this thesis is not only a success but also the first attempt at reconstructing an OCT image at the pixel level. Even to attempt this kind of task would require fully annotated OCT images, and a lot of them (hundreds or even thousands). This is clearly impossible without a powerful simulation tool like the one developed in this thesis.