912 results for Images - Computational methods
Abstract:
Smoothed particle hydrodynamics (SPH) is a meshfree particle method based on a Lagrangian formulation, and has been widely applied to different areas in engineering and science. This paper presents an overview of the SPH method and its recent developments, including (1) the need for meshfree particle methods, and the advantages of SPH, (2) approximation schemes of the conventional SPH method and numerical techniques for deriving SPH formulations for partial differential equations such as the Navier-Stokes (N-S) equations, (3) the role of the smoothing kernel functions and a general approach to constructing smoothing kernel functions, (4) kernel and particle consistency for the SPH method, and approaches for restoring particle consistency, (5) several important numerical aspects, and (6) some recent applications of SPH. The paper ends with some concluding remarks.
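As a concrete illustration of the kernel approximation at the heart of SPH, the following minimal sketch (not taken from the paper; the 1-D setting and all names are illustrative) estimates particle densities with the widely used cubic spline smoothing kernel:

```python
import numpy as np

def cubic_spline_kernel(r, h):
    """1-D cubic spline smoothing kernel W(r, h) with compact support 2h."""
    q = np.abs(np.asarray(r, dtype=float)) / h
    sigma = 2.0 / (3.0 * h)  # 1-D normalization so that W integrates to 1
    w = np.zeros_like(q)
    near = q <= 1.0
    far = (q > 1.0) & (q <= 2.0)
    w[near] = 1.0 - 1.5 * q[near] ** 2 + 0.75 * q[near] ** 3
    w[far] = 0.25 * (2.0 - q[far]) ** 3
    return sigma * w

# SPH density estimate: rho_i = sum_j m_j W(x_i - x_j, h)
x = np.linspace(0.0, 1.0, 50)       # particle positions (illustrative)
m = np.full_like(x, 1.0 / len(x))   # particle masses
h = 2.0 * (x[1] - x[0])             # smoothing length
rho = np.array([np.sum(m * cubic_spline_kernel(xi - x, h)) for xi in x])
```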
Abstract:
The dissertation studies the general area of complex networked systems that consist of interconnected, active, heterogeneous components and usually operate in uncertain environments with incomplete information. Problems associated with these systems are typically large-scale and computationally intractable, yet the systems are also well-structured, with features that can be exploited by appropriate modeling and computational methods. The goal of this thesis is to develop foundational theories and tools that exploit these structures to yield computationally efficient and distributed solutions, and to apply them to improve system operations and architecture.
Specifically, the thesis focuses on two concrete areas. The first is the design of distributed rules to manage distributed energy resources in the power network. The power network is undergoing a fundamental transformation. The future smart grid, especially at the distribution level, will be a large-scale network of distributed energy resources (DERs), each introducing random and rapid fluctuations in power supply, demand, voltage, and frequency. These DERs provide a tremendous opportunity for sustainability, efficiency, and power reliability, but they also pose daunting technical challenges for management and optimal operation. The focus of this dissertation is to develop scalable, distributed, real-time control and optimization that achieve system-wide efficiency, reliability, and robustness for the future power grid. In particular, we show how to exploit the power network structure to design efficient, distributed markets and algorithms for energy management, and how to connect these algorithms with physical dynamics and existing control mechanisms for real-time control in power networks.
The second focus is to develop distributed optimization rules for general multi-agent engineering systems. A central goal in multi-agent systems is to design local control laws for the individual agents that ensure the emergent global behavior is desirable with respect to a given system-level objective. Ideally, a system designer seeks to satisfy this goal while conditioning each agent's control on the least amount of information possible. Our work achieves this goal using the framework of game theory. In particular, we derive a systematic methodology for designing local agent objective functions that guarantees (i) an equivalence between the resulting game-theoretic equilibria and the system-level design objective and (ii) that the resulting game possesses an inherent structure that can be exploited for distributed learning, e.g., potential games. The control design can then be completed by applying any distributed learning algorithm that guarantees convergence to the game-theoretic equilibrium. One main advantage of this game-theoretic approach is that it provides a hierarchical decomposition between the design of the system-level objective (game design) and the specific local decision rules (distributed learning algorithms). This decomposition gives the system designer tremendous flexibility to meet the design objectives and constraints inherent in a broad class of multi-agent systems. Furthermore, in many settings the resulting controllers are inherently robust to a host of uncertainties, including asynchronous clock rates, delays in information, and component failures.
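As a minimal illustration of this pipeline (the game and parameters below are hypothetical, not from the thesis), the sketch runs log-linear learning, a standard distributed learning rule for potential games, on a tiny two-agent coordination game whose utilities equal a shared potential:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-agent coordination game: each agent picks action 0 or 1.
# phi is a potential function; each agent's utility equals the potential
# here, so the game is trivially a potential game (illustrative choice).
phi = np.array([[1.0, 0.0],
                [0.0, 2.0]])  # phi[a1, a2]

def log_linear_step(a, i, tau=0.1):
    """One step of log-linear learning: agent i resamples its action with
    probabilities proportional to exp(utility / tau)."""
    utils = np.array([phi[(u, a[1]) if i == 0 else (a[0], u)] for u in (0, 1)])
    p = np.exp(utils / tau)
    p /= p.sum()
    a[i] = rng.choice(2, p=p)
    return a

a = [0, 0]
for t in range(2000):
    a = log_linear_step(a, t % 2)  # agents update in alternation
print(a)  # concentrates on the potential maximizer [1, 1] for small tau
```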
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model that is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., the information content is often much smaller than the number of raw measurements. This has opened the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) and propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to extend to multiple dimensions and to exploit different kinds of priors in the model, such as correlation, higher-order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
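The gain from such geometries can be seen directly in the difference coarray: for one common coprime construction (the sketch below is illustrative, using coprime integers M and N), the number of distinct correlation lags far exceeds the number of physical sensors, which is exactly what a correlation-aware method exploits:

```python
import numpy as np

def coprime_array(M, N):
    """Sensor positions (in units of half-wavelength) for one common
    coprime-array construction with coprime integers M and N."""
    return np.unique(np.concatenate([M * np.arange(N), N * np.arange(2 * M)]))

pos = coprime_array(3, 5)  # physical sensor positions
# Difference coarray: the set of pairwise position differences, whose
# distinct lags determine how many sources correlation methods can resolve.
diffs = np.unique((pos[None, :] - pos[:, None]).ravel())
print(len(pos), "sensors ->", len(diffs), "distinct correlation lags")
```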
This new paradigm of underdetermined estimation, which explicitly establishes the fundamental interplay between sampling, statistical priors, and the underlying sparsity, leads to exciting future research directions in a variety of application areas and raises new questions that can lead to stand-alone theoretical results in their own right.
Abstract:
Today our understanding of the vibrational thermodynamics of materials at low temperatures is emerging nicely, based on the harmonic model in which phonons are independent. At high temperatures, however, this understanding must accommodate how phonons interact with other phonons or with other excitations. We shall see that phonon-phonon interactions give rise to interesting coupling problems and essentially modify the equilibrium and non-equilibrium properties of materials, e.g., their thermodynamic stability, heat capacity, optical properties, and thermal transport. Despite its great importance, anharmonic lattice dynamics remains poorly understood, and most studies of lattice dynamics still rely on the harmonic or quasiharmonic models. There have been very few studies of pure phonon anharmonicity and phonon-phonon interactions. The work presented in this thesis is devoted to the development of experimental and computational methods on this subject.
Modern inelastic scattering techniques with neutrons or photons are ideal for sorting out the anharmonic contribution. Analysis of the experimental data can generate vibrational spectra of the materials, i.e., their phonon densities of states or phonon dispersion relations. We obtained high-quality data from a laser Raman spectrometer, a Fourier transform infrared spectrometer, and an inelastic neutron spectrometer. With accurate phonon spectra, we obtained the energy shifts and lifetime broadenings of the interacting phonons, and the vibrational entropies of different materials. Understanding these quantities then relies on the development of fundamental theories and computational methods.
We developed an efficient post-processor for analyzing anharmonic vibrations from molecular dynamics (MD) calculations. Currently, most first-principles methods are not capable of dealing with strong anharmonicity, because the interactions of phonons are ignored at finite temperatures. Our method uses the Fourier-transformed velocity autocorrelation function to handle the large volume of time-dependent atomic velocity data from MD calculations, and efficiently reconstructs the phonon DOS and phonon dispersion relations. Our calculations reproduce the phonon frequency shifts and lifetime broadenings very well at various temperatures.
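A minimal sketch of the velocity-autocorrelation route to the phonon DOS follows (a textbook estimator, not the thesis post-processor itself; equal atomic masses are assumed for brevity):

```python
import numpy as np

def phonon_dos(velocities, dt):
    """Phonon DOS from the Fourier transform of the velocity
    autocorrelation function (VACF).

    velocities: array of shape (n_steps, n_atoms, 3) from an MD run.
    dt: MD time step.
    """
    v = velocities - velocities.mean(axis=0)  # remove drift
    n = v.shape[0]
    # VACF averaged over atoms and Cartesian components
    # (equal masses assumed; mass-weight for multi-species systems).
    vacf = np.array([np.sum(v[:n - t] * v[t:]) / (n - t) for t in range(n // 2)])
    vacf /= vacf[0]
    dos = np.abs(np.fft.rfft(vacf))
    freqs = np.fft.rfftfreq(len(vacf), d=dt)
    return freqs, dos / np.trapz(dos, freqs)  # normalized DOS
```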
To understand non-harmonic interactions in a microscopic way, we developed a numerical fitting method to analyze the decay channels of phonon-phonon interactions. Based on the quantum perturbation theory of many-body interactions, this method calculates the three-phonon and four-phonon kinematics subject to the conservation of energy and momentum, taking into account the weight of phonon couplings. With the calculated two-phonon DOS, we can assess the strengths of phonon-phonon interaction channels of different anharmonic orders. This method, with its high computational efficiency, is a promising direction for advancing our understanding of non-harmonic lattice dynamics and thermal transport properties.
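If momentum conservation is neglected, the two-phonon DOS for down-conversion reduces to the self-convolution of the one-phonon DOS; the sketch below (an illustrative first approximation, not the full kinematic treatment described above) computes it on a uniform frequency grid:

```python
import numpy as np

def two_phonon_dos(omega, dos):
    """Two-phonon DOS for down-conversion (sum) processes,
    D2(w) = integral of D(w') * D(w - w') dw', by direct convolution.
    Momentum conservation is ignored (a common first approximation).

    omega: uniformly spaced frequency grid; dos: one-phonon DOS on it.
    """
    dw = omega[1] - omega[0]
    d2 = np.convolve(dos, dos) * dw          # discrete self-convolution
    omega2 = 2 * omega[0] + np.arange(len(d2)) * dw
    return omega2, d2
```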
These experimental techniques and theoretical methods have been successfully applied to the study of anharmonic behaviors of metal oxides, including rutile and cuprite structures, and are discussed in detail in Chapters 4 to 6. For example, for rutile titanium dioxide (TiO2), we found that the anomalous anharmonic behavior of the B1g mode can be explained by the volume effects on quasiharmonic force constants, and by explicit cubic and quartic anharmonicity. For rutile tin dioxide (SnO2), the broadening of the B2g mode with temperature showed an unusual concave-downwards curvature. This curvature was caused by a change with temperature in the number of down-conversion decay channels, originating from the wide band gap in the phonon dispersions. For silver oxide (Ag2O), strong anharmonic effects were found both for the phonons and for the negative thermal expansion.
Abstract:
Motivated by recent Mars Science Laboratory (MSL) results, where the ablation rate of the Phenolic Impregnated Carbon Ablator (PICA) heatshield was over-predicted, and staying true to the objectives outlined in the NASA Space Technology Roadmaps and Priorities report, this work focuses on advancing entry, descent, and landing (EDL) technologies for future space missions.
Due to the difficulties of performing flight tests in the hypervelocity regime, a new ground testing facility called the vertical expansion tunnel (VET) is proposed. The adverse effects of secondary diaphragm rupture in an expansion tunnel may be reduced or eliminated by orienting the tunnel vertically, matching the test gas pressure and the accelerator gas pressure, and initially separating the test gas from the accelerator gas by density stratification. If some sacrifice of the reservoir conditions can be made, the VET can be utilized in hypervelocity ground testing without the problems associated with secondary diaphragm rupture.
The performance of different constraints for the Rate-Controlled Constrained-Equilibrium (RCCE) method is investigated in the context of modeling reacting flows characteristic of ground testing facilities and re-entry conditions. The effectiveness of different constraints is isolated, and new constraints previously unmentioned in the literature are introduced. Three main benefits of the RCCE method were determined: 1) a reduction in the number of equations that must be solved to model a reacting flow; 2) a reduction in the stiffness of the system of equations; and 3) the ability to tabulate chemical properties as a function of a constraint once, prior to running a simulation, and to reuse the same table for multiple simulations.
Finally, published physical properties of PICA are compiled, and the composition of the pyrolysis gases that form at high temperatures inside a heatshield is investigated. A necessary link between the composition of the solid resin and the composition of the pyrolysis gases it creates is provided. This link, combined with a detailed investigation of a reacting pyrolysis gas mixture, allows a much-needed, consistent, and thorough description of many of the physical phenomena occurring in a PICA heatshield, and their implications, to be presented.
Through the use of computational fluid mechanics and computational chemistry methods, significant contributions have been made to advancing ground testing facilities, computational methods for reacting flows, and ablation modeling.
Abstract:
The aim of this in vitro study was to compare marginal microleakage results obtained by different image acquisition means and different methods of measuring silver penetration in class V composite resin restorations. Eighteen sound, freshly extracted human premolars were divided into three groups according to the instrument used for cavity preparation. Group 1: number 3100 diamond tip, at high speed. Group 2: number 330 carbide bur, at high speed. Group 3: CVDentus tip code 82137, in an ultrasound device. Standardized class V cavity preparations (3x4x2 mm) were made on the buccal and lingual surfaces of all teeth, with occlusal margins in enamel and cervical margins in dentin/cementum. The cavities were restored with the Solobond M adhesive system (VOCO) and Grandio composite resin (VOCO), which was inserted and light-cured in three increments. The specimens were immersed in distilled water for 24 h at 37°C, received finishing and polishing with Sof-Lex discs (3M), and were again stored in distilled water for seven days. The tooth surfaces were then covered with two layers of red nail polish, except for the areas adjacent to the restorations. The specimens were immersed in 50% aqueous silver nitrate solution for 24 h and in photodeveloping solution for 2 h, and were sectioned buccolingually through the center of the restorations with a diamond disc at low speed. The samples were polished on a horizontal polishing machine and analyzed by different methods. The extent of microleakage was assigned scores from 0 to 3 in analyses with a traditional stereomicroscope, a LED stereomicroscope, and an optical microscope. The images obtained with the LED stereomicroscope and the optical microscope had their leakage areas measured with the AxioVision software. The McNemar-Bowker χ2 test revealed statistical agreement between the traditional and LED stereomicroscopes (p=0.809) in the semiquantitative analyses. However, there were significant differences between the optical microscope and the stereomicroscopes (p<0.001). There was good correlation between the semiquantitative and quantitative analyses according to Spearman's test (p<0.001). The Kruskal-Wallis test revealed no statistically significant differences (p=0.174) between the experimental groups in the quantitative analysis by optical microscope, in enamel, in contrast to what was observed with the stereomicroscope (p<0.001). It is concluded that the score-assignment method commonly applied with the stereomicroscope in marginal microleakage studies is a reliable option for microleakage analysis.
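For readers reproducing this kind of analysis, here is a brief sketch of the study's two statistical comparisons (Spearman correlation between semiquantitative scores and measured areas, and Kruskal-Wallis across preparation groups); all values below are hypothetical placeholders, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical layout: per specimen, a 0-3 microleakage score from one
# imaging method and a measured leakage area (mm^2) from AxioVision images.
scores = np.array([0, 1, 1, 2, 3, 2, 1, 0, 2, 3, 1, 2])
areas = np.array([0.00, 0.12, 0.10, 0.35, 0.80, 0.33,
                  0.15, 0.02, 0.30, 0.75, 0.11, 0.40])

# Correlation between semiquantitative scores and quantitative areas.
rho, p_sp = stats.spearmanr(scores, areas)

# Group comparison of quantitative measurements (three preparation groups).
g1, g2, g3 = areas[:4], areas[4:8], areas[8:]
h, p_kw = stats.kruskal(g1, g2, g3)
print(f"Spearman rho={rho:.2f} (p={p_sp:.3f}); Kruskal-Wallis p={p_kw:.3f}")
```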
Abstract:
Recent technological advances have raised the level of qualification required of researchers in epidemiology, and the strategic role of education cannot be ignored. However, the Brazilian Association of Graduate Studies in Collective Health (ABRASCO), in its latest strategic plan (2005-2009), points to the low value placed on the production of didactic-pedagogical material and to the lack of a policy for developing and using free software in the teaching of epidemiology. It is therefore opportune to invest in a relational perspective, along the lines proposed by constructivism, since this theory has been recognized as the most suitable for the development of computer-based didactic materials. In this sense, promoting interactive courses and developing related didactic material within them is opportune and fruitful. Regarding the policy of developing and using free software in the teaching of epidemiology, particularly in applied statistics, R has proven to be software of emerging interest, not only because it avoids possible penalties for using unlicensed commercial software, but also because open access to its code and programming makes it an excellent tool for producing didactic material in the form of hyperdocuments, important foundations for the much-desired teacher-student interaction in the classroom. The main objective is to develop didactic material in R for courses in biostatistics applied to epidemiological analysis. Because certain statistical functions are not implemented in R, the programming of additional functions was also included. The courses used in developing this material were based on the disciplines "An Introduction to the R Platform for Statistical Data Modeling" and "Measurement Instruments in Epidemiology I: Classical Measurement Theory (Analysis)", linked to the Department of Epidemiology, Institute of Social Medicine (IMS) of the University of the State of Rio de Janeiro (UERJ). The theoretical-pedagogical basis was defined from constructivist principles, in which individuals are active, critical agents of their own knowledge, constructing meaning from their own experiences. Following this constructivist outlook, the problematization teaching methodology was adopted, covering problems drawn from real situations and systematized in writing. The computational methods were based on the New Information and Communication Technologies (NICTs), which pursue the consolidation of more flexible curricula adapted to students' differing learning characteristics. The NICTs were implemented through hypertext, a structure of texts interconnected by nodes or links, forming a network of related information. During the design of the didactic material, changes were made to the basic interface of the R help system to ensure student-material interactivity. The instructional material itself is composed of blocks that encourage discussion and the exchange of information between teacher and students.
Abstract:
The potential impact that offshore wind farms may have on nearby marine radars should be considered before a wind farm is installed. Strong radar echoes from the turbines may degrade a radar's detection capability in the area around the wind farm. Although conventional computational methods provide accurate results for scattering by wind turbines, they are not directly implementable in the software tools used to conduct impact studies. This paper proposes a simple model to assess the clutter that wind turbines may generate on marine radars. The method can be easily implemented in system modeling software tools for the impact analysis of a wind farm in a real scenario.
Abstract:
In this paper we present Poisson sum series representations for α-stable (αS) random variables and α-stable processes, in particular concentrating on continuous-time autoregressive (CAR) models driven by α-stable Lévy processes. Our representations aim to provide a conditionally Gaussian framework, which allows parameter estimation using Rao-Blackwellised versions of state-of-the-art Bayesian computational methods such as particle filters and Markov chain Monte Carlo (MCMC). To overcome the issues due to truncation of the series, novel residual approximations are developed. Simulations demonstrate the potential of these Poisson sum representations for inference in otherwise intractable α-stable models.
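A minimal sketch of a truncated Poisson (LePage-type) series for a symmetric α-stable variate follows (unnormalized and purely illustrative; the paper's residual approximations for the truncated tail are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

def alpha_stable_poisson_sum(alpha, n_terms=10_000, size=1):
    """Truncated Poisson series for a symmetric alpha-stable variate:
    X ~ sum_i Gamma_i^(-1/alpha) * W_i, with Gamma_i the arrival times of
    a unit-rate Poisson process and W_i i.i.d. symmetric weights.
    The scale differs from the standard parameterization (no normalizing
    constant is applied), and the series is simply truncated."""
    e = rng.exponential(size=(size, n_terms))
    gammas = np.cumsum(e, axis=1)             # Poisson arrival times
    w = rng.standard_normal((size, n_terms))  # symmetric i.i.d. weights
    return np.sum(gammas ** (-1.0 / alpha) * w, axis=1)

samples = alpha_stable_poisson_sum(alpha=1.5, size=5)
```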