479 results for regularization
Abstract:
The density distribution of inhomogeneous dense deuterium-tritium plasmas in laser fusion is revealed by the energy loss of fast protons passing through the plasma. In our simulation of a plasma density diagnostic, the fast protons used for the diagnostic may be generated in the laser-plasma interaction. Dividing a two-dimensional area into grid cells and knowing the initial and final energies of the protons, we obtain a large, linear, ill-posed equation set for the densities of all cells, which is solved with the Tikhonov regularization method. We find that the accuracy of the setup with four proton sources is better than that of setups with fewer than four proton sources. We also perform the density reconstruction for four proton sources with and without assuming a circularly symmetric density distribution, and find that the accuracy is better when circular symmetry is assumed. The error is about 9% when no noise is added to the final energies for the four-source reconstruction assuming circular symmetry. The accuracies obtained when different random noises are added to the final proton energies with four proton sources are also calculated.
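As a concrete illustration of the inversion step, the sketch below (not the paper's code) sets up a synthetic path-length matrix A, cell densities rho, and noisy energy-loss data b, and applies the standard Tikhonov solution of the resulting ill-posed linear system; the sizes, matrix entries, and noise level are all illustrative assumptions.

```python
import numpy as np

# Illustrative setup: each row of A would hold the path lengths of one proton
# through the grid cells; b holds the energy-loss data inferred from the
# initial and final proton energies. Both are synthetic here.
rng = np.random.default_rng(0)
n_cells, n_protons = 100, 80          # more unknowns than measurements -> ill-posed
A = rng.random((n_protons, n_cells))
rho_true = rng.random(n_cells)        # "true" cell densities
b = A @ rho_true + 1e-3 * rng.standard_normal(n_protons)   # noisy data

# Tikhonov (ridge) solution: minimize ||A rho - b||^2 + lam * ||rho||^2
lam = 1e-2                            # regularization parameter (tuned in practice)
rho_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_cells), A.T @ b)

rel_err = np.linalg.norm(rho_hat - rho_true) / np.linalg.norm(rho_true)
print(f"relative reconstruction error: {rel_err:.2%}")
```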
Abstract:
We aim to characterize fault slip behavior during all stages of the seismic cycle in subduction megathrust environments, with the eventual goal of understanding temporal and spatial variations of fault zone rheology, and to infer possible causal relationships between inter-, co- and post-seismic slip, as well as implications for earthquake and tsunami hazard. In particular, we focus on analyzing aseismic deformation occurring during the inter-seismic and post-seismic periods of the seismic cycle. We approach the problem using both Bayesian and optimization techniques. The Bayesian approach allows us to completely characterize the model parameter space by obtaining a posteriori estimates of the range of allowable models, to easily implement any kind of physically plausible a priori information, and to perform the inversion without regularization other than that imposed by the parameterization of the model. However, the Bayesian approach is computationally expensive and not currently viable for quick-response scenarios. Therefore, we also pursue improvements in the optimization inference scheme. We present a novel, robust and yet simple regularization technique that allows us to infer robust and somewhat more detailed models of slip on faults. We apply these methodologies, using simple quasi-static elastic models, to study inter-seismic deformation in the Central Andes subduction zone and post-seismic deformation induced by the 2011 Mw 9.0 Tohoku-Oki earthquake in Japan. For the Central Andes, we present estimates of the apparent coupling probability of the subduction interface and analyze its relationship to past earthquakes in the region. For Japan, we infer high spatial variability in the material properties of the megathrust offshore Tohoku. We discuss the potential for a large earthquake just south of the Tohoku-Oki earthquake, where our inferences suggest dominantly aseismic behavior.
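For context only, the sketch below illustrates the generic smoothing-regularized least-squares slip inversion that such optimization schemes build on; it is not the thesis' regularization technique, and the Green's function matrix G, the synthetic data d, and the Laplacian smoother are illustrative assumptions.

```python
import numpy as np

# Generic regularized static slip inversion: d = G s + noise,
# with s the slip on n_patch fault patches (all data synthetic).
rng = np.random.default_rng(1)
n_obs, n_patch = 60, 40
G = rng.standard_normal((n_obs, n_patch))            # elastic Green's functions (toy)
s_true = np.exp(-np.linspace(-3, 3, n_patch) ** 2)   # smooth slip pulse
d = G @ s_true + 0.05 * rng.standard_normal(n_obs)

# 1-D discrete Laplacian penalizes rough slip distributions
L = (np.diag(-2.0 * np.ones(n_patch))
     + np.diag(np.ones(n_patch - 1), 1)
     + np.diag(np.ones(n_patch - 1), -1))

beta = 1.0                                           # smoothing weight (tuned in practice)
s_hat = np.linalg.solve(G.T @ G + beta * L.T @ L, G.T @ d)
print("max inferred slip:", s_hat.max())
```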
Abstract:
This work is concerned with the derivation of optimal scaling laws, in the sense of matching lower and upper bounds on the energy, for a solid undergoing ductile fracture. The specific problem considered concerns a material sample in the form of an infinite slab of finite thickness subjected to prescribed opening displacements on its two surfaces. The solid is assumed to obey deformation theory of plasticity and, in order to further simplify the analysis, we assume isotropic rigid-plastic deformations with zero plastic spin. When hardening exponents are given values consistent with observation, the energy is found to exhibit sublinear growth. We regularize the energy through the addition of nonlocal energy terms of the strain-gradient plasticity type. This nonlocal regularization has the effect of introducing an intrinsic length scale into the energy. We also put forth a physical argument that identifies the intrinsic length and suggests a linear growth of the nonlocal energy. Under these assumptions, ductile fracture emerges as the net result of two competing effects: whereas the sublinear growth of the local energy promotes localization of deformation to failure planes, the nonlocal regularization stabilizes this process, thus resulting in an orderly progression towards failure and a well-defined specific fracture energy. The optimal scaling laws derived here show that ductile fracture results from localization of deformations to void sheets, and that it requires a well-defined energy per unit fracture area. In particular, fractal modes of fracture are ruled out under the assumptions of the analysis. The optimal scaling laws additionally show that ductile fracture is cohesive in nature, i.e., it obeys a well-defined relation between tractions and opening displacements. Finally, the scaling laws supply a link between micromechanical properties and macroscopic fracture properties. In particular, they reveal the relative roles that surface energy and microplasticity play as contributors to the specific fracture energy of the material. Next, we present an experimental assessment of the optimal scaling laws. We show that when the specific fracture energy is renormalized in a manner suggested by the optimal scaling laws, the data fall within the bounds predicted by the analysis and, moreover, ostensibly collapse---with allowances made for experimental scatter---onto a master curve dependent on the hardening exponent, but otherwise material independent.
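Schematically, and in our own notation rather than the authors', the regularized energy has the structure of a local plastic energy with sublinear growth plus a strain-gradient term of linear growth that carries the intrinsic length:

```latex
% Schematic form only: hardening exponent 0 < n < 1 gives sublinear local growth;
% the nonlocal term grows linearly in the strain gradient and sets the length scale \ell.
E(u) \;=\; \int_{\Omega} W\!\big(\varepsilon(u)\big)\,dx
      \;+\; \ell \int_{\Omega} \big|\nabla\varepsilon(u)\big|\,dx,
\qquad W(\varepsilon) \sim |\varepsilon|^{\,n}, \quad 0 < n < 1 .
```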
Abstract:
In the first part I perform Hartree-Fock calculations to show that quantum dots (i.e., two-dimensional systems of up to twenty interacting electrons in an external parabolic potential) undergo a gradual transition to a spin-polarized Wigner crystal with increasing magnetic field strength. The phase diagram and ground state energies are determined. I attempt to improve the ground state of the Wigner crystal by introducing a Jastrow ansatz for the wave function and performing a variational Monte Carlo calculation. The existence of so-called magic numbers is also investigated. Finally, I calculate the heat capacity associated with the rotational degree of freedom of deformed many-body states and suggest an experimental method to detect Wigner crystals.
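For reference, a Jastrow-type trial wave function has the generic form below; the specific pair-correlation function u optimized in the thesis is not reproduced here.

```latex
% Generic Jastrow ansatz: a Hartree-Fock determinant multiplied by a
% symmetric pair-correlation factor.
\Psi_J(\mathbf{r}_1,\dots,\mathbf{r}_N)
  \;=\; \Phi_{\mathrm{HF}}(\mathbf{r}_1,\dots,\mathbf{r}_N)
        \prod_{i<j} \exp\!\big[-u(|\mathbf{r}_i-\mathbf{r}_j|)\big].
```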
The second part of the thesis investigates infinite nuclear matter on a cubic lattice. The exact thermal formalism describes nucleons with a Hamiltonian that accommodates on-site and nearest-neighbor parts of the central, spin-exchange and isospin-exchange interaction. Using auxiliary-field Monte Carlo methods, I show that the energy and basic saturation properties of nuclear matter can be reproduced. A first-order phase transition from an uncorrelated Fermi gas to a clustered system is observed by computing mechanical and thermodynamical quantities such as compressibility, heat capacity, entropy and grand potential. The structure of the clusters is investigated with the help of two-body correlations. I compare the symmetry energy and first sound velocities with the literature and find reasonable agreement. I also calculate the energy of pure neutron matter and search for a similar phase transition, but the survey is restricted by the infamous Monte Carlo sign problem. Also, a regularization scheme to extract potential parameters from scattering lengths and effective ranges is investigated.
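For reference, the low-energy quantities mentioned above enter through the standard s-wave effective-range expansion, which any such regularization scheme must reproduce:

```latex
% s-wave effective-range expansion: a is the scattering length, r_e the effective range
k \cot \delta_0(k) \;=\; -\frac{1}{a} \;+\; \tfrac{1}{2}\, r_e\, k^{2} \;+\; \mathcal{O}(k^{4}).
```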
Abstract:
This thesis aims at a simple one-parameter macroscopic model of distributed damage and fracture of polymers that is amenable to a straightforward and efficient numerical implementation. The failure model is motivated by post-mortem fractographic observations of void nucleation, growth and coalescence in polyurea stretched to failure, and accounts for the specific fracture energy per unit area attendant to rupture of the material.
Furthermore, it is shown that the macroscopic model can be rigorously derived, in the sense of optimal scaling, from a micromechanical model of chain elasticity and failure regularized by means of fractional strain-gradient elasticity. Optimal scaling laws that supply a link between the single parameter of the macroscopic model, namely the critical energy-release rate of the material, and micromechanical parameters pertaining to the elasticity and strength of the polymer chains, and to the strain-gradient elasticity regularization, are derived. Based on these optimal scaling laws, it is shown how the critical energy-release rate of specific materials can be determined from test data. In addition, the scope and fidelity of the model are demonstrated by means of an example of application, namely Taylor-impact experiments on polyurea rods. To this end, optimal transportation meshfree approximation schemes using maximum-entropy interpolation functions are employed.
Finally, a different crazing model using full derivatives of the deformation gradient and a core cut-off is presented, along with a numerical non-local regularization model. The numerical model takes into account higher-order deformation gradients in a finite element framework. It is shown how the introduction of non-locality into the model stabilizes the effect of strain localization to small volumes in materials undergoing softening. From an investigation of craze formation in the limit of large deformations, convergence studies verifying scaling properties of both local- and non-local energy contributions are presented.
Abstract:
This thesis presents a novel class of algorithms for the solution of scattering and eigenvalue problems on general two-dimensional domains under a variety of boundary conditions, including non-smooth domains and certain "Zaremba" boundary conditions, for which Dirichlet and Neumann conditions are specified on various portions of the domain boundary. The theoretical basis of the methods for the Zaremba problems on smooth domains concerns detailed information, put forth for the first time in this thesis, about the singularity structure of solutions of the Laplace operator under boundary conditions of Zaremba type. The new methods, which are based on use of Green functions and integral equations, incorporate a number of algorithmic innovations, including a fast and robust eigenvalue-search algorithm, use of the Fourier Continuation method for regularization of all smooth-domain Zaremba singularities, and newly derived quadrature rules which give rise to high-order convergence even around singular points for the Zaremba problem. The resulting algorithms enjoy high-order convergence, and they can tackle a variety of elliptic problems under general boundary conditions, including, for example, eigenvalue problems, scattering problems and, in particular, eigenfunction expansions for time-domain problems in non-separable physical domains with mixed boundary conditions.
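For concreteness, the Zaremba (mixed Dirichlet-Neumann) problem for the Laplace operator referred to above can be stated as follows, with Gamma_D and Gamma_N denoting the Dirichlet and Neumann portions of the boundary:

```latex
% Mixed ("Zaremba") boundary-value problem for the Laplacian
\Delta u = 0 \ \text{in}\ \Omega, \qquad
u = f \ \text{on}\ \Gamma_D, \qquad
\frac{\partial u}{\partial n} = g \ \text{on}\ \Gamma_N, \qquad
\partial\Omega = \overline{\Gamma}_D \cup \overline{\Gamma}_N .
```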
Abstract:
This thesis presents a topology optimization methodology for the systematic design of optimal multifunctional silicon anode structures in lithium-ion batteries. In order to develop next-generation high-performance lithium-ion batteries, key design challenges relating to the silicon anode structure must be addressed, namely the lithiation-induced mechanical degradation and the low intrinsic electrical conductivity of silicon. As such, this work considers two design objectives, minimum compliance under design-dependent volume expansion and maximum electrical conduction through the structure, both of which are subject to a constraint on material volume. Density-based topology optimization methods are employed in conjunction with regularization techniques, a continuation scheme, and mathematical programming methods. The objectives are first considered individually, during which the iteration history, mesh independence, and influence of prescribed volume fraction and minimum length scale are investigated. The methodology is subsequently extended to a bi-objective formulation to simultaneously address both the compliance and conduction design criteria. A weighting method is used to derive the Pareto fronts, which demonstrate a clear trade-off between the competing design objectives. Furthermore, a systematic parameter study is undertaken to determine the influence of the prescribed volume fraction and minimum length scale on the optimal combined topologies. The developments presented in this work provide a foundation for the informed design and development of silicon anode structures for high-performance lithium-ion batteries.
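Schematically, and with generic symbols that are ours rather than the thesis' (C for compliance, R for the electrical-resistance counterpart of conduction, V for material volume), the weighted bi-objective formulation reads:

```latex
% Weighted-sum bi-objective density-based formulation (schematic)
\min_{\boldsymbol{\rho}} \;\; w\, C(\boldsymbol{\rho}) + (1-w)\, R(\boldsymbol{\rho}),
\qquad 0 \le w \le 1,
\qquad \text{s.t.}\quad V(\boldsymbol{\rho}) \le \bar{V}, \;\; 0 \le \rho_e \le 1 .
```

Sweeping the weight w then traces out the Pareto front between the two competing objectives.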
Abstract:
The centralized paradigm of a single controller and a single plant upon which modern control theory is built is no longer applicable to modern cyber-physical systems of interest, such as the power grid, software-defined networks or automated highway systems, as these are all large-scale and spatially distributed. Both the scale and the distributed nature of these systems have motivated the decentralization of control schemes into local sub-controllers that measure, exchange and act on locally available subsets of the globally available system information. This decentralization of control logic leads to different decision makers acting on asymmetric information sets, introduces the need for coordination between them, and, perhaps not surprisingly, makes the resulting optimal control problem much harder to solve. In fact, shortly after such questions were posed, it was realized that seemingly simple decentralized optimal control problems are computationally intractable to solve, with the Witsenhausen counterexample being a famous instance of this phenomenon. Spurred on by this perhaps discouraging result, a concerted 40-year effort to identify tractable classes of distributed optimal control problems culminated in the notion of quadratic invariance, which loosely states that if sub-controllers can exchange information with each other at least as quickly as the effect of their control actions propagates through the plant, then the resulting distributed optimal control problem admits a convex formulation.
The identification of quadratic invariance as an appropriate means of "convexifying" distributed optimal control problems led to a renewed enthusiasm in the controller synthesis community, resulting in a rich set of results over the past decade. The contributions of this thesis can be seen as being a part of this broader family of results, with a particular focus on closing the gap between theory and practice by relaxing or removing assumptions made in the traditional distributed optimal control framework. Our contributions are to the foundational theory of distributed optimal control, and fall under three broad categories, namely controller synthesis, architecture design and system identification.
We begin by providing two novel controller synthesis algorithms. The first is a solution to the distributed H-infinity optimal control problem subject to delay constraints, and provides the only known exact characterization of delay-constrained distributed controllers satisfying an H-infinity norm bound. The second is an explicit dynamic programming solution to a two-player LQR state-feedback problem with varying delays. Accommodating varying delays represents an important first step in combining distributed optimal control theory with the area of Networked Control Systems that considers lossy channels in the feedback loop. Our next set of results is concerned with controller architecture design. When designing controllers for large-scale systems, the architectural aspects of the controller such as the placement of actuators, sensors, and the communication links between them can no longer be taken as given -- indeed the task of designing this architecture is now as important as the design of the control laws themselves. To address this task, we formulate the Regularization for Design (RFD) framework, a unifying, computationally tractable approach, based on the model matching framework and atomic norm regularization, for the simultaneous co-design of a structured optimal controller and the architecture needed to implement it. Our final result is a contribution to distributed system identification. Traditional system identification techniques such as subspace identification are not computationally scalable, and destroy rather than leverage any a priori information about the system's interconnection structure. We argue that in the context of system identification, an essential building block of any scalable algorithm is the ability to estimate local dynamics within a large interconnected system. To that end, we propose a promising heuristic for identifying the dynamics of a subsystem that is still connected to a large system. We exploit the fact that the transfer function of the local dynamics is low-order, but full-rank, while the transfer function of the global dynamics is high-order, but low-rank, to formulate this separation task as a nuclear norm minimization problem. Finally, we conclude with a brief discussion of future research directions, with a particular emphasis on how to incorporate the results of this thesis, and those of optimal control theory in general, into a broader theory of dynamics, control and optimization in layered architectures.
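As a toy illustration of the low-rank separation idea (not the thesis' formulation), the sketch below recovers a low-rank component from a matrix contaminated by a full-rank residual via singular-value soft-thresholding, the proximal operator of the nuclear norm; all data are synthetic.

```python
import numpy as np

def svt(Y, tau):
    """Singular-value soft-thresholding: the proximal operator of the
    nuclear norm, i.e. the minimizer of 0.5*||X - Y||_F^2 + tau*||X||_*."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Toy data: a rank-2 "global" component plus a small full-rank residual.
rng = np.random.default_rng(2)
n = 30
low_rank = rng.standard_normal((n, 2)) @ rng.standard_normal((2, n))
residual = 0.1 * rng.standard_normal((n, n))
Y = low_rank + residual

L_hat = svt(Y, tau=2.0)          # recovered low-rank part
E_hat = Y - L_hat                # leftover ("local") part in this toy
print("rank of recovered low-rank part:", np.linalg.matrix_rank(L_hat, tol=1e-6))
```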
Abstract:
This study analyzes the concepts of freedom and identity through the proposal of a "cultural liberalism" put forward by the Canadian philosopher Will Kymlicka, as defended in his works Multicultural Citizenship: A Liberal Theory of Minority Rights (1995), Politics in the Vernacular: Nationalism, Multiculturalism and Citizenship (2001) and Multicultural Odysseys: Navigating the New International Politics of Diversity (2007). Through these readings, we sought to understand, in particular, how language and territory become defining elements of the cultures of national and ethnic peoples who wage their struggles to guarantee the permanence of these attributes, both domestically and internationally, in order to preserve the singularity of their ways of life and worldviews as distinct groups. To that end, a critical analysis of the nation-building process of modern states proved essential, understood as a project carried out by numerous countries in modernity with the aim of promoting the national unity of their states by rendering invisible the cultural expressions and political participation of culturally minority groups. The study closes with a brief reflection on how this debate can contribute to a better understanding of the claims of indigenous peoples and quilombo-descendant communities in Brazil for the regularization of their territories and the recognition of their cultural practices.
Abstract:
Several alpine vertebrates share a distribution pattern that extends across the South-western Palearctic but is limited to the main mountain massifs. Although they are usually regarded as cold-adapted species, the range of many alpine vertebrates also includes relatively warm areas, suggesting that factors beyond climatic conditions may be driving their distribution. In this work we first recognize the species belonging to this biogeographic group and, based on the environmental niche analysis of Plecotus macrobullaris, we identify and characterize the environmental factors constraining their ranges. Distribution overlap analysis of 504 European vertebrates was performed using the Sorensen Similarity Index, and we identified four birds and one mammal that share the distribution with P. macrobullaris. We generated 135 environmental niche models including different variable combinations and regularization values for P. macrobullaris at two different scales and resolutions. After selecting the best models, we observed that topographic variables outperformed climatic predictors, and the abruptness of the landscape showed better predictive ability than elevation. The best explanatory climatic variable was mean summer temperature, which showed that P. macrobullaris is able to cope with mean temperature ranges spanning up to 16 degrees C. The models showed that the distribution of P. macrobullaris is mainly shaped by topographic factors that provide rock-abundant and open-space habitats rather than by climatic determinants, and that the species is not cold-adapted but rather a cold-tolerant eurythermic organism. P. macrobullaris shares its distribution pattern as well as several ecological features with five other alpine vertebrates, suggesting that the conclusions obtained from this study might be extensible to them. We conclude that rock-dwelling, open-space-foraging vertebrates with broad temperature tolerance are the best candidates to show wide alpine distributions in the Western Palearctic.
Abstract:
The role of the State throughout history has varied considerably, at times interventionist and at other times limited to regulating only the bare minimum. The latter stance produced large deficits in infrastructure, social imbalances, the growth of favelas, irregular land subdivisions, and the failure to realize the right to housing. The State therefore had to expand its role in land regularization, aiming at full land-tenure regularization, ranging from the installation of adequate urbanization and infrastructure to the granting of titles recognizing individual possession and/or ownership. Making up for the shortfall in infrastructure, urbanization and land organization accumulated over recent decades runs up against the fiscal exhaustion of the Brazilian State, which must take responsibility for regularization but, above all, must seek partnerships with the private sector. The role of social organizations, civil society organizations of public interest, and public-private partnerships must be expanded to carry out land-tenure regularization. Investment must not be exclusively public; certain legal instruments of the City Statute (Estatuto da Cidade) itself can be used to offer the private partner an attractive form of compensation. Only through a joint interpretation and application of the legal instruments available to the State, combined with political will, can the development promised to the Brazilian population and the realization of the constitutional right to housing be guaranteed.
Abstract:
The use of techniques based on the Tikhonov functional in image processing has become widespread in recent years. The basic idea is to modify an initial image via a convolution equation and to find a parameter that minimizes this functional, so as to obtain an approximation of the original image. A typical difficulty of the method, however, lies in selecting the regularization parameter that balances the accuracy and the stability of the solution. A method developed by researchers at IPRJ and UFRJ working on inverse problems consists of minimizing a residual functional with respect to the Tikhonov regularization parameter. A strategy that searches iteratively for this parameter, seeking a value that minimizes the functional at the next iteration, was recently adopted in a serial restoration algorithm; its computational cost, however, is a significant drawback. In this work, the functional-minimization strategy with iterative parameter search is implemented in C++ using parallel computing techniques with MPI (Message Passing Interface), thereby reducing the execution time required by the algorithm. A modified version of the Jacobi method is considered in two versions of the algorithm, one serial and one parallel; this method is well suited to parallel implementation because it has no data dependencies, unlike Gauss-Seidel, which is also shown to converge. As a performance indicator for the restoration algorithm, in addition to the traditional measures, a new metric based on subjective criteria, called IWMSE (Information Weighted Mean Square Error), is employed. These metrics were added to the serial image-processing program and allow the restoration to be analyzed at each iteration step. The results obtained with the two versions made it possible to assess the speedup and efficiency of the parallel implementation, which produced satisfactory results in less processing time and with acceptable performance.
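A minimal Python sketch (rather than the C++/MPI code of the thesis) of why a Jacobi-type solver for the Tikhonov normal equations parallelizes naturally: every component of the new iterate depends only on the previous iterate. The toy one-dimensional blur, synthetic image, and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy 1-D "blur": each pixel mixes with its two neighbours.
n = 200
H = np.eye(n) + 0.2 * np.eye(n, k=1) + 0.2 * np.eye(n, k=-1)

rng = np.random.default_rng(3)
f_true = rng.random(n)                              # "true" image (flattened, synthetic)
g = H @ f_true + 1e-3 * rng.standard_normal(n)      # blurred and noisy observation

# Tikhonov normal equations: (H^T H + lam*I) f = H^T g
lam = 1e-2
A = H.T @ H + lam * np.eye(n)
b = H.T @ g

# Jacobi iteration: each new component depends only on the previous iterate,
# so the update can be distributed across MPI ranks or GPU threads.
D = np.diag(A)
R = A - np.diag(D)
f = np.zeros(n)
for _ in range(300):
    f = (b - R @ f) / D

print("relative error:", np.linalg.norm(f - f_true) / np.linalg.norm(f_true))
```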
Abstract:
This study deals with collective special adverse possession (usucapião especial coletivo), one of the legal instruments chosen by the legislature to give effect to constitutional values, especially the social function of property. The institute is regulated in articles 10 to 14 of the City Statute (Estatuto da Cidade) and applies to urban areas of more than two hundred and fifty square meters, provided they are occupied by low-income populations for housing, with possession qualified under the requirements of article 183 of the 1988 Federal Constitution, and where it is not possible to identify the plots occupied by each possessor. It therefore has a twofold task: not only to regularize land tenure, but also to enable the urbanization of areas occupied by low-income populations. In this regard, possession is emphasized as a factual and existential situation of taking hold of and occupying a thing, with an autonomous nature, since it is through possession that a person can meet vital needs such as housing and cultivation; hence the notion of qualified possession, that is, possession through work (posse-trabalho). It must be stressed, however, that even for the State to be able to promote effective land regularization through collective special adverse possession, it is imperative to recognize those who will benefit from its action as holders of rights, that is, as members of equal worth within the political community. The topic is therefore approached through theories of recognition, specifically the perspectives adopted by Axel Honneth and Nancy Fraser. These theories run through the chapters of the thesis and are used to overcome the existence of different social classes and statuses, and to reshape the paradigms that led to this situation, as a way of realizing and promoting the right to housing.
Abstract:
Image restoration is a technique with applications in many fields, such as medicine, biology and electronics, where one of its goals is to improve the final appearance of images of samples that, for some reason, exhibit imperfections or blurring. Images obtained with the Atomic Force Microscope show blurring caused by the interaction of forces between the microscope tip and the sample under study, as well as additive noise introduced by the environment. This work proposes a GPU parallelization of an inherently serial algorithm for the restoration of Atomic Force Microscopy images based on Tikhonov regularization.
Abstract:
This research seeks to clarify the obscure and controversial points of article 1,228, paragraphs 4 and 5, of the Brazilian Civil Code, with the aim of making this legal provision effective; at its core, the provision recognizes the fundamental right to housing and also protects the right to work. It breaks with the paradigm of possession as a mere advanced sentinel of the right of ownership and recognizes the autonomous possession exercised by those who actually fulfill its social function. Once the statutory requirements are met, possession is legitimized in favor of the possessors and, upon payment of compensation to the owner, is converted into ownership. The institute thus aims not only at the land regularization of urban or rural areas, but above all at the effectiveness of the fundamental rights to housing and work, which give substance to the guiding principle of any civilized society: the principle of human dignity. Accordingly, in pursuit of the provision's effectiveness, the study also seeks to establish the specific legal nature of the institute, recognizing it as an autonomous mode of onerous acquisition of ownership, not to be equated with forms of expropriation or with adverse possession (usucapião).